From r at roze.lv Sat Aug 1 09:24:36 2020 From: r at roze.lv (Reinis Rozitis) Date: Sat, 1 Aug 2020 12:24:36 +0300 Subject: When does NGINX start logging In-Reply-To: References: Message-ID: <000001d667e5$9344a710$b9cdf530$@roze.lv> > I'm going over some Web Server STIGs (referenced here: > https://www.stigviewer.com/stig/web_server_security_requirements_guide > /) to make sure my NGINX web server is configured to comply with those > security requirements. One of the requirements is that "The web server must > initiate session logging upon start up." So my question is: Are there any > NGINX documentation or resource that shows NGINX starts logging as soon > as it's started before any requests are handled? Imho for that to be true you would need to run nginx in debug mode with debug log. http://nginx.org/en/docs/debugging_log.html Since for a typical request / web application there is usually one line in the access log as the webserver has to wait for the upstream (or even on disk) to return the response (based on it has to decide on the http return code / content length etc). rr From nginx-forum at forum.nginx.org Sat Aug 1 14:10:27 2020 From: nginx-forum at forum.nginx.org (shadyabhi) Date: Sat, 01 Aug 2020 10:10:27 -0400 Subject: Nginx spending high CPU in perf tests (vs HAProxy) Message-ID: <3ab850a9901f5c0c565a6f27c0a1a329.NginxMailingListEnglish@forum.nginx.org> Two scenarios:- * When upstream supports higher aggregate QPS than what Nginx can support on a host, nginx has better CPU performance than HAProxy. (Nginx == 700% CPU, HAProxy = 1200% CPU) * When upstream capacity is limited, and my benchmarking tool sends QPS higher than what the upstream supports, nginx is showing unbelievably high CPU usage when compared to HAProxy. (Nginx == 400% CPU, HAProxy = 60% CPU) Hence, there's something about Nginx where it shines as the throughput increases but at low throughput, it has higher CPU usage than HAProxy. Any pointers on what behavior is causing this? My guess is, Nginx is enqueuing a lot more TCP streams than the upstream can handle, which ultimately causes higher CPU than HAProxy. NGINX CONFIG (v1.15) ``` user root; worker_processes auto; daemon off; error_log ./error.log; pid /run/nginx.pid; include /usr/share/nginx/modules/*.conf; events { worker_connections 1024; } stream { server_traffic_status_zone; log_format basic '$proxy_protocol_addr - $remote_user [$time_local] ' access_log ./access.log basic; upstream xproxy { server 10.100.10.1:12270; } server { listen 80; proxy_pass xproxy; proxy_protocol on; } } ``` Thank you. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288935,288935#msg-288935 From kaushalshriyan at gmail.com Sat Aug 1 17:09:17 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Sat, 1 Aug 2020 22:39:17 +0530 Subject: nginx subsite configuration not working Message-ID: Hi, I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core). I am setting up the Nginx subsite configuration. The details are as follows. 
[root at developerportal www]# pwd /var/www [root at developerportal www]# ls -l total 0 drwxr-xr-x 2 root root 6 Apr 2 18:44 cgi-bin drwxr-xr-x 3 nginx nginx 64 Jun 10 13:52 *drupal* drwxr-xr-x 3 nginx nginx 28 Aug 1 12:32 *drupalsubsite* drwxr-xr-x 2 root root 6 Apr 2 18:44 html [root at developerportal www]# [root at developerportal web]# pwd /var/www/drupal/marketplace-v2/mpV2/web [root at developerportal web]# ls -l total 73576 -rwxr-xr-x 1 nginx nginx 385 Jul 28 18:25 autoload.php drwxrwxrwx 12 nginx nginx 4096 Jun 11 14:21 core -rw-r--r-- 1 nginx nginx 75310608 Aug 1 11:18 db.sql -rwxr-xr-x 1 nginx nginx 549 Jul 26 18:47 index.php drwxrwxrwx 14 nginx nginx 286 Jul 28 18:25 libraries drwxrwxrwx 6 nginx nginx 101 Jul 26 18:47 modules drwxrwxrwx 2 nginx nginx 22 Jul 26 18:47 profiles lrwxrwxrwx 1 nginx nginx 47 Aug 1 16:23 *retail* -> /var/www/drupalsubsite/marketplace-v2/mpV2/web/ -rwxr-xr-x 1 nginx nginx 1594 Jul 26 18:47 robots.txt drwxrwxrwx 3 nginx nginx 112 Jul 26 18:47 sites drwxrwxrwx 4 nginx nginx 69 Jul 26 18:47 themes -rwxr-xr-x 1 nginx nginx 848 Jul 26 18:47 update.php -rwxr-xr-x 1 nginx nginx 4566 Jul 26 18:47 web.config [root at developerportal web]# I have set the below in /etc/nginx/nginx.conf location ~ ^/retail/(.*) { > return 301 /var/www/drupalsubsite/marketplace-v2/mpV2/web; > index index.php; > } When I hit https://developerportal.example.com/retail I get https://developerportal.example.com/var/www/drupalsubsite/marketplace-v2/mpV2/web in the browser I get The requested page could not be found instead of serving the web page. When I hit https://developerportal.example.com/retail/user I get https://developerportal.example.com/var/www/drupalsubsite/marketplace-v2/mpV2/web in the browser I get The requested page could not be found instead of serving the web page. When I hit https://developerportal.example.com/retail/user/login I get https://developerportal.example.com/var/www/drupalsubsite/marketplace-v2/mpV2/web in the browser I get The requested page could not be found instead of serving the web page. Any help will be highly appreciated. Please let me know if you need any additional configuration and I look forward to hearing from you soon. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From teward at thomas-ward.net Sat Aug 1 17:12:37 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Sat, 01 Aug 2020 13:12:37 -0400 Subject: nginx subsite configuration not working In-Reply-To: References: Message-ID: <0b411d9f-3ecb-40e5-843b-6ab7337cb887@thomas-ward.net> 301 Redirects don't work for full system paths because they are returned to the client saying "go here instead".? It then interprets your path as a URI which doesn't exist inside your site docroot. You might have meant to use `root` instead of `return 301` here to serve the data directly from that directory at the path you are accessing it from on the client browser. -------- Original Message -------- From: Kaushal Shriyan Sent: Sat Aug 01 13:09:17 EDT 2020 To: nginx at nginx.org Subject: nginx subsite configuration not working Hi, I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core). I am setting up the Nginx subsite configuration. The details are as follows. 
[root at developerportal www]# pwd /var/www [root at developerportal www]# ls -l total 0 drwxr-xr-x 2 root root 6 Apr 2 18:44 cgi-bin drwxr-xr-x 3 nginx nginx 64 Jun 10 13:52 *drupal* drwxr-xr-x 3 nginx nginx 28 Aug 1 12:32 *drupalsubsite* drwxr-xr-x 2 root root 6 Apr 2 18:44 html [root at developerportal www]# [root at developerportal web]# pwd /var/www/drupal/marketplace-v2/mpV2/web [root at developerportal web]# ls -l total 73576 -rwxr-xr-x 1 nginx nginx 385 Jul 28 18:25 autoload.php drwxrwxrwx 12 nginx nginx 4096 Jun 11 14:21 core -rw-r--r-- 1 nginx nginx 75310608 Aug 1 11:18 db.sql -rwxr-xr-x 1 nginx nginx 549 Jul 26 18:47 index.php drwxrwxrwx 14 nginx nginx 286 Jul 28 18:25 libraries drwxrwxrwx 6 nginx nginx 101 Jul 26 18:47 modules drwxrwxrwx 2 nginx nginx 22 Jul 26 18:47 profiles lrwxrwxrwx 1 nginx nginx 47 Aug 1 16:23 *retail* -> /var/www/drupalsubsite/marketplace-v2/mpV2/web/ -rwxr-xr-x 1 nginx nginx 1594 Jul 26 18:47 robots.txt drwxrwxrwx 3 nginx nginx 112 Jul 26 18:47 sites drwxrwxrwx 4 nginx nginx 69 Jul 26 18:47 themes -rwxr-xr-x 1 nginx nginx 848 Jul 26 18:47 update.php -rwxr-xr-x 1 nginx nginx 4566 Jul 26 18:47 web.config [root at developerportal web]# I have set the below in /etc/nginx/nginx.conf location ~ ^/retail/(.*) { > return 301 /var/www/drupalsubsite/marketplace-v2/mpV2/web; > index index.php; > } When I hit https://developerportal.example.com/retail I get https://developerportal.example.com/var/www/drupalsubsite/marketplace-v2/mpV2/web in the browser I get The requested page could not be found instead of serving the web page. When I hit https://developerportal.example.com/retail/user I get https://developerportal.example.com/var/www/drupalsubsite/marketplace-v2/mpV2/web in the browser I get The requested page could not be found instead of serving the web page. When I hit https://developerportal.example.com/retail/user/login I get https://developerportal.example.com/var/www/drupalsubsite/marketplace-v2/mpV2/web in the browser I get The requested page could not be found instead of serving the web page. Any help will be highly appreciated. Please let me know if you need any additional configuration and I look forward to hearing from you soon. Thanks in advance. Best Regards, Kaushal ------------------------------------------------------------------------ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Aug 1 20:31:14 2020 From: nginx-forum at forum.nginx.org (anish10dec) Date: Sat, 01 Aug 2020 16:31:14 -0400 Subject: Request Time in Nginx Log as always 0.000 for HIT Request Message-ID: <7130ffaa757c7da514ef95e0cd7cc9dd.NginxMailingListEnglish@forum.nginx.org> We are observing a behavior where request time and upstream response time is logged as same value when request is MISS in log file. And when there is HIT for the request , request time is logged as 0.000 for all the requests. Please help what could be the reason for this , we tried compiling from source , rpm , upgrading and downgrading the version of Nginx. But always the case remains same. 
Please help what could be causing this behavior Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288938,288938#msg-288938 From kaushalshriyan at gmail.com Sat Aug 1 23:48:34 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Sun, 2 Aug 2020 05:18:34 +0530 Subject: nginx subsite configuration not working In-Reply-To: <0b411d9f-3ecb-40e5-843b-6ab7337cb887@thomas-ward.net> References: <0b411d9f-3ecb-40e5-843b-6ab7337cb887@thomas-ward.net> Message-ID: On Sat, Aug 1, 2020 at 10:42 PM Thomas Ward wrote: > 301 Redirects don't work for full system paths because they are returned > to the client saying "go here instead". It then interprets your path as a > URI which doesn't exist inside your site docroot. > > You might have meant to use `root` instead of `return 301` here to serve > the data directly from that directory at the path you are accessing it from > on the client browser. > Hi Thomas Thanks for the reply. I have set it to below. location /retail { root /var/www/drupalsubsite/marketplace-v2/mpV2/web; #root /var/www/drupal/marketplace-v2/mpV2/web; index index.php; } *retail* is a soft link inside /var/www/drupal/marketplace-v2/mpV2/web directory pointing to /var/www/drupalsubsite/marketplace-v2/mpV2/web directory [root at developerportal www]# pwd /var/www [root at developerportal www]# ls -l total 0 drwxr-xr-x 2 root root 6 Apr 2 18:44 cgi-bin drwxr-xr-x 3 nginx nginx 64 Jun 10 13:52 *drupal* drwxr-xr-x 3 nginx nginx 28 Aug 1 12:32 *drupalsubsite* drwxr-xr-x 2 root root 6 Apr 2 18:44 html [root at developerportal www]# [root at developerportal web]# pwd /var/www/drupal/marketplace-v2/mpV2/web [root at developerportal web]# ls -l total 73576 -rwxr-xr-x 1 nginx nginx 385 Jul 28 18:25 autoload.php drwxrwxrwx 12 nginx nginx 4096 Jun 11 14:21 core -rw-r--r-- 1 nginx nginx 75310608 Aug 1 11:18 db.sql -rwxr-xr-x 1 nginx nginx 549 Jul 26 18:47 index.php drwxrwxrwx 14 nginx nginx 286 Jul 28 18:25 libraries drwxrwxrwx 6 nginx nginx 101 Jul 26 18:47 modules drwxrwxrwx 2 nginx nginx 22 Jul 26 18:47 profiles lrwxrwxrwx 1 nginx nginx 47 Aug 1 16:23 *retail* -> */var/www/drupalsubsite/marketplace-v2/mpV2/web/* -rwxr-xr-x 1 nginx nginx 1594 Jul 26 18:47 robots.txt drwxrwxrwx 3 nginx nginx 112 Jul 26 18:47 sites drwxrwxrwx 4 nginx nginx 69 Jul 26 18:47 themes -rwxr-xr-x 1 nginx nginx 848 Jul 26 18:47 update.php -rwxr-xr-x 1 nginx nginx 4566 Jul 26 18:47 web.config [root at developerportal web]# When I hit https://developerportal.example.com/retail 2020/08/02 05:12:29 [error] 3429#0: *1 "/var/www/drupalsubsite/marketplace-v2/mpV2/web/retail/index.php" is not found (2: No such file or directory), client: 192.168.0.10, server: developerportal.example.com, request: "GET /retail/ HTTP/1.1", host: " developerportal.example.com" ==> /var/log/nginx/access.log <== 192.168.0.10 - [02/Aug/2020:05:12:29 +0530] "GET /retail/ HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:79.0) Gecko/20100101 Firefox/79.0" "-" 2020/08/02 05:11:46 [error] 3429#0: *1 open() "/var/www/drupalsubsite/marketplace-v2/mpV2/web/retail/user" failed (2: No such file or directory), client: 192.168.0.10, server: developerportal.example.com, request: "GET /retail/user HTTP/1.1", host: " developerportal.example.com" ==> /var/log/nginx/access.log <== 192.168.0.10 - [02/Aug/2020:05:11:46 +0530] "GET /retail/user HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:79.0) Gecko/20100101 Firefox/79.0" "-" ==> /var/log/nginx/error.log <== 2020/08/02 05:11:50 [error] 3429#0: *1 open() 
"/var/www/drupalsubsite/marketplace-v2/mpV2/web/retail/user/login" failed (2: No such file or directory), client: 192.168.0.10, server: developerportal.example.com, request: "GET /retail/user/login HTTP/1.1", host: "developerportal.example.com" ==> /var/log/nginx/access.log <== 192.168.0.10 - [02/Aug/2020:05:11:50 +0530] "GET /retail/user/login HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:79.0) Gecko/20100101 Firefox/79.0" "-" Any help will be highly appreciated. Please let me know if you need any additional configuration and I look forward to hearing from you soon. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Sun Aug 2 10:39:51 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Sun, 2 Aug 2020 16:09:51 +0530 Subject: nginx subsite configuration not working In-Reply-To: References: <0b411d9f-3ecb-40e5-843b-6ab7337cb887@thomas-ward.net> Message-ID: On Sun, Aug 2, 2020 at 5:18 AM Kaushal Shriyan wrote: > > > On Sat, Aug 1, 2020 at 10:42 PM Thomas Ward > wrote: > >> 301 Redirects don't work for full system paths because they are returned >> to the client saying "go here instead". It then interprets your path as a >> URI which doesn't exist inside your site docroot. >> >> You might have meant to use `root` instead of `return 301` here to serve >> the data directly from that directory at the path you are accessing it from >> on the client browser. >> > > Hi Thomas > > Thanks for the reply. I have set it to below. > > location /retail { > root /var/www/drupalsubsite/marketplace-v2/mpV2/web; > #root /var/www/drupal/marketplace-v2/mpV2/web; > index index.php; > } > > *retail* is a soft link inside /var/www/drupal/marketplace-v2/mpV2/web > directory pointing to /var/www/drupalsubsite/marketplace-v2/mpV2/web > directory > > [root at developerportal www]# pwd > /var/www > [root at developerportal www]# ls -l > total 0 > drwxr-xr-x 2 root root 6 Apr 2 18:44 cgi-bin > drwxr-xr-x 3 nginx nginx 64 Jun 10 13:52 *drupal* > drwxr-xr-x 3 nginx nginx 28 Aug 1 12:32 *drupalsubsite* > drwxr-xr-x 2 root root 6 Apr 2 18:44 html > [root at developerportal www]# > [root at developerportal web]# pwd > /var/www/drupal/marketplace-v2/mpV2/web > [root at developerportal web]# ls -l > total 73576 > -rwxr-xr-x 1 nginx nginx 385 Jul 28 18:25 autoload.php > drwxrwxrwx 12 nginx nginx 4096 Jun 11 14:21 core > -rw-r--r-- 1 nginx nginx 75310608 Aug 1 11:18 db.sql > -rwxr-xr-x 1 nginx nginx 549 Jul 26 18:47 index.php > drwxrwxrwx 14 nginx nginx 286 Jul 28 18:25 libraries > drwxrwxrwx 6 nginx nginx 101 Jul 26 18:47 modules > drwxrwxrwx 2 nginx nginx 22 Jul 26 18:47 profiles > lrwxrwxrwx 1 nginx nginx 47 Aug 1 16:23 *retail* -> > */var/www/drupalsubsite/marketplace-v2/mpV2/web/* > -rwxr-xr-x 1 nginx nginx 1594 Jul 26 18:47 robots.txt > drwxrwxrwx 3 nginx nginx 112 Jul 26 18:47 sites > drwxrwxrwx 4 nginx nginx 69 Jul 26 18:47 themes > -rwxr-xr-x 1 nginx nginx 848 Jul 26 18:47 update.php > -rwxr-xr-x 1 nginx nginx 4566 Jul 26 18:47 web.config > [root at developerportal web]# > > When I hit https://developerportal.example.com/retail > > 2020/08/02 05:12:29 [error] 3429#0: *1 > "/var/www/drupalsubsite/marketplace-v2/mpV2/web/retail/index.php" is not > found (2: No such file or directory), client: 192.168.0.10, server: > developerportal.example.com, request: "GET /retail/ HTTP/1.1", host: " > developerportal.example.com" > > ==> /var/log/nginx/access.log <== > 192.168.0.10 - 
[02/Aug/2020:05:12:29 +0530] "GET /retail/ HTTP/1.1" 404 > 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:79.0) > Gecko/20100101 Firefox/79.0" "-" > > 2020/08/02 05:11:46 [error] 3429#0: *1 open() > "/var/www/drupalsubsite/marketplace-v2/mpV2/web/retail/user" failed (2: No > such file or directory), client: 192.168.0.10, server: > developerportal.example.com, request: "GET /retail/user HTTP/1.1", host: " > developerportal.example.com" > > ==> /var/log/nginx/access.log <== > 192.168.0.10 - [02/Aug/2020:05:11:46 +0530] "GET /retail/user HTTP/1.1" > 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:79.0) > Gecko/20100101 Firefox/79.0" "-" > > ==> /var/log/nginx/error.log <== > 2020/08/02 05:11:50 [error] 3429#0: *1 open() > "/var/www/drupalsubsite/marketplace-v2/mpV2/web/retail/user/login" failed > (2: No such file or directory), client: 192.168.0.10, server: > developerportal.example.com, request: "GET /retail/user/login HTTP/1.1", > host: "developerportal.example.com" > > ==> /var/log/nginx/access.log <== > 192.168.0.10 - [02/Aug/2020:05:11:50 +0530] "GET /retail/user/login > HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; > rv:79.0) Gecko/20100101 Firefox/79.0" "-" > > Any help will be highly appreciated. Please let me know if you need any > additional configuration and I look forward to hearing from you soon. > > Thanks in advance. > > Best Regards, > > Kaushal > > Hi, When I hit https://developer.example.com/retail on the browser it should render web pages from /var/www/drupalsubsite/marketplace-v2/mpV2/web directory. I tried the below options in /etc/nginx/nginx.conf and none of the options work. Option 1) rewrite ^/$ /var/www/drupalsubsite/marketplace-v2/mpV2/web; Option 2) location /retail { root /var/www/drupalsubsite/marketplace-v2/mpV2/web; # root /var/www/drupal/marketplace-v2/mpV2/web; index index.php; } Option 3) location /retail { root /var/www/drupal/marketplace-v2/mpV2/web; disable_symlinks off; index index.php; } Any help will be highly appreciated. Thanks in Advance. I look forward to hearing from you. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Aug 2 11:00:36 2020 From: nginx-forum at forum.nginx.org (orgads) Date: Sun, 02 Aug 2020 07:00:36 -0400 Subject: SO_BINDTODEVICE In-Reply-To: References: Message-ID: <24aa29e66c92033671b74c91f6f852ea.NginxMailingListEnglish@forum.nginx.org> Do you have a reliable source about it being deprecated? I couldn't find any. A comment on stackoverflow claims the opposite[1]. Would you consider accepting a patch for supporting it (we can rebase the original patch and restart the discussion)? [1] https://stackoverflow.com/a/1215424/764870 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266753,288942#msg-288942 From miguelmclara at gmail.com Sun Aug 2 13:35:34 2020 From: miguelmclara at gmail.com (Miguel C) Date: Sun, 2 Aug 2020 14:35:34 +0100 Subject: nginx subsite configuration not working In-Reply-To: References: <0b411d9f-3ecb-40e5-843b-6ab7337cb887@thomas-ward.net> Message-ID: Looks like it's a Drupal (php) site just pointing it to index.php won't work you need to have a back end for PHP.... I.E. php-fpm and setup nginx to use fastcgi proxy. 
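As a very rough sketch (nothing here is taken from your actual setup, so treat the php-fpm socket path /run/php-fpm/www.sock and the try_files fallback as assumptions to adjust for your install), the usual shape is:

location / {
    # send clean URLs to Drupal's front controller
    try_files $uri /index.php?$query_string;
}

location ~ \.php$ {
    # hand .php requests to the php-fpm backend instead of returning the source
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php-fpm/www.sock;
}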
The exact config might be different depending on how urls are handled in Drupal, I'm more familiar with WordPress, but I found this that seems to offer great help: https://dashohoxha.blogspot.com/2012/10/using-nginx-as-web-server-for-drupal.html?m=1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Sun Aug 2 16:47:37 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Sun, 2 Aug 2020 22:17:37 +0530 Subject: nginx subsite configuration not working In-Reply-To: References: <0b411d9f-3ecb-40e5-843b-6ab7337cb887@thomas-ward.net> Message-ID: On Sun, Aug 2, 2020 at 7:06 PM Miguel C wrote: > Looks like it's a Drupal (php) site just pointing it to index.php won't > work you need to have a back end for PHP.... I.E. php-fpm and setup nginx > to use fastcgi proxy. > > The exact config might be different depending on how urls are handled in > Drupal, I'm more familiar with WordPress, but I found this that seems to > offer great help: > > > https://dashohoxha.blogspot.com/2012/10/using-nginx-as-web-server-for-drupal.html?m=1 > _______________________________________________ > Thanks Miguel for the email and much appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Aug 2 19:06:33 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 2 Aug 2020 22:06:33 +0300 Subject: Request Time in Nginx Log as always 0.000 for HIT Request In-Reply-To: <7130ffaa757c7da514ef95e0cd7cc9dd.NginxMailingListEnglish@forum.nginx.org> References: <7130ffaa757c7da514ef95e0cd7cc9dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200802190633.GJ12747@mdounin.ru> Hello! On Sat, Aug 01, 2020 at 04:31:14PM -0400, anish10dec wrote: > We are observing a behavior where request time and upstream response time is > logged as same value when request is MISS in log file. > > And when there is HIT for the request , request time is logged as 0.000 for > all the requests. nginx updates internal time only once per event loop iteration, so it is relatively easy to see $request_time logged as 0 (or identical to $upstream_response_time as long as there was an request to upstream). This implies: - the request can be read immediately without waiting for additional data from client; - the response is small enough to fit into the socket buffer. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Aug 3 09:24:46 2020 From: nginx-forum at forum.nginx.org (Evald80) Date: Mon, 03 Aug 2020 05:24:46 -0400 Subject: Found Nginx 1.19.0 stopped but no idea what happened In-Reply-To: <0b2c0553f84abe4ccd21dec4d78d3a8a.NginxMailingListEnglish@forum.nginx.org> References: <20200709170335.GM20939@daoine.org> <76d7ae8b3ae9ce05e22e3f4c7415b7ad.NginxMailingListEnglish@forum.nginx.org> <0b2c0553f84abe4ccd21dec4d78d3a8a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0a2f19fa2b6998471e961c5e73197462.NginxMailingListEnglish@forum.nginx.org> Seems the problem is with the modsec module. After disabling it, the problem disappeared... Also nginx -t command is very slow with modsec module enabled. Pretty strange that nobody has encountered this issue. Mos probably people are not running modsec and websites on the same installation of nginx. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288511,288950#msg-288950 From nginx-forum at forum.nginx.org Mon Aug 3 19:54:15 2020 From: nginx-forum at forum.nginx.org (anish10dec) Date: Mon, 03 Aug 2020 15:54:15 -0400 Subject: Request Time in Nginx Log as always 0.000 for HIT Request In-Reply-To: <20200802190633.GJ12747@mdounin.ru> References: <20200802190633.GJ12747@mdounin.ru> Message-ID: In our case response body is of size around 4MB to 8MB and its showing 0.000. Since "request time" is for analyzing the time taken for delivering the content to client , we are not able to get the actual value or time taken . Even on slow user connection its showing 0.000 . Generally it should be much higher as it captures the total time taken for delivering last byte of the content to user. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288938,288954#msg-288954 From mdounin at mdounin.ru Mon Aug 3 21:07:10 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Aug 2020 00:07:10 +0300 Subject: Request Time in Nginx Log as always 0.000 for HIT Request In-Reply-To: References: <20200802190633.GJ12747@mdounin.ru> Message-ID: <20200803210710.GT12747@mdounin.ru> Hello! On Mon, Aug 03, 2020 at 03:54:15PM -0400, anish10dec wrote: > In our case response body is of size around 4MB to 8MB and its showing > 0.000. > > Since "request time" is for analyzing the time taken for delivering the > content to client , we are not able to get the actual value or time taken . > > Even on slow user connection its showing 0.000 . > Generally it should be much higher as it captures the total time taken for > delivering last byte of the content to user. The $request_time variable shows the time from reading the the first byte of the request from the client to sending the last byte of the response to the socket buffer. Note the difference: not deliverying the last byte to the user, but sending the response to the socket buffer. With large enough socket buffers and/or small enough responses $request_time can be 0, and usually this is expected result since all request processing happens within a single event loop iteration. In some extreme cases $request_time can be seen to be 0 even if sending the response is slow and takes significant time: this might happen if reading from disk is very slow (slower than sending to the client), so nginx never blocks on sending. Most often this happens when using sendfile, and the sendfile_max_chunk directive (http://nginx.org/r/sendfile_max_chunk) can be used to limit the amount of data sent within a single event loop iteration. Given the response sizes this is unlikely your case though, as 4M to 8M looks quite comparable to modern sizes of socket buffers, especially when using auto-tuning. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Aug 4 03:33:35 2020 From: nginx-forum at forum.nginx.org (vijay.dcrust) Date: Mon, 03 Aug 2020 23:33:35 -0400 Subject: unable to get local issuer certificate In-Reply-To: <1124207028.901143.1584957968660@ox.hosteurope.de> References: <1124207028.901143.1584957968660@ox.hosteurope.de> Message-ID: Did you get any fix for this. I am also having same problem. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287423,288956#msg-288956 From gheorghe.nica at bofa.com Wed Aug 5 21:21:42 2020 From: gheorghe.nica at bofa.com (Nica, George) Date: Wed, 05 Aug 2020 21:21:42 +0000 Subject: how to configure request rate limiting by Kerberos authenticated user? 
Message-ID: <93533a8a73b749869ec5ea5d6a919d1b@bofa.com> Hi, We are currently using "limit_req_zone $binary_remote_addr" for rate limiting. However, some of our users are connecting from more than one IP address, using clients running on computer grids. We wanted to do request rate limiting by authenticated user (in addition to the existing one by $binary_remote_addr). Is there any way we could do request rate limiting based on authenticated user? We use Kerberos for authentication, using ngx_http_auth_spnego_module (https://github.com/stnoonan/spnego-http-auth-nginx-module). We tried "limit_req_zone $remote_user zone=user:10m rate=20r/s;" and "limit_req zone=user burst=20;" but the key was apparently empty - all requests, from all users, were getting limited (all bunched under one key). However, interestingly, $remote_user is passed fine to the upstream using "proxy_set_header X-Forwarded-User $remote_user;"... Apparently $remote_user only works for request limiting when using basic authentication. Thank you for any suggestions/pointers. Best, George ---------------------------------------------------------------------- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From xserverlinux at gmail.com Wed Aug 5 23:31:26 2020 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Wed, 5 Aug 2020 17:31:26 -0600 Subject: app mvc behind proxy reverse Message-ID: Hi, I am having some problems to load an app made in mvc, when I access the url and I want to edit a table to make a change, the proxy returns me to the root of the project, and it does not stay in app/pais I paste the url log, where it says it can't find resource 404 https://pastebin.com/AeRRrMRi 1.1.1.1 - - [05/Aug/2020:17:21:23 -0600] "POST /agregareditar HTTP/2.0" 404 709 "https://test.domain.com/pais" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.2 Safari/605.1.15" proxy reverse config: location / { proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 900s; proxy_send_timeout 900s; proxy_read_timeout 900s; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_redirect off; proxy_request_buffering off; proxy_buffering off; proxy_pass http://backend29; } === backend config upstream backend29 { server 192.168.11.95:80; ## web windows keepalive 2; } server { listen 80; server_name test.domain.com; #YourIP or domain pagespeed unplugged; return 301 https://$server_name$request_uri; # redirect all to use ssl } any ideas? -- rickygm http://gnuforever.homelinux.com From mdounin at mdounin.ru Thu Aug 6 00:34:52 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Aug 2020 03:34:52 +0300 Subject: how to configure request rate limiting by Kerberos authenticated user? In-Reply-To: <93533a8a73b749869ec5ea5d6a919d1b@bofa.com> References: <93533a8a73b749869ec5ea5d6a919d1b@bofa.com> Message-ID: <20200806003452.GY12747@mdounin.ru> Hello! 
On Wed, Aug 05, 2020 at 09:21:42PM +0000, Nica, George wrote: > We are currently using "limit_req_zone $binary_remote_addr" for rate limiting. However, some of our users are connecting from more than one IP address, using clients running on computer grids. > We wanted to do request rate limiting by authenticated user (in addition to the existing one by $binary_remote_addr). > Is there any way we could do request rate limiting based on authenticated user? > We use Kerberos for authentication, using ngx_http_auth_spnego_module (https://github.com/stnoonan/spnego-http-auth-nginx-module). > We tried "limit_req_zone $remote_user zone=user:10m rate=20r/s;" and "limit_req zone=user burst=20;" but the key was apparently empty - all requests, from all users, were getting limited (all bunched under one key). However, interestingly, $remote_user is passed fine to the upstream using "proxy_set_header X-Forwarded-User $remote_user;"... Apparently $remote_user only works for request limiting when using basic authentication. > Thank you for any suggestions/pointers. The $remote_user variable is extracted by nginx from the Authorization header only when using Basic authentication. The SPNEGO auth module tries to make it work by providing a fake "Authorization: Basic ..." header to nginx, but this won't work for limit_req because rate limiting happens before access checks (and so before the SPNEGO auth module adds the fake header). If you want to limit requests based on the user name from the SPNEGO auth module, the most obvious solution would be to do this with additional proxying, so the user name will be known. Alternative solutions include adding its own variable to the module, so it can be used at any time (much like $remote_user when using Basic authentication), or doing some clever redirect tricks to convince nginx to do authentication first, and then to do rate limiting (in another location, after a redirect). -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Thu Aug 6 12:28:59 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 6 Aug 2020 13:28:59 +0100 Subject: app mvc behind proxy reverse In-Reply-To: References: Message-ID: <20200806122859.GA20939@daoine.org> On Wed, Aug 05, 2020 at 05:31:26PM -0600, Rick Gutierrez wrote: Hi there, > Hi, I am having some problems to load an app made in mvc, when I > access the url and I want to edit a table to make a change, the proxy > returns me to the root of the project, and it does not stay in > app/pais Your nginx config says that when the client asks nginx for /pais/agregareditar, nginx asks the upstream for /pais/agregareditar -- your log at "05/Aug/2020:17:20:51" shows this request. Your nginx config says that when the client asks nginx for /agregareditar, nginx asks the upstream for /agregareditar -- your log at "05/Aug/2020:17:21:23" shows this request. It looks like the app is pointing to /agregareditar instead of to /pais/agregareditar; and it looks like nginx is not involved in this. Can you see what the html-or-javascript that tells the browser where to POST the request, says about where to POST the request? > any ideas? Maybe something like "location = /pais { return 301 /pais/; }" in your nginx config will help, if the upstream server should do that but does not? 
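In context that would be something like the below; only a sketch, where the first block is the new part and the trimmed location / with the backend29 upstream is taken from your earlier mail:

location = /pais {
    # send the browser from the bare /pais to /pais/,
    # so relative links then resolve under /pais/ instead of /
    return 301 /pais/;
}

location / {
    proxy_pass http://backend29;
}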
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Aug 6 15:27:34 2020 From: nginx-forum at forum.nginx.org (George Nica) Date: Thu, 06 Aug 2020 11:27:34 -0400 Subject: how to configure request rate limiting by Kerberos authenticated user? In-Reply-To: <20200806003452.GY12747@mdounin.ru> References: <20200806003452.GY12747@mdounin.ru> Message-ID: <8eae6103b0935027fc45e9bac4d29380.NginxMailingListEnglish@forum.nginx.org> Thank you Maxim. Adding an extra variable to the spnego auth module sounds intriguing, but also challenging because; as you mention "rate limiting happens before access checks" and this module mainly deals with access checks until now. Sounds like an extra level of proxying is the way ahead for now. It would be nice if Kerberos were supported directly by nginx. :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288976,288990#msg-288990 From xserverlinux at gmail.com Thu Aug 6 15:48:55 2020 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Thu, 6 Aug 2020 09:48:55 -0600 Subject: app mvc behind proxy reverse In-Reply-To: <20200806122859.GA20939@daoine.org> References: <20200806122859.GA20939@daoine.org> Message-ID: El jue., 6 ago. 2020 a las 6:29, Francis Daly () escribi?: > Hi francis > > Your nginx config says that when the client asks nginx for > /pais/agregareditar, nginx asks the upstream for /pais/agregareditar -- > your log at "05/Aug/2020:17:20:51" shows this request. > > Your nginx config says that when the client asks nginx for > /agregareditar, nginx asks the upstream for /agregareditar -- your log at > "05/Aug/2020:17:21:23" shows this request. > > It looks like the app is pointing to /agregareditar instead of to > /pais/agregareditar; and it looks like nginx is not involved in this. > > > Can you see what the html-or-javascript that tells the browser where to > POST the request, says about where to POST the request? The country file, is a javascript, performs the task, only that url that calls is where the controller is, that verifies if it edits or saves, and the proxy reverse does not interpret it that way because it is only a folder address, not a file address per that's it send a 404. I am not a programmer, the country folder has several files. # Pais ls AgregarEditar.cshtml ListaPaises.cshtml Index.cshtml Pais.txt > > > any ideas? > > Maybe something like "location = /pais { return 301 /pais/; }" in > your nginx config will help, if the upstream server should do that but > does not? a little slower here, this would have to go down to the first location, so what you tell me: location = /pais { return 301 /pais/; }" > > Good luck with it, > > f thank -- rickygm http://gnuforever.homelinux.com From francis at daoine.org Thu Aug 6 16:25:41 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 6 Aug 2020 17:25:41 +0100 Subject: app mvc behind proxy reverse In-Reply-To: References: <20200806122859.GA20939@daoine.org> Message-ID: <20200806162541.GB20939@daoine.org> On Thu, Aug 06, 2020 at 09:48:55AM -0600, Rick Gutierrez wrote: > El jue., 6 ago. 2020 a las 6:29, Francis Daly () escribi?: Hi there, > > Can you see what the html-or-javascript that tells the browser where to > > POST the request, says about where to POST the request? 
> > The country file, is a javascript, performs the task, only that url > that calls is where the controller is, that verifies if it edits or > saves, and the proxy reverse does not interpret it that way because it > is only a folder address, not a file address per that's it send a 404. As I understand it, your nginx conf says (basically) location / { proxy_pass http://backend29; } so every request to nginx gets sent to the backend server. nginx does not know or care about files or folders; it proxy_pass:es all requests. So the 404 comes from the backend server, because the app causes the browser to ask for /agregareditar and not for /pais/agregareditar. > I am not a programmer, the country folder has several files. > > # Pais ls > > AgregarEditar.cshtml ListaPaises.cshtml > > Index.cshtml Pais.txt > > > > > > > any ideas? > > > > Maybe something like "location = /pais { return 301 /pais/; }" in > > your nginx config will help, if the upstream server should do that but > > does not? > > a little slower here, this would have to go down to the first > location, so what you tell me: > > location = /pais { return 301 /pais/; }" My guess is that your browser's first request is for "/pais", and that returns something from the backend that says "ask for agregareditar". And the browser correctly resolves "/pais" + "agregareditar" to "/agregareditar". But that is not what you want. My guess is that if your browser's first request is for "/pais/" (with a / at the end), then what is returned from the backend will be the same; but now the browser will resolve "/pais/" + "agregareditar" to "/pais/agregareditar", which (hopefully) is what you want. If that does work -- if you start by asking for "/pais/" and everything works as you want it to -- then you can tell nginx to intercept the first request for "/pais", and tell the browser to instead ask for "/pais/". And then things might work. You can tell nginx to do that one-interception by putting location = /pais { return 301 /pais/; } in the same file as your main config, just before the location / { line that you showed. If your application does *not* work when you start by requesting "/pais/", then this change to nginx will not fix things for you. Right now, I see no evidence of a problem in the nginx config; only in the backend application. Maybe some evidence will appear, if there is still a problem. Good luck with it, f -- Francis Daly francis at daoine.org From fusca14 at gmail.com Fri Aug 7 00:43:46 2020 From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho) Date: Thu, 6 Aug 2020 21:43:46 -0300 Subject: Strange error_page behavior Message-ID: Hi... I have the following setup in my NGINX 1.18.0 server: http { ... error_page 400 /my400.html; error_page 401 /my401.html; error_page 403 /my403.html; error_page 404 /my404.html; error_page 405 /my405.html; error_page 413 /my413.html; error_page 500 /my500.html; error_page 502 503 /my503.html; error_page 504 /my504.html; ... server { ... allow 10.0.0.0/8; # my internal network deny all; # all others networks error_page 403 =503 /my503.html; #error_page 502 503 /my503.html; location ~ ^/(my400|my401|my403|my404|my405|my413|my500|my503|my504)\.html$ { root /usr/share/nginx/my_custom_error_pages; internal; allow all; } ... location / { ... proxy_pass http://...; } } } In this setup, I want to allow my internal network to access my backend server via proxy_pass AND deny this access to others networks (error 403 "redirected" to 503 --> defined in "error_page 403 =503 /my503.html;"). 
This is working great, but when my backend gets down, I receive a default NGINX "502 Bad Gateway" error page instead of my custom my503.html page. Why is this happening?

I can solve this issue by uncommenting the "#error_page 502 503 /my503.html;" line in the server block, but this configuration is already defined in the http block, which is hierarchically superior in comparison with the server block. Why isn't NGINX regarding the error_page in the http block?

Thanks in advance.

From francis at daoine.org Fri Aug 7 07:05:00 2020
From: francis at daoine.org (Francis Daly)
Date: Fri, 7 Aug 2020 08:05:00 +0100
Subject: Strange error_page behavior
In-Reply-To:
References:
Message-ID: <20200807070500.GC20939@daoine.org>

On Thu, Aug 06, 2020 at 09:43:46PM -0300, Fabiano Furtado Pessoa Coelho wrote:

Hi there,

> http {
> error_page 502 503 /my503.html;
> ...
> server {
> ...
> error_page 403 =503 /my503.html;
> #error_page 502 503 /my503.html;

> I can solve this issue by uncommenting the "#error_page 502 503
> /my503.html;" line in the server block, but this configuration is already
> defined in the http block, which is hierarchically superior in comparison
> with the server block. Why isn't NGINX regarding the error_page in the http
> block?

Nginx directive inheritance is (in general) "by replacement" or "not at all".

If you have a directive in a location, that is the full set of that-directives that applies to the matching request.

Since you have an error_page at server{} level, any error_page defined at http level is irrelevant for any request handled in this server{}.

http://nginx.org/r/error_page

If you want a distinct set of error_page directives to apply at server{}-level, you must write them all there (perhaps by "include"ing a file).

f
--
Francis Daly francis at daoine.org

From fusca14 at gmail.com Fri Aug 7 13:29:45 2020
From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho)
Date: Fri, 7 Aug 2020 10:29:45 -0300
Subject: Strange error_page behavior
In-Reply-To: <20200807070500.GC20939@daoine.org>
References: <20200807070500.GC20939@daoine.org>
Message-ID:

Thanks for the reply and I'm sorry! It was my fault that I didn't see that text in the documentation:

"These directives are inherited from the previous level if and only if there are no error_page directives defined on the current level."

On Fri, Aug 7, 2020 at 4:05 AM Francis Daly wrote:
>
> On Thu, Aug 06, 2020 at 09:43:46PM -0300, Fabiano Furtado Pessoa Coelho wrote:
>
> Hi there,
>
> > http {
> > error_page 502 503 /my503.html;
> > ...
> > server {
> > ...
> > error_page 403 =503 /my503.html;
> > #error_page 502 503 /my503.html;
>
> > I can solve this issue by uncommenting the "#error_page 502 503
> > /my503.html;" line in the server block, but this configuration is already
> > defined in the http block, which is hierarchically superior in comparison
> > with the server block. Why isn't NGINX regarding the error_page in the http
> > block?
>
> Nginx directive inheritance is (in general) "by replacement" or "not
> at all".
>
> If you have a directive in a location, that is the full set of
> that-directives that applies to the matching request.
>
> Since you have an error_page at server{} level, any error_page defined
> at http level is irrelevant for any request handled in this server{}.
>
> http://nginx.org/r/error_page
>
> If you want a distinct set of error_page directives to apply at
> server{}-level, you must write them all there (perhaps by "include"ing
> a file).
> > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From xserverlinux at gmail.com Fri Aug 7 14:16:22 2020 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Fri, 7 Aug 2020 08:16:22 -0600 Subject: app mvc behind proxy reverse In-Reply-To: <20200806162541.GB20939@daoine.org> References: <20200806122859.GA20939@daoine.org> <20200806162541.GB20939@daoine.org> Message-ID: Thank Francis, if you're right, it's a backend problem. we are solving. El El jue, 6 de ago. de 2020 a la(s) 10:25, Francis Daly escribi?: > On Thu, Aug 06, 2020 at 09:48:55AM -0600, Rick Gutierrez wrote: > > > El jue., 6 ago. 2020 a las 6:29, Francis Daly () > escribi?: > > > > Hi there, > > > > > > Can you see what the html-or-javascript that tells the browser where to > > > > POST the request, says about where to POST the request? > > > > > > The country file, is a javascript, performs the task, only that url > > > that calls is where the controller is, that verifies if it edits or > > > saves, and the proxy reverse does not interpret it that way because it > > > is only a folder address, not a file address per that's it send a 404. > > > > As I understand it, your nginx conf says (basically) > > > > location / { > > proxy_pass http://backend29; > > } > > > > so every request to nginx gets sent to the backend server. nginx does > > not know or care about files or folders; it proxy_pass:es all requests. > > > > So the 404 comes from the backend server, because the app causes the > > browser to ask for /agregareditar and not for /pais/agregareditar. > > > > > I am not a programmer, the country folder has several files. > > > > > > # Pais ls > > > > > > AgregarEditar.cshtml ListaPaises.cshtml > > > > > > Index.cshtml Pais.txt > > > > > > > > > > > > > > > any ideas? > > > > > > > > Maybe something like "location = /pais { return 301 /pais/; }" in > > > > your nginx config will help, if the upstream server should do that but > > > > does not? > > > > > > a little slower here, this would have to go down to the first > > > location, so what you tell me: > > > > > > location = /pais { return 301 /pais/; }" > > > > My guess is that your browser's first request is for "/pais", > > and that returns something from the backend that says "ask for > > agregareditar". And the browser correctly resolves "/pais" + > > "agregareditar" to "/agregareditar". But that is not what you want. > > > > My guess is that if your browser's first request is for "/pais/" (with > > a / at the end), then what is returned from the backend will be the > > same; but now the browser will resolve "/pais/" + "agregareditar" to > > "/pais/agregareditar", which (hopefully) is what you want. > > > > If that does work -- if you start by asking for "/pais/" and everything > > works as you want it to -- then you can tell nginx to intercept the first > > request for "/pais", and tell the browser to instead ask for "/pais/". And > > then things might work. > > > > You can tell nginx to do that one-interception by putting > > > > location = /pais { return 301 /pais/; } > > > > in the same file as your main config, just before the > > > > location / { > > > > line that you showed. > > > > If your application does *not* work when you start by requesting "/pais/", > > then this change to nginx will not fix things for you. > > > > Right now, I see no evidence of a problem in the nginx config; only in > > the backend application. 
> > > > Maybe some evidence will appear, if there is still a problem. > > > > Good luck with it, > > > > f > > -- > > Francis Daly francis at daoine.org > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- rickygm http://gnuforever.homelinux.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor at bitonic.nl Fri Aug 7 14:18:51 2020 From: victor at bitonic.nl (=?ISO-8859-1?Q?V=EDctor_Enr=EDquez?=) Date: Fri, 07 Aug 2020 16:18:51 +0200 Subject: Issue with NGINX as reverse proxy for grpc service Message-ID: <400baf35af123cdb23125d0fac59ec63b737706b.camel@bitonic.nl> Hi, So we have a service exposing a grpc interface under a certain location and we are using nginx in front of it. The config looks like the following: upstream grpcservers { server fqdn:port; server fqdn:port; } ... server { listen port ssl http2; client_max_body_size 15m; server_name fqdn; ssl_certificate /etc/certs/server.crt; ssl_certificate_key /etc/certs/server.key; location /my.location. { grpc_set_header X-Ip-Address $remote_addr; grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for; grpc_ssl_certificate /etc/ssl/mtls-client.crt; grpc_ssl_certificate_key /etc/ssl/mtls-client.key; grpc_pass grpcs://grpcservers; ... } # Error responses include conf.d/errors.grpc_conf; # gRPC-compliant error responses default_type application/grpc; # Ensure gRPC for all error responses } //End of the server directive Now we just realized that each time we do a GET / to that specific port under that specific location using curl --http2, the request is forwarded to the backend in such a way that it makes nginx believe that the backend has crashed, allowing anyone to DDoS this particular service by just repeteadly sending GET / request to the endpoint. I am seeing the following messages in the logs: 020/08/07 13:02:37 [error] 1100#1100: *199 upstream rejected request with error 2 while reading response header from upstream, client: X.X.X.X, server: fqdn1, request: "POST /my.location.magic.API/GetMagic HTTP/2.0", upstream: "grpcs://Z.Z.Z.Z:PORT", host: "fqdn1:PORT" Eventually after the 2 upstream servers are marked as failed we get the following message: 020/08/07 11:07:05 [error] 1100#1100: *96 no live upstreams while connecting to upstream, client: X.X.X.X, server: fqdn1, request: "POST /my.location.magic.API/GetMagic HTTP/2.0", upstream: "grpcs://grpcservers", host: "fqdn1:PORT" until the servers are marked as valid again. And the cycle repeats. I am not an expert on HTTP/2 or gRPC, but it seems like nginx is unable to negotiate the connection with the backend for those particular requests created with curl and ends marking the backend as failed when in fact, it is not failing. Any ideas about how can I further debug this issue? Thanks in advance. From nginx-forum at forum.nginx.org Fri Aug 7 14:23:44 2020 From: nginx-forum at forum.nginx.org (George Nica) Date: Fri, 07 Aug 2020 10:23:44 -0400 Subject: handling client disconnect. call-back? Message-ID: My understanding is that abrupt client disconnects are transparent through nginx -- the connection to upstream is closed and the upstream should handle that as it can. Please correct me if I am wrong. Is there a way to use a call-back (or something similar, a redirect), in nginx.conf, for client disconnects? 
This would be useful when the upstream is not good at directly handling the disconnect (not async, and still processing the response for the now-disconnected client). This could help clean up resources, on a parallel channel. Best, George Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289006,289006#msg-289006 From pluknet at nginx.com Fri Aug 7 16:28:07 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 7 Aug 2020 19:28:07 +0300 Subject: Issue with NGINX as reverse proxy for grpc service In-Reply-To: <400baf35af123cdb23125d0fac59ec63b737706b.camel@bitonic.nl> References: <400baf35af123cdb23125d0fac59ec63b737706b.camel@bitonic.nl> Message-ID: > On 7 Aug 2020, at 17:18, V?ctor Enr?quez wrote: > > Hi, > > So we have a service exposing a grpc interface under a certain location > and we are using nginx in front of it. The config looks like the > following: > > upstream grpcservers { > server fqdn:port; > server fqdn:port; > } > > ... > > server { > listen port ssl http2; > client_max_body_size 15m; > server_name fqdn; > > ssl_certificate /etc/certs/server.crt; > ssl_certificate_key /etc/certs/server.key; > > location /my.location. { > grpc_set_header X-Ip-Address $remote_addr; > grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > grpc_ssl_certificate /etc/ssl/mtls-client.crt; > grpc_ssl_certificate_key /etc/ssl/mtls-client.key; > grpc_pass grpcs://grpcservers; > ... > } > > # Error responses > include conf.d/errors.grpc_conf; # gRPC-compliant error responses > default_type application/grpc; # Ensure gRPC for all error > responses > > } //End of the server directive > > Now we just realized that each time we do a GET / to that specific port > under that specific location using curl --http2, the request is > forwarded to the backend in such a way that it makes nginx believe that > the backend has crashed, allowing anyone to DDoS this particular > service by just repeteadly sending GET / request to the endpoint. > > I am seeing the following messages in the logs: > > 020/08/07 13:02:37 [error] 1100#1100: *199 upstream rejected request > with error 2 while reading response header from upstream, client: > X.X.X.X, server: fqdn1, request: "POST /my.location.magic.API/GetMagic > HTTP/2.0", upstream: "grpcs://Z.Z.Z.Z:PORT", host: "fqdn1:PORT" "error 2" means that backend responded with RST_STREAM(INTERNAL_ERROR), that is, effectively rejected processing request. You may want to consult with backend error log to find out the reason. -- Sergey Kandaurov From nginx-forum at forum.nginx.org Sat Aug 8 16:55:47 2020 From: nginx-forum at forum.nginx.org (stmx38) Date: Sat, 08 Aug 2020 12:55:47 -0400 Subject: Why Nginx send traffic to the next upstream on 504 error Message-ID: Hello, We have an Nginx where we configured http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream in Nginx main config file at http level. We don't want to send traffic to the next upstream on 504: ---- proxy_next_upstream error timeout http_502 http_503 non_idempotent; ---- At vhost level we don't redefine this directive. 
But periodically we see the following in Nginx logs: ---- 141.101.69.85 - 192.168.1.10 - 59.489 - [08/Aug/2020:11:10:41.098 +0000] - POST - /api/ - HTTP/1.1 - 499 - - 0 - 2054 - "Java/1.8.0_242" - api-worker - "192.168.1.11:8080, 192.168.1.12:8080" - "504, -" - "0.000, 0.000" - "31.000, 28.489" ---- api-worker - "192.168.1.11:8080, 192.168.1.12:8080" - "504, -" >From the logs we see that Nginx received 504 status from the first upstream and then for some reason send traffic to the next one, despite the fact that it should not do it on 504 http status. We did a short test using http://httpstat.us/504 ---- upstream test-504 { server 104.31.86.226:80; server 104.31.87.226:80; } server { listen 443 ssl http2; server_name domain.tld # Test 504 location /test-504 { # proxy_next_upstream error timeout http_502 http_503 http_504 non_idempotent; proxy_pass http://test-504/504; proxy_set_header Host httpstat.us; } } ---- If we comment 'proxy_next_upstream' it uses one defined in the main config at http level and don't send traffic to the next upstream. If we uncomment it, we see that Nginx send traffic to the next upstream. All works as expected and described in the documentation. But the question remained for our production: Why Nginx send traffic to the next upstream on 504 error? 1. It is some misconfiguration on our side, maybe timeouts on any other directives should be enabled/disabled? 2. It is some kind if misunderstanding how 'proxy_next_upstream' directive works? 3. It is some kind of bug? Thank you! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289020,289020#msg-289020 From mdounin at mdounin.ru Sun Aug 9 12:29:43 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 9 Aug 2020 15:29:43 +0300 Subject: Why Nginx send traffic to the next upstream on 504 error In-Reply-To: References: Message-ID: <20200809122943.GG12747@mdounin.ru> Hello! On Sat, Aug 08, 2020 at 12:55:47PM -0400, stmx38 wrote: > We have an Nginx where we configured > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream > in Nginx main config file at http level. > > We don't want to send traffic to the next upstream on 504: > ---- > proxy_next_upstream error timeout http_502 http_503 non_idempotent; > ---- > > At vhost level we don't redefine this directive. But periodically we see the > following in Nginx logs: > ---- > 141.101.69.85 - 192.168.1.10 - 59.489 - [08/Aug/2020:11:10:41.098 +0000] - > POST - /api/ - HTTP/1.1 - 499 - - 0 - 2054 - "Java/1.8.0_242" - api-worker > - "192.168.1.11:8080, 192.168.1.12:8080" - "504, -" - "0.000, 0.000" - > "31.000, 28.489" > ---- > > api-worker - "192.168.1.11:8080, 192.168.1.12:8080" - "504, -" > > From the logs we see that Nginx received 504 status from the first upstream > and then for some reason send traffic to the next one, despite the fact that > it should not do it on 504 http status. The "504" status in $uptream_status is used to indicate timeouts, much like 502 to indicate errors. These are status codes nginx itself generates due to observed error conditions. And your "proxy_next_upstream" includes "timeout". [...] 
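If the intent is to never retry a request after an upstream timeout, the "timeout" parameter has to be removed from the directive as well, roughly like this (a sketch of the idea only, based on the directive you posted; whether giving up retries on timed-out upstreams is acceptable for your traffic is your decision):

# retry on connection errors and on 502/503 responses, but not on timeouts
proxy_next_upstream error http_502 http_503 non_idempotent;

With that change a timed-out upstream results in a 504 being returned to the client instead of the request being passed to the next server.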
-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sun Aug 9 14:45:14 2020 From: nginx-forum at forum.nginx.org (stmx38) Date: Sun, 09 Aug 2020 10:45:14 -0400 Subject: Why Nginx send traffic to the next upstream on 504 error In-Reply-To: <20200809122943.GG12747@mdounin.ru> References: <20200809122943.GG12747@mdounin.ru> Message-ID: <26e77e9500eba5f5d107b4e1d9b1f60b.NginxMailingListEnglish@forum.nginx.org> Maxim, thank you for reply! Some additional question here: 1. If we will remove 'timeout' will Nginx send traffic to the next upstream on 502 error? 2. More general question related the the Q.1. When Nginx interpret reply as 502 - on timeout, if yes - on which one? We have a lot of timeouts defined in the main config: ---- client_header_timeout 30s; client_body_timeout 30s; send_timeout 65s; proxy_connect_timeout 5s; proxy_send_timeout 65s; proxy_read_timeout 65s; ---- Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289020,289025#msg-289025 From doctor at doctor.nl2k.ab.ca Mon Aug 10 03:34:20 2020 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Sun, 9 Aug 2020 21:34:20 -0600 Subject: cgi-bin and nginx Message-ID: <20200810033420.GA43465@doctor.nl2k.ab.ca> Hello I am trying to migrate from Apache to nginx. Straight forward HTML no problem. Cgi-bin issues are stopping me. So In my https system I have location /cgi-bin/ { #try_files $uri =404 ; gzip off; root /usr/local/www/apache24/cgi-bin/; fastcgi_pass unix:/var/run/fcgiwrap/fcgiwrap.sock; include /usr/local/etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME /usr/local/www/apache24/cgi-bin/$fastcgi_script_name; } And In my error log I have 2020/08/09 21:25:24 [debug] 50073#101493: timer delta: 1 2020/08/09 21:25:24 [debug] 50073#101493: worker cycle 2020/08/09 21:25:24 [debug] 50073#101493: kevent timer: 59999, changes: 0 2020/08/09 21:25:24 [debug] 50073#101493: kevent events: 1 2020/08/09 21:25:24 [debug] 50073#101493: *1 event timer add: 10: 60000:34559651 3 2020/08/09 21:25:24 [debug] 50073#101493: *1 http finalize request: -4, "/cgi-bi n/syswatch.pl?" a:1, c:2 2020/08/09 21:25:24 [debug] 50073#101493: *1 http request count:2 blk:0 2020/08/09 21:25:24 [debug] 50073#101493: timer delta: 2 2020/08/09 21:25:24 [debug] 50073#101493: worker cycle 2020/08/09 21:25:24 [debug] 50073#101493: kevent timer: 60000, changes: 2 2020/08/09 21:25:24 [debug] 50073#101493: kevent events: 1 2020/08/09 21:25:24 [debug] 50073#101493: kevent: 3: ft:-2 fl:0020 ff:00000000 d :33751 ud:00000008028BC9B8 2020/08/09 21:25:24 [debug] 50073#101493: *1 http run request: "/cgi-bin/syswatc h.pl?" 2020/08/09 21:25:24 [debug] 50073#101493: *1 http upstream check client, write e vent:1, "/cgi-bin/syswatch.pl" 2020/08/09 21:25:24 [debug] 50073#101493: timer delta: 1 2020/08/09 21:25:24 [debug] 50073#101493: worker cycle 2020/08/09 21:25:24 [debug] 50073#101493: kevent timer: 59999, changes: 0 2020/08/09 21:25:24 [debug] 50073#101493: kevent events: 1 2020/08/09 21:25:24 [debug] 50073#101493: kevent: 10: ft:-1 fl:0020 ff:00000000 d:104 ud:000000080289FD20 2020/08/09 21:25:24 [debug] 50073#101493: *1 http upstream request: "/cgi-bin/syswatch.pl?" 
2020/08/09 21:25:24 [debug] 50073#101493: *1 http upstream process header 2020/08/09 21:25:24 [debug] 50073#101493: *1 malloc: 0000000801F8C000:4096 2020/08/09 21:25:24 [debug] 50073#101493: *1 recv: eof:0, avail:104, err:0 2020/08/09 21:25:24 [debug] 50073#101493: *1 recv: fd:10 104 of 4096 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record byte: 01 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record byte: 06 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record byte: 00 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record byte: 01 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record byte: 00 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record byte: 42 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record byte: 06 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record byte: 00 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi record length: 66 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi parser: 0 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi header: "Status: 403 Forbidden" 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi parser: 0 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi header: "Content-Type: text/plain" 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi parser: 1 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi header done 2020/08/09 21:25:24 [debug] 50073#101493: *1 posix_memalign: 0000000801FD4000:4096 @16 2020/08/09 21:25:24 [debug] 50073#101493: *1 HTTP/1.1 403 Forbidden Server: nginx/1.18.0 Date: Mon, 10 Aug 2020 03:25:24 GMT Content-Type: text/plain Transfer-Encoding: chunked Connection: keep-alive What is not correct? -- Member - Liberal International This is doctor@@nl2k.ab.ca Ici doctor@@nl2k.ab.ca Yahweh, Queen & country!Never Satan President Republic!Beware AntiChrist rising! https://www.empire.kred/ROOTNK?t=94a1f39b That man is the richest whose pleasures are the cheapest. -Henry David Thoreau From nginx-forum at forum.nginx.org Mon Aug 10 08:19:52 2020 From: nginx-forum at forum.nginx.org (mark_liao) Date: Mon, 10 Aug 2020 04:19:52 -0400 Subject: The nginx upstream configuration do not distinguish domain Message-ID: <5ef4a6b06572605631426f92cdd30488.NginxMailingListEnglish@forum.nginx.org> Hi, Dear friends: I post my problem detail in there: https://stackoverflow.com/questions/63239923/the-nginx-upstream-configuration-do-not-distinguish-domain why can not differentiate the two upstreams? Please take a look at this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289030,289030#msg-289030 From francis at daoine.org Mon Aug 10 09:43:42 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 10 Aug 2020 10:43:42 +0100 Subject: cgi-bin and nginx In-Reply-To: <20200810033420.GA43465@doctor.nl2k.ab.ca> References: <20200810033420.GA43465@doctor.nl2k.ab.ca> Message-ID: <20200810094342.GD20939@daoine.org> On Sun, Aug 09, 2020 at 09:34:20PM -0600, The Doctor wrote: Hi there, > location /cgi-bin/ { > #try_files $uri =404 ; > gzip off; > root /usr/local/www/apache24/cgi-bin/; > fastcgi_pass unix:/var/run/fcgiwrap/fcgiwrap.sock; > > include /usr/local/etc/nginx/fastcgi_params; > fastcgi_param SCRIPT_FILENAME /usr/local/www/apache24/cgi-bin/$fastcgi_script_name; > } > And In my error log I have > 2020/08/09 21:25:24 [debug] 50073#101493: *1 http upstream request: "/cgi-bin/syswatch.pl?" 
> 2020/08/09 21:25:24 [debug] 50073#101493: *1 http upstream process header > 2020/08/09 21:25:24 [debug] 50073#101493: *1 http fastcgi header: "Status: 403 Forbidden" I think that says that your fastcgi server is returning 403 to nginx, so nginx is returning that to the client. You should probably ask your fastcgi server why it is returning 403. If you look back a bit in your error log, you will probably see some lines like fastcgi param: "SCRIPT_FILENAME: /usr/local/www/apache24/cgi-bin//cgi-bin/syswatch.pl" which indicate exactly what key/value pairs nginx is presenting to your fastcgi server. If the fastcgi server logs are not clear enough, perhaps you can use those "fastcgi param" values to work out what is going wrong. Does the user that the fastcgi service runs as, have access to execute the file that nginx invites it to process? Perhaps, as that user ls -l /usr/local/www/apache24/cgi-bin//cgi-bin/syswatch.pl or ls -ld /usr/local/www/apache24/cgi-bin/cgi-bin/ ls -ld /usr/local/www/apache24/cgi-bin/ ls -ld /usr/local/www/apache24/ etc will show if there is a permission problem? Good luck with it, f -- Francis Daly francis at daoine.org From victor at bitonic.nl Mon Aug 10 09:49:02 2020 From: victor at bitonic.nl (=?ISO-8859-1?Q?V=EDctor_Enr=EDquez?=) Date: Mon, 10 Aug 2020 11:49:02 +0200 Subject: Issue with NGINX as reverse proxy for grpc service In-Reply-To: References: <400baf35af123cdb23125d0fac59ec63b737706b.camel@bitonic.nl> Message-ID: <2a0c238935211c2942dac37b65068132f30bdf61.camel@bitonic.nl> Hi Sergey, First thanks for your reply. What I don't really understand is, shouldn't nginx be more strict by default with the requests that are passed to grpc backends? I have been reading a little bit about the GRPC protocol, and it's supposed to just use POST requests (I might be wrong about this though). Shouldn't any other kind of request be filtered by nginx directly (at least if it is obvious that they are malformed)? I am going to try to reproduce this with a docker compose file and the standard hello grpc example, to see if it's a failure on our backend implementation or if I can reproduce it with other grpc backends. Other than this issue, the backend is working fine if the request is not created with curl (the clients that are supposed to use the backend are working fine), so I guess there is some checking on the golang's grpc implementation to check if the request is valid, and it returns that value RST_STREAM(INTERNAL_ERROR) (but I think we don't have control over it AFAIK, again I might be wrong). To me ,without being an expert, it sounds like there is something wrong, either on the nginx side or on the grpc implementation side of our backend, but we are not returning that error code directly. (Perhaps the golang's grpc implementation should return other error code that doesn't make nginx believe that the upstream is dead when it receives a malformed request). Again thanks for your help. On Fri, 2020-08-07 at 19:28 +0300, Sergey Kandaurov wrote: > > On 7 Aug 2020, at 17:18, V?ctor Enr?quez wrote: > > > > Hi, > > > > So we have a service exposing a grpc interface under a certain > > location > > and we are using nginx in front of it. The config looks like the > > following: > > > > upstream grpcservers { > > server fqdn:port; > > server fqdn:port; > > } > > > > ... 
> > > > server { > > listen port ssl http2; > > client_max_body_size 15m; > > server_name fqdn; > > > > ssl_certificate /etc/certs/server.crt; > > ssl_certificate_key /etc/certs/server.key; > > > > location /my.location. { > > grpc_set_header X-Ip-Address $remote_addr; > > grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > grpc_ssl_certificate /etc/ssl/mtls-client.crt; > > grpc_ssl_certificate_key /etc/ssl/mtls-client.key; > > grpc_pass grpcs://grpcservers; > > ... > > } > > > > # Error responses > > include conf.d/errors.grpc_conf; # gRPC-compliant error responses > > default_type application/grpc; # Ensure gRPC for all error > > responses > > > > } //End of the server directive > > > > Now we just realized that each time we do a GET / to that specific > > port > > under that specific location using curl --http2, the request is > > forwarded to the backend in such a way that it makes nginx believe > > that > > the backend has crashed, allowing anyone to DDoS this particular > > service by just repeteadly sending GET / request to the endpoint. > > > > I am seeing the following messages in the logs: > > > > 020/08/07 13:02:37 [error] 1100#1100: *199 upstream rejected > > request > > with error 2 while reading response header from upstream, client: > > X.X.X.X, server: fqdn1, request: "POST > > /my.location.magic.API/GetMagic > > HTTP/2.0", upstream: "grpcs://Z.Z.Z.Z:PORT", host: "fqdn1:PORT" > > "error 2" means that backend responded with > RST_STREAM(INTERNAL_ERROR), > that is, effectively rejected processing request. > You may want to consult with backend error log to find out the > reason. > From mdounin at mdounin.ru Mon Aug 10 12:13:31 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Aug 2020 15:13:31 +0300 Subject: The nginx upstream configuration do not distinguish domain In-Reply-To: <5ef4a6b06572605631426f92cdd30488.NginxMailingListEnglish@forum.nginx.org> References: <5ef4a6b06572605631426f92cdd30488.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200810121331.GI12747@mdounin.ru> Hello! On Mon, Aug 10, 2020 at 04:19:52AM -0400, mark_liao wrote: > Hi, Dear friends: > > I post my problem detail in there: > > > https://stackoverflow.com/questions/63239923/the-nginx-upstream-configuration-do-not-distinguish-domain > > why can not differentiate the two upstreams? > Please take a look at this. In your nginx configuration, there is no difference for the two server blocks configured except the uptream servers they proxy all requests to. On the other hand, you claim that you've started two upstream servers: "then I started the localhost:3003 and localhost:3002, each prefer to /www/wwwroot/www.demo1.com and /www/wwwroot/www.demo1.com." Given the "prefer to" paths (whatever it means), both upstream servers seems to use the same configuration and expected to return identical responses. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Aug 10 12:46:49 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Aug 2020 15:46:49 +0300 Subject: Why Nginx send traffic to the next upstream on 504 error In-Reply-To: <26e77e9500eba5f5d107b4e1d9b1f60b.NginxMailingListEnglish@forum.nginx.org> References: <20200809122943.GG12747@mdounin.ru> <26e77e9500eba5f5d107b4e1d9b1f60b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200810124649.GK12747@mdounin.ru> Hello! On Sun, Aug 09, 2020 at 10:45:14AM -0400, stmx38 wrote: > Maxim, thank you for reply! > > Some additional question here: > 1. 
If we will remove 'timeout' will Nginx send traffic to the next upstream > on 502 error? The "timeout" parameter corresponds to 504 returned to clients and in $upstream_status, the "error" one corresponds to 502 returned to clients and in $upstream_status. Accordingly, removing "timeout" won't affect anything related to 502. It will, however, stop nginx from trying next upstream servers on timeouts, also known as 504. > 2. More general question related the the Q.1. When Nginx interpret reply as > 502 - on timeout, if yes - on which one? We have a lot of timeouts defined > in the main config: > ---- > client_header_timeout 30s; > client_body_timeout 30s; > send_timeout 65s; > > proxy_connect_timeout 5s; > proxy_send_timeout 65s; > proxy_read_timeout 65s; > ---- Quoting the docs (http://nginx.org/r/proxy_next_upstream): timeout a timeout has occurred while establishing a connection with the server, passing a request to it, or reading the response header; That is, this implies timeouts when working with upstream server. Relevant configuration directives are proxy_connect_timeout, proxy_send_timeout, and proxy_read_timeout. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Aug 11 12:25:45 2020 From: nginx-forum at forum.nginx.org (anish10dec) Date: Tue, 11 Aug 2020 08:25:45 -0400 Subject: Request Time in Nginx Log as always 0.000 for HIT Request In-Reply-To: <20200803210710.GT12747@mdounin.ru> References: <20200803210710.GT12747@mdounin.ru> Message-ID: Thanks Maxim for the explanation. Is there a way to figure out how much time Nginx took to deliver the files to the end user. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288938,289054#msg-289054 From nginx-forum at forum.nginx.org Tue Aug 11 12:39:54 2020 From: nginx-forum at forum.nginx.org (izhocell) Date: Tue, 11 Aug 2020 08:39:54 -0400 Subject: Nginx reverse proxy issue long request Message-ID: <463abc2c39f6f51a98bd73fce9c8f8c1.NginxMailingListEnglish@forum.nginx.org> I'm stuck on a problem for a long time now with two nginxs server which the first is acting as a reverse proxy and the second as the backend server. 
Here is my design : Client made a GET request on HTTP address from internet Reverse Proxy Handle it and reverse it to Backend server Backend server handle it and made a SQL request to an database server SQL request run while 15min (900MB of Data returned) PHP-FPM on Backend server will compress the datas and send it back to reverse proxy Reverse proxy get back the data and give them back to the client This design is working well for small SQL request, but as the datas made too much time to came back to reverse proxy, the connection between reverse proxy and cliend beeing closed ( I guess ) and the reverse is obliged to initiate a new one itself When I made the request from a local client to the local HTTP address of my reverse proxy it works well Here is my nginx config Reverse Proxy : server { listen 80 deferred; server_name FQDN; if ($request_method !~ ^(GET|HEAD|POST|PUT)$ ) { return 405; } access_log /var/log/nginx/qa.access.log; error_log /var/log/nginx/qa.error.log debug; location /ws/ { proxy_pass http://IP:8080/; proxy_redirect default; proxy_http_version 1.1; proxy_set_header Host $host; #proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_connect_timeout 1200s; proxy_send_timeout 1200s; proxy_read_timeout 1200s; send_timeout 1200s; #proxy_connect_timeout 300s; #proxy_send_timeout 300s; add_header Front-End_Https on; } location / { return 444; } } BACKEND server: server { listen 8080 deferred; root /home/Websites/ws2.5/web; index index.html index.htm index.nginx-debian.html index.php; server_name NAME_BACKEND; location / { try_files $uri /index.php$is_args$args; } location ~ ^/index\.php(/|$) { fastcgi_pass unix:/var/run/php/php7.2-fpm.sock; #fastcgi_read_timeout 300; fastcgi_send_timeout 1200s; fastcgi_read_timeout 1200s; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; fastcgi_param DOCUMENT_ROOT $realpath_root; } location ~ \.php$ { return 404; } error_log /var/log/nginx/ws2.5_error.log debug; access_log /var/log/nginx/ws2.5_access.log; location ~ /\.ht { deny all; } } When it works, here is what I can see in log : BACKEND - [10/Aug/2020:17:40:22 +0200] "GET /me/patents/ HTTP/1.1" 200 25441300 "-" "RestSharp/105.2.3.0" REVERSE PROXY - [10/Aug/2020:17:41:05 +0200] "GET /ws/me/patents/ HTTP/1.1" 200 25441067 "-" "RestSharp/105.2.3.0" And here is what it returned when it doesn't work : BACKEND - [10/Aug/2020:16:28:39 +0200] "GET /me/patents/ HTTP/1.1" 200 25444257 "-" "RestSharp/105.2.3.0" REVERSE PROXY - [10/Aug/2020:16:29:39 +0200] "GET /ws/me/patents/ HTTP/1.1" 200 142307 "-" "RestSharp/105.2.3.0" In both case backend send the among of datas expected, but in the second we can see that the reverse proxy receive less than it sent by BACKEND I'm totally lost with this Could you help me ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289055,289055#msg-289055 From mdounin at mdounin.ru Tue Aug 11 15:11:46 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Aug 2020 18:11:46 +0300 Subject: nginx-1.19.2 Message-ID: <20200811151146.GT12747@mdounin.ru> Changes with nginx 1.19.2 11 Aug 2020 *) Change: now nginx starts closing keepalive connections before all free worker connections are exhausted, and logs a warning about this to the error log. *) Change: optimization of client request body reading when using chunked transfer encoding. *) Bugfix: memory leak if the "ssl_ocsp" directive was used. 
*) Bugfix: "zero size buf in output" alerts might appear in logs if a FastCGI server returned an incorrect response; the bug had appeared in 1.19.1. *) Bugfix: a segmentation fault might occur in a worker process if different large_client_header_buffers sizes were used in different virtual servers. *) Bugfix: SSL shutdown might not work. *) Bugfix: "SSL_shutdown() failed (SSL: ... bad write retry)" messages might appear in logs. *) Bugfix: in the ngx_http_slice_module. *) Bugfix: in the ngx_http_xslt_filter_module. -- Maxim Dounin http://nginx.org/ From xeioex at nginx.com Tue Aug 11 17:27:36 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 11 Aug 2020 20:27:36 +0300 Subject: njs-0.4.3 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript specifications. Notable new features: - querystring module. : var qs = require('querystring'); : : function fix_args(args) { :???? args = qs.parse(args); : :???? args.t2 = args.t; :???? delete args.t; : :???? return qs.stringify(args); : } : : fix_args("t=1&v=%41%42") -> "v=AB&t2=1" - TextDecoder/TextEncoder. : >> (new TextDecoder()).decode(new Uint8Array([206,177,206,178])) : '??' You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs - Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: ? http://nginx.org/en/docs/njs/typescript.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.4.3????????????????????????????????????????????????????? 11 Aug 2020 ??? Core: ??? *) Feature: added Query String module. ??? *) Feature: improved fs.mkdir() to support recursive directory creation. ?????? Thanks to Artem S. Povalyukhin. ??? *) Feature: improved fs.rmdir() to support recursive directory removal. ?????? Thanks to Artem S. Povalyukhin. ??? *) Feature: introduced UTF-8 decoder according to WHATWG encoding spec. ??? *) Feature: added TextEncoder/TextDecoder implementation. ??? *) Bugfix: fixed parsing return statement without semicolon. ??? *) Bugfix: fixed njs_number_to_int32() for big-endian platforms. ??? *) Bugfix: fixed unit test on big-endian platforms. ??? *) Bugfix: fixed regexp-literals parsing with '=' characters. ??? *) Bugfix: fixed pre/post increment/decrement in assignment operations. From nginx-forum at forum.nginx.org Wed Aug 12 06:16:50 2020 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Wed, 12 Aug 2020 02:16:50 -0400 Subject: Nginx reverse proxy redirect Message-ID: Hi guys, I have a Nginx reverse proxy. How can I redirect it to the real server URL when I download mp3 files in the reverse proxy. For example: normal reverse proxy request goes to backend node, but If the url contains mp3, it redirects to another server. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289072,289072#msg-289072 From nginx-forum at forum.nginx.org Thu Aug 13 05:32:18 2020 From: nginx-forum at forum.nginx.org (RekGRpth) Date: Thu, 13 Aug 2020 01:32:18 -0400 Subject: Help to find error Message-ID: Core was generated by `nginx:'. Program terminated with signal SIGSEGV, Segmentation fault. 
#0 0x0000555c3ace3b7e in ngx_rbtree_min (node=0x0, sentinel=0x555c3ae2f720 ) at src/core/ngx_rbtree.h:76 76 while (node->left != sentinel) { [Current thread is 1 (LWP 8)] (gdb) bt #0 0x0000555c3ace3b7e in ngx_rbtree_min (node=0x0, sentinel=0x555c3ae2f720 ) at src/core/ngx_rbtree.h:76 #1 0x0000555c3ace3c87 in ngx_event_expire_timers () at src/event/ngx_event_timer.c:68 #2 0x0000555c3ace1118 in ngx_process_events_and_timers (cycle=0x555c3bd46650) at src/event/ngx_event.c:266 #3 0x0000555c3acf4329 in ngx_worker_process_cycle (cycle=0x555c3bd46650, data=0x0) at src/os/unix/ngx_process_cycle.c:812 #4 0x0000555c3aceff2d in ngx_spawn_process (cycle=0x555c3bd46650, proc=0x555c3acf41fe , data=0x0, name=0x555c3addb9cd "worker process", respawn=-3) at src/os/unix/ngx_process.c:199 #5 0x0000555c3acf2d35 in ngx_start_worker_processes (cycle=0x555c3bd46650, n=8, type=-3) at src/os/unix/ngx_process_cycle.c:388 #6 0x0000555c3acf1fe8 in ngx_master_process_cycle (cycle=0x555c3bd46650) at src/os/unix/ngx_process_cycle.c:136 #7 0x0000555c3aca4f77 in main (argc=1, argv=0x7ffcd3346468) at src/core/nginx.c:385 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289082,289082#msg-289082 From francis at daoine.org Thu Aug 13 08:43:58 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 13 Aug 2020 09:43:58 +0100 Subject: Nginx reverse proxy redirect In-Reply-To: References: Message-ID: <20200813084358.GE20939@daoine.org> On Wed, Aug 12, 2020 at 02:16:50AM -0400, Dr_tux wrote: Hi there, > I have a Nginx reverse proxy. How can I redirect it to the real server URL > when I download mp3 files in the reverse proxy. Make a location{} that handles mp3-file requests, and "return 301" (or otherwise redirect) there. > normal reverse proxy request goes to backend node, but If the url contains > mp3, it redirects to another server. You have something like location [normal reverse proxy request] { proxy_pass [somewhere] } Add location [mp3 request] { return 301 [somewhere else] } Fill in the bits in [square brackets] appropriately. Depending on the full set of requirements, "~ mp3" or "~ mp3$" might be a useful value for [mp3 request]. Good luck with it, f -- Francis Daly francis at daoine.org From mlybarger at gmail.com Thu Aug 13 10:57:59 2020 From: mlybarger at gmail.com (Mark Lybarger) Date: Thu, 13 Aug 2020 06:57:59 -0400 Subject: rewrite ssl proxy retain query string parms Message-ID: I'm using rewrite to change some tokens in the url path, and am using ssl proxy to send traffic to a downstream server. if i post to https://myhost/start/foo/213/hello, the request gets to https://client-service-host/client/service/hello/213 using the needed certificate. great. my question is, how do i retain query string parameters in this example so that if i post(or get) using query strings, they get also used? https://myhost/start/foo/213/hello?name=world https://myhost/start/foo/213/hello?name=world&greet=full thanks! location ~ /start/(.*)/(.*)/hello { # $1 is used to pick which cert to use. removed by proxy pass. rewrite /start/(.*)/(.*)/(.*)? 
/client/service/$1/$3/$2 ; } location /client/service/foo/ { proxy_buffering off; proxy_cache off; proxy_ssl_certificate /etc/ssl/certs/client-service-foo-cert.pem; proxy_ssl_certificate_key /etc/ssl/certs/client-service-foo.key; proxy_pass https://client-service-host/client/service/; proxy_ssl_session_reuse on; proxy_set_header X-Proxy true; proxy_set_header Host $proxy_host; proxy_ssl_server_name on; proxy_set_header X-Real-IP $remote_addr; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Aug 13 13:06:45 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Aug 2020 16:06:45 +0300 Subject: Help to find error In-Reply-To: References: Message-ID: <20200813130645.GZ12747@mdounin.ru> Hello! On Thu, Aug 13, 2020 at 01:32:18AM -0400, RekGRpth wrote: > Core was generated by `nginx:'. > Program terminated with signal SIGSEGV, Segmentation fault. > #0 0x0000555c3ace3b7e in ngx_rbtree_min (node=0x0, sentinel=0x555c3ae2f720 > ) at src/core/ngx_rbtree.h:76 > 76 while (node->left != sentinel) { > [Current thread is 1 (LWP 8)] > (gdb) bt > #0 0x0000555c3ace3b7e in ngx_rbtree_min (node=0x0, sentinel=0x555c3ae2f720 > ) at src/core/ngx_rbtree.h:76 > #1 0x0000555c3ace3c87 in ngx_event_expire_timers () at > src/event/ngx_event_timer.c:68 > #2 0x0000555c3ace1118 in ngx_process_events_and_timers > (cycle=0x555c3bd46650) at src/event/ngx_event.c:266 > #3 0x0000555c3acf4329 in ngx_worker_process_cycle (cycle=0x555c3bd46650, > data=0x0) at src/os/unix/ngx_process_cycle.c:812 > #4 0x0000555c3aceff2d in ngx_spawn_process (cycle=0x555c3bd46650, > proc=0x555c3acf41fe , data=0x0, > name=0x555c3addb9cd "worker process", respawn=-3) at > src/os/unix/ngx_process.c:199 > #5 0x0000555c3acf2d35 in ngx_start_worker_processes (cycle=0x555c3bd46650, > n=8, type=-3) at src/os/unix/ngx_process_cycle.c:388 > #6 0x0000555c3acf1fe8 in ngx_master_process_cycle (cycle=0x555c3bd46650) at > src/os/unix/ngx_process_cycle.c:136 > #7 0x0000555c3aca4f77 in main (argc=1, argv=0x7ffcd3346468) at > src/core/nginx.c:385 The backtrace suggests that the timer tree is corrupted. In practice this usually means a bug elsewhere, often in a 3rd party module. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Aug 13 13:10:22 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Thu, 13 Aug 2020 09:10:22 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) Message-ID: <87b1e98c6aedb2a14af88c082d6f2f81.NginxMailingListEnglish@forum.nginx.org> Good day. Todays morning we've updated one of our nginx servers to the latest version (1.19.2 from 1.19.1). Since then our log contains "SSL_shutdown() failed (SSL: ... bad write retry)" messages which i think you'd fixed in this version (i didn't seen this messages before as far as i remember). Should i create a proper bug report with more info or you know about this? Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289087#msg-289087 From mdounin at mdounin.ru Thu Aug 13 13:32:11 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Aug 2020 16:32:11 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <87b1e98c6aedb2a14af88c082d6f2f81.NginxMailingListEnglish@forum.nginx.org> References: <87b1e98c6aedb2a14af88c082d6f2f81.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200813133211.GB12747@mdounin.ru> Hello! On Thu, Aug 13, 2020 at 09:10:22AM -0400, vergil wrote: > Good day. 
> > Todays morning we've updated one of our nginx servers to the latest version > (1.19.2 from 1.19.1). > > Since then our log contains "SSL_shutdown() failed (SSL: ... bad write > retry)" messages which i think you'd fixed in this version (i didn't seen > this messages before as far as i remember). > > Should i create a proper bug report with more info or you know about this? Your report is the first one. It would be helpful if you'll be able to provide more details, including "nginx -V" output, full error log messages seen and relevant configuration details. A debugging log would be ideal (http://nginx.org/en/docs/debugging_log.html). -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Aug 13 13:44:22 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Aug 2020 16:44:22 +0300 Subject: rewrite ssl proxy retain query string parms In-Reply-To: References: Message-ID: <20200813134422.GC12747@mdounin.ru> Hello! On Thu, Aug 13, 2020 at 06:57:59AM -0400, Mark Lybarger wrote: > I'm using rewrite to change some tokens in the url path, and am using ssl > proxy to send traffic to a downstream server. > > if i post to https://myhost/start/foo/213/hello, the request gets to > https://client-service-host/client/service/hello/213 using the needed > certificate. great. > > my question is, how do i retain query string parameters in this example so > that if i post(or get) using query strings, they get also used? > > https://myhost/start/foo/213/hello?name=world > https://myhost/start/foo/213/hello?name=world&greet=full > > thanks! > > location ~ /start/(.*)/(.*)/hello { > # $1 is used to pick which cert to use. removed by proxy pass. > rewrite /start/(.*)/(.*)/(.*)? /client/service/$1/$3/$2 ; > } The configuration in question does not touch any query string parameters, so they are retained by default. Note that the rewrite directive retains request arguments by default (and you can use a trailing '?' to remove them, see http://nginx.org/r/rewrite). -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Aug 13 14:01:38 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Thu, 13 Aug 2020 10:01:38 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200813133211.GB12747@mdounin.ru> References: <20200813133211.GB12747@mdounin.ru> Message-ID: <54c8a40c4db2bb74dfa31b3783b1aa96.NginxMailingListEnglish@forum.nginx.org> Of course! 
nginx version: nginx/1.19.2 built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) built with OpenSSL 1.0.2g 1 Mar 2016 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' There's two different messages in the logs (http/2 are more rare): 2020/08/13 16:53:27 [crit] 2466#2466: *743603 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while closing request, client: XXX.XXX.XXX.XXX, server: XXX.XXX.XXX.XXX:443 2020/08/13 16:34:31 [crit] 2459#2459: *421699 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while processing HTTP/2 connection, client: XXX.XXX.XXX.XXX, server: XXX.XXX.XXX.XXX:443 I'll try to gather debug log with corresponding errors and update this topic (i'll also gather configuration settings) Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289092#msg-289092 From nginx-forum at forum.nginx.org Thu Aug 13 15:39:36 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Thu, 13 Aug 2020 11:39:36 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200813133211.GB12747@mdounin.ru> References: <20200813133211.GB12747@mdounin.ru> Message-ID: This one was hard to catch. I've captured one error with 30 seconds delta before and after the event. Where can i attach log file for you? There's 400K messages, so i cannot simple put it here. Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289093#msg-289093 From mdounin at mdounin.ru Thu Aug 13 15:42:06 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Aug 2020 18:42:06 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <54c8a40c4db2bb74dfa31b3783b1aa96.NginxMailingListEnglish@forum.nginx.org> References: <20200813133211.GB12747@mdounin.ru> <54c8a40c4db2bb74dfa31b3783b1aa96.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200813154206.GD12747@mdounin.ru> Hello! On Thu, Aug 13, 2020 at 10:01:38AM -0400, vergil wrote: > Of course! 
> > nginx version: nginx/1.19.2 > built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) > built with OpenSSL 1.0.2g 1 Mar 2016 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx > --with-compat --with-file-aio --with-threads --with-http_addition_module > --with-http_auth_request_module --with-http_dav_module > --with-http_flv_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_mp4_module > --with-http_random_index_module --with-http_realip_module > --with-http_secure_link_module --with-http_slice_module > --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module > --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream > --with-stream_realip_module --with-stream_ssl_module > --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fPIE > -fstack-protector-strong -Wformat -Werror=format-security > -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE > -pie -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' > > > There's two different messages in the logs (http/2 are more rare): > > 2020/08/13 16:53:27 [crit] 2466#2466: *743603 SSL_shutdown() failed (SSL: > error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while > closing request, client: XXX.XXX.XXX.XXX, server: XXX.XXX.XXX.XXX:443 > > 2020/08/13 16:34:31 [crit] 2459#2459: *421699 SSL_shutdown() failed (SSL: > error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while > processing HTTP/2 connection, client: XXX.XXX.XXX.XXX, server: > XXX.XXX.XXX.XXX:443 Thanks. Is there anything related to these connections on other logging levels, such as "info"? (Just in case, connection identifiers are "*743603" and "*421699" in the above log messages). > I'll try to gather debug log with corresponding errors and update this topic > (i'll also gather configuration settings) Yes, please, debug log would be really helpful. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Aug 13 15:49:59 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Aug 2020 18:49:59 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: References: <20200813133211.GB12747@mdounin.ru> Message-ID: <20200813154959.GE12747@mdounin.ru> Hello! On Thu, Aug 13, 2020 at 11:39:36AM -0400, vergil wrote: > This one was hard to catch. > > I've captured one error with 30 seconds delta before and after the event. > Where can i attach log file for you? There's 400K messages, so i cannot > simple put it here. Attaching the log to the message into the mailing list should work, but I'm not sure it's supported by the (obsolete) forum interface you are using. If not, you may put the log at a convinient place and provide a link here, or attach it to a ticket on trac.nginx.org, or email to me privetely. 
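For reference, the debug log discussed above only appears if nginx was built with --with-debug and the "debug" level is actually in effect where the error occurs; a minimal sketch with placeholder paths is shown below (note that redefining error_log at a lower level without repeating the level silently disables debug logging there):

error_log /var/log/nginx/error.log debug;

http {
    server {
        # the "debug" level has to be repeated here; an error_log
        # directive without it would turn debug logging off for
        # this server even though it is enabled globally
        error_log /var/log/nginx/server-error.log debug;
    }
}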
-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Aug 13 16:11:54 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Thu, 13 Aug 2020 12:11:54 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200813154959.GE12747@mdounin.ru> References: <20200813154959.GE12747@mdounin.ru> Message-ID: <00a4a0c7d7706b1bb2eb9df4b9defbaf.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Aug 13, 2020 at 11:39:36AM -0400, vergil wrote: > > > This one was hard to catch. > > > > I've captured one error with 30 seconds delta before and after the > event. > > Where can i attach log file for you? There's 400K messages, so i > cannot > > simple put it here. > > Attaching the log to the message into the mailing list should > work, but I'm not sure it's supported by the (obsolete) forum > interface you are using. If not, you may put the log at a > convinient place and provide a link here, or attach it to a > ticket on trac.nginx.org, or email to me privetely. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I've attached log file to our S3 public storage. You can download it through this link: https://drive-public-eu.s3.eu-central-1.amazonaws.com/nginx/nginx-debug.csv A note: this is a CSV format from our logging system. I can try to extract logs in original format if you need. Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289096#msg-289096 From mdounin at mdounin.ru Thu Aug 13 16:43:15 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Aug 2020 19:43:15 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <00a4a0c7d7706b1bb2eb9df4b9defbaf.NginxMailingListEnglish@forum.nginx.org> References: <20200813154959.GE12747@mdounin.ru> <00a4a0c7d7706b1bb2eb9df4b9defbaf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200813164315.GF12747@mdounin.ru> Hello! On Thu, Aug 13, 2020 at 12:11:54PM -0400, vergil wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Thu, Aug 13, 2020 at 11:39:36AM -0400, vergil wrote: > > > > > This one was hard to catch. > > > > > > I've captured one error with 30 seconds delta before and after the > > event. > > > Where can i attach log file for you? There's 400K messages, so i > > cannot > > > simple put it here. > > > > Attaching the log to the message into the mailing list should > > work, but I'm not sure it's supported by the (obsolete) forum > > interface you are using. If not, you may put the log at a > > convinient place and provide a link here, or attach it to a > > ticket on trac.nginx.org, or email to me privetely. > > I've attached log file to our S3 public storage. You can download it through > this link: > https://drive-public-eu.s3.eu-central-1.amazonaws.com/nginx/nginx-debug.csv > > A note: this is a CSV format from our logging system. I can try to extract > logs in original format if you need. 
Thanks, but this doesn't seem to contain anything related to the SSL_shutdown() except the message itself: "2020-08-13T15:19:03.279Z","7","shmtx lock", "2020-08-13T15:19:03.279Z","7","shmtx lock", "2020-08-13T15:19:03.279Z","7","timer delta: 0", "2020-08-13T15:19:03.280Z","2","SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while closing request, client: XXX.XXX.XXX.XXX, server: XXX.XXX.XXX.XXX:443","9140" "2020-08-13T15:19:03.280Z","7","epoll: fd:322 ev:0005 d:00007F0A0FCDDEB0", "2020-08-13T15:19:03.280Z","7","epoll: fd:54 ev:0004 d:00007F0A0FCDFAC9", And nothing else in the log saying "SSL_shutdow()", while there should be a debug messages like "SSL_shutdown: -1" and "SSL_get_error: ..." right before the message, and nothing at all related to the connection 9140. It looks like the debug logging is only enabled on the global level, but disabled at http or server level. Please see the part starting at "Note that redefining the log without also specifying the debug level will disable the debugging log" in the "A debugging log" article (http://nginx.org/en/docs/debugging_log.html). -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Aug 13 17:04:42 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Thu, 13 Aug 2020 13:04:42 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200813164315.GF12747@mdounin.ru> References: <20200813164315.GF12747@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Aug 13, 2020 at 12:11:54PM -0400, vergil wrote: > > > Maxim Dounin Wrote: > > ------------------------------------------------------- > > > Hello! > > > > > > On Thu, Aug 13, 2020 at 11:39:36AM -0400, vergil wrote: > > > > > > > This one was hard to catch. > > > > > > > > I've captured one error with 30 seconds delta before and after > the > > > event. > > > > Where can i attach log file for you? There's 400K messages, so i > > > cannot > > > > simple put it here. > > > > > > Attaching the log to the message into the mailing list should > > > work, but I'm not sure it's supported by the (obsolete) forum > > > interface you are using. If not, you may put the log at a > > > convinient place and provide a link here, or attach it to a > > > ticket on trac.nginx.org, or email to me privetely. > > > > I've attached log file to our S3 public storage. You can download it > through > > this link: > > > https://drive-public-eu.s3.eu-central-1.amazonaws.com/nginx/nginx-debu > g.csv > > > > A note: this is a CSV format from our logging system. I can try to > extract > > logs in original format if you need. > > Thanks, but this doesn't seem to contain anything related to the > SSL_shutdown() except the message itself: > > "2020-08-13T15:19:03.279Z","7","shmtx lock", > "2020-08-13T15:19:03.279Z","7","shmtx lock", > "2020-08-13T15:19:03.279Z","7","timer delta: 0", > "2020-08-13T15:19:03.280Z","2","SSL_shutdown() failed (SSL: > error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while > closing request, client: XXX.XXX.XXX.XXX, server: > XXX.XXX.XXX.XXX:443","9140" > "2020-08-13T15:19:03.280Z","7","epoll: fd:322 ev:0005 > d:00007F0A0FCDDEB0", > "2020-08-13T15:19:03.280Z","7","epoll: fd:54 ev:0004 > d:00007F0A0FCDFAC9", > > And nothing else in the log saying "SSL_shutdow()", while there > should be a debug messages like "SSL_shutdown: -1" and > "SSL_get_error: ..." 
right before the message, and nothing at all > related to the connection 9140. > > It looks like the debug logging is only enabled on the global > level, but disabled at http or server level. Please see the part > starting at "Note that redefining the log without also specifying > the debug level will disable the debugging log" in the "A > debugging log" article > (http://nginx.org/en/docs/debugging_log.html). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Oh, good catch. I completely forgot about this. My bad. I will re-upload the logs when i'd gather new information. Sorry for this. Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289098#msg-289098 From nginx-forum at forum.nginx.org Thu Aug 13 19:04:47 2020 From: nginx-forum at forum.nginx.org (nathanpgibson) Date: Thu, 13 Aug 2020 15:04:47 -0400 Subject: Connection timeout on SSL with shared hosting Message-ID: <890e53d40b86b429b03b4886c31b941e.NginxMailingListEnglish@forum.nginx.org> Hi All, Newbie question. I posted this on Stack Overflow but haven't gotten any replies yet. https://stackoverflow.com/questions/63391424/why-do-i-get-connection-timeout-on-ssl-even-though-nginx-is-listening-and-firewa Most/many visitors to my site https://example.org get a connection timeout. Some visitors get through, possibly ones redirected from http://example.org or those who've previously visited the site. I'm trying to determine if this is a firewall issue or an nginx configuration issue. Firewall I'm using UFW as a firewall, which has the following rules: To Action From -- ------ ---- SSH ALLOW Anywhere Nginx Full ALLOW Anywhere 80/tcp ALLOW Anywhere 443/tcp ALLOW Anywhere SSH (v6) ALLOW Anywhere (v6) Nginx Full (v6) ALLOW Anywhere (v6) 80/tcp (v6) ALLOW Anywhere (v6) 443/tcp (v6) ALLOW Anywhere (v6) I could give some relevant rules from iptables if anyone needs that, but I'd need some direction on what to look for. For sudo netstat -anop | grep LISTEN | grep ':443' I get tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 120907/nginx: worke off (0.00/0/0) tcp6 0 0 :::443 :::* LISTEN 120907/nginx: worke off (0.00/0/0) Not sure what "worke off" means. nginx It's a virtual host with the server name myservername.com which serves up two websites, example.org and example.com/directory. Example.org points to a docker container running eXist-db. Example.com/directory is serving up a directory on localhost:8080 proxied from another server where example.com lives. Example.com/directory is running smoothly on https when I access it in the browser -- I presume this is because it actually talks to the example.com host over http. Example.org and myservername.com both have certs from let's encrypt generated by certbot. When I try nmap from my local machine I get some results I can't explain. 
Notice the discrepancy between ports 80 and ports 443 and between IPv4 and IPv6 $ nmap -A -T4 -p443 example.org 443/tcp filtered https $ nmap -A -T4 -p443 my.server.ip.address 443/tcp filtered https $ nmap -A -T4 -p443 -6 my:server:ip::v6:address 443/tcp open ssl/http nginx 1.10.3 $ nmap -A -T4 -p80 example.org 80/tcp open http nginx 1.10.3 $ nmap -A -T4 -p80 my.server.ip.address 80/tcp open http nginx 1.10.3 My nginx.conf is user www-data; worker_processes auto; pid /run/nginx.pid; include /etc/nginx/modules-enabled/*.conf; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## client_max_body_size 50M; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } and my nginx server blocks: server { listen 80 default_server; listen [::]:80 default_server; server_name _ myservername.com; return 301 https://myservername.com$request_uri; } server { # SSL configuration # listen 443 ssl default_server; listen [::]:443 ssl default_server; server_name _ myservername.com; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://localhost:8080; } ssl_certificate /etc/letsencrypt/live/myservername.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/myservername.com/privkey.pem; } server { listen 80; listen [::]:80; server_name example.com www.example.com; gzip off; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://localhost:8080; } } server { listen 80; listen [::]:80; server_name example.org www.example.org; return 301 https://example.org$request_uri; } server { # SSL configuration # listen 443 ssl; listen [::]:443 ssl; server_name example.org www.example.org; gzip off; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://docker.container.ip.address:port/exist/apps/example/; } location /workshop2020/ { return 302 http://example.org/forum2020/; } location /exist/apps/example/ { rewrite ^/exist/apps/example/(.*)$ /$1; } ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # managed by Certbot } Very grateful for any help!! 
Nathan Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289099,289099#msg-289099 From teward at thomas-ward.net Thu Aug 13 19:33:42 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 13 Aug 2020 15:33:42 -0400 Subject: Connection timeout on SSL with shared hosting In-Reply-To: <890e53d40b86b429b03b4886c31b941e.NginxMailingListEnglish@forum.nginx.org> References: <890e53d40b86b429b03b4886c31b941e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <83a51afe-cef2-0475-f6ba-5e35909fecc9@thomas-ward.net> You said this is "shared hosting" - when you say "shared hosting" do you mean this is *not* a dedicated machine but one machine out of many in a shared environment? Have you tested briefly by disabling your firewall just to see if that fixes the issue? What is the backend?? You're passing everything to 8080 which suggests the backend might be having issues too. Thomas On 8/13/20 3:04 PM, nathanpgibson wrote: > Hi All, > Newbie question. I posted this on Stack Overflow but haven't gotten any > replies yet. > https://stackoverflow.com/questions/63391424/why-do-i-get-connection-timeout-on-ssl-even-though-nginx-is-listening-and-firewa > > Most/many visitors to my site https://example.org get a connection timeout. > Some visitors get through, possibly ones redirected from http://example.org > or those who've previously visited the site. > > I'm trying to determine if this is a firewall issue or an nginx > configuration issue. > > Firewall > > I'm using UFW as a firewall, which has the following rules: > > To Action From > -- ------ ---- > SSH ALLOW Anywhere > Nginx Full ALLOW Anywhere > 80/tcp ALLOW Anywhere > 443/tcp ALLOW Anywhere > SSH (v6) ALLOW Anywhere (v6) > Nginx Full (v6) ALLOW Anywhere (v6) > 80/tcp (v6) ALLOW Anywhere (v6) > 443/tcp (v6) ALLOW Anywhere (v6) > > I could give some relevant rules from iptables if anyone needs that, but I'd > need some direction on what to look for. > > For sudo netstat -anop | grep LISTEN | grep ':443' I get > > tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN > 120907/nginx: worke off (0.00/0/0) > tcp6 0 0 :::443 :::* LISTEN > 120907/nginx: worke off (0.00/0/0) > > Not sure what "worke off" means. > > nginx > > It's a virtual host with the server name myservername.com which serves up > two websites, example.org and example.com/directory. Example.org points to a > docker container running eXist-db. Example.com/directory is serving up a > directory on localhost:8080 proxied from another server where example.com > lives. Example.com/directory is running smoothly on https when I access it > in the browser -- I presume this is because it actually talks to the > example.com host over http. > > Example.org and myservername.com both have certs from let's encrypt > generated by certbot. > > When I try nmap from my local machine I get some results I can't explain. 
> Notice the discrepancy between ports 80 and ports 443 and between IPv4 and > IPv6 > > $ nmap -A -T4 -p443 example.org > 443/tcp filtered https > > $ nmap -A -T4 -p443 my.server.ip.address > 443/tcp filtered https > > $ nmap -A -T4 -p443 -6 my:server:ip::v6:address > 443/tcp open ssl/http nginx 1.10.3 > > $ nmap -A -T4 -p80 example.org > 80/tcp open http nginx 1.10.3 > > $ nmap -A -T4 -p80 my.server.ip.address > 80/tcp open http nginx 1.10.3 > > My nginx.conf is > > user www-data; > worker_processes auto; > pid /run/nginx.pid; > include /etc/nginx/modules-enabled/*.conf; > > events { > worker_connections 768; > # multi_accept on; > } > > http { > > ## > # Basic Settings > ## > > client_max_body_size 50M; > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > # server_tokens off; > > server_names_hash_bucket_size 64; > # server_name_in_redirect off; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > ## > # SSL Settings > ## > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE > ssl_prefer_server_ciphers on; > > ## > # Logging Settings > ## > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > > ## > # Gzip Settings > ## > > gzip on; > gzip_disable "msie6"; > > # gzip_vary on; > # gzip_proxied any; > # gzip_comp_level 6; > # gzip_buffers 16 8k; > # gzip_http_version 1.1; > # gzip_types text/plain text/css application/json > application/javascript text/xml application/xml application/xml+rss > text/javascript; > > ## > # Virtual Host Configs > ## > > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; > } > > and my nginx server blocks: > > server { > listen 80 default_server; > listen [::]:80 default_server; > > server_name _ myservername.com; > return 301 https://myservername.com$request_uri; > } > > server { > # SSL configuration > # > listen 443 ssl default_server; > listen [::]:443 ssl default_server; > > server_name _ myservername.com; > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_pass http://localhost:8080; > } > > ssl_certificate > /etc/letsencrypt/live/myservername.com/fullchain.pem; > ssl_certificate_key > /etc/letsencrypt/live/myservername.com/privkey.pem; > } > > server { > listen 80; > listen [::]:80; > > server_name example.com www.example.com; > > gzip off; > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_pass http://localhost:8080; > } > } > > server { > listen 80; > listen [::]:80; > > server_name example.org www.example.org; > return 301 https://example.org$request_uri; > } > > server { > > # SSL configuration > # > listen 443 ssl; > listen [::]:443 ssl; > > server_name example.org www.example.org; > > gzip off; > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_pass > http://docker.container.ip.address:port/exist/apps/example/; > } > > location /workshop2020/ { > return 302 http://example.org/forum2020/; > } > > > location /exist/apps/example/ { > rewrite ^/exist/apps/example/(.*)$ /$1; > } > > > ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # > managed by Certbot > ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # > managed by Certbot > > } > > Very grateful for any help!! 
> Nathan > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289099,289099#msg-289099 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Aug 13 20:18:31 2020 From: nginx-forum at forum.nginx.org (nathanpgibson) Date: Thu, 13 Aug 2020 16:18:31 -0400 Subject: Connection timeout on SSL with shared hosting In-Reply-To: <83a51afe-cef2-0475-f6ba-5e35909fecc9@thomas-ward.net> References: <83a51afe-cef2-0475-f6ba-5e35909fecc9@thomas-ward.net> Message-ID: <6d0b523686ea8fa77bf8d474bb642dc0.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply, Thomas. > You said this is "shared hosting" - when you say "shared hosting" do you > mean this is *not* a dedicated machine but one machine out of many in a > shared environment? Sorry, I meant virtual hosting. > Have you tested briefly by disabling your firewall just to see if that > fixes the issue? When I disable UFW I get the same nmap results. Somebody else configured the server previously, so there could be something besides UFW interfering, but I'm not sure where to check for that. > What is the backend? You're passing everything to 8080 which suggests > the backend might be having issues too. 8080 is eXist-db running in a docker container (for example.org) and standalone (for example.com). There are no issues connecting to these via http. Any thoughts what to try next? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289099,289101#msg-289101 From vbart at nginx.com Thu Aug 13 21:12:20 2020 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 14 Aug 2020 00:12:20 +0300 Subject: Unit 1.19.0 release Message-ID: <2163728.ElGaqSPkdT@vbart-laptop> Hi, I'm always happy to announce a new release of NGINX Unit, but this one's BIG. Besides the varied features and bugfixes, some breakthrough improvements were made under the hood. As you may know, Unit uses an advanced architecture that relies on dedicated processes to serve different roles in request processing. The process that handles client connections is the router. It uses asynchronous threads (one per CPU core) to accept new connections and send or receive data over already established connections in a non-blocking manner. For security and scalability, all applications run as separate processes over which you have a degree of control: https://unit.nginx.org/configuration/#process-management To talk to application processes, relay requests for actual processing, and obtain their responses, the router process uses an elaborate mechanism of inter-process communication (IPC) based on shared memory segments. The general idea is to avoid copying data between processes and minimize overhead, potentially achieving almost zero-latency application interaction. Our first implementation of this protocol used a complex algorithm to distribute requests between processes, heavily utilizing Unix socket pairs to pass synchronization control messages. In practice, this turned out rather sub-optimal due to lots of extra syscalls and overt complexity. Also, the push semantics became a serious limitation that prevented us from efficiently handling asynchronous applications. Thus, we stepped back a bit at the end of the last year to meticulously reconsider our approach to IPC, and now this tremendous work finally sees the light of day with the release of Unit version 1.19.0. 
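As a concrete illustration of the per-application process control mentioned above (the listener port, application name, paths, and limits here are made up for the example, not taken from the release notes), Unit takes its configuration as JSON through the control API:

```
{
    "listeners": {
        "*:8300": { "pass": "applications/demo" }
    },
    "applications": {
        "demo": {
            "type": "python",
            "path": "/srv/demo",
            "module": "wsgi",
            "processes": { "max": 8, "spare": 2, "idle_timeout": 20 }
        }
    }
}
```

Such a file can be pushed with, for instance, `curl -X PUT --data-binary @config.json --unix-socket /path/to/control.unit.sock http://localhost/config` (the control socket location depends on how Unit was packaged).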
Maintaining the progress achieved while working with shared memory segments, the protocol now is enhanced to bring the number of syscalls almost to zero under heavy load. We have also changed the request distribution semantics. Now, instead of pushing requests to application processes using a complex router process algorithm, we make application processes pull requests out of a shared queue anytime they're ready. This enables implementing async interfaces in applications in the most effective manner. Relying on this new approach to IPC, we shall be able to improve the performance of Go and Node.js modules in the upcoming releases, also introducing multithreading and new interfaces, such as ASGI in Python. We are obsessed over performance and will continue optimizing Unit to make it the best and brightest in every aspect. As for the other features of the release, there's an improvement in proxying: now it speaks HTTP/1.1 and accepts chunked responses from backends. Moreover, request matching rules were also upgraded to enable more complex wildcard patterns like "*/some/*/path/*.php*". Finally, we have introduced our first configuration variables. They are a small bunch at the moment, but that's to change. In a while, variables shall be sufficiently diversified and will be available in more and more options. Changes with Unit 1.19.0 13 Aug 2020 *) Feature: reworked IPC between the router process and the applications to lower latencies, increase performance, and improve scalability. *) Feature: support for an arbitrary number of wildcards in route matching patterns. *) Feature: chunked transfer encoding in proxy responses. *) Feature: basic variables support in the "pass" option. *) Feature: compatibility with PHP 8 Beta 1. Thanks to Remi Collet. *) Bugfix: the router process could crash while passing requests to an application under high load. *) Bugfix: a number of language modules failed to build on some systems; the bug had appeared in 1.18.0. *) Bugfix: time in error log messages from PHP applications could lag. *) Bugfix: reconfiguration requests could hang if an application had failed to start; the bug had appeared in 1.18.0. *) Bugfix: memory leak during reconfiguration. *) Bugfix: the daemon didn't start without language modules; the bug had appeared in 1.18.0. *) Bugfix: the router process could crash at exit. *) Bugfix: Node.js applications could crash at exit. *) Bugfix: the Ruby module could be linked against a wrong library version. Also, official packages for Fedora 32 are available now: - https://unit.nginx.org/installation/#fedora And if you'd like to know more about the features introduced recently in the previous release, see the blog posts: - NGINX Unit 1.18.0 Adds Filesystem Isolation and Other Enhancements https://www.nginx.com/blog/nginx-unit-1-18-0-now-available/ - Filesystem Isolation in NGINX Unit https://www.nginx.com/blog/filesystem-isolation-nginx-unit/ Stay tuned! wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Thu Aug 13 21:50:28 2020 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Thu, 13 Aug 2020 17:50:28 -0400 Subject: Nginx reverse proxy redirect In-Reply-To: <20200813084358.GE20939@daoine.org> References: <20200813084358.GE20939@daoine.org> Message-ID: Thank you very much for this solution. but I have 3 mp3 servers. How can I share requests equally? With 301, I can only send a request to a single ip address. Best. 
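One common way to share requests across several origins, and what the reply from Francis Daly further down in this thread also suggests, is to let nginx proxy to an upstream group instead of redirecting. A minimal sketch with placeholder addresses (not the real mp3 servers from this post):

```
upstream mp3_backends {
    # round-robin by default, so requests are shared roughly equally
    server 192.0.2.11:80;
    server 192.0.2.12:80;
    server 192.0.2.13:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://mp3_backends;
    }
}
```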
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289072,289104#msg-289104 From arut at nginx.com Fri Aug 14 09:58:15 2020 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 14 Aug 2020 12:58:15 +0300 Subject: No revalidation when using stale-while-revalidate In-Reply-To: <20200727014200.GA12747@mdounin.ru> References: <414fa7be-e6a6-9dcd-82b3-e6588e52a595@cdn77.com> <20200724023348.GY12747@mdounin.ru> <07e8cbcc-4291-517b-e91c-471b02e94baa@cdn77.com> <20200727014200.GA12747@mdounin.ru> Message-ID: <20200814095815.cwdx3mcmxtu4qjes@Romans-MacBook-Pro.local> Hi, On Mon, Jul 27, 2020 at 04:42:00AM +0300, Maxim Dounin wrote: > Hello! > > On Fri, Jul 24, 2020 at 03:21:31PM +0200, Adam Volek wrote: > > > On 24. 07. 20 4:33, Maxim Dounin wrote: > > > As long as the response returned isn't cacheable (either > > > as specified in the response Cache-Control / Expires > > > headers, or per proxy_cache_valid), nginx won't put > > > the response into cache and will continue serving previously > > > cached response till stale-while-revalidate timeout expires. > > > > > > Most likely "specific status code" in your tests in fact means > > > responses returned by your upstream server without Cache-Control > > > headers, and hence not cached by nginx. > > > > This is not the case as far as I can tell. In our tests, the upstream server was set up to send these two responses, the 204 first, and then then the 404: > > > > HTTP/1.1 204 No Content > > Date: Fri, 24 Jul 2020 11:32:33 GMT > > Connection: keep-alive > > cache-control: max-age=5, stale-while-revalidate=10 > > > > HTTP/1.1 404 Not Found > > Date: Fri, 24 Jul 2020 11:32:35 GMT > > Content-Type: text/plain > > Connection: close > > cache-control: max-age=5, stale-while-revalidate=10 > > > > In this scenario, nginx returns fresh 204 for five seconds and then it returns stale 204 for ten seconds even though it's attempting revalidation according > > to access log at the upstream server. If we send the following 410 response instead of 404 however, nginx behaves as we would expect: it returns the fresh > > 204 for five seconds, then it revalidates it almost instantly and starts returning the fresh 410: > > > > HTTP/1.1 410 Gone > > Date: Fri, 24 Jul 2020 11:41:56 GMT > > Content-Type: text/plain > > Connection: close > > cache-control: max-age=5, stale-while-revalidate=10 > > You are right, this seems to be an incorrect behaviour of > stale-while-revalidate / stale-if-error handling. > > Internally, stale-if-error (and stale-while-revalidate) currently > behave as if "proxy_cache_use_stale" was set with all possible > flags (http://nginx.org/r/proxy_cache_use_stale) when handling > upstream server responses. Notably this includes http_403, > http_404, and http_429 flags, and this causes the effect you > observe. > > This probably should be fixed. 
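For reference, the directive referred to here can also be set explicitly; the snippet below only illustrates the difference between "all possible flags" and a narrower list (the upstream name and cache zone are placeholders), and is not a recommended setting:

```
location / {
    proxy_pass http://backend;
    proxy_cache one;

    # narrower list: 403/404 upstream responses are not masked by stale content
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
}
```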
> > Just in case, the following configuration can be used to reproduce > the issue within nginx itself: > > proxy_cache_path cache keys_zone=one:1m; > > server { > listen 8080; > > location / { > proxy_pass http://127.0.0.1:8081; > proxy_cache one; > add_header X-Cache-Status $upstream_cache_status always; > } > } > > server { > listen 8081; > > location / { > add_header cache-control "max-age=5, stale-while-revalidate=10" > always; > > if ($connection = "3") { > return 204; > } > > return 404; > } > } The fix was committed: https://hg.nginx.org/nginx/rev/7015f26aef90 -- Roman Arutyunyan From francis at daoine.org Fri Aug 14 13:34:05 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Aug 2020 14:34:05 +0100 Subject: Nginx reverse proxy redirect In-Reply-To: References: <20200813084358.GE20939@daoine.org> Message-ID: <20200814133404.GF20939@daoine.org> On Thu, Aug 13, 2020 at 05:50:28PM -0400, Dr_tux wrote: Hi there, > Thank you very much for this solution. but I have 3 mp3 servers. How can I > share requests equally? With 301, I can only send a request to a single ip > address. With 301, you invite the client to send a request to a single hostname. You can set DNS up to make that be as many IP addresses as you like, for example. If you want within-nginx to issue 301 redirects to different hostnames, you can use various ways, including perhaps http://nginx.org/r/split_clients But you might be happier at that stage, reverse-proxying the mp3 requests rather than redirecting them -- set an "upstream" (http://nginx.org/r/upstream) with your three "server"s, and proxy_pass to the upstream -- that can also take care of some servers not always being available, if that is a concern here. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Aug 14 14:34:49 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Fri, 14 Aug 2020 10:34:49 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200813164315.GF12747@mdounin.ru> References: <20200813164315.GF12747@mdounin.ru> Message-ID: <163894ff475840bb5be2a86f12258f31.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Aug 13, 2020 at 12:11:54PM -0400, vergil wrote: > > > Maxim Dounin Wrote: > > ------------------------------------------------------- > > > Hello! > > > > > > On Thu, Aug 13, 2020 at 11:39:36AM -0400, vergil wrote: > > > > > > > This one was hard to catch. > > > > > > > > I've captured one error with 30 seconds delta before and after > the > > > event. > > > > Where can i attach log file for you? There's 400K messages, so i > > > cannot > > > > simple put it here. > > > > > > Attaching the log to the message into the mailing list should > > > work, but I'm not sure it's supported by the (obsolete) forum > > > interface you are using. If not, you may put the log at a > > > convinient place and provide a link here, or attach it to a > > > ticket on trac.nginx.org, or email to me privetely. > > > > I've attached log file to our S3 public storage. You can download it > through > > this link: > > > https://drive-public-eu.s3.eu-central-1.amazonaws.com/nginx/nginx-debu > g.csv > > > > A note: this is a CSV format from our logging system. I can try to > extract > > logs in original format if you need. 
> > Thanks, but this doesn't seem to contain anything related to the > SSL_shutdown() except the message itself: > > "2020-08-13T15:19:03.279Z","7","shmtx lock", > "2020-08-13T15:19:03.279Z","7","shmtx lock", > "2020-08-13T15:19:03.279Z","7","timer delta: 0", > "2020-08-13T15:19:03.280Z","2","SSL_shutdown() failed (SSL: > error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while > closing request, client: XXX.XXX.XXX.XXX, server: > XXX.XXX.XXX.XXX:443","9140" > "2020-08-13T15:19:03.280Z","7","epoll: fd:322 ev:0005 > d:00007F0A0FCDDEB0", > "2020-08-13T15:19:03.280Z","7","epoll: fd:54 ev:0004 > d:00007F0A0FCDFAC9", > > And nothing else in the log saying "SSL_shutdow()", while there > should be a debug messages like "SSL_shutdown: -1" and > "SSL_get_error: ..." right before the message, and nothing at all > related to the connection 9140. > > It looks like the debug logging is only enabled on the global > level, but disabled at http or server level. Please see the part > starting at "Note that redefining the log without also specifying > the debug level will disable the debugging log" in the "A > debugging log" article > (http://nginx.org/en/docs/debugging_log.html). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx So... Bad news: i cannot capture the event when full debug enabled. Server cannot handle the load and our service partially down at that time. What can i say is that this problem reveal itself on all servers with new nginx version. I'll send you the link privately where you can get our (some-redacted) config files. Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289112#msg-289112 From mdounin at mdounin.ru Sat Aug 15 22:54:46 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 16 Aug 2020 01:54:46 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <163894ff475840bb5be2a86f12258f31.NginxMailingListEnglish@forum.nginx.org> References: <20200813164315.GF12747@mdounin.ru> <163894ff475840bb5be2a86f12258f31.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200815225446.GK12747@mdounin.ru> Hello! On Fri, Aug 14, 2020 at 10:34:49AM -0400, vergil wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Thu, Aug 13, 2020 at 12:11:54PM -0400, vergil wrote: > > > > > Maxim Dounin Wrote: > > > ------------------------------------------------------- > > > > Hello! > > > > > > > > On Thu, Aug 13, 2020 at 11:39:36AM -0400, vergil wrote: > > > > > > > > > This one was hard to catch. > > > > > > > > > > I've captured one error with 30 seconds delta before and after > > the > > > > event. > > > > > Where can i attach log file for you? There's 400K messages, so i > > > > cannot > > > > > simple put it here. > > > > > > > > Attaching the log to the message into the mailing list should > > > > work, but I'm not sure it's supported by the (obsolete) forum > > > > interface you are using. If not, you may put the log at a > > > > convinient place and provide a link here, or attach it to a > > > > ticket on trac.nginx.org, or email to me privetely. > > > > > > I've attached log file to our S3 public storage. You can download it > > through > > > this link: > > > > > https://drive-public-eu.s3.eu-central-1.amazonaws.com/nginx/nginx-debu > > g.csv > > > > > > A note: this is a CSV format from our logging system. 
I can try to > > extract > > > logs in original format if you need. > > > > Thanks, but this doesn't seem to contain anything related to the > > SSL_shutdown() except the message itself: > > > > "2020-08-13T15:19:03.279Z","7","shmtx lock", > > "2020-08-13T15:19:03.279Z","7","shmtx lock", > > "2020-08-13T15:19:03.279Z","7","timer delta: 0", > > "2020-08-13T15:19:03.280Z","2","SSL_shutdown() failed (SSL: > > error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while > > closing request, client: XXX.XXX.XXX.XXX, server: > > XXX.XXX.XXX.XXX:443","9140" > > "2020-08-13T15:19:03.280Z","7","epoll: fd:322 ev:0005 > > d:00007F0A0FCDDEB0", > > "2020-08-13T15:19:03.280Z","7","epoll: fd:54 ev:0004 > > d:00007F0A0FCDFAC9", > > > > And nothing else in the log saying "SSL_shutdow()", while there > > should be a debug messages like "SSL_shutdown: -1" and > > "SSL_get_error: ..." right before the message, and nothing at all > > related to the connection 9140. > > > > It looks like the debug logging is only enabled on the global > > level, but disabled at http or server level. Please see the part > > starting at "Note that redefining the log without also specifying > > the debug level will disable the debugging log" in the "A > > debugging log" article > > (http://nginx.org/en/docs/debugging_log.html). > > So... Bad news: i cannot capture the event when full debug enabled. Server > cannot handle the load and our service partially down at that time. > > What can i say is that this problem reveal itself on all servers with new > nginx version. > > I'll send you the link privately where you can get our (some-redacted) > config files. Thank you for your efforts. Just in case, it is possible to configure debug logging only for parts of the load - using the debug_connection directive with large networks in CIDR notation (http://nginx.org/r/debug_connection). It's probably not needed in this particular case, see below. I was able to reporoduce an "SSL_shutdown() failed (SSL: ... bad write retry)" error at least in one case, similar to the one previously observed with SSL_write() in https://trac.nginx.org/nginx/ticket/1194. Previously, this case wasn't causing SSL_shutdown() errors, but SSL shutdown fix introduced in nginx 1.19.2 revealed the problem. The following patch should fix this. It was discussed previously as a possible fix for other SSL_shutdown() errors fixed in 1.19.2, but wasn't commited as there were concerns it will effectively disable SSL shutdown in some unrelated cases where c->error flag is misused. Now it is more or less clear that the change is needed. Patch (it would be great if you'll be able to test if it fixes the problem for you): # HG changeset patch # User Maxim Dounin # Date 1597531639 -10800 # Sun Aug 16 01:47:19 2020 +0300 # Node ID be7a3155e00161baf7359ffa73a3a226f1e487c9 # Parent 7d46c9f56c9afe34a38bb3aea99550a2fd884280 SSL: disabled shutdown after connection errors. This fixes ""SSL_shutdown() failed (SSL: ... bad write retry)" errors as observed on the second SSL_shutdown() call after SSL shutdown fixes in 09fb2135a589 (1.19.2), notably when sending fails in ngx_http_test_expect(), similarly to ticket 1194. Note that there are some places where c->error is misused to prevent further output, such as ngx_http_v2_finalize_connection() if there are pending streams, or in filter finalization. These places seem to be extreme enough to don't care about missing shutdown though. For example, filter finalization currently prevents keepalive from being used. 
diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -2793,7 +2793,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) return NGX_OK; } - if (c->timedout) { + if (c->timedout || c->error) { mode = SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN; SSL_set_quiet_shutdown(c->ssl->connection, 1); -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sun Aug 16 17:20:31 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Sun, 16 Aug 2020 13:20:31 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200815225446.GK12747@mdounin.ru> References: <20200815225446.GK12747@mdounin.ru> Message-ID: <3a6e26baad75e2b2b3d2f6f727ac2cdd.NginxMailingListEnglish@forum.nginx.org> Great to hear. I knew about debug_connection but thought it would be too difficult to tune. We'll see how patch helps in that regard. I'll try to test it tomorrow. Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289115#msg-289115 From nginx-forum at forum.nginx.org Tue Aug 18 03:35:12 2020 From: nginx-forum at forum.nginx.org (skwok) Date: Mon, 17 Aug 2020 23:35:12 -0400 Subject: How to set geoip country code for Crimea Message-ID: <2db4a23ed2822f2f9e35e8d7c18d7791.NginxMailingListEnglish@forum.nginx.org> Hi, I'd like to use the geoip module (v1) to block Crimea from access. The syntax that I'm using is: geoip_country /usr/share/GeoIP/GeoIP.dat; map $geoip_country_code $allowed_country { default yes; IR no; } As there is no ISO3166 code for Crimea (https://dev.maxmind.com/geoip/legacy/codes/iso3166/), I contacted Maxmind's support and they have provided me with this: https://dev.maxmind.com/release-note/crimea-accuracy-update-2019/ . Can someone please tell me how I can make use of that? Thanks, skwok Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289125,289125#msg-289125 From francis at daoine.org Tue Aug 18 08:39:12 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 18 Aug 2020 09:39:12 +0100 Subject: How to set geoip country code for Crimea In-Reply-To: <2db4a23ed2822f2f9e35e8d7c18d7791.NginxMailingListEnglish@forum.nginx.org> References: <2db4a23ed2822f2f9e35e8d7c18d7791.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200818083912.GG20939@daoine.org> On Mon, Aug 17, 2020 at 11:35:12PM -0400, skwok wrote: Hi there, > I'd like to use the geoip module (v1) to block Crimea from access. The > syntax that I'm using is: > geoip_country /usr/share/GeoIP/GeoIP.dat; When all you have is country-level granularity, you can't block just part of a country. So if you want to block the nginx offices in Cork, you would have to block the entire country of IE. And, as your later note describes, for the purposes of that version of that database, the IP addresses associated with the geographic region of Crimea are included in the country code UA. > https://dev.maxmind.com/release-note/crimea-accuracy-update-2019/ . Can > someone please tell me how I can make use of that? One option is to use the geoip_city database file, which gives sub-country granularity -- that note suggests that the particular case of "Crimea" is approximately covered by (interpreting their region_codes.csv file) UA,11,"Krym" and UA,20,"Sevastopol'". Another option is to write your own database file, and group the IP addresses that you want to block into your own country designation, and then block that. Whether that is easy or possible is probably not an nginx-specific question. 
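Putting that first option into config form, a rough sketch (untested; the database path is a placeholder, and the region codes are the ones quoted from region_codes.csv above) might look like:

```
geoip_city /usr/share/GeoIP/GeoLiteCity.dat;

map "$geoip_city_country_code:$geoip_region" $blocked_region {
    default  0;
    "UA:11"  1;   # Krym
    "UA:20"  1;   # Sevastopol'
}

server {
    listen 8080;

    if ($blocked_region) {
        return 403;
    }

    # ... the rest of the site configuration continues here
}
```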
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Aug 18 09:35:16 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Tue, 18 Aug 2020 05:35:16 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200815225446.GK12747@mdounin.ru> References: <20200815225446.GK12747@mdounin.ru> Message-ID: <6b8740621740d735756c20f29c829b20.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Fri, Aug 14, 2020 at 10:34:49AM -0400, vergil wrote: > > > Maxim Dounin Wrote: > > ------------------------------------------------------- > > > Hello! > > > > > > On Thu, Aug 13, 2020 at 12:11:54PM -0400, vergil wrote: > > > > > > > Maxim Dounin Wrote: > > > > ------------------------------------------------------- > > > > > Hello! > > > > > > > > > > On Thu, Aug 13, 2020 at 11:39:36AM -0400, vergil wrote: > > > > > > > > > > > This one was hard to catch. > > > > > > > > > > > > I've captured one error with 30 seconds delta before and > after > > > the > > > > > event. > > > > > > Where can i attach log file for you? There's 400K messages, > so i > > > > > cannot > > > > > > simple put it here. > > > > > > > > > > Attaching the log to the message into the mailing list should > > > > > work, but I'm not sure it's supported by the (obsolete) forum > > > > > interface you are using. If not, you may put the log at a > > > > > convinient place and provide a link here, or attach it to a > > > > > ticket on trac.nginx.org, or email to me privetely. > > > > > > > > I've attached log file to our S3 public storage. You can > download it > > > through > > > > this link: > > > > > > > > https://drive-public-eu.s3.eu-central-1.amazonaws.com/nginx/nginx-debu > > > g.csv > > > > > > > > A note: this is a CSV format from our logging system. I can try > to > > > extract > > > > logs in original format if you need. > > > > > > Thanks, but this doesn't seem to contain anything related to the > > > SSL_shutdown() except the message itself: > > > > > > "2020-08-13T15:19:03.279Z","7","shmtx lock", > > > "2020-08-13T15:19:03.279Z","7","shmtx lock", > > > "2020-08-13T15:19:03.279Z","7","timer delta: 0", > > > "2020-08-13T15:19:03.280Z","2","SSL_shutdown() failed (SSL: > > > error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) > while > > > closing request, client: XXX.XXX.XXX.XXX, server: > > > XXX.XXX.XXX.XXX:443","9140" > > > "2020-08-13T15:19:03.280Z","7","epoll: fd:322 ev:0005 > > > d:00007F0A0FCDDEB0", > > > "2020-08-13T15:19:03.280Z","7","epoll: fd:54 ev:0004 > > > d:00007F0A0FCDFAC9", > > > > > > And nothing else in the log saying "SSL_shutdow()", while there > > > should be a debug messages like "SSL_shutdown: -1" and > > > "SSL_get_error: ..." right before the message, and nothing at all > > > related to the connection 9140. > > > > > > It looks like the debug logging is only enabled on the global > > > level, but disabled at http or server level. Please see the part > > > starting at "Note that redefining the log without also specifying > > > the debug level will disable the debugging log" in the "A > > > debugging log" article > > > (http://nginx.org/en/docs/debugging_log.html). > > > > So... Bad news: i cannot capture the event when full debug enabled. > Server > > cannot handle the load and our service partially down at that time. > > > > What can i say is that this problem reveal itself on all servers > with new > > nginx version. 
> > > > I'll send you the link privately where you can get our > (some-redacted) > > config files. > > Thank you for your efforts. Just in case, it is possible to > configure debug logging only for parts of the load - using the > debug_connection directive with large networks in CIDR notation > (http://nginx.org/r/debug_connection). It's probably not needed > in this particular case, see below. > > I was able to reporoduce an "SSL_shutdown() failed (SSL: ... bad > write retry)" error at least in one case, similar to the one > previously observed with SSL_write() in > https://trac.nginx.org/nginx/ticket/1194. Previously, this case > wasn't causing SSL_shutdown() errors, but SSL shutdown fix > introduced in nginx 1.19.2 revealed the problem. > > The following patch should fix this. It was discussed previously > as a possible fix for other SSL_shutdown() errors fixed in 1.19.2, > but wasn't commited as there were concerns it will effectively > disable SSL shutdown in some unrelated cases where c->error flag > is misused. Now it is more or less clear that the change is > needed. > > Patch (it would be great if you'll be able to test if it fixes the > problem for you): > > # HG changeset patch > # User Maxim Dounin > # Date 1597531639 -10800 > # Sun Aug 16 01:47:19 2020 +0300 > # Node ID be7a3155e00161baf7359ffa73a3a226f1e487c9 > # Parent 7d46c9f56c9afe34a38bb3aea99550a2fd884280 > SSL: disabled shutdown after connection errors. > > This fixes ""SSL_shutdown() failed (SSL: ... bad write retry)" errors > as observed on the second SSL_shutdown() call after SSL shutdown fixes > in > 09fb2135a589 (1.19.2), notably when sending fails in > ngx_http_test_expect(), > similarly to ticket 1194. > > Note that there are some places where c->error is misused to prevent > further output, such as ngx_http_v2_finalize_connection() if there > are pending streams, or in filter finalization. These places seem > to be extreme enough to don't care about missing shutdown though. > For example, filter finalization currently prevents keepalive from > being used. > > diff --git a/src/event/ngx_event_openssl.c > b/src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c > +++ b/src/event/ngx_event_openssl.c > @@ -2793,7 +2793,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) > return NGX_OK; > } > > - if (c->timedout) { > + if (c->timedout || c->error) { > mode = SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN; > SSL_set_quiet_shutdown(c->ssl->connection, 1); > > > > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Good day. Patch mentioned above solved half the problems. SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while processing HTTP/2 connection Still remains in the logs. Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289130#msg-289130 From gfrankliu at gmail.com Tue Aug 18 22:52:01 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 18 Aug 2020 15:52:01 -0700 Subject: keepalive and "down" flag Message-ID: Hi, If I use keepalive between nginx and upstream backend servers, and later add the "down" flag to one of the servers, will nginx stop sending traffic to it immediately or will it still send requests using the existing keepalive connection, just not creating any new connection? 
Is the "down" flag only consulted for creating new upstream connections or is it also for every request on existing/keepalive connection? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Aug 19 16:50:54 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Aug 2020 19:50:54 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <6b8740621740d735756c20f29c829b20.NginxMailingListEnglish@forum.nginx.org> References: <20200815225446.GK12747@mdounin.ru> <6b8740621740d735756c20f29c829b20.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200819165054.GN12747@mdounin.ru> Hello! On Tue, Aug 18, 2020 at 05:35:16AM -0400, vergil wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Fri, Aug 14, 2020 at 10:34:49AM -0400, vergil wrote: > > > > > Maxim Dounin Wrote: > > > ------------------------------------------------------- > > > > Hello! > > > > > > > > On Thu, Aug 13, 2020 at 12:11:54PM -0400, vergil wrote: > > > > > > > > > Maxim Dounin Wrote: > > > > > ------------------------------------------------------- > > > > > > Hello! > > > > > > > > > > > > On Thu, Aug 13, 2020 at 11:39:36AM -0400, vergil wrote: > > > > > > > > > > > > > This one was hard to catch. > > > > > > > > > > > > > > I've captured one error with 30 seconds delta before and > > after > > > > the > > > > > > event. > > > > > > > Where can i attach log file for you? There's 400K messages, > > so i > > > > > > cannot > > > > > > > simple put it here. > > > > > > > > > > > > Attaching the log to the message into the mailing list should > > > > > > work, but I'm not sure it's supported by the (obsolete) forum > > > > > > interface you are using. If not, you may put the log at a > > > > > > convinient place and provide a link here, or attach it to a > > > > > > ticket on trac.nginx.org, or email to me privetely. > > > > > > > > > > I've attached log file to our S3 public storage. You can > > download it > > > > through > > > > > this link: > > > > > > > > > > > https://drive-public-eu.s3.eu-central-1.amazonaws.com/nginx/nginx-debu > > > > g.csv > > > > > > > > > > A note: this is a CSV format from our logging system. I can try > > to > > > > extract > > > > > logs in original format if you need. > > > > > > > > Thanks, but this doesn't seem to contain anything related to the > > > > SSL_shutdown() except the message itself: > > > > > > > > "2020-08-13T15:19:03.279Z","7","shmtx lock", > > > > "2020-08-13T15:19:03.279Z","7","shmtx lock", > > > > "2020-08-13T15:19:03.279Z","7","timer delta: 0", > > > > "2020-08-13T15:19:03.280Z","2","SSL_shutdown() failed (SSL: > > > > error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) > > while > > > > closing request, client: XXX.XXX.XXX.XXX, server: > > > > XXX.XXX.XXX.XXX:443","9140" > > > > "2020-08-13T15:19:03.280Z","7","epoll: fd:322 ev:0005 > > > > d:00007F0A0FCDDEB0", > > > > "2020-08-13T15:19:03.280Z","7","epoll: fd:54 ev:0004 > > > > d:00007F0A0FCDFAC9", > > > > > > > > And nothing else in the log saying "SSL_shutdow()", while there > > > > should be a debug messages like "SSL_shutdown: -1" and > > > > "SSL_get_error: ..." right before the message, and nothing at all > > > > related to the connection 9140. > > > > > > > > It looks like the debug logging is only enabled on the global > > > > level, but disabled at http or server level. 
Please see the part > > > > starting at "Note that redefining the log without also specifying > > > > the debug level will disable the debugging log" in the "A > > > > debugging log" article > > > > (http://nginx.org/en/docs/debugging_log.html). > > > > > > So... Bad news: i cannot capture the event when full debug enabled. > > Server > > > cannot handle the load and our service partially down at that time. > > > > > > What can i say is that this problem reveal itself on all servers > > with new > > > nginx version. > > > > > > I'll send you the link privately where you can get our > > (some-redacted) > > > config files. > > > > Thank you for your efforts. Just in case, it is possible to > > configure debug logging only for parts of the load - using the > > debug_connection directive with large networks in CIDR notation > > (http://nginx.org/r/debug_connection). It's probably not needed > > in this particular case, see below. > > > > I was able to reporoduce an "SSL_shutdown() failed (SSL: ... bad > > write retry)" error at least in one case, similar to the one > > previously observed with SSL_write() in > > https://trac.nginx.org/nginx/ticket/1194. Previously, this case > > wasn't causing SSL_shutdown() errors, but SSL shutdown fix > > introduced in nginx 1.19.2 revealed the problem. > > > > The following patch should fix this. It was discussed previously > > as a possible fix for other SSL_shutdown() errors fixed in 1.19.2, > > but wasn't commited as there were concerns it will effectively > > disable SSL shutdown in some unrelated cases where c->error flag > > is misused. Now it is more or less clear that the change is > > needed. > > > > Patch (it would be great if you'll be able to test if it fixes the > > problem for you): > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1597531639 -10800 > > # Sun Aug 16 01:47:19 2020 +0300 > > # Node ID be7a3155e00161baf7359ffa73a3a226f1e487c9 > > # Parent 7d46c9f56c9afe34a38bb3aea99550a2fd884280 > > SSL: disabled shutdown after connection errors. > > > > This fixes ""SSL_shutdown() failed (SSL: ... bad write retry)" errors > > as observed on the second SSL_shutdown() call after SSL shutdown fixes > > in > > 09fb2135a589 (1.19.2), notably when sending fails in > > ngx_http_test_expect(), > > similarly to ticket 1194. > > > > Note that there are some places where c->error is misused to prevent > > further output, such as ngx_http_v2_finalize_connection() if there > > are pending streams, or in filter finalization. These places seem > > to be extreme enough to don't care about missing shutdown though. > > For example, filter finalization currently prevents keepalive from > > being used. > > > > diff --git a/src/event/ngx_event_openssl.c > > b/src/event/ngx_event_openssl.c > > --- a/src/event/ngx_event_openssl.c > > +++ b/src/event/ngx_event_openssl.c > > @@ -2793,7 +2793,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) > > return NGX_OK; > > } > > > > - if (c->timedout) { > > + if (c->timedout || c->error) { > > mode = SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN; > > SSL_set_quiet_shutdown(c->ssl->connection, 1); > > > > > > Good day. > > Patch mentioned above solved half the problems. > > SSL_shutdown() failed (SSL: error:1409F07F:SSL > routines:ssl3_write_pending:bad write retry) while processing HTTP/2 > connection > > Still remains in the logs. Do you see any other errors on the same connection before the SSL_shutdown() error? As suggested previously, somethig relevant might be logged at the "info" level. 
Note that seeing info-level error messages will probably require error logging to be reconfigured, much like with debug. If there is nothing, I'm afraid the only solution would be to try to catch a debugging log related to these errors. As previously suggested, this can be done without too much load by using the debug_connection with relatively large CIDR blocks and waiting for the error to happen from with a client from one of these blocks. -- Maxim Dounin http://mdounin.ru/ From 6617065164 at txt.att.net Wed Aug 19 16:59:31 2020 From: 6617065164 at txt.att.net (6617065164 at txt.att.net) Date: Wed, 19 Aug 2020 16:59:31 -0000 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: 20200819165054.GN12747@mdounin.ru Message-ID: Please take me off all of your contact lists and please stop blowing up my phone with text messages. -----Original Message----- From: Sent: Wed, 19 Aug 2020 19:50:54 +0300 To: 6617065164 at txt.att.net Subject: Re: SSL_shutdown() failed (SSL: ... bad write retry) >Hello! > >On Tue, Aug 18, 2020 at 05:35:16AM -0400, vergil wrote: > >> Maxim Dounin Wrote: >> ------------------------------------------------------- >> > Hello! >> > >> > On Fr ================================================================== This mobile text message is brought to you by AT&T From nginx-forum at forum.nginx.org Wed Aug 19 18:53:54 2020 From: nginx-forum at forum.nginx.org (CranberryPie) Date: Wed, 19 Aug 2020 14:53:54 -0400 Subject: Nginx crashing my site when adding new config Message-ID: <7adecb7d397a2177864f7dc6cb1c3a9d.NginxMailingListEnglish@forum.nginx.org> Hello, I am trying to create a config for a process called Isso, when I do create the config as you can see below Nginx crashes my site. If I remove the Isso Nginx config my site comes back online. ``` nginx.service - A high performance web server and a reverse proxy server Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: en Active: failed (Result: exit-code) since Wed 2020-08-19 18:04:04 UTC; 11s ago Docs: man:nginx(8) Process: 8596 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 - Process: 20499 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process Main PID: 335 (code=exited, status=0/SUCCESS) ``` My Nginx config located in /etc/nginx/sites-available and linked to sites-enabled. ``` server { listen 80; listen [::]:80; server_name isso.mydomain.tld; return 301 https://isso.mydomain.tld$request_uri; access_log /dev/null; error_log /dev/null; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name isso.mydomain.tld; access_log /var/log/nginx/isso-access.log; error_log /var/log/nginx/isso-error.log; ssl_certificate /etc/nginx/https/fullchain.pem; ssl_certificate_key /etc/nginx/https/key.pem; location / { proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http://localhost:8080; } } ``` Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289141,289141#msg-289141 From nginx-forum at forum.nginx.org Thu Aug 20 13:30:37 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Thu, 20 Aug 2020 09:30:37 -0400 Subject: SSL_shutdown() failed (SSL: ... 
bad write retry) In-Reply-To: <20200819165054.GN12747@mdounin.ru> References: <20200819165054.GN12747@mdounin.ru> Message-ID: <0a8efd62970833aceb3d61f3f6c44742.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Do you see any other errors on the same connection before the > SSL_shutdown() error? As suggested previously, somethig relevant > might be logged at the "info" level. Note that seeing info-level > error messages will probably require error logging to be > reconfigured, much like with debug. > > If there is nothing, I'm afraid the only solution would be to try > to catch a debugging log related to these errors. As previously > suggested, this can be done without too much load by using the > debug_connection with relatively large CIDR blocks and waiting for > the error to happen from with a client from one of these blocks. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Good day. I've change log level from notice to info and there's indeed one new message related to HTTP/2 problem. 2020/08/20 15:59:31 [info] 32305#32305: *1982005 client timed out (110: Connection timed out) while processing HTTP/2 connection, client: XXX, server: XXX:443 2020/08/20 15:59:36 [crit] 32305#32305: *1982005 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write retry) while processing HTTP/2 connection, client: XXX, server: XXX:443 I don't know if this will help. Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289142#msg-289142 From 6617065164 at txt.att.net Thu Aug 20 14:41:03 2020 From: 6617065164 at txt.att.net (6617065164 at txt.att.net) Date: Thu, 20 Aug 2020 14:41:03 -0000 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: 0a8efd62970833aceb3d61f3f6c44742.NginxMailingListEnglish@forum.nginx.org Message-ID: Please stop sending me messages! -----Original Message----- From: Sent: Thu, 20 Aug 2020 09:30:37 -0400 To: 6617065164 at txt.att.net Subject: Re: SSL_shutdown() failed (SSL: ... bad write retry) >Maxim Dounin Wrote: >------------------------------------------------------- >> Do you see any other errors on the same connection before the >> SSL_shutdown() error? As sugges ================================================================== This mobile text message is brought to you by AT&T From mdounin at mdounin.ru Thu Aug 20 19:16:31 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Aug 2020 22:16:31 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <0a8efd62970833aceb3d61f3f6c44742.NginxMailingListEnglish@forum.nginx.org> References: <20200819165054.GN12747@mdounin.ru> <0a8efd62970833aceb3d61f3f6c44742.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200820191631.GP12747@mdounin.ru> Hello! On Thu, Aug 20, 2020 at 09:30:37AM -0400, vergil wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Do you see any other errors on the same connection before the > > SSL_shutdown() error? As suggested previously, somethig relevant > > might be logged at the "info" level. Note that seeing info-level > > error messages will probably require error logging to be > > reconfigured, much like with debug. > > > > If there is nothing, I'm afraid the only solution would be to try > > to catch a debugging log related to these errors. 
As previously > > suggested, this can be done without too much load by using the > > debug_connection with relatively large CIDR blocks and waiting for > > the error to happen from with a client from one of these blocks. > > > > -- > > Maxim Dounin > > http://mdounin.ru/ > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > Good day. > > I've change log level from notice to info and there's indeed one new message > related to HTTP/2 problem. > > 2020/08/20 15:59:31 [info] 32305#32305: *1982005 client timed out (110: > Connection timed out) while processing HTTP/2 connection, client: XXX, > server: XXX:443 > 2020/08/20 15:59:36 [crit] 32305#32305: *1982005 SSL_shutdown() failed (SSL: > error:1409F07F:SSL routines:ssl3_write retry) while processing HTTP/2 > connection, client: XXX, server: XXX:443 > > I don't know if this will help. Thanks, I think I have an idea about what's going on here. Likely these are read timeouts, which can interfere with writes in HTTP/2, causing SSL_shutdown() errors. Please try the following patch: # HG changeset patch # User Maxim Dounin # Date 1597950898 -10800 # Thu Aug 20 22:14:58 2020 +0300 # Node ID f95e76e9144773a664271c3e91e4cb6df3bc774a # Parent 7015f26aef904e2ec17b4b6f6387fd3b8298f79d HTTP/2: connections with read timeouts marked as timed out. In HTTP/2, closing a connection because of a read timeout might happen when there are unfinished writes, resulting in SSL_shutdown() errors. Fix is to mark such connections with the c->timedout flag to avoid sending SSL shutdown. diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c +++ b/src/http/v2/ngx_http_v2.c @@ -346,6 +346,7 @@ ngx_http_v2_read_handler(ngx_event_t *re if (rev->timedout) { ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out"); + c->timedout = 1; ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_PROTOCOL_ERROR); return; } -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Thu Aug 20 20:47:08 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 20 Aug 2020 23:47:08 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200820191631.GP12747@mdounin.ru> References: <20200819165054.GN12747@mdounin.ru> <0a8efd62970833aceb3d61f3f6c44742.NginxMailingListEnglish@forum.nginx.org> <20200820191631.GP12747@mdounin.ru> Message-ID: <2168C4F3-8096-4A59-BBCC-6919E007DAF0@nginx.com> > On 20 Aug 2020, at 22:16, Maxim Dounin wrote: > > Hello! > > On Thu, Aug 20, 2020 at 09:30:37AM -0400, vergil wrote: > >> Maxim Dounin Wrote: >> ------------------------------------------------------- >>> Do you see any other errors on the same connection before the >>> SSL_shutdown() error? As suggested previously, somethig relevant >>> might be logged at the "info" level. Note that seeing info-level >>> error messages will probably require error logging to be >>> reconfigured, much like with debug. >>> >>> If there is nothing, I'm afraid the only solution would be to try >>> to catch a debugging log related to these errors. As previously >>> suggested, this can be done without too much load by using the >>> debug_connection with relatively large CIDR blocks and waiting for >>> the error to happen from with a client from one of these blocks. 
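The debug_connection suggestion quoted above amounts to something like the following (the CIDR blocks are placeholders, and nginx has to be built with --with-debug for the per-connection debug output to appear):

```
error_log /var/log/nginx/error.log info;

events {
    worker_connections 1024;

    # only connections from these networks get debug-level logging
    debug_connection 192.0.2.0/24;
    debug_connection 198.51.100.0/23;
}
```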
>>> >>> -- >>> Maxim Dounin >>> http://mdounin.ru/ >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> Good day. >> >> I've change log level from notice to info and there's indeed one new message >> related to HTTP/2 problem. >> >> 2020/08/20 15:59:31 [info] 32305#32305: *1982005 client timed out (110: >> Connection timed out) while processing HTTP/2 connection, client: XXX, >> server: XXX:443 >> 2020/08/20 15:59:36 [crit] 32305#32305: *1982005 SSL_shutdown() failed (SSL: >> error:1409F07F:SSL routines:ssl3_write retry) while processing HTTP/2 >> connection, client: XXX, server: XXX:443 >> >> I don't know if this will help. > > Thanks, I think I have an idea about what's going on here. Likely > these are read timeouts, which can interfere with writes in > HTTP/2, causing SSL_shutdown() errors. Please try the following > patch: > > # HG changeset patch > # User Maxim Dounin > # Date 1597950898 -10800 > # Thu Aug 20 22:14:58 2020 +0300 > # Node ID f95e76e9144773a664271c3e91e4cb6df3bc774a > # Parent 7015f26aef904e2ec17b4b6f6387fd3b8298f79d > HTTP/2: connections with read timeouts marked as timed out. > > In HTTP/2, closing a connection because of a read timeout might happen > when there are unfinished writes, resulting in SSL_shutdown() errors. > Fix is to mark such connections with the c->timedout flag to avoid sending > SSL shutdown. > > diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c > --- a/src/http/v2/ngx_http_v2.c > +++ b/src/http/v2/ngx_http_v2.c > @@ -346,6 +346,7 @@ ngx_http_v2_read_handler(ngx_event_t *re > > if (rev->timedout) { > ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out"); > + c->timedout = 1; > ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_PROTOCOL_ERROR); > return; > } FYI, I could reproduce this case, the patch fixes it for me. 
A similar case exists in idle handler, it also needs love: diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c @@ -4584,6 +4585,7 @@ ngx_http_v2_idle_handler(ngx_event_t *re ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http2 idle handler"); if (rev->timedout || c->close) { + c->timedout = 1; ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_NO_ERROR); return; } Traces from read handler: 2020/08/20 23:25:48 [debug] 1286#0: *1 http2 frame complete pos:000000010E521838 end:000000010E52183A 2020/08/20 23:25:48 [debug] 1286#0: *1 http2 frame state save pos:000000010E521838 end:000000010E52183A handler:000000010A29B900 2020/08/20 23:25:48 [debug] 1286#0: *1 event timer: 7, old: 1931548335, new: 1931548341 2020/08/20 23:25:48 [debug] 1286#0: timer delta: 1 2020/08/20 23:25:48 [debug] 1286#0: worker cycle 2020/08/20 23:25:48 [debug] 1286#0: kevent timer: 994, changes: 0 2020/08/20 23:25:49 [debug] 1286#0: kevent events: 0 2020/08/20 23:25:49 [debug] 1286#0: timer delta: 999 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer del: 7: 1931548335 2020/08/20 23:25:49 [info] 1286#0: *1 client timed out (60: Operation timed out) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 send GOAWAY frame: last sid 1, error 1 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 frame out: 00006060000077C0 sid:0 bl:0 len:8 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL buf copy: 17 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL to write: 17 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_write: -1 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_get_error: 3 2020/08/20 23:25:49 [debug] 1286#0: *1 kevent set event: 7: ft:-2 fl:0025 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 frame sent: 00006060000077C0 sid:0 bl:0 len:8 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer add: 7: 8000:1931556340 2020/08/20 23:25:49 [debug] 1286#0: *1 close http connection: 7 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_shutdown: -1 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_get_error: 3 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer add: 7: 3000:1931551340 2020/08/20 23:25:49 [debug] 1286#0: worker cycle 2020/08/20 23:25:49 [debug] 1286#0: kevent timer: 3000, changes: 1 2020/08/20 23:25:51 [debug] 1286#0: kevent events: 1 2020/08/20 23:25:51 [debug] 1286#0: kevent: 7: ft:-2 fl:0025 ff:00000000 d:49039 ud:000062F00000E538 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL shutdown handler 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL_shutdown: -1 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL_get_error: 1 2020/08/20 23:25:51 [crit] 1286#0: *1 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 And from idle handler: 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 idle handler 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 send GOAWAY frame: last sid 1, error 0 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 frame out: 00006060000077C0 sid:0 bl:0 len:8 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL buf copy: 17 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL to write: 17 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_write: -1 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_get_error: 3 2020/08/20 23:32:00 [debug] 1374#0: *1 kevent set event: 7: ft:-2 fl:0025 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 frame sent: 00006060000077C0 sid:0 bl:0 len:8 2020/08/20 23:32:00 [debug] 1374#0: *1 event timer add: 7: 8000:1931927860 2020/08/20 23:32:00 [debug] 1374#0: *1 close http connection: 7 2020/08/20 
23:32:00 [debug] 1374#0: *1 SSL_shutdown: -1 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_get_error: 3 2020/08/20 23:32:00 [debug] 1374#0: *1 event timer add: 7: 3000:1931922860 2020/08/20 23:32:00 [debug] 1374#0: worker cycle 2020/08/20 23:32:00 [debug] 1374#0: kevent timer: 3000, changes: 1 2020/08/20 23:32:02 [debug] 1374#0: kevent events: 1 2020/08/20 23:32:02 [debug] 1374#0: kevent: 7: ft:-2 fl:0025 ff:00000000 d:49039 ud:000062F00000E538 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL shutdown handler 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL_shutdown: -1 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL_get_error: 1 2020/08/20 23:32:02 [crit] 1374#0: *1 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 -- Sergey Kandaurov From L.Meren at f5.com Thu Aug 20 22:41:52 2020 From: L.Meren at f5.com (Libby Meren) Date: Thu, 20 Aug 2020 22:41:52 +0000 Subject: Nginx Hackathon! Message-ID: <9CE89998-EA92-4E0B-9AA4-26C5C51DE470@f5.com> Hi, I wanted to let you know that NGINX will be running our first Hackathon in September. The Hackathon theme is ?NGINX for Good?: we?re inviting participants to build a website or application that helps others, with the site or app running on our dynamic application server, NGINX Unit. The Hackathon winner gets their project funded (with support from the members of the engineering team). If you?re interested, go ahead and sign up ? all are welcome! For any questions, you can contact us at nginxhackathon at f5.com. We hope to see you there, Libby [signature_1604307688] Libby Meren -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 22446 bytes Desc: image001.png URL: From 6617065164 at txt.att.net Thu Aug 20 22:43:43 2020 From: 6617065164 at txt.att.net (6617065164 at txt.att.net) Date: Thu, 20 Aug 2020 22:43:43 -0000 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: 2168C4F3-8096-4A59-BBCC-6919E007DAF0@nginx.com Message-ID: Please remove me from all of your contact lists, please. Thank you. -----Original Message----- From: Sent: Thu, 20 Aug 2020 23:47:08 +0300 To: 6617065164 at txt.att.net Subject: Re: SSL_shutdown() failed (SSL: ... bad write retry) > >> On 20 Aug 2020, at 22:16, Maxim Dounin wrote: >> >> Hello! >> >> On Thu, Aug 20, 2020 at 09:30:37AM -0400, vergil wrote: >> >>> Maxim Dounin Wrote: >> ================================================================== This mobile text message is brought to you by AT&T From 6617065164 at txt.att.net Thu Aug 20 22:45:10 2020 From: 6617065164 at txt.att.net (6617065164 at txt.att.net) Date: Thu, 20 Aug 2020 22:45:10 -0000 Subject: Nginx Hackathon! In-Reply-To: 9CE89998-EA92-4E0B-9AA4-26C5C51DE470@f5.com Message-ID: Please remove my contact from any and all of your lists, please. Thank you. -----Original Message----- From: Sent: Thu, 20 Aug 2020 22:41:52 +0000 To: 6617065164 at txt.att.net Subject: Nginx Hackathon! >Hi, > >I wanted to let you know that NGINX will be running our first Hackathon in September. The Hackathon theme is ? 
================================================================== This mobile text message is brought to you by AT&T From nginx-forum at forum.nginx.org Fri Aug 21 15:46:44 2020 From: nginx-forum at forum.nginx.org (brutuz) Date: Fri, 21 Aug 2020 11:46:44 -0400 Subject: Multiple MAP implementation issue Message-ID: <9c2da5fa1a6887974afbcf9bcb911be8.NginxMailingListEnglish@forum.nginx.org> I have this map statements === This works well ====== map http_header1 var1 { default "abc" 1a "def" 1b "ghi" } map http_header2 var2 { default "123" 2a "445" 2b "678" } map http_header3 finalvar { default "xxxx" aaaa $var1 bbbb $var2 } ========== Now I need to do regex on http_header3.. if I change the last map statement to.. map http_header3 finalvar { default "xxxx" ~aaaa $var1 ~bbbb $var2 } This causes ERROR too much redirects.. so im confused... Any input is appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289153,289153#msg-289153 From nginx-forum at forum.nginx.org Fri Aug 21 17:09:59 2020 From: nginx-forum at forum.nginx.org (vergil) Date: Fri, 21 Aug 2020 13:09:59 -0400 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <2168C4F3-8096-4A59-BBCC-6919E007DAF0@nginx.com> References: <2168C4F3-8096-4A59-BBCC-6919E007DAF0@nginx.com> Message-ID: Yes, can confirm that this patch solved the issue. Regards, Alexander. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289087,289155#msg-289155 From mdounin at mdounin.ru Fri Aug 21 21:45:28 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 22 Aug 2020 00:45:28 +0300 Subject: Multiple MAP implementation issue In-Reply-To: <9c2da5fa1a6887974afbcf9bcb911be8.NginxMailingListEnglish@forum.nginx.org> References: <9c2da5fa1a6887974afbcf9bcb911be8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200821214528.GR12747@mdounin.ru> Hello! On Fri, Aug 21, 2020 at 11:46:44AM -0400, brutuz wrote: > I have this map statements > > === This works well ====== > > map http_header1 var1 { > default "abc" > 1a "def" > 1b "ghi" > > } > > map http_header2 var2 { > default "123" > 2a "445" > 2b "678" > } > > map http_header3 finalvar { > default "xxxx" > aaaa $var1 > bbbb $var2 > } > > ========== These statements are not going to work well, at least due to multiple syntax errors (missing "$" characters, missing semicolons). > Now I need to do regex on http_header3.. > > if I change the last map statement to.. > > map http_header3 finalvar { > default "xxxx" > ~aaaa $var1 > ~bbbb $var2 > } This is, again, is not going to pass even configuration syntax check as it lacks required "$" characters before variable names and semicolons. > This causes ERROR too much redirects.. so im confused... Too many redirects is likely a result in something else in your configuration, since map statements does not cause it by itself. An example of properly working configuration snippet, with syntax errors fixed (and a "return 200..." block added to facilitate testing): map $http_header1 $var1 { default "abc"; 1a "def"; 1b "ghi"; } map $http_header2 $var2 { default "123"; 2a "445"; 2b "678"; } map $http_header3 $finalvar { default "xxxx"; ~aaaa $var1; ~bbbb $var2; } server { listen 8080; return 200 "finalvar: $finalvar\n"; } This can be easily tested with curl. 
Simple tests follows: $ curl -H 'Header3: ' http://127.0.0.1:8080/ finalvar: xxxx $ curl -H 'Header3: aaaa' http://127.0.0.1:8080/ finalvar: abc $ curl -H 'Header3: bbbb' http://127.0.0.1:8080/ finalvar: 123 -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Fri Aug 21 22:30:52 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 22 Aug 2020 01:30:52 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <2168C4F3-8096-4A59-BBCC-6919E007DAF0@nginx.com> References: <20200819165054.GN12747@mdounin.ru> <0a8efd62970833aceb3d61f3f6c44742.NginxMailingListEnglish@forum.nginx.org> <20200820191631.GP12747@mdounin.ru> <2168C4F3-8096-4A59-BBCC-6919E007DAF0@nginx.com> Message-ID: <20200821223052.GT12747@mdounin.ru> Hello! On Thu, Aug 20, 2020 at 11:47:08PM +0300, Sergey Kandaurov wrote: > > > On 20 Aug 2020, at 22:16, Maxim Dounin wrote: > > > > Hello! > > > > On Thu, Aug 20, 2020 at 09:30:37AM -0400, vergil wrote: > > > >> Maxim Dounin Wrote: > >> ------------------------------------------------------- > >>> Do you see any other errors on the same connection before the > >>> SSL_shutdown() error? As suggested previously, somethig relevant > >>> might be logged at the "info" level. Note that seeing info-level > >>> error messages will probably require error logging to be > >>> reconfigured, much like with debug. > >>> > >>> If there is nothing, I'm afraid the only solution would be to try > >>> to catch a debugging log related to these errors. As previously > >>> suggested, this can be done without too much load by using the > >>> debug_connection with relatively large CIDR blocks and waiting for > >>> the error to happen from with a client from one of these blocks. > >>> > >>> -- > >>> Maxim Dounin > >>> http://mdounin.ru/ > >>> _______________________________________________ > >>> nginx mailing list > >>> nginx at nginx.org > >>> http://mailman.nginx.org/mailman/listinfo/nginx > >> > >> > >> Good day. > >> > >> I've change log level from notice to info and there's indeed one new message > >> related to HTTP/2 problem. > >> > >> 2020/08/20 15:59:31 [info] 32305#32305: *1982005 client timed out (110: > >> Connection timed out) while processing HTTP/2 connection, client: XXX, > >> server: XXX:443 > >> 2020/08/20 15:59:36 [crit] 32305#32305: *1982005 SSL_shutdown() failed (SSL: > >> error:1409F07F:SSL routines:ssl3_write retry) while processing HTTP/2 > >> connection, client: XXX, server: XXX:443 > >> > >> I don't know if this will help. > > > > Thanks, I think I have an idea about what's going on here. Likely > > these are read timeouts, which can interfere with writes in > > HTTP/2, causing SSL_shutdown() errors. Please try the following > > patch: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1597950898 -10800 > > # Thu Aug 20 22:14:58 2020 +0300 > > # Node ID f95e76e9144773a664271c3e91e4cb6df3bc774a > > # Parent 7015f26aef904e2ec17b4b6f6387fd3b8298f79d > > HTTP/2: connections with read timeouts marked as timed out. > > > > In HTTP/2, closing a connection because of a read timeout might happen > > when there are unfinished writes, resulting in SSL_shutdown() errors. > > Fix is to mark such connections with the c->timedout flag to avoid sending > > SSL shutdown. 
> > > > diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c > > --- a/src/http/v2/ngx_http_v2.c > > +++ b/src/http/v2/ngx_http_v2.c > > @@ -346,6 +346,7 @@ ngx_http_v2_read_handler(ngx_event_t *re > > > > if (rev->timedout) { > > ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out"); > > + c->timedout = 1; > > ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_PROTOCOL_ERROR); > > return; > > } > > FYI, I could reproduce this case, the patch fixes it for me. > A similar case exists in idle handler, it also needs love: > > diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c > @@ -4584,6 +4585,7 @@ ngx_http_v2_idle_handler(ngx_event_t *re > ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http2 idle handler"); > > if (rev->timedout || c->close) { > + c->timedout = 1; > ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_NO_ERROR); > return; > } > > Traces from read handler: > > 2020/08/20 23:25:48 [debug] 1286#0: *1 http2 frame complete pos:000000010E521838 end:000000010E52183A > 2020/08/20 23:25:48 [debug] 1286#0: *1 http2 frame state save pos:000000010E521838 end:000000010E52183A handler:000000010A29B900 > 2020/08/20 23:25:48 [debug] 1286#0: *1 event timer: 7, old: 1931548335, new: 1931548341 > 2020/08/20 23:25:48 [debug] 1286#0: timer delta: 1 > 2020/08/20 23:25:48 [debug] 1286#0: worker cycle > 2020/08/20 23:25:48 [debug] 1286#0: kevent timer: 994, changes: 0 > 2020/08/20 23:25:49 [debug] 1286#0: kevent events: 0 > 2020/08/20 23:25:49 [debug] 1286#0: timer delta: 999 > 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer del: 7: 1931548335 > 2020/08/20 23:25:49 [info] 1286#0: *1 client timed out (60: Operation timed out) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 > 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 send GOAWAY frame: last sid 1, error 1 > 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 frame out: 00006060000077C0 sid:0 bl:0 len:8 > 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL buf copy: 17 > 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL to write: 17 > 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_write: -1 > 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_get_error: 3 > 2020/08/20 23:25:49 [debug] 1286#0: *1 kevent set event: 7: ft:-2 fl:0025 > 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 frame sent: 00006060000077C0 sid:0 bl:0 len:8 > 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer add: 7: 8000:1931556340 > 2020/08/20 23:25:49 [debug] 1286#0: *1 close http connection: 7 > 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_shutdown: -1 > 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_get_error: 3 > 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer add: 7: 3000:1931551340 > 2020/08/20 23:25:49 [debug] 1286#0: worker cycle > 2020/08/20 23:25:49 [debug] 1286#0: kevent timer: 3000, changes: 1 > 2020/08/20 23:25:51 [debug] 1286#0: kevent events: 1 > 2020/08/20 23:25:51 [debug] 1286#0: kevent: 7: ft:-2 fl:0025 ff:00000000 d:49039 ud:000062F00000E538 > 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL shutdown handler > 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL_shutdown: -1 > 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL_get_error: 1 > 2020/08/20 23:25:51 [crit] 1286#0: *1 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 > > And from idle handler: > > 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 idle handler > 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 send GOAWAY frame: last sid 1, error 0 > 2020/08/20 23:32:00 [debug] 
1374#0: *1 http2 frame out: 00006060000077C0 sid:0 bl:0 len:8 > 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL buf copy: 17 > 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL to write: 17 > 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_write: -1 > 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_get_error: 3 > 2020/08/20 23:32:00 [debug] 1374#0: *1 kevent set event: 7: ft:-2 fl:0025 > 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 frame sent: 00006060000077C0 sid:0 bl:0 len:8 > 2020/08/20 23:32:00 [debug] 1374#0: *1 event timer add: 7: 8000:1931927860 > 2020/08/20 23:32:00 [debug] 1374#0: *1 close http connection: 7 > 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_shutdown: -1 > 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_get_error: 3 > 2020/08/20 23:32:00 [debug] 1374#0: *1 event timer add: 7: 3000:1931922860 It's with disabled lingering close, correct? > 2020/08/20 23:32:00 [debug] 1374#0: worker cycle > 2020/08/20 23:32:00 [debug] 1374#0: kevent timer: 3000, changes: 1 > 2020/08/20 23:32:02 [debug] 1374#0: kevent events: 1 > 2020/08/20 23:32:02 [debug] 1374#0: kevent: 7: ft:-2 fl:0025 ff:00000000 d:49039 ud:000062F00000E538 > 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL shutdown handler > 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL_shutdown: -1 > 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL_get_error: 1 > 2020/08/20 23:32:02 [crit] 1374#0: *1 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 I don't like the idea of disabling SSL shutdown after HTTP/2 idle timeout. This more or less means disabling it in all normal cases. We might either consider introducing a code to continue sending the blocked frame, or to find a different way to mitigate this. In particular, the following patch might be good enough (replaces both patches for read timeouts, as and incorporates initial c->error patch): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -2793,7 +2793,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) return NGX_OK; } - if (c->timedout) { + if (c->timedout || c->error || c->buffered) { mode = SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN; SSL_set_quiet_shutdown(c->ssl->connection, 1); -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sat Aug 22 09:50:29 2020 From: nginx-forum at forum.nginx.org (Shin Lim) Date: Sat, 22 Aug 2020 05:50:29 -0400 Subject: How to hide Kernel Info & also compile the nginx Message-ID: <400b680c5f05e36e81b21891954cbe69.NginxMailingListEnglish@forum.nginx.org> Hello, I have hosted Nginx 1.16.1 on Ubuntu 16.04. Have configured SSL from LetsEncrypt. Everything is running fine. Only port 80 and 443 are allowed. During security testing, I see that kernel information is exposed on domain.Is there any way to hide kernel information using Nginx ? Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments? Or do I need to compile every time ? Please advise. More details at https://bit.ly/30juXpv/plugins/nessus/11936 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289160,289160#msg-289160 From themadbeaker at gmail.com Sat Aug 22 14:17:10 2020 From: themadbeaker at gmail.com (J.R.) Date: Sat, 22 Aug 2020 09:17:10 -0500 Subject: How to hide Kernel Info & also compile the nginx Message-ID: "Is there any way to hide kernel information using Nginx?" Scanners 'guess' kernel versions based on various TCP options and such your server supports. 
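This is classic TCP/IP stack fingerprinting: the guess comes from how the kernel answers TCP probes, not from anything nginx sends in its HTTP responses. As a rough illustration (using nmap's documented OS-detection mode; only scan hosts you are authorized to test):

    nmap -O your.server.ip.address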
Unless you want to kill performance and make your server look like it's running an older kernel, there is nothing to be done. From themadbeaker at gmail.com Sat Aug 22 14:19:51 2020 From: themadbeaker at gmail.com (J.R.) Date: Sat, 22 Aug 2020 09:19:51 -0500 Subject: SSL_shutdown() failed (SSL: ... bad write retry) Message-ID: > Please remove me from all of your contact lists, please. Thank you. You have to unsubscribe from the mailing list via: http://mailman.nginx.org/mailman/listinfo/nginx From pluknet at nginx.com Mon Aug 24 10:02:43 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 24 Aug 2020 13:02:43 +0300 Subject: SSL_shutdown() failed (SSL: ... bad write retry) In-Reply-To: <20200821223052.GT12747@mdounin.ru> References: <20200819165054.GN12747@mdounin.ru> <0a8efd62970833aceb3d61f3f6c44742.NginxMailingListEnglish@forum.nginx.org> <20200820191631.GP12747@mdounin.ru> <2168C4F3-8096-4A59-BBCC-6919E007DAF0@nginx.com> <20200821223052.GT12747@mdounin.ru> Message-ID: > On 22 Aug 2020, at 01:30, Maxim Dounin wrote: > > Hello! > > On Thu, Aug 20, 2020 at 11:47:08PM +0300, Sergey Kandaurov wrote: > >> >>> On 20 Aug 2020, at 22:16, Maxim Dounin wrote: >>> >>> Hello! >>> >>> On Thu, Aug 20, 2020 at 09:30:37AM -0400, vergil wrote: >>> >>>> Maxim Dounin Wrote: >>>> ------------------------------------------------------- >>>>> Do you see any other errors on the same connection before the >>>>> SSL_shutdown() error? As suggested previously, somethig relevant >>>>> might be logged at the "info" level. Note that seeing info-level >>>>> error messages will probably require error logging to be >>>>> reconfigured, much like with debug. >>>>> >>>>> If there is nothing, I'm afraid the only solution would be to try >>>>> to catch a debugging log related to these errors. As previously >>>>> suggested, this can be done without too much load by using the >>>>> debug_connection with relatively large CIDR blocks and waiting for >>>>> the error to happen from with a client from one of these blocks. >>>>> >>>>> -- >>>>> Maxim Dounin >>>>> http://mdounin.ru/ >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> >>>> Good day. >>>> >>>> I've change log level from notice to info and there's indeed one new message >>>> related to HTTP/2 problem. >>>> >>>> 2020/08/20 15:59:31 [info] 32305#32305: *1982005 client timed out (110: >>>> Connection timed out) while processing HTTP/2 connection, client: XXX, >>>> server: XXX:443 >>>> 2020/08/20 15:59:36 [crit] 32305#32305: *1982005 SSL_shutdown() failed (SSL: >>>> error:1409F07F:SSL routines:ssl3_write retry) while processing HTTP/2 >>>> connection, client: XXX, server: XXX:443 >>>> >>>> I don't know if this will help. >>> >>> Thanks, I think I have an idea about what's going on here. Likely >>> these are read timeouts, which can interfere with writes in >>> HTTP/2, causing SSL_shutdown() errors. Please try the following >>> patch: >>> >>> # HG changeset patch >>> # User Maxim Dounin >>> # Date 1597950898 -10800 >>> # Thu Aug 20 22:14:58 2020 +0300 >>> # Node ID f95e76e9144773a664271c3e91e4cb6df3bc774a >>> # Parent 7015f26aef904e2ec17b4b6f6387fd3b8298f79d >>> HTTP/2: connections with read timeouts marked as timed out. >>> >>> In HTTP/2, closing a connection because of a read timeout might happen >>> when there are unfinished writes, resulting in SSL_shutdown() errors. 
>>> Fix is to mark such connections with the c->timedout flag to avoid sending >>> SSL shutdown. >>> >>> diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c >>> --- a/src/http/v2/ngx_http_v2.c >>> +++ b/src/http/v2/ngx_http_v2.c >>> @@ -346,6 +346,7 @@ ngx_http_v2_read_handler(ngx_event_t *re >>> >>> if (rev->timedout) { >>> ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out"); >>> + c->timedout = 1; >>> ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_PROTOCOL_ERROR); >>> return; >>> } >> >> FYI, I could reproduce this case, the patch fixes it for me. >> A similar case exists in idle handler, it also needs love: >> >> diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c >> @@ -4584,6 +4585,7 @@ ngx_http_v2_idle_handler(ngx_event_t *re >> ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http2 idle handler"); >> >> if (rev->timedout || c->close) { >> + c->timedout = 1; >> ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_NO_ERROR); >> return; >> } >> >> Traces from read handler: >> >> 2020/08/20 23:25:48 [debug] 1286#0: *1 http2 frame complete pos:000000010E521838 end:000000010E52183A >> 2020/08/20 23:25:48 [debug] 1286#0: *1 http2 frame state save pos:000000010E521838 end:000000010E52183A handler:000000010A29B900 >> 2020/08/20 23:25:48 [debug] 1286#0: *1 event timer: 7, old: 1931548335, new: 1931548341 >> 2020/08/20 23:25:48 [debug] 1286#0: timer delta: 1 >> 2020/08/20 23:25:48 [debug] 1286#0: worker cycle >> 2020/08/20 23:25:48 [debug] 1286#0: kevent timer: 994, changes: 0 >> 2020/08/20 23:25:49 [debug] 1286#0: kevent events: 0 >> 2020/08/20 23:25:49 [debug] 1286#0: timer delta: 999 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer del: 7: 1931548335 >> 2020/08/20 23:25:49 [info] 1286#0: *1 client timed out (60: Operation timed out) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 send GOAWAY frame: last sid 1, error 1 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 frame out: 00006060000077C0 sid:0 bl:0 len:8 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL buf copy: 17 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL to write: 17 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_write: -1 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_get_error: 3 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 kevent set event: 7: ft:-2 fl:0025 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 http2 frame sent: 00006060000077C0 sid:0 bl:0 len:8 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer add: 7: 8000:1931556340 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 close http connection: 7 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_shutdown: -1 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 SSL_get_error: 3 >> 2020/08/20 23:25:49 [debug] 1286#0: *1 event timer add: 7: 3000:1931551340 >> 2020/08/20 23:25:49 [debug] 1286#0: worker cycle >> 2020/08/20 23:25:49 [debug] 1286#0: kevent timer: 3000, changes: 1 >> 2020/08/20 23:25:51 [debug] 1286#0: kevent events: 1 >> 2020/08/20 23:25:51 [debug] 1286#0: kevent: 7: ft:-2 fl:0025 ff:00000000 d:49039 ud:000062F00000E538 >> 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL shutdown handler >> 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL_shutdown: -1 >> 2020/08/20 23:25:51 [debug] 1286#0: *1 SSL_get_error: 1 >> 2020/08/20 23:25:51 [crit] 1286#0: *1 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 >> >> And from idle handler: >> >> 2020/08/20 23:32:00 [debug] 
1374#0: *1 http2 idle handler >> 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 send GOAWAY frame: last sid 1, error 0 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 frame out: 00006060000077C0 sid:0 bl:0 len:8 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL buf copy: 17 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL to write: 17 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_write: -1 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_get_error: 3 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 kevent set event: 7: ft:-2 fl:0025 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 http2 frame sent: 00006060000077C0 sid:0 bl:0 len:8 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 event timer add: 7: 8000:1931927860 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 close http connection: 7 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_shutdown: -1 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 SSL_get_error: 3 >> 2020/08/20 23:32:00 [debug] 1374#0: *1 event timer add: 7: 3000:1931922860 > > It's with disabled lingering close, correct? Yes, I had to disable it. > >> 2020/08/20 23:32:00 [debug] 1374#0: worker cycle >> 2020/08/20 23:32:00 [debug] 1374#0: kevent timer: 3000, changes: 1 >> 2020/08/20 23:32:02 [debug] 1374#0: kevent events: 1 >> 2020/08/20 23:32:02 [debug] 1374#0: kevent: 7: ft:-2 fl:0025 ff:00000000 d:49039 ud:000062F00000E538 >> 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL shutdown handler >> 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL_shutdown: -1 >> 2020/08/20 23:32:02 [debug] 1374#0: *1 SSL_get_error: 1 >> 2020/08/20 23:32:02 [crit] 1374#0: *1 SSL_shutdown() failed (SSL: error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080 > > I don't like the idea of disabling SSL shutdown after HTTP/2 idle > timeout. This more or less means disabling it in all normal > cases. > > We might either consider introducing a code to continue sending > the blocked frame, or to find a different way to mitigate this. > In particular, the following patch might be good enough (replaces > both patches for read timeouts, as and incorporates initial > c->error patch): > > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c > +++ b/src/event/ngx_event_openssl.c > @@ -2793,7 +2793,7 @@ ngx_ssl_shutdown(ngx_connection_t *c) > return NGX_OK; > } > > - if (c->timedout) { > + if (c->timedout || c->error || c->buffered) { > mode = SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN; > SSL_set_quiet_shutdown(c->ssl->connection, 1); > Works for me. -- Sergey Kandaurov From andersondonda at gmail.com Mon Aug 24 11:22:50 2020 From: andersondonda at gmail.com (Anderson dos Santos Donda) Date: Mon, 24 Aug 2020 13:22:50 +0200 Subject: Is this an attack or a normal request? Message-ID: Hello everyone, I?m new in the webserver world, and I have a very basic knowledge about Nginx, so I want apologize in advance if I'm making a stupid question. 
I have a very basic webserver hosting a WordPress webpage and in the past 3 days I have receiving thousands of below request: 5.122.236.249 - - [24/Aug/2020:12:30:41 +0200] "\x1E\x80\xEBol\xDF\x86z\x84\xA4A^\xAF;\xA1\x98\x1B\x0E\xB7\x88\xD3h\x8FyW\xE4\x0F=.\x15\xF7f:9\xF7\xC3\xBB\xB1}n\xA5\x88\x8B\xE7\xF4\x5C\x80\x98=\xE2X\xC8\xD4\x1Bv/\xDC3yAI\xEE\xE6\xFA\xB1\xF3\x90]\x9EG\xFD\x9B\xAB\x9B:\xA7q\x82*\xE1:\x1A 5.122.236.249 - - [24/Aug/2020:12:30:41 +0200] "P\xCE \x9C\xA9\xB6pS\xD6#1\x84\x22\xB0s\xB8\xAA\x09\x06Ex\xDD\x88\x11\xFC\x0E\xDB\x04\x18~*\xE7h\xD2H\xD422\x83,\xB3u\xDF|\xED\x8BP\x9Box\xA4\x042\xFBz\xAAh\xF9\x14^\x96\xDD\x1D\xF6\xDD*\xF4" 400 173 "-" "-? This comes from a hundred of different IPs and in many requests at same time. Is this kind of DDOS attack or a legitimate request(which my server returns 400 for them)? If is an attack, has a specific name that I can search and try to understand it better and mitigate it? Thank so much for the help. Best Regards, Donda -- Att. Anderson Donda *" **Mar calmo n?o cria bom marinheiro, muito menos bom capit?o.**"* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Aug 24 11:35:24 2020 From: nginx-forum at forum.nginx.org (nathanpgibson) Date: Mon, 24 Aug 2020 07:35:24 -0400 Subject: Connection timeout on SSL with shared hosting In-Reply-To: <6d0b523686ea8fa77bf8d474bb642dc0.NginxMailingListEnglish@forum.nginx.org> References: <83a51afe-cef2-0475-f6ba-5e35909fecc9@thomas-ward.net> <6d0b523686ea8fa77bf8d474bb642dc0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Just wondering if anyone has further thoughts on what to try here? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289099,289172#msg-289172 From francis at daoine.org Mon Aug 24 13:02:13 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 24 Aug 2020 14:02:13 +0100 Subject: Connection timeout on SSL with shared hosting In-Reply-To: References: <83a51afe-cef2-0475-f6ba-5e35909fecc9@thomas-ward.net> <6d0b523686ea8fa77bf8d474bb642dc0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200824130213.GH20939@daoine.org> On Mon, Aug 24, 2020 at 07:35:24AM -0400, nathanpgibson wrote: Hi there, > Just wondering if anyone has further thoughts on what to try here? You wrote: """ When I try nmap from my local machine I get some results I can't explain. Notice the discrepancy between ports 80 and ports 443 and between IPv4 and IPv6 $ nmap -A -T4 -p443 example.org 443/tcp filtered https $ nmap -A -T4 -p443 my.server.ip.address 443/tcp filtered https $ nmap -A -T4 -p443 -6 my:server:ip::v6:address 443/tcp open ssl/http nginx 1.10.3 $ nmap -A -T4 -p80 example.org 80/tcp open http nginx 1.10.3 $ nmap -A -T4 -p80 my.server.ip.address 80/tcp open http nginx 1.10.3 """ For nmap, filtered means: Nmap cannot determine whether the port is open because packet filtering prevents its probes from reaching the port. The filtering could be from a dedicated firewall device, router rules, or host-based firewall software. (From https://nmap.org/book/man-port-scanning-basics.html) That means that something in between your nmap testing client and your nginx server is interfering with the IPv4 https/port 443 traffic. Find and fix that something, and things will probably work better. You also indicate that most visitors get a connection timeout message, while some get through. Do your nginx logs indicate that all of the ones that get through are using IPv6, not IPv4? That might also point at IPv4 being blocked. 
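One quick way to check that from the access log (a rough sketch; it assumes the client address is the first field of your log format, as in the default "combined" format, and that the log is at a typical path):

    # IPv6 client addresses contain ':', IPv4 dotted quads do not
    awk '{print $1}' /var/log/nginx/access.log | grep -c ':'     # IPv6 requests
    awk '{print $1}' /var/log/nginx/access.log | grep -vc ':'    # IPv4 requests

If the IPv4 count is essentially zero while clients are clearly trying to connect, that supports the idea that IPv4 traffic is being dropped before it reaches nginx.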
(Or: do your nginx logs indicate that all of the ones that get through are coming from similar IP addresses? Perhaps there is wonky routing involved? Although that would not explain the difference between ports 80 and 443 of the same IPv4 address.)

If you "tcpdump" on the nginx server for the port 443 traffic, do you see anything?

If tcpdump sees the traffic but nginx does not, there is probably a local (on the same server as nginx) network control device ("firewall") involved. If tcpdump does not see the traffic, then there is an external network control device involved.

If you, for example, "tcptraceroute" to your IPv4 address, port 443, from a remote client, how far does the traffic get? That might hint at where the first block is happening.

But right now, there is nothing obviously related to nginx in this diagnosis.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From themadbeaker at gmail.com  Mon Aug 24 18:05:46 2020
From: themadbeaker at gmail.com (J.R.)
Date: Mon, 24 Aug 2020 13:05:46 -0500
Subject: Is this an attack or a normal request?
Message-ID:

> Is this kind of DDOS attack or a legitimate request(which my server returns
> 400 for them)?

That's typically how various unicode characters are hex encoded. If you aren't expecting that kind of input, then yes it is likely an attack (probably trying to exploit an unknown specific piece of software). Welcome to the internet, where everything connected is bombarded 24/7 from everything else with random attacks.

That's why it's important to keep your server (and wordpress) up to date.

From lists at lazygranch.com  Mon Aug 24 18:54:35 2020
From: lists at lazygranch.com (lists)
Date: Mon, 24 Aug 2020 11:54:35 -0700
Subject: Is this an attack or a normal request?
In-Reply-To:
Message-ID:

I can't find it, but someone wrote a script to decode that style of hacking. For the hacks I was decoding, they were RDP hack attempts. The hackers just "spray" their attacks. Often they are not meaningful to your server.

I have Nginx maps set up to match requests that are not relevant to my server. For instance I don't run WordPress, so anything WordPress related gets a 444 response. On a weekly basis I pull all the IP addresses that generated a 400 or 444 and run them through an IP lookup website. If they come back to a hosting company, VPS, or basically anything not an ISP, I block the associated IP space via my firewall. The only reason I can do this weekly is I have blocked so much IP space already that I don't get many hackers.

At a minimum I suggest blocking all Amazon AWS. No eyeballs there, just hackers. Also block all of OVH. You can block any of the hosting companies since there are no eyeballs there. This blocks many VPNs as well, but nobody says you have to accept traffic from VPNs.

Firewalls are very CPU efficient, though they do use a lot of memory. In the long run, blocking all those hackers improves system efficiency, since nginx no longer has to parse all that nonsense.

I have scripts to pull the hacker IPs out of the log file, but I have a nonstandard log format. If you can create a file of IPs, this site will return the domains:

https://www.bulkseotools.com/bulk-ip-to-location.php

If you see a domain that is obviously not an ISP, you can find their entire IP space using bgp.he.net

This sounds more complicated than it is. I have it down to about 20 minutes a week.

You can also block countries in the firewall. Some people block all of China. I don't, but that does cut down on hackers.


-------- Original Message --------
From: themadbeaker at gmail.com Sent: August 24, 2020 11:06 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: Is this an attack or a normal request? > Is this kind of DDOS attack or a legitimate request(which my server returns > 400 for them)? That's typically how various unicode characters are hex encoded. If you aren't expecting that kind of input, then yes it is likely an attack (probably trying to exploit an unknown specific piece of software). Welcome to the internet where everything connected is bombarded 24/7 from everything else with random attacks. That's why it's important to keep your server (and wordpress) up to date. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From peter_booth at me.com Mon Aug 24 19:18:32 2020 From: peter_booth at me.com (Peter Booth) Date: Mon, 24 Aug 2020 15:18:32 -0400 Subject: Is this an attack or a normal request? In-Reply-To: References: Message-ID: I agree with the advice already given It can also be useful to track the User-Agent header of web requests - both to understand who is trying to do what to your website, and then to start blocking on the basis of user agent. There may be some bots and spiders that are helpful or even necessary for your business. Peter > On Aug 24, 2020, at 2:54 PM, lists wrote: > > I can't find it, but someone wrote a script to decode that style of hacking. For the hacks I was decoding, they were RDP hack attempts. The hackers just "spray" their attacks. Often they are not meaningful to your server. > > I have Nginx maps set up to match requests that are not relevant to my server. For instance I don't run WordPress, so anything WordPress related gets a 444 response. On a weekly basis I pull all the IP addresses that generated a 400 or 444 and run them through a IP lookup website. If they come back to a hosting company, VPS, or basically anything not an ISP, I block the associated IP space via my firewall. The only reason I can do this weekly is I have blocked so much IP space already that I don't get many hackers. > > At a minimum I suggest blocking all Amazon AWS. No eyeballs there, just hackers. Also block all of OVH. You can block any of the hosting companies since there are no eyeballs there. This blocks many VPNs as well but nobody says you have to accept traffic from VPNs. > > Firewalls are very CPU efficient though they do use a lot of memory. In the long run blocking all those hackers improves system efficiency since nginx does have to parse all that nonsense. > > I have scripts to pull the hacker IP out of the log file but a have a nonstandard log format. If you can create a file of IPs, this site will return the domains: > > https://www.bulkseotools.com/bulk-ip-to-location.php > > If you see a domain that is obviously not an ISP, you can find their entire IP space using bgp.he.net > > This sounds more complicate than it is. I have it down to about 20 minutes a week. > > You can also block countries in the firewall. Some people block all of China. I don't but that does cut down on hackers. > > > > Original Message > > > From: themadbeaker at gmail.com > Sent: August 24, 2020 11:06 AM > To: nginx at nginx.org > Reply-to: nginx at nginx.org > Subject: Re: Is this an attack or a normal request? > > >> Is this kind of DDOS attack or a legitimate request(which my server returns >> 400 for them)? > > That's typically how various unicode characters are hex encoded. 
If > you aren't expecting that kind of input, then yes it is likely an > attack (probably trying to exploit an unknown specific piece of > software). Welcome to the internet where everything connected is > bombarded 24/7 from everything else with random attacks. > > That's why it's important to keep your server (and wordpress) up to date. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From SPAM_TRAP_gmane at jonz.net Tue Aug 25 01:44:46 2020 From: SPAM_TRAP_gmane at jonz.net (Jonesy) Date: Tue, 25 Aug 2020 01:44:46 -0000 (UTC) Subject: Is this an attack or a normal request? References: Message-ID: On Mon, 24 Aug 2020 11:54:35 -0700, lists wrote: <-snip-> > At a minimum I suggest blocking all Amazon AWS. No eyeballs there, > just hackers. Also block all of OVH. Great suggestions. Also, block all of Digital Sewer ... err Digital Ocean. Once you catch a bad actor IP, and if you want to block the entire network, drop the ASN from a `whois` of the bad actor IP into https://enjen.net/asn-blocklist/index.php May the mask be with you, Jonesy -- Marvin L Jones | Marvin | W3DHJ.net | linux 38.238N 104.547W | @ jonz.net | Jonesy | FreeBSD * Killfiling google & XXXXbanter.com: jonz.net/ng.htm From lists at lazygranch.com Tue Aug 25 02:51:08 2020 From: lists at lazygranch.com (lists) Date: Mon, 24 Aug 2020 19:51:08 -0700 Subject: Is this an attack or a normal request? In-Reply-To: Message-ID: My VPS is on digital Ocean. Oh and I block them too. And Linode. I am an equal opportunity blocker. Google is a little tricky to find the IP space. Remember you don't want to block Google search. In fact you should create an account with Google to help them find your website. The suggested method is to get the IP space from their SPF. https://support.symphony.com/hc/en-us/articles/360029563832-Obtaining-GCP-IP-ranges-to-enable-proxy-and-firewall-configuration AWS has a json scheme to document their space. https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html I don't? block complete ASNs. Sometimes there are corporate accounts there. They have eyeballs. bgp.he.net will get just the entity that is doing the hacking. Bulletproof hoster: https://www.hetzner.com/ ? Original Message ? From: SPAM_TRAP_gmane at jonz.net Sent: August 24, 2020 6:55 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: Is this an attack or a normal request? On Mon, 24 Aug 2020 11:54:35 -0700, lists wrote: <-snip-> > At a minimum I suggest blocking all Amazon AWS. No eyeballs there, > just hackers. Also block all of OVH. Great suggestions.? Also, block all of Digital Sewer ... err Digital Ocean. Once you catch a bad actor IP, and if you want to block the entire network, drop the ASN from a `whois` of the bad actor IP into https://enjen.net/asn-blocklist/index.php May the mask be with you, Jonesy -- ? Marvin L Jones??? | Marvin????? | W3DHJ.net? | linux ?? 38.238N 104.547W |? @ jonz.net | Jonesy???? |? FreeBSD ??? 
* Killfiling google & XXXXbanter.com: jonz.net/ng.htm _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From andersondonda at gmail.com Tue Aug 25 05:53:22 2020 From: andersondonda at gmail.com (Anderson dos Santos Donda) Date: Tue, 25 Aug 2020 07:53:22 +0200 Subject: Is this an attack or a normal request? In-Reply-To: References: Message-ID: Thank you very much. Everyone! I will try to implement all the insithgts given. With desperate times come desperate measures, and I implemented a fail2ban that block any IP that doesn't have any GET or POST in the request. It is not efficient, I know. My firewall list is growing abruptly but, at least, it buys me some time to improve the all counter-measure that you guys meantionated. BR, Donda On Mon, Aug 24, 2020 at 9:18 PM Peter Booth wrote: > I agree with the advice already given > > It can also be useful to track the User-Agent header of web requests - > both to understand who is trying to do what to your website, > and then to start blocking on the basis of user agent. > There may be some bots and spiders that are helpful or even necessary for > your business. > > Peter > > > > > On Aug 24, 2020, at 2:54 PM, lists wrote: > > > > I can't find it, but someone wrote a script to decode that style of > hacking. For the hacks I was decoding, they were RDP hack attempts. The > hackers just "spray" their attacks. Often they are not meaningful to your > server. > > > > I have Nginx maps set up to match requests that are not relevant to my > server. For instance I don't run WordPress, so anything WordPress related > gets a 444 response. On a weekly basis I pull all the IP addresses that > generated a 400 or 444 and run them through a IP lookup website. If they > come back to a hosting company, VPS, or basically anything not an ISP, I > block the associated IP space via my firewall. The only reason I can do > this weekly is I have blocked so much IP space already that I don't get > many hackers. > > > > At a minimum I suggest blocking all Amazon AWS. No eyeballs there, just > hackers. Also block all of OVH. You can block any of the hosting companies > since there are no eyeballs there. This blocks many VPNs as well but nobody > says you have to accept traffic from VPNs. > > > > Firewalls are very CPU efficient though they do use a lot of memory. In > the long run blocking all those hackers improves system efficiency since > nginx does have to parse all that nonsense. > > > > I have scripts to pull the hacker IP out of the log file but a have a > nonstandard log format. If you can create a file of IPs, this site will > return the domains: > > > > https://www.bulkseotools.com/bulk-ip-to-location.php > > > > If you see a domain that is obviously not an ISP, you can find their > entire IP space using bgp.he.net > > > > This sounds more complicate than it is. I have it down to about 20 > minutes a week. > > > > You can also block countries in the firewall. Some people block all of > China. I don't but that does cut down on hackers. > > > > > > > > Original Message > > > > > > From: themadbeaker at gmail.com > > Sent: August 24, 2020 11:06 AM > > To: nginx at nginx.org > > Reply-to: nginx at nginx.org > > Subject: Re: Is this an attack or a normal request? > > > > > >> Is this kind of DDOS attack or a legitimate request(which my server > returns > >> 400 for them)? > > > > That's typically how various unicode characters are hex encoded. 
If > > you aren't expecting that kind of input, then yes it is likely an > > attack (probably trying to exploit an unknown specific piece of > > software). Welcome to the internet where everything connected is > > bombarded 24/7 from everything else with random attacks. > > > > That's why it's important to keep your server (and wordpress) up to date. > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Att. Anderson Donda *" **Mar calmo n?o cria bom marinheiro, muito menos bom capit?o.**"* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Aug 25 06:27:44 2020 From: lists at lazygranch.com (lists) Date: Mon, 24 Aug 2020 23:27:44 -0700 Subject: Is this an attack or a normal request? In-Reply-To: Message-ID: <8ud4bn6p8liq5q9ekg1d4i4b.1598336864041@lazygranch.com> An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Aug 25 09:25:05 2020 From: nginx-forum at forum.nginx.org (nathanpgibson) Date: Tue, 25 Aug 2020 05:25:05 -0400 Subject: Connection timeout on SSL with shared hosting In-Reply-To: <20200824130213.GH20939@daoine.org> References: <20200824130213.GH20939@daoine.org> Message-ID: <6c349cbbf1ad7ca4fc39c7121c5d6984.NginxMailingListEnglish@forum.nginx.org> Thanks so much, Francis Daly! This is a huge help in isolating the problem. Based on the nginx access log, IPv6 requests to port 443 are getting to nginx but IPv4 requests to port 443 are not. But they are getting to tcpdump. All I see there is a bunch of packets with the tcpflag [S]. I take it this means the handshake is not completing. It was easy to confirm this by turning off IPv6 in my browser, at which point https stopped resolving for the site in the browser but I could see the packets coming in on tcpdump. So presumably some sort of firewall on the local server machine--probably one I didn't configure or know about! I'll try to figure out how to find that blockage. In any case, apparently not an nginx issue, as you rightly perceived. Thanks again for the help! Nathan Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289099,289184#msg-289184 From nginx-forum at forum.nginx.org Tue Aug 25 11:41:43 2020 From: nginx-forum at forum.nginx.org (anish10dec) Date: Tue, 25 Aug 2020 07:41:43 -0400 Subject: Cache Volume utilized at around 50 % with proxy_cache_min_uses Message-ID: With use of proxy_cache_min_uses volume of cache is getting settled up at around 50% utilization. No matter what is the volume allocated in max_size its not filling up further beyond 50%. If the proxy_cache_min_uses is removed the cache gets filled up with max_size allocated volume. No of files in cache directory is far less beyond the size allocated in key zone. 
It's getting capped at around 20 lakh (2 million) files, whereas the allocated keys zone could have accommodated around 80 lakh (8 million) files with the configuration below:

proxy_cache_path /cache/contentcache keys_zone=content:1000m levels=1:2 max_size=1000g inactive=7d use_temp_path=off;
proxy_cache_min_uses 2;

Cache volume utilized with the above configuration is around 550 GB and is not growing beyond that. As inactive is set to 7d, eviction would only have become effective after 7 days, when content not accessed within that period gets deleted.

Writing all objects to disk causes high I/O, so using proxy_cache_min_uses should be beneficial in utilizing the cache optimally while keeping the cache hit ratio high.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289185,289185#msg-289185

From nginx-forum at forum.nginx.org  Tue Aug 25 11:49:07 2020
From: nginx-forum at forum.nginx.org (nathanpgibson)
Date: Tue, 25 Aug 2020 07:49:07 -0400
Subject: Connection timeout on SSL with shared hosting
In-Reply-To: <6c349cbbf1ad7ca4fc39c7121c5d6984.NginxMailingListEnglish@forum.nginx.org>
References: <20200824130213.GH20939@daoine.org> <6c349cbbf1ad7ca4fc39c7121c5d6984.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <81ccc2169cda02cb07ca476190a096d5.NginxMailingListEnglish@forum.nginx.org>

Turned out there was an INPUT DROP rule in iptables (but not in ip6tables), although I am using ufw as a firewall. Now https works and my nginx redirects are functioning as expected!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289099,289186#msg-289186

From mdounin at mdounin.ru  Tue Aug 25 16:50:00 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 25 Aug 2020 19:50:00 +0300
Subject: Cache Volume utilized at around 50 % with proxy_cache_min_uses
In-Reply-To:
References:
Message-ID: <20200825165000.GU12747@mdounin.ru>

Hello!

On Tue, Aug 25, 2020 at 07:41:43AM -0400, anish10dec wrote:

> With use of proxy_cache_min_uses volume of cache is getting settled up at
> around 50% utilization.
> No matter what is the volume allocated in max_size its not filling up
> further beyond 50%.
> If the proxy_cache_min_uses is removed the cache gets filled up with
> max_size allocated volume.
>
> No of files in cache directory is far less beyond the size allocated in key
> zone. Its getting capped up near 20 Lakhs whereas allocated key zone could
> have accommodate around 80 L files with below configuration

It is important to understand that the number of files in the cache directory is not directly related to the keys zone size when using proxy_cache_min_uses. Instead, when using proxy_cache_min_uses, the keys zone needs to keep information about all resources requested, to correctly track usage numbers, and this is usually much higher than the number of files saved to disk. This is what proxy_cache_min_uses does: it saves disk space and disk bandwidth by tracking information only in the keys zone.

Given the above, I see two possible reasons why the cache volume is only filled at 50%:

1. You've run out of keys_zone size.

2. You've run out of resources requested frequently enough to be cached with proxy_cache_min_uses set to 2.

It should be easy enough to find out what happens in your case.
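For a rough sense of scale (a back-of-the-envelope sketch only, using the commonly cited figure of about 8,000 keys per megabyte of keys_zone memory): keys_zone=content:1000m tracks on the order of 8 million keys, and with proxy_cache_min_uses every distinct key requested occupies a slot whether or not the response is ever written to disk. So the zone may need to be sized for the number of distinct keys requested rather than for the files on disk, for example:

    # hypothetical sizing, not a recommendation -- adjust to the measured
    # number of distinct cache keys actually requested on this host
    proxy_cache_path /cache/contentcache keys_zone=content:4000m levels=1:2
                     max_size=1000g inactive=7d use_temp_path=off;
    proxy_cache_min_uses 2;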
-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Aug 25 19:24:57 2020 From: nginx-forum at forum.nginx.org (anish10dec) Date: Tue, 25 Aug 2020 15:24:57 -0400 Subject: Cache Volume utilized at around 50 % with proxy_cache_min_uses In-Reply-To: <20200825165000.GU12747@mdounin.ru> References: <20200825165000.GU12747@mdounin.ru> Message-ID: <5287506230b6074ba171d1fe2ea21bb2.NginxMailingListEnglish@forum.nginx.org> > Given the above, I see two possible reasons why the cache volume > is only filled at 50%: > > 1. You've run out of keys_zone size. > > 2. You've run out of resources requested frequent enough to be > cached with proxy_cache_min_uses set to 2. > > It should be easy enough to find out what happens in your case. > It seems possible reason is keys_zone size. Will look into by increasing the same and trying different permutations. As in general 1M stores around 8000 Keys, what could be probable formula for keys_zone size with proxy_cache_min_uses. Since it keeps information of all requested resource so it would highly depend upon number of requested resources. In my case number of request per sec is around 1000 i.e. 36 Lakhs per hour during peak hours Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289185,289189#msg-289189 From jeff.dyke at gmail.com Wed Aug 26 02:30:38 2020 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Tue, 25 Aug 2020 22:30:38 -0400 Subject: Is this an attack or a normal request? In-Reply-To: References: Message-ID: I've seen the rest of this thread, and there are many good ideas, fail2ban is great, i actually use it with wazuh. The best security measure i ever made with wordpress is changing the name of the /admin/login.php and disabling or at least access listing the api. If no one needs api access, shut it off. With fail2ban with wazuh, perhaps fail2band handles this on its own, you can set up volume rules which will create FW rules. Also, i like to put in a snippit into nginx config for to many responses. limit_req_zone $limit_key zone=req_limit:10m rate=10r/s; limit_req_log_level warn; # don't use 503 as we have specific logic for that status limit_req_status 420; As the comment says we handle 503's and other status codes differently, so i adopted Twitters Ease You Calm status code. Change the limits to your environment. On Mon, Aug 24, 2020 at 7:23 AM Anderson dos Santos Donda < andersondonda at gmail.com> wrote: > Hello everyone, > > I?m new in the webserver world, and I have a very basic knowledge about > Nginx, so I want apologize in advance if I'm making a stupid question. > > I have a very basic webserver hosting a WordPress webpage and in the past > 3 days I have receiving thousands of below request: > > 5.122.236.249 - - [24/Aug/2020:12:30:41 +0200] > "\x1E\x80\xEBol\xDF\x86z\x84\xA4A^\xAF;\xA1\x98\x1B\x0E\xB7\x88\xD3h\x8FyW\xE4\x0F=.\x15\xF7f:9\xF7\xC3\xBB\xB1}n\xA5\x88\x8B\xE7\xF4\x5C\x80\x98=\xE2X\xC8\xD4\x1Bv/\xDC3yAI\xEE\xE6\xFA\xB1\xF3\x90]\x9EG\xFD\x9B\xAB\x9B:\xA7q\x82*\xE1:\x1A > 5.122.236.249 - - [24/Aug/2020:12:30:41 +0200] "P\xCE > \x9C\xA9\xB6pS\xD6#1\x84\x22\xB0s\xB8\xAA\x09\x06Ex\xDD\x88\x11\xFC\x0E\xDB\x04\x18~*\xE7h\xD2H\xD422\x83,\xB3u\xDF|\xED\x8BP\x9Box\xA4\x042\xFBz\xAAh\xF9\x14^\x96\xDD\x1D\xF6\xDD*\xF4" > 400 173 "-" "-? > > This comes from a hundred of different IPs and in many requests at same > time. > > Is this kind of DDOS attack or a legitimate request(which my server > returns 400 for them)? 
> > If is an attack, has a specific name that I can search and try to > understand it better and mitigate it? > > Thank so much for the help. > > Best Regards, > Donda > > > -- > Att. > Anderson Donda > > *" **Mar calmo n?o cria bom marinheiro, muito menos bom capit?o.**"* > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From yangxu0823 at foxmail.com Wed Aug 26 08:57:16 2020 From: yangxu0823 at foxmail.com (=?gb18030?B?WHUgWWFuZw==?=) Date: Wed, 26 Aug 2020 16:57:16 +0800 Subject: =?UTF-8?Q?=D7=AA=EF=BF=BD=EF=BF=BD=EF=BF=BD=EF=BF=BD=5BPATCH=5D_HTTP/2=3A_?= =?UTF-8?Q?check_stream_identifier_other_than_0_for_GOAWAY_frame?= Message-ID: Hi all,    This is a patch for HTTP/2 GOAWAY frame process, please refer to the detail. thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: tip.patch Type: application/octet-stream Size: 1476 bytes Desc: not available URL: From francis at daoine.org Wed Aug 26 09:10:43 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 26 Aug 2020 10:10:43 +0100 Subject: Connection timeout on SSL with shared hosting In-Reply-To: <81ccc2169cda02cb07ca476190a096d5.NginxMailingListEnglish@forum.nginx.org> References: <20200824130213.GH20939@daoine.org> <6c349cbbf1ad7ca4fc39c7121c5d6984.NginxMailingListEnglish@forum.nginx.org> <81ccc2169cda02cb07ca476190a096d5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200826091043.GA29287@daoine.org> On Tue, Aug 25, 2020 at 07:49:07AM -0400, nathanpgibson wrote: Hi there, > Turned out there was an INPUT DROP rule in iptables (but not in ip6tables), > although I am using ufw as a firewall. Now https works and my nginx > redirects are functioning as expected! Great that you found and fixed the problem; and thanks for sharing the answer with the list -- it will probably help the next person with a similar head-scratching issue! (I guess you either removed the INPUT DROP rule; or added an explicit "allow 443" beside the "allow 80" rule that was already there. Whichever it was, it was "make the local firewall allow the traffic get to nginx".) Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Aug 27 08:53:35 2020 From: nginx-forum at forum.nginx.org (petecooper) Date: Thu, 27 Aug 2020 04:53:35 -0400 Subject: Selecting a TLS library for Nginx in 2020 Message-ID: I compile Nginx from mainline source and update shortly after each patch/point release. As part of the compile process, I obtain the current OpenSSL source and bake that in with these compile flags: --with-openssl-opt="enable-ec_nistp_64_gcc_128 shared no-ssl2 no-ssl3 no-weak-ssl-ciphers -fstack-protector-strong" \ --with-openssl=../../openssl-source/openssl-OpenSSL_$openssl_source_version I understand Nginx can be compiled with other TLS libraries. I also understand this might be 'there be dragons' territory. I use OpenSSL because it appears to work for my use case. However, I am researching alternative TLS libraries to perhaps use with Nginx. Heartbleed (2014) alerted me to the issue(s) with OpenSSL and although some time has passed, I am aware that projects like LibreSSL were borne out of a necessity to improve code quality. TLS 1.3 support in LibreSSL is improving, and that's my impetus to investigate a potential change. 
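On paper, the most direct route looks like pointing the existing --with-openssl= option at a LibreSSL portable source tree instead of OpenSSL. A sketch only (the version number is a placeholder, and some nginx/LibreSSL combinations have reportedly needed extra build tweaks, so treat this as a starting point rather than a tested recipe):

    ./configure \
        --with-http_ssl_module \
        --with-http_v2_module \
        --with-openssl=../libressl-3.x.x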
If you compile Nginx with a TLS library -- whether it's OpenSSL or not -- I would be grateful if you could tell me what vendor/flavour you use, and a brief note about why you selected it.

Thank you, and best wishes to you from rainy Cornwall, United Kingdom.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289206,289206#msg-289206

From n5d9xq3ti233xiyif2vp at protonmail.ch  Sun Aug 30 16:43:27 2020
From: n5d9xq3ti233xiyif2vp at protonmail.ch (Laura Smith)
Date: Sun, 30 Aug 2020 16:43:27 +0000
Subject: NGINX PHP FPM - Download prompt when accessing directories
Message-ID:

Hi,

I have a largely working NGINX config as below. The only problem is that when "/administrator" or "/administrator/" or "administrator/foo.php" is called, I always get prompted to download the PHP file rather than it being executed by PHP FPM. Meanwhile, calls to "/" or "/foo.php" operate as expected (i.e. executed correctly by PHP FPM).

Thanks in advance for your help

Laura

#### CONFIG START
server {
    listen 192.0.2.43:80;
    listen [2001:db8:69d0:723c:1cba::1173]:80;
    server_tokens off;
    server_name example.com www.example.com;
    if ($request_method !~ ^(GET|HEAD)$ )
    {
        return 405;
    }
    return 301 https://example.com$request_uri;
}

server {
    listen 192.0.2.43:443 ssl http2;
    listen [2001:db8:69d0:723c:1cba::1173]:443 ssl http2;
    server_tokens off;
    server_name example.com www.example.com;
    root /usr/share/nginx/foo;
    index index.php index.html index.htm default.html default.htm;
    ssl_certificate /etc/ssl/example.com/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com/example.com.key;
    ssl_dhparam /etc/ssl/example.com/example.com.dhp;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m;
    ssl_session_tickets off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/ssl/example.com/example.com.chain;
    resolver 198.51.100.25 203.0.113.25 [2001:db8::aaaa:20] [2001:db8::aaaa:25];
    gzip on;
    gzip_vary on;
    gzip_comp_level 5;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/html text/plain text/css text/javascript application/x-javascript;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;preload";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Content-Security-Policy "referrer no-referrer";
    add_header Referrer-Policy "no-referrer";
    add_header Cache-Control private,max-age=300,no-transform;
    expires 300;
    if ($request_method !~ ^(GET|HEAD|POST|)$ )
    {
        return 405;
    }
    location /log {
        return 403;
    }
    location /logs {
        return 403;
    }

    location ^~ /administrator/ {
        limit_except GET HEAD POST {
            deny all;
        }
        try_files $uri /index.php /index.html;
        auth_basic "Hello";
        auth_basic_user_file /etc/nginx/salt_htpasswd/htpasswd_foo;
    }

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
        if ($query_string ~ "base64_encode[^(]*\([^)]*\)"){
            return 403;
        }
        if ($query_string ~* "(<|%3C)([^s]*s)+cript.*(>|%3E)"){
            return 403;
        }
        if ($query_string ~ "GLOBALS(=|\[|\%[0-9A-Z]{0,2})"){
            return 403;
        }
        if ($query_string ~ "_REQUEST(=|\[|\%[0-9A-Z]{0,2})"){
            return 403;
        }
        auth_basic "Hello";
        auth_basic_user_file /etc/nginx/salt_htpasswd/htpasswd_foo;
    }

    # PHP config from https://www.nginx.com/resources/wiki/start/topics/examples/phpfcgi/
    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        fastcgi_param HTTP_PROXY "";
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        limit_except GET HEAD POST {
            deny all;
        }
    }
}
#### CONFIG END

From francis at daoine.org  Sun Aug 30 23:08:21 2020
From: francis at daoine.org (Francis Daly)
Date: Mon, 31 Aug 2020 00:08:21 +0100
Subject: NGINX PHP FPM - Download prompt when accessing directories
In-Reply-To: 
References: 
Message-ID: <20200830230821.GB29287@daoine.org>

On Sun, Aug 30, 2020 at 04:43:27PM +0000, Laura Smith wrote:

Hi there,

> I have a largely working NGINX config as below. The only problem is that when "/administrator" or "/administrator/" or "administrator/foo.php" is called, I always get prompted to download the PHP file rather than it be executed by PHP FPM. Meanwhile, calls to "/" or "/foo.php" operate as expected (i.e. executed correctly by PHP FPM).
>

In nginx, one request is handled in one location{}. Your

> location ^~ /administrator/ {

will handle all requests that start with /administrator/; that location does not do any special handling of php requests; all it will do is serve files from the filesystem.

Depending on how you want the requests handled, possibly removing the "^~" will work (that would mean that any requests that match other regex-location{}s will not be handled in this location); or possibly creating a nested regex location for "~php" within this location will work (with contents very like the "main" php location).

(Your config seems to want basic authentication for "file" requests, but not for "php" requests; that may well be what you intend.)

(And the limit_except lines seem redundant with the "if ($request_method" line; but if what you have works, it works.)

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From n5d9xq3ti233xiyif2vp at protonmail.ch  Sun Aug 30 23:15:03 2020
From: n5d9xq3ti233xiyif2vp at protonmail.ch (Laura Smith)
Date: Sun, 30 Aug 2020 23:15:03 +0000
Subject: NGINX PHP FPM - Download prompt when accessing directories
In-Reply-To: <20200830230821.GB29287@daoine.org>
References: <20200830230821.GB29287@daoine.org>
Message-ID: <8nmUm5L2VpH1TmZ3zw3BQtAUnEQKezTE8cW_j5wGiirKdVhi3cj0ohI7q_gh2VWoi7KAReUonpX0tQ2ZfFKfOC5LReQjbu8Xep4lgGdAmcI=@protonmail.ch>

Sent with ProtonMail Secure Email.

------- Original Message -------
On Sunday, August 30, 2020 11:08 PM, Francis Daly wrote:

Hi,

Thanks for your reply.

> Your
>
> > location ^~ /administrator/ {
>
> will handle all requests that start with /administrator/; that location
> does not do any special handling of php requests; all it will do is
> serve files from the filesystem.
>

But then is that not the case for my 'location / {' as well? That section also does not do any special handling of php requests, and yet the php works?

Laura

From nginx-forum at forum.nginx.org  Mon Aug 31 07:49:58 2020
From: nginx-forum at forum.nginx.org (moyamos)
Date: Mon, 31 Aug 2020 03:49:58 -0400
Subject: How do I add text to a response from a remote URL in NGINX?
Message-ID: 

Hi,

I have the following server in NGINX and it works fine. But I am wondering: is it possible to add text to a response from a remote URL that hosts my before_body.txt and after_body.txt? Is there any way to tackle this? Is it possible at all?

server {
    listen 80;
    root /storage/path;
    index index.html;
    server_name test.domain.com;

    location / {
        try_files $uri $uri/ =404;
        add_before_body /src/before_body.txt;
        add_after_body /src/after_body.txt;
        autoindex on;
    }

    location /src/ {
        alias /storage/path/content/;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289227,289227#msg-289227

From francis at daoine.org  Mon Aug 31 08:01:03 2020
From: francis at daoine.org (Francis Daly)
Date: Mon, 31 Aug 2020 09:01:03 +0100
Subject: NGINX PHP FPM - Download prompt when accessing directories
In-Reply-To: <8nmUm5L2VpH1TmZ3zw3BQtAUnEQKezTE8cW_j5wGiirKdVhi3cj0ohI7q_gh2VWoi7KAReUonpX0tQ2ZfFKfOC5LReQjbu8Xep4lgGdAmcI=@protonmail.ch>
References: <20200830230821.GB29287@daoine.org> <8nmUm5L2VpH1TmZ3zw3BQtAUnEQKezTE8cW_j5wGiirKdVhi3cj0ohI7q_gh2VWoi7KAReUonpX0tQ2ZfFKfOC5LReQjbu8Xep4lgGdAmcI=@protonmail.ch>
Message-ID: <20200831080103.GC29287@daoine.org>

On Sun, Aug 30, 2020 at 11:15:03PM +0000, Laura Smith wrote:

Hi there,

> > > location ^~ /administrator/ {
> >
> > will handle all requests that start with /administrator/; that location
> > does not do any special handling of php requests; all it will do is
> > serve files from the filesystem.
>
> But then is that not the case for my 'location / {' as well ? That section
> also does not do any special handling of php requests and yet the php works ?
>

You have:

==
location ^~ /administrator/ {
location / {
location ~ [^/]\.php(/|$) { fastcgi_pass
==

It is the case that any request that is handled in the second location will be served from the filesystem. But it is not the case that any request that starts with / will be handled in the second location.

The various characters after the word "location" change how you have asked nginx to handle different requests. http://nginx.org/r/location has more details.

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Mon Aug 31 10:10:31 2020
From: nginx-forum at forum.nginx.org (Dr_tux)
Date: Mon, 31 Aug 2020 06:10:31 -0400
Subject: Nginx reverse proxy redirect
In-Reply-To: <20200814133404.GF20939@daoine.org>
References: <20200814133404.GF20939@daoine.org>
Message-ID: 

That's perfect. Thank you.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289072,289230#msg-289230

From nginx-forum at forum.nginx.org  Mon Aug 31 10:15:00 2020
From: nginx-forum at forum.nginx.org (Dr_tux)
Date: Mon, 31 Aug 2020 06:15:00 -0400
Subject: Nginx TCP/UDP Load Balancer
Message-ID: <0ca6a93f22f9adf57fa96edaffaf1a22.NginxMailingListEnglish@forum.nginx.org>

Hi,

I have 2 TURN servers.
I would like to use Nginx to load balance them, but I have a problem. When I use the AWS ELB it works perfectly; when I try with Nginx, the remote addr should be the client IP, but Nginx sends its own IP address to the coturn server instead.

Here are the two outputs, from AWS ELB and from Nginx:

AWS Output:
13: handle_udp_packet: New UDP endpoint: local addr coturn_ip:3478 coturn, remote addr client_ip:54203

Nginx Output:
96: handle_udp_packet: New UDP endpoint: local addr coturn_ip:3478, remote addr nginx_ip:59902

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289231,289231#msg-289231

From r at roze.lv  Mon Aug 31 13:22:32 2020
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 31 Aug 2020 16:22:32 +0300
Subject: How do I add text to a response from a remote URL in NGINX?
In-Reply-To: 
References: 
Message-ID: <000001d67f99$c8c02dd0$5a408970$@roze.lv>

> I have the following server in NGINX and it works fine. But, I am wondering is
> it possible to add text to a response from a remote URL where hosts my
> before_body.txt and after_body.txt? Is there any way to tackle this? Is it
> possible at all?

According to documentation (http://nginx.org/en/docs/http/ngx_http_addition_module.html) add_before_body/add_after_body does a subrequest.

So something like this might work (I don't have an nginx instance with the particular module compiled, so you'll have to test it yourself) - eg have an (internal) named location with a proxy_pass to the remote url:

location @beforebody {
    proxy_pass http://externalserver/somefile.txt;
}

location / {
    try_files $uri $uri/ =404;
    add_before_body @beforebody;
}

rr

From nginx-forum at forum.nginx.org  Mon Aug 31 15:03:16 2020
From: nginx-forum at forum.nginx.org (james.anderson)
Date: Mon, 31 Aug 2020 11:03:16 -0400
Subject: repeated reloads lead to unresponsive server
Message-ID: 

we observe that after several days in service, where the server is reloaded several hundred times a day, it eventually stops responding.

a reload completes, but still all connections time out.
a restart corrects the issue.

is there a limit to the number of times a server permits a reload before it is necessary to restart it?

when the problem starts, entries like the following appear in the nginx error log

ter process /usr/sbin/nginx -g daemon on; master_process on;:
/build/nginx-5J5hor/nginx-1.18.0/debian/modules/nchan/src/store/memory/memst\
ore.c:701: nchan_store_init_worker: Assertion `procslot_found == 1' failed.
2020/08/31 12:07:18 [alert] 1451759#1451759: worker process 1500846 exited
on signal 6 (core dumped)
2020/08/31 12:07:18 [alert] 1451759#1451759: shared memory zone "memstore"
was locked by 1500846

i see no mention of this issue here. but i did note

https://github.com/slact/nchan/issues/446

versions:
root at nl12:~# uname -a
Linux nl12.dydra.com 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
root at nl12:~# which nginx
/usr/sbin/nginx
root at nl12:~# /usr/sbin/nginx -v
nginx version: nginx/1.18.0 (Ubuntu)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289247,289247#msg-289247

From teward at thomas-ward.net  Mon Aug 31 15:28:23 2020
From: teward at thomas-ward.net (Thomas Ward)
Date: Mon, 31 Aug 2020 11:28:23 -0400
Subject: repeated reloads lead to unresponsive server
In-Reply-To: 
References: 
Message-ID: <2fe0dda3-3abc-ad84-d04b-ce13cda0aea6@thomas-ward.net>

Do you actually use NCHAN for anything?
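If you're not sure whether anything still relies on it, a quick check -- only a sketch, and it assumes the stock Ubuntu/Debian packaging with dynamic-module loader snippets under /etc/nginx/modules-enabled/ -- is to look at what the running configuration actually loads and references:

    # dump the effective configuration and search it for nchan references
    nginx -T 2>/dev/null | grep -i nchan

    # list the module loader snippets the packages have enabled
    ls /etc/nginx/modules-enabled/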
If you are not actively using nchan, you should consider simply removing `libnginx-mod-nchan`.

Note also this is a third party module, so it's not necessarily 'endorsed' by NGINX Upstream per se. Also, Debian is *ancient* with its nginx version and modules; I'd suggest you switch to a more recent Debian version with newer nginx, or use the nginx.org repositories (no third party plugins there though...)

Thomas

On 8/31/20 11:03 AM, james.anderson wrote:
> we observe that after several days in service, where the server is reloaded
> several hundred times a day, it eventually stops responding.
>
> a reload completes, but still all connections time out.
> a restart corrects the issue.
>
> is there a limit to the number of times a server permits a reload before it
> is necessary to restart it.
> when the problem starts, entries like the following appear in the nginx
> error log
>
> ter process /usr/sbin/nginx -g daemon on; master_process on;:
> /build/nginx-5J5hor/nginx-1.18.0/debian/modules/nchan/src/store/memory/memst\
> ore.c:701: nchan_store_init_worker: Assertion `procslot_found == 1' failed.
> 2020/08/31 12:07:18 [alert] 1451759#1451759: worker process 1500846 exited
> on signal 6 (core dumped)
> 2020/08/31 12:07:18 [alert] 1451759#1451759: shared memory zone "memstore"
> was locked by 1500846
>
> i see no mention of this issue here. but i did note
>
> https://github.com/slact/nchan/issues/446
>
> versions:
> root at nl12:~# uname -a
> Linux nl12.dydra.com 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC
> 2020 x86_64 x86_64 x86_64 GNU/Linux
> root at nl12:~# which nginx
> /usr/sbin/nginx
> root at nl12:~# /usr/sbin/nginx -v
> nginx version: nginx/1.18.0 (Ubuntu)
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289247,289247#msg-289247
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pluknet at nginx.com  Mon Aug 31 16:03:18 2020
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Mon, 31 Aug 2020 19:03:18 +0300
Subject: Re: [PATCH] HTTP/2: check stream identifier other than 0 for GOAWAY frame
In-Reply-To: 
References: 
Message-ID: 

> On 26 Aug 2020, at 11:57, Xu Yang wrote:
>
> Hi all,
> This is a patch for HTTP/2 GOAWAY frame process, please refer to the detail.
> thanks.

Please see a more complete patch below.

# HG changeset patch
# User Sergey Kandaurov
# Date 1598889483 -10800
#      Mon Aug 31 18:58:03 2020 +0300
# Node ID 9a88a26f9dfca4effbe0b0ce97d0a569d1b3026d
# Parent  7015f26aef904e2ec17b4b6f6387fd3b8298f79d
HTTP/2: reject invalid stream identifiers with PROTOCOL_ERROR.

Prodded by Xu Yang.
diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c
+++ b/src/http/v2/ngx_http_v2.c
@@ -953,6 +953,13 @@ ngx_http_v2_state_data(ngx_http_v2_conne
     ngx_log_debug0(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
                    "http2 DATA frame");
 
+    if (h2c->state.sid == 0) {
+        ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
+                      "client sent DATA frame with incorrect identifier");
+
+        return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_PROTOCOL_ERROR);
+    }
+
     if (size > h2c->recv_window) {
         ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
                       "client violated connection flow control: "
@@ -2095,6 +2102,16 @@ static u_char *
 ngx_http_v2_state_settings(ngx_http_v2_connection_t *h2c, u_char *pos,
     u_char *end)
 {
+    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
+                   "http2 SETTINGS frame");
+
+    if (h2c->state.sid) {
+        ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
+                      "client sent SETTINGS frame with incorrect identifier");
+
+        return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_PROTOCOL_ERROR);
+    }
+
     if (h2c->state.flags == NGX_HTTP_V2_ACK_FLAG) {
 
         if (h2c->state.length != 0) {
@@ -2118,9 +2135,6 @@ ngx_http_v2_state_settings(ngx_http_v2_c
         return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_SIZE_ERROR);
     }
 
-    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
-                   "http2 SETTINGS frame");
-
     return ngx_http_v2_state_settings_params(h2c, pos, end);
 }
 
@@ -2269,6 +2283,13 @@ ngx_http_v2_state_ping(ngx_http_v2_conne
     ngx_log_debug0(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
                    "http2 PING frame");
 
+    if (h2c->state.sid) {
+        ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
+                      "client sent PING frame with incorrect identifier");
+
+        return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_PROTOCOL_ERROR);
+    }
+
     if (h2c->state.flags & NGX_HTTP_V2_ACK_FLAG) {
         return ngx_http_v2_state_skip(h2c, pos, end);
     }
@@ -2310,6 +2331,13 @@ ngx_http_v2_state_goaway(ngx_http_v2_con
         return ngx_http_v2_state_save(h2c, pos, end,
                                       ngx_http_v2_state_goaway);
     }
 
+    if (h2c->state.sid) {
+        ngx_log_error(NGX_LOG_INFO, h2c->connection->log, 0,
+                      "client sent GOAWAY frame with incorrect identifier");
+
+        return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_PROTOCOL_ERROR);
+    }
+
 #if (NGX_DEBUG)
     h2c->state.length -= NGX_HTTP_V2_GOAWAY_SIZE;

-- 
Sergey Kandaurov

From nginx-forum at forum.nginx.org  Mon Aug 31 16:07:40 2020
From: nginx-forum at forum.nginx.org (james.anderson)
Date: Mon, 31 Aug 2020 12:07:40 -0400
Subject: repeated reloads lead to unresponsive server
In-Reply-To: <2fe0dda3-3abc-ad84-d04b-ce13cda0aea6@thomas-ward.net>
References: <2fe0dda3-3abc-ad84-d04b-ce13cda0aea6@thomas-ward.net>
Message-ID: <1757fbdda62c264e233d640349b329dc.NginxMailingListEnglish@forum.nginx.org>

we found it impractical to move away from ubuntu's released nginx version just to avoid the perl module crashing on _every_ reload. i was unaware that the presence of the nchan module could cause an issue if one did not use it, but will disable it and see if that helps.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289247,289252#msg-289252

From mlybarger at gmail.com  Mon Aug 31 17:38:28 2020
From: mlybarger at gmail.com (Mark Lybarger)
Date: Mon, 31 Aug 2020 13:38:28 -0400
Subject: transforming static files
Message-ID: 

i have a bunch of files on a local filesystem (ok, it's NAS) that I serve up using an nginx docker image, just pointing the doc root to the system i want to share. that's fine for my xml files. the users can browse and see them on the filesystem.
i also have some .bin files that can be converted using a custom java api. how can i easily hook the bin files up to be processed through a command on the system?

java -jar MyTranscoder.jar myInputFile.bin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
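nginx by itself will not run an arbitrary command per request, so one possible approach to the question above -- only a sketch, assuming a hypothetical small helper service on 127.0.0.1:9090 that wraps `java -jar MyTranscoder.jar` for the requested file and returns the converted output -- is to keep serving the xml files from the docroot as today and proxy only the .bin requests to that helper:

    location ~ \.bin$ {
        # hypothetical transcoder wrapper; anything that speaks HTTP
        # (or fcgiwrap plus a small CGI script) would work here
        proxy_pass http://127.0.0.1:9090;
    }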