From jordanc.carter at outlook.com Fri Mar 1 08:20:22 2024 From: jordanc.carter at outlook.com (J Carter) Date: Fri, 1 Mar 2024 08:20:22 +0000 Subject: ssl_reject_handshake breaks other server blocks In-Reply-To: References: Message-ID: Hello, On Wed, 28 Feb 2024 21:45:37 -0300 Taco de Wolff wrote: > Hi, > > I've noticed at least in 1.24.0 and 1.25.4 that adding an > ssl_reject_handshake to the default server breaks SNI for other > servers. Example: > > ``` > server { > server_name _; > listen 80 default_server; > listen 443 default_server ssl; > listen 443 default_server quic reuseport; > listen [::]:80 default_server; > listen [::]:443 default_server ssl; > listen [::]:443 default_server quic reuseport; > > http2 on; > > # SSL > ssl_certificate /etc/pki/lego/certificates/server.crt; > ssl_certificate_key /etc/pki/lego/certificates/server.key; > ssl_trusted_certificate /etc/pki/lego/certificates/server.crt; > ssl_reject_handshake on; > > return 444; > } > > server { > server_name domain.com; > listen 443 ssl; > listen 443 quic; > listen [::]:443 ssl; > listen [::]:443 quic; > > http2 on; > > root /srv/www/html; > > # SSL > ssl_certificate /etc/pki/lego/certificates/server.crt; > ssl_certificate_key /etc/pki/lego/certificates/server.key; > ssl_trusted_certificate /etc/pki/lego/certificates/server.crt; > > location / { > try_files /index.html =404; > } > } > ``` > > There are two remarks for this example: > - While enabling HTTP/3 I had to add the ssl_certificate lines to the > default server, while using solely HTTP/2 this wasn't necessary. It > will throw an error on trying to start Nginx, is that a bug? TLS is mandatory for HTTP/3 (well, more accurately for QUIC). https://stackoverflow.com/questions/72826710/quic-transfer-protocol-need-not-tls > - The ssl_reject_handshake in the default server will prevent proper > SNI matching for domain.com. If I run `curl https://domain.com/` it > works fine, but `curl -k -H 'Host: domain.com' > https://ipaddress-of-server/` does not. When I remove > ssl_reject_handshake it works as expected > If you curl an IP address rather than an FQDN, curl will not include the SNI extension in the client hello at all. ssl_reject_handshake, as the name suggests, rejects TLS handshakes prior to completion. Nginx cannot perform a secondary search for the correct server block using the host/authority header, as that would require first completing the handshake and then parsing the host/authority header. > My intent is to have a default server that responds to non-existing > domain names. Preferably it responds with 444, but requests over TLS > (such as old domains names with HTST) will throw a security warning > that the server's certificates don't match the request's virtual > host's domain name (as expected). > return 444 is just a special return value that causes nginx to terminate the connection; nothing gets sent back to the client at all. return directives (more accurately, the rewrite module) run after the TLS handshake though, so for default-server TLS connections with your present configuration it will never get to that point. Generally ssl_reject_handshake is preferable for terminating such connections anyway, as it avoids performing the heavy TLS handshake. The return 444 is still relevant for plain-text connections that reach your default server though, so I'd recommend still keeping it. > Instead of showing a security warning in the browser I prefer a > connection error, which is why I want to employ ssl_reject_handshake. Your present configuration should work fine then.
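(As an aside: if you do want to test against a specific server address while still exercising SNI for domain.com, curl's --resolve option should do it; the address below is just a placeholder:

curl --resolve domain.com:443:192.0.2.10 https://domain.com/

This forces curl to connect to the given IP for that host and port, so the client hello still carries the domain.com SNI and the second server block is selected as usual.)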
> Kind regards, > Taco de Wolff From quickfire28 at gmail.com Fri Mar 1 08:45:07 2024 From: quickfire28 at gmail.com (zen zenitram) Date: Fri, 1 Mar 2024 16:45:07 +0800 Subject: NGINX upload limit Message-ID: Good day! We created an institutional repository with EPrints and are using NGINX as a load balancer, but we encountered a problem uploading files to our repository. It only accepts a 128 KB file upload; client_max_body_size is set to 2 GB, but it still only accepts a 128 KB maximum upload size. How can we solve this problem? Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Fri Mar 1 15:27:18 2024 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Fri, 1 Mar 2024 18:27:18 +0300 Subject: NGINX upload limit In-Reply-To: References: Message-ID: Hi there, On Fri, Mar 01, 2024 at 04:45:07PM +0800, zen zenitram wrote: > > We created an institutional repository with eprints and using NGINX as load > balancer, but we encountered problem in uploading file to our repository. > It only alccepts 128 kb file upload, the client_max_body_size is set to 2 > gb. > > but still it only accepts 128 kb max upload size. > How to solve this problem? I'd recommend sharing the nginx configuration file on the mailing list. Don't forget to remove any sensitive information, or create a minimal nginx configuration that reproduces the case. Thank you. -- Sergey A. Osokin From tacodewolff at gmail.com Sat Mar 2 12:54:46 2024 From: tacodewolff at gmail.com (Taco de Wolff) Date: Sat, 2 Mar 2024 09:54:46 -0300 Subject: ssl_reject_handshake breaks other server blocks In-Reply-To: References: Message-ID: Thank you Jordan for the response. Including the SNI information in cURL works, thank you. I wasn't aware this was so different from TCP/HTTP2. The point I was trying to make about the ssl_certificate options being mandatory is that HTTP/2 also requires SSL, but nginx recognizes that when ssl_reject_handshake is on it doesn't need the certificate. For HTTP/3 it doesn't seem to recognize that it doesn't need the certificate, since it will reject handshakes anyway. Kind regards, Taco de Wolff Op vr 1 mrt 2024 om 05:20 schreef J Carter : > Hello, > > On Wed, 28 Feb 2024 21:45:37 -0300 > Taco de Wolff wrote: > > > Hi, > > > > I've noticed at least in 1.24.0 and 1.25.4 that adding an > > ssl_reject_handshake to the default server breaks SNI for other > > servers.
Example: > > > > ``` > > server { > > server_name _; > > listen 80 default_server; > > listen 443 default_server ssl; > > listen 443 default_server quic reuseport; > > listen [::]:80 default_server; > > listen [::]:443 default_server ssl; > > listen [::]:443 default_server quic reuseport; > > > > http2 on; > > > > # SSL > > ssl_certificate /etc/pki/lego/certificates/server.crt; > > ssl_certificate_key /etc/pki/lego/certificates/server.key; > > ssl_trusted_certificate /etc/pki/lego/certificates/server.crt; > > ssl_reject_handshake on; > > > > return 444; > > } > > > > server { > > server_name domain.com; > > listen 443 ssl; > > listen 443 quic; > > listen [::]:443 ssl; > > listen [::]:443 quic; > > > > http2 on; > > > > root /srv/www/html; > > > > # SSL > > ssl_certificate /etc/pki/lego/certificates/server.crt; > > ssl_certificate_key /etc/pki/lego/certificates/server.key; > > ssl_trusted_certificate /etc/pki/lego/certificates/server.crt; > > > > location / { > > try_files /index.html =404; > > } > > } > > ``` > > > > There are two remarks for this example: > > - While enabling HTTP/3 I had to add the ssl_certificate lines to the > > default server, while using solely HTTP/2 this wasn't necessary. It > > will throw an error on trying to start Nginx, is that a bug? > > TLS is mandatory for HTTP/3 (well, more accurately for QUIC). > > > https://stackoverflow.com/questions/72826710/quic-transfer-protocol-need-not-tls > > > - The ssl_reject_handshake in the default server will prevent proper > > SNI matching for domain.com. If I run `curl https://domain.com/` > it > > works fine, but `curl -k -H 'Host: domain.com' > > https://ipaddress-of-server/` does not. > When I remove > > ssl_reject_handshake it works as expected > > > > If you curl an IP Address rather than an FQDN, curl will not include > SNI extension in client hello at all. > > ssl_reject_handshake, as the name suggests, rejects TLS handshakes prior > to completion. Nginx cannot perform secondary search for correct server > block using host/authority header, as that would require first > completing handshake, and then parsing host/authority header. > > > My intent is to have a default server that responds to non-existing > > domain names. Preferably it responds with 444, but requests over TLS > > (such as old domains names with HTST) will throw a security warning > > that the server's certificates don't match the request's virtual > > host's domain name (as expected). > > > > return 444; just a special return value that causes nginx to terminate > connection, nothing get's sent back to the client at all. return > directives (rewrite module more accurately) runs post TLS handshake > though. For default server TLS connections with your present > configuration - it will never get to that point. > > Generally ssl_reject_hanshake is preferable for terminating connections > anyway, as it saves performing heavy TLS handshake. > > The return 444 is still relevant for plain text connections that reach > your default server though, so I'd recommend still keeping it. > > > Instead of showing a security warning in the browser I prefer a > > connection error, which is why I want to employ ssl_reject_handshake. > > Your present configuration should work fine then. > > > Kind regards, > > Taco de Wolff > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jordanc.carter at outlook.com Sat Mar 2 18:51:36 2024 From: jordanc.carter at outlook.com (J Carter) Date: Sat, 2 Mar 2024 18:51:36 +0000 Subject: ssl_reject_handshake breaks other server blocks In-Reply-To: References: Message-ID: Hello Taco, On Sat, 2 Mar 2024 09:54:46 -0300 Taco de Wolff wrote: > Thank you Jordan for the response. > No problem. > Including the SNI information in cURL works, thank you. I wasn't aware this > was so very different from TCP/HTTP2. > > The point I was trying to make about the ssl_certificate options to be > mandatory, is that HTTP/2 also requires SSL HTTP2 can be used without TLS by the way (called h2c), and this is also implemented in nginx. With curl you can test it easily with --http2-prior-knowledge flag against plain-text port. The $http2 variable [1] can also be easily used to distinguish h2c vs h2(with tls). Of course, I doubt there is a lot of real world usage of h2c. Still, it can be useful for testing :) [1] https://nginx.org/en/docs/http/ngx_http_v2_module.html#variables > but recognizes that when > ssl_reject_handshake=on it doesn't need the certificate. For HTTP/3 it > doesn't seem to recognize that it doesn't need the certificate since it > will reject handshakes anyways. I see, but when testing with exactly the configuration you posted, it does not appear to require them in the default server (on 1.25.4). If I remove ssl_certificate and ssl_certificate_key directives, it still works... 1) Are you using any out of band patches in your nginx build (if self built)? 2) Which TLS library are you using (openssl, boringssl, ect)? 3) Which OS? From tacodewolff at gmail.com Sat Mar 2 22:55:44 2024 From: tacodewolff at gmail.com (Taco de Wolff) Date: Sat, 2 Mar 2024 19:55:44 -0300 Subject: ssl_reject_handshake breaks other server blocks In-Reply-To: References: Message-ID: Hi Jordan, You are right, very sorry for the noise. Must have confounded the error with the many changes I made at the same time. Thanks for your time! Kind regards, Taco de Wolff Op za 2 mrt 2024 om 15:52 schreef J Carter : > Hello Taco, > > On Sat, 2 Mar 2024 09:54:46 -0300 > Taco de Wolff wrote: > > > Thank you Jordan for the response. > > > > No problem. > > > Including the SNI information in cURL works, thank you. I wasn't aware > this > > was so very different from TCP/HTTP2. > > > > The point I was trying to make about the ssl_certificate options to be > > mandatory, is that HTTP/2 also requires SSL > > HTTP2 can be used without TLS by the way (called h2c), and this is also > implemented in nginx. With curl you can test it easily with > --http2-prior-knowledge flag against plain-text port. > > The $http2 variable [1] can also be easily used to distinguish h2c vs > h2(with tls). > > Of course, I doubt there is a lot of real world usage of h2c. Still, it > can > be useful for testing :) > > [1] https://nginx.org/en/docs/http/ngx_http_v2_module.html#variables > > > but recognizes that when > > ssl_reject_handshake=on it doesn't need the certificate. For HTTP/3 it > > doesn't seem to recognize that it doesn't need the certificate since it > > will reject handshakes anyways. > > I see, but when testing with exactly the configuration you posted, it > does not appear to require them in the default server (on 1.25.4). If I > remove ssl_certificate and ssl_certificate_key directives, it still > works... > > 1) Are you using any out of band patches in your nginx build (if self > built)? > > 2) Which TLS library are you using (openssl, boringssl, ect)? > > 3) Which OS? 
> _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Mar 5 21:07:53 2024 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 5 Mar 2024 13:07:53 -0800 Subject: Question regarding $invalid_referer Message-ID: <20240305130753.4a1f736b.lists@lazygranch.com> I am presently using a scheme like this to prevent scraping documents. ************************************ location /images/ { valid_referers none blocked www.example.com example.com forums.othersite.com ; # you can tell the browser that it can only download content from the domains you explicitly allow # if ($invalid_referer) { # return 403; if ($invalid_referer) { return 302 $scheme://www.example.com; *************************************** I commented out some old code which just sends an error message. I pulled that from the nginx website. I later added the code which sends the user to the top level of the website. It works but the results really aren't user friendly. What I rather do is if I find an invalid_referer to some document, I would like to redirect the request to the html page that has my link to the document. I am relatively sure I will need to hand code the redirection for every file, but plan on only doing this for pdfs. Probably 20 files. Here is a google referral I pulled from the log file ********************************************* 302 172.0.0.0 - - [05/Mar/2024:20:18:52 +0000] "GET /images/ttr/0701crash.pdf HTTP/2.0" 145 "https://www.google.com/" "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Mobile Safari/537.36" "-" ********************************************** So I would need to map /images/ttr/0701crash.pdf to the referring page on the website. From dronord at gmail.com Wed Mar 6 11:27:17 2024 From: dronord at gmail.com (dronord) Date: Wed, 6 Mar 2024 14:27:17 +0300 Subject: next upstream on timeout and 307 Message-ID: Hello! I have two(or more) http(rest) servers - main and backup. On error from main need redirect POST to backup, and if possible set him as main. Errors: - connection error/timeout - read timeout - 30x and 50x responses Can nginx help me?) -------------- next part -------------- An HTML attachment was scrubbed... URL: From clima.gabrielphoto at gmail.com Thu Mar 7 06:17:23 2024 From: clima.gabrielphoto at gmail.com (Clima Gabriel) Date: Thu, 7 Mar 2024 08:17:23 +0200 Subject: $request_time variable = 0 for small files. Message-ID: Greetings, I'm investigating a bug, super easy to reproduce. Thought you might be curious. Minimal Nginx config. Create two files. 100M and 1M: dd if=/dev/zero of=/var/www/file100M bs=100M count=1 dd if=/dev/zero of=/var/www/file1M bs=1M count=1 Get them files: curl --limit-rate 10M -o /dev/null 127.0.0.42:80/file100M curl --limit-rate 100k -o /dev/null 127.0.0.42:80/file1M Both transfers take ~10s, but Nginx logs 0s request_time for the small file. master_process off; daemon off; error_log /dev/stderr; events {} http { log_format req_time "$request_time"; server { server_name 127.0.0.42; listen 127.0.0.42:80; root /var/www/; index index.html; location / { access_log /dev/stderr req_time; error_log /dev/stderr; } } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dronord at gmail.com Thu Mar 7 08:26:57 2024 From: dronord at gmail.com (dronord) Date: Thu, 7 Mar 2024 11:26:57 +0300 Subject: next upstream on timeout and 307 Message-ID: Is this ok? upstream rest { server s1; server s2; } server { listen 80; server_name _; root /usr/share/nginx/html; proxy_connect_timeout 3s; proxy_read_timeout 3s; proxy_intercept_errors on; proxy_next_upstream non_idempotent http_500; proxy_next_upstream_timeout 3s; location / { error_page 302 307 500 502 504 = @redirect; proxy_pass http://rest; } location @redirect { proxy_pass http://rest; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From jordanc.carter at outlook.com Thu Mar 7 09:45:31 2024 From: jordanc.carter at outlook.com (J Carter) Date: Thu, 7 Mar 2024 09:45:31 +0000 Subject: $request_time variable = 0 for small files. In-Reply-To: References: Message-ID: Hello, On Thu, 7 Mar 2024 08:17:23 +0200 Clima Gabriel wrote: > Greetings, > I'm investigating a bug, super easy to reproduce. > Thought you might be curious. > > Minimal Nginx config. Create two files. 100M and 1M: > dd if=/dev/zero of=/var/www/file100M bs=100M count=1 > dd if=/dev/zero of=/var/www/file1M bs=1M count=1 > > Get them files: > curl --limit-rate 10M -o /dev/null 127.0.0.42:80/file100M > curl --limit-rate 100k -o /dev/null 127.0.0.42:80/file1M > > Both transfers take ~10s, but Nginx logs 0s request_time for the small file. > This isn't an issue with nginx. The response nginx sends truly does take 0s to reach the client's socket. Curl's limit-rate flag only applies at the application layer, but it has no effect on curl's TCP socket, or its buffers/how fast things are read into the buffer. The entire response sent by nginx is received into curl's TCP socket buffer instantly, which is auto-scaled to a large window size because you are making these requests from the local machine. You can temporarily set the TCP read window to the smallest possible minimum, default, and maximum to confirm, like this: sysctl -w net.ipv4.tcp_rmem="4096 4096 4096" or just view the TCP traffic via Wireshark. > master_process off; > daemon off; > error_log /dev/stderr; > events {} > http > { > log_format req_time "$request_time"; > server > { > server_name 127.0.0.42; > listen 127.0.0.42:80; > root /var/www/; > index index.html; > location / > { > access_log /dev/stderr req_time; > error_log /dev/stderr; > } > } > } From jordanc.carter at outlook.com Thu Mar 7 10:12:30 2024 From: jordanc.carter at outlook.com (J Carter) Date: Thu, 7 Mar 2024 10:12:30 +0000 Subject: Question regarding $invalid_referer In-Reply-To: <20240305130753.4a1f736b.lists@lazygranch.com> References: <20240305130753.4a1f736b.lists@lazygranch.com> Message-ID: Hello, On Tue, 5 Mar 2024 13:07:53 -0800 "lists at lazygranch.com" wrote: > I am presently using a scheme like this to prevent scraping documents. > ************************************ > location /images/ { > valid_referers none blocked www.example.com example.com forums.othersite.com ; > # you can tell the browser that it can only download content from the domains you explicitly allow > # if ($invalid_referer) { > # return 403; > if ($invalid_referer) { > return 302 $scheme://www.example.com; > *************************************** > I commented out some old code which just sends an error message. I > pulled that from the nginx website. I later added the code which sends > the user to the top level of the website. > > It works but the results really aren't user friendly.
What I rather do > is if I find an invalid_referer to some document, I would like to > redirect the request to the html page that has my link to the document. > > I am relatively sure I will need to hand code the redirection for every > file, but plan on only doing this for pdfs. Probably 20 files. > > Here is a google referral I pulled from the log file > > ********************************************* > 302 172.0.0.0 - - [05/Mar/2024:20:18:52 +0000] "GET /images/ttr/0701crash.pdf HTTP/2.0" 145 "https://www.google.com/" "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Mobile Safari/537.36" "-" > ********************************************** > So I would need to map /images/ttr/0701crash.pdf to the referring page > on the website. > _______________________________________________ There isn't really a question in your email :) however, you could use the SSI module[1] to auto-generate the referring page with the link dynamically if you don't already have that. [1] https://nginx.org/en/docs/http/ngx_http_ssi_module.html In terms of doing the mapping to some static set of referring pages, if you already have those, that will depend upon what path scheme you plan for those in relation to the original files. A sensible way would be to make the referring page's path related to the pdf name (something like /referring/0701crash). In nginx, when you do the redirect, you can do those mappings dynamically using regex captures. Something like this using nested locations: location /images { ... location ~ /(.+)\.pdf$ { if ($invalid_referer) { return 302 $scheme://www.example.com/referring/${1}; } } } From clima.gabrielphoto at gmail.com Thu Mar 7 10:33:49 2024 From: clima.gabrielphoto at gmail.com (Clima Gabriel) Date: Thu, 7 Mar 2024 12:33:49 +0200 Subject: $request_time variable = 0 for small files. In-Reply-To: References: Message-ID: 0.000 sysctl -w net.ipv4.tcp_rmem="4096 4096 4096" 0.072 sysctl -w net.ipv4.tcp_rmem="512 512 512" 0.106 sysctl -w net.ipv4.tcp_rmem="256 256 256" You're right. This was invaluable, thank you! On Thu, Mar 7, 2024 at 11:46 AM J Carter wrote: > Hello, > > On Thu, 7 Mar 2024 08:17:23 +0200 > Clima Gabriel wrote: > > > Greetings, > > I'm investigating a bug, super easy to reproduce. > > Thought you might be curious. > > > > Minimal Nginx config. Create two files. 100M and 1M: > > dd if=/dev/zero of=/var/www/file100M bs=100M count=1 > > dd if=/dev/zero of=/var/www/file1M bs=1M count=1 > > > > Get them files: > > curl --limit-rate 10M -o /dev/null 127.0.0.42:80/file100M > > curl --limit-rate 100k -o /dev/null 127.0.0.42:80/file1M > > > > Both transfers take ~10s, but Nginx logs 0s request_time for the small > file. > > > > This isn't an issue with nginx. The response nginx sends > truly does take 0s to reach the client's socket. > > Curl's limit-rate flag only applies at the application layer, but it has > no effect on curl's tcp socket, or it's buffers/how fast things are > read into the buffer. The entire response sent by nginx is being > received into into curl's tcp socket buffer instantly, which is > auto-scaled to a large window size because you are making these > requests from local machine. > > You can temporarily set tcp read window to smallest possible minimum, > default, and maximum to confirm. Like this: > > sysctl -w net.ipv4.tcp_rmem="4096 4096 4096" > > or just view tcp traffic via wireshark.
> > master_process off; > > daemon off; > > error_log /dev/stderr; > > events {} > > http > > { > > log_format req_time "$request_time"; > > server > > { > > server_name 127.0.0.42; > > listen 127.0.0.42:80; > > root /var/www/; > > index index.html; > > location / > > { > > access_log /dev/stderr req_time; > > error_log /dev/stderr; > > } > > } > > } > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor at camb.com Thu Mar 7 18:54:16 2024 From: victor at camb.com (Victor Oppenheimer) Date: Thu, 7 Mar 2024 13:54:16 -0500 Subject: Forcing URLs to lower case example Message-ID: Greetings, I am using nginx on MS Windows primarily for reverse proxying. I would like to force URLs reaching nginx into lower case before further processing in my nginx.conf file. That is, I'd like subsequent server and/or location directives to all receive lower case versions of URLs being processed, regardless of the case mixture of the original URL. Is this possible? If so, I'd appreciate a link to the relevant documentation and perhaps examples. If not, how close can I come to making nginx on Windows case independent? Thanks, Victor From noloader at gmail.com Thu Mar 7 19:59:02 2024 From: noloader at gmail.com (Jeffrey Walton) Date: Thu, 7 Mar 2024 14:59:02 -0500 Subject: Forcing URLs to lower case example In-Reply-To: References: Message-ID: On Thu, Mar 7, 2024 at 1:54 PM Victor Oppenheimer wrote: > > I am using nginx on MS Windows primarily for reverse proxying. > > I would like to force URLs reaching nginx into lower case before further > processing in my nginx.conf file. > > That is, I'd like subsequent server and/or location directives to all > receive lower case versions of URLs being processed regardless of the > case mixture of the original URL. > > Is this possible? If so I'd appreciate a link to the relevant > documentation and perhaps examples. If not, how close can I come to > making nginx on Windows case independent? Related: that may not be a good idea. The scheme and host parts of a URL are not case sensitive. However, the remaining path and query of the URL might be case sensitive. Whether the remaining parts are case-sensitive depends on the scheme. RFC 3986, Section 6.2.2.1, Case Normalization: For all URIs, the hexadecimal digits within a percent-encoding triplet (e.g., "%3a" versus "%3A") are case-insensitive and therefore should be normalized to use uppercase letters for the digits A-F. When a URI uses components of the generic syntax, the component syntax equivalence rules always apply; namely, that the scheme and host are case-insensitive and therefore should be normalized to lowercase. For example, the URI <HTTP://www.EXAMPLE.com/> is equivalent to <http://www.example.com/>. The other generic syntax components are assumed to be case-sensitive unless specifically defined otherwise by the scheme (see Section 6.2.3). Jeff From victor at camb.com Thu Mar 7 20:20:23 2024 From: victor at camb.com (Victor Oppenheimer) Date: Thu, 7 Mar 2024 15:20:23 -0500 Subject: what is my syntax error Message-ID: In my nginx.conf file on my Windows computer I have the following code. http { # http context specific to HTTP affecting all virtual servers # force incoming URLs to lower case map $uri $lowercase {~ ^(.+)$ /$1}; When I run nginx -t I get the following error. nginx: [emerg] invalid variable name in C:\nginx/conf/nginx.conf What am I doing incorrectly?
Thanks,    Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Thu Mar 7 20:37:34 2024 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 7 Mar 2024 23:37:34 +0300 Subject: what is my syntax error In-Reply-To: References: Message-ID: On Thu, Mar 07, 2024 at 03:20:23PM -0500, Victor Oppenheimer wrote: > In my nginx.conf file on my Windows computerI have the following code in > nginx.conf. > > http { # http context specific to HTTP affecting all virtual servers > > # force incoming URLs to lower case >     map $uri $lowercase {~ ^(.+)$ /$1}; You may need to remove space after tilde symbol, after that everything works as expected: map $uri $lowercase { ~^(.+)$ /$1; } server { listen 80; location / { return 200 "lowercase=$lowercase\n"; } } Test: % curl 127.1:80/foobar lowercase=//foobar -- Sergey A. Osokin From davidmichaelkarr at gmail.com Fri Mar 8 19:18:10 2024 From: davidmichaelkarr at gmail.com (David Karr) Date: Fri, 8 Mar 2024 11:18:10 -0800 Subject: nginx can't implement session stickiness with nodeport services, must be clusterip? Message-ID: I maintain the Java side of a platform that supports a couple of hundred services running in a number of k8s clusters. Each pod has a container running the Java process, and a container running nginx, as a proxy to the Java service. All the k8s service objects are type NodePort, not ClusterIP. I don't know a lot about nginx, we consider it mostly a blackbox. We have one service that unfortunately requires session stickiness. I am being told that we have to change the service type for this service to ClusterIP, because, and I quote the person who told me this: "Nginx needs to be able to read the "endpoint" objects off of the service. For some reason, that's not possible with NodePorts, but works fine with ClusterIPs." Does this make sense to anyone here? Can someone explain why this might be? -------------- next part -------------- An HTML attachment was scrubbed... URL: From teo.en.ming at protonmail.com Sat Mar 9 07:10:50 2024 From: teo.en.ming at protonmail.com (Turritopsis Dohrnii Teo En Ming) Date: Sat, 09 Mar 2024 07:10:50 +0000 Subject: nginx web server configuration file for Suprema BioStar 2 Door Access System Message-ID: <9tOwd46QpDqh3T0MlquYU0iklTRlehEoWHCzI-nutqSiFOJ9NuGPHVeKLngix4GnEr6PgI-ZlkM75wzZ22R9Ei2-1bRMEPIuckL3JXGpkUI=@protonmail.com> Subject: nginx web server configuration file for Suprema BioStar 2 Door Access System Good day from Singapore, On 7 Mar 2024 Thursday, I was installing NEW self-signed SSL certificate for Suprema BioStar 2 door access system version 2.7.12.39 for a law firm in Singapore because the common name (CN) in the existing SSL certificate was pointing to the WRONG private IPv4 address 192.168.0.149. I have referred to the following Suprema technical support guide to install new self-signed SSL certificate for the door access system. Article: [BioStar 2] How to Apply a Private Certificate for HTTPS Link: https://support.supremainc.com/en/support/solutions/articles/24000005211--biostar-2-how-to-apply-a-private-certificate-for-https The server certificate/public key (biostar_cert.crt), private key (biostar_cert.key), PKCS12 file (biostar_cert.p12) and Java Keystore (keystore.jks) are all located inside the folder C:\Program Files\BioStar 2(x64)\nginx\conf Looking at the above directory pathname, it is apparent that the South Korean Suprema BioStar 2 door access system is using the open source nginx web server. 
But why are ssl_certificate and ssl_certificate_key directives NOT configured for the HTTPS section in the nginx configuration file? The entire HTTPS section was also commented out. I am baffled. Why is there a Java Keystore (keystore.jks)? Is nginx web server being used in conjunction with some type of open source Java web server? Looking forward to your reply. Thank you. I shall reproduce the nginx web server configuration file for the Suprema BioStar 2 door access system below for your reference. nginx.conf is inside C:\Program Files\BioStar 2(x64)\nginx\conf #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # Swagger document location location /biostar { root html; } # Report document location location /report { root html; } # FASTCGI location location /api { fastcgi_pass 127.0.0.1:9000; fastcgi_read_timeout 300; include fastcgi_params; } # WEBSOCKET location location /wsapi { proxy_pass http://127.0.0.1:9002; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location /webdav { autoindex on; alias html/download; client_body_temp_path html/download; dav_methods PUT DELETE MKCOL COPY MOVE; create_full_put_path on; client_body_in_file_only on; client_body_buffer_size 128K; client_max_body_size 1000M; dav_access user:rw group:rw all:r; } location /resources { root html; autoindex on; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } Regards, Mr. Turritopsis Dohrnii Teo En Ming Targeted Individual in Singapore Blogs: https://tdtemcerts.blogspot.com https://tdtemcerts.wordpress.com GIMP also stands for Government-Induced Medical Problems. 
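For reference, if that commented-out HTTPS block were actually enabled, a minimal modernized version of it would look roughly like the sketch below. This is only an illustration assuming the certificate and key files listed above in C:\Program Files\BioStar 2(x64)\nginx\conf; it is not the shipped BioStar 2 configuration, and the obsolete "ssl on" / SSLv2 / SSLv3 lines from the template are deliberately not carried over.

server {
    listen              443 ssl;
    server_name         localhost;

    # biostar_cert.crt / biostar_cert.key are the files the Suprema guide creates
    # in the conf directory; adjust the paths if they live elsewhere
    ssl_certificate     biostar_cert.crt;
    ssl_certificate_key biostar_cert.key;

    ssl_session_timeout 5m;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;

    location / {
        root  html;
        index index.html index.htm;
    }
}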
From venefax at gmail.com Mon Mar 11 06:34:14 2024 From: venefax at gmail.com (Saint Michael) Date: Mon, 11 Mar 2024 02:34:14 -0400 Subject: No SNI support on multisite installation Message-ID: I have an openresty server, latest, compiled with http_ssl. So I have 5 websites on the same IP, each one with a server block, a listen statement XXXX:443 SSL; and its own server_name but when I test any of the certificates (example https:// 3y3. us), the online analyzer https://www.ssllabs.com/ssltest/ says that there is no SNI support, "This site works only in browsers with SNI support." " Certificate #2: RSA 2048 bits (SHA256withRSA) No SNI Server Key and Certificate #1 Subjectssnode1.minixel.com Fingerprint SHA256: 2c43df752c9f32a0b9072c9918c7f4064f215a75f321a3eed54f3ea53d377291 Pin SHA256: 0EYY9GZfp68L6vPN7Y0wSjXldFNAUDJBnJ3zFl+KhXs=Common namesssnode1.minixel.comAlternative namesssnode1.minixel.com MISMATCH. Revocation status Good (not revoked) Trusted No NOT TRUSTED Mozilla Apple Android Java Windows so how do I avoid this issue? Is there anything missing in my configuration? I need to use the same IP for every website. From naikvin at gmail.com Mon Mar 11 06:54:44 2024 From: naikvin at gmail.com (Vineet Naik) Date: Mon, 11 Mar 2024 12:24:44 +0530 Subject: auth_request module is sending the auth subrequest twice In-Reply-To: References: Message-ID: Hello, I had sent the original email to the nginx mailing list address a week ago. But I don't see it on the March 2024 archives page - https://mailman.nginx.org/pipermail/nginx/2024-March/thread.html#start. I am wondering if that's the case because I was not subscribed to the mailing list at the time of sending the email (I have subscribed just now) or if it's stuck in moderation. Appreciate any help. Thanks, Vineet On Mon, 4 Mar 2024 at 11:52, Vineet Naik wrote: > Hello, > > I am using the auth_request module to restrict access to static files at > location `/`. I noticed that when authentication is successful, the `/auth` > endpoint is receiving 2 requests for every request sent to nginx by the > client application. Interestingly, this only happens when the user is > logged in i.e. the `/auth` endpoint responds with 200 status code. > Otherwise, the auth endpoint is called only once. I have verified this by > logging every incoming request to `/auth` handler in the server > application. > > I can see that the internal subrequests made by nginx to the auth endpoint > are not being logged. Is there a way to enable logging for auth > subrequests? How do I investigate this further? 
> > Nginx config for reference: > > server { > listen 80; > server_name spapoc.local; > > access_log /var/log/nginx/spapoc.access.log main; > > location ~ ^/(login|logout) { > auth_request off; > proxy_pass http://127.0.0.1:5001; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Prefix /; > } > > location /xhr/ { > auth_request off; > proxy_pass http://127.0.0.1:5001/; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Prefix /; > } > > location = /favicon.ico { > auth_request off; > root /home/vmadmin/spa; > } > > location / { > auth_request /auth; > auth_request_set $auth_status $upstream_status; > error_page 401 = @error401; > > root /home/vmadmin/spa; > try_files $uri $uri/ /index.html; > } > > location = /auth { > internal; > auth_request off; > proxy_pass http://127.0.0.1:5001; > proxy_pass_request_body off; > proxy_set_header Content-Length ""; > proxy_set_header X-Original-URI $request_uri; > } > > location @error401 { > return 302 /login; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > } > } > > -- > Thanks, > Vineet > > -- ~ Vineet -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Mon Mar 11 13:37:09 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 11 Mar 2024 17:37:09 +0400 Subject: auth_request module is sending the auth subrequest twice In-Reply-To: References: Message-ID: <20240311133709.jpzovqeru7obbfx4@N00W24XTQX> Hi, On Mon, Mar 11, 2024 at 12:24:44PM +0530, Vineet Naik wrote: > Hello, > > I had sent the original email to the nginx mailing list address a week ago. > But I don't see it on the March 2024 archives page - > https://mailman.nginx.org/pipermail/nginx/2024-March/thread.html#start. I > am wondering if that's the case because I was not subscribed to the mailing > list at the time of sending the email (I have subscribed just now) or if > it's stuck in moderation. > > Appreciate any help. > > Thanks, > Vineet > > On Mon, 4 Mar 2024 at 11:52, Vineet Naik wrote: > > > Hello, > > > > I am using the auth_request module to restrict access to static files at > > location `/`. I noticed that when authentication is successful, the `/auth` > > endpoint is receiving 2 requests for every request sent to nginx by the > > client application. Interestingly, this only happens when the user is > > logged in i.e. the `/auth` endpoint responds with 200 status code. > > Otherwise, the auth endpoint is called only once. I have verified this by > > logging every incoming request to `/auth` handler in the server > > application. It happens because of try_files. The last try_files argument performs internal redirect to the specified uri. Internal redirect is almost like a new request. While going through its phases, auth_request is processed again. https://nginx.org/en/docs/http/ngx_http_core_module.html#try_files > > I can see that the internal subrequests made by nginx to the auth endpoint > > are not being logged. Is there a way to enable logging for auth > > subrequests? How do I investigate this further? 
Yes, use 'log_subrequest on': https://nginx.org/en/docs/http/ngx_http_core_module.html#log_subrequest > > Nginx config for reference: > > > > server { > > listen 80; > > server_name spapoc.local; > > > > access_log /var/log/nginx/spapoc.access.log main; > > > > location ~ ^/(login|logout) { > > auth_request off; > > proxy_pass http://127.0.0.1:5001; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto $scheme; > > proxy_set_header X-Forwarded-Host $host; > > proxy_set_header X-Forwarded-Prefix /; > > } > > > > location /xhr/ { > > auth_request off; > > proxy_pass http://127.0.0.1:5001/; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto $scheme; > > proxy_set_header X-Forwarded-Host $host; > > proxy_set_header X-Forwarded-Prefix /; > > } > > > > location = /favicon.ico { > > auth_request off; > > root /home/vmadmin/spa; > > } > > > > location / { > > auth_request /auth; > > auth_request_set $auth_status $upstream_status; > > error_page 401 = @error401; > > > > root /home/vmadmin/spa; > > try_files $uri $uri/ /index.html; > > } > > > > location = /auth { > > internal; > > auth_request off; > > proxy_pass http://127.0.0.1:5001; > > proxy_pass_request_body off; > > proxy_set_header Content-Length ""; > > proxy_set_header X-Original-URI $request_uri; > > } > > > > location @error401 { > > return 302 /login; > > } > > > > #error_page 404 /404.html; > > > > # redirect server error pages to the static page /50x.html > > # > > error_page 500 502 503 504 /50x.html; > > location = /50x.html { > > root /usr/share/nginx/html; > > } > > } > > > > -- > > Thanks, > > Vineet > > > > > > -- > ~ Vineet > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From naikvin at gmail.com Mon Mar 11 17:33:21 2024 From: naikvin at gmail.com (Vineet Naik) Date: Mon, 11 Mar 2024 23:03:21 +0530 Subject: auth_request module is sending the auth subrequest twice In-Reply-To: <20240311133709.jpzovqeru7obbfx4@N00W24XTQX> References: <20240311133709.jpzovqeru7obbfx4@N00W24XTQX> Message-ID: Hi, On Mon, 11 Mar 2024 at 19:07, Roman Arutyunyan wrote: > Hi, > > On Mon, Mar 11, 2024 at 12:24:44PM +0530, Vineet Naik wrote: > > Hello, > > > > I had sent the original email to the nginx mailing list address a week > ago. > > But I don't see it on the March 2024 archives page - > > https://mailman.nginx.org/pipermail/nginx/2024-March/thread.html#start. > I > > am wondering if that's the case because I was not subscribed to the > mailing > > list at the time of sending the email (I have subscribed just now) or if > > it's stuck in moderation. > > > > Appreciate any help. > > > > Thanks, > > Vineet > > > > On Mon, 4 Mar 2024 at 11:52, Vineet Naik wrote: > > > > > Hello, > > > > > > I am using the auth_request module to restrict access to static files > at > > > location `/`. I noticed that when authentication is successful, the > `/auth` > > > endpoint is receiving 2 requests for every request sent to nginx by the > > > client application. Interestingly, this only happens when the user is > > > logged in i.e. the `/auth` endpoint responds with 200 status code. > > > Otherwise, the auth endpoint is called only once. I have verified this > by > > > logging every incoming request to `/auth` handler in the server > > > application. > > It happens because of try_files. 
The last try_files argument performs > internal > redirect to the specified uri. Internal redirect is almost like a new > request. > While going through its phases, auth_request is processed again. > > https://nginx.org/en/docs/http/ngx_http_core_module.html#try_files This is helpful. Thanks. I'll try tweaking the config and see if this can be avoided. > > > > > I can see that the internal subrequests made by nginx to the auth > endpoint > > > are not being logged. Is there a way to enable logging for auth > > > subrequests? How do I investigate this further? > > Yes, use 'log_subrequest on': > > https://nginx.org/en/docs/http/ngx_http_core_module.html#log_subrequest > > > > Nginx config for reference: > > > > > > server { > > > listen 80; > > > server_name spapoc.local; > > > > > > access_log /var/log/nginx/spapoc.access.log main; > > > > > > location ~ ^/(login|logout) { > > > auth_request off; > > > proxy_pass http://127.0.0.1:5001; > > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > > proxy_set_header X-Forwarded-Proto $scheme; > > > proxy_set_header X-Forwarded-Host $host; > > > proxy_set_header X-Forwarded-Prefix /; > > > } > > > > > > location /xhr/ { > > > auth_request off; > > > proxy_pass http://127.0.0.1:5001/; > > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > > proxy_set_header X-Forwarded-Proto $scheme; > > > proxy_set_header X-Forwarded-Host $host; > > > proxy_set_header X-Forwarded-Prefix /; > > > } > > > > > > location = /favicon.ico { > > > auth_request off; > > > root /home/vmadmin/spa; > > > } > > > > > > location / { > > > auth_request /auth; > > > auth_request_set $auth_status $upstream_status; > > > error_page 401 = @error401; > > > > > > root /home/vmadmin/spa; > > > try_files $uri $uri/ /index.html; > > > } > > > > > > location = /auth { > > > internal; > > > auth_request off; > > > proxy_pass http://127.0.0.1:5001; > > > proxy_pass_request_body off; > > > proxy_set_header Content-Length ""; > > > proxy_set_header X-Original-URI $request_uri; > > > } > > > > > > location @error401 { > > > return 302 /login; > > > } > > > > > > #error_page 404 /404.html; > > > > > > # redirect server error pages to the static page /50x.html > > > # > > > error_page 500 502 503 504 /50x.html; > > > location = /50x.html { > > > root /usr/share/nginx/html; > > > } > > > } > > > > > > -- > > > Thanks, > > > Vineet > > > > > > > > > > -- > > ~ Vineet > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -- ~ Vineet -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.paul at rexconsulting.net Tue Mar 12 03:09:53 2024 From: chris.paul at rexconsulting.net (Christopher Paul) Date: Mon, 11 Mar 2024 20:09:53 -0700 Subject: missing something with auth_jwt_key_request Message-ID: <62bb3c63-ced3-435c-a7a4-7e18d1c2f596@rexconsulting.net> Hi NGINX-users, I am running nginx version: nginx/1.25.3 (nginx-plus-r31-p1 on Rocky 9.3 in a lab, trying to get OIDC authentication working to KeyCloak 23.0.7. Attached are the relevant files /etc/nginx.conf and included /etc/nginx/conf.d files, most of which are from the nginx-openid-connect github repo (https://github.com/nginxinc/nginx-openid-connect). Keycloak and nginx are running on the same VM. 
What am I missing/doing wrong? When I try to hit the server, the redirect to Keycloak does not happen. I can tell this for sure by running "sudo tcpdump -i lo". There are no packets transmitted to localhost:8080. When I "curl -v https://rocky.rexconsulting.net", besides no packets between nginx and keycloak, the output of curl is: * Connected to rocky.rexconsulting.net (10.5.5.90) port 443 * ALPN: curl offers h2,http/1.1 * TLSv1.3 (OUT), TLS handshake, Client hello (1): *  CAfile: /etc/ssl/cert.pem *  CApath: none * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Unknown (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / [blank] / UNDEF * ALPN: server accepted http/1.1 * Server certificate: *  subject: CN=rocky.rexconsulting.net *  start date: Mar  7 23:46:13 2024 GMT *  expire date: Jun  5 23:46:12 2024 GMT *  subjectAltName: host "rocky.rexconsulting.net" matched cert's "rocky.rexconsulting.net" *  issuer: C=US; O=Let's Encrypt; CN=R3 *  SSL certificate verify ok. *   Certificate level 0: Public key type ? (256/128 Bits/secBits), signed using sha256WithRSAEncryption *   Certificate level 1: Public key type ? (2048/112 Bits/secBits), signed using sha256WithRSAEncryption * using HTTP/1.x > GET / HTTP/1.1 > Host: rocky.rexconsulting.net > User-Agent: curl/8.6.0 > Accept: */* > * old SSL session ID is stale, removing < HTTP/1.1 401 Unauthorized < Server: nginx/1.25.3 < Date: Tue, 12 Mar 2024 03:07:32 GMT < Content-Type: text/html < Content-Length: 179 < Connection: keep-alive < WWW-Authenticate: Bearer realm="closed site" < 401 Authorization Required

401 Authorization Required


nginx/1.25.3
* Connection #0 to host rocky.rexconsulting.net left intact Many thanks for any insight that might be offered on this. Chris Paul -------------- next part -------------- user nginx; worker_processes auto; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; load_module modules/ngx_http_js_module.so; load_module modules/ngx_stream_js_module.so; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; include /etc/nginx/conf.d/*.conf; } -------------- next part -------------- # OpenID Connect configuration # # Each map block allows multiple values so that multiple IdPs can be supported, # the $host variable is used as the default input parameter but can be changed. # map $host $oidc_authz_endpoint { #default "http://127.0.0.1:8080/auth/realms/master/protocol/openid-connect/auth"; #www.example.com "https://my-idp/oauth2/v1/authorize"; default "http://127.0.0.1:8080/realms/rexlab/protocol/openid-connect/auth"; } map $host $oidc_authz_extra_args { # Extra arguments to include in the request to the IdP's authorization # endpoint. # Some IdPs provide extended capabilities controlled by extra arguments, # for example Keycloak can select an IdP to delegate to via the # "kc_idp_hint" argument. # Arguments must be expressed as query string parameters and URL-encoded # if required. default ""; #www.example.com "kc_idp_hint=another_provider" } map $host $oidc_token_endpoint { #default "http://127.0.0.1:8080/auth/realms/master/protocol/openid-connect/token"; default "http://127.0.0.1:8080/auth/realms/rexlab/protocol/openid-connect/token"; } map $host $oidc_jwt_keyfile { #default "http://127.0.0.1:8080/auth/realms/master/protocol/openid-connect/certs"; default "http://127.0.0.1:8080/realms/rexlab/protocol/openid-connect/certs"; } map $host $oidc_client { default "nginx-plus"; } map $host $oidc_pkce_enable { default 0; } map $host $oidc_client_secret { default "UxPA37ZTMv36mTGSZhfSTFCl91YYzwcx"; } map $host $oidc_scopes { default "openid+profile+email+offline_access"; } map $host $oidc_logout_redirect { # Where to send browser after requesting /logout location. This can be # replaced with a custom logout page, or complete URL. default "/_logout"; # Built-in, simple logout page } map $host $oidc_hmac_key { # This should be unique for every NGINX instance/cluster default "f3etJkRhybOLWPAt59lWN4GmXz"; } map $host $zone_sync_leeway { # Specifies the maximum timeout for synchronizing ID tokens between cluster # nodes when you use shared memory zone content sync. This option is only # recommended for scenarios where cluster nodes can randomly process # requests from user agents and there may be a situation where node "A" # successfully received a token, and node "B" receives the next request in # less than zone_sync_interval. default 0; # Time in milliseconds, e.g. 
(zone_sync_interval * 2 * 1000) } map $proto $oidc_cookie_flags { http "Path=/; SameSite=lax;"; # For HTTP/plaintext testing https "Path=/; SameSite=lax; HttpOnly; Secure;"; # Production recommendation } map $http_x_forwarded_port $redirect_base { "" $proto://$host:$server_port; default $proto://$host:$http_x_forwarded_port; } map $http_x_forwarded_proto $proto { "" $scheme; default $http_x_forwarded_proto; } # ADVANCED CONFIGURATION BELOW THIS LINE # Additional advanced configuration (server context) in openid_connect.server_conf # JWK Set will be fetched from $oidc_jwks_uri and cached here - ensure writable by nginx user proxy_cache_path /var/cache/nginx/jwk levels=1 keys_zone=jwk:64k max_size=1m; # Change timeout values to at least the validity period of each token type keyval_zone zone=oidc_id_tokens:1M state=conf.d/oidc_id_tokens.json timeout=1h; keyval_zone zone=oidc_access_tokens:1M state=conf.d/oidc_access_tokens.json timeout=1h; keyval_zone zone=refresh_tokens:1M state=conf.d/refresh_tokens.json timeout=8h; keyval_zone zone=oidc_pkce:128K timeout=90s; # Temporary storage for PKCE code verifier. keyval $cookie_auth_token $session_jwt zone=oidc_id_tokens; # Exchange cookie for JWT keyval $cookie_auth_token $access_token zone=oidc_access_tokens; # Exchange cookie for access token keyval $cookie_auth_token $refresh_token zone=refresh_tokens; # Exchange cookie for refresh token keyval $request_id $new_session zone=oidc_id_tokens; # For initial session creation keyval $request_id $new_access_token zone=oidc_access_tokens; keyval $request_id $new_refresh zone=refresh_tokens; # '' keyval $pkce_id $pkce_code_verifier zone=oidc_pkce; auth_jwt_claim_set $jwt_audience aud; # In case aud is an array js_import oidc from conf.d/openid_connect.js; # vim: syntax=nginx -------------- next part -------------- # This is the backend application we are protecting with OpenID Connect upstream my_backend { zone my_backend 64k; server 10.0.0.1:80; } # Custom log format to include the 'sub' claim in the REMOTE_USER field log_format main_jwt '$remote_addr - $jwt_claim_sub [$time_local] "$request" $status ' '$body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"'; # The frontend server - reverse proxy with OpenID Connect authentication # server { include conf.d/openid_connect.server_conf; # Authorization code flow and Relying Party processing error_log /var/log/nginx/error.log debug; # Reduce severity level as required listen 8010; # Use SSL/TLS in production location / { # This site is protected with OpenID Connect auth_jwt "" token=$session_jwt; error_page 401 = @do_oidc_flow; #auth_jwt_key_file $oidc_jwt_keyfile; # Enable when using filename auth_jwt_key_request /_jwks_uri; # Enable when using URL # Successfully authenticated users are proxied to the backend, # with 'sub' claim passed as HTTP header proxy_set_header username $jwt_claim_sub; # Bearer token is uses to authorize NGINX to access protected backend #proxy_set_header Authorization "Bearer $access_token"; # Intercept and redirect "401 Unauthorized" proxied responses to nginx # for processing with the error_page directive. Necessary if Access Token # can expire before ID Token. 
#proxy_intercept_errors on; proxy_pass http://my_backend; # The backend site/app access_log /var/log/nginx/access.log main_jwt; } } # vim: syntax=nginx -------------- next part -------------- server { listen 80; server_name rocky.rexconsulting.net; return 301 https://$host$request_uri; } server { # listen 80 default_server; listen 443 ssl; # auth_jwt "API"; # auth_jwt_key_request "http://localhost:8080/realms/rexlab/protocol/openid-connect/certs"; # auth_jwt_type encrypted; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug; server_name rocky.rexconsulting.net; ssl_certificate /etc/ssl/nginx/nginx.crt; ssl_certificate_key /etc/ssl/nginx/nginx.key; # ssl_verify_client on; # ssl_trusted_certificate /etc/ssl/cachain.pem; # ssl_ocsp on; # Enable OCSP validation #access_log /var/log/nginx/host.access.log main; location / { auth_jwt_key_request /_jwks_uri; auth_jwt "closed site" token=$cookie_auth_token; root /usr/share/nginx/html; index index.html index.htm; } location = /_jwks_uri { internal; #proxy_pass $oidc_jwt_keyfile; # Uses the mapped value proxy_pass "http://127.0.0.1:8080/realms/rexlab/protocol/openid-connect/certs"; proxy_cache_valid 200 1d; # Caches the keys for a day, adjust as needed proxy_cache jwk; # Assumes you have a proxy_cache_path defined with `jwk` as the key error_page 401 = @error401; } location @error401 { internal; proxy_pass $oidc_authz_endpoint; # Redirect to Keycloak for login } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } From bmvishwas at gmail.com Tue Mar 12 05:46:46 2024 From: bmvishwas at gmail.com (Vishwas Bm) Date: Tue, 12 Mar 2024 11:16:46 +0530 Subject: Query on diff between nginx 1.18 and nginx 1.20+ Message-ID: HI, We were using the tool from *https://github.com/fstab/h2c *and seeing change in behaviour between 1.18 and 1.20+ wrt client_header_timeout configuration. We suspect change https://github.com/nginx/nginx/commit/0f5d0c5798eacb60407bcf0a76fc0b2c39e356bb causing this change in behaviour. Can we get some thoughts on this ? 
Scenario: > > *With nginx 1.20+ * > > *# h2c connect https://:449; while true; do sleep 1;h2c ping;done* > [2024-02-14 15:35:13] -> SETTINGS(0) > [2024-02-14 15:35:13] <- SETTINGS(0) > [2024-02-14 15:35:13] <- WINDOW_UPDATE(0) > [2024-02-14 15:35:13] -> SETTINGS(0) > [2024-02-14 15:35:13] <- SETTINGS(0) > [2024-02-14 15:35:14] -> PING(0) > [2024-02-14 15:35:14] <- PING(0) > [2024-02-14 15:35:15] -> PING(0) > [2024-02-14 15:35:15] <- PING(0) > [2024-02-14 15:35:16] -> PING(0) > [2024-02-14 15:35:16] <- PING(0) > [2024-02-14 15:35:17] -> PING(0) > [2024-02-14 15:35:17] <- PING(0) > [2024-02-14 15:35:18] -> PING(0) > [2024-02-14 15:35:18] <- PING(0) > [2024-02-14 15:35:19] -> PING(0) > [2024-02-14 15:35:19] <- PING(0) > [2024-02-14 15:35:20] -> PING(0) > [2024-02-14 15:35:20] <- PING(0) > [2024-02-14 15:35:21] -> PING(0) > [2024-02-14 15:35:21] <- PING(0) > [2024-02-14 15:35:22] -> PING(0) > [2024-02-14 15:35:22] <- PING(0) > [2024-02-14 15:35:23] -> PING(0) > [2024-02-14 15:35:23] <- PING(0) > [2024-02-14 15:35:24] -> PING(0) > [2024-02-14 15:35:24] <- PING(0) > [2024-02-14 15:35:25] -> PING(0) > [2024-02-14 15:35:25] <- PING(0) > [2024-02-14 15:35:26] -> PING(0) > [2024-02-14 15:35:26] <- PING(0) > [2024-02-14 15:35:27] -> PING(0) > [2024-02-14 15:35:27] <- PING(0) > [2024-02-14 15:35:28] -> PING(0) > [2024-02-14 15:35:28] <- PING(0) > [2024-02-14 15:35:29] -> PING(0) > [2024-02-14 15:35:29] <- PING(0) > [2024-02-14 15:35:30] -> PING(0) > [2024-02-14 15:35:30] <- PING(0) > [2024-02-14 15:35:31] -> PING(0) > [2024-02-14 15:35:31] <- PING(0) > [2024-02-14 15:35:32] -> PING(0) > [2024-02-14 15:35:32] <- PING(0) > [2024-02-14 15:35:33] -> PING(0) > [2024-02-14 15:35:33] <- PING(0) > [2024-02-14 15:35:34] -> PING(0) > [2024-02-14 15:35:34] <- PING(0) > Error while reading next frame: EOF > [2024-02-14 15:35:35] <- GOAWAY(0) << exactly 22s (client_header_timeout set to 22s) > > > TEST 2 (with nginx 1.18) > *h2c connect https://:449; while true; do sleep 1;h2c ping; done * > [2024-02-14 15:46:18] -> SETTINGS(0) > [2024-02-14 15:46:18] <- SETTINGS(0) > [2024-02-14 15:46:18] <- WINDOW_UPDATE(0) > [2024-02-14 15:46:18] -> SETTINGS(0) > [2024-02-14 15:46:18] <- SETTINGS(0) > [2024-02-14 15:47:18] -> PING(0) > [2024-02-14 15:47:18] <- PING(0) > [2024-02-14 15:48:18] -> PING(0) > [2024-02-14 15:48:18] <- PING(0) > [2024-02-14 15:49:18] -> PING(0) > [2024-02-14 15:49:18] <- PING(0) > [2024-02-14 15:50:19] -> PING(0) > [2024-02-14 15:50:19] <- PING(0) > [2024-02-14 15:51:19] -> PING(0) > [2024-02-14 15:51:19] <- PING(0) > [2024-02-14 15:52:19] -> PING(0) > [2024-02-14 15:52:19] <- PING(0) > > --> We are not seeing connection being closed, with client_header_timeout set to 22s. > > *Queries:* How can we achieve the same behaviour of 1.18 in 1.20+ ? Is it possible to achieve it ? Also what is the difference between keepalive_timeout and client_header_timeout ? In nginx 1.20+, we are seeing keepalive_timeout closing the connection before client_header_timeout for below config and it was not the same case in 1.18. > keepalive_timeout 10s; client_header_timeout 22s; *Thanks & Regards,* *Vishwas * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From crh3675 at gmail.com Wed Mar 13 19:05:00 2024 From: crh3675 at gmail.com (Craig Hoover) Date: Wed, 13 Mar 2024 15:05:00 -0400 Subject: AWS + ECS Docker NodeJS 20 + nGinx Docker Sidecar Message-ID: We have a pretty hard-hitting API application in NodeJS that is deployed in AWS ECS using nGinx as a sidecar container to proxy to the NodeJS services. We have some odd issues that occur where the NodeJS application reports millisecond processing times up to res.send() but occasionally, the browser reports time for response 2 - 5 seconds. Connections don't timeout, just occasionally hang after the NodeJS process completes the request. The process in NodeJS and within the output, reports 100ms processing time but something is "catching" random outgoing requests for 2-5 seconds before delivering. We believe nGinx is the culprit but can't figure it out. Any help would be appreciated. Here is the config ---- worker_rlimit_nofile 2048; events { worker_connections 1024; worker_aio_requests 64; accept_mutex on; accept_mutex_delay 500ms; multi_accept on; use epoll; epoll_events 512; } http { # Nginx will handle gzip compression of responses from the app server gzip on; gzip_proxied any; gzip_types text/plain application/json text/css text/javascript application/javascript; gzip_min_length 1000; client_max_body_size 10M; tcp_nopush on; tcp_nodelay on; sendfile on; # Offset from AWS ALB to prevent premature closed connections keepalive_timeout 65s; # Erase all memory associated with the connection after it times out. reset_timedout_connection on; # Store metadata of files to increase speed open_file_cache max=10000 inactive=5s; open_file_cache_valid 15s; open_file_cache_min_uses 1; # nGinx is a proxy, keep this off open_file_cache_errors off; upstream node_backend { zone upstreams 256K; server 127.0.0.1:3000 max_fails=1 fail_timeout=3s; keepalive 256; } server { listen 80; proxy_read_timeout 60s; proxy_send_timeout 60s; access_log off; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains"; add_header X-Frame-Options "SAMEORIGIN"; add_header Referrer-Policy "strict-origin-when-cross-origin"; add_header X-Content-Type-Options "nosniff"; add_header Content-Security-Policy "frame-ancestors 'self'"; location / { # Reject requests with unsupported HTTP method if ($request_method !~ ^(GET|POST|HEAD|OPTIONS|PUT|DELETE)$) { return 405; } # Only requests matching the whitelist expectations will # get sent to the node server proxy_pass http://node_backend; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_cache_bypass $http_upgrade; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; internal; } } } --- -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Wed Mar 13 20:43:58 2024 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 13 Mar 2024 23:43:58 +0300 Subject: AWS + ECS Docker NodeJS 20 + nGinx Docker Sidecar In-Reply-To: References: Message-ID: Hi Graig, On Wed, Mar 13, 2024 at 03:05:00PM -0400, Craig Hoover wrote: > We have a pretty hard-hitting API application in NodeJS that is deployed in > AWS ECS using nGinx as a sidecar container to proxy to the NodeJS services. 
> > We have some odd issues that occur where the NodeJS application reports > millisecond processing times up to res.send() but occasionally, the browser > reports time for response 2 - 5 seconds. > > Connections don't timeout, just occasionally hang after the NodeJS process > completes the request. The process in NodeJS and within the output, > reports 100ms processing time but something is "catching" random outgoing > requests for 2-5 seconds before delivering. We believe nGinx is the > culprit but can't figure it out. Any help would be appreciated. > > Here is the config > ---- > worker_rlimit_nofile 2048; > > events { > worker_connections 1024; > worker_aio_requests 64; > accept_mutex on; > accept_mutex_delay 500ms; > multi_accept on; > use epoll; > epoll_events 512; > } > > http { > # Nginx will handle gzip compression of responses from the app server > gzip on; > gzip_proxied any; > gzip_types text/plain application/json text/css text/javascript > application/javascript; > gzip_min_length 1000; > client_max_body_size 10M; > tcp_nopush on; > tcp_nodelay on; > sendfile on; > > # Offset from AWS ALB to prevent premature closed connections > keepalive_timeout 65s; > > # Erase all memory associated with the connection after it times out. > reset_timedout_connection on; > > # Store metadata of files to increase speed > open_file_cache max=10000 inactive=5s; > open_file_cache_valid 15s; > open_file_cache_min_uses 1; > > # nGinx is a proxy, keep this off > open_file_cache_errors off; > > upstream node_backend { > zone upstreams 256K; > server 127.0.0.1:3000 max_fails=1 fail_timeout=3s; > keepalive 256; > } > > server { > listen 80; > proxy_read_timeout 60s; > proxy_send_timeout 60s; > access_log off; > > add_header Strict-Transport-Security "max-age=31536000; > includeSubDomains"; > add_header X-Frame-Options "SAMEORIGIN"; > add_header Referrer-Policy "strict-origin-when-cross-origin"; > add_header X-Content-Type-Options "nosniff"; > add_header Content-Security-Policy "frame-ancestors 'self'"; > > location / { > # Reject requests with unsupported HTTP method > if ($request_method !~ ^(GET|POST|HEAD|OPTIONS|PUT|DELETE)$) { > return 405; > } > > # Only requests matching the whitelist expectations will > # get sent to the node server > proxy_pass http://node_backend; > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection 'upgrade'; > proxy_set_header Host $http_host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_cache_bypass $http_upgrade; > } > > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > internal; > } > } > } Is there something in system logs? You may want to update the current configuration with: - keepalive directive, [1]; - increase number of connections/limits, [2], that may help to improve performance. References ---------- 1. https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive 2. https://www.nginx.com/blog/tuning-nginx/ -- Sergey A. Osokin From osa at freebsd.org.ru Wed Mar 13 20:52:40 2024 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 13 Mar 2024 23:52:40 +0300 Subject: missing something with auth_jwt_key_request In-Reply-To: <62bb3c63-ced3-435c-a7a4-7e18d1c2f596@rexconsulting.net> References: <62bb3c63-ced3-435c-a7a4-7e18d1c2f596@rexconsulting.net> Message-ID: Hi Christopher, please correct me if I'm wrong here, but the question is related to NGINX Plus and OIDC implementation. 
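For context, the reference configuration posted earlier in this thread relies on several directives that exist only in NGINX Plus, for example (lines excerpted from the attached conf.d files):

    keyval_zone zone=oidc_id_tokens:1M state=conf.d/oidc_id_tokens.json timeout=1h;  # ngx_http_keyval_module, Plus only
    keyval $cookie_auth_token $session_jwt zone=oidc_id_tokens;                      # Plus only
    auth_jwt "" token=$session_jwt;                                                  # ngx_http_auth_jwt_module, Plus only
    auth_jwt_key_request /_jwks_uri;                                                 # Plus only

On open source nginx these lines fail at configuration load with an "unknown directive" error, so the same config cannot behave identically on both builds.
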
On Mon, Mar 11, 2024 at 08:09:53PM -0700, Christopher Paul wrote:
> Hi NGINX-users,
>
> I am running nginx version: nginx/1.25.3 (nginx-plus-r31-p1 on Rocky 9.3 in
> a lab, trying to get OIDC authentication working to KeyCloak 23.0.7.
> Attached are the relevant files /etc/nginx.conf and included
> /etc/nginx/conf.d files, most of which are from the nginx-openid-connect
> github repo (https://github.com/nginxinc/nginx-openid-connect).
>
> Keycloak and nginx are running on the same VM.
>
> What am I missing/doing wrong? When I try to hit the server, the redirect to
> Keycloak does not happen. I can tell this for sure by running "sudo tcpdump
> -i lo". There are no packets transmitted to localhost:8080. When I "curl -v
> https://rocky.rexconsulting.net", besides no packets between nginx and
> keycloak, the output of curl is:

[...]

I'd recommend getting support from F5 NGINX Help and Support, https://www.nginx.com/support/ on the MyF5 Portal, https://my.f5.com/manage/s/

Thank you.

-- 
Sergey A. Osokin

From osa at freebsd.org.ru  Wed Mar 13 20:55:17 2024
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Wed, 13 Mar 2024 23:55:17 +0300
Subject: Query on diff between nginx 1.18 and nginx 1.20+
In-Reply-To: 
References: 
Message-ID: 

Hi Vishwas,

thanks for the report.

On Tue, Mar 12, 2024 at 11:16:46AM +0530, Vishwas Bm wrote:
>
> We were using the tool from https://github.com/fstab/h2c and seeing change
> in behaviour between 1.18 and 1.20+ wrt client_header_timeout configuration.

[...]

The request is related to a legacy version of nginx; could you try the recent stable version, 1.24.0?

Thank you.

-- 
Sergey A. Osokin

From osa at freebsd.org.ru  Wed Mar 13 21:03:11 2024
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Thu, 14 Mar 2024 00:03:11 +0300
Subject: nginx can't implement session stickiness with nodeport services, must be clusterip?
In-Reply-To: 
References: 
Message-ID: 

Hi David,

On Fri, Mar 08, 2024 at 11:18:10AM -0800, David Karr wrote:
> I maintain the Java side of a platform that supports a couple of hundred
> services running in a number of k8s clusters. Each pod has a container
> running the Java process, and a container running nginx, as a proxy to the
> Java service. All the k8s service objects are type NodePort, not ClusterIP.
>
> I don't know a lot about nginx, we consider it mostly a blackbox.

How is nginx configured there? Is it an L7 or L4 reverse proxy?

> We have one service that unfortunately requires session stickiness.

The "session persistence" feature is available with the NGINX Plus product, [1].

> I am being told that we have to change the service type for this service
> to ClusterIP, because, and I quote the person who told me this:
>
> "Nginx needs to be able to read the "endpoint" objects off of the
> service. For some reason, that's not possible with NodePorts, but works
> fine with ClusterIPs."
>
> Does this make sense to anyone here? Can someone explain why this might be?

That's a bit unclear to me; we would need more details about the current setup.

Thank you.

References
----------
1. https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/#enabling-session-persistence

-- 
Sergey A. Osokin

From bmvishwas at gmail.com  Thu Mar 14 04:25:52 2024
From: bmvishwas at gmail.com (Vishwas Bm)
Date: Thu, 14 Mar 2024 09:55:52 +0530
Subject: Query on diff between nginx 1.18 and nginx 1.20+
In-Reply-To: 
References: 
Message-ID: 

Hi Sergey,

Thanks for the reply.
We see the same behaviour with nginx 1.24 and connection breaks because of client_header_timeout. Can you provide more information on keepalive_timeout and client_header_timeout. When does these two timers get triggered. If you can brief explanation with respect to client and nginx connection it will be helpful. Regards, Vishwas On Thu, Mar 14, 2024, 02:25 Sergey A. Osokin wrote: > Hi Vishwas, > > thanks for the report. > > On Tue, Mar 12, 2024 at 11:16:46AM +0530, Vishwas Bm wrote: > > > > We were using the tool from *https://github.com/fstab/h2c > > *and seeing change in behaviour between > 1.18 > > and 1.20+ wrt client_header_timeout configuration. > > [...] > > The request is related to a legacy version of nginx, could > you try to use the recent stable version, 1.24.0. > > Thank you. > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.paul at rexconsulting.net Thu Mar 14 18:09:25 2024 From: chris.paul at rexconsulting.net (Christopher Paul) Date: Thu, 14 Mar 2024 18:09:25 +0000 Subject: missing something with auth_jwt_key_request In-Reply-To: References: <62bb3c63-ced3-435c-a7a4-7e18d1c2f596@rexconsulting.net> Message-ID: > -----Original Message----- > From: nginx On Behalf Of Sergey A. Osokin > please correct me if I'm wrong here, but the question is related to NGINX Plus > and OIDC implementation. Hi Sergey, The question is related to NGINX in general. I tried NGINX FOSS first, then Plus. Should this "nginx-openid-connect" work differently for Plus vs the FOSS version? From osa at freebsd.org.ru Thu Mar 14 18:25:46 2024 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 14 Mar 2024 21:25:46 +0300 Subject: missing something with auth_jwt_key_request In-Reply-To: References: <62bb3c63-ced3-435c-a7a4-7e18d1c2f596@rexconsulting.net> Message-ID: Hi Christopher, On Thu, Mar 14, 2024 at 06:09:25PM +0000, Christopher Paul wrote: [...] > > The question is related to NGINX in general. I tried NGINX FOSS first, then Plus. > Should this "nginx-openid-connect" work differently for Plus vs the FOSS version? The solution utilizes keyval, [1] and auth_jwt, [2], features, so that's related to NGINX Plus only I believe. Thank you. References ---------- 1. https://nginx.org/en/docs/http/ngx_http_keyval_module.html 2. https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html#auth_jwt -- Sergey A. Osokin From teward at thomas-ward.net Fri Mar 15 18:04:56 2024 From: teward at thomas-ward.net (Thomas Ward) Date: Fri, 15 Mar 2024 18:04:56 +0000 Subject: No SNI support on multisite installation In-Reply-To: References: Message-ID: If you only have one IP, then you cannot fix this. SNI is what determines which certificate to serve for the request. The only solution would be individual IPs for each domain, thus not needing SNI to get the correct cert for each domain. Sent from my Galaxy -------- Original message -------- From: Saint Michael Date: 3/11/24 02:34 (GMT-05:00) To: nginx at nginx.org Subject: No SNI support on multisite installation I have an openresty server, latest, compiled with http_ssl. So I have 5 websites on the same IP, each one with a server block, a listen statement XXXX:443 SSL; and its own server_name but when I test any of the certificates (example https:// 3y3. 
us), the online analyzer https://www.ssllabs.com/ssltest/ says that there is no SNI support, "This site works only in browsers with SNI support." " Certificate #2: RSA 2048 bits (SHA256withRSA) No SNI Server Key and Certificate #1 Subjectssnode1.minixel.com Fingerprint SHA256: 2c43df752c9f32a0b9072c9918c7f4064f215a75f321a3eed54f3ea53d377291 Pin SHA256: 0EYY9GZfp68L6vPN7Y0wSjXldFNAUDJBnJ3zFl+KhXs=Common namesssnode1.minixel.comAlternative namesssnode1.minixel.com MISMATCH. Revocation status Good (not revoked) Trusted No NOT TRUSTED Mozilla Apple Android Java Windows so how do I avoid this issue? Is there anything missing in my configuration? I need to use the same IP for every website. _______________________________________________ nginx mailing list nginx at nginx.org https://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From noloader at gmail.com Fri Mar 15 18:23:50 2024 From: noloader at gmail.com (Jeffrey Walton) Date: Fri, 15 Mar 2024 14:23:50 -0400 Subject: No SNI support on multisite installation In-Reply-To: References: Message-ID: On Fri, Mar 15, 2024 at 2:05 PM Thomas Ward via nginx wrote: > > If you only have one IP, then you cannot fix this. SNI is what determines which certificate to serve for the request. The only solution would be individual IPs for each domain, thus not needing SNI to get the correct cert for each domain. The real fix needs to be made in openrusty. SNI is a standard extension. its about time openrusty properly support it. Another way to fix it is, find a CA to issue a certificate that includes all the domains in the Subject Alt Name. So the end entity certificate issued would have, say, 10 or 12 different domains so the same cert can be used for all the connections. Google serves a cert like that for 'google.com', but they own all the web properties. $ openssl s_client -connect google.com:443 -servername google.com | openssl x509 -text -noout ... DNS:*.google.com, DNS:*.appengine.google.com, DNS:*.bdn.dev, DNS :*.origin-test.bdn.dev, DNS:*.cloud.google.com, DNS:*.crowdsource.google.com, DN S:*.datacompute.google.com, DNS:*.google.ca, DNS:*.google.cl, DNS:*.google.co.in , DNS:*.google.co.jp, DNS:*.google.co.uk, DNS:*.google.com.ar, DNS:*.google.com. au, DNS:*.google.com.br, DNS:*.google.com.co, DNS:*.google.com.mx, DNS:*.google. 
com.tr, DNS:*.google.com.vn, DNS:*.google.de, DNS:*.google.es, DNS:*.google.fr, DNS:*.google.hu, DNS:*.google.it, DNS:*.google.nl, DNS:*.google.pl, DNS:*.google .pt, DNS:*.googleapis.cn, DNS:*.googlevideo.com, DNS:*.gstatic.cn, DNS:*.gstatic -cn.com, DNS:googlecnapps.cn, DNS:*.googlecnapps.cn, DNS:googleapps-cn.com, DNS: *.googleapps-cn.com, DNS:gkecnapps.cn, DNS:*.gkecnapps.cn, DNS:googledownloads.c n, DNS:*.googledownloads.cn, DNS:recaptcha.net.cn, DNS:*.recaptcha.net.cn, DNS:r ecaptcha-cn.net, DNS:*.recaptcha-cn.net, DNS:widevine.cn, DNS:*.widevine.cn, DNS :ampproject.org.cn, DNS:*.ampproject.org.cn, DNS:ampproject.net.cn, DNS:*.amppro ject.net.cn, DNS:google-analytics-cn.com, DNS:*.google-analytics-cn.com, DNS:goo gleadservices-cn.com, DNS:*.googleadservices-cn.com, DNS:googlevads-cn.com, DNS: *.googlevads-cn.com, DNS:googleapis-cn.com, DNS:*.googleapis-cn.com, DNS:googleo ptimize-cn.com, DNS:*.googleoptimize-cn.com, DNS:doubleclick-cn.net, DNS:*.doubl eclick-cn.net, DNS:*.fls.doubleclick-cn.net, DNS:*.g.doubleclick-cn.net, DNS:dou bleclick.cn, DNS:*.doubleclick.cn, DNS:*.fls.doubleclick.cn, DNS:*.g.doubleclick .cn, DNS:dartsearch-cn.net, DNS:*.dartsearch-cn.net, DNS:googletraveladservices- cn.com, DNS:*.googletraveladservices-cn.com, DNS:googletagservices-cn.com, DNS:* .googletagservices-cn.com, DNS:googletagmanager-cn.com, DNS:*.googletagmanager-c n.com, DNS:googlesyndication-cn.com, DNS:*.googlesyndication-cn.com, DNS:*.safef rame.googlesyndication-cn.com, DNS:app-measurement-cn.com, DNS:*.app-measurement -cn.com, DNS:gvt1-cn.com, DNS:*.gvt1-cn.com, DNS:gvt2-cn.com, DNS:*.gvt2-cn.com, DNS:2mdn-cn.net, DNS:*.2mdn-cn.net, DNS:googleflights-cn.net, DNS:*.googlefligh ts-cn.net, DNS:admob-cn.com, DNS:*.admob-cn.com, DNS:googlesandbox-cn.com, DNS:* .googlesandbox-cn.com, DNS:*.safenup.googlesandbox-cn.com, DNS:*.gstatic.com, DN S:*.metric.gstatic.com, DNS:*.gvt1.com, DNS:*.gcpcdn.gvt1.com, DNS:*.gvt2.com, D NS:*.gcp.gvt2.com, DNS:*.url.google.com, DNS:*.youtube-nocookie.com, DNS:*.ytimg .com, DNS:android.com, DNS:*.android.com, DNS:*.flash.android.com, DNS:g.cn, DNS :*.g.cn, DNS:g.co, DNS:*.g.co, DNS:goo.gl, DNS:www.goo.gl, DNS:google-analytics. com, DNS:*.google-analytics.com, DNS:google.com, DNS:googlecommerce.com, DNS:*.g ooglecommerce.com, DNS:ggpht.cn, DNS:*.ggpht.cn, DNS:urchin.com, DNS:*.urchin.co m, DNS:youtu.be, DNS:youtube.com, DNS:*.youtube.com, DNS:youtubeeducation.com, D NS:*.youtubeeducation.com, DNS:youtubekids.com, DNS:*.youtubekids.com, DNS:yt.be , DNS:*.yt.be, DNS:android.clients.google.com, DNS:developer.android.google.cn, DNS:developers.android.google.cn, DNS:source.android.google.cn, DNS:developer.ch rome.google.cn, DNS:web.developers.google.cn ... Jeff From teward at thomas-ward.net Fri Mar 15 18:37:11 2024 From: teward at thomas-ward.net (Thomas Ward) Date: Fri, 15 Mar 2024 18:37:11 +0000 Subject: No SNI support on multisite installation In-Reply-To: References: Message-ID: Jeffrey, If I read OP's information right, the test they were seeing was that it says it needs SNI support and a number of browsers showed "No SNI support". I know from testing OpenResty supports SNI. That isn't the issue here I believe. Sent from my Galaxy -------- Original message -------- From: Jeffrey Walton Date: 3/15/24 14:24 (GMT-05:00) To: nginx at nginx.org Cc: Thomas Ward Subject: Re: No SNI support on multisite installation On Fri, Mar 15, 2024 at 2:05 PM Thomas Ward via nginx wrote: > > If you only have one IP, then you cannot fix this. 
SNI is what determines which certificate to serve for the request. The only solution would be individual IPs for each domain, thus not needing SNI to get the correct cert for each domain.

[...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From noloader at gmail.com  Fri Mar 15 18:47:25 2024
From: noloader at gmail.com (Jeffrey Walton)
Date: Fri, 15 Mar 2024 14:47:25 -0400
Subject: No SNI support on multisite installation
In-Reply-To: 
References: 
Message-ID: 

On Fri, Mar 15, 2024 at 2:37 PM Thomas Ward wrote:
>
> Jeffrey,
>
> If I read OP's information right, the test they were seeing was that it says it needs SNI support and a number of browsers showed "No SNI support". I know from testing OpenResty supports SNI. That isn't the issue here I believe.

My bad. After reading , it seemed like openrusty did not have the native support for SNI. Or did not have it enabled by default.

Jeff

From srebecchi at kameleoon.com  Mon Mar 18 13:41:50 2024
From: srebecchi at kameleoon.com (=?UTF-8?Q?S=C3=A9bastien_Rebecchi?=)
Date: Mon, 18 Mar 2024 14:41:50 +0100
Subject: number of keepalive connections to an upstream
Message-ID: 

Hello,

What is the good rule of thumbs for setting the number of keepalive connections to an upstream group?

1.
https://www.nginx.com/blog/performance-tuning-tips-tricks/ in this blog, the writer seems to recommend a constant value of 128, no real explanation why it would fit whatever the number of servers in the upstream 2. https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive the upstream module doc seems to recommend a rule like 16 times the number of servers in the upstream, as we have two examples with respectively keepalive 32 for 2 upstream servers and keepalive 16 for 1 upstream server 3. https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#no-keepalives in this blog, the writer recommends a rule of 2 times the number of servers in the upstream I used to follow rule of item 3 as it comes with a somewhat good explanation, but it does not seem to be largely accepted. What could explain such a divergence between several sources? What would you recommend please? Regards, Sébastien. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tacodewolff at gmail.com Tue Mar 19 12:39:12 2024 From: tacodewolff at gmail.com (Taco de Wolff) Date: Tue, 19 Mar 2024 09:39:12 -0300 Subject: Unable to activate TLS1.3 Message-ID: Hi, I'm using Nginx 1.25.4 with the OpenSSL 1.1.1k FIPS build on CentOS Stream 8 (FIPS not enabled). I have checked that the OpenSSL library can connect to other services using TLS1.3 and Postfix + Dovecot work fine on TLS1.3 as well, but Nginx doesn't seem to enable TLS1.3 as reported by SSLLabs and by checking manually using: $ openssl s_client -connect domain.com:443 -tls1_3 CONNECTED(00000003) 4027EC8EC57D0000:error:0A00042E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:865:SSL alert number 70 TLS1.2 works fine though, and I'm sure TLS1.3 used to work but I can't figure out what has changed. The relevant configuration: http { # SSL ssl_session_timeout 1d; ssl_session_cache shared:SSL:32m; ssl_session_tickets off; # Diffie-Hellman parameter for DHE ciphersuites ssl_dhparam /etc/nginx/dhparam.pem; # SSL ciphers ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305; #ssl_prefer_server_ciphers on; # OCSP Stapling ssl_stapling on; ssl_stapling_verify on; #ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates; resolver 1.1.1.1 1.0.0.1 208.67.222.222 208.67.220.220 valid=60s; resolver_timeout 2s; # HTTP3 http3_hq on; quic_gso on; quic_retry on; #ssl_early_data on; # ... } server { listen 443 ssl; listen 443 quic; listen [::]:443 ssl; listen [::]:443 quic; http2 on; # SSL ssl_certificate /etc/pki/lego/certificates/domain.com.crt; ssl_certificate_key /etc/pki/lego/certificates/domain.com.key; ssl_trusted_certificate /etc/pki/lego/certificates/domain.com.issuer.crt; # ... } I'm really at a loss and unsure how to proceed debugging this. What else could be the problem? Thank you for your time. Kind regards, Taco de Wolff -------------- next part -------------- An HTML attachment was scrubbed... URL: From tacodewolff at gmail.com Wed Mar 20 11:53:39 2024 From: tacodewolff at gmail.com (Taco de Wolff) Date: Wed, 20 Mar 2024 08:53:39 -0300 Subject: Unable to activate TLS1.3 In-Reply-To: References: Message-ID: I figured it out. 
One of the servers that is listening on 443 uses "ssl_reject_handshake on;" and thus I didn't define an ssl_certificate + ssl_certificate_key + ssl_trusted_certificate as it is not (and should not be) required. For some reason, this disabled TLS1.3 for all servers quite unexpectedly. Adding all three variables and keeping the ssl_reject_handshake, re-enabled TLS1.3 (eventhough TLS1.2 works fine in both cases). Could this be a bug? Kind regards, Taco de Wolff Op di 19 mrt 2024 om 09:39 schreef Taco de Wolff : > Hi, > > I'm using Nginx 1.25.4 with the OpenSSL 1.1.1k FIPS build on CentOS Stream > 8 (FIPS not enabled). I have checked that the OpenSSL library can connect > to other services using TLS1.3 and Postfix + Dovecot work fine on TLS1.3 as > well, but Nginx doesn't seem to enable TLS1.3 as reported by SSLLabs and by > checking manually using: > > $ openssl s_client -connect domain.com:443 -tls1_3 > CONNECTED(00000003) > 4027EC8EC57D0000:error:0A00042E:SSL routines:ssl3_read_bytes:tlsv1 alert > protocol version:ssl/record/rec_layer_s3.c:865:SSL alert number 70 > > TLS1.2 works fine though, and I'm sure TLS1.3 used to work but I can't > figure out what has changed. The relevant configuration: > > > http { > # SSL > ssl_session_timeout 1d; > ssl_session_cache shared:SSL:32m; > ssl_session_tickets off; > > # Diffie-Hellman parameter for DHE ciphersuites > ssl_dhparam /etc/nginx/dhparam.pem; > > # SSL ciphers > ssl_protocols TLSv1.2 TLSv1.3; > ssl_ciphers > TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305; > #ssl_prefer_server_ciphers on; > > # OCSP Stapling > ssl_stapling on; > ssl_stapling_verify on; > #ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates; > resolver 1.1.1.1 1.0.0.1 208.67.222.222 208.67.220.220 valid=60s; > resolver_timeout 2s; > > # HTTP3 > http3_hq on; > quic_gso on; > quic_retry on; > #ssl_early_data on; > > # ... > } > > server { > listen 443 ssl; > listen 443 quic; > listen [::]:443 ssl; > listen [::]:443 quic; > > http2 on; > > # SSL > ssl_certificate /etc/pki/lego/certificates/domain.com.crt; > ssl_certificate_key /etc/pki/lego/certificates/domain.com.key; > ssl_trusted_certificate > /etc/pki/lego/certificates/domain.com.issuer.crt; > > # ... > } > > > I'm really at a loss and unsure how to proceed debugging this. What else > could be the problem? Thank you for your time. > > Kind regards, > Taco de Wolff > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Wed Mar 20 12:26:30 2024 From: iippolitov at nginx.com (Igor Ippolitov) Date: Wed, 20 Mar 2024 12:26:30 +0000 Subject: number of keepalive connections to an upstream In-Reply-To: References: Message-ID: <91fa7cf3-7714-4519-8d46-52c891c6972e@nginx.com> Sébastien, Keepalive in an upstream defines a pool of connections attached to that upstream. The main purpose of the pool is to reduce the amount of new TCP connections: the fewer new connections you open the less load you have. Any specific recommendation will fail in some case. So the real value is dictated by your load and your upstream applications. 
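For reference, the usual shape of such a pool, with purely illustrative addresses and numbers:

    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        keepalive 16;    # idle connections cached per worker process
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # HTTP/1.1 is required for connection reuse
            proxy_set_header Connection "";  # do not forward "Connection: close"
        }
    }

Without the last two directives the cached connections are not actually reused.
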
Consider the following when choosing a value: If the pool is smaller than the number of servers in an upstream group - nginx may end up closing connections to an upstream every time. So the common sense is to have keepalive pool at least as big as there are servers in a group (10 servers dictate having a pool of at least 10 connections, 1 per server). If you have a low count of lightweight upstream processes (say, it's another nginx) and a high count of concurrent requests - the value for keepalive can easily be in thousands. On the other hand, if you have 10 concurrent connections and 5 servers in an upstream something like "15" would be a good choice. Be careful setting high values though: in opensource version keepalive is set per worker. So if you have 'keepalive 10' and 16 workers you will end up with 160 connections from nginx to an upstream. I hope this answers your question. Kind regards, Igor On 18/03/2024 13:41, Sébastien Rebecchi wrote: > Hello, > > What is the good rule of thumbs for setting the number of keepalive > connections to an upstream group? > > 1. https://www.nginx.com/blog/performance-tuning-tips-tricks/ > in this blog, the writer seems to recommend a constant value of 128, > no real explanation why it would fit whatever the number of servers in > the upstream > > 2. https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive > the upstream module doc seems to recommend a rule like 16 times the > number of servers in the upstream, as we have two examples with > respectively keepalive 32 for 2 upstream servers and keepalive 16 for > 1 upstream server > > 3. > https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#no-keepalives > in this blog, the writer recommends a rule of 2 times the number of > servers in the upstream > > I used to follow rule of item 3 as it comes with a somewhat good > explanation, but it does not seem to be largely accepted. > > What could explain such a divergence between several sources? What > would you recommend please? > > Regards, > > Sébastien. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From srebecchi at kameleoon.com Mon Mar 25 12:31:48 2024 From: srebecchi at kameleoon.com (=?UTF-8?Q?S=C3=A9bastien_Rebecchi?=) Date: Mon, 25 Mar 2024 13:31:48 +0100 Subject: Nginx prematurely closing connections when reloaded Message-ID: Hello I have an issue with nginx closing prematurely connections when reload is performed. I have some nginx servers configured to proxy_pass requests to an upstream group. This group itself is composed of several servers which are nginx themselves, and is configured to use keepalive connections. When I trigger a reload (-s reload) on an nginx of one of the servers which is target of the upstream, I see in error logs of all servers in front that connection was reset by the nginx which was reloaded. 
Here configuration of upstream group (IPs are hidden replaced by IP_X): --- BEGIN --- upstream data_api { random; server IP_1:80 max_fails=3 fail_timeout=30s; server IP_2:80 max_fails=3 fail_timeout=30s; server IP_3:80 max_fails=3 fail_timeout=30s; server IP_4:80 max_fails=3 fail_timeout=30s; server IP_5:80 max_fails=3 fail_timeout=30s; server IP_6:80 max_fails=3 fail_timeout=30s; server IP_7:80 max_fails=3 fail_timeout=30s; server IP_8:80 max_fails=3 fail_timeout=30s; server IP_9:80 max_fails=3 fail_timeout=30s; server IP_10:80 max_fails=3 fail_timeout=30s; keepalive 20; } --- END --- Here configuration of the location using this upstream: --- BEGIN --- location / { proxy_pass http://data_api; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Host $host; proxy_set_header X-Real-IP $real_ip; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 2s; proxy_send_timeout 6s; proxy_read_timeout 10s; proxy_next_upstream error timeout http_502 http_504; } --- END --- And here the kind of error messages I get when I reload nginx of "IP_1": --- BEGIN --- 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request: "POST /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: " http://IP_1:80/REQUEST_LOCATION_HIDDEN", host: "HOST_HIDDEN", referrer: "REFERRER_HIDDEN" --- END --- I thought -s reload was doing graceful shutdown of connections. Is it due to the fact that nginx can not handle that when using keepalive connections? Is it a bug? I am using nginx 1.24.0 everywhere, no particular Thank you for any help. Sébastien -------------- next part -------------- An HTML attachment was scrubbed... URL: From crh3675 at gmail.com Mon Mar 25 16:31:48 2024 From: crh3675 at gmail.com (Craig Hoover) Date: Mon, 25 Mar 2024 12:31:48 -0400 Subject: Nginx prematurely closing connections when reloaded In-Reply-To: References: Message-ID: What language is your upstream API written in and are you hosting in Amazon? Even if you aren't hosting in Amazon, it seems your values are quite low for connect, read and send. If you are running NodeJS for the upstream, one thing I have found is that certain values need to be offset to avoid the 502 errors: 1. nGinx: proxy_read_timeout and proxy_send_timeout should be set higher if not equal (60s) 2. NodeJS: server.keepAliveTimeout = 70 * 1000; server.headersTimeout = 75 * 1000; keepaliveTimeout for the NodeJS app needs to be longer than the nginx timeouts and headersTimeout needs to slightly extend keepAliveTimeout. Not sure if this applies to other languages but that was an issue we ran into a few years ago. Craig On Mon, Mar 25, 2024 at 8:32 AM Sébastien Rebecchi wrote: > Hello > > > I have an issue with nginx closing prematurely connections when reload is > performed. > > > I have some nginx servers configured to proxy_pass requests to an > upstream group. This group itself is composed of several servers which are > nginx themselves, and is configured to use keepalive connections. > > When I trigger a reload (-s reload) on an nginx of one of the servers > which is target of the upstream, I see in error logs of all servers in > front that connection was reset by the nginx which was reloaded. 
> > > Here configuration of upstream group (IPs are hidden replaced by IP_X): > > --- BEGIN --- > > upstream data_api { > > random; > > > server IP_1:80 max_fails=3 fail_timeout=30s; > > server IP_2:80 max_fails=3 fail_timeout=30s; > > server IP_3:80 max_fails=3 fail_timeout=30s; > > server IP_4:80 max_fails=3 fail_timeout=30s; > > server IP_5:80 max_fails=3 fail_timeout=30s; > > server IP_6:80 max_fails=3 fail_timeout=30s; > > server IP_7:80 max_fails=3 fail_timeout=30s; > > server IP_8:80 max_fails=3 fail_timeout=30s; > > server IP_9:80 max_fails=3 fail_timeout=30s; > > server IP_10:80 max_fails=3 fail_timeout=30s; > > > keepalive 20; > > } > > --- END --- > > > Here configuration of the location using this upstream: > > --- BEGIN --- > > location / { > > proxy_pass http://data_api; > > > proxy_http_version 1.1; > > proxy_set_header Connection ""; > > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $real_ip; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > > proxy_connect_timeout 2s; > > proxy_send_timeout 6s; > > proxy_read_timeout 10s; > > > proxy_next_upstream error timeout http_502 http_504; > > } > > --- END --- > > > And here the kind of error messages I get when I reload nginx of "IP_1": > > --- BEGIN --- > > 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed (104: > Connection reset by peer) while reading response header from upstream, > client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request: "POST > /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: " > http://IP_1:80/REQUEST_LOCATION_HIDDEN", host: "HOST_HIDDEN", referrer: > "REFERRER_HIDDEN" > > --- END --- > > > I thought -s reload was doing graceful shutdown of connections. Is it due > to the fact that nginx can not handle that when using keepalive > connections? Is it a bug? > > I am using nginx 1.24.0 everywhere, no particular > > > Thank you for any help. > > > Sébastien > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From crh3675 at gmail.com Mon Mar 25 16:34:32 2024 From: crh3675 at gmail.com (Craig Hoover) Date: Mon, 25 Mar 2024 12:34:32 -0400 Subject: Nginx prematurely closing connections when reloaded In-Reply-To: References: Message-ID: <428DC36B-3F88-416A-A958-4CAEF126B0B1@gmail.com> An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Mon Mar 25 16:59:26 2024 From: iippolitov at nginx.com (Igor Ippolitov) Date: Mon, 25 Mar 2024 16:59:26 +0000 Subject: Nginx prematurely closing connections when reloaded In-Reply-To: References: Message-ID: <1521c6a0-20d4-41ee-abeb-cb98d8732e08@nginx.com> Sébastien, Nginx should keep active connections open and wait for a request to complete before closing. A reload starts a new set of workers while old workers wait for old connections to shut down. The only exception I'm aware of is having worker_shutdown_timeout configured: in this case a worker will wait till this timeout and forcibly close a connection. Be default there is no timeout. It would be curious to see error log of nginx at IP_1 (the reloaded one) while the reload happens. It may explain the reason for connection resets. Kind regards, Igor. On 25/03/2024 12:31, Sébastien Rebecchi wrote: > > Hello > > > I have an issue with nginx closing prematurely connections when reload > is performed. > > > I have some nginx servers configured to proxy_pass requests to an > upstream group. 
This group itself is composed of several servers which > are nginx themselves, and is configured to use keepalive connections. > > When I trigger a reload (-s reload) on an nginx of one of the servers > which is target of the upstream, I see in error logs of all servers in > front that connection was reset by the nginx which was reloaded. > > > Here configuration of upstream group (IPs are hidden replaced by IP_X): > > --- BEGIN --- > > upstream data_api { > > random; > > > server IP_1:80 max_fails=3 fail_timeout=30s; > > server IP_2:80 max_fails=3 fail_timeout=30s; > > server IP_3:80 max_fails=3 fail_timeout=30s; > > server IP_4:80 max_fails=3 fail_timeout=30s; > > server IP_5:80 max_fails=3 fail_timeout=30s; > > server IP_6:80 max_fails=3 fail_timeout=30s; > > server IP_7:80 max_fails=3 fail_timeout=30s; > > server IP_8:80 max_fails=3 fail_timeout=30s; > > server IP_9:80 max_fails=3 fail_timeout=30s; > > server IP_10:80 max_fails=3 fail_timeout=30s; > > > keepalive 20; > > } > > --- END --- > > > Here configuration of the location using this upstream: > > --- BEGIN --- > > location / { > > proxy_pass http://data_api; > > > proxy_http_version 1.1; > > proxy_set_header Connection ""; > > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $real_ip; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > > proxy_connect_timeout 2s; > > proxy_send_timeout 6s; > > proxy_read_timeout 10s; > > > proxy_next_upstream error timeout http_502 http_504; > > } > > --- END --- > > > And here the kind of error messages I get when I reload nginx of "IP_1": > > --- BEGIN --- > > 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed (104: > Connection reset by peer) while reading response header from upstream, > client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request: "POST > /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: > "http://IP_1:80/REQUEST_LOCATION_HIDDEN > ", host: "HOST_HIDDEN", > referrer: "REFERRER_HIDDEN" > > --- END --- > > > I thought -s reload was doing graceful shutdown of connections. Is it > due to the fact that nginx can not handle that when using keepalive > connections? Is it a bug? > > I am using nginx 1.24.0 everywhere, no particular > > > Thank you for any help. > > > Sébastien > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From srebecchi at kameleoon.com Tue Mar 26 12:41:08 2024 From: srebecchi at kameleoon.com (=?UTF-8?Q?S=C3=A9bastien_Rebecchi?=) Date: Tue, 26 Mar 2024 13:41:08 +0100 Subject: Nginx prematurely closing connections when reloaded In-Reply-To: <1521c6a0-20d4-41ee-abeb-cb98d8732e08@nginx.com> References: <1521c6a0-20d4-41ee-abeb-cb98d8732e08@nginx.com> Message-ID: Hi Igor There is no special logs on the IP_1 (the reloaded one) side, only 1 log line, which is expected: --- BEGIN --- 2024/03/26 13:37:55 [notice] 3928855#0: signal process started --- END --- I did not configure worker_shutdown_timeout, it is unlimited. Sébastien. Le lun. 25 mars 2024 à 17:59, Igor Ippolitov a écrit : > Sébastien, > > Nginx should keep active connections open and wait for a request to > complete before closing. > A reload starts a new set of workers while old workers wait for old > connections to shut down. > The only exception I'm aware of is having worker_shutdown_timeout > configured: in this case a worker will wait till this timeout and forcibly > close a connection. 
Be default there is no timeout. > > It would be curious to see error log of nginx at IP_1 (the reloaded one) > while the reload happens. It may explain the reason for connection resets. > > Kind regards, > Igor. > > On 25/03/2024 12:31, Sébastien Rebecchi wrote: > > Hello > > > I have an issue with nginx closing prematurely connections when reload is > performed. > > > I have some nginx servers configured to proxy_pass requests to an > upstream group. This group itself is composed of several servers which are > nginx themselves, and is configured to use keepalive connections. > > When I trigger a reload (-s reload) on an nginx of one of the servers > which is target of the upstream, I see in error logs of all servers in > front that connection was reset by the nginx which was reloaded. > > > Here configuration of upstream group (IPs are hidden replaced by IP_X): > > --- BEGIN --- > > upstream data_api { > > random; > > > server IP_1:80 max_fails=3 fail_timeout=30s; > > server IP_2:80 max_fails=3 fail_timeout=30s; > > server IP_3:80 max_fails=3 fail_timeout=30s; > > server IP_4:80 max_fails=3 fail_timeout=30s; > > server IP_5:80 max_fails=3 fail_timeout=30s; > > server IP_6:80 max_fails=3 fail_timeout=30s; > > server IP_7:80 max_fails=3 fail_timeout=30s; > > server IP_8:80 max_fails=3 fail_timeout=30s; > > server IP_9:80 max_fails=3 fail_timeout=30s; > > server IP_10:80 max_fails=3 fail_timeout=30s; > > > keepalive 20; > > } > > --- END --- > > > Here configuration of the location using this upstream: > > --- BEGIN --- > > location / { > > proxy_pass http://data_api; > > > proxy_http_version 1.1; > > proxy_set_header Connection ""; > > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $real_ip; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > > proxy_connect_timeout 2s; > > proxy_send_timeout 6s; > > proxy_read_timeout 10s; > > > proxy_next_upstream error timeout http_502 http_504; > > } > > --- END --- > > > And here the kind of error messages I get when I reload nginx of "IP_1": > > --- BEGIN --- > > 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed (104: > Connection reset by peer) while reading response header from upstream, > client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request: "POST > /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: " > http://IP_1:80/REQUEST_LOCATION_HIDDEN", host: "HOST_HIDDEN", referrer: > "REFERRER_HIDDEN" > > --- END --- > > > I thought -s reload was doing graceful shutdown of connections. Is it due > to the fact that nginx can not handle that when using keepalive > connections? Is it a bug? > > I am using nginx 1.24.0 everywhere, no particular > > > Thank you for any help. > > > Sébastien > > _______________________________________________ > nginx mailing listnginx at nginx.orghttps://mailman.nginx.org/mailman/listinfo/nginx > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xcripz at gmail.com Wed Mar 27 14:22:04 2024 From: xcripz at gmail.com (=?UTF-8?B?0JjQstCw0L0g0JPRgNC40LPQvtGA0YzQtdCy?=) Date: Wed, 27 Mar 2024 17:22:04 +0300 Subject: Fwd: Checking cert+key bundle In-Reply-To: References: Message-ID: Hello. I discovered an unusual behavior (imho) - if you “mix” cryptographic algorithms (ECDSA certificate, RSA key or vice versa), then nginx doesn't report a problem when reloading and applies the configuration (there are no errors when executing nginx -t either). Is this expected behavior or does it look like a bug? 
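Independent of whether nginx ought to catch this, one way to check a certificate/key pair outside of nginx (file names below are placeholders) is to compare the public key embedded in the certificate with the public key derived from the private key:

    openssl x509 -in server.crt -noout -pubkey | openssl sha256
    openssl pkey -in server.key -pubout | openssl sha256

The two digests must be identical for a matching pair, which is impossible when one file is RSA and the other is ECDSA.
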
I checked it on different versions and OS, but just in case:

snake at carbon:~$ nginx -V
nginx version: nginx/1.24.0
built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
built with OpenSSL 1.1.1n 15 Mar 2022 (running with OpenSSL 1.1.1f 31 Mar 2020)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -ffile-prefix-map=/data/builder/debuild/nginx-1.24.0/debian/debuild-base/nginx-1.24.0=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From iippolitov at nginx.com  Thu Mar 28 15:40:03 2024
From: iippolitov at nginx.com (Igor Ippolitov)
Date: Thu, 28 Mar 2024 15:40:03 +0000
Subject: Nginx prematurely closing connections when reloaded
In-Reply-To: 
References: <1521c6a0-20d4-41ee-abeb-cb98d8732e08@nginx.com>
Message-ID: <8675534d-3cec-4f25-8568-fce39d7871c2@nginx.com>

Sébastien,

The message about the signal process is only the beginning of the process. You are interested in messages like the following:

> 2024/03/26 13:36:36 [notice] 723#723: signal 1 (SIGHUP) received from 69064, reconfiguring
> 2024/03/26 13:36:36 [notice] 723#723: reconfiguring
> 2024/03/26 13:36:36 [notice] 723#723: using the "epoll" event method
> 2024/03/26 13:36:36 [notice] 723#723: start worker processes
> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69065
> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69066
> 2024/03/26 13:36:36 [notice] 723#723: start cache manager process 69067
> 2024/03/26 13:36:36 [notice] 61903#61903: gracefully shutting down
> 2024/03/26 13:36:36 [notice] 61905#61905: exiting
> 2024/03/26 13:36:36 [notice] 61903#61903: exiting
> 2024/03/26 13:36:36 [notice] 61904#61904: gracefully shutting down
> 2024/03/26 13:36:36 [notice] 61904#61904: exiting
> 2024/03/26 13:36:36 [notice] 61903#61903: exit

Note the 'gracefully shutting down' and 'exiting' message from workers. Also the 'start' and 'reconfiguring' messages from the master process. There should be a similar sequence somewhere in your logs. Having these logs may help explaining what happens on a reload.

Kind regards,
Igor.
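A quick way to reproduce and capture that sequence (paths illustrative): the messages shown above are emitted at the notice level, so the main error_log has to allow it, and then a reload can be watched live:

    # top-level context of nginx.conf, outside http {}
    error_log /var/log/nginx/error.log notice;

    # then, in a shell on the reloaded host:
    tail -f /var/log/nginx/error.log &
    nginx -t && nginx -s reload
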
On 26/03/2024 12:41, Sébastien Rebecchi wrote: > Hi Igor > > There is no special logs on the IP_1 (the reloaded one) side, only 1 > log line, which is expected: > --- BEGIN --- > 2024/03/26 13:37:55 [notice] 3928855#0: signal process started > --- END --- > > I did not configure worker_shutdown_timeout, it is unlimited. > > Sébastien. > > Le lun. 25 mars 2024 à 17:59, Igor Ippolitov a > écrit : > > Sébastien, > > Nginx should keep active connections open and wait for a request > to complete before closing. > A reload starts a new set of workers while old workers wait for > old connections to shut down. > The only exception I'm aware of is having worker_shutdown_timeout > configured: in this case a worker will wait till this timeout and > forcibly close a connection. Be default there is no timeout. > > It would be curious to see error log of nginx at IP_1 (the > reloaded one) while the reload happens. It may explain the reason > for connection resets. > > Kind regards, > Igor. > > On 25/03/2024 12:31, Sébastien Rebecchi wrote: >> >> Hello >> >> >> I have an issue with nginx closing prematurely connections when >> reload is performed. >> >> >> I have some nginx servers configured to proxy_pass requests to an >> upstream group. This group itself is composed of several servers >> which are nginx themselves, and is configured to use keepalive >> connections. >> >> When I trigger a reload (-s reload) on an nginx of one of the >> servers which is target of the upstream, I see in error logs of >> all servers in front that connection was reset by the nginx which >> was reloaded. >> >> >> Here configuration of upstream group (IPs are hidden replaced by >> IP_X): >> >> --- BEGIN --- >> >> upstream data_api { >> >> random; >> >> >> server IP_1:80 max_fails=3 fail_timeout=30s; >> >> server IP_2:80 max_fails=3 fail_timeout=30s; >> >> server IP_3:80 max_fails=3 fail_timeout=30s; >> >> server IP_4:80 max_fails=3 fail_timeout=30s; >> >> server IP_5:80 max_fails=3 fail_timeout=30s; >> >> server IP_6:80 max_fails=3 fail_timeout=30s; >> >> server IP_7:80 max_fails=3 fail_timeout=30s; >> >> server IP_8:80 max_fails=3 fail_timeout=30s; >> >> server IP_9:80 max_fails=3 fail_timeout=30s; >> >> server IP_10:80 max_fails=3 fail_timeout=30s; >> >> >> keepalive 20; >> >> } >> >> --- END --- >> >> >> Here configuration of the location using this upstream: >> >> --- BEGIN --- >> >> location / { >> >> proxy_pass http://data_api; >> >> >> proxy_http_version 1.1; >> >> proxy_set_header Connection ""; >> >> >> proxy_set_header Host $host; >> >> proxy_set_header X-Real-IP $real_ip; >> >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> >> >> proxy_connect_timeout 2s; >> >> proxy_send_timeout 6s; >> >> proxy_read_timeout 10s; >> >> >> proxy_next_upstream error timeout http_502 http_504; >> >> } >> >> --- END --- >> >> >> And here the kind of error messages I get when I reload nginx of >> "IP_1": >> >> --- BEGIN --- >> >> 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed >> (104: Connection reset by peer) while reading response header >> from upstream, client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, >> request: "POST /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: >> "http://IP_1:80/REQUEST_LOCATION_HIDDEN >> ", host: "HOST_HIDDEN", >> referrer: "REFERRER_HIDDEN" >> >> --- END --- >> >> >> I thought -s reload was doing graceful shutdown of connections. >> Is it due to the fact that nginx can not handle that when using >> keepalive connections? Is it a bug? 
>> >> I am using nginx 1.24.0 everywhere, no particular >> >> >> Thank you for any help. >> >> >> Sébastien >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> https://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From srebecchi at kameleoon.com Thu Mar 28 18:27:21 2024 From: srebecchi at kameleoon.com (=?UTF-8?Q?S=C3=A9bastien_Rebecchi?=) Date: Thu, 28 Mar 2024 19:27:21 +0100 Subject: Nginx prematurely closing connections when reloaded In-Reply-To: <8675534d-3cec-4f25-8568-fce39d7871c2@nginx.com> References: <1521c6a0-20d4-41ee-abeb-cb98d8732e08@nginx.com> <8675534d-3cec-4f25-8568-fce39d7871c2@nginx.com> Message-ID: Hi Igor, Thanks for the answer. I really got that message 'signal process started' every time i do 'nginx -s reload' and this is the only log line I have, I don't have the other lines you mentioned. Is there anything to do to enable those logs? Sébastien Le jeu. 28 mars 2024, 16:40, Igor Ippolitov a écrit : > Sébastien, > > The message about the signal process is only the beginning of the process. > You are interested in messages like the following: > > 2024/03/26 13:36:36 [notice] 723#723: signal 1 (SIGHUP) received from > 69064, reconfiguring > 2024/03/26 13:36:36 [notice] 723#723: reconfiguring > 2024/03/26 13:36:36 [notice] 723#723: using the "epoll" event method > 2024/03/26 13:36:36 [notice] 723#723: start worker processes > 2024/03/26 13:36:36 [notice] 723#723: start worker process 69065 > 2024/03/26 13:36:36 [notice] 723#723: start worker process 69066 > 2024/03/26 13:36:36 [notice] 723#723: start cache manager process 69067 > 2024/03/26 13:36:36 [notice] 61903#61903: gracefully shutting down > 2024/03/26 13:36:36 [notice] 61905#61905: exiting > 2024/03/26 13:36:36 [notice] 61903#61903: exiting > 2024/03/26 13:36:36 [notice] 61904#61904: gracefully shutting down > 2024/03/26 13:36:36 [notice] 61904#61904: exiting > 2024/03/26 13:36:36 [notice] 61903#61903: exit > > > Note the 'gracefully shutting down' and 'exiting' message from workers. > Also the 'start' and 'reconfiguring' messages from the master process. > There should be a similar sequence somewhere in your logs. > Having these logs may help explaining what happens on a reload. > > Kind regards, > Igor. > > On 26/03/2024 12:41, Sébastien Rebecchi wrote: > > Hi Igor > > There is no special logs on the IP_1 (the reloaded one) side, only 1 log > line, which is expected: > --- BEGIN --- > 2024/03/26 13:37:55 [notice] 3928855#0: signal process started > --- END --- > > I did not configure worker_shutdown_timeout, it is unlimited. > > Sébastien. > > Le lun. 25 mars 2024 à 17:59, Igor Ippolitov a > écrit : > >> Sébastien, >> >> Nginx should keep active connections open and wait for a request to >> complete before closing. >> A reload starts a new set of workers while old workers wait for old >> connections to shut down. >> The only exception I'm aware of is having worker_shutdown_timeout >> configured: in this case a worker will wait till this timeout and forcibly >> close a connection. Be default there is no timeout. >> >> It would be curious to see error log of nginx at IP_1 (the reloaded one) >> while the reload happens. It may explain the reason for connection resets. >> >> Kind regards, >> Igor. >> >> On 25/03/2024 12:31, Sébastien Rebecchi wrote: >> >> Hello >> >> >> I have an issue with nginx closing prematurely connections when reload >> is performed. 
>> >> >> I have some nginx servers configured to proxy_pass requests to an >> upstream group. This group itself is composed of several servers which are >> nginx themselves, and is configured to use keepalive connections. >> >> When I trigger a reload (-s reload) on an nginx of one of the servers >> which is target of the upstream, I see in error logs of all servers in >> front that connection was reset by the nginx which was reloaded. >> >> >> Here configuration of upstream group (IPs are hidden replaced by IP_X): >> >> --- BEGIN --- >> >> upstream data_api { >> >> random; >> >> >> server IP_1:80 max_fails=3 fail_timeout=30s; >> >> server IP_2:80 max_fails=3 fail_timeout=30s; >> >> server IP_3:80 max_fails=3 fail_timeout=30s; >> >> server IP_4:80 max_fails=3 fail_timeout=30s; >> >> server IP_5:80 max_fails=3 fail_timeout=30s; >> >> server IP_6:80 max_fails=3 fail_timeout=30s; >> >> server IP_7:80 max_fails=3 fail_timeout=30s; >> >> server IP_8:80 max_fails=3 fail_timeout=30s; >> >> server IP_9:80 max_fails=3 fail_timeout=30s; >> >> server IP_10:80 max_fails=3 fail_timeout=30s; >> >> >> keepalive 20; >> >> } >> >> --- END --- >> >> >> Here configuration of the location using this upstream: >> >> --- BEGIN --- >> >> location / { >> >> proxy_pass http://data_api; >> >> >> proxy_http_version 1.1; >> >> proxy_set_header Connection ""; >> >> >> proxy_set_header Host $host; >> >> proxy_set_header X-Real-IP $real_ip; >> >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> >> >> proxy_connect_timeout 2s; >> >> proxy_send_timeout 6s; >> >> proxy_read_timeout 10s; >> >> >> proxy_next_upstream error timeout http_502 http_504; >> >> } >> >> --- END --- >> >> >> And here the kind of error messages I get when I reload nginx of "IP_1": >> >> --- BEGIN --- >> >> 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed (104: >> Connection reset by peer) while reading response header from upstream, >> client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request: "POST >> /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: " >> http://IP_1:80/REQUEST_LOCATION_HIDDEN", host: "HOST_HIDDEN", referrer: >> "REFERRER_HIDDEN" >> >> --- END --- >> >> >> I thought -s reload was doing graceful shutdown of connections. Is it due >> to the fact that nginx can not handle that when using keepalive >> connections? Is it a bug? >> >> I am using nginx 1.24.0 everywhere, no particular >> >> >> Thank you for any help. >> >> >> Sébastien >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttps://mailman.nginx.org/mailman/listinfo/nginx >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Thu Mar 28 23:04:44 2024 From: iippolitov at nginx.com (Igor Ippolitov) Date: Thu, 28 Mar 2024 23:04:44 +0000 Subject: Nginx prematurely closing connections when reloaded In-Reply-To: References: <1521c6a0-20d4-41ee-abeb-cb98d8732e08@nginx.com> <8675534d-3cec-4f25-8568-fce39d7871c2@nginx.com> Message-ID: <7e2f5384-4b04-45ce-abc3-b4662907de99@nginx.com> Sébastien, Is it possible that messages go to another log file? These messages go to the main error log file, defined in the root context. Another common pitfall is a log level above notice. Try setting error log to a more verbose one, maybe? Regards, Igor. On 28/03/2024 18:27, Sébastien Rebecchi wrote: > Hi Igor, > > Thanks for the answer. 
> > I really got that message 'signal process started' every time i do > 'nginx -s reload' and this is the only log line I have, I don't have > the other lines you mentioned. Is there anything to do to enable those > logs? > > Sébastien > > Le jeu. 28 mars 2024, 16:40, Igor Ippolitov a > écrit : > > Sébastien, > > The message about the signal process is only the beginning of the > process. > You are interested in messages like the following: > >> 2024/03/26 13:36:36 [notice] 723#723: signal 1 (SIGHUP) received >> from 69064, reconfiguring >> 2024/03/26 13:36:36 [notice] 723#723: reconfiguring >> 2024/03/26 13:36:36 [notice] 723#723: using the "epoll" event method >> 2024/03/26 13:36:36 [notice] 723#723: start worker processes >> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69065 >> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69066 >> 2024/03/26 13:36:36 [notice] 723#723: start cache manager process >> 69067 >> 2024/03/26 13:36:36 [notice] 61903#61903: gracefully shutting down >> 2024/03/26 13:36:36 [notice] 61905#61905: exiting >> 2024/03/26 13:36:36 [notice] 61903#61903: exiting >> 2024/03/26 13:36:36 [notice] 61904#61904: gracefully shutting down >> 2024/03/26 13:36:36 [notice] 61904#61904: exiting >> 2024/03/26 13:36:36 [notice] 61903#61903: exit > > Note the 'gracefully shutting down' and 'exiting' message from > workers. Also the 'start' and 'reconfiguring' messages from the > master process. > There should be a similar sequence somewhere in your logs. > Having these logs may help explaining what happens on a reload. > > Kind regards, > Igor. > > On 26/03/2024 12:41, Sébastien Rebecchi wrote: >> Hi Igor >> >> There is no special logs on the IP_1 (the reloaded one) side, >> only 1 log line, which is expected: >> --- BEGIN --- >> 2024/03/26 13:37:55 [notice] 3928855#0: signal process started >> --- END --- >> >> I did not configure worker_shutdown_timeout, it is unlimited. >> >> Sébastien. >> >> Le lun. 25 mars 2024 à 17:59, Igor Ippolitov >> a écrit : >> >> Sébastien, >> >> Nginx should keep active connections open and wait for a >> request to complete before closing. >> A reload starts a new set of workers while old workers wait >> for old connections to shut down. >> The only exception I'm aware of is having >> worker_shutdown_timeout configured: in this case a worker >> will wait till this timeout and forcibly close a connection. >> Be default there is no timeout. >> >> It would be curious to see error log of nginx at IP_1 (the >> reloaded one) while the reload happens. It may explain the >> reason for connection resets. >> >> Kind regards, >> Igor. >> >> On 25/03/2024 12:31, Sébastien Rebecchi wrote: >>> >>> Hello >>> >>> >>> I have an issue with nginx closing prematurely connections >>> when reload is performed. >>> >>> >>> I have some nginx servers configured to proxy_pass requests >>> to an upstream group. This group itself is composed of >>> several servers which are nginx themselves, and is >>> configured to use keepalive connections. >>> >>> When I trigger a reload (-s reload) on an nginx of one of >>> the servers which is target of the upstream, I see in error >>> logs of all servers in front that connection was reset by >>> the nginx which was reloaded. 
>>> >>> >>> Here configuration of upstream group (IPs are hidden >>> replaced by IP_X): >>> >>> --- BEGIN --- >>> >>> upstream data_api { >>> >>> random; >>> >>> >>> server IP_1:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_2:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_3:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_4:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_5:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_6:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_7:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_8:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_9:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_10:80 max_fails=3 fail_timeout=30s; >>> >>> >>> keepalive 20; >>> >>> } >>> >>> --- END --- >>> >>> >>> Here configuration of the location using this upstream: >>> >>> --- BEGIN --- >>> >>> location / { >>> >>> proxy_pass http://data_api; >>> >>> >>> proxy_http_version 1.1; >>> >>> proxy_set_header Connection ""; >>> >>> >>> proxy_set_header Host $host; >>> >>> proxy_set_header X-Real-IP $real_ip; >>> >>> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >>> >>> >>> proxy_connect_timeout 2s; >>> >>> proxy_send_timeout 6s; >>> >>> proxy_read_timeout 10s; >>> >>> >>> proxy_next_upstream error timeout http_502 http_504; >>> >>> } >>> >>> --- END --- >>> >>> >>> And here the kind of error messages I get when I reload >>> nginx of "IP_1": >>> >>> --- BEGIN --- >>> >>> 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() >>> failed (104: Connection reset by peer) while reading >>> response header from upstream, client: CLIENT_IP_HIDDEN, >>> server: SERVER_HIDDEN, request: "POST >>> /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: >>> "http://IP_1:80/REQUEST_LOCATION_HIDDEN >>> ", host: >>> "HOST_HIDDEN", referrer: "REFERRER_HIDDEN" >>> >>> --- END --- >>> >>> >>> I thought -s reload was doing graceful shutdown of >>> connections. Is it due to the fact that nginx can not handle >>> that when using keepalive connections? Is it a bug? >>> >>> I am using nginx 1.24.0 everywhere, no particular >>> >>> >>> Thank you for any help. >>> >>> >>> Sébastien >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> https://mailman.nginx.org/mailman/listinfo/nginx >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From srebecchi at kameleoon.com Fri Mar 29 08:40:57 2024 From: srebecchi at kameleoon.com (=?UTF-8?Q?S=C3=A9bastien_Rebecchi?=) Date: Fri, 29 Mar 2024 09:40:57 +0100 Subject: Nginx prematurely closing connections when reloaded In-Reply-To: <7e2f5384-4b04-45ce-abc3-b4662907de99@nginx.com> References: <1521c6a0-20d4-41ee-abeb-cb98d8732e08@nginx.com> <8675534d-3cec-4f25-8568-fce39d7871c2@nginx.com> <7e2f5384-4b04-45ce-abc3-b4662907de99@nginx.com> Message-ID: Hi Igor, I did not have error_log directive at main context, so it took default conf, which seems why i got only 1 log file. 
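For reference, the directive in question belongs in the main (top-level) context of nginx.conf; a minimal sketch, with the path and level here only as an illustration:

```
# main context, not inside http {}; "notice" or lower keeps the reload notices
error_log /var/log/nginx/error.log notice;
```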
I added the directive and now I have more logs when I do nginx -s reload:

2024/03/29 09:04:20 [notice] 1064394#0: signal process started
2024/03/29 09:04:20 [notice] 3718160#0: signal 1 (SIGHUP) received from 1064394, reconfiguring
2024/03/29 09:04:20 [notice] 3718160#0: reconfiguring
2024/03/29 09:04:20 [notice] 3718160#0: using the "epoll" event method
2024/03/29 09:04:20 [notice] 3718160#0: start worker processes
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064395
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064396
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064397
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064398
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064399
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064400
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064401
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064402
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064403
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064404
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064405
2024/03/29 09:04:20 [notice] 3718160#0: start worker process 1064406
2024/03/29 09:04:20 [notice] 1063598#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063599#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063600#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063601#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063602#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063603#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063604#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063607#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063608#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063597#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063605#0: gracefully shutting down
2024/03/29 09:04:20 [notice] 1063609#0: gracefully shutting down
2024/03/29 09:04:23 [notice] 3718160#0: signal 17 (SIGCHLD) received from 3989432
2024/03/29 09:04:23 [notice] 3718160#0: worker process 3989432 exited with code 0
2024/03/29 09:04:23 [notice] 3718160#0: signal 29 (SIGIO) received
2024/03/29 09:04:26 [notice] 1060347#0: exiting
2024/03/29 09:04:26 [notice] 1060347#0: exit
2024/03/29 09:04:26 [notice] 3718160#0: signal 17 (SIGCHLD) received from 1060347
2024/03/29 09:04:26 [notice] 3718160#0: worker process 1060347 exited with code 0
2024/03/29 09:04:26 [notice] 3718160#0: signal 29 (SIGIO) received
2024/03/29 09:04:29 [notice] 3718160#0: signal 17 (SIGCHLD) received from 3989423
2024/03/29 09:04:29 [notice] 3718160#0: worker process 3989423 exited with code 0
2024/03/29 09:04:29 [notice] 3718160#0: signal 29 (SIGIO) received
... etc ...

Could the problem I encounter be linked to this discussion?
https://mailman.nginx.org/pipermail/nginx-devel/2024-January/YSJATQMPXDIBETCDS46OTKUZNOJK6Q22.html

It looks like a race condition between a client that has started to send a new request at the same time as the server decided to close the idle connection. Is there a plan to add a bugfix in nginx to handle that properly?

Thanks,

Sébastien

On Fri, 29 Mar 2024 at 00:04, Igor Ippolitov wrote:

> Sébastien,
>
> Is it possible that messages go to another log file? These messages go to
> the main error log file, defined in the root context.
> Another common pitfall is a log level above notice. Try setting error log
> to a more verbose one, maybe?
>
> Regards,
> Igor.
> > > On 28/03/2024 18:27, Sébastien Rebecchi wrote: > > Hi Igor, > > Thanks for the answer. > > I really got that message 'signal process started' every time i do 'nginx > -s reload' and this is the only log line I have, I don't have the other > lines you mentioned. Is there anything to do to enable those logs? > > Sébastien > > Le jeu. 28 mars 2024, 16:40, Igor Ippolitov a > écrit : > >> Sébastien, >> >> The message about the signal process is only the beginning of the process. >> You are interested in messages like the following: >> >> 2024/03/26 13:36:36 [notice] 723#723: signal 1 (SIGHUP) received from >> 69064, reconfiguring >> 2024/03/26 13:36:36 [notice] 723#723: reconfiguring >> 2024/03/26 13:36:36 [notice] 723#723: using the "epoll" event method >> 2024/03/26 13:36:36 [notice] 723#723: start worker processes >> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69065 >> 2024/03/26 13:36:36 [notice] 723#723: start worker process 69066 >> 2024/03/26 13:36:36 [notice] 723#723: start cache manager process 69067 >> 2024/03/26 13:36:36 [notice] 61903#61903: gracefully shutting down >> 2024/03/26 13:36:36 [notice] 61905#61905: exiting >> 2024/03/26 13:36:36 [notice] 61903#61903: exiting >> 2024/03/26 13:36:36 [notice] 61904#61904: gracefully shutting down >> 2024/03/26 13:36:36 [notice] 61904#61904: exiting >> 2024/03/26 13:36:36 [notice] 61903#61903: exit >> >> >> Note the 'gracefully shutting down' and 'exiting' message from workers. >> Also the 'start' and 'reconfiguring' messages from the master process. >> There should be a similar sequence somewhere in your logs. >> Having these logs may help explaining what happens on a reload. >> >> Kind regards, >> Igor. >> >> On 26/03/2024 12:41, Sébastien Rebecchi wrote: >> >> Hi Igor >> >> There is no special logs on the IP_1 (the reloaded one) side, only 1 log >> line, which is expected: >> --- BEGIN --- >> 2024/03/26 13:37:55 [notice] 3928855#0: signal process started >> --- END --- >> >> I did not configure worker_shutdown_timeout, it is unlimited. >> >> Sébastien. >> >> Le lun. 25 mars 2024 à 17:59, Igor Ippolitov a >> écrit : >> >>> Sébastien, >>> >>> Nginx should keep active connections open and wait for a request to >>> complete before closing. >>> A reload starts a new set of workers while old workers wait for old >>> connections to shut down. >>> The only exception I'm aware of is having worker_shutdown_timeout >>> configured: in this case a worker will wait till this timeout and forcibly >>> close a connection. Be default there is no timeout. >>> >>> It would be curious to see error log of nginx at IP_1 (the reloaded one) >>> while the reload happens. It may explain the reason for connection resets. >>> >>> Kind regards, >>> Igor. >>> >>> On 25/03/2024 12:31, Sébastien Rebecchi wrote: >>> >>> Hello >>> >>> >>> I have an issue with nginx closing prematurely connections when reload >>> is performed. >>> >>> >>> I have some nginx servers configured to proxy_pass requests to an >>> upstream group. This group itself is composed of several servers which are >>> nginx themselves, and is configured to use keepalive connections. >>> >>> When I trigger a reload (-s reload) on an nginx of one of the servers >>> which is target of the upstream, I see in error logs of all servers in >>> front that connection was reset by the nginx which was reloaded. 
>>> >>> >>> Here configuration of upstream group (IPs are hidden replaced by IP_X): >>> >>> --- BEGIN --- >>> >>> upstream data_api { >>> >>> random; >>> >>> >>> server IP_1:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_2:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_3:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_4:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_5:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_6:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_7:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_8:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_9:80 max_fails=3 fail_timeout=30s; >>> >>> server IP_10:80 max_fails=3 fail_timeout=30s; >>> >>> >>> keepalive 20; >>> >>> } >>> >>> --- END --- >>> >>> >>> Here configuration of the location using this upstream: >>> >>> --- BEGIN --- >>> >>> location / { >>> >>> proxy_pass http://data_api; >>> >>> >>> proxy_http_version 1.1; >>> >>> proxy_set_header Connection ""; >>> >>> >>> proxy_set_header Host $host; >>> >>> proxy_set_header X-Real-IP $real_ip; >>> >>> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >>> >>> >>> proxy_connect_timeout 2s; >>> >>> proxy_send_timeout 6s; >>> >>> proxy_read_timeout 10s; >>> >>> >>> proxy_next_upstream error timeout http_502 http_504; >>> >>> } >>> >>> --- END --- >>> >>> >>> And here the kind of error messages I get when I reload nginx of "IP_1": >>> >>> --- BEGIN --- >>> >>> 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed (104: >>> Connection reset by peer) while reading response header from upstream, >>> client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request: "POST >>> /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: " >>> http://IP_1:80/REQUEST_LOCATION_HIDDEN", host: "HOST_HIDDEN", referrer: >>> "REFERRER_HIDDEN" >>> >>> --- END --- >>> >>> >>> I thought -s reload was doing graceful shutdown of connections. Is it >>> due to the fact that nginx can not handle that when using keepalive >>> connections? Is it a bug? >>> >>> I am using nginx 1.24.0 everywhere, no particular >>> >>> >>> Thank you for any help. >>> >>> >>> Sébastien >>> >>> _______________________________________________ >>> nginx mailing listnginx at nginx.orghttps://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From l2dy at aosc.io Sat Mar 30 07:34:40 2024 From: l2dy at aosc.io (Zero King) Date: Sat, 30 Mar 2024 15:34:40 +0800 Subject: Limiting number of client TLS connections In-Reply-To: References: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> Message-ID: <7dcf03ce-5240-40ff-afe9-70df04373ed9@aosc.io> Hello, With the new pass directive committed, I should be able to implement it with less overhead as you have suggested. https://hg.nginx.org/nginx/rev/913518341c20 I'm still trying to push our platform team to implement a firewall, but this gives me an interim solution. Thanks a lot! P.S. I also see that Nginx has introduced an ssl_reject_handshake directive. It would be interesting if its behavior can be scripted. On 12/9/23 4:38 AM, J Carter wrote: > Hello again, > > By coincidence, and since my previous email, someone has kindly submitted a fixed > window rate limiting example to the NJS examples Github repo. 
> > https://github.com/nginx/njs-examples/pull/31/files/ba33771cefefdc019ba76bd1f176e25e18adbc67 > > https://github.com/nginx/njs-examples/tree/master/conf/http/rate-limit > > The example is for rate limiting in http context, however I believe you'd be > able to adapt this for stream (and your use case) with minor modifications > (use js_access rather than 'if' as mentioned previously, setting key to a > fixed value). > > Just forwarding it on in case you need it. > > > On Sat, 25 Nov 2023 16:03:37 +0800 > Zero King wrote: > >> Hi Jordan, >> >> Thanks for your suggestion. I will give it a try and also try to push >> our K8s team to implement a firewall if possible. >> >> On 20/11/23 10:33, J Carter wrote: >>> Hello, >>> >>> A self contained solution would be to double proxy, first through nginx stream server and then locally back to nginx http server (with proxy_pass via unix socket, or to localhost on a different port). >>> >>> You can implement your own custom rate limiting logic in the stream server with NJS (js_access) and use the new js_shared_dict_zone (which is shared between workers) for persistently storing rate calculations. >>> >>> You'd have additional overhead from the stream tcp proxy and the njs, but it shouldn't be too great (at least compared to overhead of TLS handshakes). >>> >>> Regards, >>> Jordan Carter. >>> >>> ________________________________________ >>> From: nginx on behalf of Zero King >>> Sent: Saturday, November 18, 2023 6:44 AM >>> To: nginx at nginx.org >>> Subject: Limiting number of client TLS connections >>> >>> Hi all, >>> >>> I want Nginx to limit the rate of new TLS connections and the total (or >>> per-worker) number of all client-facing connections, so that under a >>> sudden surge of requests, existing connections can get enough share of >>> CPU to be served properly, while excessive connections are rejected and >>> retried against other servers in the cluster. >>> >>> I am running Nginx on a managed Kubernetes cluster, so tuning kernel >>> parameters or configuring layer 4 firewall is not an option. >>> >>> To serve existing connections well, worker_connections can not be used, >>> because it also affects connections with proxied servers. >>> >>> Is there a way to implement these measures in Nginx configuration? >>> _______________________________________________ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From arisudesu at yandex.ru Sun Mar 31 12:46:17 2024 From: arisudesu at yandex.ru (Arisu Desu) Date: Sun, 31 Mar 2024 15:46:17 +0300 Subject: How to deny serving requests if Host field and SNI do not match? Message-ID: <6e473337-7a5a-47cf-8ba7-338ba40ed52d@yandex.ru> Hello everyone! My previous attempt at posting to mailing list failed miserably, so forgive me for sending this twice, please. I'm trying to understand the behavior of nginx when it is configured to serve multiple domains with different certificates. I have a single server with single IP address, and several domains, say, domain-a.com, domain-b.com. 
nginx is 1.24.0, configured as follows:

server {
    listen 443 ssl http2 default_server;
    ssl_reject_handshake on;
    return 444;
}

server {
    listen 443 ssl http2;
    server_name domain-a.com sub.domain-a.com;
    ssl_certificate     /etc/certs/domain-a.com.crt;  <- this is wildcard cert
    ssl_certificate_key /etc/certs/domain-a.com.key;  <- for domain-a.com, *.domain-a.com
    return 200 "serving domain-a.com";
}

server {
    listen 443 ssl http2;
    server_name domain-b.com sub.domain-b.com;
    ssl_certificate     /etc/certs/domain-b.com.crt;  <- this is wildcard cert
    ssl_certificate_key /etc/certs/domain-b.com.key;  <- for domain-b.com, *.domain-b.com
    return 200 "serving domain-b.com";
}

Here's what happens when I send some requests with curl:

$ curl https://domain-a.com
$ curl https://sub.domain-a.com
$ curl https://domain-b.com
$ curl https://sub.domain-b.com

These return the expected responses for the matching domain names.

$ curl https://domain-a.com -H 'Host: domain-b.com'
$ curl https://domain-b.com -H 'Host: domain-a.com'

Here is the interesting part. nginx negotiates the SSL connection for SNI domain-a.com, but the response comes from domain-b's virtual server! And vice versa.

$ curl https://domain-a.com -H 'Host: not_matching_any_server_name'

This too negotiates SSL for SNI domain-a.com, but doesn't send any response, because it lands in the default_server block, I suppose.

The question is: when reading the request's Host field, does nginx validate it against the negotiated SNI or not? Is there any setting I'm missing to deny serving requests when Host and SNI mismatch? Or is it just impossible to implement such a check due to protocol requirements?

I think that choosing the virtual server solely based on the Host field and ignoring SNI may pose a risk from a security research perspective. E.g. if a client negotiates SSL for domain-a.com, I certainly do not want my server to even hint that it can serve domain-b.com, much less send valid responses for that domain.

The best I could come up with was to insert into each server this:

if ($ssl_server_name != $host) {
    return 421;
}

but I'm not sure whether this breaks any clients.

I looked through older threads asking the same thing (https://forum.nginx.org/read.php?2,281564,281564), though I'm not sure whether the answers are up to date.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
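For what it's worth, 421 (Misdirected Request) was defined for requests a server cannot answer authoritatively (for example after HTTP/2 connection reuse), and clients that understand it are expected to retry over a new connection, so the check above is a reasonable starting point. Below is a slightly extended, untested sketch of the same idea that also leaves SNI-less clients (for example curl against a bare IP) to the default_server instead of rejecting them; directive names are standard nginx, the variable name is made up for the example:

```
# Untested sketch, inside each server {} that terminates TLS: reject a Host
# that does not match the negotiated SNI, but let requests without any SNI
# fall through (otherwise plain "curl -k https://ip/" would get 421 too).
set $expected_host $ssl_server_name;

if ($ssl_server_name = "") {
    set $expected_host $host;   # no SNI sent: nothing to compare against
}

if ($host != $expected_host) {
    return 421;                 # Misdirected Request
}
```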