From devarajan.dk at zohocorp.com Sun Oct 1 14:50:23 2023 From: devarajan.dk at zohocorp.com (Devarajan D) Date: Sun, 01 Oct 2023 20:20:23 +0530 Subject: Requesting a Nginx Variable - "client_time_taken" (similar to request_time & upstream_response_time) Message-ID: <18aebb81187.cdd37dcd3108.3875315204294721314@zohocorp.com>

Dear Team,

We're using the Nginx 1.22.1 open source version as a load balancer, with Tomcat servers as upstream. We got a complaint from one of our clients that the request time through our server is too long (~30 minutes for ~10 MB request body uploads). On checking, we found the requests reach the upstream only after this ~30 minute delay, as seen in the Tomcat logs (so the slowness is not in the Tomcat server). We also found the request body is buffered to a temporary file (the default client_body_buffer_size of 16k is in use).

We created a separate location block for this particular client's URL path (say, abc.service.com/customer1/), persisted the temporary client body buffer file (using the directive "client_body_in_file_only on") and found the ~30 minute delay matched the temp buffer file's last-modified time minus its creation time.

We assume the client is slow on the following basis:
1. The temporary buffer file times, as said above.
2. Requests from other clients with similar request body sizes are served in a few seconds.
3. There is no hardware issue on the Nginx server, as checked with atopsar and other commands.

Needed from the Nginx developers/community: currently, there is no straightforward way to measure the time taken by the client to upload the request body.
1. A variable similar to request_time and upstream_response_time would make it easy to log this time, and so to prove to the client where the slowness is.
2. Also, is there a timeout for the whole request? (Say, the request should be timed out if it takes more than 15 minutes.)

Thanks & Regards, Devarajan D. 
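As a stopgap while no such variable exists, the two built-in timing variables can be logged side by side; a minimal sketch (the format name and log path are arbitrary, and the directives go in the http context):

```nginx
# $request_time covers the whole request, including reading the request
# body from the client; $upstream_response_time only covers talking to
# the upstream. A large gap between the two points at a slow client.
log_format timing '$remote_addr [$time_local] "$request" $status '
                  'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/timing.log timing;
```

With this in place, a 30-minute client upload shows up as rt exceeding urt by roughly 1800 seconds.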
-------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at stormy.ca Sun Oct 1 15:08:06 2023 From: paul at stormy.ca (Paul) Date: Sun, 1 Oct 2023 11:08:06 -0400 Subject: SSL Reuse not happening in s3 presigned urls In-Reply-To: References: Message-ID: <2919d0d9-6c04-bae4-5b05-f681fdeadc55@stormy.ca>

On 2023-09-30 15:09, Vijay Kumar Kamannavar wrote:
> I am using nginx reverse proxy for s3 presigned urls.

[Disclaimer: very limited experience with amazonaws, so will assume that you comply fully with , if not, maybe ask them?]

[snip]

>     # HTTPS server block with SSL certificate and S3 reverse proxy
>     server {
>         listen 443 ssl;
>         ssl_protocols         SSLv3 TLSv1 TLSv1.1 TLSv1.2;

nginx strongly suggested removing SSLv3 nine years ago. SSL Labs will also give you a rock-bottom rating when you allow TLSv1 and TLSv1.1 (although they might still be vaguely acceptable), and the latest security standard, TLSv1.3 (RFC 8446, 2018), works extremely well in nginx with e.g. Certbot certificates. *Perhaps* if you updated your config to basic industry standards (probably required for compatibility with amazonaws?), then some of your handshake caching timeouts and errors would be vastly attenuated or disappear.

[snip]

> If I run 4K clients using a simulator, I will see 100% CPU in the nginx
> container. I believe if we cache SSL sessions then SSL handshake for
> every request will be avoided hence we may not have high CPU at nginx
> container.

"Run 4K clients"? Over what period of time? Simultaneous, identical connection requests? Even if your connectivity, router and firewall can handle that, your "16 Core and 32GB" with potential security problems could well be brought to its knees. As a rule of thumb for servers (nginx and apache), I have always used 8 GiB memory per core. YMMV. 
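For reference, a baseline along the lines suggested above might look like the following sketch (certificate paths are placeholders, not part of the original configuration):

```nginx
server {
    listen 443 ssl;
    # SSLv3, TLSv1 and TLSv1.1 dropped; TLSv1.3 (RFC 8446) is available
    # since nginx 1.13.0 when built against OpenSSL 1.1.1 or later
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    # A shared session cache cuts down full handshakes for returning clients
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1h;
}
```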
Paul

From mdounin at mdounin.ru Sun Oct 1 21:13:31 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Oct 2023 00:13:31 +0300 Subject: SSL Reuse not happening in s3 presigned urls In-Reply-To: References: Message-ID:

Hello!

On Sun, Oct 01, 2023 at 12:39:20AM +0530, Vijay Kumar Kamannavar wrote:

> Hello.
>
> I am using nginx reverse proxy for s3 presigned urls.
> I am running nginx as a container using nginx:1.25.2 debian image. My host
> has 16 Core and 32GB.
>
> Below is the nginx configuration.
>
> user nginx;
> worker_processes 14;
> pid /run/nginx.pid;
> worker_rlimit_nofile 40000;
> events {
>     worker_connections 1024;
> }
> http {
>     upstream s3_backend {
>         server .s3.amazonaws.com:443;
>         keepalive 10;
>     }
>
>     log_format combined_ssl '$remote_addr - $remote_user [$time_local] '
>                             '"$request" $status $body_bytes_sent '
>                             '"$http_referer" "$http_user_agent" '
>                             '$ssl_protocol/$ssl_cipher '
>                             '$ssl_session_reused';
>     proxy_ssl_session_reuse on;
>     proxy_ssl_server_name on;

Just a side note: with "proxy_ssl_server_name" nginx uses the name as written in the "proxy_pass" directive during the SSL handshake. In your case, it is "s3_backend". Most likely, this is not what you want to happen.

If you do not want to send the name (which is the default), consider removing the "proxy_ssl_server_name" directive from your configuration. If you want to redefine the name, consider using the "proxy_ssl_name" directive (http://nginx.org/r/proxy_ssl_name).

[...]

> But in the log /var/log/nginx/ssl_debug.log, I see SSL Handshake every time
> when I request an S3 object via proxy using S3 presigned URLs.

SSL handshake is expected to happen on each SSL connection establishment. Depending on whether there is a cached SSL session, the SSL handshake can be full or abbreviated, with abbreviated being more efficient.

From the logs you've provided it looks like SSL handshakes to upstream servers are your concern. 
If you want to avoid SSL handshakes to upstream servers on proxying, focus on keeping connections to upstream servers alive: this should be possible as long as the upstream server supports it and it is configured on nginx side (and it looks to be already configured in your config). Alternatively, check that SSL sessions are being reused - this normally happens automatically for statically defined upstream servers (unless explicitly disabled with "proxy_ssl_session_reuse off;", see http://nginx.org/r/proxy_ssl_session_reuse). > > Below is the log I see every time for every request. > > 2023/09/30 18:07:19 [debug] 36#36: *9 event timer add: 22: 60000:721858477 > 2023/09/30 18:07:19 [debug] 36#36: *9 http finalize request: -4, > "/blob/zte3odk1ymnl at CIBC-2mb > /singleurl0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIASQOYQRM4CTRY6I54%2F20230930> > 2023/09/30 18:07:19 [debug] 36#36: *9 http request count:2 blk:0 > 2023/09/30 18:07:19 [debug] 36#36: *9 http run request: > "/blob/zte3odk1ymnl at CIBC-2mb > /singleurl0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIASQOYQRM4CTRY6I54%2F20230930%2Fus-eas> > 2023/09/30 18:07:19 [debug] 36#36: *9 http upstream check client, write > event:1, "/blob/zte3odk1ymnl at CIBC-2mb/singleurl0" > 2023/09/30 18:07:19 [debug] 36#36: *9 http upstream request: > "/blob/zte3odk1ymnl at CIBC-2mb > /singleurl0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIASQOYQRM4CTRY6I54%2F20230930%2Fu> > 2023/09/30 18:07:19 [debug] 36#36: *9 http upstream send request handler > 2023/09/30 18:07:19 [debug] 36#36: *9 malloc: 000055ED330A1DD0:96 > 2023/09/30 18:07:19 [debug] 36#36: *9 upstream SSL server name: "s3_backend" > 2023/09/30 18:07:19 [debug] 36#36: *9 set session: 0000000000000000 Note: here nginx restores previously saved SSL session, yet there are none. This suggests this is a first request to the upstream server in question within the given nginx worker process. 
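The upstream keepalive setup described above generally needs three pieces working together; a sketch (the S3 host is a placeholder, since the bucket name is elided in the original config):

```nginx
upstream s3_backend {
    server example-bucket.s3.amazonaws.com:443;  # placeholder host
    keepalive 10;               # idle connections cached per worker
}

server {
    location / {
        proxy_pass https://s3_backend;
        proxy_http_version 1.1;          # keepalive to upstreams needs HTTP/1.1
        proxy_set_header Connection "";  # do not forward "Connection: close"
        proxy_ssl_server_name on;
        proxy_ssl_name example-bucket.s3.amazonaws.com;  # explicit SNI name
    }
}
```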
> 2023/09/30 18:07:19 [debug] 36#36: *9 tcp_nodelay > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: -1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_get_error: 2 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL handshake handler: 0 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: -1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_get_error: 2 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL handshake handler: 1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: -1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_get_error: 2 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL handshake handler: 0 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: -1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_get_error: 2 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL handshake handler: 1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: -1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_get_error: 2 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL handshake handler: 1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: -1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_get_error: 2 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL handshake handler: 0 > 2023/09/30 18:07:19 [debug] 36#36: *9 save session: 000055ED330FBAC0 Note: here nginx saves the SSL session which was established during the handshake. This SSL session is expected to be used during following handshakes in the same worker process. > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL_do_handshake: 1 > 2023/09/30 18:07:19 [debug] 36#36: *9 SSL: TLSv1.2, cipher: > "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) > Mac=AEAD" Note: here nginx logs handshake details. This handshake does not reuse an SSL session, since there were none. If there was an SSL session and it was correctly reused during the SSL handshake, the next log line would be: 2023/09/30 18:07:19 [debug] 36#36: *9 SSL reused session Check the following SSL handshakes in the same worker process to see if sessions are actually reused or not. 
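One way to do that check is to count the two debug-log markers against each other. A sketch on stand-in log lines (a real check would grep the actual error log of a --with-debug build):

```shell
# Stand-in sample of nginx debug-log lines
cat > /tmp/nginx_debug_sample.log <<'EOF'
2023/09/30 18:07:19 [debug] 36#36: *9 save session: 000055ED330FBAC0
2023/09/30 18:08:02 [debug] 36#36: *12 SSL reused session
2023/09/30 18:08:45 [debug] 36#36: *15 SSL reused session
EOF

# Full handshakes log "save session"; abbreviated (resumed) handshakes
# log "SSL reused session" -- compare the two counts:
grep -c 'SSL reused session' /tmp/nginx_debug_sample.log   # prints 2
grep -c 'save session' /tmp/nginx_debug_sample.log         # prints 1
```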
Most likely, these sessions are properly reused, and everything already works as it should. > 2023/09/30 18:07:19 [debug] 36#36: *9 *http upstream ssl handshake*: > "/blob/zte3odk1ymnl at CIBC-2mb > /singleurl0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIASQOYQRM4CTRY6I54%2F202309> > 2023/09/30 18:07:19 [debug] 36#36: *9 http upstream send request > 2023/09/30 18:07:19 [debug] 36#36: *9 http upstream send request body > > If I run 4K clients using a simulator,I will see 100% CPU in the nginx > container.I believe if we cache SSL sessions then SSL handshake for every > request will be avoided hence we may not have high CPU at nginx container. > > Can you please help how to achieve SSL Cache? how to make sure the CPU is > not high? Is there any reason why the CPU is high other than SSL Handshake. As outlined above, most likely SSL session reuse to upstream servers is already working properly in your setup. Note though that SSL is generally costly, and you are using it for both client connections and upstream connections. Depending on the certificates being used, ciphers being used and so on costs might vary, and there might be a room for improvement. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sun Oct 1 22:16:40 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Oct 2023 01:16:40 +0300 Subject: Requesting a Nginx Variable - "client_time_taken" (similar to request_time & upstream_response_time) In-Reply-To: <18aebb81187.cdd37dcd3108.3875315204294721314@zohocorp.com> References: <18aebb81187.cdd37dcd3108.3875315204294721314@zohocorp.com> Message-ID: Hello! On Sun, Oct 01, 2023 at 08:20:23PM +0530, Devarajan D via nginx wrote: > Currently, there is no straightforward way to measure the time > taken by client to upload the request body.  > > 1. A variable similar to request_time, upstream_response_time > can be helpful to easily log this time taken by client. 
>     So it will be easy to prove to the client where the slowness > is. In general, $request_time minus $upstream_response_time is the slowness introduced by the client. (In some cases, $upstream_response_time might also depend on the client behaviour, such as with "proxy_request_buffering off;" or with "proxy_buffering off;" and/or due to proxy_max_temp_file_size reached.) Further, $request_time can be saved at various request processing stages, such as after reading request headers via the "set" directive, or via a map when sending the response headers. This provides mostly arbitrary time measurements if you need it. For detailed investigation on what happens with the particular client, debugging log is the most efficient instrument, notably the "debug_connection" directive which makes it possible to activate debug logging only for a particular client (http://nginx.org/r/debug_connection). > 2. Also, is there a timeout for the whole request? > >     (say request should be timed out if it is more than 15 > minutes) No. -- Maxim Dounin http://mdounin.ru/ From devarajan.dk at zohocorp.com Mon Oct 2 09:55:15 2023 From: devarajan.dk at zohocorp.com (Devarajan D) Date: Mon, 02 Oct 2023 15:25:15 +0530 Subject: Requesting a Nginx Variable - "client_time_taken" (similar to request_time & upstream_response_time) In-Reply-To: References: <18aebb81187.cdd37dcd3108.3875315204294721314@zohocorp.com> Message-ID: <18aefd0389b.12281eb1b5807.4219676481643141853@zohocorp.com> Dear Maxim Dounin, Team & Community, Thank you for your suggestions. Would be helpful if you could suggest the following, > In general, $request_time minus $upstream_response_time is the slowness introduced by the client. 1. It's true most of the time. But clients are not willing to accept unless they see a log from server side. 
(Say the client server itself is running on another hosting service, like an Amazon EC2 instance.)

> Further, $request_time can be saved at various request processing stages, such as after reading request headers via the "set" directive, or via a map when sending the response headers. This provides mostly arbitrary time measurements if you need it.

2. How do we get control in the nginx configuration when the last byte of the request body is received from the client?

> For detailed investigation on what happens with the particular client, debugging log is the most efficient instrument, notably the "debug_connection" directive which makes it possible to activate debug logging only for a particular client

This debug log would definitely help to check the last byte of the request body!

3. But is it recommended to use nginx built with --with-debug in production environments?

4. We receive such slow requests infrequently. Enabling the debug log produces a huge amount of logs per request (2 MB of log file per 10 MB request body upload), and it becomes hard to identify the slow request in that. That's why it is mentioned that there is no straightforward way to measure the time taken by the client to send the request body completely.

> Is there a timeout for the whole request?

5. How do we prevent attacks like Slowloris DDoS from exhausting the client connections when using the open-source version? Timeouts such as client_body_timeout are not much help against such attacks.

Thanks & Regards, Devarajan D.

---- On Mon, 02 Oct 2023 03:46:40 +0530 Maxim Dounin wrote ---

Hello! On Sun, Oct 01, 2023 at 08:20:23PM +0530, Devarajan D via nginx wrote:

> Currently, there is no straightforward way to measure the time
> taken by client to upload the request body.
>
> 1. A variable similar to request_time, upstream_response_time
> can be helpful to easily log this time taken by client. 
In general, $request_time minus $upstream_response_time is the slowness introduced by the client. (In some cases, $upstream_response_time might also depend on the client behaviour, such as with "proxy_request_buffering off;" or with "proxy_buffering off;" and/or due to proxy_max_temp_file_size reached.) Further, $request_time can be saved at various request processing stages, such as after reading request headers via the "set" directive, or via a map when sending the response headers. This provides mostly arbitrary time measurements if you need it. For detailed investigation on what happens with the particular client, debugging log is the most efficient instrument, notably the "debug_connection" directive which makes it possible to activate debug logging only for a particular client (http://nginx.org/r/debug_connection). > 2. Also, is there a timeout for the whole request? > >     (say request should be timed out if it is more than 15 > minutes) No. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list mailto:nginx at nginx.org https://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Oct 3 02:35:40 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Oct 2023 05:35:40 +0300 Subject: Requesting a Nginx Variable - "client_time_taken" (similar to request_time & upstream_response_time) In-Reply-To: <18aefd0389b.12281eb1b5807.4219676481643141853@zohocorp.com> References: <18aebb81187.cdd37dcd3108.3875315204294721314@zohocorp.com> <18aefd0389b.12281eb1b5807.4219676481643141853@zohocorp.com> Message-ID: Hello! On Mon, Oct 02, 2023 at 03:25:15PM +0530, Devarajan D via nginx wrote: > > In general, $request_time minus $upstream_response_time is the > > slowness introduced by the client. > > 1. It's true most of the time. But clients are not willing to > accept unless they see a log from server side. 
(Say the client > server itself is running in another hosing services like amazon > EC2 instance) Well, $request_time and $upstream_response_time are logs from server side. Introducing yet another variable which will calculate the difference just to convince your clients is not something I would reasonably expect to happen. > > Further, $request_time can be saved at various request > > processing stages, such as after reading request headers via > > the "set" directive, or via a map when sending the response > > headers. This provides mostly arbitrary time measurements if > > you need it. > > 2. How do we get control in nginx configuration when the last > byte of request body is received from the client In simple proxying configurations, nginx starts to read the request body when control reaches the proxy module (so you can save start time with a simple "set" in the relevant location), and when the request body is completely read, nginx will create the request to the upstream server (so you can save this time by accessing a map in proxy_set_header). > > For detailed investigation on what happens with the particular > > client, debugging log is the most efficient instrument, > > notably the "debug_connection" directive which makes it > > possible to activate debug logging only for a particular client > > This debug log would definitely help to check the last byte of > the request body ! > > 3. But is it recommended to used nginx built with --with-debug > in production environments The "--with-debug" is designed to be used in production environments. It incurs some extra costs, and therefore not the default, and on loaded servers it might be a good idea to use nginx compiled without "--with-debug" unless you are debugging something. But unless debugging is actually activated in the configuration, the difference is negligible. > 4. We receive such slow requests infrequently. 
> Enabling debug log is producing a huge amount of logs/per request
> (2MB of log file per 10 MB request body upload) and it becomes hard to
> identify the slow request in that. Thats why it is mentioned as
> no straightforward way to measure the time taken by client to
> send the request body completely.

As previously suggested, using $request_time minus $upstream_response_time (or even just $request_time) makes it trivial to identify requests to look into.

> > Is there a timeout for the whole request?
> 5. How to prevent attacks like slow-loris DDos from exhausting
> the client connections when using the open-source version.
> Timeouts such as client_body_timeout are not much helpful for
> such attacks.

Stopping DDoS attacks is generally a hard problem, and timeouts are not an effective solution either. Not to mention that in many practical cases a total timeout on the request body reading cannot be less than several hours, making such timeouts irrelevant.

For trivial in-nginx protection from Slowloris-like attacks involving the request body, consider using limit_conn (http://nginx.org/r/limit_conn).

[...]

-- Maxim Dounin http://mdounin.ru/

From peljasz at yahoo.co.uk Fri Oct 6 06:13:27 2023 From: peljasz at yahoo.co.uk (lejeczek) Date: Fri, 6 Oct 2023 08:13:27 +0200 Subject: logrotate (?) screws it badly References: Message-ID:

Hi guys.

I run off the distro's vanilla-default logrotate, like so:

```
/var/log/nginx/*log {
    daily
    rotate 10
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
```

and I wonder... if it is logrotate's _fault_ or perhaps I screwed Nginx's configs somewhere? For after the logs get rotated, Nginx logs into access.log.1 & error.log.1 and not, as it should, into access.log & error.log.

many thanks, L.

-------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r at roze.lv Fri Oct 6 08:28:55 2023 From: r at roze.lv (Reinis Rozitis) Date: Fri, 6 Oct 2023 11:28:55 +0300 Subject: logrotate (?) screws it badly In-Reply-To: References: Message-ID: <003701d9f82f$258efee0$70acfca0$@roze.lv> > postrotate > /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true > endscript >} ``` > and I wonder... > if it is logrotate's _fault_ or perhaps I screwed Nginx's configs somewhere? For after logs got rotated Nginx logs into: access.log.1 & error.log.1 and now as it should, you > know access.log & error.log You need to check if the postrotate command completes successfully (for example - is the pid file in correct path? / I would remove all the || true parts and just leave kill -USR1 `cat /run/nginx.pid`). Now it seems that logrotate just renames the files but nginx is not getting the signal and still has open handle to them so keeps logging in the renamed file. rr From ralph at ml.seichter.de Fri Oct 6 08:40:35 2023 From: ralph at ml.seichter.de (Ralph Seichter) Date: Fri, 06 Oct 2023 10:40:35 +0200 Subject: logrotate (?) screws it badly In-Reply-To: References: Message-ID: <87h6n4qh30.fsf@ra.horus-it.com> * lejeczek via nginx: > For after logs got rotated Nginx logs into: > access.log.1 & error.log.1 > and now as it should, you know > access.log & error.log You may want to try logrotate's "copytruncate" option. -Ralph From me at gend.moe Mon Oct 9 15:55:15 2023 From: me at gend.moe (Gentry Deng) Date: Mon, 9 Oct 2023 23:55:15 +0800 Subject: Compatibility of X25519Kyber768 ClientHello Message-ID: Hello, I recently encountered a compatibility issue with X25519Kyber768 : I was unable to access the site via X25519Kyber768-enabled Google Chrome on a server with only TLS 1.2 enabled, but not TLS 1.3. The Chromium team replied: > Regarding TLS 1.2 vs TLS 1.3, a TLS ClientHello is generally good for > all the parameters we support. 
So though we include TLS 1.3 with Kyber > in there, we also include parameters for TLS 1.3 without Kyber and TLS > 1.2. So if the server and network well behaving correctly, it's > perfectly fine if the server only supports TLS 1.2. > > I'm able to reproduce the problem. It looks like a bug in > www.paypal.cn's server. They didn't implement TLS 1.2 correctly. > Specifically, they do not correctly handle when the ClientHello comes > in in two reads. Before Kyber, this wasn't very common because > ClientHellos usually fit in a packet. But Kyber makes ClientHellos > larger, so it is possible to get only a partial ClientHello in the > first read, and require a second read to try again. This is something > that any TCP-based application needs to handle; you may not have > gotten the whole message on a given read and need to keep on reading. > > www.paypal.cn will need to fix their server to correctly handle this case. So the Chromium team isn't considering making a change, so I'm wondering how compatible nginx is with this? Or what version is needed to make it error free? Best regards, Gentry -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Oct 9 18:02:43 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Oct 2023 21:02:43 +0300 Subject: Compatibility of X25519Kyber768 ClientHello In-Reply-To: References: Message-ID: Hello! On Mon, Oct 09, 2023 at 11:55:15PM +0800, Gentry Deng via nginx wrote: > I recently encountered a compatibility issue with X25519Kyber768 > : I was > unable to access the site via X25519Kyber768-enabled Google Chrome on a > server with only TLS 1.2 enabled, but not TLS 1.3. > > The Chromium team replied: > > > > Regarding TLS 1.2 vs TLS 1.3, a TLS ClientHello is generally good for > > all the parameters we support. So though we include TLS 1.3 with Kyber > > in there, we also include parameters for TLS 1.3 without Kyber and TLS > > 1.2. 
So if the server and network well behaving correctly, it's > > perfectly fine if the server only supports TLS 1.2. > > > > I'm able to reproduce the problem. It looks like a bug in > > www.paypal.cn's server. They didn't implement TLS 1.2 correctly. > > Specifically, they do not correctly handle when the ClientHello comes > > in in two reads. Before Kyber, this wasn't very common because > > ClientHellos usually fit in a packet. But Kyber makes ClientHellos > > larger, so it is possible to get only a partial ClientHello in the > > first read, and require a second read to try again. This is something > > that any TCP-based application needs to handle; you may not have > > gotten the whole message on a given read and need to keep on reading. > > > > www.paypal.cn will need to fix their server to correctly handle this case. > > > So the Chromium team isn't considering making a change, so I'm wondering > how compatible nginx is with this? Or what version is needed to make it > error free? There are no known issues in nginx with ClientHello split between packets (with all supported SSL libraries). And I would be very much surprised if there are any, as this is indeed a very basic thing TCP-based applications used to handle. Such issues are more likely to be seen in various packet-based filtering solutions, and I would assume this is most likely the case for the site in question. -- Maxim Dounin http://mdounin.ru/ From noloader at gmail.com Mon Oct 9 18:47:27 2023 From: noloader at gmail.com (Jeffrey Walton) Date: Mon, 9 Oct 2023 14:47:27 -0400 Subject: Compatibility of X25519Kyber768 ClientHello In-Reply-To: References: Message-ID: On Mon, Oct 9, 2023 at 11:55 AM Gentry Deng via nginx wrote: > > ... > I'm able to reproduce the problem. It looks like a bug in www.paypal.cn's server. They didn't implement TLS 1.2 correctly. Specifically, they do not correctly handle when the ClientHello comes in in two reads. 
Before Kyber, this wasn't very common because ClientHellos usually fit in a packet. But Kyber makes ClientHellos larger, so it is possible to get only a partial ClientHello in the first read, and require a second read to try again. This is something that any TCP-based application needs to handle; you may not have gotten the whole message on a given read and need to keep on reading. > > www.paypal.cn will need to fix their server to correctly handle this case. It sounds like this, assuming they are using a F5: . Broken middleware is always interesting. One of my favorites was Ironport and its fixed sized buffer for a ClientHello that resulted in buffer overflows and crashes when TLS 1.1 and TLS 1.2 increased the size of a ClientHello due to additional cipher suites. See . Jeff From noloader at gmail.com Tue Oct 10 18:50:37 2023 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 10 Oct 2023 14:50:37 -0400 Subject: OT: Rapid Reset attacks on HTTP/2 Message-ID: Hi Everyone, This just made my radar: . >From the article: F5, in an independent advisory of its own, said the attack impacts the NGINX HTTP/2 module and has urged its customers to update their NGINX configuration to limit the number of concurrent streams to a default of 128 and persist HTTP connections for up to 1000 requests. Jeff From kapouer at melix.org Tue Oct 10 18:55:10 2023 From: kapouer at melix.org (=?UTF-8?B?SsOpcsOpbXkgTGFs?=) Date: Tue, 10 Oct 2023 20:55:10 +0200 Subject: OT: Rapid Reset attacks on HTTP/2 In-Reply-To: References: Message-ID: Hi, from the article, these are the default values, so not too much to worry yet. Le mar. 10 oct. 2023 à 20:51, Jeffrey Walton a écrit : > Hi Everyone, > > This just made my radar: > . 
From the article:

F5, in an independent advisory of its own, said the attack impacts the NGINX HTTP/2 module and has urged its customers to update their NGINX configuration to limit the number of concurrent streams to a default of 128 and persist HTTP connections for up to 1000 requests.

Jeff

From kapouer at melix.org Tue Oct 10 18:55:10 2023 From: kapouer at melix.org (Jérémy Lal) Date: Tue, 10 Oct 2023 20:55:10 +0200 Subject: OT: Rapid Reset attacks on HTTP/2 In-Reply-To: References: Message-ID:

Hi, from the article, these are the default values, so not too much to worry yet.

On Tue, Oct 10, 2023 at 8:51 PM, Jeffrey Walton wrote:
> Hi Everyone,
>
> This just made my radar:
> . 
> > > > From the article:
> > > >
> > > > F5, in an independent advisory of its own, said the attack impacts the
> > NGINX HTTP/2 module and has urged its customers to update their NGINX
> > configuration to limit the number of concurrent streams to a default of
> > 128 and persist HTTP connections for up to 1000 requests.
>
> The "the attack impacts the NGINX HTTP/2 module" claim is
> incorrect, see here:
>
> https://mailman.nginx.org/pipermail/nginx-devel/2023-October/S36Q5HBXR7CAIMPLLPRSSSYR4PCMWILK.html
>
> Hope this helps.

Thanks Maxim. The Nginx team may want to publish a blog post or knowledge article. I got 0 hits when searching the site . It will help admins and executives find the team's information.

Jeff

From xserverlinux at gmail.com Tue Oct 10 21:30:52 2023 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Tue, 10 Oct 2023 17:30:52 -0400 Subject: OT: Rapid Reset attacks on HTTP/2 In-Reply-To: References: Message-ID:

Will the correction be applied in the open versions 1.24 and 1.25, or in a new release?

Regards

On Tue, Oct 10, 2023 at 3:46 PM Jeffrey Walton wrote:
> On Tue, Oct 10, 2023 at 3:04 PM Maxim Dounin wrote:
> >
> > On Tue, Oct 10, 2023 at 02:50:37PM -0400, Jeffrey Walton wrote:
> > >

-------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Oct 10 21:54:59 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Oct 2023 00:54:59 +0300 Subject: OT: Rapid Reset attacks on HTTP/2 In-Reply-To: References: Message-ID:

Hello!

On Tue, Oct 10, 2023 at 05:30:52PM -0400, Rick Gutierrez wrote:

> Will the correction be applied in the open versions 1.24 and 1.25,
> or in a new release?

To re-iterate:

We do not consider nginx to be affected by this issue. 
In the default configuration, nginx is sufficiently protected by the limit of allowed requests per connection (see http://nginx.org/r/keepalive_requests for details), so an attacker will be required to reconnect very often, making the attack obvious and easy to stop at the network level. And it is not possible to circumvent the max concurrent streams limit in nginx, as nginx only allows additional streams when previous streams are completely closed. Further, additional protection can be implemented in nginx by using the "limit_req" directive, which limits the rate of requests and rejects excessive requests. Overall, with the handling as implemented in nginx, the impact of streams being reset does not seem to be significantly different from the impact of other workloads with a large number of requests being sent by the client, such as handling of multiple HTTP/2 requests or HTTP/1.x pipelined requests. Nevertheless, we've decided to implement some additional mitigations which will help nginx to detect such attacks and drop connections with misbehaving clients faster. The patch to do so was committed (http://hg.nginx.org/nginx/rev/cdda286c0f1b) and will be available in the next nginx release. -- Maxim Dounin http://mdounin.ru/ From xserverlinux at gmail.com Thu Oct 12 15:46:25 2023 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Thu, 12 Oct 2023 11:46:25 -0400 Subject: OT: Rapid Reset attacks on HTTP/2 In-Reply-To: References: Message-ID: good to know, thanks for the info Maxim. El mar, 10 oct 2023 a las 17:55, Maxim Dounin () escribió: > > Hello! > > On Tue, Oct 10, 2023 at 05:30:52PM -0400, Rick Gutierrez wrote: > > > Will the correction be applied in the open source versions 1.24 and > > 1.25, or in a new release? > > To re-iterate: > > We do not consider nginx to be affected by this issue.
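[For reference, the defaults Maxim mentions correspond to a handful of directives that can also be set explicitly. A brief sketch; both limits shown are the stock defaults, while the limit_req zone name and rate are illustrative assumptions, not values from this thread:]

```nginx
# http{} context. Both limits below are nginx defaults, shown explicitly.
keepalive_requests 1000;            # close a connection after 1000 requests
http2_max_concurrent_streams 128;   # cap concurrent streams per HTTP/2 connection

# Optional extra protection: rate-limit requests per client address.
# Zone name "perip" and 30r/s are example values only.
limit_req_zone $binary_remote_addr zone=perip:10m rate=30r/s;

server {
    # ... listen/ssl directives ...
    location / {
        limit_req zone=perip burst=60 nodelay;
        # ... proxy_pass or root ...
    }
}
```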
In the > default configuration, nginx is sufficiently protected by the > limit of allowed requests per connection (see > http://nginx.org/r/keepalive_requests for details), so an attacker > will be required to reconnect very often, making the attack > obvious and easy to stop at the network level. And it is not > possible to circumvent the max concurrent streams limit in nginx, > as nginx only allows additional streams when previous streams are > completely closed. > > Further, additional protection can be implemented in nginx by > using the "limit_req" directive, which limits the rate of requests > and rejects excessive requests. > > Overall, with the handling as implemented in nginx, the impact of > streams being reset does not seem to be significantly different > from the impact of other workloads with a large number of requests > being sent by the client, such as handling of multiple HTTP/2 > requests or HTTP/1.x pipelined requests. > > Nevertheless, we've decided to implement some additional > mitigations which will help nginx to detect such attacks and drop > connections with misbehaving clients faster. The patch to do so > was committed (http://hg.nginx.org/nginx/rev/cdda286c0f1b) and > will be available in the next nginx release. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx -- rickygm http://gnuforever.homelinux.com From jeff at ForerunnerTV.com Tue Oct 17 15:50:38 2023 From: jeff at ForerunnerTV.com (Jeff) Date: Tue, 17 Oct 2023 10:50:38 -0500 Subject: Run PHP on NGINX Message-ID: Can PHP code be run using NGINX? I am new to web server stuff, so just investigating.
Thanks From n5d9xq3ti233xiyif2vp at protonmail.ch Tue Oct 17 16:00:08 2023 From: n5d9xq3ti233xiyif2vp at protonmail.ch (Laura Smith) Date: Tue, 17 Oct 2023 16:00:08 +0000 Subject: Run PHP on NGINX In-Reply-To: References: Message-ID: ------- Original Message ------- On Tuesday, October 17th, 2023 at 16:50, Jeff wrote: > Can PHP code be run using NGINX? > > Yes, of course. There are surely thousands of how-tos on Google already. It's not difficult, only about 5 lines in the config file. From osa at freebsd.org.ru Tue Oct 17 22:36:26 2023 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 18 Oct 2023 01:36:26 +0300 Subject: Run PHP on NGINX In-Reply-To: References: Message-ID: Hi Jeff, hope you're doing well. On Tue, Oct 17, 2023 at 10:50:38AM -0500, Jeff wrote: > Can PHP code be run using NGINX? > I am new to web server stuff, so just investigating. The short answer is no: NGINX by itself can't execute PHP code. But here are the important notes: 1. NGINX has a fastcgi module, [1]; that module allows passing requests to a FastCGI server; 2. The PHP distribution contains the PHP FastCGI Process Manager (PHP-FPM), [2], the primary PHP FastCGI implementation. There are many resources on the internet that describe how to integrate NGINX with a FastCGI server, so I'd recommend starting the journey from nginx.org, [3]. Also, you may want to take a look at NGINX Unit, [4], a lightweight and versatile application runtime that provides the essential components for your web application as a single open-source server. Hope that helps. References ---------- 1. https://nginx.org/en/docs/http/ngx_http_fastcgi_module.html 2. https://www.php.net/manual/en/install.fpm.php 3. https://nginx.org/en/docs/beginners_guide.html#fastcgi 4. https://unit.nginx.org/configuration/#configuration-php -- Sergey A.
Osokin From Sam at SimpleSamples.info Wed Oct 18 03:03:24 2023 From: Sam at SimpleSamples.info (Sam Hobbs) Date: Tue, 17 Oct 2023 20:03:24 -0700 Subject: Run PHP on NGINX In-Reply-To: References: Message-ID: <9da2827e-1d85-00bb-35db-762e6bbd6e98@SimpleSamples.info> The original HTML had forms. Forms still exist for HTML. A form can specify that a different HTML file be shown when the form is submitted. Between the form submission and the showing of the second HTML file, a script, called a CGI script, can be executed to process the form. And that is how server-side languages such as PHP are executed. The original CGI is inefficient. FastCGI is more efficient. Therefore PHP websites execute PHP using FastCGI. Other servers such as Apache and IIS use FastCGI or something very similar. Jeff wrote on 10/17/2023 8:50 AM: > Can PHP code be run using NGINX? > > I am new to web server stuff, so just investigating. > > Thanks > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From alienmega at protonmail.com Wed Oct 18 16:13:39 2023 From: alienmega at protonmail.com (alienmega) Date: Wed, 18 Oct 2023 16:13:39 +0000 Subject: trying to disable gzip Message-ID: Hello, I am trying to disable gzip to mitigate the BREACH attack (I use a service to check for vulnerabilities and it came up with that). I added gzip off to the nginx.conf file, then checked the configuration with nginx -t, and then reloaded with systemctl reload nginx. When I visit the site, I still have Accept-Encoding: gzip, deflate, br I checked that I don't have gzip on anywhere else in /etc/nginx/*: grep -Ri "gzip off" /etc/nginx I also used Brave in incognito mode to make sure there was no cache involved.
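[The FastCGI handoff described in the PHP thread above usually amounts to a single location block. A minimal sketch, assuming PHP-FPM listens on /run/php/php8.1-fpm.sock — the Ubuntu 22.04 default for PHP 8.1; the socket path, server name, and root are assumptions that vary by installation:]

```nginx
server {
    listen 80;
    server_name example.com;        # placeholder name
    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        # fastcgi_params does not set SCRIPT_FILENAME on all distros,
        # so pass it explicitly.
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Assumed PHP-FPM socket path; adjust to your installation.
        fastcgi_pass unix:/run/php/php8.1-fpm.sock;
    }
}
```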
Not sure what else to do to disable gzip. I am running PHP 8.1 and nginx 1.24 on Ubuntu (22.04.3). This is the result of nginx -V: nginx version: nginx/1.24.0 built by gcc 11.2.0 (Ubuntu 11.2.0-19ubuntu1) built with OpenSSL 3.0.2 15 Mar 2022 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -ffile-prefix-map=/data/builder/debuild/nginx-1.24.0/debian/debuild-base/nginx-1.24.0=. -flto=auto -ffat-lto-objects -flto=auto -ffat-lto-objects -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -flto=auto -ffat-lto-objects -flto=auto -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' Thank you for any ideas. Sent with [Proton Mail](https://proton.me/) secure email.
From mdounin at mdounin.ru Wed Oct 18 16:46:39 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Oct 2023 19:46:39 +0300 Subject: trying to disable gzip In-Reply-To: References: Message-ID: Hello! On Wed, Oct 18, 2023 at 04:13:39PM +0000, alienmega via nginx wrote: > Hello, > I am trying to disable gzip to mitigate the BREACH attack (I use > a service to check for vulnerabilities and it came up with > that). I added gzip off to the nginx.conf file, then checked the > configuration with nginx -t, and then reloaded with systemctl > reload nginx. > > When I visit the site, I still have > Accept-Encoding: gzip, deflate, br The "Accept-Encoding" is a _request_ header, sent by your browser. You have to look at the response headers instead, notably Content-Encoding. > I checked that I don't have gzip on anywhere else in /etc/nginx/* > grep -Ri "gzip off" /etc/nginx As long as you don't have "gzip on" (or "gzip_static", but it is certainly not affected by BREACH) in your nginx configuration, nginx won't use gzip. Note though that if you are using some backend server to return dynamic responses, you might need to disable gzip there as well. Note well that completely disabling gzip might not be the best solution. The BREACH attack only affects response body compression if the resource being returned 1) contains some secret information and 2) it reflects some user input. That is, it certainly does not affect static files, and can be easily avoided by masking secrets in dynamic pages, see https://www.breachattack.com/ for details. -- Maxim Dounin http://mdounin.ru/ From r at roze.lv Wed Oct 18 16:48:10 2023 From: r at roze.lv (Reinis Rozitis) Date: Wed, 18 Oct 2023 19:48:10 +0300 Subject: trying to disable gzip In-Reply-To: References: Message-ID: <000001da01e2$e10a0d60$a31e2820$@roze.lv> > I added gzip off to the nginx.conf file, then checked the configuration with nginx -t, and then reloaded with systemctl reload nginx.
> > When I visit the site, I still have > Accept-Encoding: gzip, deflate, br First of all, how are you testing? 'Accept-Encoding' is the header in the HTTP request, sent by the client/browser to identify what the browser supports; what the server actually used is stated in the 'Content-Encoding' response header. In any case, if you see something like 'br' also in the 'Content-Encoding' response header: nginx's default gzip module doesn't support 'br' (Brotli) compression, so it's either the (third-party) ngx_brotli module (you can search your config for 'brotli'; I don't see it in the provided configure line, though the module might be dynamically loaded) or something else. For example, if you are testing on a PHP site, PHP can have its own output compression (for example via https://www.php.net/manual/en/zlib.configuration.php#ini.zlib.output-compression ) rr From alienmega at protonmail.com Thu Oct 19 04:35:55 2023 From: alienmega at protonmail.com (alienmega) Date: Thu, 19 Oct 2023 04:35:55 +0000 Subject: trying to disable gzip In-Reply-To: References: Message-ID: Thank you for the information. I didn't notice I was looking in the wrong place. It turns out that the culprit is Cloudflare. If I don't use it, I can see the gzip going on and off (as expected), but as soon as I use Cloudflare, it overwrites that response. Now I need to check on Cloudflare if there is any way to turn it off. Sent with Proton Mail secure email. ------- Original Message ------- On Wednesday, October 18th, 2023 at 12:46 PM, Maxim Dounin wrote: > Hello! > > On Wed, Oct 18, 2023 at 04:13:39PM +0000, alienmega via nginx wrote: > > > Hello, > > I am trying to disable gzip to mitigate the BREACH attack (I use > > a service to check for vulnerabilities and it came up with > > that). I added gzip off to the nginx.conf file, then checked the > > configuration with nginx -t, and then reloaded with systemctl > > reload nginx.
> > > > When I visit the site, I still have > > Accept-Encoding: gzip, deflate, br > > > The "Accept-Encoding" is a request header, sent by your browser. > You have to look at the response headers instead, notably > Content-Encoding. > > > I checked that I don't have gzip on anywhere else in /etc/nginx/* > > grep -Ri "gzip off" /etc/nginx > > > As long as you don't have "gzip on" (or "gzip_static", but it is > certainly not affected by BREACH) in your nginx configuration, > nginx won't use gzip. Note though that if you are using some > backend server to return dynamic responses, you might need to > disable gzip there as well. > > Note well that completely disabling gzip might not be the best > solution. The BREACH attack only affects response body > compression if the resource being returned 1) contains some secret > information and 2) it reflects some user input. That is, it > certainly does not affect static files, and can be easily avoided > by masking secrets in dynamic pages, see > https://www.breachattack.com/ for details. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From noloader at gmail.com Thu Oct 19 17:22:30 2023 From: noloader at gmail.com (Jeffrey Walton) Date: Thu, 19 Oct 2023 13:22:30 -0400 Subject: trying to disable gzip In-Reply-To: References: Message-ID: On Thu, Oct 19, 2023 at 12:36 AM alienmega via nginx wrote: > > Thank you for the information. I didn't notice I was looking in the wrong place. It turns out that the culprit is Cloudflare. If I don't use it, I can see the gzip going on and off (as expected), but as soon as I use Cloudflare, it overwrites that response. Now I need to check on Cloudflare if there is any way to turn it off. One comment about 3rd parties, like Cloudflare... Remember, the cloud is just someone else's machine.
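[Following Maxim's point above, the header to check is the response's Content-Encoding. One server-side way to confirm what nginx actually sent is to log it per request; a small sketch, where the log format name "enc" and the log path are made up for illustration:]

```nginx
http {
    # $sent_http_content_encoding holds the Content-Encoding header of
    # the outgoing response; it is empty when no compression was applied.
    log_format enc '$remote_addr "$request" $status '
                   'enc="$sent_http_content_encoding"';
    access_log /var/log/nginx/encoding.log enc;
}
```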
If Cloudflare supports protocols like SPDY, then compression is baked into the protocol. You cannot disable compression in this case. So compression may be available and used on their web servers whether you want it or not. An easier way to avoid CRIME and BREACH may be to use TLS v1.2 and above with AEAD cipher modes like CCM or GCM, since CRIME and BREACH were timing attacks on cipher modes like CBC. Stream ciphers should avoid the problem, too, like TLS v1.3's ChaCha20-Poly1305. Jeff From Sam at SimpleSamples.info Sat Oct 21 19:27:30 2023 From: Sam at SimpleSamples.info (Sam Hobbs) Date: Sat, 21 Oct 2023 12:27:30 -0700 Subject: Custom scheme/protocol Message-ID: Are there any articles and/or samples of a custom scheme or protocol? What I mean by scheme or protocol is the first part of a URL, such as HTTP and FTP. What is necessary to develop a custom protocol in a server? This is not important. It is something I have been curious about for years. I assume the details are dependent on the server. Stack Overflow has questions like c# - Register multiple applications to the same custom protocol - Stack Overflow . They ask about configuring protocols within the client. It is difficult to find articles about implementing custom protocols in a server. From mdounin at mdounin.ru Tue Oct 24 15:52:02 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Oct 2023 18:52:02 +0300 Subject: nginx-1.25.3 Message-ID: Changes with nginx 1.25.3 24 Oct 2023 *) Change: improved detection of misbehaving clients when using HTTP/2. *) Feature: startup speedup when using a large number of locations. Thanks to Yusuke Nojima. *) Bugfix: a segmentation fault might occur in a worker process when using HTTP/2 without SSL; the bug had appeared in 1.25.1. *) Bugfix: the "Status" backend response header line with an empty reason phrase was handled incorrectly.
*) Bugfix: memory leak during reconfiguration when using the PCRE2 library. Thanks to ZhenZhong Wu. *) Bugfixes and improvements in HTTP/3. -- Maxim Dounin http://nginx.org/ From xeioex at nginx.com Tue Oct 24 19:59:10 2023 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 24 Oct 2023 12:59:10 -0700 Subject: njs-0.8.2 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). Notable new features: - console object in nginx modules: Console object is a global object that provides access to the environment's console. It can be used to log information to the console, using console.log(), console.info(), console.warn(), console.error() methods. This feature unifies logging in nginx modules and njs CLI. Learn more about njs: - Overview and introduction: https://nginx.org/en/docs/njs/ - NGINX JavaScript in Your Web Server Configuration: https://youtu.be/Jc_L6UffFOs - Extending NGINX with Custom Code: https://youtu.be/0CVhq4AUU7M - Using node modules with njs: https://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: https://nginx.org/en/docs/njs/typescript.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: https://mailman.nginx.org/mailman/listinfo/nginx-devel Additional examples and howtos can be found here: - Github: https://github.com/nginx/njs-examples Changes with njs 0.8.2 24 Oct 2023 nginx modules: *) Feature: introduced console object. The following methods were introduced: error(), info(), log(), time(), timeEnd(), warn(). *) Bugfix: fixed HEAD response handling with large Content-Length in fetch API. *) Bugfix: fixed items() method for a shared dictionary. *) Bugfix: fixed delete() method for a shared dictionary. Core: *) Feature: extended "fs" module. Added existsSync(). *) Bugfix: fixed "xml" module. Fixed broken XML exception handling in parse() method. 
*) Bugfix: fixed RegExp.prototype.exec() with global regexp and unicode input. *) Bugfix: fixed return statement parsing with invalid expression. From rozhuk.im at gmail.com Mon Oct 30 12:05:53 2023 From: rozhuk.im at gmail.com (Rozhuk Ivan) Date: Mon, 30 Oct 2023 14:05:53 +0200 Subject: proxy_protocol send incorrect header Message-ID: <20231030140553.2c6984b9@rimwks.local> Hi! I got an incorrect proxy header: PROXY TCP4 172.16.0.208 unix:/var/run/nginx_443_test.sock 9795 0\r\nSSH-2.0-OpenSSH_9.3\r\n Expected: PROXY TCP4 172.16.0.208 172.16.0.254 9795 443\r\nSSH-2.0-OpenSSH_9.3\r\n My config: 172.16.0.208 - initiator and tcp server on port 4443. 172.16.0.254 - nginx host initiator: ssh root@172.16.0.254 -p 443 tcp server on 4443: any app that can accept tcp and print received data. nginx config: ======================================== # Set default for TLS and non TLS connections. map $ssl_preread_protocol $upstream_proto_val { "" unix:/var/run/nginx_443_test.sock; default unix:/var/run/nginx_443_http.sock; } # ALPN map table. map $ssl_preread_alpn_protocols $upstream_alpn_val { default $upstream_proto_val; "xmpp-client" unix:/var/run/nginx_443_xmpp.sock; "xmpps-client" unix:/var/run/nginx_443_xmpp.sock; "stun.turn" unix:/var/run/nginx_443_stun.sock; "stun.nat-discovery" unix:/var/run/nginx_443_stun.sock; } # ALPN router. server { listen *:443 rcvbuf=1m sndbuf=1m so_keepalive=30m::10; listen [::]:443 rcvbuf=1m sndbuf=1m so_keepalive=30m::10 ipv6only=on; ssl_preread on; #proxy_protocol $proxy_protocol_val; proxy_protocol on; proxy_pass $upstream_alpn_val; } server { listen unix:/var/run/nginx_443_test.sock proxy_protocol rcvbuf=1m sndbuf=1m; set_real_ip_from unix:; proxy_protocol on; proxy_pass 172.16.0.208:4443; } # Strip proxy protocol for xmpp.
server { listen unix:/var/run/nginx_443_xmpp.sock proxy_protocol rcvbuf=1m sndbuf=1m; proxy_protocol off; proxy_pass 127.0.0.1:5223; } ======================================== PS: it would be very nice if "proxy_protocol $proxy_protocol_val;" worked. It does not accept variables, only static values from the config. From arut at nginx.com Mon Oct 30 13:00:38 2023 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 30 Oct 2023 17:00:38 +0400 Subject: proxy_protocol send incorrect header In-Reply-To: <20231030140553.2c6984b9@rimwks.local> References: <20231030140553.2c6984b9@rimwks.local> Message-ID: <3627005B-4315-4EF7-9782-3A0C897C7F09@nginx.com> Hi Ivan, > On 30 Oct 2023, at 16:05, Rozhuk Ivan wrote: > > Hi! > > I got an incorrect proxy header: > PROXY TCP4 172.16.0.208 unix:/var/run/nginx_443_test.sock 9795 0\r\nSSH-2.0-OpenSSH_9.3\r\n > > Expected: > PROXY TCP4 172.16.0.208 172.16.0.254 9795 443\r\nSSH-2.0-OpenSSH_9.3\r\n > > > > My config: > 172.16.0.208 - initiator and tcp server on port 4443. > 172.16.0.254 - nginx host > > initiator: > ssh root@172.16.0.254 -p 443 > > tcp server on 4443: any app that can accept tcp and print received data. > > > nginx config: > ======================================== > # Set default for TLS and non TLS connections. > map $ssl_preread_protocol $upstream_proto_val { > "" unix:/var/run/nginx_443_test.sock; > default unix:/var/run/nginx_443_http.sock; > } > > # ALPN map table. > map $ssl_preread_alpn_protocols $upstream_alpn_val { > default $upstream_proto_val; > "xmpp-client" unix:/var/run/nginx_443_xmpp.sock; > "xmpps-client" unix:/var/run/nginx_443_xmpp.sock; > "stun.turn" unix:/var/run/nginx_443_stun.sock; > "stun.nat-discovery" unix:/var/run/nginx_443_stun.sock; > } > > > # ALPN router.
> server { > listen *:443 rcvbuf=1m sndbuf=1m so_keepalive=30m::10; > listen [::]:443 rcvbuf=1m sndbuf=1m so_keepalive=30m::10 ipv6only=on; > > ssl_preread on; > #proxy_protocol $proxy_protocol_val; > proxy_protocol on; > proxy_pass $upstream_alpn_val; > } > > > server { > listen unix:/var/run/nginx_443_test.sock proxy_protocol rcvbuf=1m sndbuf=1m; > > set_real_ip_from unix:; > > proxy_protocol on; > proxy_pass 172.16.0.208:4443; > } > > # Strip proxy protocol for xmpp. > server { > listen unix:/var/run/nginx_443_xmpp.sock proxy_protocol rcvbuf=1m sndbuf=1m; > > proxy_protocol off; > proxy_pass 127.0.0.1:5223; > } > > ======================================== > > > PS: it would be very nice if "proxy_protocol $proxy_protocol_val;" worked. It does not accept variables, only static values from the config. Currently the realip module only changes the client address (c->sockaddr) and leaves the server address (c->local_sockaddr) unchanged. The behavior is the same for Stream and HTTP and is explained by the fact that initially the module only supported HTTP fields like X-Real-IP and X-Forwarded-For, which carry only the client address. Indeed it does look inconsistent in scenarios like yours when address families are different. But do you really need the server address, or are you just highlighting the inconsistency? ---- Roman Arutyunyan arut at nginx.com
From rozhuk.im at gmail.com Mon Oct 30 14:47:17 2023 From: rozhuk.im at gmail.com (Rozhuk Ivan) Date: Mon, 30 Oct 2023 16:47:17 +0200 Subject: proxy_protocol send incorrect header In-Reply-To: <3627005B-4315-4EF7-9782-3A0C897C7F09@nginx.com> References: <20231030140553.2c6984b9@rimwks.local> <3627005B-4315-4EF7-9782-3A0C897C7F09@nginx.com> Message-ID: <20231030164717.5b8b6cdc@rimwks.local> On Mon, 30 Oct 2023 17:00:38 +0400 Roman Arutyunyan wrote: > > I got an incorrect proxy header: > > PROXY TCP4 172.16.0.208 unix:/var/run/nginx_443_test.sock 9795 > > 0\r\nSSH-2.0-OpenSSH_9.3\r\n > > > > Expected: > > PROXY TCP4 172.16.0.208 172.16.0.254 9795 > > 443\r\nSSH-2.0-OpenSSH_9.3\r\n > > > Currently the realip module only changes the client address > (c->sockaddr) and leaves the server address (c->local_sockaddr) > unchanged. The behavior is the same for Stream and HTTP and is > explained by the fact that initially the module only supported HTTP > fields like X-Real-IP and X-Forwarded-For, which carry only the client > address. > > Indeed it does look inconsistent in scenarios like yours when address > families are different. But do you really need the server address or > are you just highlighting the inconsistency? 1. I am writing a proxy protocol (PP) parser, and it uses: inet_pton(family, straddr, sa_addr) where family was taken from TCP4/TCP6 => AF_INET/AF_INET6. It fails for two reasons: a. inet_pton() supports only AF_INET/AF_INET6, at least on FreeBSD; b. it never gets AF_UNIX, since that is not expected in proxy protocol v1. 2. Even in the case where I do address type auto detection, the record for AF_UNIX should be: /var/run/nginx_443_test.sock not unix:/var/run/nginx_443_test.sock 3. The proxy protocol is designed to pass info about the client connection, so it is impossible to get a mix of AF_INET/AF_INET6/AF_UNIX in one connection. __The current nginx implementation violates the proxy protocol specification.__ 4. I suppose other parser implementations of the proxy protocol will also fail to parse mixed address families.
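[The parsing failure described in point 1 above is easy to reproduce. A minimal illustrative sketch of a PROXY protocol v1 header parser (not the poster's actual code), using inet_pton() the same way, which shows why a Unix-socket path in an address field cannot be parsed:]

```python
import socket

def parse_proxy_v1(line: bytes):
    # A PROXY v1 header is a single ASCII line terminated by CRLF:
    #   PROXY TCP4 <src> <dst> <sport> <dport>\r\n
    if not line.endswith(b"\r\n"):
        raise ValueError("header must end with CRLF")
    parts = line[:-2].decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY v1 header")
    if parts[1] == "UNKNOWN":
        return None  # the spec says to ignore addresses for UNKNOWN
    proto, src, dst, sport, dport = parts[1:6]
    family = {"TCP4": socket.AF_INET, "TCP6": socket.AF_INET6}[proto]
    # inet_pton() only understands AF_INET/AF_INET6, so a field like
    # 'unix:/var/run/nginx_443_test.sock' raises OSError here -- the
    # failure described in the thread.
    socket.inet_pton(family, src)
    socket.inet_pton(family, dst)
    return proto, src, dst, int(sport), int(dport)

print(parse_proxy_v1(b"PROXY TCP4 172.16.0.208 172.16.0.254 9795 443\r\n"))
# ('TCP4', '172.16.0.208', '172.16.0.254', 9795, 443)
```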
In my use case, port 443 is shared between a few services. One of them supports the proxy protocol but does not support TLS+PP, so I need to terminate TLS using nginx and pass PP + plain text to the service. A few services do not support PP, so I must add an extra proxy hop inside nginx to remove PP, because the proxy_protocol option does not support variables.