From nginx-forum at forum.nginx.org Sat Nov 2 00:45:31 2019
From: nginx-forum at forum.nginx.org (ufsi7259)
Date: Fri, 01 Nov 2019 20:45:31 -0400
Subject: only allow abc.com/live/* and abc.com/trial/*
Message-ID:

Hi,

We would like to use a filter to give some protection to the backend. We only want to allow abc.com/live/* and abc.com/trial/*. How can we achieve this? And can we do better in general?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286077,286077#msg-286077

From nginx-forum at forum.nginx.org Sun Nov 3 09:20:41 2019
From: nginx-forum at forum.nginx.org (q1548)
Date: Sun, 03 Nov 2019 04:20:41 -0500
Subject: Content-Length and gzip
Message-ID:

Hello,

I found that if gzip is enabled, the HTTP response headers do not include "Content-Length: ..." and use "Transfer-Encoding: chunked" instead. But your home page, nginx.org, returns both gzip and Content-Length. Why? Do you use another version yourselves? Currently nginx.org seems to use version 1.17.3, and I have tested with 1.17.3 and 1.17.5.

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286079,286079#msg-286079

From nginx-forum at forum.nginx.org Sun Nov 3 11:05:12 2019
From: nginx-forum at forum.nginx.org (q1548)
Date: Sun, 03 Nov 2019 06:05:12 -0500
Subject: Content-Length and gzip
Message-ID: <5d91b85c4a8470cb9e37dbd576aae981.NginxMailingListEnglish@forum.nginx.org>

I found it out, thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286079,286080#msg-286080

From support at foxserv.be Sun Nov 3 16:29:19 2019
From: support at foxserv.be (Support)
Date: Sun, 03 Nov 2019 17:29:19 +0100
Subject: TR: Content-Length and gzip
In-Reply-To: <001601d59263$9085c0c0$b1914240$@fxstudio.be>
Message-ID: <2029-5dbf0080-5-35c5ba0@195035315>

Then, we will edit the file /etc/nginx/nginx.conf. This is what I put in mine:

user www-data;
worker_processes 2;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    server_names_hash_bucket_size 64;
    sendfile on;
    tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    tcp_nodelay on;
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.0;
    gzip_min_length 0;
    gzip_types text/plain text/html text/css image/x-icon application/x-javascript;
    gzip_vary on;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
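For the /live/ and /trial/ allowlist question that opens this digest, a minimal sketch; the upstream name and address are assumptions, not taken from the original post:

upstream backend {
    server 127.0.0.1:8080;   # assumed backend address
}

server {
    listen 80;
    server_name abc.com;

    # Only these two prefixes reach the backend.
    location /live/  { proxy_pass http://backend; }
    location /trial/ { proxy_pass http://backend; }

    # Everything else is rejected before it reaches the backend.
    location / { return 403; }
}

On the Content-Length question, one plausible explanation (not confirmed in the thread) is that nginx.org serves files gzipped ahead of time, so the compressed size is already known and no chunked encoding is needed; on-the-fly gzip cannot know the final size in advance. The relevant directive would be:

gzip_static on;   # serve a pre-compressed foo.html.gz next to foo.html, with Content-Length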
From nginx-forum at forum.nginx.org Mon Nov 4 11:07:07 2019
From: nginx-forum at forum.nginx.org (Proline29)
Date: Mon, 04 Nov 2019 06:07:07 -0500
Subject: Block specific URL
Message-ID:

Hello,

I'm trying to block this kind of URL: https://mysite.com/#id=826c99368cc93a894267703e0fc2ed46

I tried:

if ( $request_uri = https://mysite.com/#id=826c99368cc93a894267703e0fc2ed46 ) {
    return 444;
}

location ~* https://mysite.com/#id=826c99368cc93a894267703e0fc2ed46 {
    deny all;
}

if ( $query_string = "826c99368cc93a894267703e0fc2ed46" ) {
    return 404;
}

None of these solutions helped.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286086,286086#msg-286086

From al-nginx at none.at Mon Nov 4 13:28:43 2019
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 4 Nov 2019 14:28:43 +0100
Subject: Block specific URL
Message-ID: <615f919c-14be-503b-e2e6-fa7d1097e4eb@none.at>

Hi.

As '#id=826c99368cc93a894267703e0fc2ed46' is a fragment, I think that part does not reach the server at all, as Maxim mentioned in this earlier answer: https://forum.nginx.org/read.php?2,49075,49079

What do you see when you activate the debug log?

Regards
Aleks

From nginx-forum at forum.nginx.org Tue Nov 5 05:26:22 2019
From: nginx-forum at forum.nginx.org (noloader)
Date: Tue, 05 Nov 2019 00:26:22 -0500
Subject: Git Plugin
Message-ID:

Hi Everyone,

I'm running a Fedora 31 server. The server hosts one nginx-based website on ports 80 and 443. The server also hosts one Git project, located at /var/myproject and only accessible over SSH at the moment. I'd like to put a web front end on the Git project for browsing and diffing.

Searching the forum, I don't see discussions of Git plugins or nginx front ends for Git projects. My question is: does someone have a recommendation for a simple nginx plugin to put the project on the web?

Thanks in advance.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286091,286091#msg-286091

From abbot at monksofcool.net Tue Nov 5 23:21:13 2019
From: abbot at monksofcool.net (Ralph Seichter)
Date: Wed, 06 Nov 2019 00:21:13 +0100
Subject: Git Plugin
Message-ID: <874kzhhpue.fsf@wedjat.horus-it.com>

* noloader:

> I'd like to put a web front-end on the Git project for browsing and
> diff'ing.

Git comes with its own web interface called "gitweb", which covers most basic needs (see https://git-scm.com/docs/gitweb).

-Ralph
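Two notes on the threads above. First, on the blocked URL: since the #id=... fragment never leaves the browser, it cannot be matched server-side at all. If the identifier arrived as a query string instead (?id=...), a check like this would work (the hash value is taken from the question):

# Matches /?id=826c... - but nothing can match /#id=..., which is never sent.
if ($arg_id = "826c99368cc93a894267703e0fc2ed46") {
    return 404;
}

Second, on gitweb: it is a CGI script, so nginx needs a CGI-to-FastCGI shim such as fcgiwrap in front of it. A sketch; the socket path and script location are assumptions that vary by distribution:

location /gitweb/ {
    include       fastcgi_params;
    fastcgi_pass  unix:/var/run/fcgiwrap.socket;                  # fcgiwrap must be running
    fastcgi_param SCRIPT_FILENAME /usr/share/gitweb/gitweb.cgi;   # distro-specific path
    fastcgi_param PATH_INFO $uri;
}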
From nginx-forum at forum.nginx.org Wed Nov 6 19:41:15 2019
From: nginx-forum at forum.nginx.org (mogwai)
Date: Wed, 06 Nov 2019 14:41:15 -0500
Subject: SSL handshake attack mitigation
Message-ID:

Greetings!

I run a bunch of sites on nginx-plus-r19 (OpenSSL 1.0.2k-fips) and was recently hit by a nasty DDoS SSL handshake attack. I noticed nginx worker processes suddenly eating all available CPU, and the "Handshakes failed" counter in the NGINX Plus dashboard suddenly climbing out of proportion to the successful handshakes.

If I understand correctly, the limit_req directive would not be effective in mitigating this type of attack, since the SSL handshake occurs earlier in the request chain. I ended up setting the error_log level to "info" and feeding the failed-handshake client IPs to fail2ban.

My first question is about the particular error log messages produced during the attack - see the example below:

[info] 8050#8050: *146 SSL_do_handshake() failed (SSL: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:SSL alert number 46) while SSL handshaking, client: XXX.XXX.XXX.XXX, server: 0.0.0.0:443

The "certificate unknown" seems to suggest that nginx is trying to verify the certificate of the client, yet "ssl_verify_client" should be off by default, so why does nginx care about that certificate?

My second question: is there a better way of mitigating this type of attack? (Preferably without putting an expensive firewall in front of nginx.) I would also like to put in a feature request for a limit_req equivalent for SSL handshakes.

Thanks!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286113,286113#msg-286113

From lists at lazygranch.com Wed Nov 6 20:22:38 2019
From: lists at lazygranch.com (lists)
Date: Wed, 06 Nov 2019 12:22:38 -0800
Subject: SSL handshake attack mitigation
Message-ID: <96j7r2c6j7a1b1tb8o9nt59b.1573071758445@lazygranch.com>

IMHO you did the right thing with fail2ban. I don't see how a firewall is "expensive", other than that they are a little RAM-heavy. Half of internet traffic is bots, and that doesn't even count the hotlinkers. So the reality is that you will need a firewall to block what doesn't have eyeballs, namely datacenters. At a bare minimum you should be blocking all of AWS from everything except port 25. Firewalls have a low CPU load. I think that when the dust settles, killing the troublemakers at the source is the way to go.
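On the feature request for a limit_req equivalent for SSL handshakes: one nginx-only partial mitigation is to cap concurrent TCP connections per IP in the stream module, before any handshake work is done. A sketch, assuming the TLS server can be moved to a local port; note the HTTP server then sees 127.0.0.1 as the client address unless the PROXY protocol is also configured on both sides:

stream {
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        listen 443;
        limit_conn perip 10;        # at most 10 concurrent connections per source IP
        proxy_pass 127.0.0.1:8443;  # the real "listen ... ssl" server, moved off 443
    }
}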
From bobbob601 at hotmail.com Wed Nov 6 20:24:03 2019
From: bobbob601 at hotmail.com (bob bob)
Date: Wed, 6 Nov 2019 20:24:03 +0000
Subject: Custom Sticky Module development
Message-ID:

Hi guys,

We have a use case where we plan to use nginx as our load balancer, with a session-persistence requirement. We are using it in the context of Kubernetes; nothing special here.

Our specific need is that each user will have one non-shared pod, which means that once an upstream server is assigned to one session, it should not be assigned to another user. To simplify the architecture, we have a Redis cache storage where the list of available servers is kept. Each time a user is directed to a server, the server notifies Redis that it is no longer available for assignment but still available for traffic (that's why we can't use health probes: we want traffic to continue to be directed to it, but only for one user).

We are thinking about developing a fork of the sticky module (https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/src/master/) and making it read the list of available pods from Redis instead of the in-memory list of all upstream servers.

Questions:
1. Does it seem feasible?
2. Is it better to overwrite the sticky module and edit its code (it seems to be a built-in module)?
3. Or is it better to develop and load a custom module (.so)? If so, how can we ensure it is loaded instead of the built-in module?

Thanks a lot for your help and thoughts.

Sent from Mail for Windows 10

From osa at freebsd.org.ru Wed Nov 6 20:34:07 2019
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Wed, 6 Nov 2019 23:34:07 +0300
Subject: SSL handshake attack mitigation
Message-ID: <20191106203407.GA88318@FreeBSD.org.ru>

Hi,

On Wed, Nov 06, 2019 at 02:41:15PM -0500, mogwai wrote:
> I run a bunch of sites on nginx-plus-r19 (OpenSSL 1.0.2k-fips) and was
> recently hit by a nasty DDoS SSL handshake attack.

There are several techniques available to mitigate DDoS attacks with NGINX and NGINX Plus; please see the following link for details:

https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/

Since you have NGINX Plus, I'd recommend contacting the NGINX Plus support team through the Customer Portal, https://cs.nginx.com/, or via email at plus-support at nginx.com.

--
Sergey Osokin
From pluknet at nginx.com Thu Nov 7 12:11:14 2019
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 7 Nov 2019 15:11:14 +0300
Subject: SSL handshake attack mitigation
Message-ID:

> On 6 Nov 2019, at 22:41, mogwai wrote:
>
> The "certificate unknown" seems to suggest that nginx is trying to verify
> the certificate of the client, yet "ssl_verify_client" should be off by
> default, so why does nginx care about that certificate?

It's the opposite: nginx received a certificate_unknown alert message from the client, for some reason, while in the handshake.

--
Sergey Kandaurov

From nginx-forum at forum.nginx.org Thu Nov 7 15:23:20 2019
From: nginx-forum at forum.nginx.org (shivramg94)
Date: Thu, 07 Nov 2019 10:23:20 -0500
Subject: Disable only Hostname verification of proxied HTTPS server certificate
Message-ID: <70d6c8d70c2d8b1f17f710562bdf6363.NginxMailingListEnglish@forum.nginx.org>

Is there any way we can configure nginx to verify only the root of the proxied HTTPS (upstream) server's certificate chain and to skip the host name (or domain name) verification?

As I understand it, the proxy_ssl_verify directive can be used to completely enable or disable verification of the proxied HTTPS server certificate, but not selectively. Is there any directive to disable only the host name verification?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286129,286129#msg-286129
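As far as the stock directives go, there seems to be no switch that skips only the name check. But when the upstream certificate's name is known, proxy_ssl_name can be pointed at it while proxy_pass uses the raw address. A sketch; the address, CA file, and name below are assumptions:

location / {
    proxy_pass https://10.0.0.5:8443;
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/certs/upstream-ca.pem;  # root to verify against
    proxy_ssl_name backend.internal.example;  # name checked instead of "10.0.0.5"
}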
From shahzaib.cb at gmail.com Fri Nov 8 12:10:12 2019
From: shahzaib.cb at gmail.com (shahzaib mushtaq)
Date: Fri, 8 Nov 2019 17:10:12 +0500
Subject: NginX sudden "Weird server reply" - HACKED?
Message-ID:

Hi,

We just received an alert for one of our nginx-based servers, which has started to download files with any extension (e.g. .html, .php) over HTTP instead of processing them. Over HTTPS files are processed fine, but over HTTP even a file with the .html extension is downloaded by the browser. We have a forced redirect set up from HTTP to HTTPS, which has also stopped working.

If we send a curl request over HTTP, the following is the reply we get:

[root at cw025 /usr/local/etc/nginx/vhosts]# curl -I http://cw025.domain.com/test.html
curl: (8) Weird server reply

Can anyone help with what's going on?

Regards.

From shahzaib.cb at gmail.com Fri Nov 8 13:00:16 2019
From: shahzaib.cb at gmail.com (shahzaib mushtaq)
Date: Fri, 8 Nov 2019 18:00:16 +0500
Subject: NginX sudden "Weird server reply" - HACKED?
Message-ID:

Hi,

I tried to create a test.html file without any content in it, and the curl request showed the following output:

[root at cw025 /tunefiles/tunefiles_git]# curl http://cw025.domain.com/test.html
???

From shahzaib.cb at gmail.com Fri Nov 8 13:12:59 2019
From: shahzaib.cb at gmail.com (shahzaib mushtaq)
Date: Fri, 8 Nov 2019 18:12:59 +0500
Subject: NginX sudden "Weird server reply" - HACKED?
Message-ID:

OK, found it: I had mistakenly put the http2 parameter in the HTTP section of nginx.
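This matches the classic symptom: with the http2 parameter on a plain-text listener, nginx speaks binary HTTP/2 on port 80, which curl reports as a weird server reply and which browsers offer as a download. The usual split, with a hypothetical hostname:

server {
    listen 80;                  # no "http2" here: plain HTTP, redirect only
    server_name cw025.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;       # http2 belongs on the TLS listener
    server_name cw025.example.com;
    # ... certificates and the rest of the site ...
}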
From nginx-forum at forum.nginx.org Sun Nov 10 18:03:20 2019
From: nginx-forum at forum.nginx.org (frank.muller)
Date: Sun, 10 Nov 2019 13:03:20 -0500
Subject: Extremely slow file (~5MB) upload via POST
Message-ID: <64b373e35fde33f9457ee9457afef7ff.NginxMailingListEnglish@forum.nginx.org>

Hi everyone

I'm new here, and I've searched whether the problem appeared before, but couldn't find anything useful.

[DESCRIPTION] I have an upstream backend service behind nginx (1.16.1, openssl-1.1.1) which allows people to upload files from their browser. The files are simply stored on disk; nothing else is done with them.

[SYSTEM CONFIG]
. Linux 4.15.0 Ubuntu 18.04 LTS SMP x86_64
. RAM: 32GB
. CPU: 8-core Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz
. DISK: 1TB SSD
. NETWORK CARD: 10Gbps
. The system is never under load. We usually upload 10 files per hour at most.

[DATA CONFIG]
. File size is between 5MB and 20MB.

[NGINX CONFIG]

We are running nginx 1.16.1 with TLSv1.3 support (built on openssl 1.1.1).

-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8

$ cat /etc/nginx/nginx.conf

worker_processes auto;
worker_rlimit_nofile 100000;
pid /run/nginx.pid;

error_log off; #/var/log/nginx/error.log info;

events {
    worker_connections 655350;
    multi_accept on;
    use epoll;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server_tokens off;
    keepalive_timeout 3600;

    access_log off; #/var/log/nginx/access.log;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    types_hash_max_size 2048;

    open_file_cache max=10000 inactive=10m;
    open_file_cache_valid 1h;
    open_file_cache_min_uses 1;
    open_file_cache_errors on;

    include /etc/nginx/conf.d/*.conf;
}

-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8

$ cat /etc/nginx/conf.d/uploader.conf

server {
    listen 443 ssl;

    server_name BACKEND_HOST_NAME;

    ssl_certificate /etc/nginx/certs/bundle.pem;
    ssl_certificate_key /etc/nginx/certs/key.pem;
    ssl_dhparam /etc/nginx/certs/dh.pem; ## 2048-bit

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;

    client_max_body_size 30m;

    location / {
        proxy_pass http://127.0.0.1:7777;
    }
}

-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8

[PROBLEM] A 5MB file takes almost 30 seconds to upload via nginx. When uploading it directly to the upstream backend, it takes ~400 milliseconds at most.

Running strace, we've got this:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 47.96    0.041738          11      3887      1489 read
 21.73    0.018909          13      1509           epoll_wait
 17.95    0.015622          22       708           writev
 10.62    0.009241          13       712           write
  0.47    0.000407          19        21        21 connect
  0.39    0.000338          17        20           close
  0.21    0.000180           8        22           epoll_ctl
  0.20    0.000173           8        21           socket
  0.13    0.000110         110         1           accept4
  0.11    0.000095           5        21           getsockopt
  0.10    0.000091           4        21           recvfrom
  0.07    0.000060           3        21           ioctl
  0.04    0.000037          12         3           brk
  0.03    0.000023           8         3           setsockopt
------ ----------- ----------- --------- --------- ----------------
100.00    0.087024                  6970      1510 total

A lot of errors in the "read" calls: 1489 errors.
They all correspond to (thanks again to strace):

22807 read(3, "\26\3\1\2\0\1\0\1\374\3\3\304\353\3\333\314\0\36\223\244z\246\322n\375\205\360\322\35\237_\240"..., 16709) = 517
22807 read(3, 0x559de2a23f03, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\24\3\3\0\1\1\26\3\3\2\0\1\0\1\374\3\3\304\353\3\333\314\0\36\223\244z\246\322n\375\205"..., 16709) = 523
22807 read(3, 0x559de2a23f03, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0E\271m'\306\262\26X\36J\25lC/\202_7\241\32\342XN \357\303%\264\0"..., 16709) = 74
22807 read(3, "\27\3\3\0\245\240\204\304KJ\260\207\301\232\3147\217\357I$\243\266p+*\343L\335\6v\276\323"..., 16709) = 478
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0\32`\324\324\237\v\266n\300x\24\277\357z\374)\365\260F\235\24\346#A%\300\376", 16709) = 31
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0\177\310*W\352\265\230\357\325\177\302\275\357=\246`\246^\372\214T\206\264b\352;\273z"..., 16709) = 814
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0Y\330\276PNY\220\245\254E\0066\2016\355\334\237Yo\2510\253\320+\26z\342\275"..., 16709) = 644
22807 read(3, 0x559de2a229e3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0Z \237j\230\f\331\222\246\325\1\272Y]\252\255%\31\257L\25\10\226\267 \253\353\367"..., 16709) = 285
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0\212\216j6\256\370\367\310\366Hjs\275r\276>\217\216\374\377a\375\363\4\2yr\23"..., 16709) = 176
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0\227K2\345P\200Ls\234\10\230f\362\221\273\270V\2371X\261|\245\315\240B\177\224"..., 16709) = 1717
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3>\232\344\316\245i\375hM\362\376\frr\340\21umx&\3311\373}\35\4\3069`"..., 16709) = 4380
22807 read(3, 0x559de2a1fabf, 11651) = -1 EAGAIN (Resource temporarily unavailable)

We tried to tune our nginx config, but the result is always the same:

22807 read(3, 0x559de2a1fabf, 11651) = -1 EAGAIN (Resource temporarily unavailable)

Help appreciated

/F.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286161,286161#msg-286161

From lagged at gmail.com Sun Nov 10 22:58:36 2019
From: lagged at gmail.com (Andrei)
Date: Mon, 11 Nov 2019 00:58:36 +0200
Subject: Extremely slow file (~5MB) upload via POST
In-Reply-To: <64b373e35fde33f9457ee9457afef7ff.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

A bit off-topic, but if you really want to improve the performance and ditch an upstream service which just takes in file uploads, you can do it directly in nginx with some Lua. For example: https://www.yanxurui.cc/posts/server/2017-03-21-NGINX-as-a-file-server/. I used this method for large sites (20k online) and it worked far better than having to pass the file to a backend for saving. There are many other articles on "nginx file server" or "nginx image server" with details on how to also process images.

Hope this helps.
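For the original upload-speed problem, a few request-body directives that may be worth toggling while measuring; the values are illustrative and none of this is a confirmed fix:

location / {
    client_body_buffer_size 1m;      # keep more of a 5-20 MB body in memory
    proxy_request_buffering on;      # the default: read the whole body before contacting upstream
    proxy_http_version 1.1;
    proxy_set_header Connection "";  # allow upstream keepalive
    proxy_pass http://127.0.0.1:7777;
}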
From nginx-forum at forum.nginx.org Mon Nov 11 02:31:55 2019
From: nginx-forum at forum.nginx.org (Breukers)
Date: Sun, 10 Nov 2019 21:31:55 -0500
Subject: Pandora charms and bracelets
Message-ID: <3a1d4d72d8699b29732ab79cb64d5790.NginxMailingListEnglish@forum.nginx.org>

In general, the [url=https://www.setschmuckch.com/]pandora schmuck online kaufen[/url] can be made systematically with the latest jewellery tools and methods to increase their elegance and grace for target-oriented customers. On the other hand, the Pandora charms have their own uniqueness and versatility to draw your eyes to them.
What is most fabulous is that they have currently become real jewellery for the hottest, most sizzling and most glamorous ladies of the wide world. The Pandora bracelet and the Pandora charm [url=https://www.setschmuckch.com/silber]pandora silber schweiz sale[/url] can be available in very many unique styles and models according to your own wishes. You can use your own Pandora charms and bracelets for both cultural and social reasons. On this international market there is another fabulous and unique fashion-jewellery design. It is called the Pandora earring. Besides the [url=https://www.setschmuckch.com/tiere]pandora tiere online[/url], you can use the Pandora rings without restriction to strengthen your self-image for a long time. Remember that they are very popular, symbolic and inexpensive jewellery designs for you. For this reason they would definitely grab your eyes on the spot. At present, various online jewellery shops offer you the best kinds of Pandora jewels, rings, pendants and bracelets inexpensively online. Bracelets [url=https://www.setschmuckch.com/reise]pandora reise charms sale[/url] have been worn for centuries as a medium for superstition, religious belief and fashion. The ancient Egyptians first used amulets to gain the favour of the gods and obtain passage into the afterlife. Later, over the years, charms were also used to ward off evil spirits and, at the same time, to curse enemies.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286163,286163#msg-286163

From lxlenovostar at gmail.com Mon Nov 11 03:26:30 2019
From: lxlenovostar at gmail.com (lx)
Date: Mon, 11 Nov 2019 11:26:30 +0800
Subject: hi all, [nginx] "accept_mutex on" causes 1s delay
Message-ID:

hi all:

I use nginx-1.16.0; nginx is running on an x86 embedded device. The embedded device has 4 CPUs; the CPU type is "Intel(R) Atom(TM) CPU D525 @ 1.80GHz". When I use "accept_mutex on", nginx takes 1 second to get a static file.
events {
    use epoll;
    accept_mutex on;
    worker_connections 10240;
}

The debug log is:

###########################################################################
2019/11/08 17:08:10 [debug] 2552#2552: *1 post access phase: 12
2019/11/08 17:08:10 [debug] 2552#2552: *1 generic phase: 13
2019/11/08 17:08:10 [debug] 2552#2552: *1 generic phase: 14
2019/11/08 17:08:10 [debug] 2552#2552: *1 http script copy: "http://"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http script var: "pcdnapkwsdl2.com.cn"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http script copy: "/"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http script var: "appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http init upstream, client timer: 0
2019/11/08 17:08:10 [debug] 2552#2552: *1 epoll add event: fd:15 op:3 ev:80002005
2019/11/08 17:08:10 [debug] 2552#2552: *1 http cache key: "http://pcdnapkwsdl2.vivo.com.cn"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http cache key: "/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch"
2019/11/08 17:08:10 [debug] 2552#2552: *1 add cleanup: 00000000026C46C0
2019/11/08 17:08:10 [debug] 2552#2552: shmtx lock
2019/11/08 17:08:10 [debug] 2552#2552: shmtx unlock
2019/11/08 17:08:10 [debug] 2552#2552: *1 http file cache exists: 0 e:1
2019/11/08 17:08:10 [debug] 2552#2552: *1 cache file: "/tmp/storage/youyu/ikcdndata/wangsu2/wan1/p2p_proxy/cache/c/df/468f0ede6aa8ba9073f9a989b8377dfc"
2019/11/08 17:08:10 [debug] 2552#2552: *1 add cleanup: 00000000026C4740
2019/11/08 17:08:10 [debug] 2552#2552: *1 http file cache fd: 16
2019/11/08 17:08:10 [debug] 2552#2552: *1 malloc: 00000000026C48D0:4096
2019/11/08 17:08:10 [debug] 2552#2552: *1 thread read: 16, 00000000026C48D0, 4096, 0
2019/11/08 17:08:10 [debug] 2552#2552: task #0 added to thread pool "default"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http upstream cache: -2
2019/11/08 17:08:10 [debug] 2552#2552: *1 http finalize request: -4, "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811
2019/11/08 17:08:10 [debug] 2552#2552: *1 http request count:2 blk:1
2019/11/08 17:08:10 [debug] 2552#2552: worker cycle
2019/11/08 17:08:10 [debug] 2552#2552: accept mutex locked
2019/11/08 17:08:10 [debug] 2552#2552: epoll timer: -1
2019/11/08 17:08:11 [debug] 2552#2552: epoll: fd:15 ev:0004 d:00007F026302A3F0
2019/11/08 17:08:11 [debug] 2552#2552: *1 post event 00007F0262E48190
2019/11/08 17:08:11 [debug] 2552#2552: timer delta: 533
2019/11/08 17:08:11 [debug] 2552#2552: posted event 00007F0262E48190
2019/11/08 17:08:11 [debug] 2552#2552: *1 delete posted event 00007F0262E48190
2019/11/08 17:08:11 [debug] 2552#2552: *1 http run request: "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch?"
2019/11/08 17:08:11 [debug] 2552#2552: worker cycle
2019/11/08 17:08:11 [debug] 2552#2552: accept mutex locked
2019/11/08 17:08:11 [debug] 2552#2552: epoll timer: -1
2019/11/08 17:08:11 [debug] 2552#2564: run task #0 in thread pool "default"
2019/11/08 17:08:11 [debug] 2552#2564: thread read handler
2019/11/08 17:08:11 [debug] 2554#2554: timer delta: 809
2019/11/08 17:08:11 [debug] 2554#2554: worker cycle
2019/11/08 17:08:11 [debug] 2554#2554: accept mutex lock failed: 0
2019/11/08 17:08:11 [debug] 2554#2554: epoll timer: 500
2019/11/08 17:08:11 [debug] 2553#2553: timer delta: 809
2019/11/08 17:08:11 [debug] 2553#2553: worker cycle
2019/11/08 17:08:11 [debug] 2553#2553: accept mutex lock failed: 0
2019/11/08 17:08:11 [debug] 2553#2553: epoll timer: 500
2019/11/08 17:08:11 [debug] 2555#2555: timer delta: 811
2019/11/08 17:08:11 [debug] 2555#2555: worker cycle
2019/11/08 17:08:11 [debug] 2555#2555: accept mutex lock failed: 0
2019/11/08 17:08:11 [debug] 2555#2555: epoll timer: 500
2019/11/08 17:08:11 [debug] 2552#2564: pread: 4096 (err: 0) of 4096 @0
2019/11/08 17:08:11 [debug] 2552#2564: complete task #0 in thread pool "default"
2019/11/08 17:08:11 [debug] 2552#2552: epoll: fd:11 ev:0001 d:000000000086FF20
2019/11/08 17:08:11 [debug] 2552#2552: post event 000000000086FEC0
2019/11/08 17:08:11 [debug] 2552#2552: timer delta: 343
2019/11/08 17:08:11 [debug] 2552#2552: posted event 000000000086FEC0
2019/11/08 17:08:11 [debug] 2552#2552: delete posted event 000000000086FEC0
2019/11/08 17:08:11 [debug] 2552#2552: thread pool handler
2019/11/08 17:08:11 [debug] 2552#2552: run completion handler for task #0
2019/11/08 17:08:11 [debug] 2552#2552: *1 http file cache thread: "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch?"
2019/11/08 17:08:11 [debug] 2552#2552: *1 thread read: 16, 00000000026C48D0, 4096, 0
2019/11/08 17:08:11 [debug] 2552#2552: *1 http upstream cache: 0
2019/11/08 17:08:11 [debug] 2552#2552: *1 posix_memalign: 00000000026C58E0:4096 @16
2019/11/08 17:08:11 [debug] 2552#2552: *1 http proxy status 200 "200 OK"
###########################################################################

If I don't use "accept_mutex on", I can get the HTTP response quickly.
The debug log is:

###########################################################################
2019/11/08 17:14:34 [debug] 23726#23726: *1 malloc: 000000000226E8D0:4096
2019/11/08 17:14:34 [debug] 23726#23726: *1 thread read: 17, 000000000226E8D0, 4096, 0
2019/11/08 17:14:34 [debug] 23726#23726: task #0 added to thread pool "default"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http upstream cache: -2
2019/11/08 17:14:34 [debug] 23726#23795: run task #0 in thread pool "default"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http finalize request: -4, "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_8
2019/11/08 17:14:34 [debug] 23726#23795: thread read handler
2019/11/08 17:14:34 [debug] 23726#23726: *1 http request count:2 blk:1
2019/11/08 17:14:34 [debug] 23726#23726: timer delta: 2
2019/11/08 17:14:34 [debug] 23726#23795: pread: 4096 (err: 0) of 4096 @0
2019/11/08 17:14:34 [debug] 23726#23726: worker cycle
2019/11/08 17:14:34 [debug] 23726#23795: complete task #0 in thread pool "default"
2019/11/08 17:14:34 [debug] 23726#23726: epoll timer: -1
2019/11/08 17:14:34 [debug] 23726#23726: epoll: fd:16 ev:0004 d:00007F3D265033F0
2019/11/08 17:14:34 [debug] 23726#23726: *1 http run request: "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch?
2019/11/08 17:14:34 [debug] 23726#23726: timer delta: 0
2019/11/08 17:14:34 [debug] 23726#23726: worker cycle
2019/11/08 17:14:34 [debug] 23726#23726: epoll timer: -1
2019/11/08 17:14:34 [debug] 23726#23726: epoll: fd:13 ev:0001 d:000000000086FF20
2019/11/08 17:14:34 [debug] 23726#23726: thread pool handler
2019/11/08 17:14:34 [debug] 23726#23726: run completion handler for task #0
2019/11/08 17:14:34 [debug] 23726#23726: *1 http file cache thread: "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.
2019/11/08 17:14:34 [debug] 23726#23726: *1 thread read: 17, 000000000226E8D0, 4096, 0
2019/11/08 17:14:34 [debug] 23726#23726: shmtx lock
2019/11/08 17:14:34 [debug] 23726#23726: shmtx unlock
2019/11/08 17:14:34 [debug] 23726#23726: *1 http upstream cache: 0
2019/11/08 17:14:34 [debug] 23726#23726: *1 posix_memalign: 000000000226F8E0:4096 @16
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy status 200 "200 OK"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Date: Fri, 08 Nov 2019 07:50:58 GMT"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Content-Type: application/octet-stream"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Content-Length: 40492454"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Connection: close"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Server: AliyunOSS"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "x-oss-request-id: 5D9C3B95591574F5686E7B00"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Accept-Ranges: bytes"
###########################################################################

How should this problem be analyzed? I am a newcomer to nginx.

Thank you.
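A plausible reading of the trace above, for anyone hitting the same thing: the "epoll timer: 500" lines in the workers that log "accept mutex lock failed" correspond to accept_mutex_delay (default 500ms), the interval a worker waits before retrying the mutex, and two such waits line up with the observed one-second delay. accept_mutex has defaulted to off since nginx 1.11.3, so a minimal events block here would be:

events {
    use epoll;
    worker_connections 10240;
    # accept_mutex defaults to off since 1.11.3; leave it off for this workload.
    # If it must be on, a shorter retry interval softens the delay:
    # accept_mutex_delay 50ms;
}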
From francis at daoine.org Mon Nov 11 12:11:56 2019
From: francis at daoine.org (Francis Daly)
Date: Mon, 11 Nov 2019 12:11:56 +0000
Subject: Extremely slow file (~5MB) upload via POST
In-Reply-To: <64b373e35fde33f9457ee9457afef7ff.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20191111121156.GF1914@daoine.org>

On Sun, Nov 10, 2019 at 01:03:20PM -0500, frank.muller wrote:

Hi there,

I don't have an answer for you, but there are some things you could perhaps try, if you are happy to keep investigating.

> [DESCRIPTION] I have an upstream backend service behind nginx (1.16.1,
> openssl-1.1.1) which allows people to upload files from their browser.
> The files are simply stored on disk; nothing else is done with them.

The sequence is: client writes to nginx; nginx writes to upstream. Can you see whether the extra slowness is in the first part or the second?

The usual first place to look is in the log files.

> error_log off; #/var/log/nginx/error.log info;

You can probably look in the file /usr/local/nginx/off to see what nginx says is happening; but you might want to increase the log level to see more details.

> access_log off; #/var/log/nginx/access.log;

You don't have an access log to look in. That probably does not matter much here.

> sendfile on;
> tcp_nopush on;
> tcp_nodelay on;
> types_hash_max_size 2048;
> open_file_cache max=10000 inactive=10m;
> open_file_cache_valid 1h;
> open_file_cache_min_uses 1;
> open_file_cache_errors on;

I think that those directives should not affect this test either way.

> listen 443 ssl;
> location / {
>     proxy_pass http://127.0.0.1:7777;
> }

> [PROBLEM] A 5MB file takes almost 30 seconds to upload via nginx.
> When uploading it directly to the upstream backend, it takes ~400 milliseconds at most.

That does sound unnecessarily slow. The (presumed) ssl/no-ssl difference should not account for that much overhead.

> Running strace, we've got this:
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  47.96    0.041738          11      3887      1489 read
> ...
>   0.47    0.000407          19        21        21 connect

I don't know the details, but what is nginx trying to connect() to that is erroring every time? Is that likely to be relevant to the problem?

> A lot of errors in the "read" calls: 1489 errors. They all correspond to
> (thanks again to strace):

It is possible that the nginx debug log might have more nginx-related details than the bare strace.

Good luck with it,

f
--
Francis Daly        francis at daoine.org
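To make the error_log point above concrete: error_log has no special "off" value, so "error_log off;" opens a file literally named "off" under the prefix directory. Two sane alternatives:

# While debugging the upload path, log verbosely
# (the "debug" level needs a --with-debug build; "info" works everywhere):
error_log /var/log/nginx/error.log info;

# To genuinely silence it, point it at /dev/null instead of "off":
# error_log /dev/null crit;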
"accept_mutex off" is the default - http://nginx.org/r/accept_mutex https://www.nginx.com/blog/performance-tuning-tips-tricks/ says """ We recommend keeping the default value (off) unless you have extensive knowledge of your app?s performance and the opportunity to test under a variety of conditions, but it can lead to inefficient use of system resources if the volume of new connections is low. Changing the value to on might be beneficial under some high loads. """ Your testing suggests that "accept_mutex on" does not help your use case. So - don't set "accept_mutex on". What further analysis is needed? Cheers, f -- Francis Daly francis at daoine.org From velychkovsky at gmail.com Mon Nov 11 13:09:58 2019 From: velychkovsky at gmail.com (Kostya Velychkovsky) Date: Mon, 11 Nov 2019 15:09:58 +0200 Subject: Per IP bandwidth limit Message-ID: Hello, is it the correct way to limit download/upload speed per client IP, at the same time ignore how many connections it opens and request rate produced? I need just limit bandwidth for example 100 mbit/s per IP, and no matter it opens 1 connection or 100 simulation connections. -- *Best Regards * *Kostiantyn Velychkovsky * -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at cretaforce.gr Mon Nov 11 13:23:31 2019 From: chris at cretaforce.gr (Christos Chatzaras) Date: Mon, 11 Nov 2019 15:23:31 +0200 Subject: Per IP bandwidth limit In-Reply-To: References: Message-ID: > On 11 Nov 2019, at 15:09, Kostya Velychkovsky wrote: > > Hello, is it the correct way to limit download/upload speed per client IP, at the same time ignore how many connections it opens and request rate produced? > > I need just limit bandwidth for example 100 mbit/s per IP, and no matter it opens 1 connection or 100 simulation connections. > Maybe it's better to do this with firewall. For example in FreeBSD this can be done with IPFW + Dummynet. From velychkovsky at gmail.com Mon Nov 11 13:29:14 2019 From: velychkovsky at gmail.com (Kostya Velychkovsky) Date: Mon, 11 Nov 2019 15:29:14 +0200 Subject: Per IP bandwidth limit In-Reply-To: References: Message-ID: I use Linux, and had a bad experience with Linux shaper (native kernel QoS mechanism - tc ), it consumed a lot of CPU and worked unstable. So I rejected the idea to keep using it. ??, 11 ????. 2019 ?. ? 15:23, Christos Chatzaras : > > > On 11 Nov 2019, at 15:09, Kostya Velychkovsky > wrote: > > > > Hello, is it the correct way to limit download/upload speed per client > IP, at the same time ignore how many connections it opens and request rate > produced? > > > > I need just limit bandwidth for example 100 mbit/s per IP, and no > matter it opens 1 connection or 100 simulation connections. > > > > Maybe it's better to do this with firewall. For example in FreeBSD this > can be done with IPFW + Dummynet. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Best Regards * *Kostiantyn Velychkovsky * -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Mon Nov 11 18:05:52 2019 From: peter_booth at me.com (Peter Booth) Date: Mon, 11 Nov 2019 13:05:52 -0500 Subject: Per IP bandwidth limit In-Reply-To: References: Message-ID: <34D0023F-B929-43F5-9E2F-0285EB6E263B@me.com> Why do you want to do this at all? What is the real underlying problem that you are attempting to solve? 
From peter_booth at me.com Mon Nov 11 18:05:52 2019
From: peter_booth at me.com (Peter Booth)
Date: Mon, 11 Nov 2019 13:05:52 -0500
Subject: Per IP bandwidth limit
Message-ID: <34D0023F-B929-43F5-9E2F-0285EB6E263B@me.com>

Why do you want to do this at all? What is the real underlying problem that you are attempting to solve?

From velychkovsky at gmail.com Mon Nov 11 18:50:10 2019
From: velychkovsky at gmail.com (Kostya Velychkovsky)
Date: Mon, 11 Nov 2019 20:50:10 +0200
Subject: Per IP bandwidth limit
In-Reply-To: <34D0023F-B929-43F5-9E2F-0285EB6E263B@me.com>
Message-ID:

I have a big data storage; some clients upload files to it, some download. Some clients might upload a lot of small files over 100-200 parallel connections while using only 20-30 Mbit/s of bandwidth; other clients can upload big files over 10 parallel connections while using 3 Gbit/s. The same situation applies to downloads.

The first situation is normal behavior, and I can't afford to limit the number of connections per IP. The second case is bandwidth overload, but I can't limit bandwidth per IP, because the limit_rate directive works per request only.

In the general case, I just need to limit the bandwidth from one IP, independently of whether the client uses 10 or 100 parallel TCP connections.

--
Best Regards
Kostiantyn Velychkovsky
From lists at lazygranch.com Mon Nov 11 19:56:45 2019
From: lists at lazygranch.com (lists)
Date: Mon, 11 Nov 2019 11:56:45 -0800
Subject: Per IP bandwidth limit
Message-ID:

An HTML attachment was scrubbed...

From themadbeaker at gmail.com Tue Nov 12 00:01:11 2019
From: themadbeaker at gmail.com (J.R.)
Date: Mon, 11 Nov 2019 18:01:11 -0600
Subject: Per IP bandwidth limit
Message-ID:

Maybe you can write something with the njs module? Nothing that I have read in the standard nginx docs or blogs really addresses how you want to throttle (though it does make sense).

Maybe there is a third-party module?

From velychkovsky at gmail.com Tue Nov 12 08:51:22 2019
From: velychkovsky at gmail.com (Kostya Velychkovsky)
Date: Tue, 12 Nov 2019 10:51:22 +0200
Subject: Per IP bandwidth limit
Message-ID:

Maybe, but I didn't find anything related to my question.

Sent from my iPhone

From zafer.gurel at advancity.com.tr Tue Nov 12 12:03:07 2019
From: zafer.gurel at advancity.com.tr (Zafer Gürel)
Date: Tue, 12 Nov 2019 15:03:07 +0300
Subject: Per IP bandwidth limit
Message-ID:

There is something like the following: https://github.com/openresty/lua-resty-limit-traffic

It's a Lua script and may be a starting point for developing a module for your need. There is a comment in it like this: "here we use the remote (IP) address as the limiting key".

Hope it helps.

Zafer
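nginx has no true aggregate per-IP shaper, but a rough ceiling can be built from directives that do exist: cap concurrent connections per IP with limit_conn and cap each connection with limit_rate, so their product bounds the per-IP total. Note that limit_rate shapes responses (downloads) only, not uploads. A sketch aiming at roughly 100 Mbit/s (~12.5 MB/s) per IP; the numbers and location are illustrative:

limit_conn_zone $binary_remote_addr zone=peraddr:10m;

server {
    location /files/ {
        limit_conn peraddr 10;   # at most 10 concurrent connections per IP
        limit_rate 1250k;        # ~1.25 MB/s each, so ~12.5 MB/s per IP at worst
    }
}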
From fireman777 at ukr.net Wed Nov 13 14:38:54 2019
From: fireman777 at ukr.net (Эмануил Кант)
Date: Wed, 13 Nov 2019 16:38:54 +0200
Subject: Help with nginx second request
Message-ID: <1573655932.406816000.4ow42y66@frv55.fwdcdn.com>

Hi all,

I would really appreciate it if you could help me with nginx. The situation is: nginx (v. 1.14.2) redirects the request to the application server. When the request uses the POST method and the application server returns error code 500, the response is transmitted to the client, but nginx then makes a second request. Is there any way to disable execution of the second request by nginx?

Many thanks in advance. Here is a part of the nginx file:

# Makes an upstream to the branch with response code 500
upstream test5 {
    server 127.0.0.1:90;
    keepalive 20;
}

# virtual host that issues response code 500
server {
    listen 90 default_server;
    server_name localhost;
    root /home/jetty/www;

    location @intercept_disabled {
        proxy_intercept_errors off;
        proxy_pass http://test5;
    }

    location /test500 {
        return 500;
        error_page 500 /500.html;
    }
}

# Redirection to the virtual host that issues response code 500
location /test500 {
    proxy_pass http://test5;
}

================================
Requests and responses:
================================

curl -i -X POST localhost/test500

HTTP/1.1 500 Internal Server Error
Server: nginx
Date: Wed, 13 Nov 2019 14:21:25 GMT
Content-Type: text/html
Content-Length: 13
Connection: close
ETag: "5dcbd0b0-d"

ERROR 500 page.

ngrep -qiltW byline -s 1000 -c 1024 -d lo '' port 90

T 2019/11/13 14:25:08.751892 127.0.0.1:44416 -> 127.0.0.1:90 [AP] #4
POST /test500 HTTP/1.1.
X-Forwarded-For: 127.0.0.1.
Host: localhost.
X-Forwarded-Proto: http.
User-Agent: curl/7.58.0.
Accept: */*.

T 2019/11/13 14:25:08.752038 127.0.0.1:90 -> 127.0.0.1:44416 [AFP] #6
HTTP/1.1 500 Internal Server Error.
Server: nginx.
Date: Wed, 13 Nov 2019 14:25:08 GMT.
Content-Type: text/html.
Content-Length: 13.
Connection: close.
ETag: "5dcbd0b0-d".

ERROR 500 page.

T 2019/11/13 14:25:08.752139 127.0.0.1:44418 -> 127.0.0.1:90 [AP] #12
POST /test500 HTTP/1.1.
X-Forwarded-For: 127.0.0.1.
Host: localhost.
X-Forwarded-Proto: http.
User-Agent: curl/7.58.0.
Accept: */*.

T 2019/11/13 14:25:08.752221 127.0.0.1:90 -> 127.0.0.1:44418 [AFP] #14
HTTP/1.1 500 Internal Server Error.
Server: nginx.
Date: Wed, 13 Nov 2019 14:25:08 GMT.
Content-Type: text/html.
Content-Length: 13.
Connection: close.
ETag: "5dcbd0b0-d".

ERROR 500 page.

So, in the responses we see that nginx makes a second request.
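A hedged guess at the duplicate POST, not a confirmed diagnosis: if the second request comes from nginx's upstream retry logic (proxy_next_upstream defaults to "error timeout", though non-idempotent methods such as POST are not retried by default since 1.9.13), retries can be ruled out entirely as a diagnostic step:

location /test500 {
    proxy_pass          http://test5;
    proxy_next_upstream off;   # never re-send the request to an upstream
}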
From nsclick at gmx.de Wed Nov 13 19:00:58 2019
From: nsclick at gmx.de (nsclick at gmx.de)
Date: Wed, 13 Nov 2019 20:00:58 +0100
Subject: Mail Proxy with Multiple Mail Domains
Message-ID:

Hello,

I would like to set up an nginx mail proxy which handles IMAP and SMTP
for two different mail domains and two different backend servers (one
server for each of the domains). Let's say we have the two mail domains:

- mail.foo.com
- mail.bar.com

Then we can set up a minimalistic mail block like:

mail {
    server_name mail.foo.com; <-- ############ Can I simply add 'mail.bar.com' here? ############
    auth_http localhost/nginxauth.php;

    server {
        listen 25;
        protocol smtp;
    }
    server {
        listen 143;
        protocol imap;
    }
}

And a minimalistic nginxauth.php script like:

But how to solve the questions marked with "###" above? I tried to find
something in the nginx documentation, but without success. Any ideas?
Thanks a lot in advance.

From richard.dakin at digital.justice.gov.uk Wed Nov 13 19:43:48 2019
From: richard.dakin at digital.justice.gov.uk (Richard Dakin)
Date: Wed, 13 Nov 2019 19:43:48 +0000
Subject: Nginx Container crash no logs
Message-ID:

Hey all,

We have a new setup running large amounts of data through a containerised
nginx. This is crashing without error, forcing a reboot to recover, daily
at the moment. We are getting nothing of any use from the nginx logs or
the docker logs. Any suggestions for debugging this?

CONTAINER ID  IMAGE         COMMAND  CREATED  STATUS  PORTS  NAMES
9bc827d6ccd7  nginx:stable

The container is running on an Ubuntu 18.04.3 LTS OS. The last thing we
see in the logs is

2019/11/13 18:15:36 [alert] 7#7: ignore long locked inactive cache entry 9f78089258be73e98f58abed986ddb8b, count:1
2019/11/13 18:25:36 [alert] 7#7: ignore long locked inactive cache entry 9f78089258be73e98f58abed986ddb8b, count:1
2019/11/13 18:35:36 [alert] 7#7: ignore long locked inactive cache entry 9f78089258be73e98f58abed986ddb8b, count:1

We've tried a lot of things, such as downgrading the nginx version (we
were using latest), but this just prolonged the time to crash.
A small improvement, by about a day.

Any suggestions welcome. The nginx conf is below.

worker_rlimit_nofile 30000;

events {}

http {
    log_format compression '$remote_addr - $remote_user [$time_local] '
        '"$request" $status $body_bytes_sent '
        '"$http_referer" "$http_user_agent" "$gzip_ratio"';

    error_log /etc/nginx/error_log.log warn;
    client_max_body_size 20m;
    server_names_hash_bucket_size 512;
    proxy_headers_hash_bucket_size 128;
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=content_cache:10m max_size=10g use_temp_path=off;

    upstream hub_node { server hub-node:3000; keepalive 16; }
    upstream hub_cms { server hub-be:80; keepalive 16; }
    upstream hub_analytics { server hub-matomo:80; keepalive 16; }

    server {
        listen 443 default_server;
        server_name _;
        return 418;
    }

    server {
        listen 443 ssl http2;
        server_name digital-hub.bwi.dpn.gov.uk;
        add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";

        location /sites/default/files/ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
            proxy_cache_valid 200 302 10m;
            proxy_cache content_cache;
            proxy_pass http://hub_cms/sites/default/files/;
        }

        location / {
            access_log /var/log/nginx/access.log compression buffer=32k;
            proxy_pass http://hub_node/;
        }
    }

    server {
        listen 443 ssl http2;
        server_name analytics.digital-hub.bwi.dpn.gov.uk;
        add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://hub_analytics/;
        }
    }

    server {
        listen 443 ssl http2;
        server_name content.digital-hub.bwi.dpn.gov.uk;
        add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
            proxy_cache_valid 200 302 10m;
            proxy_cache content_cache;
            proxy_pass http://hub_cms/;
        }
    }

    ssl_certificate /etc/letsencrypt/live/localhost/san.digital-hub.crt;
    ssl_certificate_key /etc/letsencrypt/live/localhost/san.digital-hub.rsa;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000" always;
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
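A hedged debugging sketch for the crash described above, not taken from
the thread: when nginx workers die without logging anything, a core dump
plus a debug-level error log are often the only artifacts left. These are
stock nginx directives for the main (top-level) context; the paths are
hypothetical, and the debug log level requires a --with-debug build.

    error_log /var/log/nginx/error.log debug;
    working_directory /tmp;        # where worker core dumps are written
    worker_rlimit_core 500m;       # allow core files up to 500 MB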
From 201904-nginx at jslf.app Thu Nov 14 00:37:11 2019
From: 201904-nginx at jslf.app (Patrick)
Date: Thu, 14 Nov 2019 08:37:11 +0800
Subject: Mail Proxy with Multiple Mail Domains
In-Reply-To:
References:
Message-ID: <20191114003711.GA32220@haller.ws>

On 2019-11-13 20:00, nsclick at gmx.de wrote:
> I would like to set up an nginx mail proxy which handles IMAP and SMTP for two different mail domains and two different backend servers (one server for each of the domains).

The docs have a good example at:
https://www.nginx.com/resources/wiki/start/topics/examples/imapauthenticatewithapacheperlscript/

Users need to login with "username at foo.com" or "username at bar.com",
otherwise name collisions will occur... `Auth-User' will have the
username, so match on the domain part to route the user to the correct
server.

Patrick

From phillip.odam at nitorgroup.com Thu Nov 14 01:10:28 2019
From: phillip.odam at nitorgroup.com (Phillip Odam)
Date: Wed, 13 Nov 2019 20:10:28 -0500
Subject: Mail Proxy with Multiple Mail Domains
In-Reply-To: <20191114003711.GA32220@haller.ws>
References: <20191114003711.GA32220@haller.ws>
Message-ID:

The only issue we encountered using the nginx mail auth API was finding
out what encoding is used for the header values. In Java we currently use
the following to decode the password:

    password = URLDecoder.decode(password.replaceAll("\\+", "%2b"), "UTF-8");

My understanding is that nginx encodes some characters in the typical %XX
form (where X is a hexadecimal character) but leaves + as +, so when
decoding, + would incorrectly be decoded to a space. That's what the code
above works around.

On Wed, Nov 13, 2019 at 7:37 PM Patrick <201904-nginx at jslf.app> wrote:
> On 2019-11-13 20:00, nsclick at gmx.de wrote:
> > I would like to set up an nginx mail proxy which handles IMAP and SMTP for
> two different mail domains and two different backend servers (one server
> for each of the domains).
>
> The docs have a good example at:
> https://www.nginx.com/resources/wiki/start/topics/examples/imapauthenticatewithapacheperlscript/
>
> Users need to login with "username at foo.com" or "username at bar.com"
> otherwise name collisions will occur...
>
> `Auth-User' will have the username, so match on the domain part to route
> the user to the correct server.
>
> Patrick
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vbart at nginx.com Thu Nov 14 18:33:06 2019
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 14 Nov 2019 21:33:06 +0300
Subject: Unit 1.13.0 release
Message-ID: <1883662.Z8tWfoiyoH@vbart-workstation>

Hi,

I'm glad to announce a new release of NGINX Unit. This release expands
Unit's functionality as a generic web server by introducing basic HTTP
reverse proxying. See the details in our documentation:

- https://unit.nginx.org/configuration/#proxying

Compared to mature proxy servers and load balancers, Unit's proxy
features are limited now, but we'll continue the advance.

Also, this release improves the user experience for Python and Ruby
modules and remediates compatibility issues with existing applications
in these languages.

Our long-term goal is to turn Unit into the ultimate high-performance
building block that will be helpful and easy to use with web services of
any kind.
To accomplish this, Unit's future releases will focus on the following
aspects:

- security, isolation, and DoS protection
- ability to run various types of dynamic applications
- connectivity with load balancing and fault tolerance
- efficient serving of static media assets
- statistics and monitoring

Changes with Unit 1.13.0                                     14 Nov 2019

    *) Feature: basic support for HTTP reverse proxying.

    *) Feature: compatibility with Python 3.8.

    *) Bugfix: memory leak in Python application processes when the close
       handler was used.

    *) Bugfix: threads in Python applications might not work correctly.

    *) Bugfix: Ruby on Rails applications might not work on Ruby 2.6.

    *) Bugfix: backtraces for uncaught exceptions in Python 3 might be
       logged with significant delays.

    *) Bugfix: explicitly setting a namespace isolation option to false
       might have enabled it.

Please feel free to share your experiences and ideas on GitHub:

- https://github.com/nginx/unit/issues

Or via the Unit mailing list:

- https://mailman.nginx.org/mailman/listinfo/unit

wbr, Valentin V. Bartenev

From francis at daoine.org Thu Nov 14 19:12:02 2019
From: francis at daoine.org (Francis Daly)
Date: Thu, 14 Nov 2019 19:12:02 +0000
Subject: Mail Proxy with Multiple Mail Domains
In-Reply-To:
References:
Message-ID: <20191114191202.GK1914@daoine.org>

On Wed, Nov 13, 2019 at 08:00:58PM +0100, nsclick at gmx.de wrote:

Hi there,

Untested, but...

> I would like to set up an nginx mail proxy which handles IMAP and SMTP for two different mail domains and two different backend servers (one server for each of the domains).

The easiest way is probably to have nginx listening on two IP addresses;
each one handling one domain.

> Let's say we have the two mail domains:
> - mail.foo.com
> - mail.bar.com
>
> Then we can set up a minimalistic mail block like:
>
> mail {
>     server_name mail.foo.com; <-- ############ Can I simply add 'mail.bar.com' here? ############

No. http://nginx.org/en/docs/mail/ngx_mail_core_module.html#server_name
says when this is used. If it is important in your case to have two
different names, then you will want to set it in each server{}.

> $backend_ip["mailhost_foo"] ="192.168.1.10";
> $backend_ip["mailhost_bar"] ="192.168.1.20";
>
> $selection <-- ############ How to make this selection? ############
> Do we have information about the requested mail domain here?
> If yes, in which $_SERVER item?

If you use something like

    server { server_name foo; listen ip1:25; }
    server { server_name bar; listen ip2:25; }

then you can also include an auth_http_header to say "this is foo", or
"this is bar". Or you can use a different auth_http url for foo and for
bar, so that each one "knows" the backend ip for itself.

> But how to solve the questions marked with "###" above?
> I tried to find something in the nginx documentation, but without success.
> Any ideas?

http://nginx.org/en/docs/mail/ngx_mail_core_module.html#listen says
"""
Different servers must listen on different address:port pairs.
"""

Alternatively,
http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol
shows that you will probably have an Auth-User for IMAP, and an
Auth-SMTP-To for SMTP. If those values make it clear which mail domain is
used in this request, then your auth_http script can use the appropriate
logic.

Cheers,

f
--
Francis Daly        francis at daoine.org
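A minimal sketch combining the hints from the reply above, with
hypothetical addresses and a hypothetical header name: each domain gets
its own server{} pair bound to its own IP, and auth_http_header tags the
requests so a single auth script can pick the matching backend.

    mail {
        auth_http localhost/nginxauth.php;

        # mail.foo.com, e.g. bound to 192.168.1.1
        server {
            listen 192.168.1.1:25;
            protocol smtp;
            auth_http_header X-Mail-Domain "foo.com";
        }
        server {
            listen 192.168.1.1:143;
            protocol imap;
            auth_http_header X-Mail-Domain "foo.com";
        }

        # mail.bar.com, e.g. bound to 192.168.1.2
        server {
            listen 192.168.1.2:25;
            protocol smtp;
            auth_http_header X-Mail-Domain "bar.com";
        }
        server {
            listen 192.168.1.2:143;
            protocol imap;
            auth_http_header X-Mail-Domain "bar.com";
        }
    }

The auth script would then read the X-Mail-Domain request header
(hypothetical name) to choose between 192.168.1.10 and 192.168.1.20 as
the Auth-Server it returns.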
From nsclick at gmx.de Fri Nov 15 14:42:57 2019
From: nsclick at gmx.de (nsclick at gmx.de)
Date: Fri, 15 Nov 2019 15:42:57 +0100
Subject: Reply to a thread
Message-ID:

An HTML attachment was scrubbed...
URL:

From vbart at nginx.com Fri Nov 15 15:19:52 2019
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Fri, 15 Nov 2019 18:19:52 +0300
Subject: Reply to a thread
In-Reply-To:
References:
Message-ID: <3359170.4MhSxCp3XN@vbart-workstation>

On Friday 15 November 2019 15:42:57 nsclick at gmx.de wrote:
> Hello,
>
> Excuse my, maybe, very stupid question, but how can I reply to a post on the mailing list?
> I only receive daily summaries of all posts (without any reply links and without the chance to simply reply to a single e-mail).
> When trying to register/login to the forum (https://forum.nginx.org/) instead, I only get the message that I will receive a confirmation e-mail... but such an e-mail never arrives.
>
> Could you give me a little "kick" in the right direction, please?
> Why is it so hard to reply to a post?
>
> Thanks.

The digest mode in mailing lists is usually suited only to receiving the
"latest news" when you don't want to participate in discussions.
Otherwise you should subscribe in normal mode.

wbr, Valentin V. Bartenev

From nsclick at gmx.de Fri Nov 15 20:17:16 2019
From: nsclick at gmx.de (nsclick at gmx.de)
Date: Fri, 15 Nov 2019 21:17:16 +0100
Subject: Mail Proxy: SSL to Backend
Message-ID:

An HTML attachment was scrubbed...
URL:

From targon at technologist.com Sat Nov 16 02:19:50 2019
From: targon at technologist.com (targon at technologist.com)
Date: Sat, 16 Nov 2019 10:19:50 +0800
Subject: Expert needed to Tuning For Best Performance
In-Reply-To: <0448c10155b8a55a9138f3b924d85df6.NginxMailingListEnglish@forum.nginx.org>
References: <0448c10155b8a55a9138f3b924d85df6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <586A1A27-F7BB-4FE1-AF8A-F9E6E7B5F578@technologist.com>

Is there anyone on the list who considers themselves an Nginx tuning
expert and is interested in running some HTTP load testing to establish a
baseline on my server and then tune the config for best performance?

1000 Mbps up/down connection
Xeon E-2146G (6 core - 12 thread) 3.5-4.5 GHz
Supermicro X11SCZ-F
32GB RAM
Samsung SSD 860 2x512 GB SSD

I've been reading Denis Denisov's (denji) "NGINX Tuning For Best
Performance": https://github.com/denji/nginx-tuning
But this specifically says it is for testing, not PRODUCTION. I'm
preparing my server for a production environment.

If there is any interest, please email me; happy to compensate for your
efforts. Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From targon at technologist.com Sat Nov 16 02:42:16 2019
From: targon at technologist.com (targon at technologist.com)
Date: Sat, 16 Nov 2019 10:42:16 +0800
Subject: Expert needed to Tune Nginx For Best Performance
Message-ID: <24D791D8-8F57-41CF-B370-171A830F96F2@technologist.com>

Is there anyone on the list who considers themselves an Nginx tuning
expert and is interested in running some HTTP load testing to establish a
baseline on my server and then tune the config for best performance?

1000 Mbps up/down connection
Xeon E-2146G (6 core - 12 thread) 3.5-4.5 GHz
Supermicro X11SCZ-F
32GB RAM
Samsung SSD 860 2x512 GB SSD

I've been reading Denis Denisov's (denji) "NGINX Tuning For Best
Performance": https://github.com/denji/nginx-tuning
But this specifically says it is for testing, not PRODUCTION. I'm
preparing my server for a production environment.

If there is any interest, please email me; happy to compensate for your
efforts.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Sat Nov 16 09:26:01 2019
From: nginx-forum at forum.nginx.org (j94305)
Date: Sat, 16 Nov 2019 04:26:01 -0500
Subject: Expert needed to Tune Nginx For Best Performance
In-Reply-To: <24D791D8-8F57-41CF-B370-171A830F96F2@technologist.com>
References: <24D791D8-8F57-41CF-B370-171A830F96F2@technologist.com>
Message-ID: <13c213d54f308f7795c26db7ee7bd877.NginxMailingListEnglish@forum.nginx.org>

Optimizing for production is not simply an optimization of one component,
e.g., NGINX. This is also about your security model, the application
architecture and scaling abilities.

If you simply have static files to be served, place them into a
memory-based file system and you'll serve them blindingly fast - in
theory. Actual performance will depend on the locations of your clients,
with their latencies and bandwidths, so the approach may be not to
accelerate one NGINX server location, but rather to have a geographic
distribution that serves content at the edge.

If you need to serve lower-bandwidth clients, gzip compression may be
essential for latencies. If you have clients with broadband capabilities,
you may want to save the extra CPU cycles for other tasks.

If your security model requires complex signature verification on every
request, you may need significantly more CPU power in there, compared to
when you simply handle easily-verifiable cookies (which were assigned
through a more compute-intensive calculation and verification scheme -
but only once per session). OIDC-based schemes come to my mind.

If you have different types of loads, it is sensible to separate them.
Static files will be served by one cluster of servers, dynamic content
will be served by another. Having auto-scaling groups on the application
side, you can scale well, assuming state-less micro-services there.

If there is a broadly varying spectrum of load situations, you may want
to consider clusters of NGINX, possibly with auto-scaling, to handle
loads more effectively. Optimizing single NGINX instances may not be what
really boosts performance enough.

So, my point is: optimizing any application environment for production
use is not just a matter of nifty directives to speed up NGINX. It's a
question of optimizing the architecture AND its components, including the
application services. I've seen massive speed-ups just by changing the
application into a set of state-less micro-services - focus on
"state-less".

While you will find people who help you with NGINX optimization in a
given scenario, this may not be what you really need to optimize your
entire application, including NGINX.

Cheers,
--j.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286223,286224#msg-286224
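A minimal sketch of the memory-based file system idea above, with
hypothetical paths: the document root lives on a tmpfs mount (e.g.
"mount -t tmpfs -o size=512m tmpfs /var/www/ram"), so file reads never
touch disk.

    server {
        listen 80;
        # document root on a tmpfs mount: static files served from RAM
        root /var/www/ram;
        sendfile on;
        tcp_nopush on;
    }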
From nginx-forum at forum.nginx.org Sun Nov 17 11:40:36 2019
From: nginx-forum at forum.nginx.org (naupe)
Date: Sun, 17 Nov 2019 06:40:36 -0500
Subject: 502 Bad Gateway - nginx/1.14.0 (Ubuntu)
Message-ID:

Recently I moved my Dell server from one location to another, with a
completely different router. Its main OS/VM manager is Proxmox VE 5.3. I
have an Nginx VM that is reverse proxying several other VMs.

After configuring my new router, I got several of my VMs to connect to
the Internet (setting the same private IPs to the same MAC addresses, and
opening the same ports for the same private IPs). However, with my
Discourse VM, I am receiving "502 Bad Gateway - nginx/1.14.0 (Ubuntu)"
when trying to access Discourse in a browser. In the past, it was usually
through the Discourse software that I could fix the problem ... but not
so this time. Please read here for more details:
https://meta.discourse.org/t/502-bad-gateway-nginx-1-14-0-ubuntu-unable-to-find-image-locally-error-response-from-daemon/133392

I've basically hit a point where I believe the issue is with the
configuration of the Nginx VM. Recently I tried renewing the SSL
certificates (using Let's Encrypt) for all my sites on Nginx, hoping it
would fix the problem ... but it didn't. However, I checked the Nginx
error log and found a message being reported over and over again:

> root at ngx:/etc/nginx/sites-available# less /var/log/nginx/error.log
> 2019/11/17 06:01:24 [error] 23646#23646: *37 connect() failed (111: Connection refused) while connecting to upstream, client: 1.2.3.4, server: discourse.domainame.com, request: "GET / HTTP/2.0", upstream: "http://5.6.7.8:8080/", host: "discourse.domainame.com"

* 1.2.3.4 = My Old Public IP Address
* 5.6.7.8 = My New Public IP Address

It looks to me like Nginx is trying to connect this site using my old
public IP address. That is definitely incorrect.

My /etc/nginx/sites-available conf file for the Discourse site can be
found at THIS link: https://pastebin.com/fiiyATeP

* 192.168.0.101 = Nginx VM
* 192.168.0.104 = Discourse VM

So my question is: How can I tell Nginx to connect
discourse.domainname.com to my new public IP address 5.6.7.8?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286226,286226#msg-286226

From francis at daoine.org Sun Nov 17 12:14:48 2019
From: francis at daoine.org (Francis Daly)
Date: Sun, 17 Nov 2019 12:14:48 +0000
Subject: 502 Bad Gateway - nginx/1.14.0 (Ubuntu)
In-Reply-To:
References:
Message-ID: <20191117121448.GM1914@daoine.org>

On Sun, Nov 17, 2019 at 06:40:36AM -0500, naupe wrote:

Hi there,

> > root at ngx:/etc/nginx/sites-available# less /var/log/nginx/error.log
> > 2019/11/17 06:01:24 [error] 23646#23646: *37 connect() failed (111:
> Connection refused) while connecting to upstream, client: 1.2.3.4, server:
> discourse.domainame.com, request: "GET / HTTP/2.0", upstream:
> "http://5.6.7.8:8080/", host: "discourse.domainame.com"
>
> * 1.2.3.4 = My Old Public IP Address
> * 5.6.7.8 = My New Public IP Address

nginx is trying, and failing, to connect to 5.6.7.8:8080.

Should nginx be trying to connect to 5.6.7.8? If so, fix the network so
that it is able to connect there. If not, fix the nginx config so that it
tries to connect to the place that you want.

> My /etc/nginx/sites-available CONF file for the Discourse Site can be found
> in THIS link: https://pastebin.com/fiiyATeP

The relevant bit there seems to be

    location / {
        proxy_pass http://discourse.domainname.com:8080/;

> * 192.168.0.101 = Nginx VM
> * 192.168.0.104 = Discourse VM

When nginx starts, it will ask the system to resolve the name
discourse.domainname.com, and it (presumably) is told 5.6.7.8. It looks
to me like you want that to be 192.168.0.104 instead.

In that case - either change the system resolver; or just change the
proxy_pass line to use the 192.168.0.104 IP address instead of the
hostname. (Other options exist too; these are probably the ones with
fewest changes.)

Good luck with it,

f
--
Francis Daly        francis at daoine.org
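A minimal sketch of the second option, which the follow-up below confirms
worked: the literal address bypasses name resolution entirely.

    location / {
        # A hostname here is resolved once, at startup, to whatever DNS
        # says; the backend VM's address is used directly instead.
        proxy_pass http://192.168.0.104:8080/;
    }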
From nginx-forum at forum.nginx.org Mon Nov 18 07:39:28 2019
From: nginx-forum at forum.nginx.org (naupe)
Date: Mon, 18 Nov 2019 02:39:28 -0500
Subject: 502 Bad Gateway - nginx/1.14.0 (Ubuntu)
In-Reply-To: <20191117121448.GM1914@daoine.org>
References: <20191117121448.GM1914@daoine.org>
Message-ID: <663d15e50b9ab9a46fe5a790b8320182.NginxMailingListEnglish@forum.nginx.org>

Francis Daly,

Wow ... that did it! You resolved my issue! After a full week of
troubleshooting, it's resolved! Thank you so much!

So I edited my /etc/nginx/sites-available/discourse.conf file:

* I changed THIS line: proxy_pass http://discourse.domainame.com:8080/;
  To THIS line: proxy_pass http://192.168.0.104:8080/;

So basically calling the local IP address instead of the hostname, as you
suggested. Anyways, after I reloaded and restarted Nginx, my Discourse
forum site came right up!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286226,286229#msg-286229

From mdounin at mdounin.ru Mon Nov 18 14:44:52 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 18 Nov 2019 17:44:52 +0300
Subject: Mail Proxy: SSL to Backend
In-Reply-To:
References:
Message-ID: <20191118144452.GM12894@mdounin.ru>

Hello!

On Fri, Nov 15, 2019 at 09:17:16PM +0100, nsclick at gmx.de wrote:

> does the Nginx Mail Proxy support SSL towards the backend server?
>
> I found some websites which tell that this is not possible; however,
> such websites are quite a bit old.
> Is SSL to the backend supported now?

The answer is still "no". But if you really want to use SSL with
backends for some reason, you can additionally proxy backend connections
through the nginx stream module.

--
Maxim Dounin
http://mdounin.ru/
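A minimal sketch of the stream-module workaround just described, with
hypothetical names and ports: the mail proxy speaks plain IMAP to a local
stream listener, which wraps the hop to the real backend in TLS.

    stream {
        server {
            # the mail auth script points its backend at this address
            listen 127.0.0.1:1143;
            proxy_pass backend.example.com:993;
            # TLS only on the stream-to-backend leg
            proxy_ssl on;
            proxy_ssl_verify on;
            proxy_ssl_name backend.example.com;
            proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
        }
    }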
From targon at technologist.com Tue Nov 19 02:58:55 2019
From: targon at technologist.com (targon at technologist.com)
Date: Tue, 19 Nov 2019 10:58:55 +0800
Subject: Expert needed to Tune Nginx For Best Performance
In-Reply-To: <13c213d54f308f7795c26db7ee7bd877.NginxMailingListEnglish@forum.nginx.org>
References: <24D791D8-8F57-41CF-B370-171A830F96F2@technologist.com> <13c213d54f308f7795c26db7ee7bd877.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7AE20C70-55F0-4E95-B2D0-9BB7B1666259@technologist.com>

Hi J,

There's a guy on Upwork (seems to me to be MIA) who advertises a service
that covers my requirements: https://www.upwork.com/fl/koencalliauw

"Web Performance Expert / Professional Linux Systems Engineer

As a systems engineer and PHP developer, I have developed a lot of skills
in the area of performance optimization. Web performance is extremely
important for your visitors as well as for how you rank in the major
search engines. My approach to helping clients consists of the following:

- Establishing a baseline performance of a specific problematic page that
  can be easily reproduced
- Analyse the HTML/CSS and Javascript loading to detect any bottlenecks
  on the page
- Optimise loading of external assets by following best modern practices
- Analyse the Linux server configuration; this means CPU, memory and disk
  performance, Apache / Nginx configuration, PHP configuration, etc.
- Profile the web application to detect the parts of the code that take
  up the most CPU time (like, for example, slow SQL queries), use the
  most memory, are waiting for disk I/O the longest, or are transferring
  the most data over the network
- Finally, I write a quick report to the client with my findings, and
  upon agreement, I start optimising the code, system configuration, etc.
  in order to bring down the time it takes for the page to load.
- Caching can be implemented (like in memcached, redis, ...) and PHP
  OpCache can be configured and deployed
- SSL settings can be fine-tuned to get a good rating on SSL Labs,
  keeping in mind the visitors and their browsers."

> On 16 Nov 2019, at 17:26, j94305 wrote:
>
> Optimizing for production is not simply an optimization of one component,
> e.g., NGINX.
>
> This is also about your security model, the application architecture and
> scaling abilities.
>
> If you simply have static files to be served, place them into a memory-based
> file system and you'll serve them blindingly fast - in theory. Actual
> performance will depend on the locations of your clients, with their
> latencies and bandwidths, so the approach may be not to accelerate one NGINX
> server location, but rather to have a geographic distribution that serves
> content at the edge.
>
> If you need to serve lower-bandwidth clients, gzip compression may be
> essential for latencies. If you have clients with broadband capabilities,
> you may want to save the extra CPU cycles for other tasks.
>
> If your security model requires complex signature verification on every
> request, you may need significantly more CPU power in there, compared to
> when you simply handle easily-verifiable cookies (which were assigned
> through a more compute-intensive calculation and verification scheme - but
> only once per session). OIDC-based schemes come to my mind.
>
> If you have different types of loads, it is sensible to separate them.
> Static files will be served by one cluster of servers, dynamic content will
> be served by another. Having auto-scaling groups on the application side,
> you can scale well, assuming state-less micro-services there.
>
> If there is a broadly varying spectrum of load situations, you may want to
> consider clusters of NGINX, possibly with auto-scaling, to handle loads more
> effectively. Optimizing single NGINX instances may not be what really
> boosts performance enough.
>
> So, my point is: optimizing any application environment for production use
> is not just a matter of nifty directives to speed up NGINX. It's a question
> of optimizing the architecture AND its components, including the application
> services. I've seen massive speed-ups just by changing the application into
> a set of state-less micro-services - focus on "state-less".
>
> While you will find people who help you with NGINX optimization in a given
> scenario, this may not be what you really need to optimize your entire
> application, including NGINX.
>
> Cheers,
> --j.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286223,286224#msg-286224
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at lazygranch.com Tue Nov 19 04:35:22 2019
From: lists at lazygranch.com (lists)
Date: Mon, 18 Nov 2019 20:35:22 -0800
Subject: Expert needed to Tune Nginx For Best Performance
In-Reply-To: <7AE20C70-55F0-4E95-B2D0-9BB7B1666259@technologist.com>
Message-ID:

An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Tue Nov 19 14:32:36 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 19 Nov 2019 17:32:36 +0300
Subject: nginx-1.17.6
Message-ID: <20191119143236.GU12894@mdounin.ru>

Changes with nginx 1.17.6                                    19 Nov 2019

    *) Feature: the $proxy_protocol_server_addr and
       $proxy_protocol_server_port variables.

    *) Feature: the "limit_conn_dry_run" directive.

    *) Feature: the $limit_req_status and $limit_conn_status variables.

--
Maxim Dounin
http://nginx.org/
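A minimal sketch of the new limit_conn_dry_run directive and
$limit_conn_status variable announced above (zone and log names are
hypothetical): in dry-run mode the limit is evaluated and logged, but
nothing is actually rejected, which makes it possible to size limits
against real traffic before enforcing them.

    # inside the http block
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    log_format limits '$remote_addr $limit_conn_status';

    server {
        location / {
            limit_conn addr 10;
            # $limit_conn_status becomes PASSED, REJECTED or
            # REJECTED_DRY_RUN; with dry run on, no request gets a 503.
            limit_conn_dry_run on;
            access_log /var/log/nginx/limits.log limits;
        }
    }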
From xeioex at nginx.com Tue Nov 19 14:46:33 2019
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 19 Nov 2019 17:46:33 +0300
Subject: njs-0.3.7
Message-ID: <4e95bd6b-cf62-24fb-d713-9b29df24c820@nginx.com>

Hello,

I'm glad to announce a new release of NGINX JavaScript module (njs). This
release continues to extend the coverage of the ECMAScript
specifications.

Notable new features:
- Object.assign() method:

    > var obj = { a: 1, b: 2 }
    undefined
    > var copy = Object.assign({}, obj)
    undefined
    > console.log(copy)
    {a:1,b:2}

You can learn more about njs:

- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel

Changes with njs 0.3.7                                       19 Nov 2019

    nginx modules:

    *) Improvement: refactored iteration over external objects.

    Core:

    *) Feature: added Object.assign().

    *) Feature: added Array.prototype.copyWithin().

    *) Feature: added support for labels in console.time().

    *) Change: removed console.help() from CLI.

    *) Improvement: moved constructors and top-level objects to global
       object.

    *) Improvement: arguments validation for configure script.

    *) Improvement: refactored JSON methods.

    *) Bugfix: fixed heap-buffer-overflow in njs_array_reverse_iterator()
       function. The following functions were affected:
       Array.prototype.lastIndexOf(), Array.prototype.reduceRight().

    *) Bugfix: fixed [[Prototype]] slot of NativeErrors.

    *) Bugfix: fixed NativeError.prototype.message properties.

    *) Bugfix: added conversion of "this" value to object in
       Array.prototype functions.

    *) Bugfix: fixed iterator for Array.prototype.find() and
       Array.prototype.findIndex() functions.

    *) Bugfix: fixed Array.prototype.includes() and
       Array.prototype.join() with "undefined" argument.

    *) Bugfix: fixed "constructor" property of "Hash" and "Hmac" objects.

    *) Bugfix: fixed "__proto__" property of getters and setters.

    *) Bugfix: fixed "Date" object string formatting.

    *) Bugfix: fixed handling of NaN and -0 arguments in Math.min() and
       Math.max().

    *) Bugfix: fixed Math.round() according to the specification.

    *) Bugfix: reimplemented "bound" functions according to the
       specification.

From nginx-forum at forum.nginx.org Tue Nov 19 15:29:37 2019
From: nginx-forum at forum.nginx.org (feanorknd)
Date: Tue, 19 Nov 2019 10:29:37 -0500
Subject: How to avoid sending incomplete request data to backend if 499 error
Message-ID: <435db6793a7f527f6a06661893c3df40.NginxMailingListEnglish@forum.nginx.org>

Hello... a few days ago I ran into this problem;
let me explain with log lines:

X.X.X.X - - [16/Nov/2019:04:36:17 +0100] "POST /api/budgets/new HTTP/2.0" 200 2239 "----" "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/86.0.276299193 Mobile/15E148 Safari/605.1" Exec: "2.190" Conn: "10" Upstream Time: "2.185" Upstream Status: "200"

X.X.X.X - - [16/Nov/2019:04:36:55 +0100] "POST /api/budgets/new HTTP/2.0" 499 0 ""----"" "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/86.0.276299193 Mobile/15E148 Safari/605.1" Exec: "0.147" Conn: "1" Upstream Time: "0.142" Upstream Status: "-"

In the first line, there is nothing of interest... the POST request was
completely fine.

In the second request, there was a client disconnection and the POST
request was not complete, as indicated by the logged 499 error.

The problem was:
- the incomplete POST data was sent from nginx to the backend fastcgi
  server somehow.
- that code processed the incomplete request data and generated a corrupt
  entry in a certain database... another story.

I need NGINX not to behave like this. If the request data is not complete
and the connection timed out, producing a 499, I want NGINX to discard
that request completely instead of sending incomplete data to the fastcgi
backend.

I guess there would be two ways:
- the nginx main core buffers the client request and discards it
  completely if it does not finish correctly (499).
- the nginx fastcgi module buffers the client request and discards it
  completely if it does not finish correctly (499).

But I do not know how to configure anything like this. Even
"fastcgi_request_buffering on" is supposed to be the default, but in this
case an incomplete request was sent to the backend, causing code to
execute with corrupt data.

Is there a way to discard incomplete requests when a client disconnect
happens, before passing them to the backends?

Thanks to all!
-- Gino

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286264,286264#msg-286264

From mdounin at mdounin.ru Tue Nov 19 15:38:51 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 19 Nov 2019 18:38:51 +0300
Subject: Help with nginx second request
In-Reply-To: <1573655932.406816000.4ow42y66@frv55.fwdcdn.com>
References: <1573655932.406816000.4ow42y66@frv55.fwdcdn.com>
Message-ID: <20191119153851.GY12894@mdounin.ru>

Hello!

On Wed, Nov 13, 2019 at 04:38:54PM +0200, fireman777 at ukr.net wrote:

> Hi all,
> I would really appreciate it if you could help me with nginx.
>
> The situation is: Nginx (v. 1.14.2) forwards the request to the
> application server. When such a request uses the POST method and the
> application server returns error code 500, the response is passed on to
> the client. But nginx then makes a second request. Is there any way to
> disable execution of the second request by nginx?
> Many thanks in advance.
>
> Here is a part of the nginx file:
>
> # Makes an upstream to the branch with response code 500
> upstream test5 {
>     server 127.0.0.1:90;
>     keepalive 20;
> }
>
> # virtualhost that issues response code 500
> server {
>     listen 90 default_server;
>     server_name localhost;
>     root /home/jetty/www;
>
>     location @intercept_disabled {
>         proxy_intercept_errors off;
>         proxy_pass http://test5;
>     }
>
>     location /test500 {
>         return 500;
>         error_page 500 /500.html;
>     }
> }
>
> # Redirection to the virtualhost that issues response code 500
> location /test500 {
>     proxy_pass http://test5;
> }

What else is configured in the server{} block with "location /test500"?

The results you provide (that is, two duplicate requests) and the
unneeded "location @intercept_disabled" in the first server suggest that
you've configured proxy_intercept_errors and "error_page 500" with a
named location with an additional proxy_pass to the same server. And this
is what causes the duplicate requests - because this is exactly what
you've asked nginx to do.

An obvious solution is to remove this error handling or change it to
something different.

--
Maxim Dounin
http://mdounin.ru/

From kworthington at gmail.com Tue Nov 19 16:40:42 2019
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 19 Nov 2019 11:40:42 -0500
Subject: [nginx-announce] nginx-1.17.6
In-Reply-To: <20191119143247.GV12894@mdounin.ru>
References: <20191119143247.GV12894@mdounin.ru>
Message-ID:

Hello Nginx users,

Now available: Nginx 1.17.6 for Windows
https://kevinworthington.com/nginxwin1176 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are
at nginx.org.

Announcements are also available here:
Twitter http://twitter.com/kworthington

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington

On Tue, Nov 19, 2019 at 9:33 AM Maxim Dounin wrote:

> Changes with nginx 1.17.6                                    19 Nov 2019
>
> *) Feature: the $proxy_protocol_server_addr and
>    $proxy_protocol_server_port variables.
>
> *) Feature: the "limit_conn_dry_run" directive.
>
> *) Feature: the $limit_req_status and $limit_conn_status variables.
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rogerdpack2 at gmail.com Tue Nov 19 17:47:01 2019
From: rogerdpack2 at gmail.com (Roger Pack)
Date: Tue, 19 Nov 2019 10:47:01 -0700
Subject: feature request: warn when domain name resolves to several addresses
Message-ID:

I noticed that in ngx_http_proxy_module

    proxy_pass http://localhost:8000/uri/;

"If a domain name resolves to several addresses, all of them will be
used in a round-robin fashion. In addition, an address can be
specified as a server group."

However this can be confusing for end users who innocently put in the
domain name "localhost" and then find that round-robin across ipv6 and
ipv4 is occurring, ref:
https://stackoverflow.com/a/58924751/32453
https://stackoverflow.com/a/52550758/32453

Suggestion/feature request: If a domain name resolves to several
addresses, log a warning in the error.log file somehow, or at least in
the output of -T, to warn somehow. Then there won't be unexpected
round-robins occurring and "supposedly single" servers being considered
unavailable due to timeouts, surprising people like myself.

Thank you for your attention, and for nginx, it's rocking fast! :)
-Roger Pack-

From mdounin at mdounin.ru Tue Nov 19 17:48:32 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 19 Nov 2019 20:48:32 +0300
Subject: How to avoid sending incomplete request data to backend if 499 error
In-Reply-To: <435db6793a7f527f6a06661893c3df40.NginxMailingListEnglish@forum.nginx.org>
References: <435db6793a7f527f6a06661893c3df40.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20191119174832.GZ12894@mdounin.ru>

Hello!
On Tue, Nov 19, 2019 at 10:29:37AM -0500, feanorknd wrote:

> Hello... a few days ago I ran into this problem; let me explain with log
> lines:
>
> X.X.X.X - - [16/Nov/2019:04:36:17 +0100] "POST /api/budgets/new HTTP/2.0"
> 200 2239 "----" "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X)
> AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/86.0.276299193 Mobile/15E148
> Safari/605.1" Exec: "2.190" Conn: "10" Upstream Time: "2.185" Upstream
> Status: "200"
>
> X.X.X.X - - [16/Nov/2019:04:36:55 +0100] "POST /api/budgets/new HTTP/2.0"
> 499 0 ""----"" "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X)
> AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/86.0.276299193 Mobile/15E148
> Safari/605.1" Exec: "0.147" Conn: "1" Upstream Time: "0.142" Upstream
> Status: "-"
>
> In the first line, there is nothing of interest... the POST request was
> completely fine.
>
> In the second request, there was a client disconnection and the POST
> request was not complete, as indicated by the logged 499 error.
>
> The problem was:
> - the incomplete POST data was sent from nginx to the backend fastcgi
>   server somehow.
> - that code processed the incomplete request data and generated a corrupt
>   entry in a certain database... another story.

In no particular order:

- By default, nginx will pass the request to the backend server only when
it's completely received from the client, including the request body.
You can change things by explicitly using "fastcgi_request_buffering
off;", but this shouldn't change things much as long as your backend is
properly written.

- Any backend server is expected to verify that it received a complete
request, as the connection between nginx and the backend can be broken
for unrelated reasons, such as network issues.

- An incomplete request will result in 400, not 499. The 499 error means
that the request was complete, but the client closed the connection
before the response was generated.

Given the above, most likely what you've seen was a connection close by
the client after sending a complete request. For example, this might
have happened while nginx was sending the request body to the backend.
By default, if the client closes the connection, nginx closes the
connection with the backend to let the backend know that no further
request processing is needed. A properly written backend is expected to
handle this correctly, as it has to check request integrity anyway. If
for some reason your backend cannot cope with this, there is the
fastcgi_ignore_client_abort directive to control this behaviour
(http://nginx.org/r/fastcgi_ignore_client_abort).

--
Maxim Dounin
http://mdounin.ru/
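A minimal sketch of the fastcgi_ignore_client_abort escape hatch
mentioned above, with a hypothetical backend address and location:

    location /api/ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        # Keep the backend connection open even if the client disconnects,
        # so the backend never sees an aborted, half-forwarded request.
        fastcgi_ignore_client_abort on;
    }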
From mdounin at mdounin.ru Tue Nov 19 19:01:48 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 19 Nov 2019 22:01:48 +0300
Subject: feature request: warn when domain name resolves to several addresses
In-Reply-To:
References:
Message-ID: <20191119190148.GA12894@mdounin.ru>

Hello!

On Tue, Nov 19, 2019 at 10:47:01AM -0700, Roger Pack wrote:

> I noticed that in ngx_http_proxy_module
>
> proxy_pass http://localhost:8000/uri/;
> "If a domain name resolves to several addresses, all of them will be
> used in a round-robin fashion. In addition, an address can be
> specified as a server group."
>
> However this can be confusing for end users who innocently put in the
> domain name "localhost" and then find that round-robin across ipv6 and
> ipv4 is occurring, ref:
> https://stackoverflow.com/a/58924751/32453

This seems to be your own answer, and it looks incorrect to me. In
particular, the 499 error is logged when the client closes the
connection, and there is no need to have more than one backend server
specified to see 499 errors.

> https://stackoverflow.com/a/52550758/32453

Changing "localhost" to "127.0.0.1" here "works" because having just one
address triggers slightly different logic in the upstream code: with just
one address, the max_fails / fail_timeout logic is disabled, and nginx
always uses the (only) address available, even if there are errors.

The underlying problem is still the same though: backends cannot cope
with the load, and there are errors.

(And no, it's not a DNS failure - DNS is only used when nginx resolves
the name in the proxy_pass directive while parsing the configuration on
startup.)

> Suggestion/feature request: If a domain name resolves to several
> addresses, log a warning in the error.log file somehow, or at least in
> the output of -T, to warn somehow. Then there won't be unexpected
> round-robins occurring and "supposedly single" servers being considered
> unavailable due to timeouts, surprising people like myself.

Multiple addresses are fairly normal, and I don't think that logging a
warning is a good idea.

--
Maxim Dounin
http://mdounin.ru/

From rogerdpack2 at gmail.com Wed Nov 20 02:26:35 2019
From: rogerdpack2 at gmail.com (Roger Pack)
Date: Tue, 19 Nov 2019 19:26:35 -0700
Subject: feature request: warn when domain name resolves to several addresses
In-Reply-To: <20191119190148.GA12894@mdounin.ru>
References:
Message-ID:

On Tue, Nov 19, 2019 at 12:01 PM Maxim Dounin wrote:
>
> Hello!

Hi back again :)

> On Tue, Nov 19, 2019 at 10:47:01AM -0700, Roger Pack wrote:
>
> > I noticed that in ngx_http_proxy_module
> >
> > proxy_pass http://localhost:8000/uri/;
> > "If a domain name resolves to several addresses, all of them will be
> > used in a round-robin fashion. In addition, an address can be
> > specified as a server group."
> >
> > However this can be confusing for end users who innocently put in the
> > domain name "localhost" and then find that round-robin across ipv6 and
> > ipv4 is occurring, ref:
> > https://stackoverflow.com/a/58924751/32453
>
> This seems to be your own answer, and it looks incorrect to me.
> In particular, the 499 error is logged when the client closes the
> connection, and there is no need to have more than one backend
> server specified to see 499 errors.

True, those cases were covered in some other answers to that question,
but I'll add a note. :)
It can also be logged when the backend server times out, at least
empirically that seems to be the case... see also
https://serverfault.com/questions/523340/post-request-is-repeated-with-nginx-loadbalanced-server-status-499/783624#783624

> > https://stackoverflow.com/a/52550758/32453
>
> Changing "localhost" to "127.0.0.1" here "works" because having just
> one address triggers slightly different logic in the upstream
> code: with just one address, the max_fails / fail_timeout logic is
> disabled, and nginx always uses the (only) address available, even
> if there are errors.

Right. The confusion in my mind is that people configuring Nginx will
use one backend, "localhost", and assume they have set it up as a
"single server" type server group, since they have listed only one host.
But they have not... see for instance https://stackoverflow.com/a/52550758

> > The underlying problem is still the same though: backends cannot
> > cope with the load, and there are errors.

Right.
However with the "single server" scenario this behavior is handled
differently (it doesn't exhaust the server group of available servers and
begin to return 502's exclusively for a time, as it did in my
instance...).

Basically if, while setting it up, you happen to forward to 127.0.0.1, it
will work fine, with no "periods of 502's" (though you may get some
504's).

But if you forward it to "localhost" you may be surprised one day to
discover that you are getting "periods of 502's" if any connections time
out (> 60s) for any reason. Two of those, and your entire server group
has been exhausted.

> (And no, it's not a DNS failure - DNS is only used when nginx
> resolves the name in the proxy_pass directive while parsing the
> configuration on startup.)
>
> > Suggestion/feature request: If a domain name resolves to several
> > addresses, log a warning in the error.log file somehow, or at least in
> > the output of -T, to warn somehow. Then there won't be unexpected
> > round-robins occurring and "supposedly single" servers being considered
> > unavailable due to timeouts, surprising people like myself.
>
> Multiple addresses are fairly normal, and I don't think that logging a
> warning is a good idea.

I'm just saying... it might help somebody like me out, in the future.
There be dragons... or maybe the default error log could be configured to
make it more obvious to people what is going on?
(https://stackoverflow.com/a/52550758)

Or possibly the "-T" output could be enhanced to add "this server group
resolves to this many total unique servers" or something. Your call of
course, regardless :)

Thanks for the help and conversations, all the best.

-Roger Pack-

From nginx-forum at forum.nginx.org Wed Nov 20 02:48:23 2019
From: nginx-forum at forum.nginx.org (wu560130911)
Date: Tue, 19 Nov 2019 21:48:23 -0500
Subject: How to control keepalive connections for upstream before the version of 1.15.3
In-Reply-To: <20190225144314.GM1877@mdounin.ru>
References: <20190225144314.GM1877@mdounin.ru>
Message-ID: <11c06c39bc13d9dde3f9e78dd0c18bae.NginxMailingListEnglish@forum.nginx.org>

I'm using an nginx proxy in front of Tomcat. Tomcat has two parameters
that manage keepalive: the first is keepAliveTimeout (how long a
connection may stay idle); the second is maxKeepAliveRequests (how many
times a connection may be reused).

With nginx versions before 1.15.3, nginx can't know when Tomcat will
close the connection, so in production we see 502 errors.

I analyzed the network packets and found two ways the 502 errors occur,
though they have the same root cause.

First reason: Tomcat's keepalive time is reached and it closes the
connection, but nginx doesn't know that; this causes "recv() failed (104:
Connection reset by peer) while reading response header from upstream".

Second reason: Tomcat has sent a TCP FIN packet, but nginx still sends a
request on that connection; this causes "upstream prematurely closed
connection while reading response header from upstream".

I have adjusted the size of keepAliveTimeout, but the problem still
occurs, because we use an nginx version before 1.15.3.

How can I solve this problem?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283137,286282#msg-286282

From mdounin at mdounin.ru Wed Nov 20 12:48:57 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 20 Nov 2019 15:48:57 +0300
Subject: Disable only Hostname verification of proxied HTTPS server certificate
In-Reply-To: <70d6c8d70c2d8b1f17f710562bdf6363.NginxMailingListEnglish@forum.nginx.org>
References: <70d6c8d70c2d8b1f17f710562bdf6363.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20191120124857.GD12894@mdounin.ru>

Hello!

On Thu, Nov 07, 2019 at 10:23:20AM -0500, shivramg94 wrote:

> Is there any way we can configure nginx to only verify the root of the
> proxied HTTPS server (upstream server) certificate and to skip the host
> name (or domain name) verification?
>
> As I understand it, the proxy_ssl_verify directive can be used to
> completely enable/disable the verification of the proxied HTTPS server
> certificate, but not selectively. Is there any directive to only disable
> the host name verification?

No. You can, however, set a particular name to verify, by using the
"proxy_ssl_name" directive. See http://nginx.org/r/proxy_ssl_name for
details.

--
Maxim Dounin
http://mdounin.ru/

From wi2p at hotmail.com Wed Nov 20 14:11:42 2019
From: wi2p at hotmail.com (kev jr)
Date: Wed, 20 Nov 2019 14:11:42 +0000
Subject: NGINX configuration with two backends (without load balancing) and NGINX - MYSQL TLS encryption
Message-ID:

Hi all,

Question 1
Is it possible to have NGINX reverse proxy to multiple MySQL servers
listening on the same port, using different names like you can with http?
We don't want to perform any load balancing operation on them; we just
want to be able to redirect to MySQL instances based on a logical name,
same as with http.

Question 2
When I try to implement TLS encryption between NGINX and the MySQL
database server, I get the following error on my MySQL client:

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error

I have the following configuration: Ubuntu server with the MySQL client
// NGINX (with the configuration below) // MySQL database (with SSL
activated)

stream {
    upstream mysql1 {
        server 172.31.39.168:3306;
    }

    server {
        listen 3306;
        proxy_pass mysql1;
        proxy_ssl on;

        proxy_ssl_certificate /etc/ssl/client-cert.pem;
        proxy_ssl_certificate_key /etc/ssl/client-key.pem;
        #proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        #proxy_ssl_ciphers HIGH:!aNULL:!MD5;
        proxy_ssl_trusted_certificate /etc/ssl/ca-cert.pem;

        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;
        proxy_ssl_session_reuse on;
    }
}

If I comment out the proxy_ssl* parameters on NGINX, the connection works
between the "Ubuntu server (with the MySQL client)" and the "MySQL
database (with SSL activated)" through "NGINX".

Thanks all
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Wed Nov 20 19:28:31 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 20 Nov 2019 22:28:31 +0300
Subject: feature request: warn when domain name resolves to several addresses
In-Reply-To:
References: <20191119190148.GA12894@mdounin.ru>
Message-ID: <20191120192831.GF12894@mdounin.ru>

Hello!
On Tue, Nov 19, 2019 at 07:26:35PM -0700, Roger Pack wrote:

> On Tue, Nov 19, 2019 at 12:01 PM Maxim Dounin wrote:
>
> > On Tue, Nov 19, 2019 at 10:47:01AM -0700, Roger Pack wrote:
> >
> > > I noticed that in ngx_http_proxy_module
> > >
> > > proxy_pass http://localhost:8000/uri/;
> > > "If a domain name resolves to several addresses, all of them will be
> > > used in a round-robin fashion. In addition, an address can be
> > > specified as a server group."
> > >
> > > However this can be confusing for end users who innocently put in the
> > > domain name "localhost" and then find that round-robin across ipv6 and
> > > ipv4 is occurring, ref:
> > > https://stackoverflow.com/a/58924751/32453
> >
> > This seems to be your own answer, and it looks incorrect to me.
> > In particular, the 499 error is logged when the client closes the
> > connection, and there is no need to have more than one backend
> > server specified to see 499 errors.
>
> True, those cases were covered in some other answers to that question,
> but I'll add a note. :)
> It can also be logged when the backend server times out, at least
> empirically that seems to be the case... see also
> https://serverfault.com/questions/523340/post-request-is-repeated-with-nginx-loadbalanced-server-status-499/783624#783624

It is logged when the client closes the connection, only. But the reasons
why the client closes the connection might be different.

In particular, when the backend server times out, it means that
processing the request takes a long time. And if processing takes time,
it is likely that the client will give up waiting and will close the
connection, resulting in 499.

> > > https://stackoverflow.com/a/52550758/32453
> >
> > Changing "localhost" to "127.0.0.1" here "works" because having just
> > one address triggers slightly different logic in the upstream
> > code: with just one address, the max_fails / fail_timeout logic is
> > disabled, and nginx always uses the (only) address available, even
> > if there are errors.
>
> Right. The confusion in my mind is that people configuring Nginx will
> use one backend, "localhost", and assume they have set it up as a
> "single server" type server group, since they have listed only one host.
> But they have not... see for instance https://stackoverflow.com/a/52550758
>
> > The underlying problem is still the same though: backends cannot
> > cope with the load, and there are errors.
>
> Right. However with the "single server" scenario this behavior is
> handled differently (it doesn't exhaust the server group of available
> servers and begin to return 502's exclusively for a time, as it
> did in my instance...).
>
> Basically if, while setting it up, you happen to forward to 127.0.0.1,
> it will work fine, with no "periods of 502's" (though you may get some
> 504's).
>
> But if you forward it to "localhost" you may be surprised one day to
> discover that you are getting "periods of 502's" if any connections time
> out (> 60s) for any reason. Two of those, and your entire server group
> has been exhausted.

I don't think people know and/or expect the difference in handling
between a single address and multiple addresses, regardless of whether
they know there are multiple addresses or not. As such, a
configuration-time warning won't help.

Rather, we can consider explaining the difference. Alternatively, we can
make it go away - either by changing the single-address case to be
identical to the multiple-addresses one, or vice versa. Or even by making
this configurable.
(Actually, the multiple-addresses case was previously handled
differently, closer to the single-address approach, and resulted in just
one 502, with "quick recovery" of all servers on the first request. But
some time ago this was changed to follow fail_timeout instead, as quick
recovery of all servers seems to cause more harm than good in most
configurations.)

> > (And no, it's not a DNS failure - DNS is only used when nginx
> > resolves the name in the proxy_pass directive while parsing the
> > configuration on startup.)
> >
> > > Suggestion/feature request: If a domain name resolves to several
> > > addresses, log a warning in the error.log file somehow, or at least in
> > > the output of -T, to warn somehow. Then there won't be unexpected
> > > round-robins occurring and "supposedly single" servers being considered
> > > unavailable due to timeouts, surprising people like myself.
> >
> > Multiple addresses are fairly normal, and I don't think that
> > logging a warning is a good idea.
>
> I'm just saying... it might help somebody like me out, in the future.
> There be dragons... or maybe the default error log could be configured
> to make it more obvious to people what is going on?
> (https://stackoverflow.com/a/52550758)

From the error log things are expected to be pretty obvious - nginx logs
the original errors, and it also logs when it cannot pick an upstream
server to use ("no live upstreams", which means "all upstream servers are
disabled due to errors"). Further, it also logs when it disables a
server, though that happens at the "warn" level.

The main problem is that people hardly look into error logs at all. For
example, the answer you are referring to only provides access log
information, and this is what makes it confusing. On the other hand,
another answer to the same question is based on the "no live upstreams"
error message from the question, and correctly refers to the
max_fails/fail_timeout parameters.

--
Maxim Dounin
http://mdounin.ru/
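A minimal sketch of the single-address behaviour described in this
exchange, with a hypothetical backend port: one literal address keeps
nginx out of the multi-server failover logic altogether.

    upstream backend {
        # A literal address - "localhost" could expand to both 127.0.0.1
        # and ::1 and silently turn this into a two-server group.
        # max_fails=0 additionally disables the failure accounting, so
        # the server is never marked unavailable and "no live upstreams"
        # 502s cannot occur.
        server 127.0.0.1:8000 max_fails=0;
    }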
Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From rogerdpack2 at gmail.com  Thu Nov 21 20:53:17 2019
From: rogerdpack2 at gmail.com (Roger Pack)
Date: Thu, 21 Nov 2019 13:53:17 -0700
Subject: feature request: warn when domain name resolves to several addresses
In-Reply-To: <20191120192831.GF12894@mdounin.ru>
References: <20191119190148.GA12894@mdounin.ru> <20191120192831.GF12894@mdounin.ru>
Message-ID:

On Wed, Nov 20, 2019 at 12:28 PM Maxim Dounin wrote:
>
> Hello!
>
> [...]
>
> It is logged when the client closes the connection, only. But the
> reasons why the client closes the connection might be different.
>
> In particular, when the backend server times out, it means that
> processing the request takes a long time. And if processing takes
> time, it is likely that the client will give up waiting and will
> close the connection, resulting in 499.

OK, you're right, thank you for the hint: it turns out our client had a
60s timeout, so basically we'd see a "connection timed out" error log
entry and a "499 response" in quick succession and thought they were
related. Thank you, that helped me figure out what was going on with my
system!

> [...]
>
> > Right. However, with the "single server" scenario this behavior is
> > handled differently (it doesn't exhaust the server group of
> > available servers and begin to return 502's exclusively for a time,
> > as it did in my instance...).
> > [...]
>
> I don't think people know and/or expect the difference in handling
> between a single address and multiple addresses, regardless of
> whether they know there are multiple addresses or not. As such,
> a configuration-time warning won't help.
>
> Rather, we can consider explaining the difference. Alternatively,
> we can make it go away - either by changing the single-address case
> to be identical to the multiple-addresses one, or vice versa. Or even
> by making this configurable.
>
> (Actually, previously the multiple-addresses case was handled
> differently, closer to the single-address approach, and resulted
> in just one 502, with "quick recovery" of all servers on the first
> request. But some time ago this was changed to follow
> fail_timeout instead, as quick recovery of all servers seems to
> cause more harm than good in most configurations.)

Yeah, it might make sense to make the behavior similar. Maybe never
disable the last server (of a server group) still marked as available,
or enforce the 10s fail_timeout for a single server too (if it was
useful for multiple... then again, maybe single is supposed to be a
simpler configuration?).

Or maybe add a warning to the documentation near where it says "If a
domain name resolves to several addresses, all of them will be used in a
round-robin fashion", along the lines of: "If you specify a hostname
like 'localhost' and your system supports both IPv4 and IPv6, the
hostname can be interpreted to mean two different servers. Specify an
exact IP address, like '127.0.0.1', if you wish to avoid this
ambiguity" (or something like that).

Also, the documentation for max_fails, fail_timeout and slow_start could
perhaps note that they are ignored in the case of a single server.
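For instance (a sketch only; the addresses are hypothetical and the
defaults are the ones documented for ngx_http_upstream_module), the docs
could show what a dual-stack "localhost" implicitly expands to:

    upstream app {
        # "proxy_pass http://localhost:8000;" behaves like this group;
        # each server defaults to max_fails=1 fail_timeout=10s
        server 127.0.0.1:8000;
        server [::1]:8000;

        # alternatively, max_fails=0 disables the accounting of
        # attempts for a server, even in a multi-address group:
        # server 127.0.0.1:8000 max_fails=0;
    }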
Might be nice to log that at the error level, or possibly to add it to
the "upstream timed out" error message, like "upstream timed out,
marking server as unavailable", or something like that (if easier :).

A few more thoughts/ideas for that error message: maybe it could be
enhanced a bit, e.g. "upstream timed out after x seconds" plus "trying
next server" (or "giving up") depending on what it does next. Just for
quicker understanding of which decisions are being made (and which
configs are being respected).

> The main problem is that people hardly look into error logs at
> all. For example, the answer you are referring to only provides
> access log information, and this is what makes it confusing. On
> the other hand, another answer to the same question is based on
> the "no live upstreams" error message from the question, and
> correctly refers to the max_fails/fail_timeout parameters.

I looked at the error logs when problems started happening (502's), so
the error logs are useful! :) My answer references the error log (or at
least does now, with some recent changes):
https://stackoverflow.com/a/58924751/32453
Some others don't :)

Thanks for your thoughtful replies. Cheers!
-Roger-

From rogerdpack2 at gmail.com  Thu Nov 21 21:09:13 2019
From: rogerdpack2 at gmail.com (Roger Pack)
Date: Thu, 21 Nov 2019 14:09:13 -0700
Subject: NGINX configuration with two backends (without load balancing) and NGINX - MYSQL TLS encryption
In-Reply-To:
References:
Message-ID:

Since MySQL connections don't have HTTP headers to look up the correct
backend server group with, I doubt it can do #1, FWIW... I actually know
little about nginx though. Maybe it could work if you have it listen on
various different ports (or on different incoming IP addresses?).

For #2, maybe you need "listen ... ssl"?
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
but I have no expertise, good luck!

On Wed, Nov 20, 2019 at 7:11 AM kev jr wrote:
>
> Hi all,
>
> Question 1
> Is it possible to have NGINX reverse proxy to multiple MySQL servers
> listening on the same port, using different names like you can with
> http? We don't want to perform any load balancing operation on them,
> we just want to be able to redirect to MySQL instances based on a
> logical name, same as with http.
>
> Question 2
> When I try to implement TLS encryption between NGINX and the MySQL
> database server, I get the following error on my MySQL client:
> ERROR 2013 (HY000): Lost connection to MySQL server at 'reading
> initial communication packet', system error
>
> I have the following configuration: an Ubuntu server with the MySQL
> client // NGINX (with the configuration below) // a MySQL database
> (with SSL activated)
>
> stream {
>     upstream mysql1 {
>         server 172.31.39.168:3306;
>     }
>     server {
>         listen 3306;
>         proxy_pass mysql1;
>         proxy_ssl on;
>         proxy_ssl_certificate /etc/ssl/client-cert.pem;
>         proxy_ssl_certificate_key /etc/ssl/client-key.pem;
>         #proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
>         #proxy_ssl_ciphers HIGH:!aNULL:!MD5;
>         proxy_ssl_trusted_certificate /etc/ssl/ca-cert.pem;
>         proxy_ssl_verify on;
>         proxy_ssl_verify_depth 2;
>         proxy_ssl_session_reuse on;
>     }
> }
>
> If I comment out the proxy_ssl* parameters in NGINX, the connection
> works between the "Ubuntu server (with the MySQL client)" and the
> "MySQL database (with SSL activated)" through "NGINX".
>
> Thanks all

From edigarov at qarea.com  Mon Nov 25 10:24:18 2019
From: edigarov at qarea.com (Gregory Edigarov)
Date: Mon, 25 Nov 2019 12:24:18 +0200
Subject: two identical keycloak servers + nginx as reverse proxy
Message-ID:

Hello,

Can somebody enlighten me please?

i have two identical keycloak servers running in HA mode via DNS
discovery: keycloak1.my.domain & keycloak2.my.domain
the dns discovery record is: keycloak.my.domain
this part is working, no questions.

now i am trying to add nginx to the picture:

upstream signin {
    server 172.19.24.13:8080;
    server 172.19.24.16:8080;
}

server {
    listen 443;
    ignore_invalid_headers off;
    ssl on;
    ssl_certificate /etc/ssl/my.domain.crt;
    ssl_certificate_key /etc/ssl/my.domain.key;
    server_name signin.my.domain;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass          http://signin;
        proxy_redirect      off;
        proxy_set_header    Host               $host;
        proxy_set_header    X-Real-IP          $remote_addr;
        proxy_set_header    X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Host   $host;
        proxy_set_header    X-Forwarded-Server $host;
        proxy_set_header    X-Forwarded-Port   $server_port;
        proxy_set_header    X-Forwarded-Proto  $scheme;
    }

every request to https://signin.my.domain results in error 500, and in
the logs i see:

rewrite or internal redirection cycle while internally redirecting to
"////////////"

i know the keycloak part works: i can go to keycloak.my.domain in my
browser, no problem.

From francis at daoine.org  Tue Nov 26 09:31:56 2019
From: francis at daoine.org (Francis Daly)
Date: Tue, 26 Nov 2019 09:31:56 +0000
Subject: two identical keycloak servers + nginx as reverse proxy
In-Reply-To:
References:
Message-ID: <20191126093156.GD26683@daoine.org>

On Mon, Nov 25, 2019 at 12:24:18PM +0200, Gregory Edigarov wrote:

Hi there,

>     location / {
>         proxy_pass          http://signin;
>         proxy_redirect      off;
>         proxy_set_header    Host               $host;

> every request to https://signin.my.domain results in error 500, and
> in the logs i see:
>
> rewrite or internal redirection cycle while internally redirecting to
> "////////////"

I think that the config snippet that you show does not lead to the error
log that you show. Is there some other config in place?

If you add the line

    return 200 "Inside location /, request $uri\n";

before the proxy_pass, and make the same request, what response do you
get?

> i know the keycloak part works: i can go to keycloak.my.domain in my
> browser, no problem.

You report that you can go to keycloak.my.domain in your browser and
things work. Your config asks nginx to go to http://172.19.24.13:8080
using the hostname signin.my.domain. That is not the same as
keycloak.my.domain. Possibly that difference is a reason for things not
working?
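As an untested sketch of that idea (the hostname is just the one from
your message): if the keycloak servers only answer correctly under their
own name, sending them that name instead of the incoming one may behave
differently:

    location / {
        proxy_pass       http://signin;
        # present the name the backends are known under, rather
        # than $host (= signin.my.domain)
        proxy_set_header Host keycloak.my.domain;
    }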
Cheers,

f
-- 
Francis Daly        francis at daoine.org

From lxlenovostar at gmail.com  Tue Nov 26 11:24:00 2019
From: lxlenovostar at gmail.com (lx)
Date: Tue, 26 Nov 2019 19:24:00 +0800
Subject: upstream_response_length and upstream_addr can't work
Message-ID:

hi all:
    When I use the slice module, $upstream_response_length and
$upstream_addr don't work as I expect.

nginx.conf:
#########################################################################
include mime.types;
default_type application/octet-stream;

log_format main '$status^$scheme^$request^$body_bytes_sent^$request_time^$upstream_cache_status^$remote_addr^$http_referer^$http_user_agent^$content_type^$http_range^$cookie_name^$upstream_addr^$upstream_response_time^$upstream_bytes_received^$upstream_response_length^[$time_local]';

access_log logs/access.log main;
rewrite_log on;

sendfile on;
aio threads;

keepalive_timeout 65;

if ($uri ~ ^/([a-zA-Z0-9\.]+)/([a-zA-Z0-9\.]+)/(.*)) {
    set $cdn $1;
    set $new_host $2;
    set $new_uri $3;
}

location / {
    slice 1m;
    proxy_cache_lock on;
    proxy_cache my_cache;
    proxy_cache_key $uri$is_args$args$slice_range;
    proxy_set_header Range $slice_range;
    proxy_cache_valid 200 206 24h;
    proxy_pass http://$cdn/$new_uri;
}
#########################################################################

I initiate a range HTTP request, for example:
#########################################################################
curl -o result -H 'Range: bytes=2001-4932000' "http://127.0.0.1:64002/A.com/B.com/appstore/developer/soft/20191008/201910081449521157660.patch"
#########################################################################

$upstream_response_length and $upstream_bytes_received are just 1 MB,
not 4.9 MB. With tcpdump I see nginx make 5 HTTP requests to A.com, and
that nginx implements slice via subrequests.

Why is this? How can it be fixed?

Thank you

From arut at nginx.com  Tue Nov 26 13:10:24 2019
From: arut at nginx.com (Roman Arutyunyan)
Date: Tue, 26 Nov 2019 16:10:24 +0300
Subject: upstream_response_length and upstream_addr can't work
In-Reply-To:
References:
Message-ID: <20191126131024.xyys74uxbrhcff4y@Romans-MacBook-Pro.local>

Hi,

On Tue, Nov 26, 2019 at 07:24:00PM +0800, lx wrote:
> hi all:
>     When I use the slice module, $upstream_response_length and
> $upstream_addr don't work as I expect.
> nginx.conf:
> [...]
>
> location / {
>     slice 1m;
>     proxy_cache_lock on;
>     proxy_cache my_cache;
>     proxy_cache_key $uri$is_args$args$slice_range;
>     proxy_set_header Range $slice_range;
>     proxy_cache_valid 200 206 24h;
>     proxy_pass http://$cdn/$new_uri;
> }
> [...]
>
> $upstream_response_length and $upstream_bytes_received are just 1 MB,
> not 4.9 MB. With tcpdump I see nginx make 5 HTTP requests to A.com,
> and that nginx implements slice via subrequests.
>
> Why is this? How can it be fixed?

Yes. When using the slice module, the response is served by multiple
subrequests. Each subrequest serves its own part: it has a separate
cache key, fetches a separate cache entry and contacts the upstream
server using a separate connection. When you use an $upstream_XXX
variable, it returns data from a single subrequest.

If you want combined numbers, use client-side variables like $bytes_sent
instead.

-- 
Roman Arutyunyan

From nginx-forum at forum.nginx.org  Tue Nov 26 21:10:28 2019
From: nginx-forum at forum.nginx.org (ptcell)
Date: Tue, 26 Nov 2019 16:10:28 -0500
Subject: Are modules built with --with-compat compatible across minor versions of NGINX?
Message-ID:

If I build a dynamic module against, say, nginx 1.12.2 with
`--with-compat`, will it work with, say, nginx 1.12.1 (assuming
--with-compat all around)?

I assume not, because I found this in ngx_module.c, separate from the
signature check. nginx_version has the minor version in it.

    if (module->version != nginx_version) {
        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                           "module \"%V\" version %ui instead of %ui",
                           file, module->version,
                           (ngx_uint_t) nginx_version);
        return NGX_ERROR;
    }

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286335,286335#msg-286335

From lxlenovostar at gmail.com  Wed Nov 27 01:14:09 2019
From: lxlenovostar at gmail.com (lx)
Date: Wed, 27 Nov 2019 09:14:09 +0800
Subject: upstream_response_length and upstream_addr can't work
In-Reply-To: <20191126131024.xyys74uxbrhcff4y@Romans-MacBook-Pro.local>
References: <20191126131024.xyys74uxbrhcff4y@Romans-MacBook-Pro.local>
Message-ID:

Hi:
    When we use the slice module, the number of bytes received from the
upstream server is more than the number of bytes sent to the client, so
I want to get the number of bytes from the upstream server. How can I
get it?

Thank you

Roman Arutyunyan wrote on Tue, Nov 26, 2019 at 9:10 PM:
> Hi,
>
> On Tue, Nov 26, 2019 at 07:24:00PM +0800, lx wrote:
> > hi all:
> >     When I use the slice module, $upstream_response_length and
> > $upstream_addr don't work as I expect.
> > [...]
>
> Yes. When using the slice module, the response is served by multiple
> subrequests. Each subrequest serves its own part: it has a separate
> cache key, fetches a separate cache entry and contacts the upstream
> server using a separate connection. When you use an $upstream_XXX
> variable, it returns data from a single subrequest.
>
> If you want combined numbers, use client-side variables like
> $bytes_sent instead.
>
> -- 
> Roman Arutyunyan

From arut at nginx.com  Wed Nov 27 10:20:47 2019
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 27 Nov 2019 13:20:47 +0300
Subject: upstream_response_length and upstream_addr can't work
In-Reply-To:
References: <20191126131024.xyys74uxbrhcff4y@Romans-MacBook-Pro.local>
Message-ID: <20191127102047.aaziub2zcuubvifr@Romans-MacBook-Pro.local>

Hi,

On Wed, Nov 27, 2019 at 09:14:09AM +0800, lx wrote:
> Hi:
>     When we use the slice module, the number of bytes received from
> the upstream server is more than the number of bytes sent to the
> client, so I want to get the number of bytes from the upstream server.
> How can I get it?

We don't have a variable that returns the sum of all upstream response
sizes from all subrequests.

Also, there can be multiple upstream servers, and each slice can be
fetched from a different one.

> Thank you
>
> Roman Arutyunyan wrote on Tue, Nov 26, 2019 at 9:10 PM:
> > [...]

-- 
Roman Arutyunyan

From mdounin at mdounin.ru  Wed Nov 27 12:03:00 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 27 Nov 2019 15:03:00 +0300
Subject: Are modules built with --with-compat compatible across minor versions of NGINX?
In-Reply-To:
References:
Message-ID: <20191127120300.GM12894@mdounin.ru>

Hello!

On Tue, Nov 26, 2019 at 04:10:28PM -0500, ptcell wrote:

> If I build a dynamic module against, say, nginx 1.12.2 with
> `--with-compat`, will it work with, say, nginx 1.12.1 (assuming
> --with-compat all around)?
>
> I assume not, because I found this in ngx_module.c, separate from the
> signature check. nginx_version has the minor version in it.
Your assumption is correct. ABI stability is not guaranteed across
version changes, even minor ones, hence modules built for different
versions are not compatible. The version is checked to explicitly
prevent loading of incompatible modules.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Wed Nov 27 16:45:37 2019
From: nginx-forum at forum.nginx.org (gagandeep)
Date: Wed, 27 Nov 2019 11:45:37 -0500
Subject: Add new Directive to existing nginx module
Message-ID: <0ae14458dfa4e78635c3d3eeff1f2c02.NginxMailingListEnglish@forum.nginx.org>

Is it possible to add a new directive to an existing nginx module?
While trying to add one I am getting the error "directive is duplicate
in "

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286351,286351#msg-286351

From lxlenovostar at gmail.com  Thu Nov 28 10:39:53 2019
From: lxlenovostar at gmail.com (lx)
Date: Thu, 28 Nov 2019 18:39:53 +0800
Subject: upstream_response_length and upstream_addr can't work
In-Reply-To: <20191127102047.aaziub2zcuubvifr@Romans-MacBook-Pro.local>
References: <20191126131024.xyys74uxbrhcff4y@Romans-MacBook-Pro.local> <20191127102047.aaziub2zcuubvifr@Romans-MacBook-Pro.local>
Message-ID:

Hi:
    If I want to implement a variable that returns the sum of all
upstream response sizes from all subrequests, is that feasible?

Thank you

Roman Arutyunyan wrote on Wed, Nov 27, 2019 at 6:20 PM:

> Hi,
>
> On Wed, Nov 27, 2019 at 09:14:09AM +0800, lx wrote:
> > Hi:
> >     When we use the slice module, the number of bytes received from
> > the upstream server is more than the number of bytes sent to the
> > client, so I want to get the number of bytes from the upstream
> > server. How can I get it?
>
> We don't have a variable that returns the sum of all upstream response
> sizes from all subrequests.
>
> Also, there can be multiple upstream servers, and each slice can be
> fetched from a different one.
>
> > Thank you
> >
> > Roman Arutyunyan wrote on Tue, Nov 26, 2019 at 9:10 PM:
> >
> > > Hi,
> > >
> > > On Tue, Nov 26, 2019 at 07:24:00PM +0800, lx wrote:
> > > > hi all:
> > > >     When I use the slice module, $upstream_response_length and
> > > > $upstream_addr don't work as I expect.
> > > > [...]

From nginx-forum at forum.nginx.org  Thu Nov 28 20:33:01 2019
From: nginx-forum at forum.nginx.org (yoav.cohen)
Date: Thu, 28 Nov 2019 15:33:01 -0500
Subject: Offload TCP traffic to another process
Message-ID:

Dear experts,

We are evaluating nginx as a platform for the product of our new startup
company.

Our use-case requires a TCP proxy that will terminate TLS, which nginx
handles very well.
However, we need to be able to send all TCP traffic to another process
for offline processing.

Initially we thought we could write an NGX_STREAM_MODULE (call it
tcp_mirror) that would be able to read both the downstream bytes
(client <--> nginx) and the upstream bytes (nginx <--> server) and send
them to another process, but after looking at a few module examples and
trying out a few things we understood that we can only use a single
content handler for each stream configuration.

For example, we were hoping the following mock configuration would work
for us, but realized we can't have both proxy_pass and tcp_mirror under
server because there can be only one content handler:

stream {
    server {
        listen 12346;
        proxy_pass backend.example.com:12346;
        tcp_mirror processor.acme.com:6666;
    }
}

The above led us to the conclusion that in order to implement our
use-case we would have to write a new proxy_pass module; more
specifically, we would have to rewrite ngx_stream_proxy_module.c. The
idea is that we would manage two upstreams, the server and the
processor. The configuration would look something like this:

stream {
    server {
        listen 12346;
        proxy_pass_mirror backend.example.com:12346 processor.acme.com:6666;
    }
}

Before we begin implementation of this design, we wanted to consult with
the experts here and understand whether anyone has a better idea on how
to implement our use-case on top of nginx.

Thanks in advance,
Yoav Cohen.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286360,286360#msg-286360

From marcin.wanat at gmail.com  Thu Nov 28 22:38:24 2019
From: marcin.wanat at gmail.com (Marcin Wanat)
Date: Thu, 28 Nov 2019 23:38:24 +0100
Subject: Offload TCP traffic to another process
In-Reply-To:
References:
Message-ID:

On Thu, Nov 28, 2019 at 9:33 PM yoav.cohen wrote:

> Dear experts,
>
> [...]
>
> Before we begin implementation of this design, we wanted to consult
> with the experts here and understand whether anyone has a better idea
> on how to implement our use-case on top of nginx.
>
> Thanks in advance,
> Yoav Cohen.
>
> Have you tried http://nginx.org/en/docs/http/ngx_http_mirror_module.html ?

Regards,
Marcin Wanat

From 201904-nginx at jslf.app  Fri Nov 29 00:26:06 2019
From: 201904-nginx at jslf.app (Patrick)
Date: Fri, 29 Nov 2019 08:26:06 +0800
Subject: Offload TCP traffic to another process
In-Reply-To:
References:
Message-ID: <20191129002606.GA6403@haller.ws>

On 2019-11-28 15:33, yoav.cohen wrote:
> However, we need to be able to send all TCP traffic to another process
> for offline processing.

This can probably be done using the packet management features of the
OS, e.g. with the netfilter/iptables `TEE' target on Linux:

http://ipset.netfilter.org/iptables-extensions.man.html#lbDU

or ipf `dup-to' on FreeBSD:

https://www.freebsd.org/cgi/man.cgi?query=ipf&sektion=5&apropos=0&manpath=FreeBSD+12.1-RELEASE+and+Ports

Mirroring the inside interfaces will yield the un-TLS'd traffic.

Patrick

From nginx-forum at forum.nginx.org  Fri Nov 29 07:26:30 2019
From: nginx-forum at forum.nginx.org (alon.ludmer)
Date: Fri, 29 Nov 2019 02:26:30 -0500
Subject: Offload TCP traffic to another process
In-Reply-To:
References:
Message-ID: <74ba289a7e286b0187dcb3edf3c21bb9.NginxMailingListEnglish@forum.nginx.org>

Hello experts,

Thanks for the quick response! My name is Alon and I am working with
Yoav in the new startup company. I would like to clarify a few things
about our use-case, in order to give you the information you need to
help us do the right thing with nginx.

1. The application layer could be any protocol over TCP.
2. We need to do TLS termination in both directions, downstream and
upstream.
3. The mirroring is not of raw packets; it should be done on the
decrypted TCP content after TLS termination (in both directions).

So we thought of writing a new stream module which works alongside the
proxy_pass stream command. The new module would register a handler in
the stream content phase and copy the TCP content traffic to another
process for offline analysis. As Yoav mentioned, it seems there is only
one handler in the content phase (which is already taken by the stream
proxy_pass).

Do we need to rewrite ngx_stream_proxy_module for such mirror
capabilities? Is there a better way to implement the use-case with
nginx?

Thanks,
Alon

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286360,286364#msg-286364