From hlee at vendasta.com  Mon Jan  1 06:07:04 2018
From: hlee at vendasta.com (House Lee)
Date: Mon, 1 Jan 2018 00:07:04 -0600
Subject: Dynamic nginx configuration without reloading
Message-ID: <20FF2626-B4DF-4BC5-97BA-BA42C44B81D9@vendasta.com>

Hi All,

We are currently building a shared hosting platform for some of our clients. We are using nginx as the web server running on the nodes.

Everything works fine. However, we discovered that once we host more than a certain number of sites (typically around 300 conf files), reloading the nginx configuration becomes extremely slow.

Is there a way to configure nginx dynamically (e.g. adding/removing server blocks) without reloading it?

Thanks & Best Regards

House

From mdounin at mdounin.ru  Mon Jan  1 20:34:49 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Jan 2018 23:34:49 +0300
Subject: Error response body not sent if upload is incomplete
In-Reply-To:
References: <2796511.iUK7Q0K1Mb@vbart-laptop>
Message-ID: <20180101203449.GN34136@mdounin.ru>

Hello!

On Sat, Dec 30, 2017 at 06:04:26PM -0500, naktinis wrote:

> Find the server logs below. It does seem to match what you've quoted.
>
> Do you think the upstream server (uwsgi) is the one not returning the body?
>
> I was able to fix this by consuming the request body in my application
> before returning the response. However, I'm still wondering how nginx is
> supposed to behave in such situations.
>
> Log for the request with the larger file:
> api-server_1 | [pid: 21|app: 0|req: 1/1] 172.19.0.1 () {38 vars in 601 bytes} [Sat Dec 30 11:14:46 2017] POST / => generated 54 bytes in 2 msecs (HTTP/1.1 403) 2 headers in 78 bytes (1 switches on core 0)
> api-server_1 | 2017/12/30 11:14:46 [error] 12#12: *1 readv() failed (104: Connection reset by peer) while reading upstream, client: 172.19.0.1, server: , request: "POST / HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "0.0.0.0:5000"
> api-server_1 | 172.19.0.1 - - [30/Dec/2017:11:14:46 +0000] "POST / HTTP/1.1" 403 25 "-" "curl/7.55.1" "-"
>
> With the smaller file:
> api-server_1 | 172.19.0.1 - - [30/Dec/2017:11:15:41 +0000] "POST / HTTP/1.1" 403 79 "-" "curl/7.55.1" "-"
> api-server_1 | [pid: 20|app: 0|req: 3/4] 172.19.0.1 () {38 vars in 595 bytes} [Sat Dec 30 11:15:41 2017] POST / => generated 54 bytes in 1 msecs (HTTP/1.1 403) 2 headers in 78 bytes (1 switches on core 0)

The "readv() failed (104: Connection reset by peer)" error indicates a backend problem which makes it impossible to reliably receive the body of the response. To make it possible for nginx to receive the body, the backend must either read the whole request body, or implement proper connection teardown (in nginx, this is called lingering_close, see http://nginx.org/r/lingering_close).

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru  Mon Jan  1 20:58:03 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Jan 2018 23:58:03 +0300
Subject: Dynamic nginx configuration without reloading
In-Reply-To: <20FF2626-B4DF-4BC5-97BA-BA42C44B81D9@vendasta.com>
References: <20FF2626-B4DF-4BC5-97BA-BA42C44B81D9@vendasta.com>
Message-ID: <20180101205803.GP34136@mdounin.ru>

Hello!

On Mon, Jan 01, 2018 at 12:07:04AM -0600, House Lee wrote:

> We are currently building a shared hosting platform for some of
> our clients. We are using nginx as the web server running on
> the nodes.
>
> Everything works fine.
> However, we discovered that once we host more than a certain
> number of sites (typically around 300 conf files), reloading the
> nginx configuration becomes extremely slow.
>
> Is there a way to configure nginx dynamically (e.g.
> adding/removing server blocks) without reloading it?

Short answer: No.

Long answer:

There are ways to do limited reconfiguration without reloading, notably the API module[1], available as part of our commercial subscription. It won't, however, let you add or remove server{} blocks. That is, to add or remove server{} blocks you have to do a configuration reload.

Note, though, that in many cases you can avoid adding or removing server{} blocks altogether: multiple identical or slightly different sites can be handled by a single server{} block with multiple server names.

Note well that 300 looks way too low to make reloads "extremely slow", so you may want to check what actually makes them so slow. In particular, make sure that:

- you have enough memory, as a reload creates a new set of worker processes and this more or less doubles the memory used by nginx;

- you don't have problems with DNS, and/or no names are used in the configuration. Slow DNS can easily make loading a configuration with multiple names extremely slow, due to blocking DNS lookups during configuration parsing.

[1] http://nginx.org/en/docs/http/ngx_http_api_module.html

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Tue Jan 2 06:40:20 2018
From: nginx-forum at forum.nginx.org (Kurogane)
Date: Tue, 02 Jan 2018 01:40:20 -0500
Subject: Multiple https website with IPv6
Message-ID: <9feeacf930965dda117924b39a2619ec.NginxMailingListEnglish@forum.nginx.org>

I am using nginx to serve multiple https sites with a single shared IPv4 address and a dedicated IPv6 address for each domain. The problem I'm having is that I'm unable to redirect non-www to www without conflicting with the vhosts.
Here is my setup.

[b]Default[/b]
[code]
server {
    listen 80 default_server;
    listen [::2]:80 default_server;
    server_name localhost;
}
[/code]

[b]domain[/b]
[code]
server {
    listen 80;
    listen [::2]:80;
    server_name domain.com www.domain.com;
    return 301 https://www.domain.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::2]:443 ssl http2;
    server_name domain.com;
    return 301 https://www.$server_name$request_uri;
}

server {
    listen 443 default_server ssl http2;
    listen [::2]:443 default_server ssl http2;
    server_name www.domain.com;
}
[/code]

[b]domain 2[/b]
[code]
server {
    listen 80;
    listen [::3]:80;
    server_name domain2.com www.domain2.com;
    return 301 https://www.domain2.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::3]:443 ssl http2;
    server_name domain2.com;
    return 301 https://www.$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::3]:443 default_server ssl http2;
    server_name www.domain2.com;
}
[/code]

So here's the problem:

IPv4
https://www.domain.com OK
https://domain.com OK
http://www.domain.com OK
http://domain.com OK
https://www.domain2.com OK
https://domain2.com FAIL (NET::ERR_CERT_COMMON_NAME_INVALID - domain.com)
http://www.domain2.com OK
http://domain2.com OK

IPv6
https://www.domain.com OK
https://domain.com OK
http://www.domain.com OK
http://domain.com OK
https://www.domain2.com OK
https://domain2.com OK
http://www.domain2.com OK
http://domain2.com OK

On IPv4, for https://domain2.com the certificate of domain.com is served. What's wrong with my config? If it works on IPv6, why not on IPv4, when it is the same config block?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277962,277962#msg-277962 From francis at daoine.org Tue Jan 2 11:01:59 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 2 Jan 2018 11:01:59 +0000 Subject: Multiple https website with IPv6 In-Reply-To: <9feeacf930965dda117924b39a2619ec.NginxMailingListEnglish@forum.nginx.org> References: <9feeacf930965dda117924b39a2619ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180102110159.GF3127@daoine.org> On Tue, Jan 02, 2018 at 01:40:20AM -0500, Kurogane wrote: Hi there, > I am using nginx with multiples https with a single IPv4 and dedicated IPv6 > for each domain. Looking at your (edited) config... > server { > listen 443 ssl http2; > server_name domain.com; > return 301 https://www.$server_name$request_uri; > } > > server { > listen 443 default_server ssl http2; > server_name www.domain.com; > } > server { > listen 443 ssl http2; > server_name domain2.com; > return 301 https://www.$server_name$request_uri; > } > > server { > listen 443 ssl http2; > server_name www.domain2.com; > } It looks to me like your question is "how do I run multiple https web sites on a single IP address?". If that is the case, then the modern answer is "use SNI". http://nginx.org/en/docs/http/configuring_https_servers.html > What's wrong with my config? If work on IPv6 why not in IPv4 is in same > config block? You have a dedicated IPv6 address. You have a shared IPv4 address. It is not "IPv6 works, IPv4 fails"; it is "dedicated works, shared fails". 
f

-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Tue Jan 2 16:27:07 2018
From: nginx-forum at forum.nginx.org (Kurogane)
Date: Tue, 02 Jan 2018 11:27:07 -0500
Subject: how do I run multiple https web sites on a single IP address
In-Reply-To: <20180102110159.GF3127@daoine.org>
References: <20180102110159.GF3127@daoine.org>
Message-ID: <0b12412b3121788fe794193871b30175.NginxMailingListEnglish@forum.nginx.org>

>It looks to me like your question is "how do I run multiple https web sites on a single IP address?".

>If that is the case, then the modern answer is "use SNI".

>http://nginx.org/en/docs/http/configuring_https_servers.html

I'm not sure what your point is here? nginx has had SNI support built in for a decade; even CentOS ships an updated nginx version.

If my nginx did not have SNI support enabled, then why does it work with www?

Can you enlighten me on what I am doing wrong, or what the "special" configuration is to use SNI with a shared IPv4 address.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277962,277965#msg-277965

From vbart at nginx.com  Tue Jan 2 16:34:37 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 02 Jan 2018 19:34:37 +0300
Subject: how do I run multiple https web sites on a single IP address
In-Reply-To: <0b12412b3121788fe794193871b30175.NginxMailingListEnglish@forum.nginx.org>
References: <20180102110159.GF3127@daoine.org> <0b12412b3121788fe794193871b30175.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <12419384.tgz52WnpR8@vbart-laptop>

On Tuesday, 2 January 2018 19:27:07 MSK Kurogane wrote:

> >It looks to me like your question is "how do I run multiple https web sites
> on a single IP address?".
>
> >If that is the case, then the modern answer is "use SNI".
>
> >http://nginx.org/en/docs/http/configuring_https_servers.html
>
> I'm not sure what your point is here? nginx has had SNI support built in
> for a decade; even CentOS ships an updated nginx version.
>
> If my nginx did not have SNI support enabled, then why does it work with www?
>
> Can you enlighten me on what I am doing wrong, or what the "special"
> configuration is to use SNI with a shared IPv4 address.

[..]

Are you sure that the tool you're using to check supports SNI?

wbr, Valentin V. Bartenev

From agus.262 at gmail.com  Tue Jan 2 17:42:53 2018
From: agus.262 at gmail.com (Agus)
Date: Tue, 2 Jan 2018 14:42:53 -0300
Subject: Too many workers conenction on udp stream
Message-ID:

Hi guys,

I have nginx proxying UDP streams to another proxy, which handles the connection to the backend.

The same proxy proxying the UDP streams to a different upstream proxy works OK. But when it proxies them to this other one, it fills up with the worker-connections error. I turned on debugging, and what I see is that nginx isn't releasing the UDP connections... I could use a hand, as I can't get it to work.

In the first proxy I have:

server {
    listen *:8330 udp;
    proxy_responses 1;
    proxy_pass second-proxy:8330;
    error_log /var/log/nginx/8330udp.log debug;
}

In the second, which is the main one that receives from various proxies:

server {
    listen *:8330 udp;
    proxy_responses 1;
    proxy_pass server:8302;
    error_log /var/log/nginx/udp8330.log debug;
}

This same config in another "third" proxy, for a different set of backends, works OK.
The main proxy for the working requests logs is like this, it is ending the connections: 2018/01/02 17:08:01 [debug] 6158#6158: *13 recv: fd:70 183 of 16384 2018/01/02 17:08:01 [debug] 6158#6158: *13 write new buf t:1 f:0 00000000012A6D00, pos 00000000012A7270, size: 183 file: 0, size: 0 2018/01/02 17:08:01 [debug] 6158#6158: *13 stream write filter: l:1 f:1 s:183 2018/01/02 17:08:01 [debug] 6158#6158: *13 sendmsg: 183 of 183 2018/01/02 17:08:01 [debug] 6158#6158: *13 stream write filter 0000000000000000 2018/01/02 17:08:01 [info] 6158#6158: *13 udp upstream disconnected, bytes from/to client:122/183, bytes from/to upstream:183/122 2018/01/02 17:08:01 [debug] 6158#6158: *13 finalize stream proxy: 200 2018/01/02 17:08:01 [debug] 6158#6158: *13 free rr peer 1 0 2018/01/02 17:08:01 [debug] 6158#6158: *13 close stream proxy upstream connection: 70 2018/01/02 17:08:01 [debug] 6158#6158: *13 reusable connection: 0 2018/01/02 17:08:01 [debug] 6158#6158: *13 finalize stream session: 200 2018/01/02 17:08:01 [debug] 6158#6158: *13 stream log handler 2018/01/02 17:08:01 [debug] 6158#6158: *13 close stream connection: 41 2018/01/02 17:08:01 [debug] 6158#6158: *13 event timer del: 41: 1514913481260 2018/01/02 17:08:01 [debug] 6158#6158: *13 reusable connection: 0 2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A7270 2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A70D0 2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 0000000001199550, unused: 0 2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A6C90, unused: 0 2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A6DA0, unused: 0 2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A6EB0, unused: 24 The same proxy for the other non working one is: NO finalize, nor closing connection 2018/01/02 17:06:30 [debug] 6101#6101: *291 recvmsg: 52.200.231.253:13129 fd:51 n:313 2018/01/02 17:06:30 [info] 6101#6101: *291 udp client 52.200.231.253:13129 connected to 0.0.0.0:8330 2018/01/02 
17:06:30 [debug] 6101#6101: *291 posix_memalign: 00000000025DE410:256 @16 2018/01/02 17:06:30 [debug] 6101#6101: *291 posix_memalign: 00000000025DE520:256 @16 2018/01/02 17:06:30 [debug] 6101#6101: *291 generic phase: 0 2018/01/02 17:06:30 [debug] 6101#6101: *291 generic phase: 1 2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF 9405CE34 2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF 12282534 2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 0000FFFF 00000D0A 2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF 2E952734 2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF 368A1934 2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF AEEB2C34 2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF FDE7C834 2018/01/02 17:06:30 [debug] 6101#6101: *291 generic phase: 2 2018/01/02 17:06:30 [debug] 6101#6101: *291 proxy connection handler 2018/01/02 17:06:30 [debug] 6101#6101: *291 malloc: 00000000025DE630:400 2018/01/02 17:06:30 [debug] 6101#6101: *291 get rr peer, try: 1 2018/01/02 17:06:30 [debug] 6101#6101: *291 dgram socket 87 2018/01/02 17:06:30 [debug] 6101#6101: *291 epoll add connection: fd:87 ev:80002005 2018/01/02 17:06:30 [debug] 6101#6101: *291 connect to 52.44.235.174:8330, fd:87 #292 2018/01/02 17:06:30 [debug] 6101#6101: *291 connected 2018/01/02 17:06:30 [debug] 6101#6101: *291 proxy connect: 0 2018/01/02 17:06:30 [info] 6101#6101: *291 udp proxy 10.13.11.74:48173 connected to 52.44.235.174:8330 2018/01/02 17:06:30 [debug] 6101#6101: *291 malloc: 00000000025DE7D0:16384 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream proxy add preread buffer: 313 2018/01/02 17:06:30 [debug] 6101#6101: *291 posix_memalign: 00000000025E27E0:256 @16 2018/01/02 17:06:30 [debug] 6101#6101: *291 write new buf t:1 f:0 00000000025DE2C0, pos 00000000025DE2C0, size: 313 file: 0, size: 0 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream write filter: l:1 f:1 
s:313 2018/01/02 17:06:30 [debug] 6101#6101: *291 sendmsg: 313 of 313 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream write filter 0000000000000000 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer add: 51: 600000:1514913390811 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer: 51, old: 1514913390811, new: 1514913390811 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer: 51, old: 1514913390811, new: 1514913390811 2018/01/02 17:06:31 [debug] 6101#6101: recvmsg on 0.0.0.0:8330, ready: 0 2018/01/02 17:06:31 [debug] 6101#6101: posix_memalign: 00000000025EF740:256 @16 2018/01/02 17:06:31 [debug] 6101#6101: posix_memalign: 00000000025EF850:256 @16 2018/01/02 17:06:31 [debug] 6101#6101: malloc: 00000000025EF960:313 2018/01/02 17:06:31 [debug] 6101#6101: *297 recvmsg: 52.20.21.23:13129 fd:51 n:313 2018/01/02 17:06:31 [info] 6101#6101: *297 udp client 52.20.21.23:13129 connected to 0.0.0.0:8330 Both same nginx version. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jan 2 22:53:59 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 2 Jan 2018 22:53:59 +0000 Subject: how do I run multiple https web sites on a single IP address In-Reply-To: <0b12412b3121788fe794193871b30175.NginxMailingListEnglish@forum.nginx.org> References: <20180102110159.GF3127@daoine.org> <0b12412b3121788fe794193871b30175.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180102225359.GG3127@daoine.org> On Tue, Jan 02, 2018 at 11:27:07AM -0500, Kurogane wrote: Hi there, > >http://nginx.org/en/docs/http/configuring_https_servers.html > > I'm not sure what is your point here? nginx have built SNI a decade ago even > CentOS have nginx updated version. > > If my nginx not have enabled or not SNI support then why works with www? Ah, sorry - I had missed that https://www.domain.com, https://domain.com, and https://www.domain2.com all worked ok on IPv4. 
It is only https://domain2.com that presents an unwanted certificate. (And it presents the certificate for domain.com, even though www.domain.com is configured as the default_server.)

Do you have four separate ssl certificate files, each of which is valid for a single server name?

Or do you have one ssl certificate file which is valid for multiple server names?

> Can you enlighten me on what I am doing wrong, or what the "special"
> configuration is to use SNI with a shared IPv4 address.

One guess - is there any chance that the contents of the ssl_certificate file that applies in the domain2.com server{} block are actually the domain.com certificate? (Probably not, because the IPv6 connection should be using the same ssl_certificate, and no error was reported there.)

Other than that, I don't know. Can you provide a complete config and test commands that someone else can use to recreate the problem?

Or, to rule out any strange IPv4/IPv6 interaction -- do you see the same behaviour when you remove all of the IPv6 config?

Good luck with it,

f

-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Wed Jan 3 04:58:35 2018
From: nginx-forum at forum.nginx.org (idfariz)
Date: Tue, 02 Jan 2018 23:58:35 -0500
Subject: Proxy Protocol for IMAP and POP3
Message-ID: <608d39d0203df269f3ce6cdc514ec056.NginxMailingListEnglish@forum.nginx.org>

Hello all,

Currently we do load balancing with HAProxy for the nginx server that is included in Zimbra as its proxy service. But as we see in nginx's access log, all incoming source IPs are logged as HAProxy's IP. The question is how to configure nginx to show the client's origin IP instead of HAProxy's for IMAP and POP3 (mail)?

Note: there is a solution using the PROXY protocol (https://www.nginx.com/resources/admin-guide/proxy-protocol/), but it's available for http and stream only.
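For the http side, the PROXY-protocol-plus-realip combination that the linked guide describes might look like the sketch below. The load-balancer address 192.0.2.10 is a placeholder, and this only covers the http module, not the mail{} proxy asked about here.

```nginx
http {
    server {
        # Accept the PROXY protocol header that HAProxy prepends
        # (the HAProxy backend line needs "send-proxy" for this).
        listen 80 proxy_protocol;

        # Trust PROXY protocol addresses only when the connection
        # comes from the load balancer, and use the reported address
        # as the client address, so $remote_addr and the access log
        # show the real client instead of HAProxy.
        set_real_ip_from 192.0.2.10;
        real_ip_header   proxy_protocol;
    }
}
```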
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277972,277972#msg-277972

From alexander.naumann at artcom-venture.de  Wed Jan 3 10:03:27 2018
From: alexander.naumann at artcom-venture.de (Alexander Naumann)
Date: Wed, 3 Jan 2018 11:03:27 +0100 (CET)
Subject: Proxy Protocol for IMAP and POP3
In-Reply-To: <608d39d0203df269f3ce6cdc514ec056.NginxMailingListEnglish@forum.nginx.org>
References: <608d39d0203df269f3ce6cdc514ec056.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1362267223.27384502.1514973807714.JavaMail.zimbra@artcom-venture.de>

Hello,

you could use "set_real_ip_from 'IP from LB';"
http://nginx.org/en/docs/http/ngx_http_realip_module.html

-- 
Alexander Naumann

----- Original Message -----
From: "idfariz"
To: nginx at nginx.org
Sent: Wednesday, 3 January 2018 05:58:35
Subject: Proxy Protocol for IMAP and POP3

Hello all,

Currently we do load balancing with HAProxy for the nginx server that is included in Zimbra as its proxy service. But as we see in nginx's access log, all incoming source IPs are logged as HAProxy's IP. The question is how to configure nginx to show the client's origin IP instead of HAProxy's for IMAP and POP3 (mail)?

Note: there is a solution using the PROXY protocol (https://www.nginx.com/resources/admin-guide/proxy-protocol/), but it's available for http and stream only.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277972,277972#msg-277972

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Wed Jan 3 12:32:49 2018
From: nginx-forum at forum.nginx.org (jakewow)
Date: Wed, 03 Jan 2018 07:32:49 -0500
Subject: Regular expression length syntax not working?
In-Reply-To: <204ea4238fc8fe815659a6ec2dc26b7b.NginxMailingListEnglish@forum.nginx.org>
References: <20141211081225.GG15670@daoine.org> <204ea4238fc8fe815659a6ec2dc26b7b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <02768083f681c436a06fcdaf6b3b5647.NginxMailingListEnglish@forum.nginx.org>

For those who are looking for the answer:

> A regular expression containing the characters "{" and "}" should be quoted.

So, his location directive:

location ~ ^/event/[0-9,A-Z]{16}/info$ {
    proxy_pass http://localhost:7777;
}

should look like this in order to work:

location ~ "^/event/[0-9,A-Z]{16}/info$" {
    proxy_pass http://localhost:7777;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,255413,277974#msg-277974

From nginx-forum at forum.nginx.org  Wed Jan 3 19:23:32 2018
From: nginx-forum at forum.nginx.org (Kurogane)
Date: Wed, 03 Jan 2018 14:23:32 -0500
Subject: how do I run multiple https web sites on a single IP address
In-Reply-To: <20180102225359.GG3127@daoine.org>
References: <20180102225359.GG3127@daoine.org>
Message-ID: <89a5abd77347c8753d2420e940b18080.NginxMailingListEnglish@forum.nginx.org>

>Are you sure that a tool you're using to check supports SNI?

>wbr, Valentin V. Bartenev

What tool are you talking about? This error shows in the browser.

>Do you have four separate ssl certificate files, each of which is valid
>for a single server name?

>Or do you have one ssl certificate file which is valid for multiple server names?

I'm not sure what you mean, but I have two cert files. Each cert has a valid common name covering the non-www and www names.

>One guess - is there any chance that the contents of the ssl_certificate
>file that applies in the domain2.com server{} block is actually the
>domain.com certificate? (Probably not, because the IPv6 connection should
>be using the same ssl_certificate, and no error was reported there.)

domain2.com is just a block that only does a redirect, that's all. It is what I put in the initial thread.
server {
    listen 443 ssl http2;
    listen [::3]:443 ssl http2;
    server_name domain2.com;
    return 301 https://www.$server_name$request_uri;
}

This is the full config of this block.

>Or, to rule out any strange IPv4/IPv6 interaction -- do you see the same
>behaviour when you remove all of the IPv6 config?

Same problem with or without IPv6.

I just noticed that when I disable IPv6 and only access via IPv4, it does something weird. When I visit https://domain2.com I get the same error (the domain.com certificate), and Chrome or whatever browser asks if I want to continue; when I click continue, it redirects me to www.domain2.com (which is what I want, and which works for domain.com, and for domain2.com over IPv6). I'm not sure why it first checks domain.com and only then uses the domain2.com server block.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277962,277978#msg-277978

From francis at daoine.org  Wed Jan 3 22:38:02 2018
From: francis at daoine.org (Francis Daly)
Date: Wed, 3 Jan 2018 22:38:02 +0000
Subject: how do I run multiple https web sites on a single IP address
In-Reply-To: <89a5abd77347c8753d2420e940b18080.NginxMailingListEnglish@forum.nginx.org>
References: <20180102225359.GG3127@daoine.org> <89a5abd77347c8753d2420e940b18080.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180103223802.GJ3127@daoine.org>

On Wed, Jan 03, 2018 at 02:23:32PM -0500, Kurogane wrote:

Hi there,

> >Are you sure that a tool you're using to check supports SNI?
>
> What tool are you talking about? This error shows in the browser.

In this case, the tool is "the browser". Which browser, which version?

The aim here is to allow someone who is not you to see the problem that you are seeing.

Often, it is useful to use a low-level tool which hides nothing. So, for example, you might be able to test with

  openssl s_client -servername domain.com -connect 127.0.0.1:443

to see what certificate is returned; then repeat the test with "domain2.com" and "www.domain2.com".
(You could also probably use something like curl -k -v --resolve domain.com:443:127.0.0.1 https://domain.com to see the same information, along with the http request and response.) > >Do you have four separate ssl certificate files, each of which is valid > >for a single server name? > > >Or do you have one ssl certificate file which is valid for multiple > server names? > > I'm not sure why you mean but i have two cert files. Each cert have a valid > common name to use non www and www What does that mean, specifically? If you do something like openssl x509 -noout -text < your-domain.com-cert do you see Subject: CN=www.domain.com and X509v3 Subject Alternative Name: DNS:domain.com or do you see something else? Same question, for your-domain2.com-cert. In your nginx config, what "ssl_certificate" lines do you have? You did not show any inside the server{} blocks; perhaps you have them inside the http{} block? The aim here is to allow someone to create an nginx instance which resembles yours, and which shows the problem, or which does not show the problem. The problem that you report should not be happening. If someone else can re-create it, perhaps there is a bug in nginx (that has not been reported previously) that can be fixed. If no-one else can re-create it, perhaps there is something unusual about your configuration and set-up. Only you know what your configuration is. If you provide enough information to allow someone else get a similar configuration, then maybe they will be able to see the cause of the problem. Can you show a complete, but minimum, configuration that still shows the problem? > server { > listen 443 ssl http2; > listen [::3]:443 ssl http2; > server_name domain2.com; > return 301 https://www.$server_name$request_uri; > } > > This is the full config of this block. Which ssl_certificate file do you want nginx to use when a request for this server_name comes in? How does nginx know that you want nginx to use that ssl_certificate? 
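One way to make the answer to that question explicit is to give each server{} block its own certificate pair, so that the SNI name selects both the block and its certificate. A minimal sketch, with hypothetical file paths:

```nginx
server {
    listen 443 ssl;
    server_name domain2.com;

    # Certificate whose names cover domain2.com / www.domain2.com
    ssl_certificate     /etc/nginx/ssl/domain2.com.crt;
    ssl_certificate_key /etc/nginx/ssl/domain2.com.key;

    return 301 https://www.domain2.com$request_uri;
}
```

If ssl_certificate instead appears only once, at http{} level, every server{} inherits that single certificate, and every SNI name is answered with it regardless of which server{} block matches.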
> Same problem with or without IPv6. Ok, that's good to know. Your example config can now remove all of the IPv6 lines. Perhaps it can also remove the "http2" parts, to make it even easier for someone else to build a similar configuration. > I just notice when i disable IPv6 and only access via IPv4 do something > wierd. > > When i visit https://domain2.com i got the same error (domain.com > certificate) and chrome or whatever browser say me if i want to continue and > when i click to continue redirect me to www.domain2.com (is what i want to > do and work with domain.com and domain2.com with IPv6). I'm not sure why > first check domain.com and then use domain2.com server block. That sounds to me like it is exactly the same as what happened when IPv6 was enabled. Is it different? If so, that is interesting information. Maybe there is some IPv4/IPv6 interaction involved. Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Jan 3 23:11:05 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Jan 2018 02:11:05 +0300 Subject: Too many workers conenction on udp stream In-Reply-To: References: Message-ID: <20180103231105.GQ34136@mdounin.ru> Hello! On Tue, Jan 02, 2018 at 02:42:53PM -0300, Agus wrote: > Hi guys, > > I have an nginx proxying udp/streams to another proxy which handles the > connection to the backend. > > The same proxy proxying the udp streams to another proxy is working ok. > But when it proxies it to the other one, it fills with the worker error. I > turned on debugging and what i see, is that nginx aint releasing the udp > connections... > I could use a hand as I cant get it to work. 
> > in the first proxy i have: > > server { > listen *:8330 udp; > proxy_responses 1; > proxy_pass second-proxy:8330; > error_log /var/log/nginx/8330udp.log debug; > } > > > > in the second that is the main which receives from various proxies: > > server { > listen *:8330 udp; > proxy_responses 1; > proxy_pass server:8302; > error_log /var/log/nginx/udp8330.log debug; > } > > > This same config in another "third" proxy for a differnet set of backends > works ok. > > > > The main proxy for the working requests logs is like this, it is ending the > connections: > > 2018/01/02 17:08:01 [debug] 6158#6158: *13 recv: fd:70 183 of 16384 > 2018/01/02 17:08:01 [debug] 6158#6158: *13 write new buf t:1 f:0 > 00000000012A6D00, pos 00000000012A7270, size: 183 file: 0, size: 0 > 2018/01/02 17:08:01 [debug] 6158#6158: *13 stream write filter: l:1 f:1 > s:183 > 2018/01/02 17:08:01 [debug] 6158#6158: *13 sendmsg: 183 of 183 > 2018/01/02 17:08:01 [debug] 6158#6158: *13 stream write filter > 0000000000000000 > 2018/01/02 17:08:01 [info] 6158#6158: *13 udp upstream disconnected, bytes > from/to client:122/183, bytes from/to upstream:183/122 Here nginx got an UDP response, and based on "proxy_responses 1" in your configuration closes the session. [...] > The same proxy for the other non working one is: NO finalize, nor closing > connection > 2018/01/02 17:06:30 [debug] 6101#6101: *291 recvmsg: 52.200.231.253:13129 > fd:51 n:313 > 2018/01/02 17:06:30 [info] 6101#6101: *291 udp client 52.200.231.253:13129 > connected to 0.0.0.0:8330 [...] 
> 2018/01/02 17:06:30 [info] 6101#6101: *291 udp proxy 10.13.11.74:48173 > connected to 52.44.235.174:8330 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 malloc: 00000000025DE7D0:16384 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream proxy add preread > buffer: 313 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 posix_memalign: > 00000000025E27E0:256 @16 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 write new buf t:1 f:0 > 00000000025DE2C0, pos 00000000025DE2C0, size: 313 file: 0, size: 0 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream write filter: l:1 f:1 > s:313 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 sendmsg: 313 of 313 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream write filter > 0000000000000000 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer add: 51: > 600000:1514913390811 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer: 51, old: > 1514913390811, new: 1514913390811 > 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer: 51, old: > 1514913390811, new: 1514913390811 Here nginx got a new UDP client, sent the packet to the upstream server and started to wait for a response. Once the response is received, nginx will close the session much like in the above case. How long nginx will wait for a response can be controlled using the "proxy_timeout" directive" http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout In your configuration it seems to be set to 600 seconds, which is 10 times longer than the default. If you want nginx to better tolerate non-responding UDP backends, you may want to configure shorter timeouts instead. Alternatively, consider configuring more worker connections. 
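Applied to the first proxy from the configuration quoted above, the shorter timeout might look like this (the 10s value is only illustrative):

```nginx
stream {
    server {
        listen *:8330 udp;
        proxy_responses 1;
        # Give up on a session whose backend never replies after 10s,
        # instead of holding it (and its worker connection) for the
        # configured 600s.
        proxy_timeout 10s;
        proxy_pass second-proxy:8330;
    }
}
```

The other route is raising worker_connections in the events{} block, so that long-lived UDP sessions do not exhaust the limit.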
-- 
Maxim Dounin
http://mdounin.ru/

From germanjaber at gmail.com  Thu Jan 4 01:22:15 2018
From: germanjaber at gmail.com (German Jaber)
Date: Thu, 04 Jan 2018 01:22:15 +0000
Subject: Allow the use of the "env" directive in contexts other than "main"
Message-ID:

Hi folks,

Is there a reason why the "env" directive is only allowed inside the "main" context? It would simplify many of my Docker deployments if I could do away with sed and envsubst and use the "env" directive directly.

If the maintainers approve the inclusion of this feature in Nginx, I would like to offer my time to this project by implementing this functionality.

Regards,
German Jaber

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dev at efxnews.com  Thu Jan 4 02:01:16 2018
From: dev at efxnews.com (Ameer Antar)
Date: Wed, 3 Jan 2018 21:01:16 -0500 (EST)
Subject: nginx latency/performance issues
Message-ID: <967952017.2.1515031276772.JavaMail.genghis@boobie>

An HTML attachment was scrubbed...
URL:

From wade.girard at gmail.com  Thu Jan 4 13:11:22 2018
From: wade.girard at gmail.com (Wade Girard)
Date: Thu, 4 Jan 2018 07:11:22 -0600
Subject: 504 gateway timeouts
In-Reply-To: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org>
References: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org>
Message-ID:

The version that was on the ubuntu servers was 1.10.xx. I just updated it to:

nginx version: nginx/1.13.8

And I am still having the same issue. How do I "Try to flush out some output early on so that nginx will know that Tomcat is alive"?

The nginx and tomcat connection is working fine for all requests/responses that take less than 60 seconds.

On Wed, Dec 27, 2017 at 4:18 PM, Igal @ Lucee.org wrote:

> On 12/27/2017 2:03 PM, Wade Girard wrote:
>
> I am using nginx on an ubuntu server as a proxy to a tomcat server.
>
> The nginx server is setup for https.
>
> I don't know how to determine what version of nginx I am using, but I
> install it on the ubuntu 1.16 server using apt-get.
> > Run: nginx -v > > I have an issue that I have resolved locally on my Mac (using version 1.12 > of nginx and Tomcat 7) where requests through the proxy that take more than > 60 seconds were failing; they are now working. > > What seemed to be the fix was adding the following to the nginx.conf file > > proxy_connect_timeout 600; > > proxy_send_timeout 600; > > proxy_read_timeout 600; > > send_timeout 600; > > in the location section for my proxy. > > > However this same change on the ubuntu servers has no effect at all. > > Try to flush out some output early on so that nginx will know that Tomcat > is alive. > > Igal Sapir > Lucee Core Developer > Lucee.org > > > -- Wade Girard c: 612.363.0902 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 4 14:21:21 2018 From: nginx-forum at forum.nginx.org (meteor8488) Date: Thu, 04 Jan 2018 09:21:21 -0500 Subject: how to enable http2 for two server hosted on the same IP Message-ID: <248c1e60b056f5530c52decfa4515e83.NginxMailingListEnglish@forum.nginx.org> Hi All, If I use

server {
    listen 443 accept_filter=dataready ssl http2;
}
server {
    listen 443 http2 sndbuf=512k;
}

I'll get the error

duplicate listen options for 0.0.0.0:443

I know it's caused by http2 in server 2. But how can I enable http2 on two servers? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277991,277991#msg-277991 From sca at andreasschulze.de Thu Jan 4 15:53:48 2018 From: sca at andreasschulze.de (A.
Schulze) Date: Thu, 04 Jan 2018 16:53:48 +0100 Subject: how to enable http2 for two server hosted on the same IP In-Reply-To: <248c1e60b056f5530c52decfa4515e83.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180104165348.Horde.TlI3VKE3FI_AbaVyfRA1V_m@andreasschulze.de> meteor8488: > Hi All, > > If I use > > server { > listen 443 accept_filter=dataready ssl http2; > } > server { > listen 443 http2 sndbuf=512k; > } > > I'll get error > duplicate listen options for 0.0.0.0:443 > > I know it's caused by http2 in server 2. Probably not: the error is caused by specifying sndbuf in the second server. From https://nginx.org/r/listen: The listen directive can have several additional parameters specific to socket-related system calls. These parameters can be specified in any listen directive, but only once for a given address:port pair. "but only once for a given address:port pair" is the point!

multiple options: ssl, http2, spdy, proxy_protocol
single options: setfib, fastopen, backlog, ...

Andreas
Just to see what kind of improvement we have, I > compared avg response times for the same static javascript file and > noticed a difference, but the opposite of what I expected: ~128ms from > apache and ~142ms from nginx. I've also tested with php enabled on > nginx and seen about the same results. > There must be something not right. I've looked at a lot of performance > tips, but no huge difference. The only minor help was switching to > buffered logging, but the difference is probably close to the margin of > error. Can anyone help? Depending on what and how you've measured, the difference between ~128ms and ~142ms might be either non-significant, or explainable by different settings, or have some other explanation. You may want to elaborate a bit more on what and how you are measuring. Also, you may want to measure more details to better understand where the time is spent. In any case, both values are larger than 100ms, and this suggests that you aren't measuring local latency. Likely, most of the latency is network-related, and switching servers won't help much here. In particular, if you are measuring latency on real users within your web application, the difference might be due to no established connection and/or no cached SSL session to a separate static site. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Jan 4 17:06:15 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Jan 2018 20:06:15 +0300 Subject: Allow the use of the "env" directive in contexts other than "main" In-Reply-To: References: Message-ID: <20180104170615.GT34136@mdounin.ru> Hello! On Thu, Jan 04, 2018 at 01:22:15AM +0000, German Jaber wrote: > Is there a reason why the "env" directive is only allowed inside the "main" > contexts? The "env" directive controls which environment variables will be available in the nginx worker processes.
Environment variables apply to the whole worker process, and are not differentiated based on what the worker process is doing at any particular time. As such, these directives are to be specified at the global level. > It would simplify many of my Docker deployments if I could do away with sed > and envsubst and use the "env" directive directly. > > If the maintainers approve the inclusion of this feature in Nginx, I would > like to offer my time to this project by implementing this functionality. Sorry, from your description it is not clear what you are trying to do and how the "env" directive can help here. You may want to elaborate on this. -- Maxim Dounin http://mdounin.ru/ From dev at efxnews.com Thu Jan 4 23:12:38 2018 From: dev at efxnews.com (eFX News Development) Date: Thu, 4 Jan 2018 18:12:38 -0500 (EST) Subject: nginx latency/performance issues Message-ID: <423512847.11.1515107558342.JavaMail.genghis@boobie> Hello! Thanks for your response. I'm using apache bench for the tests, simply hitting the same static javascript file (no php). I was thinking since I'm using the same location and as long as the tests are repeatable, using remote testing would be ok and give more realistic results. Both apache and nginx are on the same machine, just using different IP aliases so I can connect to both via port 443. After more detective work, I think I've narrowed the problem to the aliasing. The first alias which nginx is on is slower than the second where apache is. When I placed nginx on the same ip, but different port than apache, the speed is much better. There must be some ip address priority as the nginx server is new and has zero traffic on it. This is probably out of scope, but if you have any other thoughts or advice, let me know. Thanks again for your help on this.
-Ameer ----- Original Message ----- From: Maxim Dounin To: nginx at nginx.org Subject: Re: nginx latency/performance issues Date: 1/4/18 11:52:44 AM >Depending on what and how you've measured, the difference between >~128ms and ~142ms might be either non-significant, or explainable by >different settings, or have some other explanation. You may want >to elaborate a bit more on what and how you are measuring. Also, >you may want to measure more details to better understand where >the time is spent. > >In any case, both values are larger than 100ms, and this suggests >that you aren't measuring local latency. Likely, most of the >latency is network-related, and switching servers won't help much >here. In particular, if you are measuring latency on real users >within your web application, the difference might be due to no >established connection and/or no cached SSL session to a separate >static site. > >-- >Maxim Dounin >http://mdounin.ru/ >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Jan 5 00:19:32 2018 From: nginx-forum at forum.nginx.org (Kurogane) Date: Thu, 04 Jan 2018 19:19:32 -0500 Subject: how do I run multiple https web sites on a single IP address In-Reply-To: <20180103223802.GJ3127@daoine.org> References: <20180103223802.GJ3127@daoine.org> Message-ID: <3fbb109681071293656efa412bd9c565.NginxMailingListEnglish@forum.nginx.org> I have fixed the problem now; I'm not sure it is the best way, but at least it is working. In both of the two https server blocks you need to put all of the certificate information (ssl_certificate, etc.), for domain2.com and for www.domain2.com. I had only put the certificate information in www.domain2.com, while domain2.com only did the redirect, as in the example config in my initial thread. I had tried to simplify the config to use as few server blocks as possible, but it seems I made it worse because of that.
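The layout Kurogane describes can be sketched as follows; this is only an illustration with hypothetical certificate paths and docroot, in which each HTTPS server block carries its own certificate directives and SNI selects the block by server_name:

```nginx
# Sketch: both HTTPS server blocks need their own certificate directives.
# Certificate paths and the docroot below are hypothetical.
server {
    listen 443 ssl;
    server_name domain2.com;
    ssl_certificate     /etc/ssl/certs/domain2.crt;
    ssl_certificate_key /etc/ssl/private/domain2.key;
    # This block exists only to redirect to the canonical name.
    return 301 https://www.domain2.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.domain2.com;
    ssl_certificate     /etc/ssl/certs/domain2.crt;
    ssl_certificate_key /etc/ssl/private/domain2.key;
    root /var/www/domain2;
}
```

Without the ssl_certificate directives in the redirect block, clients that reach it by SNI have no certificate to handshake with, which matches the symptom described.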
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277962,278002#msg-278002 From mikydevel at yahoo.fr Fri Jan 5 01:04:52 2018 From: mikydevel at yahoo.fr (Mik J) Date: Fri, 5 Jan 2018 01:04:52 +0000 (UTC) Subject: IPv6 does not work correctly with nginx References: <910728718.1183010.1515114292561.ref@mail.yahoo.com> Message-ID: <910728718.1183010.1515114292561@mail.yahoo.com> Hello, I'm trying to finish configuring nginx for IPv6. listen [::]:443 ssl; doesn't work, but listen [fc00:1:1::13]:443 ssl; works. I need to explicitly specify the IPv6 address, whereas with IPv4 I don't need to.

# nginx -V
nginx version: nginx/1.12.1

server {
    listen 443 ssl;
#   listen [::]:443 ssl;
    listen [fc00:1:1::13]:443 ssl;
    server_name test.mydomain.org;
    root /var/www/html;

# ifconfig vmx0
vmx0: flags=8843 mtu 1500
...inet6 fc00:1:1::13 prefixlen 64

Does someone know why? Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From zchao1995 at gmail.com Fri Jan 5 01:46:43 2018 From: zchao1995 at gmail.com (Zhang Chao) Date: Thu, 4 Jan 2018 17:46:43 -0800 Subject: 504 gateway timeouts In-Reply-To: References: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org> Message-ID: > The version that is on the ubuntu servers was 1.10.xx. I just updated it to > > nginx version: nginx/1.13.8 > > And I am still having the same issue. > > How do I "Try to flush out some output early on so that nginx will know that Tomcat is alive." > > The nginx and tomcat connection is working fine for all requests/responses that take less t Maybe you can flush out the HTTP response headers quickly. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Fri Jan 5 01:58:45 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Jan 2018 04:58:45 +0300 Subject: nginx latency/performance issues In-Reply-To: <423512847.11.1515107558342.JavaMail.genghis@boobie> References: <423512847.11.1515107558342.JavaMail.genghis@boobie> Message-ID: <20180105015844.GU34136@mdounin.ru> Hello! On Thu, Jan 04, 2018 at 06:12:38PM -0500, eFX News Development wrote: > Hello! Thanks for your response. I'm using apache bench for the > tests, simply hitting the same static javascript file (no php). > I was thinking since I'm using the same location and as long as > the tests are repeatable, using remote testing would be ok and > give more realistic results. With ApacheBench on an SSL host you are likely testing your SSL configuration. Or, rather, performance of handshakes with the most secure ciphersuite available in your OpenSSL library. Try looking into the detailed timing information provided by ApacheBench; it should show that most of the time is spent in the "Connect" phase, which in fact means that the time is spent in SSL handshakes. Also try using keepalive connections with "ab -k" to see a dramatic difference. > Both apache and nginx are on the same machine, just using > different IP aliases so I can connect to both via port 443. > After more detective work, I think I've narrowed the problem to > the aliasing. The first alias which nginx is on is slower than > the second where apache is. When I placed nginx on the same ip, > but different port than apache, the speed is much better. There > must be some ip address priority as the nginx server is new and > has zero traffic on it. This is probably out of scope, but if > you have any other thoughts or advice, let me know. First of all, check if your results are statistically significant. That is, take a look at the "+/-sd" column in the ApacheBench detailed output.
Alternatively, run both tests at least three times and compare resulting numbers using ministat(1). -- Maxim Dounin http://mdounin.ru/ From peter_booth at me.com Fri Jan 5 02:19:29 2018 From: peter_booth at me.com (Peter Booth) Date: Thu, 04 Jan 2018 21:19:29 -0500 Subject: nginx latency/performance issues In-Reply-To: <423512847.11.1515107558342.JavaMail.genghis@boobie> References: <423512847.11.1515107558342.JavaMail.genghis@boobie> Message-ID: Are you running apache bench on the same or a different host? How big is the javascript file? What is your ab command line? If your site is to be statically published (which is a great idea) why are you using SSL anyway? > On 4 Jan 2018, at 6:12 PM, eFX News Development wrote: > > Hello! Thanks for your response. I'm using apache bench for the tests, simply hitting the same static javascript file (no php). I was thinking since I'm using the same location and as long as the tests are repeatable, using remote testing would be ok and give more realistic results. > > Both apache and nginx are on the same machine, just using different IP aliases so I can connect to both via port 443. After more detective work, I think I've narrowed the problem to the aliasing. The first alias which nginx is on is slower than the second where apache is. When I placed nginx on the same ip, but different port than apache, the speed is much better. There must be some ip address priority as the nginx server is new and has zero traffic on it. This is probably out of scope, but if you have any other thoughts or advice, let me know. > > Thanks again for your help on this. > > -Ameer > > ----- Original Message ----- > From: Maxim Dounin > To: nginx at nginx.org > Subject: Re: nginx latency/performance issues > Date: 1/4/18 11:52:44 AM > >> Depending on what and how you've measured, the difference between >> ~128ms and ~142ms might be either non-significant, or explainable by >> different settings, or have some other explanation.
You may want >> to elaborate a bit more on what and how you are measuring. Also, >> you may want to measure more details to better understand where >> the time is spent. >> >> In any case, both values are larger than 100ms, and this suggests >> that you aren't measuring local latency. Likely, most of the >> latency is network-related, and switching servers won't help much >> here. In particular, if you are measuring latency on real users >> within your web application, the difference might be due to no >> established connection and/or no cached SSL session to a separate >> static site. >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From germanjaber at gmail.com Fri Jan 5 02:28:45 2018 From: germanjaber at gmail.com (German Jaber) Date: Fri, 05 Jan 2018 02:28:45 +0000 Subject: Allow the use of the "env" directive in contexts other than "main" Message-ID: What I want to be able to do is basically what this guy tried to do: https://serverfault.com/questions/577370/how-can-i-use-environment-variables-in-nginx-conf Succinctly, I want to pass the value of environment variables to my server and location contexts, and use those values in the arguments passed to my directives. For example, say I have two servers, dev and prod, and that I have a file in sites-enabled that reads:

env URL;
server {
    listen 80;
    server_name $URL;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location /static/ {
        alias /app/static/;
    }

    location / {
        alias /app/main/;
    }
}

I want the server_name directive to receive as argument the domains (or list of domains) contained in the environment variable $URL.
This is very useful when using container technology (Docker, Kubernetes, etc.) as the same Nginx configuration files may be used by many different environments (dev, staging, prod, CI, CD, etc.). Many times these environments are created dynamically and programmatically by our continuous delivery pipelines, so setting them by hand is cumbersome or downright impossible. Right now I use sed and/or envsubst in our scripts to modify the file before using it. It would simplify our deployment scripts a ton if Nginx could just read these environment variables and substitute them for us wherever we need. > Hello! > > On Thu, Jan 04, 2018 at 01:22:15AM +0000, German Jaber wrote: > > > Is there a reason why the "env" directive is only allowed inside the "main" > > contexts? > > The "env" directive controls which environment variables will be > available in the nginx worker processes. Environment variables > apply to the whole worker process, and are not differentiated based on > what the worker process is doing at any particular time. As such, > these directives are to be specified at the global level. > > > It would simplify many of my Docker deployments if I could do away with sed > > and envsubst and use the "env" directive directly. > > > > If the maintainers approve the inclusion of this feature in Nginx, I would > > like to offer my time to this project by implementing this functionality. > > Sorry, from your description it is not clear what you are trying > to do and how the "env" directive can help here. You may want to > elaborate on this. > > -- > Maxim Dounin > http://mdounin.ru/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wade.girard at gmail.com Fri Jan 5 04:45:15 2018 From: wade.girard at gmail.com (Wade Girard) Date: Thu, 4 Jan 2018 22:45:15 -0600 Subject: 504 gateway timeouts In-Reply-To: References: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org> Message-ID: I am not sure what is meant by this or what action you are asking me to take. The settings, when added to the nginx conf file on the Mac OS server (with nginx reloaded), take effect immediately and work as expected; the same settings, when added to the nginx conf file on Ubuntu (with nginx reloaded), have no effect at all. What steps can I take to have the proxy in nginx honor these timeouts, or what other settings/actions can I take to make this work? Thanks On Thu, Jan 4, 2018 at 7:46 PM, Zhang Chao wrote: > > The version that is on the ubuntu servers was 1.10.xx. I just updated it > to > > > > nginx version: nginx/1.13.8 > > > > And I am still having the same issue. > > > > How do I "Try to flush out some output early on so that nginx will know > that Tomcat is alive." > > > > The nginx and tomcat connection is working fine for all > requests/responses that take less t > > Maybe you can flush out the HTTP response headers quickly. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Wade Girard c: 612.363.0902 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jan 5 05:55:25 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Jan 2018 08:55:25 +0300 Subject: Allow the use of the "env" directive in contexts other than "main" In-Reply-To: References: Message-ID: <20180105055525.GW34136@mdounin.ru> Hello!
On Fri, Jan 05, 2018 at 02:28:45AM +0000, German Jaber wrote: > What I want to be able to do is basically what this guy tried to do: > https://serverfault.com/questions/577370/how-can-i-use-environment-variables-in-nginx-conf > > Succinctly, I want to pass the value of environment variables to my server > and location contexts, and use those values in the arguments passed to my > directives. > > For example, say I have two servers, dev and prod, and that I have a file > in sites-enabled that reads: > > env URL; > server { > listen 80; > server_name $URL; > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > > location /static/ { > alias /app/static/; > } > > location / { > alias /app/main/; > } > } > > I want the server_name directive to receive as argument the domains (or > list of domains) contained in the environment variable $URL. > > This is very useful when using container technology (Docker, Kubernetes, > etc.) as the same Nginx configuration files may be used by many different > environments (dev, staging, prod, CI, CD, etc.). Many times these > environments are created dynamically and programmatically by our > continuous delivery pipelines, so setting them by hand is cumbersome or > downright impossible. Right now I use sed and/or envsubst in our scripts to > modify the file before using it. > > It would simplify our deployment scripts a ton if Nginx could just read > these environment variables and substitute them for us wherever we need. So, this is certainly not about allowing the "env" directive in other contexts, but about using environment variables in the configuration. This is not something nginx currently supports, but rather a completely new mechanism - some preprocessing of the configuration. Something like this was discussed more than once, and the current consensus is that using various template mechanisms like sed, cpp, or whatever you prefer is a good enough solution for such tasks.
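The template approach Maxim refers to can be sketched in a few lines of shell (file names here are made up for illustration): keep a placeholder in a template file and substitute it before nginx reads the config. envsubst from GNU gettext does the same job as the sed call if you prefer it.

```shell
# Create a one-line template containing a literal ${URL} placeholder.
# (Single quotes prevent the shell from expanding it here.)
printf 'server_name ${URL};\n' > nginx.conf.template

# At deploy time, substitute the placeholder with the value of the
# URL environment variable and write the real config file.
URL=example.com
sed "s/\${URL}/${URL}/g" nginx.conf.template > default.conf

cat default.conf   # -> server_name example.com;
```

In a Docker entrypoint the same two commands would run before `nginx -g 'daemon off;'`, so each environment gets a config rendered from the same template.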
Note that using variables as you seem to imply in the example above is a bad idea, as variables are evaluated at run-time. See http://nginx.org/en/docs/faq/variables_in_config.html for details. -- Maxim Dounin http://mdounin.ru/ From dev at efxnews.com Fri Jan 5 06:24:39 2018 From: dev at efxnews.com (Ameer Antar) Date: Fri, 5 Jan 2018 01:24:39 -0500 (EST) Subject: nginx latency/performance issues Message-ID: <1498556397.2.1515133479549.JavaMail.genghis@boobie> I'm using multiple runs with these commands: ab -n100 ab -c20 -n100 Testing now on the same host eliminates the issue with the ip alias, so I can now see nginx running much faster for the 95kB file (gzip is on, so it's probably less). Unfortunately, we have an embedded login form, so if you don't use https there is a warning in most browsers about this being insecure, even though it posts to a different sub-domain on https. ----- Original Message ----- From: Peter Booth To: Mik J via nginx Cc: Maxim Dounin Subject: Re: nginx latency/performance issues Date: 1/4/18 9:19:29 PM >Are you running apache bench on the same or a different host? >How big is the javascript file? What is your ab command line? >If your site is to be statically published (which is a great idea) >why are you using SSL anyway? > From dev at efxnews.com Fri Jan 5 06:34:32 2018 From: dev at efxnews.com (Ameer Antar) Date: Fri, 5 Jan 2018 01:34:32 -0500 (EST) Subject: nginx latency/performance issues Message-ID: <1055063433.5.1515134072004.JavaMail.genghis@boobie> Hello, ----- Original Message ----- From: Maxim Dounin To: nginx at nginx.org Subject: Re: nginx latency/performance issues Date: 1/4/18 8:58:45 PM >With ApacheBench on an SSL host you are likely testing your SSL >configuration. Or, rather, performance of handshakes with the most >secure ciphersuite available in your OpenSSL library.
Try >looking into the detailed timing information provided by ApacheBench; >it should show that most of the time is spent in the >"Connect" >phase, which in fact means that the time is spent in SSL >handshakes. Also try using keepalive connections with "ab -k" >to >see a dramatic difference. > Yes, using http is much faster, but I was thinking of getting stats in a real-world scenario. That's probably the wrong course in this case to compare two different servers. Eliminating the extra common bottlenecks really focuses on the difference between the two servers. I guess the more "real-world" stats might be good just to have an idea of what users may experience, but not for comparing performance. >First of all, check if your results are statistically significant. >That is, take a look at the "+/-sd" column in the ApacheBench >detailed output. Alternatively, run both tests at least three >times and compare resulting numbers using ministat(1). The data looks good now, with minimal deviation. Now I'm getting 36k req/s vs. ~25k req/s for apache for 20 concurrent requests. I'm pleased, but the only remaining issue is why remote performance is not as good except for one ip alias. If my assumption is correct, this will get better over time. For now, I'll just have to wait to find out... Thanks again. -Ameer From nginx-forum at forum.nginx.org Fri Jan 5 06:47:07 2018 From: nginx-forum at forum.nginx.org (meteor8488) Date: Fri, 05 Jan 2018 01:47:07 -0500 Subject: how to enable http2 for two server hosted on the same IP In-Reply-To: <20180104165348.Horde.TlI3VKE3FI_AbaVyfRA1V_m@andreasschulze.de> References: <20180104165348.Horde.TlI3VKE3FI_AbaVyfRA1V_m@andreasschulze.de> Message-ID: <890ce1943e598bdffe73562bad7d6ce0.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply. Server 1 is for php and server 2 is for static files. I want to enable sndbuf on server 2. Then how can I do that?
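Following Andreas's quote from the listen documentation, one reading is that the socket options belong to the shared listening socket itself, so they can be written on either server block, but only once; a setting like sndbuf then affects every server sharing that address:port, including the static-file one. A sketch under that assumption (server names are hypothetical):

```nginx
# All one-shot socket options for 0.0.0.0:443 go on exactly one listen.
server {
    listen 443 ssl http2 accept_filter=dataready sndbuf=512k;
    server_name php.example.com;       # hypothetical: the php server
}

# ssl/http2 may be repeated on other servers for the same address:port;
# sndbuf and the other socket options must not be.
server {
    listen 443 ssl http2;
    server_name static.example.com;    # hypothetical: the static server
}
```

Since both servers share one listening socket, there is no way to give the static server a different sndbuf than the php server on the same address:port; a separate IP or port would be needed for that.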
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277991,278023#msg-278023 From peter_booth at me.com Fri Jan 5 07:33:42 2018 From: peter_booth at me.com (Peter Booth) Date: Fri, 05 Jan 2018 02:33:42 -0500 Subject: 504 gateway timeouts In-Reply-To: References: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org> Message-ID: Wade, I think that you are asking "hey, why isn't nginx behaving identically on MacOS and Linux when I create a servlet that invokes Thread.sleep(300000) before it returns a response?" Am I reading you correctly? A flippant response would be to say: "because OS/X and Linux are different OSes that behave differently". It would probably help us if you explained a little more about your test, why the sleep is there and what your goals are? Peter > On Jan 4, 2018, at 11:45 PM, Wade Girard wrote: > > I am not sure what is meant by this or what action you are asking me to take. The settings, when added to the nginx conf file on the Mac OS server (with nginx reloaded), take effect immediately and work as expected; the same settings, when added to the nginx conf file on Ubuntu (with nginx reloaded), have no effect at all. What steps can I take to have the proxy in nginx honor these timeouts, or what other settings/actions can I take to make this work? > > Thanks > > On Thu, Jan 4, 2018 at 7:46 PM, Zhang Chao > wrote: > > The version that is on the ubuntu servers was 1.10.xx. I just updated it to > > > > nginx version: nginx/1.13.8 > > > > And I am still having the same issue. > > > > How do I "Try to flush out some output early on so that nginx will know that Tomcat is alive." > > > > The nginx and tomcat connection is working fine for all requests/responses that take less t > > Maybe you can flush out the HTTP response headers quickly.
> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Wade Girard > c: 612.363.0902 > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at glanzmann.de Fri Jan 5 09:54:48 2018 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Fri, 5 Jan 2018 10:54:48 +0100 Subject: URI Substitute _ with - and + with space Message-ID: <20180105095448.GB27530@glanzmann.de> Hello, I would like to substitute '_' with '-' and '+' with ' ' in the $URI and pass it to an upstream server that can't handle _ and + in the URI (IIS). Based on [1] I found a working solution, however I would like to know if there is a more efficient way to do the same, for example using lua?

location / {
    rewrite ^([^_]*)_([^_]*)_([^_]*)_([^_]*)_([^_]*)_([^_]*)_([^_]*)_([^_]*)_(.*)$ $1-$2-$3-$4-$5-$6-$7-$8-$9;
    rewrite ^([^_]*)_([^_]*)_([^_]*)_([^_]*)_(.*)$ $1-$2-$3-$4-$5;
    rewrite ^([^_]*)_([^_]*)_(.*)$ $1-$2-$3;
    rewrite ^([^_]*)_(.*)$ $1-$2;
    rewrite ^([^+]*)\+([^+]*)\+([^+]*)\+([^+]*)\+([^+]*)\+([^+]*)\+([^+]*)\+([^+]*)\+(.*)$ $1%20$2%20$3%20$4%20$5%20$6%20$7%20$8%20$9;
    rewrite ^([^+]*)\+([^+]*)\+([^+]*)\+([^+]*)\+(.*)$ $1%20$2%20$3%20$4%20$5;
    rewrite ^([^+]*)\+([^+]*)\+(.*)$ $1%20$2%20$3;
    rewrite ^([^+]*)\+(.*)$ '$1 $2';
    proxy_pass https://upstream/;
}

[1] https://stackoverflow.com/questions/15912191/how-to-replace-underscore-to-dash-with-nginx Cheers, Thomas From francis at daoine.org Fri Jan 5 11:08:13 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 5 Jan 2018 11:08:13 +0000 Subject: IPv6 does not work correctly with nginx In-Reply-To: <910728718.1183010.1515114292561@mail.yahoo.com> References: <910728718.1183010.1515114292561.ref@mail.yahoo.com> <910728718.1183010.1515114292561@mail.yahoo.com> Message-ID:
<20180105110813.GL3127@daoine.org> On Fri, Jan 05, 2018 at 01:04:52AM +0000, Mik J via nginx wrote: Hi there, > I'm trying to finish configuring nginx for IPv6 > listen [::]:443 ssl; doesn't work, but listen [fc00:1:1::13]:443 ssl; works "listen [::]:443 ssl;" seems to work for me. What does "doesn't work" mean to you, specifically? What does the error log say? f -- Francis Daly francis at daoine.org From wade.girard at gmail.com Fri Jan 5 12:28:28 2018 From: wade.girard at gmail.com (Wade Girard) Date: Fri, 5 Jan 2018 06:28:28 -0600 Subject: 504 gateway timeouts In-Reply-To: References: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org> Message-ID: Hi Peter, Thank you. In my servlet I am making https requests to third party vendors to get data from them. The requests typically take 4~5 seconds, but every now and then one of the requests will take more than 60 seconds. So the connection from the client to nginx to tomcat will remain open, and at 60 seconds nginx is terminating the request to tomcat, even though the connection from the third party server to tomcat is still open. I am also working with the third party vendor to have them see why their connections sometimes take more than 60 seconds. Through googling I discovered that adding the settings proxy_send_timeout, proxy_read_timeout, proxy_connect_timeout, etc. to my location definition in my conf file could change the timeout to be different (higher) than the apparent default 60 second timeout. I use a Mac for development. I added these to my local conf file, and added the long connection request to test if the settings worked. They did. However they do not have the same effect for nginx installed on my production Ubuntu 16.x servers. I did not realize that these settings were limited by the OS that nginx is installed on. Are there similar settings that will work for the Ubuntu 16.x OS to achieve the same result?
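For reference, the timeout directives Wade lists take effect only in the server/location that actually handles the proxied request, and nginx must be reloaded afterwards; a minimal sketch of that placement (the backend address is an assumption, and the "s" suffix is optional since seconds are the default unit):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;   # assumed Tomcat address
    proxy_connect_timeout 600s;
    proxy_send_timeout    600s;
    proxy_read_timeout    600s;
    send_timeout          600s;
}
```

If a request still fails at exactly 60 seconds after a reload, it is worth checking whether a different server or location block is matching the request, since these directives only apply where they are defined or inherited, and also whether some other proxy in front of nginx imposes its own timeout.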
Wade On Fri, Jan 5, 2018 at 1:33 AM, Peter Booth wrote: > Wade, > > I think that you are asking "hey, why isn't nginx behaving identically on > MacOS and Linux when I create a servlet that invokes Thread.sleep(300000) > before it returns a response?" > > Am I reading you correctly? > > A flippant response would be to say: "because OS/X and Linux are different > OSes that behave differently". > > It would probably help us if you explained a little more about your test, > why the sleep is there and what your goals are? > > > Peter > > > On Jan 4, 2018, at 11:45 PM, Wade Girard wrote: > > I am not sure what is meant by this or what action you are asking me to > take. The settings, when added to the nginx conf file on the Mac OS server > (with nginx reloaded), take effect immediately and work as expected; the same > settings, when added to the nginx conf file on Ubuntu (with nginx reloaded), > have no effect at all. What steps can I take to have the proxy in nginx honor these > timeouts, or what other settings/actions can I take to make this work? > > Thanks > > On Thu, Jan 4, 2018 at 7:46 PM, Zhang Chao wrote: > >> > The version that is on the ubuntu servers was 1.10.xx. I just updated >> it to >> > >> > nginx version: nginx/1.13.8 >> > >> > And I am still having the same issue. >> > >> > How do I "Try to flush out some output early on so that nginx will know >> that Tomcat is alive." >> > >> > The nginx and tomcat connection is working fine for all >> requests/responses that take less t >> >> Maybe you can flush out the HTTP response headers quickly.
>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Wade Girard > c: 612.363.0902 <(612)%20363-0902> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Wade Girard c: 612.363.0902 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikydevel at yahoo.fr Fri Jan 5 13:23:47 2018 From: mikydevel at yahoo.fr (Mik J) Date: Fri, 5 Jan 2018 13:23:47 +0000 (UTC) Subject: IPv6 does not work correctly with nginx In-Reply-To: <20180105110813.GL3127@daoine.org> References: <910728718.1183010.1515114292561.ref@mail.yahoo.com> <910728718.1183010.1515114292561@mail.yahoo.com> <20180105110813.GL3127@daoine.org> Message-ID: <1906009710.1535342.1515158627308@mail.yahoo.com> Hello Francis, The port seems open but there is no SSL transaction. When I did a simple tcpdump capture I saw SYN, then SYN/ACK, then ACK. The browser displays an error that the site is not accessible. I forgot to say that I d-natted my IPv6 and the one I displayed is not a public IP. I was wondering if nginx treats it differently. On Friday, 5 January 2018 at 12:26:20 UTC+1, Francis Daly wrote: On Fri, Jan 05, 2018 at 01:04:52AM +0000, Mik J via nginx wrote: Hi there, > I'm trying to finish configuring nginx for ipv6 > listen [::]:443 ssl; doesn't work but listen [fc00:1:1::13]:443 ssl; works "listen [::]:443 ssl;" seems to work for me. What does "doesn't work" mean to you, specifically? What does the error log say? f -- Francis Daly francis at daoine.org -------------- next part -------------- An HTML attachment was scrubbed... 
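[For comparison with the two listen forms discussed in this thread, a minimal dual-stack sketch; the server_name and certificate paths are placeholders, not taken from the original posts.]

```nginx
# Sketch: accept HTTPS on all IPv4 and all IPv6 addresses.
server {
    listen 443 ssl;       # IPv4 wildcard
    listen [::]:443 ssl;  # IPv6 wildcard; a literal address such as
                          # [fc00:1:1::13]:443 binds only that one address
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
}
```

If the wildcard form accepts the TCP handshake but no TLS follows, the problem is usually outside nginx (NAT, routing, or firewall), which matches the tcpdump observation above.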
URL: From nginx-forum at forum.nginx.org Sat Jan 6 08:52:37 2018 From: nginx-forum at forum.nginx.org (kimown) Date: Sat, 06 Jan 2018 03:52:37 -0500 Subject: Where can I find nginScript shell Message-ID: $ njs interactive njscript v. -> the properties and prototype methods of v. type console.help() for more information >> var my_data = '{"colors":[{"name":"red","fancy":"brick dust"}, {"name":"blue","fancy":"sea spray"}]}'; >> var my_object = JSON.parse(my_data); >> my_object.colors[1].fancy; sea spray >> I found this in https://www.nginx.com/blog/nginx-plus-r14-released/ , but where can I download it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278033,278033#msg-278033 From maxim at nginx.com Sat Jan 6 16:58:16 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Sat, 6 Jan 2018 19:58:16 +0300 Subject: Where can I find nginScript shell In-Reply-To: References: Message-ID: Hello, On 06/01/2018 11:52, kimown wrote: > $ njs > interactive njscript > > v. -> the properties and prototype methods of v. > type console.help() for more information > > >>> var my_data = '{"colors":[{"name":"red","fancy":"brick dust"}, > {"name":"blue","fancy":"sea spray"}]}'; >>> var my_object = JSON.parse(my_data); >>> my_object.colors[1].fancy; > sea spray >>> > > > I found this in https://www.nginx.com/blog/nginx-plus-r14-released/ ,but > where can I download? > It is part of the njs distribution, see http://hg.nginx.org/njs/file/tip -- Maxim Konovalov From nginx-forum at forum.nginx.org Sun Jan 7 14:06:30 2018 From: nginx-forum at forum.nginx.org (kimown) Date: Sun, 07 Jan 2018 09:06:30 -0500 Subject: Where can I find nginScript shell In-Reply-To: References: Message-ID: I found the associated code, thanks for your help, but I'm not familiar with how to build the nginScript shell; I think it would be better to add instructions to the README. Also, nginScript is really awesome. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278033,278035#msg-278035 From nginx-forum at forum.nginx.org Sun Jan 7 19:44:53 2018 From: nginx-forum at forum.nginx.org (shivramg94) Date: Sun, 07 Jan 2018 14:44:53 -0500 Subject: Upgrading Nginx executable on the fly Message-ID: Hi, We have been trying to upgrade the Nginx binary on the fly by sending the USR2 signal to spawn a new set of master and worker processes with the new configuration. The question I have is: when this new master process is spawned, after issuing the USR2 signal to the existing (old) master process, what would be its parent process? Will it be the root process (1) or the old master process? We have been seeing different cases of this. In some cases we have seen the new master process' PPID is 1, whereas in other cases its PPID is that of the old master process. Could someone please explain this scenario in detail. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278038,278038#msg-278038 From zchao1995 at gmail.com Mon Jan 8 02:08:38 2018 From: zchao1995 at gmail.com (Zhang Chao) Date: Sun, 7 Jan 2018 18:08:38 -0800 Subject: Upgrading Nginx executable on the fly In-Reply-To: References: Message-ID: > We have been trying to upgrade the Nginx binary on the fly by sending the USR2 > signal to spawn a new set of master and worker processes with the new > configuration. The question I have is: when this new master process is > spawned, after issuing the USR2 signal to the existing (old) master > process, what would be its parent process? Will it be the root process (1) > or the old master process? > > We have been seeing different cases of this. In some cases we have seen the > new master process' PPID is 1, whereas in other cases its PPID is that of > the old master process. The new master's parent is the old master; after the old master quits, the new master's parent process will be the init process. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at keepschtum.win Mon Jan 8 10:38:56 2018 From: tom at keepschtum.win (Thomas Valentine) Date: Mon, 08 Jan 2018 23:38:56 +1300 Subject: OCSP stapling priming and logging Message-ID: <475561515407936@web45j.yandex.ru> An HTML attachment was scrubbed... URL: From Adam.Cecile at hitec.lu Mon Jan 8 12:37:41 2018 From: Adam.Cecile at hitec.lu (Cecile, Adam) Date: Mon, 8 Jan 2018 12:37:41 +0000 Subject: upstream (tcp stream mode) doesn't detect connecton failure Message-ID: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> Hello, I'm using this quite complicated setup involving SNI routing and proxy_protocol, but I'm stuck on something. Here is the configuration file: http://paste.debian.net/hidden/62e13f9c/ Routing, proxy_protocol, and logging are working just fine; the only (quite critical) issue is that the "mag" upstream doesn't see connection failures and does not switch to the second server. In the mag.log file I just see: 98.98.98.98 [08/Jan/2018:10:56:10 +0100] proxying to "mag":10.0.0.1:443 TCP 500 0 239 1.01 But instead of blacklisting this server and moving to 10.0.0.2, I receive a connection closed error on the client. Thanks in advance for your help, Best regards, Adam. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jan 8 13:28:25 2018 From: nginx-forum at forum.nginx.org (floriante) Date: Mon, 08 Jan 2018 08:28:25 -0500 Subject: What does send_timeout? Message-ID: <45b5baee6cd47bc8f2b803183f825b76.NginxMailingListEnglish@forum.nginx.org> I read the documentation but didn't quite get it. What does send_timeout actually do? Can you please explain it to me with an example? 
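[For what it's worth, send_timeout limits the time between two successive write operations to the client, not the transmission of the whole response; if the client accepts nothing within that window, nginx closes the connection. A minimal sketch with an illustrative value:]

```nginx
# Sketch: if a slow client accepts no data for 30 seconds between two
# successive writes, nginx closes the connection (the default is 60s).
server {
    listen 80;
    send_timeout 30s;
}
```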
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278042,278042#msg-278042 From nginx-forum at forum.nginx.org Tue Jan 9 06:04:22 2018 From: nginx-forum at forum.nginx.org (devkuamr) Date: Tue, 09 Jan 2018 01:04:22 -0500 Subject: Redirect request based on request type in case of error Message-ID: <8209123413931ae26abdc2dac413a09f.NginxMailingListEnglish@forum.nginx.org> Hi, I want to redirect GET requests to another server in case of errors. I am using the below configuration, but nginx has a problem starting with this configuration.

worker_processes 8;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    include map.txt;
    default_type application/octet-stream;
    sendfile on;

    upstream 271889976 {
        server 172.198.1.19:8080;
    }
    upstream 271889979 {
        server 172.198.1.18:8080;
    }
    upstream backup {
        server 172.198.1.20:8080;
    }

    server {
        listen 8080;
        server_name localhost;
        proxy_intercept_errors on;
        error_page 502 @redirect;
        proxy_read_timeout 10s;

        location / {
            proxy_pass http://$newhost$request_uri;
            proxy_hide_header store;
        }

        location @redirect {
            if ($request_method = GET) {
                proxy_pass http://backup$request_uri;
                break;
            }
            if ($request_method = POST) {
                return upstream_status;
            }
        }
    }

    include servers/*;
}

I am getting "invalid return code "upstream_status"" with the above configuration file. For POST requests I want the same error returned by the server, and in the case of GET I want the request to be redirected to the "backup" upstream. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278044,278044#msg-278044 From nginx-forum at forum.nginx.org Tue Jan 9 13:40:30 2018 From: nginx-forum at forum.nginx.org (ThanksDude) Date: Tue, 09 Jan 2018 08:40:30 -0500 Subject: Convert .htaccess to nginx rules Message-ID: <942544c0ac2e5515f6fa6b0f2db38865.NginxMailingListEnglish@forum.nginx.org> Hey guys, I tried the tools and they didn't work for me. Can you please help me convert this to nginx rules? 
RewriteEngine On #RewriteCond %{HTTPS} off #RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] #RewriteCond %{HTTP_HOST} !^www\. #RewriteRule .* http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301] Options +FollowSymLinks Options -Indexes RewriteCond %{SCRIPT_FILENAME} !-d RewriteCond %{SCRIPT_FILENAME} !-f RewriteRule . index.php [L,QSA] # Performace optimization # BEGIN Compress text files AddOutputFilterByType DEFLATE text/html text/xml text/css text/plain AddOutputFilterByType DEFLATE image/svg+xml application/xhtml+xml application/xml AddOutputFilterByType DEFLATE application/rdf+xml application/rss+xml application/atom+xml AddOutputFilterByType DEFLATE text/javascript application/javascript application/x-javascript application/json AddOutputFilterByType DEFLATE application/x-font-ttf application/x-font-otf AddOutputFilterByType DEFLATE font/truetype font/opentype # END Compress text files # BEGIN Expire headers ExpiresActive On ExpiresDefault "access plus 5 seconds" ExpiresByType image/x-icon "access plus 31536000 seconds" ExpiresByType image/jpeg "access plus 31536000 seconds" ExpiresByType image/png "access plus 31536000 seconds" ExpiresByType image/gif "access plus 31536000 seconds" ExpiresByType application/x-shockwave-flash "access plus 31536000 seconds" ExpiresByType text/css "access plus 31536000 seconds" ExpiresByType text/javascript "access plus 31536000 seconds" ExpiresByType application/javascript "access plus 31536000 seconds" ExpiresByType application/x-javascript "access plus 31536000 seconds" # END Expire headers # BEGIN Cache-Control Headers Header set Cache-Control "public" Header set Cache-Control "public" Header set Cache-Control "private" Header set Cache-Control "private, must-revalidate" Header set Cache-Control "max-age=31536000 private, must-revalidate" # END Cache-Control Headers Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278046,278046#msg-278046 From mdounin at mdounin.ru Tue Jan 9 13:46:28 2018 From: mdounin at 
mdounin.ru (Maxim Dounin) Date: Tue, 9 Jan 2018 16:46:28 +0300 Subject: upstream (tcp stream mode) doesn't detect connecton failure In-Reply-To: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> References: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> Message-ID: <20180109134628.GC34136@mdounin.ru> Hello! On Mon, Jan 08, 2018 at 12:37:41PM +0000, Cecile, Adam wrote: > Hello, > > > I'm using this quite complicated setup involving SNI routing and proxy_protocol but I'm stuck on something. > > > Here is the configuration file: > > http://paste.debian.net/hidden/62e13f9c/ > > > Routing, proxy_protocol, logging stuff is working just fine, the only (quite critical issue) is that the "mag" upstream doesn't see connection failures and does not switch to the second server. > > > In the mag.log file I just see: > > 98.98.98.98 [08/Jan/2018:10:56:10 +0100] proxying to "mag":10.0.0.1:443 TCP 500 0 239 1.01 > > > But instead of blacklisting this server and moving to 10.0.0.2 I receive a connection closed error on the client. As far as I understand your configuration, you have two stream proxy layers: 1. The first one uses ssl_preread to obtain SNI name and tries to do some routing based on it. This layer also adds to the PROXY protocol to backend connections. 2. The second one strips PROXY protocol header. The problem with "upstream doesn't see connection failures" is because connection failures are only seen at the second layer (the log line above belongs to the second layer). The first layer will only see a connection close, and it won't know if there was an error or not. Also note: - You use $proxy_protocol_addr in the "upstream mag {...}" block, but the upstream block is used only in the first layer, where $proxy_protocol_addr won't be available according to your configuration. 
- You use $name in the logs of the second layer. It will always point to "map", as there is no ssl_preread in the second layer, hence $ssl_preread_server_name will be not available. Depending on what you actually want to achieve, the most straightforward solution might be to actually remove the second proxy layer. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jan 9 14:02:25 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Jan 2018 17:02:25 +0300 Subject: OCSP stapling priming and logging In-Reply-To: <475561515407936@web45j.yandex.ru> References: <475561515407936@web45j.yandex.ru> Message-ID: <20180109140225.GD34136@mdounin.ru> Hello! On Mon, Jan 08, 2018 at 11:38:56PM +1300, Thomas Valentine wrote: > I've spent a bit of time setting up my server with SSL, and checking > for OCSP stapling to be working - couldn't work out why it wasn't > sending the OCSP reply but it's as I was querying the server as the > first hit before it had primed the response. This isn't mentioned in > the online docs as to how it actually works. There is also nothing in > the logs saying what is going on - unless using debug mode. > > Perhaps within ngx_http_ssl_module.c something could be added to log > when an OCSP query takes place (without requiring a debug log). OCSP requests are expected to happen on regular basis when OCSP Stapling is enabled, and logging them all to the error log might not be a good idea. Rather, it logs if there are any errors. > I assume at some point in the past the option to prime the server has > been considered and not implemented? I know a server script could be > written to do this - perhaps within an nginx startup - and get nginx to > use the ssl_stapling_file but this seems messy. OCSP Stapling is an optimization, and nothing breaks if it doesn't work. 
You don't need to prime anything (unless you are using the "Must Staple" certificate extension, which is completely different story and wasn't even existed when OCSP Stapling was implemented in nginx). You may also find these tickets interesting: https://trac.nginx.org/nginx/ticket/1413 https://trac.nginx.org/nginx/ticket/990 https://trac.nginx.org/nginx/ticket/812 -- Maxim Dounin http://mdounin.ru/ From anoopalias01 at gmail.com Tue Jan 9 14:25:06 2018 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 9 Jan 2018 19:55:06 +0530 Subject: Convert .htaccess to nginx rules In-Reply-To: <942544c0ac2e5515f6fa6b0f2db38865.NginxMailingListEnglish@forum.nginx.org> References: <942544c0ac2e5515f6fa6b0f2db38865.NginxMailingListEnglish@forum.nginx.org> Message-ID: try_files $uri $uri/ /index.php; should work On Tue, Jan 9, 2018 at 7:10 PM, ThanksDude wrote: > hey guys > > I tried the tools and it didn't worked for me. > can u guys pls help me convert this to a nginx rules? > > > RewriteEngine On > > #RewriteCond %{HTTPS} off > #RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] > > #RewriteCond %{HTTP_HOST} !^www\. > #RewriteRule .* http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301] > > Options +FollowSymLinks > Options -Indexes > > RewriteCond %{SCRIPT_FILENAME} !-d > RewriteCond %{SCRIPT_FILENAME} !-f > RewriteRule . 
index.php [L,QSA] > > > > # Performace optimization > > # BEGIN Compress text files > > AddOutputFilterByType DEFLATE text/html text/xml text/css text/plain > AddOutputFilterByType DEFLATE image/svg+xml application/xhtml+xml > application/xml > AddOutputFilterByType DEFLATE application/rdf+xml application/rss+xml > application/atom+xml > AddOutputFilterByType DEFLATE text/javascript application/javascript > application/x-javascript application/json > AddOutputFilterByType DEFLATE application/x-font-ttf > application/x-font-otf > AddOutputFilterByType DEFLATE font/truetype font/opentype > > # END Compress text files > > # BEGIN Expire headers > > ExpiresActive On > ExpiresDefault "access plus 5 seconds" > ExpiresByType image/x-icon "access plus 31536000 seconds" > ExpiresByType image/jpeg "access plus 31536000 seconds" > ExpiresByType image/png "access plus 31536000 seconds" > ExpiresByType image/gif "access plus 31536000 seconds" > ExpiresByType application/x-shockwave-flash "access plus 31536000 > seconds" > ExpiresByType text/css "access plus 31536000 seconds" > ExpiresByType text/javascript "access plus 31536000 seconds" > ExpiresByType application/javascript "access plus 31536000 seconds" > ExpiresByType application/x-javascript "access plus 31536000 seconds" > > # END Expire headers > > # BEGIN Cache-Control Headers > > > Header set Cache-Control "public" > > > Header set Cache-Control "public" > > > Header set Cache-Control "private" > > > Header set Cache-Control "private, must-revalidate" > > > > Header set Cache-Control "max-age=31536000 private, must-revalidate" > > > # END Cache-Control Headers > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,278046,278046#msg-278046 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
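[To complement the try_files suggestion above, a hedged sketch of nginx equivalents for the rest of the .htaccess (compression and expiry headers); the server_name, root path, and concrete type lists are illustrative assumptions, not taken from the original post.]

```nginx
# Sketch of nginx equivalents for the .htaccess rules quoted above.
server {
    listen 80;
    server_name example.com;        # placeholder
    root /var/www/html;             # placeholder
    index index.php;

    # RewriteCond !-d / !-f + RewriteRule . index.php [L,QSA]
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    # AddOutputFilterByType DEFLATE ... (text/html is compressed by default)
    gzip on;
    gzip_types text/css text/plain text/xml application/javascript
               application/json application/rss+xml image/svg+xml;

    # ExpiresByType ... "access plus 31536000 seconds"
    location ~* \.(ico|jpe?g|png|gif|swf|css|js)$ {
        expires 1y;   # also emits a matching Cache-Control: max-age header
    }
}
```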
URL: From xeioex at nginx.com Tue Jan 9 17:01:47 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 9 Jan 2018 20:01:47 +0300 Subject: Where can I find nginScript shell In-Reply-To: References: Message-ID: On 07.01.2018 17:06, kimown wrote: > I find the associated code, thanks for your help, but I'm not familiar with > how to building the nginScript shell, I think it's better add instruction in > README.Also, nginScript is really really awesome. > It is available as a part of nginx official packages. Just follow http://nginx.org/en/linux_packages.html#mainline NJS CLI is included as a part of nginx-module-njs package. From wade.girard at gmail.com Tue Jan 9 18:52:58 2018 From: wade.girard at gmail.com (Wade Girard) Date: Tue, 9 Jan 2018 12:52:58 -0600 Subject: 504 gateway timeouts In-Reply-To: References: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org> Message-ID: Hi nginx group, If anyone has any ides on this, they would be appreciated. Thanks On Fri, Jan 5, 2018 at 6:28 AM, Wade Girard wrote: > Hi Peter, > > Thank You. > > In my servlet I am making https requests to third party vendors to get > data from them. The requests typically take 4~5 seconds, but every now any > then one of the requests will take more than 60 seconds. So the connection > from the client to nginx to tomcat will remain open, and at 60 seconds > nginx is terminating the request to tomcat, even though the connection from > the third party server to tomcat is still open. > > I am also working with the third party vendor to have them see why their > connections sometimes take more than 60 seconds. > > Through googling I discovered that adding the settings proxy_send_timeout, > proxy_read_timeout, proxy_connection_timeout, etc... to my location > definition in my conf file could change the timeout to be different > (higher) than the apparent default 60 second timeout. I use a Mac for > development. 
I added these to my local conf file, and added the long > connection request to test if the settings worked. They did. However they > do not have the same effect for nginx installed on my production Ubuntu > 16.x servers. I did not realize that these settings were limited by the OS > that nginx is installed on. Are there are similar settings that will work > for the Ubuntu 16.x OS to achieve the same result? > > Wade > > On Fri, Jan 5, 2018 at 1:33 AM, Peter Booth wrote: > >> Wade, >> >> I think that you are asking ?hey why isn?t nginx behaving identically on >> MacOS and Linux when create a servlet that invokes Thread.sleep(300000) >> before it returns a response?*.?* >> >> Am I reading you correctly? >> >> A flippant response would be to say: ?because OS/X and Linux are >> different OSes that behave differently? >> >> It would probably help us if you explained a little more about your test, >> why the sleep is there and what your goals are? >> >> >> Peter >> >> >> On Jan 4, 2018, at 11:45 PM, Wade Girard wrote: >> >> I am not sure what is meant by this or what action you are asking me to >> take. The settings, when added to nginx conf file on Mac OS server and >> nginx reloaded take effect immediately and work as expected, the same >> settings when added to nginx conf file on Ubuntu and nginx reloaded have no >> effect at all. What steps can I take to have the proxy in nginx honor these >> timeouts, or what other settings/actions can I take to make this work? >> >> Thanks >> >> On Thu, Jan 4, 2018 at 7:46 PM, Zhang Chao wrote: >> >>> > The version that is on the ubuntu servers was 1.10.xx. I just updated >>> it to >>> > >>> > nginx version: nginx/1.13.8 >>> > >>> > And I am still having the same issue. >>> > >>> > How do I "Try to flush out some output early on so that nginx will >>> know that Tomcat is alive." 
>>> > >>> > The nginx and tomcat connection is working fine for all >>> requests/responses that take less t >>> >>> Maybe you can flush out the HTTP response headers quickly. >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> Wade Girard >> c: 612.363.0902 <(612)%20363-0902> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Wade Girard > c: 612.363.0902 <(612)%20363-0902> > -- Wade Girard c: 612.363.0902 -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Tue Jan 9 20:56:19 2018 From: peter_booth at me.com (Peter Booth) Date: Tue, 09 Jan 2018 15:56:19 -0500 Subject: 504 gateway timeouts In-Reply-To: References: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org> Message-ID: Wade, This reminds me of something I once saw with an application that was making web service requests to FedEx. So are you saying that the response times are bimodal? That you either get a remote response within a few seconds or the request takes more than 60 seconds, and that you have no 20sec,30sec,40sec requests? And, if so, do those 60+ sec requests ever get a healthy response? Sent from my iPhone > On Jan 9, 2018, at 1:52 PM, Wade Girard wrote: > > Hi nginx group, > > If anyone has any ides on this, they would be appreciated. > > Thanks > >> On Fri, Jan 5, 2018 at 6:28 AM, Wade Girard wrote: >> Hi Peter, >> >> Thank You. >> >> In my servlet I am making https requests to third party vendors to get data from them. The requests typically take 4~5 seconds, but every now any then one of the requests will take more than 60 seconds. 
So the connection from the client to nginx to tomcat will remain open, and at 60 seconds nginx is terminating the request to tomcat, even though the connection from the third party server to tomcat is still open. >> >> I am also working with the third party vendor to have them see why their connections sometimes take more than 60 seconds. >> >> Through googling I discovered that adding the settings proxy_send_timeout, proxy_read_timeout, proxy_connection_timeout, etc... to my location definition in my conf file could change the timeout to be different (higher) than the apparent default 60 second timeout. I use a Mac for development. I added these to my local conf file, and added the long connection request to test if the settings worked. They did. However they do not have the same effect for nginx installed on my production Ubuntu 16.x servers. I did not realize that these settings were limited by the OS that nginx is installed on. Are there are similar settings that will work for the Ubuntu 16.x OS to achieve the same result? >> >> Wade >> >>> On Fri, Jan 5, 2018 at 1:33 AM, Peter Booth wrote: >>> Wade, >>> >>> I think that you are asking ?hey why isn?t nginx behaving identically on MacOS and Linux when create a servlet that invokes Thread.sleep(300000) before it returns a response?.? >>> >>> Am I reading you correctly? >>> >>> A flippant response would be to say: ?because OS/X and Linux are different OSes that behave differently? >>> >>> It would probably help us if you explained a little more about your test, why the sleep is there and what your goals are? >>> >>> >>> Peter >>> >>> >>>> On Jan 4, 2018, at 11:45 PM, Wade Girard wrote: >>>> >>>> I am not sure what is meant by this or what action you are asking me to take. The settings, when added to nginx conf file on Mac OS server and nginx reloaded take effect immediately and work as expected, the same settings when added to nginx conf file on Ubuntu and nginx reloaded have no effect at all. 
What steps can I take to have the proxy in nginx honor these timeouts, or what other settings/actions can I take to make this work? >>>> >>>> Thanks >>>> >>>>> On Thu, Jan 4, 2018 at 7:46 PM, Zhang Chao wrote: >>>>> > The version that is on the ubuntu servers was 1.10.xx. I just updated it to >>>>> > >>>>> > nginx version: nginx/1.13.8 >>>>> > >>>>> > And I am still having the same issue. >>>>> > >>>>> > How do I "Try to flush out some output early on so that nginx will know that Tomcat is alive." >>>>> > >>>>> > The nginx and tomcat connection is working fine for all requests/responses that take less t >>>>> >>>>> Maybe you can flush out the HTTP response headers quickly. >>>>> >>>>> >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> >>>> >>>> -- >>>> Wade Girard >>>> c: 612.363.0902 >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> -- >> Wade Girard >> c: 612.363.0902 > > > > -- > Wade Girard > c: 612.363.0902 -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam.cecile at hitec.lu Tue Jan 9 22:48:54 2018 From: adam.cecile at hitec.lu (Adam Cecile) Date: Tue, 9 Jan 2018 23:48:54 +0100 Subject: upstream (tcp stream mode) doesn't detect connecton failure In-Reply-To: <20180109134628.GC34136@mdounin.ru> References: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> <20180109134628.GC34136@mdounin.ru> Message-ID: On 01/09/2018 02:46 PM, Maxim Dounin wrote: > Hello! 
> > On Mon, Jan 08, 2018 at 12:37:41PM +0000, Cecile, Adam wrote: > >> Hello, >> >> >> I'm using this quite complicated setup involving SNI routing and proxy_protocol but I'm stuck on something. >> >> >> Here is the configuration file: >> >> http://paste.debian.net/hidden/62e13f9c/ >> >> >> Routing, proxy_protocol, logging stuff is working just fine, the only (quite critical issue) is that the "mag" upstream doesn't see connection failures and does not switch to the second server. >> >> >> In the mag.log file I just see: >> >> 98.98.98.98 [08/Jan/2018:10:56:10 +0100] proxying to "mag":10.0.0.1:443 TCP 500 0 239 1.01 >> >> >> But instead of blacklisting this server and moving to 10.0.0.2 I receive a connection closed error on the client. > As far as I understand your configuration, you have two stream > proxy layers: > > 1. The first one uses ssl_preread to obtain SNI name and tries to > do some routing based on it. This layer also adds to the PROXY > protocol to backend connections. > > 2. The second one strips PROXY protocol header. > > The problem with "upstream doesn't see connection failures" is > because connection failures are only seen at the second layer (the > log line above belongs to the second layer). The first layer will > only see a connection close, and it won't know if there was an > error or not. > > Also note: > > - You use $proxy_protocol_addr in the "upstream mag {...}" block, > but the upstream block is used only in the first layer, where > $proxy_protocol_addr won't be available according to your > configuration. > > - You use $name in the logs of the second layer. It will always > point to "map", as there is no ssl_preread in the second layer, > hence $ssl_preread_server_name will be not available. > > Depending on what you actually want to achieve, the most > straightforward solution might be to actually remove the second > proxy layer. 
Hello, The proxy protocol was used for the "non-stream" routing on SNI when forwarding to nginx itself as "local_https". At this point it's using a regular https vhost; that's why I added proxy_protocol, to easily be able to extract the original client address. The aim of the two servers on 8080 and 8181 is only to strip proxy_protocol before going to upstream mag. I'd be happy to remove them, but if I do that I need a way to strip out proxy_protocol inside the "upstream mag" block. Is that possible? Thanks a lot, Adam. From arozyev at nginx.com Wed Jan 10 00:25:29 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Wed, 10 Jan 2018 03:25:29 +0300 Subject: 504 gateway timeouts In-Reply-To: References: <7608c451-ae57-ed45-cf5c-139bb756300e@lucee.org> Message-ID: <9D2EEE3D-694C-4E98-93B8-8F99A5AE85AC@nginx.com> Hi Wade, At least provide the access/error log fragments; curl -ivvv <..> output directly to the 3rd-party service and via nginx, and jmeter output (if you use that) would make sense. Also, it would be nice to compare the nginx configurations from the mac and linux. Currently it's barely possible to conclude anything relevant about the issue; having exactly the same nginx configurations and getting different results looks rather strange. Hint: change the "error_log" directive's level to "info", check the error/access logs, gather a tcpdump. And if that doesn't make the problem clearer, start nginx in debug mode. This article should be useful: https://www.nginx.com/resources/admin-guide/debug/ br, Aziz. > On 9 Jan 2018, at 23:56, Peter Booth wrote: > > Wade, > > This reminds me of something I once saw with an application that was making web service requests to FedEx. So are you saying that the response times are bimodal? That you either get a remote response within a few seconds or the request takes more than 60 seconds, and that you have no 20sec,30sec,40sec requests? > > And, if so, do those 60+ sec requests ever get a healthy response? 
> > > Sent from my iPhone > > On Jan 9, 2018, at 1:52 PM, Wade Girard wrote: > >> Hi nginx group, >> >> If anyone has any ideas on this, they would be appreciated. >> >> Thanks >> >> On Fri, Jan 5, 2018 at 6:28 AM, Wade Girard wrote: >> Hi Peter, >> >> Thank You. >> >> In my servlet I am making https requests to third party vendors to get data from them. The requests typically take 4~5 seconds, but every now and then one of the requests will take more than 60 seconds. So the connection from the client to nginx to tomcat will remain open, and at 60 seconds nginx is terminating the request to tomcat, even though the connection from the third party server to tomcat is still open. >> >> I am also working with the third party vendor to have them see why their connections sometimes take more than 60 seconds. >> >> Through googling I discovered that adding the settings proxy_send_timeout, proxy_read_timeout, proxy_connect_timeout, etc... to my location definition in my conf file could change the timeout to be different (higher) than the apparent default 60 second timeout. I use a Mac for development. I added these to my local conf file, and added the long connection request to test if the settings worked. They did. However they do not have the same effect for nginx installed on my production Ubuntu 16.x servers. I did not realize that these settings were limited by the OS that nginx is installed on. Are there similar settings that will work for the Ubuntu 16.x OS to achieve the same result? >> >> Wade >> >> On Fri, Jan 5, 2018 at 1:33 AM, Peter Booth wrote: >> Wade, >> >> I think that you are asking "hey, why isn't nginx behaving identically on MacOS and Linux when I create a servlet that invokes Thread.sleep(300000) before it returns a response?" >> >> Am I reading you correctly? >> >> A flippant response would be to say: "because OS/X and Linux are different OSes that behave differently"
>> >> It would probably help us if you explained a little more about your test, why the sleep is there and what your goals are? >> >> >> Peter >> >> >>> On Jan 4, 2018, at 11:45 PM, Wade Girard wrote: >>> >>> I am not sure what is meant by this or what action you are asking me to take. The settings, when added to nginx conf file on Mac OS server and nginx reloaded take effect immediately and work as expected, the same settings when added to nginx conf file on Ubuntu and nginx reloaded have no effect at all. What steps can I take to have the proxy in nginx honor these timeouts, or what other settings/actions can I take to make this work? >>> >>> Thanks >>> >>> On Thu, Jan 4, 2018 at 7:46 PM, Zhang Chao wrote: >>> > The version that is on the ubuntu servers was 1.10.xx. I just updated it to >>> > >>> > nginx version: nginx/1.13.8 >>> > >>> > And I am still having the same issue. >>> > >>> > How do I "Try to flush out some output early on so that nginx will know that Tomcat is alive." >>> > >>> > The nginx and tomcat connection is working fine for all requests/responses that take less t >>> >>> Maybe you can flush out the HTTP response headers quickly. 
>>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> -- >>> Wade Girard >>> c: 612.363.0902 From nginx-forum at forum.nginx.org Wed Jan 10 02:55:17 2018 From: nginx-forum at forum.nginx.org (ThanksDude) Date: Tue, 09 Jan 2018 21:55:17 -0500 Subject: Convert .htaccess to nginx rules In-Reply-To: References: Message-ID: Thanks @Anoop Alias However I tried it and unfortunately it didn't work. What could the reason be? I'm running the latest nginx (1.13.8) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278046,278060#msg-278060 From nginx-forum at forum.nginx.org Wed Jan 10 08:15:05 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Wed, 10 Jan 2018 03:15:05 -0500 Subject: Secure Link Expires - URL Signing Message-ID: URL signing by secure link MD5 restricts the client from accessing the secured object for a limited time, using the module below. The expiry time is sent as a query parameter from the client device: secure_link $arg_hash,$arg_exp; secure_link_md5 "secret$arg_exp"; if ($secure_link = "") {return 405;} if ($secure_link = "0") {return 410;} The problem here is that if the expiry time (exp) sent by the client is less than the server time, the nginx module returns 410.
But if a client changes the device time to some future date and requests the object, the object will still be delivered, since the client time will be greater than the server time. Is there a way for the secure link module to restrict requests with a future time, so that for example the object is accessible only for 1 hour from the current time? Please suggest. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278063,278063#msg-278063 From francis at daoine.org Wed Jan 10 08:32:10 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Jan 2018 08:32:10 +0000 Subject: IPv6 does not work correctly with nginx In-Reply-To: <1906009710.1535342.1515158627308@mail.yahoo.com> References: <910728718.1183010.1515114292561.ref@mail.yahoo.com> <910728718.1183010.1515114292561@mail.yahoo.com> <20180105110813.GL3127@daoine.org> <1906009710.1535342.1515158627308@mail.yahoo.com> Message-ID: <20180110083210.GM3127@daoine.org> On Fri, Jan 05, 2018 at 01:23:47PM +0000, Mik J via nginx wrote: Hi there, I don't have a direct solution to the issue you report. I do have a few things to try, which might help isolate where the problem is (and therefore where the fix should be). > The port seems open but there is no ssl transaction. When I did a simple tcpdump capture I saw syn, then syn/ack, then ack. The browser displays an error that the site is not accessible. Can you compare this tcpdump with the start of a tcpdump of a working connection (when you have told nginx to listen on a dedicated IP:port)? Perhaps that will show which part of the communication fails. (If you can tcpdump on both the client and server, maybe that will show if something is lost in the network.) Do you see the same problem if you omit ssl? If so, that might make it easier to test manually. If not, that's probably useful information.
> I forgot to say that I d-natted my IPv6 and the one I displayed is not a public IP. I was wondering if nginx treats it differently nginx should not care; something outside of nginx might care. If you make a "curl" request from the nginx machine to itself, do you see the problem? And - if you omit nginx and just use a tcp listener (such as netcat) as the server, do you see a similar problem? Good luck with it, f -- Francis Daly francis at daoine.org From mohit3081989 at gmail.com Wed Jan 10 08:45:19 2018 From: mohit3081989 at gmail.com (mohit Agrawal) Date: Wed, 10 Jan 2018 14:15:19 +0530 Subject: Nginx error log parser Message-ID: Hi, I am looking to parse the nginx error log so as to find out which particular IP is throttled during a specific period of time by connection throttling / request throttling. The format looks like: 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" And the sample that I am looking for is: {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone "rl_conn""} so that I can pass it through the ELK stack and find out the root IP which is causing the issue. -- Mohit Agrawal -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Jan 10 08:47:49 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Jan 2018 08:47:49 +0000 Subject: Secure Link Expires - URL Signing In-Reply-To: References: Message-ID: <20180110084749.GN3127@daoine.org> On Wed, Jan 10, 2018 at 03:15:05AM -0500, anish10dec wrote: > URL Signing by Secure Link MD5 , restricts the client from accessing the > secured object for limited time using below module It can, if you configure it to.
> Exp time is sent as query parameter from client device > > secure_link $arg_hash,$arg_exp; > secure_link_md5 "secret$arg_exp"; http://nginx.org/r/secure_link_md5 says """ The expression should contain the secured part of a link (resource) and a secret ingredient. If the link has a limited lifetime, the expression should also contain $secure_link_expires. """ You appear to have included only one of the three pieces. > Is there a way to restrict the request, by secure link module, to future > time so that for example object should be accessible only for 1 hour time > duration from current time. Yes. Create the link and do the configuration like the documentation suggests. If it still does not work for you, can you show all of the steps that you take to secure one specific url? That might make it clear where the problem first appears. (Maybe the documentation needs changing.) f -- Francis Daly francis at daoine.org From arozyev at nginx.com Wed Jan 10 09:34:54 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Wed, 10 Jan 2018 12:34:54 +0300 Subject: Nginx error log parser In-Reply-To: References: Message-ID: <86062192-3948-4703-9B2A-0E47427B2860@nginx.com> Is the 'log_format json' what you're asking for? http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format br, Aziz. > On 10 Jan 2018, at 11:45, mohit Agrawal wrote: > > Hi , > > I am looking to parse nginx error log so as to find out which particular IP is throttled during specific amount of time on connection throttling / request throttling.
The format looks like : > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > And the sample that I am looking for is : > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone "rl_conn""} > so that I can pass it through ELK stack and find out the root ip which is causing issue. > > > -- > Mohit Agrawal > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From shahzaib.cb at gmail.com Wed Jan 10 09:41:27 2018 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 10 Jan 2018 14:41:27 +0500 Subject: Country Based vhost include !! Message-ID: Hi, We want to include virtual hosts based on the country, such as: if the country is "mycountry", include virtual.conf; otherwise include default.conf. However, we tried to achieve that using 'if', but it doesn't support the 'include' directive. Is there any other way of achieving this goal? Thanks. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From arozyev at nginx.com Wed Jan 10 09:56:52 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Wed, 10 Jan 2018 12:56:52 +0300 Subject: Nginx error log parser In-Reply-To: References: Message-ID: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> btw, after re-reading your question, it looks like you need something like a logstash grok filter. br, Aziz. > On 10 Jan 2018, at 11:45, mohit Agrawal wrote: > > Hi , > > I am looking to parse nginx error log so as to find out which particular IP is throttled during specific amount of time on connection throttling / request throttling.
The format looks like : > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > And the sample that I am looking for is : > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone "rl_conn""} > so that I can pass it through ELK stack and find out the root ip which is causing issue. > > > -- > Mohit Agrawal > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Wed Jan 10 11:18:50 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Jan 2018 11:18:50 +0000 Subject: Redirect request based on request type in case of error In-Reply-To: <8209123413931ae26abdc2dac413a09f.NginxMailingListEnglish@forum.nginx.org> References: <8209123413931ae26abdc2dac413a09f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180110111850.GO3127@daoine.org> On Tue, Jan 09, 2018 at 01:04:22AM -0500, devkuamr wrote: Hi there, > I want to redirect GET request to another server in case of errors. I am > using the below configuration but nginx is having problem while starting > with this configuration. > if ($request_method = POST){ > return upstream_status; > } > I am getting "invalid return code "upstream_status"" with above > configuration file. I don't immediately know about the rest of it, but I suspect there's a good chance that you want $upstream_status instead of upstream_status. 
f -- Francis Daly francis at daoine.org From mohit3081989 at gmail.com Wed Jan 10 11:37:26 2018 From: mohit3081989 at gmail.com (mohit Agrawal) Date: Wed, 10 Jan 2018 17:07:26 +0530 Subject: Nginx error log parser In-Reply-To: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> References: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> Message-ID: Hi Aziz, log_format directive only provides formatting for access log, I am looking to format error.log which doesn't take log_format directive. Above example that I gave is just for nginx error logs. Thanks On 10 January 2018 at 15:26, Aziz Rozyev wrote: > btw, after re-reading the your questing, it looks like you need something > like logstash grok filter. > > br, > Aziz. > > > > > > > On 10 Jan 2018, at 11:45, mohit Agrawal wrote: > > > > Hi , > > > > I am looking to parse nginx error log so as to find out which particular > IP is throttled during specific amount of time on connection throttling / > request throttling. The format looks like : > > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections > by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: > "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > > And the sample that I am looking for is : > > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", > "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone > "rl_conn""} > > so that I can pass it through ELK stack and find out the root ip which > is causing issue. > > > > > > -- > > Mohit Agrawal > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Mohit Agrawal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ok at h09.org Wed Jan 10 11:37:49 2018 From: ok at h09.org (Otto Kucera) Date: Wed, 10 Jan 2018 12:37:49 +0100 Subject: NTLM Message-ID: Hi all, I am testing ntlm for a reverse proxy scenario. Info: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm this is my config: upstream http_backend { server 127.0.0.1:8080; ntlm; } server { listen 443; ... location /http/ { proxy_pass http://http_backend; proxy_http_version 1.1; proxy_set_header Connection ""; ... } } I always get this error: nginx: [emerg] unknown directive "ntlm" in /etc/nginx/conf.d/test.conf:4 This is my version: nginx version: nginx/1.12.2 What am I doing wrong? Since version 1.9.2 this option should be possible. Thanks, Otto -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Wed Jan 10 11:39:48 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Wed, 10 Jan 2018 11:39:48 +0000 Subject: NTLM In-Reply-To: References: Message-ID: It's only available for nginx-plus. Get Outlook for iOS ________________________________ From: nginx on behalf of Otto Kucera Sent: Wednesday, January 10, 2018 12:37:49 PM To: nginx at nginx.org Subject: NTLM Hi all, I am testing ntlm for a reverse proxy scenario. Info: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm this is my config: upstream http_backend { server 127.0.0.1:8080; ntlm; } server { listen 443; ... location /http/ { proxy_pass http://http_backend; proxy_http_version 1.1; proxy_set_header Connection ""; ... } } I always get this error: nginx: [emerg] unknown directive "ntlm" in /etc/nginx/conf.d/test.conf:4 This is my version: nginx version: nginx/1.12.2 What am I doing wrong? Since version 1.9.2 this option should be possible. Thanks, Otto -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maxim at nginx.com Wed Jan 10 11:41:54 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 10 Jan 2018 14:41:54 +0300 Subject: NTLM In-Reply-To: References: Message-ID: <8713af7d-ac97-54bc-cbaf-a1f515f45d03@nginx.com> Hi Otto, On 10/01/2018 14:37, Otto Kucera wrote: > Hi all, > > > I am testing ntlm for a reverse proxy secanrio. [...] > I always get this error: > > nginx: [emerg] *unknown directive "ntlm"* in > /etc/nginx/conf.d/test.conf:4 > > > This is my version: > > nginx version: nginx/1.12.2 > > > What do I make wrong? Since version 1.9.2 this option should be > possible. > It is a part of paid version. From nginx.org/r/ntlm: "This directive is available as part of our commercial subscription." -- Maxim Konovalov From arozyev at nginx.com Wed Jan 10 11:42:59 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Wed, 10 Jan 2018 14:42:59 +0300 Subject: Nginx error log parser In-Reply-To: References: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> Message-ID: Hi Mohit, check the second reply. I?m not sure that there is a conventional pretty printing tools for nginx error log. br, Aziz. > On 10 Jan 2018, at 14:37, mohit Agrawal wrote: > > Hi Aziz, > > log_format directive only provides formatting for access log, I am looking to format error.log which doesn't take log_format directive. > Above example that I gave is just for nginx error logs. > > Thanks > > On 10 January 2018 at 15:26, Aziz Rozyev wrote: > btw, after re-reading the your questing, it looks like you need something like logstash grok filter. > > br, > Aziz. > > > > > > > On 10 Jan 2018, at 11:45, mohit Agrawal wrote: > > > > Hi , > > > > I am looking to parse nginx error log so as to find out which particular IP is throttled during specific amount of time on connection throttling / request throttling. 
The format looks like : > > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > > And the sample that I am looking for is : > > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone "rl_conn""} > > so that I can pass it through ELK stack and find out the root ip which is causing issue. > > > > > > -- > > Mohit Agrawal > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Mohit Agrawal From mohit3081989 at gmail.com Wed Jan 10 11:45:32 2018 From: mohit3081989 at gmail.com (mohit Agrawal) Date: Wed, 10 Jan 2018 17:15:32 +0530 Subject: Nginx error log parser In-Reply-To: References: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> Message-ID: Yeah I have tried grok / regex pattern as well. But not extensive success that I achieved. grok didn't work for me, I tried regex then it was able to segregate time , pid, tid, log_level and message. I also need message break up for above pattern On 10 January 2018 at 17:12, Aziz Rozyev wrote: > Hi Mohit, > > check the second reply. I?m not sure that there is a conventional pretty > printing > tools for nginx error log. > > > br, > Aziz. > > > > > > > On 10 Jan 2018, at 14:37, mohit Agrawal wrote: > > > > Hi Aziz, > > > > log_format directive only provides formatting for access log, I am > looking to format error.log which doesn't take log_format directive. > > Above example that I gave is just for nginx error logs. 
> > > > Thanks > > > > On 10 January 2018 at 15:26, Aziz Rozyev wrote: > > btw, after re-reading the your questing, it looks like you need > something like logstash grok filter. > > > > br, > > Aziz. > > > > > > > > > > > > > On 10 Jan 2018, at 11:45, mohit Agrawal > wrote: > > > > > > Hi , > > > > > > I am looking to parse nginx error log so as to find out which > particular IP is throttled during specific amount of time on connection > throttling / request throttling. The format looks like : > > > > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting > connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, > request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > > > And the sample that I am looking for is : > > > > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", > "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone > "rl_conn""} > > > so that I can pass it through ELK stack and find out the root ip which > is causing issue. > > > > > > > > > -- > > > Mohit Agrawal > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > -- > > Mohit Agrawal > > -- Mohit Agrawal -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok at h09.org Wed Jan 10 11:45:44 2018 From: ok at h09.org (=?utf-8?Q?Otto_Kucera?=) Date: Wed, 10 Jan 2018 12:45:44 +0100 Subject: NTLM Message-ID: Hi Maxim, -----Original message----- From: Maxim Konovalov? Sent: Wednesday 10th January 2018 12:40 To: nginx at nginx.org; Otto Kucera Subject: Re: NTLM Hi Otto, On 10/01/2018 14:37, Otto Kucera wrote: > Hi all, > > > I am testing ntlm for a reverse proxy secanrio. [...] 
> I always get this error: > > nginx: [emerg] *unknown directive "ntlm"* in > /etc/nginx/conf.d/test.conf:4 > > > This is my version: > > nginx version: nginx/1.12.2 > > > What do I make wrong? Since version 1.9.2 this option should be > possible. > It is part of the paid version. From nginx.org/r/ntlm: "This directive is available as part of our commercial subscription." -- Maxim Konovalov From arozyev at nginx.com Wed Jan 10 12:14:04 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Wed, 10 Jan 2018 15:14:04 +0300 Subject: Nginx error log parser In-Reply-To: References: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> Message-ID: If you need to parse exactly the same format as you've shown in your question, it's fairly easy to create something with e.g. a perl/awk/sed script. For instance: ################# tst.awk ################# BEGIN {FS = "," } { split($1, m, "\ ") printf "%s", "{ " printf "%s",$2 printf "%s",$3 printf "%s",$5 printf "%s",$4 printf "reason: %s %s %s %s \"%s\"\n", m[6], m[7], m[8], m[9], m[10] print " }" } ############################################# result: echo 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" | awk -f /tmp/test.awk { client: xx.xx.xx.xx server: www.xyz.com host: www.xyz.com request: GET /api/xyz HTTP/1.1reason: limiting connections by zone "rl_conn" } br, Aziz. > On 10 Jan 2018, at 14:45, mohit Agrawal wrote: > > Yeah I have tried grok / regex pattern as well. But not extensive success that I achieved.
I?m not sure that there is a conventional pretty printing > tools for nginx error log. > > > br, > Aziz. > > > > > > > On 10 Jan 2018, at 14:37, mohit Agrawal wrote: > > > > Hi Aziz, > > > > log_format directive only provides formatting for access log, I am looking to format error.log which doesn't take log_format directive. > > Above example that I gave is just for nginx error logs. > > > > Thanks > > > > On 10 January 2018 at 15:26, Aziz Rozyev wrote: > > btw, after re-reading the your questing, it looks like you need something like logstash grok filter. > > > > br, > > Aziz. > > > > > > > > > > > > > On 10 Jan 2018, at 11:45, mohit Agrawal wrote: > > > > > > Hi , > > > > > > I am looking to parse nginx error log so as to find out which particular IP is throttled during specific amount of time on connection throttling / request throttling. The format looks like : > > > > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > > > And the sample that I am looking for is : > > > > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone "rl_conn""} > > > so that I can pass it through ELK stack and find out the root ip which is causing issue. 
> > > > > > > > > -- > > > Mohit Agrawal > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > -- > > Mohit Agrawal > > > > > -- > Mohit Agrawal From mohit3081989 at gmail.com Wed Jan 10 12:23:56 2018 From: mohit3081989 at gmail.com (mohit Agrawal) Date: Wed, 10 Jan 2018 17:53:56 +0530 Subject: Nginx error log parser In-Reply-To: References: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> Message-ID: Thanks Aziz for this, I get your point, but can we do awking in fluentd cons file ? Basically we are looking for realtime awking a nginx error log file, how heavy this would be according to you. On 10 January 2018 at 17:44, Aziz Rozyev wrote: > If you need parse exactly the same format, as you?ve shown in you > question, it?s fairly easy to create something e.g. perl/awk/sed script. > > for instance: > > ################# tst.awk ################# > BEGIN {FS = "," } > { > split($1, m, "\ ") > printf "%s", "{ " > printf "%s",$2 > printf "%s",$3 > printf "%s",$5 > printf "%s",$4 > printf "reason: %s %s %s %s \"%s\"\n", m[6], m[7], m[8], m[9], m[10] > print " }? > > } > ############################################# > > > result: > > echo 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting > connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, > request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" | awk -f > /tmp/test.awk > { client: xx.xx.xx.xx server: www.xyz.com host: www.xyz.com request: GET > /api/xyz HTTP/1.1reason: limiting connections by zone "rl_conn" > } > > > br, > Aziz. > > > > > > > On 10 Jan 2018, at 14:45, mohit Agrawal wrote: > > > > Yeah I have tried grok / regex pattern as well. But not extensive > success that I achieved. 
grok didn't work for me, I tried regex then it was > able to segregate time , pid, tid, log_level and message. I also need > message break up for above pattern > > > > On 10 January 2018 at 17:12, Aziz Rozyev wrote: > > Hi Mohit, > > > > check the second reply. I?m not sure that there is a conventional pretty > printing > > tools for nginx error log. > > > > > > br, > > Aziz. > > > > > > > > > > > > > On 10 Jan 2018, at 14:37, mohit Agrawal > wrote: > > > > > > Hi Aziz, > > > > > > log_format directive only provides formatting for access log, I am > looking to format error.log which doesn't take log_format directive. > > > Above example that I gave is just for nginx error logs. > > > > > > Thanks > > > > > > On 10 January 2018 at 15:26, Aziz Rozyev wrote: > > > btw, after re-reading the your questing, it looks like you need > something like logstash grok filter. > > > > > > br, > > > Aziz. > > > > > > > > > > > > > > > > > > > On 10 Jan 2018, at 11:45, mohit Agrawal > wrote: > > > > > > > > Hi , > > > > > > > > I am looking to parse nginx error log so as to find out which > particular IP is throttled during specific amount of time on connection > throttling / request throttling. The format looks like : > > > > > > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting > connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, > request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > > > > And the sample that I am looking for is : > > > > > > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", > "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone > "rl_conn""} > > > > so that I can pass it through ELK stack and find out the root ip > which is causing issue. 
> > > > > > > > > > > > -- > > > > Mohit Agrawal > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > -- > > > Mohit Agrawal > > > > > > > > > > -- > > Mohit Agrawal > > -- Mohit Agrawal -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jan 10 12:27:00 2018 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 10 Jan 2018 07:27:00 -0500 Subject: Nginx error log parser In-Reply-To: References: Message-ID: Aziz Rozyev Wrote: ------------------------------------------------------- > Hi Mohit, > > check the second reply. I?m not sure that there is a conventional > pretty printing > tools for nginx error log. Look at awstats. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278065,278080#msg-278080 From mohit3081989 at gmail.com Wed Jan 10 13:58:46 2018 From: mohit3081989 at gmail.com (mohit Agrawal) Date: Wed, 10 Jan 2018 19:28:46 +0530 Subject: Nginx error log parser In-Reply-To: References: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> Message-ID: Hi All, I have something like this. I tested the `tail -f /var/log/nginx/error.log | awk -f /var/log/nginx/test.awk` part and it just works fine. But when i try to run it through fluentd, it doesn't do anything. Any idea why ? 
@type exec format json tag sample command tail -f /var/log/nginx/error.log | awk -f /var/log/nginx/test.awk @type stdout Also /var/log/nginx/test.awk, is as follow : ################# tst.awk ################# BEGIN {FS = "," } { split($1, m, "\ ") gsub(/ /, "", $2) split($2, a, ":") gsub(/ /, "", $3) split($3, b, ":") gsub(/ /, "", $4) split($4, c, ":") gsub(/ /, "", $5) split($5, d, ":") printf "%s", "{" printf "\"%s\" : \"%s\",",a[1], a[2] printf "\"%s\" : \"%s\",",b[1], b[2] #printf "%s",$3 "," #printf "%s",$5 "," #printf "%s",$4 "," printf "\"%s\" : %s,",c[1], c[2] printf "\"%s\" : %s,",d[1], d[2] split(m[10], e, "\"") printf " \"reason\": \"%s %s %s %s %s\"}\n", m[6], m[7], m[8], m[9], e[2 ] } ############################################# On 10 January 2018 at 17:53, mohit Agrawal wrote: > Thanks Aziz for this, I get your point, but can we do awking in fluentd > cons file ? Basically we are looking for realtime awking a nginx error log > file, how heavy this would be according to you. > > On 10 January 2018 at 17:44, Aziz Rozyev wrote: > >> If you need parse exactly the same format, as you?ve shown in you >> question, it?s fairly easy to create something e.g. perl/awk/sed script. >> >> for instance: >> >> ################# tst.awk ################# >> BEGIN {FS = "," } >> { >> split($1, m, "\ ") >> printf "%s", "{ " >> printf "%s",$2 >> printf "%s",$3 >> printf "%s",$5 >> printf "%s",$4 >> printf "reason: %s %s %s %s \"%s\"\n", m[6], m[7], m[8], m[9], m[10] >> print " }? >> >> } >> ############################################# >> >> >> result: >> >> echo 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting >> connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, >> request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" | awk -f >> /tmp/test.awk >> { client: xx.xx.xx.xx server: www.xyz.com host: www.xyz.com request: >> GET /api/xyz HTTP/1.1reason: limiting connections by zone "rl_conn" >> } >> >> >> br, >> Aziz. 
>> >> >> >> >> >> > On 10 Jan 2018, at 14:45, mohit Agrawal wrote: >> > >> > Yeah I have tried grok / regex pattern as well. But not extensive >> success that I achieved. grok didn't work for me, I tried regex then it was >> able to segregate time , pid, tid, log_level and message. I also need >> message break up for above pattern >> > >> > On 10 January 2018 at 17:12, Aziz Rozyev wrote: >> > Hi Mohit, >> > >> > check the second reply. I?m not sure that there is a conventional >> pretty printing >> > tools for nginx error log. >> > >> > >> > br, >> > Aziz. >> > >> > >> > >> > >> > >> > > On 10 Jan 2018, at 14:37, mohit Agrawal >> wrote: >> > > >> > > Hi Aziz, >> > > >> > > log_format directive only provides formatting for access log, I am >> looking to format error.log which doesn't take log_format directive. >> > > Above example that I gave is just for nginx error logs. >> > > >> > > Thanks >> > > >> > > On 10 January 2018 at 15:26, Aziz Rozyev wrote: >> > > btw, after re-reading the your questing, it looks like you need >> something like logstash grok filter. >> > > >> > > br, >> > > Aziz. >> > > >> > > >> > > >> > > >> > > >> > > > On 10 Jan 2018, at 11:45, mohit Agrawal >> wrote: >> > > > >> > > > Hi , >> > > > >> > > > I am looking to parse nginx error log so as to find out which >> particular IP is throttled during specific amount of time on connection >> throttling / request throttling. The format looks like : >> > > > >> > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting >> connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, >> request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" >> > > > And the sample that I am looking for is : >> > > > >> > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", >> "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone >> "rl_conn""} >> > > > so that I can pass it through ELK stack and find out the root ip >> which is causing issue. 
>> > > > >> > > > >> > > > -- >> > > > Mohit Agrawal >> > > > _______________________________________________ >> > > > nginx mailing list >> > > > nginx at nginx.org >> > > > http://mailman.nginx.org/mailman/listinfo/nginx >> > > >> > > _______________________________________________ >> > > nginx mailing list >> > > nginx at nginx.org >> > > http://mailman.nginx.org/mailman/listinfo/nginx >> > > >> > > >> > > >> > > -- >> > > Mohit Agrawal >> > >> > >> > >> > >> > -- >> > Mohit Agrawal >> >> > > > -- > Mohit Agrawal > -- Mohit Agrawal -------------- next part -------------- An HTML attachment was scrubbed... URL: From frjaraur at gmail.com Wed Jan 10 14:28:14 2018 From: frjaraur at gmail.com (javier ramirez) Date: Wed, 10 Jan 2018 15:28:14 +0100 Subject: Reverse proxy error on proxy_pass with redirect on backends to root Message-ID: Hi All, I have been trying to configure our xen-orchestrator environment (application frontend for controlling Xen servers), published on a host on port 80. We tried their recommended configuration for NGINX after trying some others created by ourself but none of them seem to work. The recommended one is the following location ( https://xen-orchestra.com/docs/reverse_proxy.html) location /xen { access_log /var/log/nginx/xen.access.log jru; error_log /var/log/nginx/xen.error.log debug; # Add some headers proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Proxy configuration proxy_pass http://our_xo-server_on_port_80/; proxy_http_version 1.1; proxy_set_header Connection "upgrade"; proxy_set_header Upgrade $http_upgrade; proxy_redirect default; # Issue https://github.com/vatesfr/xo-web/issues/1471 proxy_read_timeout 1800; # Error will be only every 30m # For the VM import feature, this size must be larger than the file we want to upload. 
# Without a proper value, nginx will have error "client intended to send too large body"
client_max_body_size 4G;
}

The problem is that browsing to https://ourserver/xen is reverse-proxied to http://our_xo-server_on_port_80/, but it is then redirected to /signin, so it is routed to https://ourserver/signin and we get a 404 error because that doesn't exist.

What are we doing wrong? Do we have to rewrite the requests?

Many Thanks in Advance,

Javier R.

--
Javier Ramírez
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Wed Jan 10 16:54:55 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 10 Jan 2018 19:54:55 +0300
Subject: upstream (tcp stream mode) doesn't detect connecton failure
In-Reply-To: References: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> <20180109134628.GC34136@mdounin.ru>
Message-ID: <20180110165455.GG34136@mdounin.ru>

Hello!

On Tue, Jan 09, 2018 at 11:48:54PM +0100, Adam Cecile wrote:

> On 01/09/2018 02:46 PM, Maxim Dounin wrote:
> > Hello!
> >
> > On Mon, Jan 08, 2018 at 12:37:41PM +0000, Cecile, Adam wrote:
> >
> >> Hello,
> >>
> >> I'm using this quite complicated setup involving SNI routing and proxy_protocol but I'm stuck on something.
> >>
> >> Here is the configuration file:
> >>
> >> http://paste.debian.net/hidden/62e13f9c/
> >>
> >> Routing, proxy_protocol, logging stuff is working just fine; the only (quite critical) issue is that the "mag" upstream doesn't see connection failures and does not switch to the second server.
> >>
> >> In the mag.log file I just see:
> >>
> >> 98.98.98.98 [08/Jan/2018:10:56:10 +0100] proxying to "mag":10.0.0.1:443 TCP 500 0 239 1.01
> >>
> >> But instead of blacklisting this server and moving to 10.0.0.2 I receive a connection closed error on the client.
> > As far as I understand your configuration, you have two stream > > proxy layers: > > > > 1. The first one uses ssl_preread to obtain SNI name and tries to > > do some routing based on it. This layer also adds to the PROXY > > protocol to backend connections. > > > > 2. The second one strips PROXY protocol header. > > > > The problem with "upstream doesn't see connection failures" is > > because connection failures are only seen at the second layer (the > > log line above belongs to the second layer). The first layer will > > only see a connection close, and it won't know if there was an > > error or not. > > > > Also note: > > > > - You use $proxy_protocol_addr in the "upstream mag {...}" block, > > but the upstream block is used only in the first layer, where > > $proxy_protocol_addr won't be available according to your > > configuration. > > > > - You use $name in the logs of the second layer. It will always > > point to "map", as there is no ssl_preread in the second layer, > > hence $ssl_preread_server_name will be not available. > > > > Depending on what you actually want to achieve, the most > > straightforward solution might be to actually remove the second > > proxy layer. > Hello, > > The proxy protocol was used for the "non-stream" routing on SNI when > forwarding to nginx itself as "local_https". At this point it's using > regular https vhost, that's why I added proxy_protocol to easily be able > to extract the original client address. > > Aim of the two servers on 8080 and 8181 are only to strip proxy_protocol > before going to upstream mag. I'd be happy to remove them but if I do > that I need a way to strip out proxy_protocol inside the "upstream mag" > block. Is it possible ? Ok, so you use multiple proxy layers to be able to combine backends which support/need PROXY protocol and ones which do not, right? This looks like a valid reason, as "proxy_protocol" is either on or off in a particular server. 
If you want nginx to switch to a different backend while maintaining two proxy layers, consider moving balancing to the second layer instead. This way balancing will happen where connection errors can be seen, and so nginx will be able to switch to a different server on errors. -- Maxim Dounin http://mdounin.ru/ From adam.cecile at hitec.lu Wed Jan 10 18:18:36 2018 From: adam.cecile at hitec.lu (Adam Cecile) Date: Wed, 10 Jan 2018 19:18:36 +0100 Subject: upstream (tcp stream mode) doesn't detect connecton failure In-Reply-To: <20180110165455.GG34136@mdounin.ru> References: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> <20180109134628.GC34136@mdounin.ru> <20180110165455.GG34136@mdounin.ru> Message-ID: On 01/10/2018 05:54 PM, Maxim Dounin wrote: > Hello! > > On Tue, Jan 09, 2018 at 11:48:54PM +0100, Adam Cecile wrote: > >> On 01/09/2018 02:46 PM, Maxim Dounin wrote: >>> Hello! >>> >>> On Mon, Jan 08, 2018 at 12:37:41PM +0000, Cecile, Adam wrote: >>> >>>> Hello, >>>> >>>> >>>> I'm using this quite complicated setup involving SNI routing and proxy_protocol but I'm stuck on something. >>>> >>>> >>>> Here is the configuration file: >>>> >>>> http://paste.debian.net/hidden/62e13f9c/ >>>> >>>> >>>> Routing, proxy_protocol, logging stuff is working just fine, the only (quite critical issue) is that the "mag" upstream doesn't see connection failures and does not switch to the second server. >>>> >>>> >>>> In the mag.log file I just see: >>>> >>>> 98.98.98.98 [08/Jan/2018:10:56:10 +0100] proxying to "mag":10.0.0.1:443 TCP 500 0 239 1.01 >>>> >>>> >>>> But instead of blacklisting this server and moving to 10.0.0.2 I receive a connection closed error on the client. >>> As far as I understand your configuration, you have two stream >>> proxy layers: >>> >>> 1. The first one uses ssl_preread to obtain SNI name and tries to >>> do some routing based on it. 
This layer also adds to the PROXY >>> protocol to backend connections. >>> >>> 2. The second one strips PROXY protocol header. >>> >>> The problem with "upstream doesn't see connection failures" is >>> because connection failures are only seen at the second layer (the >>> log line above belongs to the second layer). The first layer will >>> only see a connection close, and it won't know if there was an >>> error or not. >>> >>> Also note: >>> >>> - You use $proxy_protocol_addr in the "upstream mag {...}" block, >>> but the upstream block is used only in the first layer, where >>> $proxy_protocol_addr won't be available according to your >>> configuration. >>> >>> - You use $name in the logs of the second layer. It will always >>> point to "map", as there is no ssl_preread in the second layer, >>> hence $ssl_preread_server_name will be not available. >>> >>> Depending on what you actually want to achieve, the most >>> straightforward solution might be to actually remove the second >>> proxy layer. >> Hello, >> >> The proxy protocol was used for the "non-stream" routing on SNI when >> forwarding to nginx itself as "local_https". At this point it's using >> regular https vhost, that's why I added proxy_protocol to easily be able >> to extract the original client address. >> >> Aim of the two servers on 8080 and 8181 are only to strip proxy_protocol >> before going to upstream mag. I'd be happy to remove them but if I do >> that I need a way to strip out proxy_protocol inside the "upstream mag" >> block. Is it possible ? > Ok, so you use multiple proxy layers to be able to combine > backends which support/need PROXY protocol and ones which do not, > right? This looks like a valid reason, as "proxy_protocol" is > either on or off in a particular server. Yes exactly ! Aim of this setup is to do SNI routing to TCP endpoints (with failover) or HTTPS virtual hosts. 
> > If you want nginx to switch to a different backend while > maintaining two proxy layers, consider moving balancing to the > second layer instead. This way balancing will happen where > connection errors can be seen, and so nginx will be able to switch > to a different server on errors. Could you be more specific and show me how to do this with my current configuration ? I'm a bit lost... Thanks ! From nginx-forum at forum.nginx.org Wed Jan 10 18:32:00 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Wed, 10 Jan 2018 13:32:00 -0500 Subject: Secure Link Expires - URL Signing In-Reply-To: <20180110084749.GN3127@daoine.org> References: <20180110084749.GN3127@daoine.org> Message-ID: <3623a820ad1c122e3b590b631e25bbbd.NginxMailingListEnglish@forum.nginx.org> Let me explain the complete implementation methodology and problem statement URL to be protected http://site.media.com/mediafiles/movie.m3u8 We are generating token on application/client side to send it along with request so that content is delivered by server only to authorized apps. 
Token Generation Methodology on App/Client

expire = Current Epoch Time on App/Client + 600 (600 so that the URL will be valid for 10 mins)
uri = mediafiles/movie.m3u8
secret = secretkey

On the client, an MD5 function is used to generate the token from the three values defined above:
token = MD5 Hash ( secret, uri, expire )

The client passes the generated token along with the expiry time in the URL:
http://site.media.com/mediafiles/movie.m3u8?token={generated value}&expire={value in variable expire}

Token Validation on Server

Token and expire are captured and passed through the secure link module:

location / {
    secure_link $arg_token,$arg_expire;
    secure_link_md5 "secretkey$uri$arg_expire";

    # If the token generated here matches the token passed in the request,
    # content is delivered
    if ($secure_link = "") {return 405;}  # token doesn't match
    if ($secure_link = "0") {return 410;}
}

If the value in arg_expire is greater than the current epoch time of the server, content is delivered. Since arg_expire has the epoch time of the device + 600 sec, on the server this check will succeed. If someone tries to access the same URL after 600 sec, the time on the server will be greater than the time sent in arg_expire and the request will thus be denied.

Problem Statement

Someone changes the time on his client device to some future date and time. In this case the same app will generate the token with the above-mentioned methodology on the client and send it along with the request to the server. The server will generate the token at its end using all the values, including the expire time sent in the URL request (note: here the expire time was generated using the future date on the device). So the token will match and the 1st check will be successful. In the 2nd check, since arg_expire has the epoch time of the future date + 600 sec, which will obviously be greater than the current epoch time of the server, the request will be successfully delivered. Anyone can use the same token and extended epoch time with requests for the whole period for which the future date was set on the device.

Hopefully it's now explanatory.
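For reference, nginx's secure_link_md5 comparison value is the binary MD5 of the configured expression, base64url-encoded with the trailing "=" padding stripped, so the "MD5 Hash ( secret, uri, expire )" step above corresponds to a sketch like the following (function name illustrative). One detail worth double-checking against the config quoted above: nginx's $uri includes the leading slash, so the string hashed must use "/mediafiles/movie.m3u8", not "mediafiles/movie.m3u8".

```python
import base64
import hashlib
import time

def make_secure_link_token(secret, uri, expire):
    """Token matching secure_link_md5 "<secret><uri><expire>":
    binary MD5 digest, base64url-encoded, '=' padding stripped."""
    digest = hashlib.md5(f"{secret}{uri}{expire}".encode()).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    expire = int(time.time()) + 600  # valid for 10 minutes
    token = make_secure_link_token("secretkey", "/mediafiles/movie.m3u8", expire)
    print(f"http://site.media.com/mediafiles/movie.m3u8?token={token}&expire={expire}")
```

Note that anyone holding the secret can mint a token for any expiry, so this computation really belongs on the server rather than in the shipped app.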
Please let know if there is a way to protect the content in this scenario. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278063,278088#msg-278088 From r1ch+nginx at teamliquid.net Wed Jan 10 18:48:05 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 10 Jan 2018 10:48:05 -0800 Subject: Secure Link Expires - URL Signing In-Reply-To: <3623a820ad1c122e3b590b631e25bbbd.NginxMailingListEnglish@forum.nginx.org> References: <20180110084749.GN3127@daoine.org> <3623a820ad1c122e3b590b631e25bbbd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Only the server should be generating the tokens, if the client knows the secret it can do whatever it wants. On Wed, Jan 10, 2018 at 10:32 AM, anish10dec wrote: > Let me explain the complete implementation methodology and problem > statement > > URL to be protected > http://site.media.com/mediafiles/movie.m3u8 > > We are generating token on application/client side to send it along with > request so that content is delivered by server only to authorized apps. 
> > Token Generation Methodology on App/Client > > expire = Current Epoch Time on App/Client + 600 ( 600 so that URL will be > valid for 10 mins) > uri = mediafiles/movie.m3u8 > secret = secretkey > > On Client , MD5 Function is used to generate token by using three above > defined values > token = MD5 Hash ( secret, uri, expire) > > Client passes generated token along with expiry time with URL > http://site.media.com/mediafiles/movie.m3u8?token={generated > value}&expire={value in variable expire} > > > Token Validation on Server > Token and Expire is captured and passed through secure link module > > location / { > > secure_link $arg_token,$arg_expire; > secure_link_md5 "secretkey$uri$arg_expire"; > > //If token generated here matches with token passed in request , content is > delivered > if ($secure_link = "") {return 405;} // token doesn't match > > if ($secure_link = "0") {return 410;} > //If value in arg_expire time is greater current epoch time of server , > content is delivered . > Since arg_expire has epoch time of device + 600 sec so on server it will be > success. If someone tries to access the content using same URL after 600 > sec > , time on server will be greater than time send in arg_expire and thus > request will be denied. > > > Problem Statement > Someone changes the time on his client device to say some future date and > time. In this case same app will generate the token with above mention > methodolgy on client and send it along with request to server. > Server will generate the token at its end using all the values along with > expire time send in URL request ( note here expire time is generated using > future date on device) > So token will match and 1st check will be successful . > In 2nd check since arg_expire has epoch time of future date + 600 sec which > will be obviously greater than current epcoh time of server and request > will be successfully delivered. 
> Anyone can use same token and extended epoch time with request for that > period of time for which future date was set on device. > > Hopefully now its explainatory . > Please let know if there is a way to protect the content in this scenario. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,278063,278088#msg-278088 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jan 10 18:58:28 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Jan 2018 21:58:28 +0300 Subject: upstream (tcp stream mode) doesn't detect connecton failure In-Reply-To: References: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> <20180109134628.GC34136@mdounin.ru> <20180110165455.GG34136@mdounin.ru> Message-ID: <20180110185828.GI34136@mdounin.ru> Hello! On Wed, Jan 10, 2018 at 07:18:36PM +0100, Adam Cecile wrote: [...] > > Ok, so you use multiple proxy layers to be able to combine > > backends which support/need PROXY protocol and ones which do not, > > right? This looks like a valid reason, as "proxy_protocol" is > > either on or off in a particular server. > Yes exactly ! > > Aim of this setup is to do SNI routing to TCP endpoints (with failover) > or HTTPS virtual hosts. > > > > If you want nginx to switch to a different backend while > > maintaining two proxy layers, consider moving balancing to the > > second layer instead. This way balancing will happen where > > connection errors can be seen, and so nginx will be able to switch > > to a different server on errors. > > Could you be more specific and show me how to do this with my current > configuration ? I'm a bit lost... At the first level, differentiate between hosts based on $ssl_preread_server_name. 
Proxy to either "local_https" or to a second-level server, say 8080. On the second level server, proxy to an upstream group with servers you want to balance. Example configuration (completely untested): map $ssl_preread_server_name $name { default local_https; "" second; pub.domain.com second; } upstream local_https { server 127.0.0.1:8443; } upstream second { server 127.0.0.1:8080; } upstream u { server 10.0.0.1:443; server 10.0.0.2:443; } server { listen 443; ssl_preread on; proxy_pass $name; proxy_protocol on; } server { listen 127.0.0.1:8080 proxy_protocol; proxy_pass u; } Logging and timeouts omitted for clarity. -- Maxim Dounin http://mdounin.ru/ From adam.cecile at hitec.lu Wed Jan 10 19:02:59 2018 From: adam.cecile at hitec.lu (Adam Cecile) Date: Wed, 10 Jan 2018 20:02:59 +0100 Subject: upstream (tcp stream mode) doesn't detect connecton failure In-Reply-To: <20180110185828.GI34136@mdounin.ru> References: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> <20180109134628.GC34136@mdounin.ru> <20180110165455.GG34136@mdounin.ru> <20180110185828.GI34136@mdounin.ru> Message-ID: <81405375-bb74-f2b3-c187-a1da3f1b7060@hitec.lu> On 01/10/2018 07:58 PM, Maxim Dounin wrote: > Hello! > > On Wed, Jan 10, 2018 at 07:18:36PM +0100, Adam Cecile wrote: > > [...] > >>> Ok, so you use multiple proxy layers to be able to combine >>> backends which support/need PROXY protocol and ones which do not, >>> right? This looks like a valid reason, as "proxy_protocol" is >>> either on or off in a particular server. >> Yes exactly ! >> >> Aim of this setup is to do SNI routing to TCP endpoints (with failover) >> or HTTPS virtual hosts. >>> If you want nginx to switch to a different backend while >>> maintaining two proxy layers, consider moving balancing to the >>> second layer instead. 
This way balancing will happen where >>> connection errors can be seen, and so nginx will be able to switch >>> to a different server on errors. >> Could you be more specific and show me how to do this with my current >> configuration ? I'm a bit lost... > At the first level, differentiate between hosts based on > $ssl_preread_server_name. Proxy to either "local_https" or to a > second-level server, say 8080. On the second level server, proxy > to an upstream group with servers you want to balance. Example > configuration (completely untested): > > map $ssl_preread_server_name $name { > default local_https; > "" second; > pub.domain.com second; > } > > upstream local_https { > server 127.0.0.1:8443; > } > > upstream second { > server 127.0.0.1:8080; > } > > upstream u { > server 10.0.0.1:443; > server 10.0.0.2:443; > } > > server { > listen 443; > ssl_preread on; > proxy_pass $name; > proxy_protocol on; > } > > server { > listen 127.0.0.1:8080 proxy_protocol; > proxy_pass u; > } > > Logging and timeouts omitted for clarity. > Very nice ! I'll give a try tomorrow morning and let you know, thanks. From nginx-forum at forum.nginx.org Wed Jan 10 22:22:04 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Wed, 10 Jan 2018 17:22:04 -0500 Subject: 499 and set $loggable 0; Message-ID: <7ee6ad1a2c116328de10b9ae211b06b3.NginxMailingListEnglish@forum.nginx.org> Any idea on how to keep those 499 errors out of the logs? 
I already do it for some specific 444:

if specific condition {
    set $loggable 0;
    return 444;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278092,278092#msg-278092

From francis at daoine.org Wed Jan 10 23:01:01 2018
From: francis at daoine.org (Francis Daly)
Date: Wed, 10 Jan 2018 23:01:01 +0000
Subject: Secure Link Expires - URL Signing
In-Reply-To: <3623a820ad1c122e3b590b631e25bbbd.NginxMailingListEnglish@forum.nginx.org>
References: <20180110084749.GN3127@daoine.org> <3623a820ad1c122e3b590b631e25bbbd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180110230101.GP3127@daoine.org>

On Wed, Jan 10, 2018 at 01:32:00PM -0500, anish10dec wrote:

Hi there,

> Let me explain the complete implementation methodology and problem
> statement
>
> URL to be protected
> http://site.media.com/mediafiles/movie.m3u8
>
> We are generating token on application/client side to send it along with
> request so that content is delivered by server only to authorized apps.

There's your design problem. Don't generate the token on the client side.

Have the client do whatever it takes to convince the server that it is authorised, and have it ask for a current link to the movie.m3u8 content. Then the server uses the server-secret and whatever other things are relevant to create a secure_link url, possibly including an expiry time based on the server-clock, and returns that url to this client.

Then when any client tries to access that url after the server-clock expiry, they will fail. And if any client tries to access that url before the expiry time, it will be allowed only if the secure_link matches -- if it includes something like REMOTE_USER or a $cookie that was only given to one client, then only something with the matching values will succeed; if it was just based on things within the url, then everything will succeed.

> Please let know if there is a way to protect the content in this scenario.

No.
In your scenario, the client decides the expiry time, and creates a url that the server will honour until then. (And it can create a new url that will expire a day later, and the server will honour that too.) Anyone who requests that url before that expiry time will be given the content. So in your scenario, you would probably have to write your own securish_link module which checks that the expiry time is in the future, but not too far in the future. And then decide how much slop to allow, in case someone has the clock wrong on their client. You're probably better off starting with a different design. (As an aside: this might also resolve the question in https://forum.nginx.org/read.php?2,275668 -- when the client has no idea what the server-secret is, there is no need to have updated clients for a different server-secret.) Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Jan 11 01:05:15 2018 From: nginx-forum at forum.nginx.org (ptcell) Date: Wed, 10 Jan 2018 20:05:15 -0500 Subject: Example body filter hangs when modified a little bit - request is not terminating. Message-ID: <186cfdf5d0447e5431ff136deef6098c.NginxMailingListEnglish@forum.nginx.org> I tried out this nginx example ngx_http_foo_body_filter body filter (here http://nginx.org/en/docs/dev/development_guide.html#http_body_buffers_reuse ) and got that to work just fine. It inserts a "foo" string before each incoming buffer. I tried modifying a little bit so that it puts the foo string AFTER each incoming buffer chain in the list. The pages render correctly, but the request never terminates and the browser just sits there spinning. I don't know what I'm doing wrong. I'm pretty sure it is not module ordering because the original version works ok. I also set the content length to -1 in the header filter, etc. In the debugger, the out linked chain looks right with no cycles or anything. I appreciate any help you can give. Thank you. 
Here is my version:

ngx_int_t
ngx_http_foo_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    ngx_int_t                   rc;
    ngx_buf_t                  *b;
    ngx_chain_t                *cl, *tl, *out, **ll;
    ngx_http_foo_filter_ctx_t  *ctx;

    ctx = ngx_http_get_module_ctx(r, ngx_http_foo_filter_module);
    if (ctx == NULL) {
        return ngx_http_next_body_filter(r, in);
    }

    /* create a new chain "out" from "in" with all the changes */

    ll = &out;

    for (cl = in; cl; cl = cl->next) {

        /* append the next incoming buffer */

        tl = ngx_alloc_chain_link(r->pool);
        if (tl == NULL) {
            return NGX_ERROR;
        }

        tl->buf = cl->buf;
        *ll = tl;
        ll = &tl->next;

        /* append "foo" in a reused buffer if possible */

        tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
        if (tl == NULL) {
            return NGX_ERROR;
        }

        b = tl->buf;
        b->tag = (ngx_buf_tag_t) &ngx_http_foo_filter_module;
        b->memory = 1;
        b->pos = (u_char *) "foo";
        b->last = b->pos + 3;

        *ll = tl;
        ll = &tl->next;
    }

    *ll = NULL;

    /* send the new chain */

    rc = ngx_http_next_body_filter(r, out);

    /* update "busy" and "free" chains for reuse */

    ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &out,
                            (ngx_buf_tag_t) &ngx_http_foo_filter_module);

    return rc;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278094,278094#msg-278094

From zchao1995 at gmail.com Thu Jan 11 02:11:06 2018
From: zchao1995 at gmail.com (tokers)
Date: Wed, 10 Jan 2018 18:11:06 -0800
Subject: Example body filter hangs when modified a little bit - request is not terminating.
In-Reply-To: <186cfdf5d0447e5431ff136deef6098c.NginxMailingListEnglish@forum.nginx.org>
References: <186cfdf5d0447e5431ff136deef6098c.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hi! What's the corresponding response headers of your browser?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Thu Jan 11 04:20:30 2018
From: nginx-forum at forum.nginx.org (ptcell)
Date: Wed, 10 Jan 2018 23:20:30 -0500
Subject: Example body filter hangs when modified a little bit - request is not terminating.
In-Reply-To: References:
Message-ID: <45a61e77f05dc09c9957ad129a7f9dc6.NginxMailingListEnglish@forum.nginx.org>

here is with the body filter disabled (works normally):

HTTP/1.1 200 OK
Server: nginx/1.12.0
Date: Thu, 11 Jan 2018 04:16:07 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive

here is with the body filter enabled (connection does not terminate):

HTTP/1.1 200 OK
Server: nginx/1.12.0
Date: Thu, 11 Jan 2018 04:19:59 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive

Essentially the same.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278094,278096#msg-278096

From nginx-forum at forum.nginx.org Thu Jan 11 08:32:07 2018
From: nginx-forum at forum.nginx.org (pva)
Date: Thu, 11 Jan 2018 03:32:07 -0500
Subject: limit_req is not working in virutal location?
Message-ID: <6fcc5f616a992256e366d3aafb9dbf56.NginxMailingListEnglish@forum.nginx.org>

Hi. Could you please explain why limit_req in the @limitspeed location is not working in case of a redirect to the @allowed virtual location, but works if I copy the @allowed virtual location contents inside @limitspeed?

================= This configuration is not limiting speed at all ========================

location @allowed {
    root /store/;
    try_files /live$uri @localdvr;
}

location @limitspeed {
    limit_req zone=limit_req_rate;
    limit_rate 2500000;
    error_page 420 = @allowed;
    return 420;
}

location ~ ^/limit_speed_for_ts/.*\.ts {
    proxy_intercept_errors on;
    recursive_error_pages on;
    error_page 418 = @limitspeed;
    error_page 419 = @allowed;
    # we allow all requests without limiting speed if there is a token argument
    if ($arg_token = "") {
        return 418;
    }
    return 419;
}

============================================

But if I change @limitspeed it will work:

location @limitspeed {
    limit_req zone=limit_req_rate;
    limit_rate 2500000;
    root /store/;
    try_files /live$uri @localdvr;
}

Why? Thanks in advance for any hints.
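A plausible explanation, hedged since it has not been verified against this exact configuration: `return` is a rewrite-phase directive, and the rewrite phase runs before the preaccess phase in which `limit_req` is evaluated. In the first @limitspeed, `return 420` therefore finalizes the request, and the error_page redirect lands in @allowed (which carries no limits) before `limit_req` or `limit_rate` ever take effect. In the working variant there is no `return`, so the request proceeds through preaccess in @limitspeed and the limits apply:

```nginx
# Sketch: keep the limits in the location that actually serves the
# response; "return" fires in the rewrite phase, before limit_req
# (preaccess phase) gets a chance to run.
location @limitspeed {
    limit_req  zone=limit_req_rate;
    limit_rate 2500000;
    root       /store/;
    try_files  /live$uri @localdvr;
}
```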
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278101,278101#msg-278101 From arozyev at nginx.com Thu Jan 11 08:56:57 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Thu, 11 Jan 2018 11:56:57 +0300 Subject: Nginx error log parser In-Reply-To: References: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> Message-ID: <19883F60-0116-4F02-BD08-8C00F270840F@nginx.com> Hi, seems, that fluentd has an nginx_parser plugin already, another solution that probably should work is to use the grep filters, something as follows: @type grep key client patter ^client.*\ $ key server pattern ^server.*\ $ key host pattern ^host.*$ key zone pattern ^zone.*\ $ ?.. then use record_trasformer type, to make further modifications. But, I didn?t tried above, probably it?s something that better to be asked from fluentd community.. br, Aziz. > On 10 Jan 2018, at 15:23, mohit Agrawal wrote: > > Thanks Aziz for this, I get your point, but can we do awking in fluentd cons file ? Basically we are looking for realtime awking a nginx error log file, how heavy this would be according to you. > > On 10 January 2018 at 17:44, Aziz Rozyev wrote: > If you need parse exactly the same format, as you?ve shown in you question, it?s fairly easy to create something e.g. perl/awk/sed script. > > for instance: > > ################# tst.awk ################# > BEGIN {FS = "," } > { > split($1, m, "\ ") > printf "%s", "{ " > printf "%s",$2 > printf "%s",$3 > printf "%s",$5 > printf "%s",$4 > printf "reason: %s %s %s %s \"%s\"\n", m[6], m[7], m[8], m[9], m[10] > print " }? 
> > } > ############################################# > > > result: > > echo 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" | awk -f /tmp/test.awk > { client: xx.xx.xx.xx server: www.xyz.com host: www.xyz.com request: GET /api/xyz HTTP/1.1reason: limiting connections by zone "rl_conn" > } > > > br, > Aziz. > > > > > > > On 10 Jan 2018, at 14:45, mohit Agrawal wrote: > > > > Yeah, I have tried grok / regex patterns as well, but without much success. grok didn't work for me; with regex I was able to separate out time, pid, tid, log_level and message, but I also need the message broken up as in the pattern above. > > > > On 10 January 2018 at 17:12, Aziz Rozyev wrote: > > Hi Mohit, > > > > check the second reply. I'm not sure that there is a conventional pretty-printing > > tool for the nginx error log. > > > > > > br, > > Aziz. > > > > > > > > > > > On 10 Jan 2018, at 14:37, mohit Agrawal wrote: > > > > > > Hi Aziz, > > > > > > the log_format directive only provides formatting for the access log; I am looking to format error.log, which doesn't take a log_format directive. > > > The example I gave above is just for nginx error logs. > > > > > > Thanks > > > > > > On 10 January 2018 at 15:26, Aziz Rozyev wrote: > > > btw, after re-reading your question, it looks like you need something like a logstash grok filter. > > > > > > br, > > > Aziz. > > > > > > > > > > > > > > > > > > > On 10 Jan 2018, at 11:45, mohit Agrawal wrote: > > > > > > > > Hi, > > > > > > > > I am looking to parse the nginx error log to find out which particular IP is throttled during a specific period by connection throttling / request throttling.
The format looks like : > > > > > > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > > > > And the sample that I am looking for is : > > > > > > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone "rl_conn""} > > > > so that I can pass it through ELK stack and find out the root ip which is causing issue. > > > > > > > > > > > > -- > > > > Mohit Agrawal > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > -- > > > Mohit Agrawal > > > > > > > > > > -- > > Mohit Agrawal > > > > > -- > Mohit Agrawal From raffael.vogler at yieldlove.com Thu Jan 11 10:13:28 2018 From: raffael.vogler at yieldlove.com (Raffael Vogler) Date: Thu, 11 Jan 2018 11:13:28 +0100 Subject: 2 of 16 cores are constantly maxing out - how to balance the load? Message-ID: Hello! I have nginx with php-fpm running on a 16 core Ubuntu 16.04 instance. The server is handling more than 10 million requests per hour. https://imgur.com/a/iRZ7V As you can see on the htop screenshot cores 6 and 7 are maxed out and that's the case constantly - even after restarting nginx those two cores stay at that level. I wonder why is that so and how to balance the load more evenly? Also I'm curious to know whether this might indicate a performance relevant issue or if it is most likely harmless and just looks odd. 
> cat /etc/nginx/nginx.conf | grep -v '^\s*#' user www-data; worker_processes auto; pid /run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Thanks Raffael -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Thu Jan 11 10:16:29 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Thu, 11 Jan 2018 10:16:29 +0000 Subject: 2 of 16 cores are constantly maxing out - how to balance the load? In-Reply-To: References: Message-ID: <74CEB432-4A19-4F58-89EB-A4F2A48E3610@lucasrolff.com> If it's the same two cores, it might be another process that uses the same two cores and thus happens to max out. One very likely possibility would be interrupts from e.g. networking. You can check /proc/interrupts to see where interrupts from the network happen. From: nginx on behalf of Raffael Vogler Reply-To: "nginx at nginx.org" Date: Thursday, 11 January 2018 at 11.14 To: "nginx at nginx.org" Subject: 2 of 16 cores are constantly maxing out - how to balance the load? Hello! I have nginx with php-fpm running on a 16 core Ubuntu 16.04 instance. The server is handling more than 10 million requests per hour. https://imgur.com/a/iRZ7V As you can see on the htop screenshot, cores 6 and 7 are maxed out, and that's the case constantly - even after restarting nginx those two cores stay at that level. I wonder why that is, and how to balance the load more evenly? Also I'm curious to know whether this might indicate a performance-relevant issue or if it is most likely harmless and just looks odd.
> cat /etc/nginx/nginx.conf | grep -v '^\s*#' user www-data; worker_processes auto; pid /run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Thanks Raffael -------------- next part -------------- An HTML attachment was scrubbed... URL: From raffael.vogler at yieldlove.com Thu Jan 11 10:39:55 2018 From: raffael.vogler at yieldlove.com (Raffael Vogler) Date: Thu, 11 Jan 2018 11:39:55 +0100 Subject: 2 of 16 cores are constantly maxing out - how to balance the load? In-Reply-To: <74CEB432-4A19-4F58-89EB-A4F2A48E3610@lucasrolff.com> References: <74CEB432-4A19-4F58-89EB-A4F2A48E3610@lucasrolff.com> Message-ID: Hey Lucas, your assumption seems to be correct. According to /proc/interrupts the following stats are significantly higher for those two cores (CPU5, CPU6 - 0-based indexing): - CPU5: xen-percpu-ipi callfuncsingle5 - CPU6: xen-percpu-ipi callfuncsingle6 - CPU5: xen-pirq-msi-x eth0-TxRx-0 - CPU6: xen-pirq-msi-x eth0-TxRx-1 - CPU5,6: TLB shootdowns - CPU5,6: Hypervisor callback interrupts Is this something that can and should be optimized or is it simply a matter of fact due to the high load on the (fixed) available network card capacity? On Thu, Jan 11, 2018 at 11:16 AM, Lucas Rolff wrote: > If it?s the same two cores, it might be another process that uses the same > two cores and thus happens to max out. > > One very likely possibility would be interrupts from e.g. networking. You > can check /proc/interrupts to see where interrupts from the network happens. 
> > > > *From: *nginx on behalf of Raffael Vogler < > raffael.vogler at yieldlove.com> > *Reply-To: *"nginx at nginx.org" > *Date: *Thursday, 11 January 2018 at 11.14 > *To: *"nginx at nginx.org" > *Subject: *2 of 16 cores are constantly maxing out - how to balance the > load? > > > > Hello! > > I have nginx with php-fpm running on a 16 core Ubuntu 16.04 instance. The > server is handling more than 10 million requests per hour. > > https://imgur.com/a/iRZ7V > > As you can see on the htop screenshot cores 6 and 7 are maxed out and > that's the case constantly - even after restarting nginx those two cores > stay at that level. > > I wonder why is that so and how to balance the load more evenly? > > Also I'm curious to know whether this might indicate a performance > relevant issue or if it is most likely harmless and just looks odd. > > > cat /etc/nginx/nginx.conf | grep -v '^\s*#' > > > > user www-data; > > worker_processes auto; > > pid /run/nginx.pid; > > events { > > worker_connections 768; > > } > > http { > > sendfile on; > > tcp_nopush on; > > tcp_nodelay on; > > keepalive_timeout 65; > > types_hash_max_size 2048; > > include /etc/nginx/mime.types; > > default_type application/octet-stream; > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE > > ssl_prefer_server_ciphers on; > > access_log /var/log/nginx/access.log; > > error_log /var/log/nginx/error.log; > > gzip on; > > gzip_disable "msie6"; > > include /etc/nginx/conf.d/*.conf; > > include /etc/nginx/sites-enabled/*; > > } > > Thanks > > Raffael > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Raffael Vogler, Chief Technology Officer Yieldlove GmbH Neuer Pferdemarkt 1, 20359 Hamburg www.yieldlove.com raffael.vogler at yieldlove.com XING - LinkedIn Skype: joyofdata Registernummer: HRB 127559; Registergericht: Amtsgericht Hamburg; USt-ID: DE815426709; Gesch?ftsf?hrung:Benjamin Gries, Timo 
Hagenow, Ivan Tomic -------------- next part -------------- An HTML attachment was scrubbed... URL: From raffael.vogler at yieldlove.com Thu Jan 11 10:54:31 2018 From: raffael.vogler at yieldlove.com (Raffael Vogler) Date: Thu, 11 Jan 2018 11:54:31 +0100 Subject: 2 of 16 cores are constantly maxing out - how to balance the load? In-Reply-To: References: <74CEB432-4A19-4F58-89EB-A4F2A48E3610@lucasrolff.com> Message-ID: Or would it make sense (if possible at all) to assign two or three more cores to networking interrupts? -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Thu Jan 11 10:59:51 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Thu, 11 Jan 2018 10:59:51 +0000 Subject: 2 of 16 cores are constantly maxing out - how to balance the load? In-Reply-To: References: <74CEB432-4A19-4F58-89EB-A4F2A48E3610@lucasrolff.com> Message-ID: <63956095-FCE3-4E80-8CB5-48E6037C763D@lucasrolff.com> In high-traffic environments it generally makes sense to "dedicate" a core to each RX and TX queue you have on the NIC; this way you lower the chances of a single core being overloaded by network handling and thus degrading performance. Then, at the same time, within nginx, map the individual worker processes to the other cores. So, let's say you have 8 cores and 1 RX and 1 TX queue: Core 0: RX queue Core 1: TX queue Core 2 to 7: nginx processes You'd then set nginx to 6 workers (if you're not running other stuff on the box). Now, in your case, with php-fpm in the mix as well, controlling that can be hard (not sure if you can pin php-fpm processes to cores), but for nginx and the RX/TX queues it's certainly possible. From: nginx on behalf of Raffael Vogler Reply-To: "nginx at nginx.org" Date: Thursday, 11 January 2018 at 11.55 To: "nginx at nginx.org" Subject: Re: 2 of 16 cores are constantly maxing out - how to balance the load?
Or would it make sense (if possible at all) to assign two or three more cores to networking interrupts? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-ml at acheronmedia.hr Thu Jan 11 11:25:41 2018 From: nginx-ml at acheronmedia.hr (Vlad K.) Date: Thu, 11 Jan 2018 12:25:41 +0100 Subject: 2 of 16 cores are constantly maxing out - how to balance the load? In-Reply-To: <63956095-FCE3-4E80-8CB5-48E6037C763D@lucasrolff.com> References: <74CEB432-4A19-4F58-89EB-A4F2A48E3610@lucasrolff.com> <63956095-FCE3-4E80-8CB5-48E6037C763D@lucasrolff.com> Message-ID: <282982e3ceff40026cf960860378e1b9@acheronmedia.hr> On 2018-01-11 11:59, Lucas Rolff wrote: > > Now, in your case with php-fpm in the mix as well, controlling that > can be hard ( not sure if you can pin php-fpm processes to cores ) ? > but for nginx and RX/TX queues, it?s for sure possible. Should be doable with cgroups / cpusets? CPUAffinity directive in the service unit file ( see systemd.exec(5) )? -- Vlad K. From mohit3081989 at gmail.com Thu Jan 11 11:42:37 2018 From: mohit3081989 at gmail.com (mohit Agrawal) Date: Thu, 11 Jan 2018 17:12:37 +0530 Subject: Nginx error log parser In-Reply-To: <19883F60-0116-4F02-BD08-8C00F270840F@nginx.com> References: <5C60C6A0-0950-4EC5-A540-A725B30F1C90@nginx.com> <19883F60-0116-4F02-BD08-8C00F270840F@nginx.com> Message-ID: I finally end up writing my own error log fluentd custom parser in ruby. It's working now. Thanks for help anyways, much appreciated On 11 January 2018 at 14:26, Aziz Rozyev wrote: > Hi, > > seems, that fluentd has an nginx_parser plugin already, another solution > that probably should work is to use the grep filters, > something as follows: > > > @type grep > > key client > patter ^client.*\ $ > > > key server > pattern ^server.*\ $ > > > key host > pattern ^host.*$ > > > key zone > pattern ^zone.*\ $ > > ?.. > > > > then use record_trasformer type, to make further modifications. 
But, I > didn?t tried above, > probably it?s something that better to be asked from fluentd community.. > > > br, > Aziz. > > > > > > > On 10 Jan 2018, at 15:23, mohit Agrawal wrote: > > > > Thanks Aziz for this, I get your point, but can we do awking in fluentd > cons file ? Basically we are looking for realtime awking a nginx error log > file, how heavy this would be according to you. > > > > On 10 January 2018 at 17:44, Aziz Rozyev wrote: > > If you need parse exactly the same format, as you?ve shown in you > question, it?s fairly easy to create something e.g. perl/awk/sed script. > > > > for instance: > > > > ################# tst.awk ################# > > BEGIN {FS = "," } > > { > > split($1, m, "\ ") > > printf "%s", "{ " > > printf "%s",$2 > > printf "%s",$3 > > printf "%s",$5 > > printf "%s",$4 > > printf "reason: %s %s %s %s \"%s\"\n", m[6], m[7], m[8], m[9], m[10] > > print " }? > > > > } > > ############################################# > > > > > > result: > > > > echo 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting > connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, > request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" | awk -f > /tmp/test.awk > > { client: xx.xx.xx.xx server: www.xyz.com host: www.xyz.com request: > GET /api/xyz HTTP/1.1reason: limiting connections by zone "rl_conn" > > } > > > > > > br, > > Aziz. > > > > > > > > > > > > > On 10 Jan 2018, at 14:45, mohit Agrawal > wrote: > > > > > > Yeah I have tried grok / regex pattern as well. But not extensive > success that I achieved. grok didn't work for me, I tried regex then it was > able to segregate time , pid, tid, log_level and message. I also need > message break up for above pattern > > > > > > On 10 January 2018 at 17:12, Aziz Rozyev wrote: > > > Hi Mohit, > > > > > > check the second reply. I?m not sure that there is a conventional > pretty printing > > > tools for nginx error log. > > > > > > > > > br, > > > Aziz. 
> > > > > > > > > > > > > > > > > > > On 10 Jan 2018, at 14:37, mohit Agrawal > wrote: > > > > > > > > Hi Aziz, > > > > > > > > log_format directive only provides formatting for access log, I am > looking to format error.log which doesn't take log_format directive. > > > > Above example that I gave is just for nginx error logs. > > > > > > > > Thanks > > > > > > > > On 10 January 2018 at 15:26, Aziz Rozyev wrote: > > > > btw, after re-reading the your questing, it looks like you need > something like logstash grok filter. > > > > > > > > br, > > > > Aziz. > > > > > > > > > > > > > > > > > > > > > > > > > On 10 Jan 2018, at 11:45, mohit Agrawal > wrote: > > > > > > > > > > Hi , > > > > > > > > > > I am looking to parse nginx error log so as to find out which > particular IP is throttled during specific amount of time on connection > throttling / request throttling. The format looks like : > > > > > > > > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting > connections by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, > request: "GET /api/xyz HTTP/1.1", host: "www.xyz.com" > > > > > And the sample that I am looking for is : > > > > > > > > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", > "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone > "rl_conn""} > > > > > so that I can pass it through ELK stack and find out the root ip > which is causing issue. 
> > > > > > > > > > > > > > > -- > > > > > Mohit Agrawal > > > > > _______________________________________________ > > > > > nginx mailing list > > > > > nginx at nginx.org > > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > > > > > -- > > > > Mohit Agrawal > > > > > > > > > > > > > > > -- > > > Mohit Agrawal > > > > > > > > > > -- > > Mohit Agrawal > > -- Mohit Agrawal -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 11 12:17:20 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Thu, 11 Jan 2018 07:17:20 -0500 Subject: GeoIP Module for Blocking IP in http_x_forwarded_for Message-ID: <6915b1abcd24648c49731b2d8378c7d2.NginxMailingListEnglish@forum.nginx.org> GeoIP module is able to block request on basis of remote address which is IP of the remote device or user but not on basis of X-Forwarded-For IP if it has multiple IP address in it. There is Frontend Server( Server A) which receives the request and send it to Intermediate Server (Server B) We have GeoIP module installed on Intermediate Server i.e. Server B Server B <--- Server A <---- User When Server B , receives the request from Server A, remote address (remote_addr) for Server B is IP of Server A. Device/User IP is in http_x_forwarded_for field . If http_x_forwarded_for has single IP in it GeoIP module is able to block the IP on the basis of blocking applied. If http_x_forwarded_for has multiple IP i.e IP of User as well as IP of some Proxy Server or IP of Server A, then its not able to block the request. 
Below is the configuration : geoip_country /usr/share/GeoIP/GeoIP.dat; geoip_proxy IP_OF_ServerA; // GeoIP module ignores remote_addr considering it as trusted and refers to X-Forwarded For map $geoip_country_code $allowed_country { default no; US yes; } http_x_forwarded_for = { User IP of UK } - Request from this IP is getting blocked http_x_forwarded_for = { User IP of UK , Proxy IP of US } - This request is not getting blocked http_x_forwarded_for = { User IP of UK , IP of Server A } - This request is not getting blocked It seems nginx GeoIP Module refers to Last IP in http_x_forwarded_for field for applying the blocking method. Is there a way to check for First IP Address in http_x_forwarded_for for blocking the request ? Please suggest Please refer this for Solution in Apache https://dev.maxmind.com/geoip/legacy/mod_geoip2/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278110,278110#msg-278110 From arut at nginx.com Thu Jan 11 12:48:31 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 11 Jan 2018 15:48:31 +0300 Subject: Example body filter hangs when modified a little bit - request is not terminating. In-Reply-To: <186cfdf5d0447e5431ff136deef6098c.NginxMailingListEnglish@forum.nginx.org> References: <186cfdf5d0447e5431ff136deef6098c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180111124831.GB971@Romans-MacBook-Air.local> Hi, On Wed, Jan 10, 2018 at 08:05:15PM -0500, ptcell wrote: > I tried out this nginx example ngx_http_foo_body_filter body filter (here > http://nginx.org/en/docs/dev/development_guide.html#http_body_buffers_reuse > ) and got that to work just fine. It inserts a "foo" string before each > incoming buffer. > > I tried modifying a little bit so that it puts the foo string AFTER each > incoming buffer chain in the list. The pages render correctly, but the > request never terminates and the browser just sits there spinning. I don't > know what I'm doing wrong. 
I'm pretty sure it is not module ordering > because the original version works ok. I also set the content length to -1 > in the header filter, etc. > > In the debugger, the out linked chain looks right with no cycles or > anything. > > I appreciate any help you can give. Thank you. The last buffer in chain should normally have the last_buf flag set. This is not changed by the devguide example, but in your code it's your buffer which is the last one, and it obviously does not have the right flag. This is why ngx_http_chunked_filter does not end the HTTP chunked response with the final empty chunk, but your client is expecting it. > Here is my version: > > ngx_int_t > ngx_http_foo_body_filter(ngx_http_request_t *r, ngx_chain_t *in) > { > ngx_int_t rc; > ngx_buf_t *b; > ngx_chain_t *cl, *tl, *out, **ll; > ngx_http_foo_filter_ctx_t *ctx; > > ctx = ngx_http_get_module_ctx(r, ngx_http_foo_filter_module); > if (ctx == NULL) { > return ngx_http_next_body_filter(r, in); > } > > /* create a new chain "out" from "in" with all the changes */ > > ll = &out; > > for (cl = in; cl; cl = cl->next) { > > /* append the next incoming buffer */ > > tl = ngx_alloc_chain_link(r->pool); > if (tl == NULL) { > return NGX_ERROR; > } > > tl->buf = cl->buf; > *ll = tl; > ll = &tl->next; > > /* append "foo" in a reused buffer if possible */ > > tl = ngx_chain_get_free_buf(r->pool, &ctx->free); > if (tl == NULL) { > return NGX_ERROR; > } > > b = tl->buf; > b->tag = (ngx_buf_tag_t) &ngx_http_foo_filter_module; > b->memory = 1; > b->pos = (u_char *) "foo"; > b->last = b->pos + 3; > > *ll = tl; > ll = &tl->next; > > } > > *ll = NULL; > > /* send the new chain */ > > rc = ngx_http_next_body_filter(r, out); > > /* update "busy" and "free" chains for reuse */ > > ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &out, > (ngx_buf_tag_t) &ngx_http_foo_filter_module); > > return rc; > } > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278094,278094#msg-278094 > > 
_______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From francis at daoine.org Thu Jan 11 13:11:36 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Jan 2018 13:11:36 +0000 Subject: limit_req is not working in virutal location? In-Reply-To: <6fcc5f616a992256e366d3aafb9dbf56.NginxMailingListEnglish@forum.nginx.org> References: <6fcc5f616a992256e366d3aafb9dbf56.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180111131136.GQ3127@daoine.org> On Thu, Jan 11, 2018 at 03:32:07AM -0500, pva wrote: Hi there, I'm slightly guessing, so apologies if I mislead you and hopefully someone else will correct this if necessary... > Hi. Could you, please, explain why limit_req in @limitspeed location is not > working in case of redirect to @allowed virtual location and works in case I > copy @allowed virtual location contents inside @limitspeed? I think that "return" happens before, and therefore effectively overrides "limit_req". You'll want your "limit_req" effective in a location where some real work is done to create the response, in order for it to actually limit the response rate. (So either replace "return" with the content of the other location, as you did; or add the limit_req into the other location.) f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Jan 11 13:22:47 2018 From: nginx-forum at forum.nginx.org (nir) Date: Thu, 11 Jan 2018 08:22:47 -0500 Subject: proxy protocol over a plain tcp with ssl Message-ID: <385386817f2a44734aa0bdabb669a675.NginxMailingListEnglish@forum.nginx.org> I'm trying to configure nginx which is behind an haproxy to pass the proxy protocol over a plain tcp connection. It works well. When I add ssl to the equation it fails. Below is the nginx configuration block I'm using. 
Is it a configuration issue or might be that it's not at all possible for nginx to pass proxy protocol with ssl if the connection is not strictly https? stream { upstream some_backend { server some_host:18010; } server { listen 8010; listen 8012 ssl; proxy_pass some_backend; proxy_protocol on; ssl_certificate /etc/ssl/server.crt; ssl_certificate_key /etc/ssl/server.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_session_cache shared:SSLTCP:20m; ssl_session_timeout 4h; ssl_handshake_timeout 30s; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278113,278113#msg-278113 From mdounin at mdounin.ru Thu Jan 11 14:14:54 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Jan 2018 17:14:54 +0300 Subject: GeoIP Module for Blocking IP in http_x_forwarded_for In-Reply-To: <6915b1abcd24648c49731b2d8378c7d2.NginxMailingListEnglish@forum.nginx.org> References: <6915b1abcd24648c49731b2d8378c7d2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180111141454.GK34136@mdounin.ru> Hello! On Thu, Jan 11, 2018 at 07:17:20AM -0500, anish10dec wrote: > GeoIP module is able to block request on basis of remote address which is IP > of the remote device or user but not on basis of X-Forwarded-For IP if it > has multiple IP address in it. > > There is Frontend Server( Server A) which receives the request and send it > to Intermediate Server (Server B) > We have GeoIP module installed on Intermediate Server i.e. Server B > > > Server B <--- Server A <---- User > > When Server B , receives the request from Server A, remote address > (remote_addr) for Server B is IP of Server A. > Device/User IP is in http_x_forwarded_for field . > If http_x_forwarded_for has single IP in it GeoIP module is able to block > the IP on the basis of blocking applied. > > If http_x_forwarded_for has multiple IP i.e IP of User as well as IP of some > Proxy Server or IP of Server A, then its not able to block the request. 
> > Below is the configuration : > > geoip_country /usr/share/GeoIP/GeoIP.dat; > geoip_proxy IP_OF_ServerA; // GeoIP module ignores remote_addr > considering it as trusted and refers to X-Forwarded For > > map $geoip_country_code $allowed_country { > default no; > US yes; > } > > http_x_forwarded_for = { User IP of UK } - Request from this IP is getting > blocked > > http_x_forwarded_for = { User IP of UK , Proxy IP of US } - This request > is not getting blocked > > http_x_forwarded_for = { User IP of UK , IP of Server A } - This request > is not getting blocked > > It seems nginx GeoIP Module refers to Last IP in http_x_forwarded_for field > for applying the blocking method. This is what X-Forwarded-For header format assumes: IP addresses are added to the end of the list. As such, the last address is the only one you can trust in the above configuration. That is, a request with X-Forwarded-For: IP1, IP2, IP3 as got from Server A doesn't mean that you've got a request from IP1 forwarded to you via various proxies. It instead means that Server A got the request from IP3 with "X-Forwarded-For: IP1, IP2" already present in the request. Nothing guarantees that IP1 and IP2 are real addresses - they can be easily faked by the client, or they can be internal addresses in the client network. > Is there a way to check for First IP Address in http_x_forwarded_for for > blocking the request ? If you really want to, you can do so using the geoip_proxy_recursive directive and configuring the geoip_proxy to trust the whole world, see here: http://nginx.org/r/geoip_proxy_recursive Note though that this is generally not secure as the address can be easily forged, see above. 
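A minimal sketch of the recursive variant Maxim mentions, built from the configuration in the original post (trusting 0.0.0.0/0 is what makes nginx walk X-Forwarded-For back to the first address, and is exactly the spoofing risk noted above):

```nginx
geoip_country /usr/share/GeoIP/GeoIP.dat;

# Trust every proxy and resolve X-Forwarded-For recursively, so the
# left-most (first) address is used. Insecure: clients can put any
# address there.
geoip_proxy           0.0.0.0/0;
geoip_proxy_recursive on;

map $geoip_country_code $allowed_country {
    default no;
    US      yes;
}
```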
-- Maxim Dounin http://mdounin.ru/ From Jason.Whittington at equifax.com Thu Jan 11 15:42:06 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Thu, 11 Jan 2018 15:42:06 +0000 Subject: [IE] GeoIP Module for Blocking IP in http_x_forwarded_for In-Reply-To: <6915b1abcd24648c49731b2d8378c7d2.NginxMailingListEnglish@forum.nginx.org> References: <6915b1abcd24648c49731b2d8378c7d2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AE282A@STLEISEXCMBX3.eis.equifax.com> If you control Frontend Server A I would suggest not using X-Forwarded-For for this purpose. Can you have the front end server send a distinct header to server B? X-Real-IP would be a good choice of header. Then Server B could key off that header instead of XFF. You might find this page interesting: https://distinctplace.com/2014/04/23/story-behind-x-forwarded-for-and-x-real-ip-headers/ Jason -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of anish10dec Sent: Thursday, January 11, 2018 6:17 AM To: nginx at nginx.org Subject: [IE] GeoIP Module for Blocking IP in http_x_forwarded_for GeoIP module is able to block request on basis of remote address which is IP of the remote device or user but not on basis of X-Forwarded-For IP if it has multiple IP address in it. There is Frontend Server( Server A) which receives the request and send it to Intermediate Server (Server B) We have GeoIP module installed on Intermediate Server i.e. Server B Server B <--- Server A <---- User When Server B , receives the request from Server A, remote address (remote_addr) for Server B is IP of Server A. Device/User IP is in http_x_forwarded_for field . If http_x_forwarded_for has single IP in it GeoIP module is able to block the IP on the basis of blocking applied. If http_x_forwarded_for has multiple IP i.e IP of User as well as IP of some Proxy Server or IP of Server A, then its not able to block the request. 
Below is the configuration : geoip_country /usr/share/GeoIP/GeoIP.dat; geoip_proxy IP_OF_ServerA; // GeoIP module ignores remote_addr considering it as trusted and refers to X-Forwarded For map $geoip_country_code $allowed_country { default no; US yes; } http_x_forwarded_for = { User IP of UK } - Request from this IP is getting blocked http_x_forwarded_for = { User IP of UK , Proxy IP of US } - This request is not getting blocked http_x_forwarded_for = { User IP of UK , IP of Server A } - This request is not getting blocked It seems nginx GeoIP Module refers to Last IP in http_x_forwarded_for field for applying the blocking method. Is there a way to check for First IP Address in http_x_forwarded_for for blocking the request ? Please suggest Please refer this for Solution in Apache https://dev.maxmind.com/geoip/legacy/mod_geoip2/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278110,278110#msg-278110 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. From nginx-forum at forum.nginx.org Thu Jan 11 15:47:29 2018 From: nginx-forum at forum.nginx.org (pva) Date: Thu, 11 Jan 2018 10:47:29 -0500 Subject: limit_req is not working in virutal location? In-Reply-To: <20180111131136.GQ3127@daoine.org> References: <20180111131136.GQ3127@daoine.org> Message-ID: Francis, thank you for you answer. > On Thu, Jan 11, 2018 at 03:32:07AM -0500, pva wrote: > I'm slightly guessing, so apologies if I mislead you and hopefully > someone else will correct this if necessary... 
> > > Hi. Could you, please, explain why limit_req in @limitspeed location > > is not working in case of redirect to @allowed virtual location and works > > in case I copy @allowed virtual location contents inside @limitspeed? > > I think that "return" happens before, and therefore effectively > overrides "limit_req". This is my guess as well. But then I'm wondering if this limit will be applied in case try_files redirects to @localdvr? try_files /live$uri @localdvr; -- Peter. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278101,278118#msg-278118 From mdounin at mdounin.ru Thu Jan 11 15:57:09 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Jan 2018 18:57:09 +0300 Subject: limit_req is not working in virutal location? In-Reply-To: References: <20180111131136.GQ3127@daoine.org> Message-ID: <20180111155709.GL34136@mdounin.ru> Hello! On Thu, Jan 11, 2018 at 10:47:29AM -0500, pva wrote: > Francis, thank you for you answer. > > > On Thu, Jan 11, 2018 at 03:32:07AM -0500, pva wrote: > > I'm slightly guessing, so apologies if I mislead you and hopefully > > someone else will correct this if necessary... > > > > > Hi. Could you, please, explain why limit_req in @limitspeed location > > > is not working in case of redirect to @allowed virtual location and > works > > > in case I copy @allowed virtual location contents inside @limitspeed? > > > > I think that "return" happens before, and therefore effectively > > overrides "limit_req". > > This is my guess as well. But then I'm wondering if this limit will be > applied in case try_files redirects to @localdvr? > > try_files /live$uri @localdvr; That's because try_files is not a mechanism to "conditionally select configurations"[1] like the rewrite module directives (including "return"), but rather a way to choose which file will be used for request processing. As such, try_files checks happen right before actually returning the response, after various access checks and limits. 
[1] http://nginx.org/en/docs/http/ngx_http_rewrite_module.html -- Maxim Dounin http://mdounin.ru/ From arut at nginx.com Thu Jan 11 17:20:13 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 11 Jan 2018 20:20:13 +0300 Subject: proxy protocol over a plain tcp with ssl In-Reply-To: <385386817f2a44734aa0bdabb669a675.NginxMailingListEnglish@forum.nginx.org> References: <385386817f2a44734aa0bdabb669a675.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180111172013.GD971@Romans-MacBook-Air.local> Hi, On Thu, Jan 11, 2018 at 08:22:47AM -0500, nir wrote: > I'm trying to configure nginx which is behind an haproxy to pass the proxy > protocol over a plain tcp connection. It works well. > When I add ssl to the equation it fails. Below is the nginx configuration > block I'm using. > Is it a configuration issue or might be that it's not at all possible for > nginx to pass proxy protocol with ssl if the connection is not strictly > https? It's not clear what exactly is not working, can you elaborate on that? Just in case, PROXY protocol header is always sent (and expected) by nginx prior to anything else. For SSL connections, PROXY protocol header is sent prior to SSL handshake and is not encrypted. 
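Concretely, the PROXY protocol v1 header is a single plain-text line that appears on the wire before the TLS ClientHello. A minimal sketch of that framing (Python; the addresses and ports are made-up placeholders, and this illustrates the protocol layout, not nginx's code):

```python
# Sketch of the first bytes on the wire when a proxy sends the
# PROXY protocol v1 header before an SSL/TLS handshake.
# Addresses and ports below are illustrative placeholders.

def build_proxy_v1(src_ip, dst_ip, src_port, dst_port):
    # PROXY protocol v1: one CRLF-terminated ASCII line, sent first.
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

def parse_proxy_v1(data):
    # The receiver must consume this line before starting its own
    # protocol (e.g. the TLS handshake) on the rest of the stream.
    line, _, rest = data.partition(b"\r\n")
    parts = line.decode("ascii").split(" ")
    assert parts[0] == "PROXY"
    proto, src_ip, dst_ip, src_port, dst_port = parts[1:]
    return {"proto": proto,
            "src": (src_ip, int(src_port)),
            "dst": (dst_ip, int(dst_port))}, rest

header = build_proxy_v1("192.0.2.10", "198.51.100.1", 51234, 443)
# The unencrypted header precedes the TLS record bytes (0x16 0x03 ...).
info, remainder = parse_proxy_v1(header + b"\x16\x03\x01...")
print(info["src"])
```

A receiver has to strip that line before handing the rest of the stream to its TLS stack, which is why a backend expecting plain TLS on a port fails its handshake if an unexpected PROXY header arrives first.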
> stream { > upstream some_backend { > server some_host:18010; > } > > server { > listen 8010; > listen 8012 ssl; > proxy_pass some_backend; > proxy_protocol on; > > ssl_certificate /etc/ssl/server.crt; > ssl_certificate_key /etc/ssl/server.key; > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers HIGH:!aNULL:!MD5; > ssl_session_cache shared:SSLTCP:20m; > ssl_session_timeout 4h; > ssl_handshake_timeout 30s; > } > } > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278113,278113#msg-278113 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From nginx-forum at forum.nginx.org Thu Jan 11 17:22:56 2018 From: nginx-forum at forum.nginx.org (pva) Date: Thu, 11 Jan 2018 12:22:56 -0500 Subject: limit_req is not working in virutal location? In-Reply-To: <20180111155709.GL34136@mdounin.ru> References: <20180111155709.GL34136@mdounin.ru> Message-ID: <390fad9ae8637bc2e2eabb52d6e91a9a.NginxMailingListEnglish@forum.nginx.org> Hi, Maxim. Maxim Dounin Wrote: > That's because try_files is not a mechanism to "conditionally select > configurations"[1] like the rewrite module directives (including > "return"), but rather a way to choose which file will be used for > request processing. As such, try_files checks happen right before > actually returning the response, after various access checks and > limits. I see, thank you. 
Do I understand correctly, that the following example in documentation https://nginx.ru/en/docs/http/ngx_http_core_module.html#try_files is not strictly correct: --------------------------------------------------------------------- In the following example, location / { try_files $uri $uri/ @drupal; } the try_files directive is equivalent to location / { error_page 404 = @drupal; log_not_found off; } --------------------------------------------------------------------- These directives are not equivalent since limits are not applied in the second case. Right? -- Peter. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278101,278121#msg-278121 From mdounin at mdounin.ru Thu Jan 11 18:13:37 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Jan 2018 21:13:37 +0300 Subject: limit_req is not working in virutal location? In-Reply-To: <390fad9ae8637bc2e2eabb52d6e91a9a.NginxMailingListEnglish@forum.nginx.org> References: <20180111155709.GL34136@mdounin.ru> <390fad9ae8637bc2e2eabb52d6e91a9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180111181337.GO34136@mdounin.ru> Hello! On Thu, Jan 11, 2018 at 12:22:56PM -0500, pva wrote: > Hi, Maxim. > > Maxim Dounin Wrote: > > That's because try_files is not a mechanism to "conditionally select > > configurations"[1] like the rewrite module directives (including > > "return"), but rather a way to choose which file will be used for > > request processing. As such, try_files checks happen right before > > actually returning the response, after various access checks and > > limits. > > I see, thank you. 
> Do I understand correctly, that the following example in
> documentation
> https://nginx.ru/en/docs/http/ngx_http_core_module.html#try_files
> is not strictly correct:
>
> ---------------------------------------------------------------------
> In the following example,
>
> location / {
>     try_files $uri $uri/ @drupal;
> }
>
> the try_files directive is equivalent to
>
> location / {
>     error_page 404 = @drupal;
>     log_not_found off;
> }
>
> ---------------------------------------------------------------------
>
> These directives are not equivalent since limits are not applied in the
> second case. Right?

No, they are actually more or less equivalent. There are minor differences: the try_files version will do an extra syscall, while the error_page version will make further error processing harder, as recursive error pages are disabled by default. But in both cases all access checks and limits will be applied before testing or opening the file.

You probably misunderstood how the error_page version works. It actually tries to return the file requested, and all limits and access checks happen before this. If there is no file, and so open()ing it fails, a 404 Not Found error is generated. The 404 is then handled according to error_page, and the request is internally redirected to the @drupal location for further processing. See http://nginx.org/r/error_page for details.
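In configuration terms, a limit_req in the location applies before either form falls through to @drupal; a minimal sketch (untested, with a made-up zone name, the two variants shown as alternatives):

```nginx
limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

# Variant 1: try_files - the limit is processed before the file check.
location / {
    limit_req zone=perip;
    try_files $uri $uri/ @drupal;
}

# Variant 2: error_page - the limit is likewise processed before open()
# fails and the request is redirected to @drupal. (Use one variant or
# the other; two "location /" blocks cannot coexist.)
location / {
    limit_req zone=perip;
    error_page 404 = @drupal;
    log_not_found off;
}
```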
--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Thu Jan 11 18:21:31 2018
From: nginx-forum at forum.nginx.org (nir)
Date: Thu, 11 Jan 2018 13:21:31 -0500
Subject: proxy protocol over a plain tcp with ssl
In-Reply-To: <20180111172013.GD971@Romans-MacBook-Air.local>
References: <20180111172013.GD971@Romans-MacBook-Air.local>
Message-ID: <7d0229cd222bb5f3e5e0e2aa8b2df63b.NginxMailingListEnglish@forum.nginx.org>

Hi Roman,

I'm trying to pass the proxy protocol to my backend through Nginx when the traffic is encrypted.

This configuration block:

listen 8012;
proxy_pass backend;
proxy_protocol on;

allows me to pass non-encrypted traffic and the proxy protocol.

This configuration block:

listen 8012 proxy_protocol ssl;
proxy_pass backend;

allows me to pass encrypted traffic to my backend, but the proxy protocol is not passed.

This configuration block:

listen 8012 ssl;
proxy_pass backend;
proxy_protocol on;

fails on the SSL handshake.

The last configuration block was my first attempt and I expected it to work. The first two are debug attempts. If you can tell me why the last one doesn't work and how it can be fixed, it will help a lot.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278113,278124#msg-278124

From nginx-forum at forum.nginx.org Fri Jan 12 00:22:39 2018
From: nginx-forum at forum.nginx.org (nir)
Date: Thu, 11 Jan 2018 19:22:39 -0500
Subject: proxy protocol over a plain tcp with ssl
In-Reply-To: <385386817f2a44734aa0bdabb669a675.NginxMailingListEnglish@forum.nginx.org>
References: <385386817f2a44734aa0bdabb669a675.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8cc043cc8284b2c7e1f5fce0c2620f13.NginxMailingListEnglish@forum.nginx.org>

Well, it seems that you need to read the manual with the right perspective...
https://stackoverflow.com/questions/48211083/proxy-protocol-and-ssl

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278113,278128#msg-278128

From peter_booth at me.com Fri Jan 12 05:20:54 2018
From: peter_booth at me.com (Peter Booth)
Date: Fri, 12 Jan 2018 00:20:54 -0500
Subject: 2 of 16 cores are constantly maxing out - how to balance the load?
In-Reply-To: <282982e3ceff40026cf960860378e1b9@acheronmedia.hr>
References: <74CEB432-4A19-4F58-89EB-A4F2A48E3610@lucasrolff.com> <63956095-FCE3-4E80-8CB5-48E6037C763D@lucasrolff.com> <282982e3ceff40026cf960860378e1b9@acheronmedia.hr>
Message-ID:

Perhaps you should use pidstat to validate which processes are running on the two busy cores?

> On Jan 11, 2018, at 6:25 AM, Vlad K. wrote:
>
> On 2018-01-11 11:59, Lucas Rolff wrote:
>> Now, in your case with php-fpm in the mix as well, controlling that
>> can be hard (not sure if you can pin php-fpm processes to cores) -
>> but for nginx and RX/TX queues, it's for sure possible.
>
> Should be doable with cgroups / cpusets, or the CPUAffinity directive in the service unit file (see systemd.exec(5))?
>
> --
> Vlad K.
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From idefix at fechner.net Fri Jan 12 09:05:03 2018
From: idefix at fechner.net (Matthias Fechner)
Date: Fri, 12 Jan 2018 10:05:03 +0100
Subject: nginx 1.12.2 with brotli compression truncates big files
Message-ID:

Dear all,

I have nginx configured to compress files using brotli with the following configuration:

# enable Brotli
brotli on;
brotli_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/json
        application/xml
        application/rss+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;
brotli_comp_level 5;

But if I try to get a bigger javascript file (around 1MB), it is truncated to 236150 bytes; here is a log:

188.210.x.x - - [12/Jan/2018:09:47:56 +0100] "GET /apps/notes/js/vendor/angular/angular.js?v=c097638827ed950b562e4489ee4b6777-0 HTTP/2.0" 200 236150 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0"

If I transfer the same file without br compression:

188.210.x.x - - [12/Jan/2018:09:50:28 +0100] "GET /apps/notes/js/vendor/angular/angular.js?v=c097638827ed950b562e4489ee4b6777-0 HTTP/2.0" 200 1065161 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0"

Is there a known bug with brotli compression in nginx?

Thanks.

Gruß
Matthias

--
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the universe is winning." -- Rich Cook

From zchao1995 at gmail.com Fri Jan 12 09:10:54 2018
From: zchao1995 at gmail.com (tokers)
Date: Fri, 12 Jan 2018 01:10:54 -0800
Subject: nginx 1.12.2 with brotli compression truncates big files
In-Reply-To:
References:
Message-ID:

Hi!

> 188.210.x.x - - [12/Jan/2018:09:47:56 +0100] "GET
> /apps/notes/js/vendor/angular/angular.js?v=c097638827ed950b562e4489ee4b6777-0
> HTTP/2.0" 200 236150 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64;
> rv:58.0) Gecko/20100101 Firefox/58.0"
> If I transfer the same file without br compression:
> 188.210.x.x - - [12/Jan/2018:09:50:28 +0100] "GET
> /apps/notes/js/vendor/angular/angular.js?v=c097638827ed950b562e4489ee4b6777-0
> HTTP/2.0" 200 1065161 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64;
> rv:58.0) Gecko/20100101 Firefox/58.0"

The 236150 and 1065161 are the values of $body_bytes_sent, which is consistent with the effect of compression.

> Is there a known bug with brotli compression in nginx?

Maybe you can check the GitHub issues:
https://github.com/google/ngx_brotli

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From raffael.vogler at yieldlove.com Fri Jan 12 11:10:48 2018
From: raffael.vogler at yieldlove.com (Raffael Vogler)
Date: Fri, 12 Jan 2018 12:10:48 +0100
Subject: 2 of 16 cores are constantly maxing out - how to balance the load?
In-Reply-To:
References:
Message-ID:

> Perhaps you should use pidstat to validate which processes are running on the two busy cores?

Did that and can confirm that CPU 5 and 6 are not exclusively used by networking - but also by nginx and php-fpm.

From raffael.vogler at yieldlove.com Fri Jan 12 11:13:37 2018
From: raffael.vogler at yieldlove.com (Raffael Vogler)
Date: Fri, 12 Jan 2018 12:13:37 +0100
Subject: 2 of 16 cores are constantly maxing out - how to balance the load?
In-Reply-To: <63956095-FCE3-4E80-8CB5-48E6037C763D@lucasrolff.com>
References: <63956095-FCE3-4E80-8CB5-48E6037C763D@lucasrolff.com>
Message-ID: <61aaac25-18bf-e499-6243-e6fc2df90c1e@yieldlove.com>

> So, let's say you have 8 cores and 1 RX and 1 TX queue:
> Core 0: RX queue
> Core 1: TX queue
> Core 2 to 7: nginx processes

What tool or configuration file would I have to use to dedicate cores to processes?

From raffael.vogler at yieldlove.com Fri Jan 12 14:21:44 2018
From: raffael.vogler at yieldlove.com (Raffael Vogler)
Date: Fri, 12 Jan 2018 15:21:44 +0100
Subject: How to correctly dedicate server processes to specific CPU cores?
Message-ID:

I have 16 cores (0 to 15) and the most active processes are:

- nginx
- php-fpm
- eth0-TxRx-0 (paired Tx and Rx queues #1)
- eth0-TxRx-1 (paired Tx and Rx queues #2)

eth0-TxRx-0 is already dedicated to core #5 and eth0-TxRx-1 to core #6. But nginx and php-fpm processes are also being executed on those two cores.

Now I would like to restrict nginx and php-fpm to cores 0-4 and 7-15.
I would like to confirm that this can be correctly achieved with: taskset -c 0-4,7-15 nginx taskset -c 0-4,7-15 php-fpm Is that a safe and sound approach? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Jan 14 09:38:27 2018 From: nginx-forum at forum.nginx.org (hostcanada2020) Date: Sun, 14 Jan 2018 04:38:27 -0500 Subject: How do I use GeoIP module with IP2Location LITE database? Message-ID: <87ea3981f1ec7bedc373c52b97f23209.NginxMailingListEnglish@forum.nginx.org> I have been using GeoIP module in Nginx. To my surprise, Maxmind has decided to remove the latitude and longitude from the GeoLite2 database from 2019 and announced in https://dev.maxmind.com/geoip/geoip2/geolite2/ . I need to use the coordinates information and don't want to pay for the commercial database. IP2Location LITE has free database with coordinates and other information. How can I use it in Nginx instead? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278138,278138#msg-278138 From tom at keepschtum.win Mon Jan 15 01:58:49 2018 From: tom at keepschtum.win (Tom) Date: Mon, 15 Jan 2018 14:58:49 +1300 Subject: OCSP stapling priming and logging In-Reply-To: <20180109140225.GD34136@mdounin.ru> References: <475561515407936@web45j.yandex.ru> <20180109140225.GD34136@mdounin.ru> Message-ID: <663941515981529@web42o.yandex.ru> An HTML attachment was scrubbed... 
URL: From Adam.Cecile at hitec.lu Mon Jan 15 09:25:37 2018 From: Adam.Cecile at hitec.lu (Cecile, Adam) Date: Mon, 15 Jan 2018 09:25:37 +0000 Subject: upstream (tcp stream mode) doesn't detect connecton failure In-Reply-To: <81405375-bb74-f2b3-c187-a1da3f1b7060@hitec.lu> References: <70db0d5cbf85438aac4fbcf17cc2a4a1a6068327e3b6487196c459906dc50ef1AM3PR06MB35350C456DF9A038854F4A09C130@AM3PR06MB353.eurprd06.prod.outlook.com> <20180109134628.GC34136@mdounin.ru> <20180110165455.GG34136@mdounin.ru> <20180110185828.GI34136@mdounin.ru>, <81405375-bb74-f2b3-c187-a1da3f1b7060@hitec.lu> Message-ID: Hello, I tried the following configuration but it's still not working as expected. Client browser receive connection closed and the failing server does not seem to get blacklisted... Any idea ? Thanks map $ssl_preread_server_name $name { default local_https; "" mag_strip_proxy_protocol; maintenance.domain.com mag_strip_proxy_protocol; } upstream mag { server 10.0.0.32:443 max_conns=10 weight=100 max_fails=1 fail_timeout=300; server 10.0.0.33:443 max_conns=10 weight=50 max_fails=1 fail_timeout=300; server 10.0.0.26:443 max_conns=10 weight=10 max_fails=1 fail_timeout=300 backup; server 10.0.0.27:443 max_conns=10 weight=10 max_fails=1 fail_timeout=300 backup; } upstream mag_strip_proxy_protocol { server 127.0.0.1:8080; } upstream local_https { server 127.0.0.1:8443; } log_format stream_routing '$remote_addr [$time_local] ' 'with SNI name "$ssl_preread_server_name" ' 'proxying to "$name" ' '$protocol $status $bytes_sent $bytes_received ' '$session_time'; log_format upstream_routing '$proxy_protocol_addr [$time_local] ' 'proxying to "mag":$upstream_addr ' '$protocol $status $bytes_sent $bytes_received ' '$session_time'; server { listen 443; ssl_preread on; proxy_pass $name; proxy_protocol on; access_log /var/log/nginx/stream_443.log stream_routing; } server { listen 8080 proxy_protocol; proxy_pass mag; access_log /var/log/nginx/mag.log upstream_routing; } 
________________________________
From: nginx on behalf of Adam Cecile
Sent: Wednesday, January 10, 2018 20:02:59
To: nginx at nginx.org; Maxim Dounin
Subject: Re: upstream (tcp stream mode) doesn't detect connecton failure

[This sender failed our fraud detection checks and may not be who they appear to be. Learn about spoofing at http://aka.ms/LearnAboutSpoofing]

On 01/10/2018 07:58 PM, Maxim Dounin wrote:
> Hello!
>
> On Wed, Jan 10, 2018 at 07:18:36PM +0100, Adam Cecile wrote:
>
> [...]
>
>>> Ok, so you use multiple proxy layers to be able to combine
>>> backends which support/need PROXY protocol and ones which do not,
>>> right? This looks like a valid reason, as "proxy_protocol" is
>>> either on or off in a particular server.
>> Yes exactly!
>>
>> Aim of this setup is to do SNI routing to TCP endpoints (with failover)
>> or HTTPS virtual hosts.
>>> If you want nginx to switch to a different backend while
>>> maintaining two proxy layers, consider moving balancing to the
>>> second layer instead. This way balancing will happen where
>>> connection errors can be seen, and so nginx will be able to switch
>>> to a different server on errors.
>> Could you be more specific and show me how to do this with my current
>> configuration? I'm a bit lost...
> At the first level, differentiate between hosts based on
> $ssl_preread_server_name. Proxy to either "local_https" or to a
> second-level server, say 8080. On the second level server, proxy
> to an upstream group with servers you want to balance.
> Example configuration (completely untested):
>
> map $ssl_preread_server_name $name {
>     default local_https;
>     "" second;
>     pub.domain.com second;
> }
>
> upstream local_https {
>     server 127.0.0.1:8443;
> }
>
> upstream second {
>     server 127.0.0.1:8080;
> }
>
> upstream u {
>     server 10.0.0.1:443;
>     server 10.0.0.2:443;
> }
>
> server {
>     listen 443;
>     ssl_preread on;
>     proxy_pass $name;
>     proxy_protocol on;
> }
>
> server {
>     listen 127.0.0.1:8080 proxy_protocol;
>     proxy_pass u;
> }
>
> Logging and timeouts omitted for clarity.

Very nice! I'll give it a try tomorrow morning and let you know, thanks.

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vbart at nginx.com Mon Jan 15 15:39:40 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 15 Jan 2018 18:39:40 +0300
Subject: Unit 0.4 beta release
Message-ID: <2902172.S8XFyiGJkn@vbart-workstation>

Hello,

I'm glad to announce a new beta of NGINX Unit. This is mostly a bugfix release in order to eliminate significant regressions in the previous version.

Changes with Unit 0.4                                        15 Jan 2018

    *) Feature: compatibility with DragonFly BSD.

    *) Feature: "configure php --lib-static" option.

    *) Bugfix: HTTP request body was not passed to application; the bug
       had appeared in 0.3.

    *) Bugfix: HTTP large header buffers allocation and deallocation
       fixed; the bug had appeared in 0.3.

    *) Bugfix: some PHP applications might not work with relative "root"
       path.

You can find links to the source code and precompiled Linux packages here:

- https://unit.nginx.org/installation/

Internally, we use Buildbot to build each commit and run tests on a large number of systems. We also use various static analysis tools to improve code quality and check test coverage. There is ongoing work on a functional test framework that will allow us to avoid such regressions in the future.
And there are plans to add fuzz testing.

You can learn more about recent Unit changes in this detailed blogpost:

- https://www.nginx.com/blog/unit-0-3-beta-release-available-now/

Besides that, please welcome Alexander Borisov, who's joined our Unit dev team today. His first task is going to be adding Perl/PSGI support.

wbr, Valentin V. Bartenev

From lem at uniregistry.link Tue Jan 16 20:23:31 2018
From: lem at uniregistry.link (Luis E. =?utf-8?q?Mu=C3=B1oz?=)
Date: Tue, 16 Jan 2018 12:23:31 -0800
Subject: Logging when proxying SMTP / IMAP / POP sessions
Message-ID:

Hi there,

For an email proxying setup using the open source version of Nginx, I have a configuration stanza along these lines:

```
...
mail {
    server_name mail.DOMAIN;
    auth_http 127.0.0.1:8080/auth;
    proxy_pass_error_message on;

    imap_capabilities "IMAP4rev1" "UIDPLUS";
    imap_auth login plain;

    server {
        protocol imap;
        proxy on;
        ssl on;
        ssl_certificate ...;
        ssl_certificate_key ...;
        listen [::]:993 ssl;
        listen 993 ssl;
    }
    ...
}
...
```

Proxying and authentication works as advertised, no issues. However, I've been unable to locate any piece of documentation indicating how to produce a suitable access log message (i.e., a log tying the mail client to the backend server its connection was sent to). I've tried the `access_log` / `error_log` directives at various places in the configuration, to no avail.

HTTP/HTTPS logging works as usual.

Given how I've not seen any mention of logging in the various documents that describe SMTP/IMAP/POP proxying, I believe the functionality for this does not currently exist. This logging is very important for troubleshooting and support.

Is there something I'm overlooking? If indeed the functionality doesn't exist, would it be possible for me to write and contribute the code required to add said logging?

Thanks in advance.

-lem

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue Jan 16 22:56:55 2018
From: nginx-forum at forum.nginx.org (erick3k)
Date: Tue, 16 Jan 2018 17:56:55 -0500
Subject: MIME Type Problem
Message-ID: <46c17364551b6836b6e480dda55ff311.NginxMailingListEnglish@forum.nginx.org>

Hi, I'd like to fix this error, as I believe it is causing MP4s not to load for some visitors:

Resource interpreted as Document but transferred with MIME type video/mp4: web.com/video.mp4

My config includes the mime types:

# Mime settings
include /etc/nginx/mime.types;
default_type application/octet-stream;

Any ideas why this happens?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278161,278161#msg-278161

From nginx-forum at forum.nginx.org Wed Jan 17 12:33:43 2018
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 17 Jan 2018 07:33:43 -0500
Subject: GeoIP Module for Blocking IP in http_x_forwarded_for
In-Reply-To: <20180111141454.GK34136@mdounin.ru>
References: <20180111141454.GK34136@mdounin.ru>
Message-ID: <5f33c2464df868658ae6960446c416c6.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
>
> On Thu, Jan 11, 2018 at 07:17:20AM -0500, anish10dec wrote:
>
> > GeoIP module is able to block request on basis of remote address
> > which is IP of the remote device or user but not on basis of
> > X-Forwarded-For IP if it has multiple IP address in it.
> >
> > There is Frontend Server (Server A) which receives the request and
> > send it to Intermediate Server (Server B)
> > We have GeoIP module installed on Intermediate Server i.e. Server B
> >
> > Server B <--- Server A <---- User
> >
> > When Server B receives the request from Server A, remote address
> > (remote_addr) for Server B is IP of Server A.
> > Device/User IP is in http_x_forwarded_for field.
> > If http_x_forwarded_for has single IP in it GeoIP module is able to
> > block
> > > > If http_x_forwarded_for has multiple IP i.e IP of User as well as IP > of some > > Proxy Server or IP of Server A, then its not able to block the > request. > > > > Below is the configuration : > > > > geoip_country /usr/share/GeoIP/GeoIP.dat; > > geoip_proxy IP_OF_ServerA; // GeoIP module ignores > remote_addr > > considering it as trusted and refers to X-Forwarded For > > > > map $geoip_country_code $allowed_country { > > default no; > > US yes; > > } > > > > http_x_forwarded_for = { User IP of UK } - Request from this IP is > getting > > blocked > > > > http_x_forwarded_for = { User IP of UK , Proxy IP of US } - This > request > > is not getting blocked > > > > http_x_forwarded_for = { User IP of UK , IP of Server A } - This > request > > is not getting blocked > > > > It seems nginx GeoIP Module refers to Last IP in > http_x_forwarded_for field > > for applying the blocking method. > > This is what X-Forwarded-For header format assumes: IP addresses > are added to the end of the list. As such, the last address is > the only one you can trust in the above configuration. > > That is, a request with > > X-Forwarded-For: IP1, IP2, IP3 > > as got from Server A doesn't mean that you've got a request from > IP1 forwarded to you via various proxies. It instead means that > Server A got the request from IP3 with "X-Forwarded-For: IP1, IP2" > already present in the request. Nothing guarantees that IP1 and > IP2 are real addresses - they can be easily faked by the client, > or they can be internal addresses in the client network. > > > Is there a way to check for First IP Address in http_x_forwarded_for > for > > blocking the request ? 
> If you really want to, you can do so using the
> geoip_proxy_recursive directive and configuring the geoip_proxy to
> trust the whole world, see here:
>
> http://nginx.org/r/geoip_proxy_recursive

geoip_proxy_recursive on;

"If recursive search is disabled then instead of the original client address
that matches one of the trusted addresses, the last address sent in
“X-Forwarded-For” will be used. If recursive search is enabled then instead
of the original client address that matches one of the trusted addresses,
the last non-trusted address sent in “X-Forwarded-For” will be used."

Even after enabling this, the last IP address is used, which again is not able to block the request, as the client IP is in the first position.

> Note though that this is generally not secure as the address can
> be easily forged, see above.

Agreed.

Tried enabling the GeoIP module on Server A, which looks at the remote address field, and it successfully blocks the request. But the problem here is that it is even blocking requests coming from our internal private IP segment, such as 10.0.0.0/27, which is used for monitoring.

Is there a way to declare a few private IPs or an IP range as trusted addresses even if they fall under blocked countries?

Thanks and Regards,
Anish

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278110,278164#msg-278164

From nginx-forum at forum.nginx.org Wed Jan 17 12:36:43 2018
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 17 Jan 2018 07:36:43 -0500
Subject: [IE] GeoIP Module for Blocking IP in http_x_forwarded_for
In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432AE282A@STLEISEXCMBX3.eis.equifax.com>
References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AE282A@STLEISEXCMBX3.eis.equifax.com>
Message-ID: <9915f513d7ba92391efd207ca97ebbae.NginxMailingListEnglish@forum.nginx.org>

Thanks ... We need the Client IP on Server B as well for analytics.
Tried by enabling the Geo IP module on Server A which looks after remote address field and successfully blocks the request. But the problem here is that it is even blocking the requests coming from our Internal Private IP Segment such as 10.0.0.0/27 which are used for monitoring . Is there a way to declare few Private IP's or IP Range as trusted address even though if they are coming under blocked countries ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278117,278165#msg-278165 From mdounin at mdounin.ru Wed Jan 17 12:38:35 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Jan 2018 15:38:35 +0300 Subject: Logging when proxying SMTP / IMAP / POP sessions In-Reply-To: References: Message-ID: <20180117123835.GU34136@mdounin.ru> Hello! On Tue, Jan 16, 2018 at 12:23:31PM -0800, Luis E. Mu?oz wrote: [...] > Proxying and authentication works as advertised, no issues. However, > I've been unable to locate any piece of documentation indicating how to > produce a suitable access log message (ie, a log tying the mail client > with the backend server its connection was sent to). I've tried the > `access_log` / `error_log` directives at various places in the > configuration, to no avail. > > HTTP/HTTPS logging works as usual. > > Given how I've not seen any mention to logging in the various documents > that describe SMTP/IMAP/POP proxying, I believe the functionality for > this does not currently exist. This logging is very important for > troubleshooting and support. > > Is there something I'm overlooking? If indeed the functionality doesn't > exist, would it be possible for me to write and contribute the code > required to add said logging? The error_log directive as available in the core module works for mail module, too. If needed, you can set a specific error log in mail{} or server{} contexts. See http://nginx.org/r/error_log for details. There is no access log support for mail. 
Instead, basic mail session events are logged to the error log at the "info" level.

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Wed Jan 17 14:40:26 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 17 Jan 2018 17:40:26 +0300
Subject: GeoIP Module for Blocking IP in http_x_forwarded_for
In-Reply-To: <5f33c2464df868658ae6960446c416c6.NginxMailingListEnglish@forum.nginx.org>
References: <20180111141454.GK34136@mdounin.ru> <5f33c2464df868658ae6960446c416c6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180117144025.GV34136@mdounin.ru>

Hello!

On Wed, Jan 17, 2018 at 07:33:43AM -0500, anish10dec wrote:

[...]

> > > Is there a way to check for First IP Address in http_x_forwarded_for
> > > for blocking the request ?
> >
> > If you really want to, you can do so using the
> > geoip_proxy_recursive directive and configuring the geoip_proxy to
> > trust the whole world, see here:
> >
> > http://nginx.org/r/geoip_proxy_recursive
>
> geoip_proxy_recursive on;
>
> "If recursive search is disabled then instead of the original client address
> that matches one of the trusted addresses, the last address sent in
> “X-Forwarded-For” will be used. If recursive search is enabled then instead
> of the original client address that matches one of the trusted addresses,
> the last non-trusted address sent in “X-Forwarded-For” will be used."

The "configuring the geoip_proxy to trust the whole world" part of the quote above is important. That is, you have to do something like this:

geoip_proxy 0.0.0.0/0;
geoip_proxy_recursive on;

This way all addresses in the X-Forwarded-For header will be trusted, and nginx will use the first address in the X-Forwarded-For header. Note again that this is not secure as the address can be easily forged.
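Pulled together with the map from earlier in this thread, the whole "trust the whole world" setup might look like this (an untested sketch; the 403 return is one possible way to act on the map):

```nginx
geoip_country /usr/share/GeoIP/GeoIP.dat;
geoip_proxy 0.0.0.0/0;       # trust X-Forwarded-For from any client (insecure)
geoip_proxy_recursive on;    # use the first untrusted address in the chain

map $geoip_country_code $allowed_country {
    default no;
    US      yes;
}

server {
    listen 80;

    if ($allowed_country = no) {
        return 403;          # reject requests geolocated outside the allowed countries
    }
}
```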
> > Note though that this is generally not secure as the address can > > be easily forged, see above. > > Agree . > > Tried by enabling the Geo IP module on Server A which looks after remote > address field and successfully blocks the request. > But the problem here is that it is even blocking the requests coming from > our Internal Private IP Segment such as 10.0.0.0/27 which are used for > monitoring . > > Is there a way to declare few Private IP's or IP Range as trusted address > even though if they are coming under blocked countries ? If you are connecting to the server directly from the private range, you may want to review your blocking policy. Private addresses shouldn't have a country associated with them, so you must be blocking them for some other reasons. If you are connecting to the server via a proxy server in a otherwise blocked country, you may want to configure nginx to trust this specific server using the geoip_proxy directive. This should be more secure than trusting the whole world. -- Maxim Dounin http://mdounin.ru/ From lem at uniregistry.link Wed Jan 17 18:02:34 2018 From: lem at uniregistry.link (Luis E. =?utf-8?q?Mu=C3=B1oz?=) Date: Wed, 17 Jan 2018 10:02:34 -0800 Subject: Logging when proxying SMTP / IMAP / POP sessions In-Reply-To: <20180117123835.GU34136@mdounin.ru> References: <20180117123835.GU34136@mdounin.ru> Message-ID: <00211A71-5DCE-4A32-8996-26EBA66F33F5@uniregistry.link> On 17 Jan 2018, at 4:38, Maxim Dounin wrote: >> On Tue, Jan 16, 2018 at 12:23:31PM -0800, Luis E. Mu?oz wrote: >> ? >> Given how I've not seen any mention to logging in the various >> documents >> that describe SMTP/IMAP/POP proxying, [?] > The error_log directive as available in the core module works for > mail module, too. If needed, you can set a specific error log in > mail{} or server{} contexts. See http://nginx.org/r/error_log for > details. > > There is no access log support for mail. 
Instead, basic mail session events are logged to the error log at the "info" level. Thank you very much for your response. Indeed, the error_log directive produces suitable logging that includes both ends of the proxied connection. Best regards Luis Muñoz Director, Registry Operations ____________________________ http://www.uniregistry.link/ 2161 San Joaquin Hills Road Newport Beach, CA 92660 Office +1 949 706 2300 x 4242 lem at uniregistry.link -------------- next part -------------- An HTML attachment was scrubbed... URL: From yar at nginx.com Thu Jan 18 12:11:53 2018 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Thu, 18 Jan 2018 15:11:53 +0300 Subject: proxy protocol over a plain tcp with ssl In-Reply-To: <8cc043cc8284b2c7e1f5fce0c2620f13.NginxMailingListEnglish@forum.nginx.org> References: <385386817f2a44734aa0bdabb669a675.NginxMailingListEnglish@forum.nginx.org> <8cc043cc8284b2c7e1f5fce0c2620f13.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 12 Jan 2018, at 03:22, nir wrote: > Well, seems that you need to read the manual with the right perspective... > https://stackoverflow.com/questions/48211083/proxy-protocol-and-ssl Hi! The chapter about the PROXY protocol in the Admin Guide has been updated recently: https://www.nginx.com/resources/admin-guide/proxy-protocol/ Best regards, yar [...] From scoulibaly at gmail.com Thu Jan 18 16:46:49 2018 From: scoulibaly at gmail.com (Sékine Coulibaly) Date: Thu, 18 Jan 2018 17:46:49 +0100 Subject: Direct server return commands (tc filter) on Nginx blog Message-ID: Hi, I'm using this resource ( https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/) to set up a UDP load balancer, with DSR and Origin NAT. 
Everything went fine in the walkthrough until I reached the traffic control stuff: tc qdisc add dev eth0 root handle 10: htb tc filter add dev eth0 parent 10: protocol ip prio 10 u32 match ip src 172.16.0.11 match ip sport 53 action nat egress 172.16.0.11 192.168.99.10 The second command fails with: Illegal "match" From what I can read here ( http://man7.org/linux/man-pages/man8/tc-u32.8.html), the syntax looks correct though. Of course I replaced 172.16.0.11 with the actual IP of the upstream I'm configuring, and 192.168.99.10 with the IP of the host hosting the Nginx. The interface name is eth0. I'm running Ubuntu 16.04.02 LTS. Is the "tc filter" command correct, or am I doing something wrong? Thank you Sekine -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruvikal at gmail.com Thu Jan 18 18:01:44 2018 From: ruvikal at gmail.com (P lva) Date: Thu, 18 Jan 2018 13:01:44 -0500 Subject: Nginx rewrites URL Message-ID: Hello Everyone, I'm trying to get an nginx server configured as a reverse proxy serving requests to a few application servers upstream. Server { server_name app1.company.domain.com; listen 80; location / { proxy_pass http://appserver1:app1port/; proxy_pass_request_body on; proxy_intercept_errors on; error_page 301 302 307 = @handle_redirect; } location @handle_redirect { set $saved_redirect_location '$upstream_http_location'; proxy_pass $saved_redirect_location; } I'm having two challenges with this. 1) This doesn't work with the firewalls. I can get to it only if I open appserver1 to accept everyone on that app1port. I tried replacing the headers but none of them work. 2) This configuration works when I turn off the firewall, but the address in the address bar gets rewritten to http://appserver1:app1port which is a dealbreaker as we definitely don't want to have the upstream server appear in the address bar. Also these servers (nginx server and the upstream app server) aren't connected to the same DNS as the client. 
So neither of these servers can resolve app1.company.domain.com. I'm not sure where the problem lies, and would really appreciate any pointers. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Jan 19 06:11:20 2018 From: nginx-forum at forum.nginx.org (blason) Date: Fri, 19 Jan 2018 01:11:20 -0500 Subject: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? Message-ID: Hi Guys, Keen to know if there was any success rate for implementing Nginx as a reverse proxy for Sharepoint? I mean, I did implement it; however, I am finding issues with sub-sites and wanted to know if there is any solution for the same? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278193,278193#msg-278193 From pchychi at gmail.com Fri Jan 19 06:13:24 2018 From: pchychi at gmail.com (Payam Chychi) Date: Fri, 19 Jan 2018 06:13:24 +0000 Subject: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? In-Reply-To: References: Message-ID: On Thu, Jan 18, 2018 at 10:11 PM blason wrote: > Hi Guys, > > Keen to know if there was any success rate for implementing Nginx as a > reverse proxy for Sharepoint? I mean I did implement however I am finding > issues with Sub-sites and wanted to know if there is any solution for the > same? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,278193,278193#msg-278193 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > What's the problem? I've had them running in production for mission-critical stuff for over 4 years, rock solid! -- Payam Tarverdyan Chychi Network Security Specialist / Network Engineer -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Fri Jan 19 06:18:07 2018 From: nginx-forum at forum.nginx.org (blason) Date: Fri, 19 Jan 2018 01:18:07 -0500 Subject: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? In-Reply-To: References: Message-ID: Wow man!! Thanks. I am struggling with the configuration: sub-sites do not show anything, only a blank page, while the front page opens successfully. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278193,278195#msg-278195 From pchychi at gmail.com Fri Jan 19 06:21:06 2018 From: pchychi at gmail.com (Payam Chychi) Date: Fri, 19 Jan 2018 06:21:06 +0000 Subject: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? In-Reply-To: References: Message-ID: On Thu, Jan 18, 2018 at 10:18 PM blason wrote: > Wow man!! Thanks I am struggling with configuration as Subsites does not > show anything it shows blank page i.e only for blank page while Front page > gets open successfully. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,278193,278195#msg-278195 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > What does your config look like? -- Payam Tarverdyan Chychi Network Security Specialist / Network Engineer -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Jan 19 06:23:17 2018 From: nginx-forum at forum.nginx.org (blason) Date: Fri, 19 Jan 2018 01:23:17 -0500 Subject: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? In-Reply-To: References: Message-ID: Can I DM you or send it to your email address? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278193,278197#msg-278197 From Jason.Whittington at equifax.com Fri Jan 19 15:14:45 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Fri, 19 Jan 2018 15:14:45 +0000 Subject: [IE] Re: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? In-Reply-To: References: Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432AF53D0@STLEISEXCMBX3.eis.equifax.com> I haven't done it for SharePoint but I have done it for TFS. If I had to guess, you are probably being bitten by NTLM. NTLM authentication authenticates connections instead of requests, and this somewhat contradicts the HTTP protocol, which is expected to be stateless. As a result it doesn't generally work through proxies, including nginx. NGINX can support it though; you need to use the "ntlm" directive. Below is a [stripped-down] example of how I have it set up in front of TFS. I would think Sharepoint would be very similar. This has worked very reliably for like a year. upstream MyNtlmService { zone backend; server 192.168.0.1:8080; server 192.168.0.2:8080; #See http://stackoverflow.com/questions/10395807/nginx-close-upstream-connection-after-request keepalive 64; #See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm ntlm; } server { listen 80; location / { proxy_read_timeout 60s; #http://stackoverflow.com/questions/21284935/nginx-reverse-proxy-with-windows-authentication-that-uses-ntlm proxy_http_version 1.1; proxy_set_header Connection ""; proxy_pass http://MyNtlmService/; } } Jason -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of blason Sent: Friday, January 19, 2018 12:18 AM To: nginx at nginx.org Subject: [IE] Re: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? Wow man!! Thanks I am struggling with configuration as Subsites does not show anything it shows blank page i.e only for blank page while Front page gets open successfully. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278193,278195#msg-278195 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax® is a registered trademark of Equifax Inc. All rights reserved. From pchychi at gmail.com Fri Jan 19 16:06:08 2018 From: pchychi at gmail.com (Payam Chychi) Date: Fri, 19 Jan 2018 16:06:08 +0000 Subject: [IE] Re: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432AF53D0@STLEISEXCMBX3.eis.equifax.com> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AF53D0@STLEISEXCMBX3.eis.equifax.com> Message-ID: On Fri, Jan 19, 2018 at 7:14 AM Jason Whittington < Jason.Whittington at equifax.com> wrote: > I haven't done it for sharepoint but I have done it for TFS. If I had to > guess you are probably being bitten by NTLM. > > NTLM authentication authenticates connections instead of requests, and > this is somewhat contradicts HTTP protocol, which is expected to be > stateless. As a result it doesn't generally work though proxies, including > nginx. > > NGINX can support it though, you need to use the "ntlm" directive. Below > is an [stripped down] example of how I have it set up in front of TFS. I > would think Sharepoint would be very similar. This has worked very > reliably for like a year. 
> > upstream MyNtlmService { > zone backend; > server 192.168.0.1:8080; > server 192.168.0.2:8080; > #See > http://stackoverflow.com/questions/10395807/nginx-close-upstream-connection-after-request > keepalive 64; > #See > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm > ntlm; > } > server { > listen 80; > > location / { > proxy_read_timeout 60s; > # > http://stackoverflow.com/questions/21284935/nginx-reverse-proxy-with-windows-authentication-that-uses-ntlm > proxy_http_version 1.1; > proxy_set_header Connection ""; > > proxy_pass http://MyNtlmService/; > } > } > > > Jason > > > -----Original Message----- > From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of blason > Sent: Friday, January 19, 2018 12:18 AM > To: nginx at nginx.org > Subject: [IE] Re: Has anyone implemented Nginx as a reverse proxy with > Microsoft Sharepoint? > > Wow man!! Thanks I am struggling with configuration as Subsites does not > show anything it shows blank page i.e only for blank page while Front page > gets open successfully. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,278193,278195#msg-278195 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > This message contains proprietary information from Equifax which may be > confidential. If you are not an intended recipient, please refrain from any > disclosure, copying, distribution or use of this information and note that > such actions are prohibited. If you have received this transmission in > error, please notify by e-mail postmaster at equifax.com. Equifax® is a > registered trademark of Equifax Inc. All rights reserved. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Yep, the problem is/will be ntlm. Try what Jason mentioned, and you can drop me an email if you like off-list - pchychi . At . 
Gmail > -- Payam Tarverdyan Chychi Network Security Specialist / Network Engineer -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at 2xlp.com Fri Jan 19 18:04:03 2018 From: nginx at 2xlp.com (jonathan vanasco) Date: Fri, 19 Jan 2018 13:04:03 -0500 Subject: what is allowed within an evil "if", and multiple proxy failovers Message-ID: we have a shared macro/include used for letsencrypt verification, which proxies requests to the `/.well-known` directory onto an upstream provider. the macro uses an old flag/semaphore based technique to toggle if the route is enabled or not, so we can disable it when not needed. it works great. location /.well-known/acme-challenge { if (!-f /etc/nginx/_flags/letsencrypt-public) { rewrite ^.*$ @letsencrypt_public_503 last; } include /path/to/proxy_config; } location @letsencrypt_public_503 { internal; return 503; } recently, letsencrypt dropped support for TLS authentication and now requires port 80. this has created a problem, because we run multiple ACME clients and were able to segment the ones that hit our catchall/default servers based on the protocol. while many of our configs can use the existing files, a few need to support both systems in a failover situation: the current working version works around nginx config syntax to get around this: 
location /.well-known/acme-challenge { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; set $acme_options ""; if (-f /etc/nginx/_flags/client_a) { set $acme_options "client_a"; } if (-f /etc/nginx/_flags/client_b) { set $acme_options "${acme_options}.client_b"; } if ($acme_options = "client_a") { proxy_pass http://127.0.0.1:81; break; } if ($acme_options = "client_a.client_b") { proxy_pass http://127.0.0.1:81; break; } if ($acme_options = ".client_b") { proxy_pass http://127.0.0.1:6501; break; } rewrite ^.*$ @acme_503 last; } i have some problems with this approach that I'd like to avoid, and wonder if anyone has suggestions: 1. i'm lucky that proxy_set_header has shared info, as it's not allowed within an "if" block. 2. i repeat the proxy_pass info a lot here, and it also exists in some other macros which are shared often. there are many places to update. there were other things I didn't like, but I forgot. does anyone have a better suggestion than my current implementation? it works for now, but it's not modular or clean. From nginx-forum at forum.nginx.org Sat Jan 20 05:29:44 2018 From: nginx-forum at forum.nginx.org (blason) Date: Sat, 20 Jan 2018 00:29:44 -0500 Subject: [IE] Re: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432AF53D0@STLEISEXCMBX3.eis.equifax.com> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AF53D0@STLEISEXCMBX3.eis.equifax.com> Message-ID: <09194d74a933f198c0cd4a08479e213e.NginxMailingListEnglish@forum.nginx.org> Hi there, I guess it was not an issue with NTLM: I am successfully able to authenticate with SharePoint, and the front page loads successfully, while sub-site pages do not load up, and I am not able to figure out the issue. Will soon share the config and logs; I would really appreciate it if help can be offered to eliminate the issue. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278193,278202#msg-278202 From pchychi at gmail.com Sat Jan 20 05:51:20 2018 From: pchychi at gmail.com (Payam Chychi) Date: Sat, 20 Jan 2018 05:51:20 +0000 Subject: [IE] Re: Has anyone implemented Nginx as a reverse proxy with Microsoft Sharepoint? In-Reply-To: <09194d74a933f198c0cd4a08479e213e.NginxMailingListEnglish@forum.nginx.org> References: <995C5C9AD54A3C419AF1C20A8B6AB9A432AF53D0@STLEISEXCMBX3.eis.equifax.com> <09194d74a933f198c0cd4a08479e213e.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Fri, Jan 19, 2018 at 9:30 PM blason wrote: > Hi there, > > I guess it was not an issue with NTLM where I am successfully able to > authenticate with sharepoint the front page loads successfully while > sub-site pages does not load up and I am not able to figure out the issue. > > Will soon share the config and logs I would really appreciate if help can > be > offered to eliminate the issue. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,278193,278202#msg-278202 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > The best way to troubleshoot this is to do a packet capture using tcpdump and see what happens when the request hits the nginx server. You will see what and how the packets are sent/received. Also, when the sub-sites don't work, what do you see in the http header? All these data are really important for troubleshooting. Feel free to send me an email pchychi . At . Gmail, happy to help troubleshoot it. Cheers Payam -- Payam Tarverdyan Chychi Network Security Specialist / Network Engineer -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michael.friscia at yale.edu Sat Jan 20 12:47:29 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Sat, 20 Jan 2018 12:47:29 +0000 Subject: Operation not permitted) while reading upstream - error log is filled with tons of these Message-ID: <9E7D58B3-9260-48D5-AC6C-0936DE1B39EE@yale.edu> I'm stumped and have exhausted pretty much every google search I could come up with on this. I have NGINX setup for caching. I'm using Ubuntu 17.10 and Nginx 1.12.1. Everything appears to be working just fine, but the error log is getting filled with errors like this 2018/01/20 07:37:44 [crit] 122598#122598: *91 chmod() "/etc/nginx/cache/nginx2/main/a5/d8/e0/72677057b97aef4eee8e619c49e0d8a5.0000000013" failed (1: Operation not permitted) while reading upstream, client: 35.226.23.240, server: *.---.com, request: "GET /search/----.profile HTTP/1.1", upstream: "http://---.---.---.---:80/search/----.profile", host: "---.com" From what I can tell, everything is working. Nginx must have permission: I deleted the cache folder and let nginx recreate it on restart. The actual setup is that this is on Azure; the storage location /etc/nginx/cache/nginx2 is a mounted file share from Azure storage. I then created the folder /main/ inside that; this is the folder I deleted and on restart nginx created. Which makes me think that there is no permission problem but instead some other underlying problem that is throwing this false error. The cache gzip files are being made and the folder path for 2:2:2 is being created. Has anyone seen/solved this before? ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 544-3282 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arozyev at nginx.com Sat Jan 20 13:26:37 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Sat, 20 Jan 2018 16:26:37 +0300 Subject: Operation not permitted) while reading upstream - error log is filled with tons of these In-Reply-To: <9E7D58B3-9260-48D5-AC6C-0936DE1B39EE@yale.edu> References: <9E7D58B3-9260-48D5-AC6C-0936DE1B39EE@yale.edu> Message-ID: <286EE10D-C25F-4665-81DC-B7E91DB1E99F@nginx.com> As usual, try disabling selinux if it's active. If that helps, investigate audit log.. br, Aziz. > On 20 Jan 2018, at 15:47, Friscia, Michael wrote: > > I'm stumped and have exhausted pretty much every google search I could come up with on this. I have NGINX setup for caching. I'm using Ubuntu 17.10 and Nginx 1.12.1. Everything appears to be working just fine by the error log is getting filled with errors like this > > 2018/01/20 07:37:44 [crit] 122598#122598: *91 chmod() "/etc/nginx/cache/nginx2/main/a5/d8/e0/72677057b97aef4eee8e619c49e0d8a5.0000000013" failed (1: Operation not permitted) while reading upstream, client: 35.226.23.240, server: *.---.com, request: "GET /search/----.profile HTTP/1.1", upstream: "http://---.---.---.---:80/search/----.profile", host: "---.com" > > From what I can tell, everything is working. Nginx must have permission, I deleted the cache folder and let it make on restart. > > The actual setup is that this is on Azure, the storage location /etc/nginx/cache/nginx2 is a mounted file share from Azure storage. I then created the folder /main/ inside that, this is the folder I deleted and on restart nginx created. Which makes me think that there is no permission problem but instead some other underlying problem that is throwing this false error. The cache gzip files are being made and the folder path for 2:2:2 is being created. > > Has anyone seen/solved this before? > > ___________________________________________ > Michael Friscia > Office of Communications > Yale School of Medicine > (203) 737-7932 - office > (203) 544-3282 - 
mobile > http://web.yale.edu > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From michael.friscia at yale.edu Sat Jan 20 13:35:15 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Sat, 20 Jan 2018 13:35:15 +0000 Subject: Operation not permitted) while reading upstream - error log is filled with tons of these In-Reply-To: <286EE10D-C25F-4665-81DC-B7E91DB1E99F@nginx.com> References: <9E7D58B3-9260-48D5-AC6C-0936DE1B39EE@yale.edu> <286EE10D-C25F-4665-81DC-B7E91DB1E99F@nginx.com> Message-ID: <5299C8FC-007F-428F-8ADF-62E3AEDFEE6B@yale.edu> Ok, found it. It was a permission issue. I did not have the uid and gid in the fstab; I set them to the www-data user that nginx is running as, and the errors went away. I'm just not sure I understand how it made the cache files/directories in the first place. But clearly this is a problem when the cache is on a CIFS mount. I reverted to have the cache write to a local drive and there was no problem. So I ended up on this page which led to the solution https://stackoverflow.com/questions/21318573/permissions-issue-on-cifs-mount-between-ubuntu-and-mavericks Thank you for the suggestion, that's actually where I started after you mentioned it to end up where I did. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 544-3282 - mobile http://web.yale.edu On 1/20/18, 8:26 AM, "nginx on behalf of Aziz Rozyev" wrote: As usual, try disabling selinux if it's active. If that helps, investigate audit log.. br, Aziz. > On 20 Jan 2018, at 15:47, Friscia, Michael wrote: > > I'm stumped and have exhausted pretty much every google search I could come up with on this. I have NGINX setup for caching. I'm using Ubuntu 17.10 and Nginx 1.12.1. 
Everything appears to be working just fine by the error log is getting filled with errors like this > > 2018/01/20 07:37:44 [crit] 122598#122598: *91 chmod() "/etc/nginx/cache/nginx2/main/a5/d8/e0/72677057b97aef4eee8e619c49e0d8a5.0000000013" failed (1: Operation not permitted) while reading upstream, client: 35.226.23.240, server: *.---.com, request: "GET /search/----.profile HTTP/1.1", upstream: "http://---.---.---.---:80/search/----.profile", host: "---.com" > > From what I can tell, everything is working. Nginx must have permission, I deleted the cache folder and let it make on restart. > > The actual setup is that this is on Azure, the storage location /etc/nginx/cache/nginx2 is a mounted file share from Azure storage. I then created the folder /main/ inside that, this is the folder I deleted and on restart nginx created. Which makes me think that there is no permission problem but instead some other underlying problem that is throwing this false error. The cache gzip files are being made and the folder path for 2:2:2 is being created. > > Has anyone seen/solved this before? > > ___________________________________________ > Michael Friscia > Office of Communications > Yale School of Medicine > (203) 737-7932 - office > (203) 544-3282 - 
mobile > http://web.yale.edu > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Sat Jan 20 13:58:23 2018 From: francis at daoine.org (Francis Daly) Date: Sat, 20 Jan 2018 13:58:23 +0000 Subject: Operation not permitted) while reading upstream - error log is filled with tons of these In-Reply-To: <9E7D58B3-9260-48D5-AC6C-0936DE1B39EE@yale.edu> References: <9E7D58B3-9260-48D5-AC6C-0936DE1B39EE@yale.edu> Message-ID: <20180120135823.GA3018@daoine.org> On Sat, Jan 20, 2018 at 12:47:29PM +0000, Friscia, Michael wrote: Hi there, > 2018/01/20 07:37:44 [crit] 122598#122598: *91 chmod() "/etc/nginx/cache/nginx2/main/a5/d8/e0/72677057b97aef4eee8e619c49e0d8a5.0000000013" failed (1: Operation not permitted) while reading upstream, client: 35.226.23.240, server: *.---.com, request: "GET /search/----.profile HTTP/1.1", upstream: "http://---.---.---.---:80/search/----.profile", host: "---.com" > The actual setup is that this is on Azure, the storage location /etc/nginx/cache/nginx2 is a mounted file share from Azure storage. How precisely is the file share mounted? The error indication is that chmod() failed. Does chmod() work on that filesystem? Can you change the mount options such that chmod() will return success, even if it does not actually change the permissions of the file? 
https://superuser.com/questions/744384/allow-chmod-on-cifs-mount suggests that "noperm" in the mount options had the desired effect for one user; note that that does not seem (to me) to match what the "mount.cifs" documentation says, so you'll want to do your own investigation and testing before relying on it. > Has anyone seen/solved this before? I haven't. So the above is "questions" rather than "suggestions how to fix". Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Jan 20 14:16:53 2018 From: francis at daoine.org (Francis Daly) Date: Sat, 20 Jan 2018 14:16:53 +0000 Subject: Nginx rewrites URL In-Reply-To: References: Message-ID: <20180120141653.GB3018@daoine.org> On Thu, Jan 18, 2018 at 01:01:44PM -0500, P lva wrote: Hi there, > I'm trying to get nginx server configured as a reverse proxy serving > requests to few application servers upstream. > > Server { > server_name app1.company.domain.com; > listen 80; > > location / { > proxy_pass http://appserver1:app1port/; I think it is unrelated; but you might be happier with that final "/" not being there. > proxy_pass_request_body on; > proxy_intercept_errors on; > error_page 301 302 307 = @handle_redirect; I think this bit is related: why have it? In general, you want the redirect-response to get to the client, so that the client can make the correct next request directly. The thing you want to arrange, though, is that the Location: in the response refers to a Host: that the client can access, and that is "this nginx server". You might be able to arrange that by using proxy_set_header Host $server_name; and/or some version of proxy_redirect, perhaps like proxy_redirect http://appserver1:app1port/ /; See http://nginx.org/r/proxy_set_header http://nginx.org/r/proxy_redirect for details. > 1) This doesn't work with the firewalls. I can get to it only if I open > appserver1 to accept everyone on that app1port. I tried replacing the > headers but none of them work. 
If you can show the specific example config that you used, and what result you got, that might be useful. (If the above suggestions work, then this part is unnecessary, of course.) > 2) This configuration works when I turn off the firewall, but the address > in the address bar gets rewritten to http://appserver1:app1port which is a > a dealbreaker as we definitely don't want to have the upstream server > appear in the address bar. The client should never see http://appserver1:app1port. nginx should (be configured to) edit it from http response headers before sending to the client; but the app server should (be configured to) make sure that it never appears in the http response body. > Also these servers (nginx server and the upstream app server) aren't > connected to the same DNS as the client. So neither of these servers can > resolve app1.company.domain.com That should not matter. > I'm not sure where the problem lies, and would really appreciate any > pointers. If you can see one request from the client to nginx, the matching request from nginx to upstream, and the two responses, you should be able to see where "http://appserver1:app1port" is introduced into the response. That's the place to look to make the change. f -- Francis Daly francis at daoine.org From michael.friscia at yale.edu Sat Jan 20 14:31:38 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Sat, 20 Jan 2018 14:31:38 +0000 Subject: Operation not permitted) while reading upstream - error log is filled with tons of these In-Reply-To: <20180120135823.GA3018@daoine.org> References: <9E7D58B3-9260-48D5-AC6C-0936DE1B39EE@yale.edu> <20180120135823.GA3018@daoine.org> Message-ID: <472EB7B1-91AA-47DF-8B70-EA33A4528866@yale.edu> I tried applying the noperm flag on my test environment, but once I applied the other fix of adding gid/uid to the cifs mount, the chmod() error went away. Now I face a new error that happens infrequently, which is that rename() fails. 
I'm guessing the real problem is that without a lot of additional configuration, the nginx cache may not work properly with a cifs mount. My problem is that I've forgotten more Linux than I remember, so I'm stumbling through these errors as they come. But if anyone has seen the rename() error and knows a fix, that would help. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 544-3282 - mobile http://web.yale.edu On 1/20/18, 8:58 AM, "nginx on behalf of Francis Daly" wrote: On Sat, Jan 20, 2018 at 12:47:29PM +0000, Friscia, Michael wrote: Hi there, > 2018/01/20 07:37:44 [crit] 122598#122598: *91 chmod() "/etc/nginx/cache/nginx2/main/a5/d8/e0/72677057b97aef4eee8e619c49e0d8a5.0000000013" failed (1: Operation not permitted) while reading upstream, client: 35.226.23.240, server: *.---.com, request: "GET /search/----.profile HTTP/1.1", upstream: "http://---.---.---.---:80/search/----.profile", host: "---.com" > The actual setup is that this is on Azure, the storage location /etc/nginx/cache/nginx2 is a mounted file share from Azure storage. How precisely is the file share mounted? The error indication is that chmod() failed. Does chmod() work on that filesystem? Can you change the mount options such that chmod() will return success, even if it does not actually change the permissions of the file? 
https://superuser.com/questions/744384/allow-chmod-on-cifs-mount suggests that "noperm" in the mount options had the desired effect for one user; note that that does not seem (to me) to match what the "mount.cifs" documentation says, so you'll want to do your own investigation and testing before relying on it. > Has anyone seen/solved this before? I haven't. So the above is "questions" rather than "suggestions how to fix". Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From michael.friscia at yale.edu Sat Jan 20 22:34:21 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Sat, 20 Jan 2018 22:34:21 +0000 Subject: failed (13: Permission denied) while reading upstream on rename() Message-ID: <67C0BEC1-814B-4CBE-BA3A-F0FF502F3467@yale.edu> Earlier today I solved a chmod() problem in the cache and now I'm faced with this one which happens much less frequently. I don't think permission is the problem, I think it's an Nginx configuration I failed to set correctly. In the error I see that the rename() failure was to change: eedd07f7aef45a5ed22f748a31724947.0000002528 to eedd07f7aef45a5ed22f748a31724947 This seems to happen on some pages and then continues to happen if I browse that page but most pages are not affected. Has anyone seen this before?
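One nginx-side detail that can be worth ruling out here: by default nginx writes the upstream response to a temporary file under proxy_temp_path and then rename()s it into the cache. If the temp path and the cache sit on different filesystems (or different mounts of the same share), that rename becomes a cross-device move. A sketch of one mitigation, keeping temp files inside the cache directory; the path echoes the error message earlier in the thread, but the zone name, sizes and levels are placeholders, not taken from the poster's config:

```
# nginx.conf (http context) -- write temp files directly in the cache
# directory so the final rename() stays within one filesystem
proxy_cache_path /etc/nginx/cache/nginx2/main
                 levels=2:2:2 keys_zone=main:10m
                 inactive=60m use_temp_path=off;
```

Whether this applies depends on where the rename actually crosses a mount boundary, which the error message alone does not show.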
___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 544-3282 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nkagade at gmail.com Mon Jan 22 05:33:08 2018 From: nkagade at gmail.com (Nayan Kagde) Date: Mon, 22 Jan 2018 11:03:08 +0530 Subject: Required help. Message-ID: Hi, I am using nginx on an aws server and I am getting an error as follows; please help me to resolve this. [image: Inline image 1] Thanks & Regards, Nayank -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 79412 bytes Desc: not available URL: From zchao1995 at gmail.com Mon Jan 22 05:53:37 2018 From: zchao1995 at gmail.com (tokers) Date: Sun, 21 Jan 2018 21:53:37 -0800 Subject: Required help. In-Reply-To: References: Message-ID: Hi! Could your SELinux policy be causing this? Checking the status of SELinux may help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Mon Jan 22 10:27:14 2018 From: peter_booth at me.com (Peter Booth) Date: Mon, 22 Jan 2018 05:27:14 -0500 Subject: How to correctly dedicate server processes to specific CPU cores? In-Reply-To: References: Message-ID: <99C7991F-30F5-4173-AA61-DB01408E6EB4@me.com> So some questions: What hardware is this? Are they 16 "real" cores or hyper-threaded cores? Do you have a test case set up so you can readily measure the impact of a change? Many tunings that involve NUMA will only show substantial results in specific apps. What does cat /proc/cpuinfo | tail -28 return?
When you say maxed out, do you literally mean that cores 6,7 show 100% CPU utilization? > On Jan 12, 2018, at 9:21 AM, Raffael Vogler wrote: > > I have 16 cores (0 to 15) and the most active processes are: > > - nginx > - php-fpm > - eth0-TxRx-0 (paired Tx and Rx queues #1) > - eth0-TxRx-1 (paired Tx and Rx queues #2) > > eth0-TxRx-0 is already dedicated to core #5 and eth0-TxRx-1 to core #6. > > But on those two cores are also nginx and php-fpm processes being executed. > > Now I would like to restrict nginx and php-fpm to cores 0-4 and 7-15. > > I would like to confirm that this can be correctly achieved with: > > taskset -c 0-4,7-15 nginx > taskset -c 0-4,7-15 php-fpm > > Is that a safe and sound approach? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From scoulibaly at gmail.com Mon Jan 22 14:01:21 2018 From: scoulibaly at gmail.com (Sékine Coulibaly) Date: Mon, 22 Jan 2018 15:01:21 +0100 Subject: UDP Load balancing Message-ID: Hi, I'm evaluating Nginx Plus for a UDP Load Balancer but can't make it work. The packets are spoofed correctly on the LB side (as seen with tcpdump, where I can see packets created, the source IP being the one of the client, the destination the one of the selected upstream). However, on the upstream side, I receive nothing. Could it be the spoofed packets are filtered out somewhere?
My configuration is as below:

user root;

worker_processes auto;
worker_rlimit_nofile 65535;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 20000;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

stream {
    upstream dtls_udp_upstreams {
        hash $remote_addr;
        server preprods.mycorp.com:5684;
    }

    server {
        listen 5684 udp;
        proxy_bind $remote_addr:$remote_port transparent;
        proxy_pass dtls_udp_upstreams;
        proxy_responses 0;
    }
}

Thank you ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jan 22 14:34:40 2018 From: nginx-forum at forum.nginx.org (plrunner) Date: Mon, 22 Jan 2018 09:34:40 -0500 Subject: prevent nginx from translate 303 responses (see other) to 302 (temporary redirect) Message-ID: <8176115fa0337020829552fad9402a59.NginxMailingListEnglish@forum.nginx.org> Hi, I have an apache webserver in front of which I put my nginx 1.12.2 that is running with a basic proxy_pass configuration. I have done this a million times, even with more complex configurations. Everything works perfectly except one thing I recently noticed: the login phase consists of a POST request with the url-encoded credentials in the body. In case of successful authentication, the webserver returns a 303 (See Other) to the user's home. What's wrong is that when I authenticate through nginx the POST request gets back a 302 instead of a 303. Despite this not being a problem when using a common browser, this becomes a blocking issue when using a command line software suite that mandatorily expects a 303.
I have already searched the documentation, but I got nothing about such behaviour. Any idea on how I can prevent nginx from "translating" that 303 into a 302? Thanks in advance for any hint. PL Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278217,278217#msg-278217 From mdounin at mdounin.ru Mon Jan 22 16:41:31 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Jan 2018 19:41:31 +0300 Subject: prevent nginx from translate 303 responses (see other) to 302 (temporary redirect) In-Reply-To: <8176115fa0337020829552fad9402a59.NginxMailingListEnglish@forum.nginx.org> References: <8176115fa0337020829552fad9402a59.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180122164130.GF34136@mdounin.ru> Hello! On Mon, Jan 22, 2018 at 09:34:40AM -0500, plrunner wrote: > I have an apache webserver in front of which I put my nginx 1.12.2 that is > running with a basic proxy_pass configuration. I have done this a million > times, even with more complex configurations. > > Everything works perfectly except one thing I recently noticed: > the login phase consists of a POST request with the url-encoded credentials > in the body. In case of successful authentication, the webserver returns a > 303 (See Other) to the user's home. What's wrong is that when I authenticate > through nginx the POST request gets back a 302 instead of a 303. > Despite this not being a problem when using a common browser, this becomes a > blocking issue when using a command line software suite that mandatorily > expects a 303. > > I have already searched the documentation, but I got nothing about such > behaviour. > Any idea on how I can prevent nginx from "translating" that 303 into a 302? There is nothing in nginx itself that can "translate" 303 to 302. It might be something conditional in your webserver, though: proxying through nginx may trigger a different code path which returns 302 instead of 303. In particular, I would recommend checking whether proxy_http_version 1.1; helps.
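In configuration terms, the suggestion amounts to something like the following (the upstream address is a placeholder; only the proxy_http_version line is the point):

```
location / {
    proxy_pass http://127.0.0.1:8080;
    # speak HTTP/1.1 to the backend instead of the HTTP/1.0 default
    proxy_http_version 1.1;
}
```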
The default is 1.0, and your webserver might think that it is not appropriate to return 303 to HTTP/1.0 clients. See http://nginx.org/r/proxy_http_version for details. -- Maxim Dounin http://mdounin.ru/ From pchychi at gmail.com Mon Jan 22 18:21:45 2018 From: pchychi at gmail.com (Payam Chychi) Date: Mon, 22 Jan 2018 18:21:45 +0000 Subject: UDP Load balancing In-Reply-To: References: Message-ID: On Mon, Jan 22, 2018 at 6:02 AM Sékine Coulibaly wrote:

> Hi,
>
> I'm evaluating Nginx Plus for a UDP Load Balancer but can't make it work.
> The packets are spoofed correctly on the LB side (as seen with tcpdump,
> where I can see packets created, the source IP being the one of the client,
> the destination the one of the selected upstream). However, on the upstream
> side, I receive nothing.
>
> Could it be the spoofed packets are filtered out somewhere ?
>
> My configuration is as below :
>
> user root;
>
> worker_processes auto;
> worker_rlimit_nofile 65535;
>
> error_log /var/log/nginx/error.log debug;
> pid /var/run/nginx.pid;
>
> events {
>     worker_connections 20000;
> }
>
> http {
>     include /etc/nginx/mime.types;
>     default_type application/octet-stream;
>
>     log_format main '$remote_addr - $remote_user [$time_local] "$request" '
>                     '$status $body_bytes_sent "$http_referer" '
>                     '"$http_user_agent" "$http_x_forwarded_for"';
>
>     access_log /var/log/nginx/access.log main;
>
>     sendfile on;
>     #tcp_nopush on;
>
>     keepalive_timeout 65;
>
>     #gzip on;
>
>     include /etc/nginx/conf.d/*.conf;
> }
>
> stream {
>     upstream dtls_udp_upstreams {
>         hash $remote_addr;
>         server preprods.mycorp.com:5684;
>     }
>
>     server {
>         listen 5684 udp;
>         proxy_bind $remote_addr:$remote_port transparent;
>         proxy_pass dtls_udp_upstreams;
>         proxy_responses 0;
>     }
> }
>
> Thank you !
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

What does tcpdump show on the outbound from the LB?
And what does tcpdump show on your upstream? Can you ping the upstream from the LB? Better yet, can you telnet to upstream udp 5684? Are the LB health checks working? Are you running any iptables rules or a hardware fw in between? > -- Payam Tarverdyan Chychi Network Security Specialist / Network Engineer -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jan 22 22:26:24 2018 From: nginx-forum at forum.nginx.org (plrunner) Date: Mon, 22 Jan 2018 17:26:24 -0500 Subject: prevent nginx from translate 303 responses (see other) to 302 (temporary redirect) In-Reply-To: <20180122164130.GF34136@mdounin.ru> References: <20180122164130.GF34136@mdounin.ru> Message-ID: <0a3a0c776cb1fd13ce80f6b857ee3302.NginxMailingListEnglish@forum.nginx.org> Great. proxy_http_version 1.1; did the trick. Thank you. PL Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278217,278223#msg-278223 From sophie at klunky.co.uk Tue Jan 23 10:27:39 2018 From: sophie at klunky.co.uk (Sophie Loewenthal) Date: Tue, 23 Jan 2018 11:27:39 +0100 Subject: http2 ciphers question on correct order /availability Message-ID: <328E7B90-71DA-4C88-AC87-64966BE6F86E@klunky.co.uk> Hi, Did I add or remove the wrong ciphers for http2, and are they in the correct order? I found plenty of different documents on the Internet. Since mine is now broken, I should ask here :) Any ideas? Error message from Chrome: ERR_SSL_VERSION_OR_CIPHER_MISMATCH My nginx.conf has:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_session_cache shared:SSL:15m;
ssl_session_timeout 1d;
ssl_session_tickets off;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;

The vhost has http2 switched on with TLS 1.2 only:

server {
    listen 443 ssl http2;
    ...
    ssl_prefer_server_ciphers On;
    ssl_protocols TLSv1.2;
    ssl_session_timeout 8m;
    ssl_ecdh_curve secp521r1;
    ...
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Referrer-Policy "no-referrer";
}

Sophie -------------- next part -------------- An HTML attachment was scrubbed... URL: From sca at andreasschulze.de Tue Jan 23 14:52:40 2018 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 23 Jan 2018 15:52:40 +0100 Subject: http2 ciphers question on correct order /availability In-Reply-To: <328E7B90-71DA-4C88-AC87-64966BE6F86E@klunky.co.uk> Message-ID: <20180123155240.Horde.imFgQtprmqgR1uLw227XVnH@andreasschulze.de> Sophie Loewenthal: > ssl_ecdh_curve secp521r1; I never used that curve. If there's no specific reason for secp521r1, try secp384r1 or leave it empty, and see what will happen. Andreas From sophie at klunky.co.uk Tue Jan 23 15:07:12 2018 From: sophie at klunky.co.uk (Sophie Loewenthal) Date: Tue, 23 Jan 2018 16:07:12 +0100 Subject: http2 ciphers question on correct order /availability In-Reply-To: <20180123155240.Horde.imFgQtprmqgR1uLw227XVnH@andreasschulze.de> References: <20180123155240.Horde.imFgQtprmqgR1uLw227XVnH@andreasschulze.de> Message-ID: <08CBD52B-7E07-46A4-BEB1-C80392BE8090@klunky.co.uk> That solved the problem. Thank-you Andreas. > On 23 Jan 2018, at 15:52, A. Schulze wrote: > > > Sophie Loewenthal: > > >> ssl_ecdh_curve secp521r1; > > I never used that curve. If there's no specific reason for secp521r1, try secp384r1 or leave it empty, > and see what will happen.
> Andreas > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Jan 23 17:34:12 2018 From: nginx-forum at forum.nginx.org (agriz) Date: Tue, 23 Jan 2018 12:34:12 -0500 Subject: Nginx - Only handles exactly 500 request per second - How to increase the limit? Message-ID: <4fab07855705dab0ca9adf715d9d0262.NginxMailingListEnglish@forum.nginx.org>

worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;

error_log /var/log/nginx/error.log crit;

events {
    worker_connections 4000;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    directio 4m;
    types_hash_max_size 2048;

    client_body_buffer_size 15K;
    client_max_body_size 8m;

    keepalive_timeout 20;
    client_body_timeout 15;
    client_header_timeout 15;
    send_timeout 10;

    open_file_cache max=5000 inactive=20s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 5;
    open_file_cache_errors off;

    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    access_log off;
    log_not_found off;

    include /etc/nginx/conf.d/*.conf;
}

The server has 8 cores and 32 GB of RAM. The load is 0.05, but nginx is not able to handle more than 500 requests per second. The server suddenly received 1500 hits and went down immediately; I had to restart nginx.
Please tell me how to increase the limit Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278237,278237#msg-278237 From sophie at klunky.co.uk Tue Jan 23 19:27:26 2018 From: sophie at klunky.co.uk (Sophie Loewenthal) Date: Tue, 23 Jan 2018 20:27:26 +0100 Subject: Debugging Safari 11 unable to connect over SSL to a http2 web server Message-ID: <4F8BD55D-3F5D-4E73-BCB8-A6CD252AFE3E@klunky.co.uk> Hi, Chrome and Firefox can connect to my webserver over https running http2. Safari 11 cannot, and gave no error messages other than "cannot connect". There is a certificate name mismatch, but I thought Safari would still let me know why it did not connect. The SSL cert is otherwise valid. I enabled debug on the vhost and had this logged below, but this does not tell me much. How could I investigate this further? 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL certificate status callback 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: h2 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: h2-16 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: h2-15 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: h2-14 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: spdy/3.1 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: spdy/3 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: http/1.1 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN selected: h2 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL_do_handshake: -1 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL_get_error: 2 2018/01/23 19:17:35 [debug] 16054#16054: *1 epoll add event: fd:3 op:1 ev:80002001 2018/01/23 19:17:35 [debug] 16054#16054: *1 event timer add: 3: 12000:1516735067367 2018/01/23 19:17:35 [debug] 16054#16054: *1 reusable connection: 0 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL handshake handler: 0 2018/01/23 19:17:35 [debug] 16054#16054: 
*1 SSL_do_handshake: -1 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL_get_error: 5 2018/01/23 19:17:35 [info] 16054#16054: *1 peer closed connection in SSL handshake while SSL handshaking, client: 178.xx.xx.xxx, server: 0.0.0.0:443 2018/01/23 19:17:35 [debug] 16054#16054: *1 close http connection: 3 2018/01/23 19:17:35 [debug] 16054#16054: *1 event timer del: 3: 1516735067367 2018/01/23 19:17:35 [debug] 16054#16054: *1 reusable connection: 0 2018/01/23 19:17:35 [debug] 16054#16054: *1 free: 0000561F72E17370, unused: 112 The vhost is the same as the one I emailed about earlier: listen [::]:443 ipv6only=on ssl http2 ; server_name xx.com xx.com; root /var/www/xx.com; access_log /var/log/nginx/access.log combined_ssl; error_log /var/log/nginx/error.log debug ; ssl_certificate /etc/letsencrypt/live/xx/fullchain.pem ; ssl_certificate_key /etc/letsencrypt/live/xx/privkey.pem ; ssl_prefer_server_ciphers on; ssl_protocols TLSv1.2; ssl_ecdh_curve secp384r1; ssl_session_timeout 9m; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/letsencrypt/live/xx/chain.pem; resolver 127.0.0.1 8.8.8.8 valid=300s; resolver_timeout 2s; # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains"; #add_header Strict-Transport-Security "max-age=0;"; add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; add_header Referrer-Policy "no-referrer"; more_set_headers "Server: MyServerName"; Best, Sophie. From sophie at klunky.co.uk Tue Jan 23 20:04:23 2018 From: sophie at klunky.co.uk (Sophie Loewenthal) Date: Tue, 23 Jan 2018 21:04:23 +0100 Subject: Debugging Safari 11 unable to connect over SSL to a http2 web server In-Reply-To: <4F8BD55D-3F5D-4E73-BCB8-A6CD252AFE3E@klunky.co.uk> References: <4F8BD55D-3F5D-4E73-BCB8-A6CD252AFE3E@klunky.co.uk> Message-ID: Hi all, Problem found. This really was caused by an SSL cert name mismatch. 
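For anyone hitting the same thing: a quick way to check for a name mismatch from the command line is to inspect the subject of the certificate the server presents. The sketch below fabricates a throwaway certificate just to show the inspection step (the name and paths are made up); against a live server you would pipe `openssl s_client -connect host:443 -servername host` into the same `openssl x509` command.

```shell
# create a throwaway cert for a made-up name (stand-in for the server's cert)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo.key -out /tmp/demo.crt \
    -subj "/CN=www.example.test" 2>/dev/null

# the CN (or a SAN entry) must match the hostname the client asked for;
# strict clients such as Safari refuse the handshake on a mismatch
openssl x509 -in /tmp/demo.crt -noout -subject
```

Note that modern browsers actually match against the subjectAltName extension rather than the CN alone, so a production check should look at `-ext subjectAltName` (or the full `-text` output) as well.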
> On 23 Jan 2018, at 20:27, Sophie Loewenthal wrote: > > Hi, > > Chrome and Firefox can connect to my webserver over https running http2. > Safari 11 cannot, and gave no error messages other than "cannot connect". > > There is a certificate name mismatch, but I thought Safari would still let me know why it did not connect. The SSL cert is otherwise valid. > > I enabled debug on the vhost and had this logged below, but this does not tell me much. How could I investigate this further? > > > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL certificate status callback > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: h2 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: h2-16 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: h2-15 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: h2-14 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: spdy/3.1 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: spdy/3 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN supported by client: http/1.1 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL ALPN selected: h2 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL_do_handshake: -1 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL_get_error: 2 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 epoll add event: fd:3 op:1 ev:80002001 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 event timer add: 3: 12000:1516735067367 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 reusable connection: 0 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL handshake handler: 0 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL_do_handshake: -1 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 SSL_get_error: 5 > 2018/01/23 19:17:35 [info] 16054#16054: *1 peer closed connection in SSL handshake while SSL handshaking, client: 178.xx.xx.xxx, server: 0.0.0.0:443 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 close http 
connection: 3 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 event timer del: 3: 1516735067367 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 reusable connection: 0 > 2018/01/23 19:17:35 [debug] 16054#16054: *1 free: 0000561F72E17370, unused: 112 > > > The vhost is the same as the one I emailed about earlier: > listen [::]:443 ipv6only=on ssl http2 ; > > server_name xx.com xx.com; > root /var/www/xx.com; > access_log /var/log/nginx/access.log combined_ssl; > error_log /var/log/nginx/error.log debug ; > > ssl_certificate /etc/letsencrypt/live/xx/fullchain.pem ; > ssl_certificate_key /etc/letsencrypt/live/xx/privkey.pem ; > ssl_prefer_server_ciphers on; > ssl_protocols TLSv1.2; > ssl_ecdh_curve secp384r1; > ssl_session_timeout 9m; > ssl_session_tickets off; > ssl_stapling on; > ssl_stapling_verify on; > ssl_trusted_certificate /etc/letsencrypt/live/xx/chain.pem; > resolver 127.0.0.1 8.8.8.8 valid=300s; > resolver_timeout 2s; > # > add_header Strict-Transport-Security "max-age=63072000; includeSubdomains"; > #add_header Strict-Transport-Security "max-age=0;"; > add_header X-Content-Type-Options nosniff; > add_header X-XSS-Protection "1; mode=block"; > add_header Referrer-Policy "no-referrer"; > more_set_headers "Server: MyServerName"; > > > Best, Sophie. 
> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Jan 23 23:29:52 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 23 Jan 2018 23:29:52 +0000 Subject: failed (13: Permission denied) while reading upstream on rename() In-Reply-To: <67C0BEC1-814B-4CBE-BA3A-F0FF502F3467@yale.edu> References: <67C0BEC1-814B-4CBE-BA3A-F0FF502F3467@yale.edu> Message-ID: <20180123232952.GA3063@daoine.org> On Sat, Jan 20, 2018 at 10:34:21PM +0000, Friscia, Michael wrote: Hi there, > In the error I see that the rename() failure was to change: > eedd07f7aef45a5ed22f748a31724947.0000002528 > to > eedd07f7aef45a5ed22f748a31724947 > > This seems to happen on some pages and then continues to happen if I browse that page but most pages are not affected. Does the error cause any breakage in your nginx? I suspect that the reason for the error is that an "uncommon" filesystem may have more failure modes than a "common" one; and the best way to avoid filesystem-related problems is to avoid using a network-based filesystem. If what you have works well enough for you, then it might be simplest to leave it as-is, and accept that your filesystem sometimes denies "rename" permission and other times allows "rename" permission. To investigate further, you may want to enable the debug log, and perhaps check the full permissions of the files in question and their containing directories, and see if there is anything that looks unusual. "full permissions" may not just be "ls -l", depending on the implementation or configuration details. > Has anyone seen this before? 
I've not; so this is just as much guess work as the previous one :-) Good luck with it, f -- Francis Daly francis at daoine.org From quinefang at gmail.com Wed Jan 24 03:13:35 2018 From: quinefang at gmail.com (=?UTF-8?B?5pa55Z2k?=) Date: Wed, 24 Jan 2018 11:13:35 +0800 Subject: failed (13: Permission denied) while reading upstream on rename() In-Reply-To: <67C0BEC1-814B-4CBE-BA3A-F0FF502F3467@yale.edu> References: <67C0BEC1-814B-4CBE-BA3A-F0FF502F3467@yale.edu> Message-ID: # setenforce 0 On Sun, Jan 21, 2018 at 6:34 AM, Friscia, Michael wrote: > Earlier today I solved a chmod() problem in the cache and now I?m faced > with this one which happens much less frequently. I don?t think permission > is the problem, I think it?s an Nginx configuration I failed to set > correctly. > > > > In the error I see that the rename() failure was to change: > > eedd07f7aef45a5ed22f748a31724947.0000002528 > > to > > eedd07f7aef45a5ed22f748a31724947 > > > > This seems to happen on some pages and then continues to happen if I > browse that page but most pages are not affected. > > > > Has anyone seen this before? > > > > ___________________________________________ > > Michael Friscia > > Office of Communications > > Yale School of Medicine > > (203) 737-7932 - office > > (203) 544-3282 ? mobile > > http://web.yale.edu > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From he.hailong5 at zte.com.cn Wed Jan 24 03:13:54 2018 From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn) Date: Wed, 24 Jan 2018 11:13:54 +0800 (CST) Subject: Quick successive reload makes "bind () xxxx failed, Address already in use" error Message-ID: <201801241113541440632@zte.com.cn> Hi, I have a script runs two successive reloads, the first one is to remove a listen port from the stream block, and the second one is to add the same port back to the stream block. It is observed that most time the script would run into "bind() xxxx failed, Address already in use" error. After putting a sleep 1 in between these two reloads I never get that error again. So I guess the listening socket was not released in the time the second reload was issued? How the listening socket is getting released during reload? In this case, how to ensure that we can safely trigger the second reload other than sleep? br, Allen -------------- next part -------------- An HTML attachment was scrubbed... URL: From quinefang at gmail.com Wed Jan 24 03:15:25 2018 From: quinefang at gmail.com (方坤) Date: Wed, 24 Jan 2018 11:15:25 +0800 Subject: Quick successive reload makes "bind () xxxx failed, Address already in use" error In-Reply-To: <201801241113541440632@zte.com.cn> References: <201801241113541440632@zte.com.cn> Message-ID: Kill old processes first, then start new processes. On Wed, Jan 24, 2018 at 11:13 AM, wrote: > Hi, > > I have a script runs two successive reloads, the first one is to remove a > listen port from the stream block, and the second one is to add the same > port back to the stream block.
It is observed that most time the script > would run into "bind() xxxx failed, Address already in use" error. After > putting a sleep 1 in between these two reloads I never get that error > again. > > So I guess the listening socket was not released in the time the second > reload was issued? > > How the listening socket is getting released during reload? > > In this case, how to ensure that we can safely trigger the second reload > other than sleep? > > > br, > > Allen > > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pratyush at hostindya.com Wed Jan 24 03:48:26 2018 From: pratyush at hostindya.com (Pratyush Kumar) Date: Wed, 24 Jan 2018 09:18:26 +0530 Subject: Nginx - Only handles exactly 500 request per second - How to increase the limit? In-Reply-To: <4fab07855705dab0ca9adf715d9d0262.NginxMailingListEnglish@forum.nginx.org> Message-ID: An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jan 24 05:21:41 2018 From: nginx-forum at forum.nginx.org (agriz) Date: Wed, 24 Jan 2018 00:21:41 -0500 Subject: Nginx - Only handles exactly 500 request per second - How to increase the limit? In-Reply-To: <4fab07855705dab0ca9adf715d9d0262.NginxMailingListEnglish@forum.nginx.org> References: <4fab07855705dab0ca9adf715d9d0262.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sir, I can't see any message there.
Additionally, these are the sysctl.conf settings I modified:

net.ipv6.conf.all.accept_ra=2
net.core.rmem_max = 16777216
net.core.rmem_default = 31457280
net.ipv4.tcp_rmem = 4096 87380 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_tw_reuse = 1
net.core.netdev_max_backlog = 10000
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.ip_local_port_range = 15000 61000
kernel.pid_max = 65535
fs.inotify.max_queued_events = 2000000
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.optmem_max = 25165824
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
fs.inotify.max_user_watches=100000

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278237,278246#msg-278246 From pchychi at gmail.com Wed Jan 24 06:08:56 2018 From: pchychi at gmail.com (Payam Chychi) Date: Wed, 24 Jan 2018 06:08:56 +0000 Subject: Nginx - Only handles exactly 500 request per second - How to increase the limit? In-Reply-To: References: <4fab07855705dab0ca9adf715d9d0262.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Jan 23, 2018 at 9:22 PM agriz wrote: > Sir, > > I can see any message there.
> Additionally, There are the sysctl.conf file i modified > > net.ipv6.conf.all.accept_ra=2 > net.core.rmem_max = 16777216 > net.core.rmem_default = 31457280 > net.ipv4.tcp_rmem = 4096 87380 16777216 > net.core.wmem_max = 16777216 > net.ipv4.tcp_wmem = 4096 16384 16777216 > net.ipv4.tcp_mem = 65536 131072 262144 > net.ipv4.udp_mem = 65536 131072 262144 > net.ipv4.udp_rmem_min = 16384 > net.ipv4.udp_wmem_min = 16384 > net.ipv4.tcp_fin_timeout = 60 > net.ipv4.tcp_fin_timeout = 20 > net.ipv4.tcp_tw_reuse = 0 > net.ipv4.tcp_tw_reuse = 1 > net.core.netdev_max_backlog = 10000 > net.core.somaxconn = 4096 > net.ipv4.tcp_max_syn_backlog = 2048 > net.ipv4.ip_local_port_range = 15000 61000 > kernel.pid_max = 65535 > fs.inotify.max_queued_events = 2000000 > net.ipv4.tcp_keepalive_time = 300 > net.ipv4.tcp_keepalive_probes = 5 > net.ipv4.tcp_keepalive_intvl = 15 > net.core.optmem_max = 25165824 > vm.swappiness = 10 > vm.dirty_ratio = 60 > vm.dirty_background_ratio = 2 > fs.inotify.max_user_watches=100000 > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,278237,278246#msg-278246 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx What does ulimit show? > > -- Payam Tarverdyan Chychi Network Security Specialist / Network Engineer -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From he.hailong5 at zte.com.cn  Wed Jan 24 07:01:50 2018
From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn)
Date: Wed, 24 Jan 2018 15:01:50 +0800 (CST)
Subject: Re: Quick successive reload makes "bind () xxxx failed, Address already in use" error
Message-ID: <201801241501500459475@zte.com.cn>

The downtime is critical, we cannot take "Kill and Start process"

From zchao1995 at gmail.com  Wed Jan 24 07:18:11 2018
From: zchao1995 at gmail.com (tokers)
Date: Wed, 24 Jan 2018 02:18:11 -0500
Subject: Quick successive reload makes "bind () xxxx failed, Address already in use" error
In-Reply-To: <201801241113541440632@zte.com.cn>
References: <201801241113541440632@zte.com.cn>
Message-ID: 

Hello!

> I have a script runs two successive reloads, the first one is to remove a
> listen port from the stream block, and the second one is to add the same
> port back to the stream block. It is observed that most time the script
> would run into "bind() xxxx failed, Address already in use" error. After
> putting a sleep 1 in between these two reloads I never get that error again.

How do you send the "reload" command? Through nginx -s reload, or by
sending the signal to the master process directly?

> So I guess the listening socket was not released in the time the second
> reload was issued?
> How the listening socket is getting released during reload?

The old, unnecessary listening sockets will be closed after the nginx master
process opens the new listening sockets.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oscaretu at gmail.com  Wed Jan 24 07:22:17 2018
From: oscaretu at gmail.com (oscaretu .)
Date: Wed, 24 Jan 2018 08:22:17 +0100
Subject: Quick successive reload makes "bind () xxxx failed, Address already in use" error
Message-ID: 

If you search Google for "detect if an IP port is in use in Linux", you can
find several ways to check whether a port is in use, for both Windows and
Unix:

- https://www.cyberciti.biz/faq/unix-linux-check-if-port-is-in-use-command/
- https://askubuntu.com/questions/278448/how-to-know-what-program-is-listening-on-a-given-port

Kind regards,
Oscar

On Wed, Jan 24, 2018 at 8:01 AM, wrote:

> The downtime is critical, we cannot take "Kill and Start process"
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

-- 
Oscar Fernandez Sierra
oscaretu at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From he.hailong5 at zte.com.cn  Wed Jan 24 07:57:54 2018
From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn)
Date: Wed, 24 Jan 2018 15:57:54 +0800 (CST)
Subject: Quick successive reload makes "bind () xxxx failed, Address already in use" error
Message-ID: <201801241557548152964@zte.com.cn>

This is the forever loop that is running in the script:

for {
  nginx -s reload //without the port
  nginx -s reload //with the port
}

I found there was a transient during which both the master process and the
newly forked worker were listening on the same port; I am not sure if this
might cause the "bind" error.

Another fact is that the port is listening after the "bind" error occurred,
so this was not a case of the "delete" not completing and making the "add"
fail.

I am wondering how this "bind" error happens?

Br,
Allen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From scoulibaly at gmail.com  Wed Jan 24 09:35:57 2018
From: scoulibaly at gmail.com (Sékine Coulibaly)
Date: Wed, 24 Jan 2018 10:35:57 +0100
Subject: DTLS Load Balancing
Message-ID: 

I've set up a simplistic UDP load balancing as follows:

stream {
    upstream dtls_udp_upstreams {
        hash $remote_addr:remote_port;
        server preprod.mycorp.com:5684;
    }

    server {
        listen 5684 udp;
        proxy_pass dtls_udp_upstreams;
        proxy_responses 1;
    }
}

I notice that the balancing is correctly done and the response is received
by the client. Unfortunately, the destination port on the response reaching
the client is not the initial source port, and as a consequence the DTLS
frame is discarded and a new DTLS handshake is initiated.

When proxying UDP packets through Nginx, is there a way for Nginx to
preserve its initial source port for subsequent packets? In my case using
transparent proxying is not possible because my hoster doesn't allow IP
spoofing.

Thank you !

Sekine
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mailinglist at unix-solution.de  Wed Jan 24 12:13:08 2018
From: mailinglist at unix-solution.de (basti)
Date: Wed, 24 Jan 2018 13:13:08 +0100
Subject: more_set_headers debian
Message-ID: <59ee1e41-3272-1d48-1095-1f5b18e8e334@unix-solution.de>

Hello,

I try to use the "more_set_headers" directive in nginx 1.10.3 on Debian.

Modules are loaded.
ls -la /etc/nginx/modules-enabled/ lrwxrwxrwx 1 root root 68 Jan 24 12:57 50-mod-http-headers-more-filter.conf -> /usr/share/nginx/modules-available/mod-http-headers-more-filter.conf /etc/nginx/sites-enabled# dpkg -l | grep nginx ii libnginx-mod-http-headers-more-filter 1.10.3-1+deb9u1 amd64 Set and clear input and output headers for Nginx ii nginx-common 1.10.3-1+deb9u1 all small, powerful, scalable web/proxy server - common files ii nginx-extras 1.10.3-1+deb9u1 amd64 nginx web/proxy server (extended version) /etc/nginx/sites-enabled# nginx -V nginx version: nginx/1.10.3 built with OpenSSL 1.1.0f 25 May 2017 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-2tpxfc/nginx-1.10.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_flv_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_secure_link_module --with-http_sub_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-stream=dynamic 
--with-stream_ssl_module --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/headers-more-nginx-module --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-auth-pam --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-cache-purge --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-dav-ext-module --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-development-kit --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-echo --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/ngx-fancyindex --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nchan --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-lua --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-upload-progress --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-upstream-fair --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/ngx_http_substitutions_filter_module I get this error: 1640#1640: unknown directive "more_set_headers" in /etc/nginx/nginx.conf:23 Best regards, Basti From mailinglist at unix-solution.de Wed Jan 24 15:13:40 2018 From: mailinglist at unix-solution.de (basti) Date: Wed, 24 Jan 2018 16:13:40 +0100 Subject: more_set_headers debian In-Reply-To: <59ee1e41-3272-1d48-1095-1f5b18e8e334@unix-solution.de> References: <59ee1e41-3272-1d48-1095-1f5b18e8e334@unix-solution.de> Message-ID: <87cd7da4-3b52-6cd7-f18d-2874686bc8c2@unix-solution.de> fixed by myself On 24.01.2018 13:13, basti wrote: > Hello, > > I try to use "more_set_headers" directive in nginx 10.3 on debian. > > Modules are loded. 
> > ls -la /etc/nginx/modules-enabled/ > lrwxrwxrwx 1 root root 68 Jan 24 12:57 > 50-mod-http-headers-more-filter.conf -> > /usr/share/nginx/modules-available/mod-http-headers-more-filter.conf > > /etc/nginx/sites-enabled# dpkg -l | grep nginx > ii libnginx-mod-http-headers-more-filter 1.10.3-1+deb9u1 > amd64 Set and clear input and > output headers for Nginx > ii nginx-common 1.10.3-1+deb9u1 > all small, powerful, > scalable web/proxy server - common files > ii nginx-extras 1.10.3-1+deb9u1 > amd64 nginx web/proxy server > (extended version) > > > /etc/nginx/sites-enabled# nginx -V > nginx version: nginx/1.10.3 > built with OpenSSL 1.1.0f 25 May 2017 > TLS SNI support enabled > configure arguments: --with-cc-opt='-g -O2 > -fdebug-prefix-map=/build/nginx-2tpxfc/nginx-1.10.3=. > -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time > -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now' > --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf > --http-log-path=/var/log/nginx/access.log > --error-log-path=/var/log/nginx/error.log > --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid > --modules-path=/usr/lib/nginx/modules > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit > --with-ipv6 --with-http_ssl_module --with-http_stub_status_module > --with-http_realip_module --with-http_auth_request_module > --with-http_v2_module --with-http_dav_module --with-http_slice_module > --with-threads --with-http_addition_module --with-http_flv_module > --with-http_geoip_module=dynamic --with-http_gunzip_module > --with-http_gzip_static_module --with-http_image_filter_module=dynamic > --with-http_mp4_module --with-http_perl_module=dynamic > --with-http_random_index_module --with-http_secure_link_module > --with-http_sub_module 
--with-http_xslt_module=dynamic > --with-mail=dynamic --with-mail_ssl_module --with-stream=dynamic > --with-stream_ssl_module > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/headers-more-nginx-module > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-auth-pam > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-cache-purge > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-dav-ext-module > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-development-kit > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-echo > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/ngx-fancyindex > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nchan > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-lua > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-upload-progress > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/nginx-upstream-fair > --add-dynamic-module=/build/nginx-2tpxfc/nginx-1.10.3/debian/modules/ngx_http_substitutions_filter_module > > > I get this error: > > 1640#1640: unknown directive "more_set_headers" in /etc/nginx/nginx.conf:23 > > Best regards, > Basti > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From reallfqq-nginx at yahoo.fr Wed Jan 24 18:49:25 2018 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 24 Jan 2018 19:49:25 +0100 Subject: Nginx - Only handles exactly 500 request per second - How to increase the limit? In-Reply-To: References: <4fab07855705dab0ca9adf715d9d0262.NginxMailingListEnglish@forum.nginx.org> Message-ID: 500 requests max sounds very much like the default max_requests parameter from PHP-FPM process manager. Btw, the configuration snippet you provided is incomplete (include [...]/*.conf). 
How can people help you? Have a look at nginx -T. --- *B. R.* On Wed, Jan 24, 2018 at 7:08 AM, Payam Chychi wrote: > > On Tue, Jan 23, 2018 at 9:22 PM agriz wrote: > >> Sir, >> >> I can see any message there. >> Additionally, There are the sysctl.conf file i modified >> >> net.ipv6.conf.all.accept_ra=2 >> net.core.rmem_max = 16777216 >> net.core.rmem_default = 31457280 >> net.ipv4.tcp_rmem = 4096 87380 16777216 >> net.core.wmem_max = 16777216 >> net.ipv4.tcp_wmem = 4096 16384 16777216 >> net.ipv4.tcp_mem = 65536 131072 262144 >> net.ipv4.udp_mem = 65536 131072 262144 >> net.ipv4.udp_rmem_min = 16384 >> net.ipv4.udp_wmem_min = 16384 >> net.ipv4.tcp_fin_timeout = 60 >> net.ipv4.tcp_fin_timeout = 20 >> net.ipv4.tcp_tw_reuse = 0 >> net.ipv4.tcp_tw_reuse = 1 >> net.core.netdev_max_backlog = 10000 >> net.core.somaxconn = 4096 >> net.ipv4.tcp_max_syn_backlog = 2048 >> net.ipv4.ip_local_port_range = 15000 61000 >> kernel.pid_max = 65535 >> fs.inotify.max_queued_events = 2000000 >> net.ipv4.tcp_keepalive_time = 300 >> net.ipv4.tcp_keepalive_probes = 5 >> net.ipv4.tcp_keepalive_intvl = 15 >> net.core.optmem_max = 25165824 >> vm.swappiness = 10 >> vm.dirty_ratio = 60 >> vm.dirty_background_ratio = 2 >> fs.inotify.max_user_watches=100000 >> >> Posted at Nginx Forum: https://forum.nginx.org/read.p >> hp?2,278237,278246#msg-278246 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > What does ulimit show? > >> >> > > -- > Payam Tarverdyan Chychi > Network Security Specialist / Network Engineer > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pchychi at gmail.com Wed Jan 24 21:35:07 2018 From: pchychi at gmail.com (Payam Chychi) Date: Wed, 24 Jan 2018 21:35:07 +0000 Subject: Nginx - Only handles exactly 500 request per second - How to increase the limit? In-Reply-To: References: <4fab07855705dab0ca9adf715d9d0262.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Jan 24, 2018 at 10:50 AM B.R. via nginx wrote: > 500 requests max sounds very much like the default max_requests parameter > from PHP-FPM process manager. > > Btw, the configuration snippet you provided is incomplete (include > [...]/*.conf). How can people help you? > Have a look at nginx -T. > --- > *B. R.* > > On Wed, Jan 24, 2018 at 7:08 AM, Payam Chychi wrote: > >> >> On Tue, Jan 23, 2018 at 9:22 PM agriz >> wrote: >> >>> Sir, >>> >>> I can see any message there. >>> Additionally, There are the sysctl.conf file i modified >>> >>> net.ipv6.conf.all.accept_ra=2 >>> net.core.rmem_max = 16777216 >>> net.core.rmem_default = 31457280 >>> net.ipv4.tcp_rmem = 4096 87380 16777216 >>> net.core.wmem_max = 16777216 >>> net.ipv4.tcp_wmem = 4096 16384 16777216 >>> net.ipv4.tcp_mem = 65536 131072 262144 >>> net.ipv4.udp_mem = 65536 131072 262144 >>> net.ipv4.udp_rmem_min = 16384 >>> net.ipv4.udp_wmem_min = 16384 >>> net.ipv4.tcp_fin_timeout = 60 >>> net.ipv4.tcp_fin_timeout = 20 >>> net.ipv4.tcp_tw_reuse = 0 >>> net.ipv4.tcp_tw_reuse = 1 >>> net.core.netdev_max_backlog = 10000 >>> net.core.somaxconn = 4096 >>> net.ipv4.tcp_max_syn_backlog = 2048 >>> net.ipv4.ip_local_port_range = 15000 61000 >>> kernel.pid_max = 65535 >>> fs.inotify.max_queued_events = 2000000 >>> net.ipv4.tcp_keepalive_time = 300 >>> net.ipv4.tcp_keepalive_probes = 5 >>> net.ipv4.tcp_keepalive_intvl = 15 >>> net.core.optmem_max = 25165824 >>> vm.swappiness = 10 >>> vm.dirty_ratio = 60 >>> vm.dirty_background_ratio = 2 >>> fs.inotify.max_user_watches=100000 >>> >>> Posted at Nginx Forum: >>> https://forum.nginx.org/read.php?2,278237,278246#msg-278246 >>> >>> 
_______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>>
>> What does ulimit show?
>>
> Google "Linux ulimit" or man ulimit

-- 
>> Payam Tarverdyan Chychi
>> Network Security Specialist / Network Engineer
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Payam Tarverdyan Chychi
Network Security Specialist / Network Engineer
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From he.hailong5 at zte.com.cn  Thu Jan 25 01:33:24 2018
From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn)
Date: Thu, 25 Jan 2018 09:33:24 +0800 (CST)
Subject: master or worker process is responsible for listening the port?
Message-ID: <201801250933244971063@zte.com.cn>

Hi,

I keep adding and removing a listening port to and from nginx using
"-s reload", and found that sometimes this port was listened on by the
master process, and sometimes by a worker process. Is this as expected?

Br, Allen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jfs.world at gmail.com  Thu Jan 25 09:41:50 2018
From: jfs.world at gmail.com (Jeffrey 'jf' Lim)
Date: Thu, 25 Jan 2018 17:41:50 +0800
Subject: how to trigger "proxy_next_upstream invalid_header"?
Message-ID: 

This is more of a curiosity thing, I guess, than anything else, but...
how do you trigger an "proxy_next_upstream invalid_header" when
testing?

I've tried basically sending random text from an upstream ('nc -l')...
but nginx holds on to the connection and ends up triggering a
"timeout" instead.
If I send random text, and then close the connection, the random text still
gets sent to the client, and no next peer is tried.

-jf

-- 
He who settles on the idea of the intelligent man as a static entity only
shows himself to be a fool.

From arut at nginx.com  Thu Jan 25 12:22:29 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 25 Jan 2018 15:22:29 +0300
Subject: how to trigger "proxy_next_upstream invalid_header"?
In-Reply-To: 
References: 
Message-ID: <20180125122229.GJ971@Romans-MacBook-Air.local>

Hi Jeffrey,

On Thu, Jan 25, 2018 at 05:41:50PM +0800, Jeffrey 'jf' Lim wrote:
> This is more of a curiosity thing, I guess, than anything else, but...
> how do you trigger an "proxy_next_upstream invalid_header" when
> testing?
>
> I've tried basically sending random text from an upstream ('nc -l')...
> but nginx holds on to the connection and ends up triggering a
> "timeout" instead. If I send random text, and then close the
> connection, the random text still gets sent to the client, and no next
> peer is tried.

The easiest way is to send the status line + header bigger than
proxy_buffer_size bytes. Another way is to send a null byte somewhere in the
response header. You can also try sending broken line and header termination:
CR followed by a non-LF byte.

-- 
Roman Arutyunyan

From scoulibaly at gmail.com  Thu Jan 25 16:07:03 2018
From: scoulibaly at gmail.com (Sékine Coulibaly)
Date: Thu, 25 Jan 2018 17:07:03 +0100
Subject: Add support for PSK cipher suites patch
Message-ID: 

Nate, Maxim,

I found a patch here
(http://mailman.nginx.org/pipermail/nginx-devel/2017-September/010449.html)
regarding PSK support in Nginx. I cannot make the new parameter
ssl_psk_file work.

I applied it to release-1.13.5 successfully.
I updated my nginx.conf to:

stream {
    upstream dtls_udp_upstreams {
        hash $remote_addr:remote_port;
        server preprod.mycorp.com:5685;
    }

    server {
        listen 5684 udp ssl;
        ssl_protocols DTLSv1.2;
        ssl_ciphers PSK-AES128-CBC-SHA;
        ssl_psk_file /tmp/cred.txt;
        ssl_certificate /tmp/server.pem;
        ssl_certificate_key /tmp/server.key;
        proxy_pass dtls_udp_upstreams;
    }

My issue is that although the /tmp/cred.txt file exists, Nginx returns:

nginx: [emerg] unknown directive "ssl_psk_file" in /tmp/nginx.conf:26.

I checked the source files, and it looks like the patch has been correctly
applied.

Would you mind posting the complete/corrected patch I could apply and test?

I'm using a DTLS client with a PSK load-balancer and I could experiment
with the setup.

My patching procedure looks like:

git checkout release-1.13.5
patch -p1 -i pskpatch.diff

Thank you !
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Thu Jan 25 16:27:12 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 25 Jan 2018 19:27:12 +0300
Subject: Add support for PSK cipher suites patch
In-Reply-To: 
References: 
Message-ID: <20180125162711.GK34136@mdounin.ru>

Hello!

On Thu, Jan 25, 2018 at 05:07:03PM +0100, Sékine Coulibaly wrote:

> Nate,Maxim,
>
> I found a patch here
> (http://mailman.nginx.org/pipermail/nginx-devel/2017-September/010449.html)
> regarding the PSK spport in Nginx. I can not make the new parameter
> ssl_psk_file work.
>
> I applied it to release-1.13.5 successfully.
>
> I updated my nginx.conf to
>
> stream {
> upstream dtls_udp_upstreams {
> hash $remote_addr:remote_port;
> server preprod.mycorp.com:5685;
> }
>
>
> server {
> listen 5684 udp ssl;
> ssl_protocols DTLSv1.2;
> ssl_ciphers PSK-AES128-CBC-SHA;
> ssl_psk_file /tmp/cred.txt;
> ssl_certificate /tmp/server.pem;
> ssl_certificate_key /tmp/server.key;
> proxy_pass dtls_udp_upstreams;
> }
>
> My issue is that although /tmp/cred.txt file exists, Nginx returns :
>
> nginx: [emerg] unknown directive "ssl_psk_file" in /tmp/nginx.conf:26.
>
> I checked the source files, it looks like the patch has been correctly applied.
>
> Would you mind posting the complete/corrected patch I could apply and test ?
>
> I'm using DTLS client with PSK load-balancer and I could experiment the setup.

The patches in question do not try to provide the relevant functionality
to the stream module; they are http-only. Also please note that DTLS
support isn't available either.

-- 
Maxim Dounin
http://mdounin.ru/

From scoulibaly at gmail.com  Fri Jan 26 08:54:01 2018
From: scoulibaly at gmail.com (Sékine Coulibaly)
Date: Fri, 26 Jan 2018 09:54:01 +0100
Subject: Add support for PSK cipher suites patch
In-Reply-To: <2d9e584dca264e3e9f0cb4aa28378c43@garmin.com>
References: <2d9e584dca264e3e9f0cb4aa28378c43@garmin.com>
Message-ID: 

Nate,

In the meanwhile I followed the thread and actually found your revised
patches. I was able to apply them successfully.

I realised I didn't run configure with the --with-http-ssl flag (since I
don't use http) when building nginx. This explains why the ssl_psk_file
directive was not recognized. After building the http module, the parameter
was recognized properly.

However, since I use stream and not http, I'll not be able to test this
patch, since it only works for the http ssl module.

Regarding the PSK, in a DTLS use case I prefer loading the PSK file on
startup into an in-memory store, for example.
Then, if some keys are to be changed while the server is running, the
in-memory store is refreshed without stopping the server (think SIGHUP or
reload). This avoids all clients being disconnected when the server is
restarted to reload the PSK file.

Should any progress be made on this in the stream module, I'll be able to
give it a try.

Thank you !

2018-01-26 5:14 GMT+01:00 Karstens, Nate :

> Sékine,
>
> The link you sent is old, the latest set of patches is here:
>
> http://mailman.nginx.org/pipermail/nginx-devel/2017-September/010460.html
>
> Does that improve things?
>
> These were developed using TLS, not DTLS. I don't have any experience with
> DTLS, so that might be unrelated.
>
> One of the conversations we had earlier in the development process was
> choosing between two different approaches to managing the PSK file:
>
> 1. The PSK file may be updated as needed (so it must be readable by
> the worker threads). This is the approach used with the current patches.
> 2. The PSK file is read into memory once at startup by the master
> process. This allows the file permissions to be read only for root, but
> requires the config file to be refreshed if the PSK file is changed.
>
> Would you mind providing feedback on which approach works better for your
> environment, and why? Sending it to the mailing list is preferred, or you
> can just reply to this email.
>
> Thanks,
>
> Nate
>
> *From:* Sékine Coulibaly [mailto:scoulibaly at gmail.com]
> *Sent:* Thursday, January 25, 2018 10:23 AM
> *To:* Karstens, Nate ; mdounin at mdounin.ru
> *Subject:* Fwd: Add support for PSK cipher suites patch
>
> ---------- Forwarded message ----------
> From: *Sékine Coulibaly*
> Date: 2018-01-25 17:07 GMT+01:00
> Subject: Add support for PSK cipher suites patch
> To: nginx at nginx.org
>
> Nate,Maxim,
>
> I found a patch here (http://mailman.nginx.org/pipermail/nginx-devel/2017-September/010449.html) regarding the PSK spport in Nginx.
I can not make the new parameter ssl_psk_file work. > > I applied it to release-1.13.5 successfully. > > I updated my nginx.conf to > > stream { > > upstream dtls_udp_upstreams { > > hash $remote_addr:remote_port; > > server preprod.mycorp.com:5685; > > } > > > > > > server { > > listen 5684 udp ssl; > > ssl_protocols DTLSv1.2; > > ssl_ciphers PSK-AES128-CBC-SHA; > > ssl_psk_file /tmp/cred.txt; > > ssl_certificate /tmp/server.pem; > > ssl_certificate_key /tmp/server.key; > > proxy_pass dtls_udp_upstreams; > > } > > > > My issue is that although /tmp/cred.txt file exists, Nginx returns : > > nginx: [emerg] unknown directive "ssl_psk_file" in /tmp/nginx.conf:26. > > > > I checked the source files, it looks like the patch has been correctly applied. > > Would you mind posting the complete/corrected patch I could apply and test ? > > I'm using DTLS client with PSK load-balancer and I could experiment the setup. > > > > My patching application looks like : > > git checkout release-1.13.5 > > patch -p1 -i pskpatch.diff > > > > Thank you ! > > > > ------------------------------ > > CONFIDENTIALITY NOTICE: This email and any attachments are for the sole > use of the intended recipient(s) and contain information that may be Garmin > confidential and/or Garmin legally privileged. If you have received this > email in error, please notify the sender by reply email and delete the > message. Any disclosure, copying, distribution or use of this communication > (including attachments) by someone other than the intended recipient is > prohibited. Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From nginx-forum at forum.nginx.org  Fri Jan 26 14:49:03 2018
From: nginx-forum at forum.nginx.org (Bento)
Date: Fri, 26 Jan 2018 09:49:03 -0500
Subject: Client certificates don't work with Internet Explorer and Edge
Message-ID: <93db12a5cb8e366cfb64f791ad4477ae.NginxMailingListEnglish@forum.nginx.org>

Hi Guys,

I'm in the process of moving an old Apache2 server to nginx 1.13. One site
we have uses SSL client certificates. They are requested by the web server,
and the browser supplies them. Everything seems to work on my Mac with
Safari, Chrome and Firefox. On Windows all those browsers work too; only
Internet Explorer and Edge don't.

Both browsers ask the user which client cert to use, and I select the same
one as I did with Chrome etc., but it doesn't seem to be sent to nginx.
When adding some more logging params to nginx (like $ssl_client_serial) I
see that no serial is being forwarded on IE/Edge, but it is being forwarded
for all other browsers, both on Mac and Windows 10.

Does anyone have any clue as to why? I've been Googling for hours but to no
avail. Also tried nginx 1.10, no change.

The relevant part of my config can be found here:
https://kopy.io/0ikPF#mF7Cq0IdeFDKJQ

Thanks,
Bento

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278281,278281#msg-278281

From nginx-forum at forum.nginx.org  Fri Jan 26 19:24:16 2018
From: nginx-forum at forum.nginx.org (leeand00)
Date: Fri, 26 Jan 2018 14:24:16 -0500
Subject: How to get nginx to redirect to another path only if the root path is requested?
Message-ID: <1a10fe85e56ead3c40c01417081e9dae.NginxMailingListEnglish@forum.nginx.org>

How to get nginx to redirect to another path only if the root path is
requested?
Here is part of my server configuration:

server {
    listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

    # Make site accessible from http://localhost/
    server_name wiki wiki.leerdomain.lan;

    # Note: There should never be more than one root in a
    # virtual host
    # Also there should never be a root in the location.
    #root /var/www/nginx/;

    rewrite ^/$ /rootWiki/ redirect;

    location ^~ /rootWiki/ {
        resolver 127.0.0.1 valid=300s;

        access_log ./logs/RootWiki_access.log;
        error_log ./logs/RootWiki_error.log;

        proxy_buffers 16 4k;
        proxy_buffer_size 2k;
        proxy_set_header Host $host;
        proxy_set_header X-Real_IP $remote_addr;
        rewrite /rootWiki/(.*) /$1 break;
        proxy_pass http://192.168.1.200:8080;
    }

    location ^~ /usmle/ {
        access_log ./logs/usmle_access.log;
        ...

When I configure it as above, I am unable to access any of the
sub-locations under root... the root directory does forward to /rootWiki/,
but I receive a 502 Bad Gateway instead of the application on port 8080.

When I remove the line:

rewrite ^/$ /rootWiki/ redirect;

I'm able to access the rootWiki application and all the sub-locations from
root just fine.

It seems to me like it should work, but it does not appear to.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278283,278283#msg-278283

From nginx-forum at forum.nginx.org  Fri Jan 26 19:27:41 2018
From: nginx-forum at forum.nginx.org (leeand00)
Date: Fri, 26 Jan 2018 14:27:41 -0500
Subject: Suggestions for web apps to test out nginx load balancing?
Message-ID: 

Does anyone have a suggestion about a simple, free, open source web app,
with a database, that I could use to test out and get familiar with nginx's
load balancing functionality?
Thank you,
leeand00

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278284,278284#msg-278284

From peter_booth at me.com  Fri Jan 26 23:48:14 2018
From: peter_booth at me.com (Peter Booth)
Date: Fri, 26 Jan 2018 18:48:14 -0500
Subject: Suggestions for web apps to test out nginx load balancing?
In-Reply-To: 
References: 
Message-ID: <249659C4-861A-461D-A019-CEB8A6E90060@me.com>

The TechEmpower web framework benchmark is a set of six micro-benchmarks
implemented with over 100 different web frameworks. It's free, easy to set
up, and comes as prebuilt Docker containers.

Sent from my iPhone

> On Jan 26, 2018, at 2:27 PM, leeand00 wrote:
>
> Does anyone have a suggestion about a simple, free, open source web app,
> with a database that I could test out and get familiar with, nginx's load
> balancing functionality on?
>
> Thank you,
> leeand00
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278284,278284#msg-278284
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org  Sat Jan 27 13:25:25 2018
From: francis at daoine.org (Francis Daly)
Date: Sat, 27 Jan 2018 13:25:25 +0000
Subject: How to get nginx to redirect to another path only if the root path is requested?
In-Reply-To: <1a10fe85e56ead3c40c01417081e9dae.NginxMailingListEnglish@forum.nginx.org>
References: <1a10fe85e56ead3c40c01417081e9dae.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180127132525.GD3063@daoine.org>

On Fri, Jan 26, 2018 at 02:24:16PM -0500, leeand00 wrote:

Hi there,

> How to get nginx to redirect to another path only if the root path is
> requested?

I don't understand the rest of your mail; perhaps if you could show one
request that you make and the response that you get, and how it is not the
same as the response that you want, that would be clearer.

But this first question, I do understand.

location = / { return 301 /rootWiki/; }

should do what you want.
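Spelled out against the snippet from your original post, that suggestion
would look roughly like this. The 192.168.1.200:8080 backend is taken from
your config; everything else here is just an illustrative sketch, not a
complete server block:

```nginx
server {
    listen 80;
    server_name wiki wiki.leerdomain.lan;

    # Exact match: fires only for a request of "/" itself,
    # so other prefix locations are left alone.
    location = / {
        return 301 /rootWiki/;
    }

    # Prefix match: handles /rootWiki/ and everything under it,
    # stripping the /rootWiki/ prefix before proxying upstream.
    location ^~ /rootWiki/ {
        rewrite ^/rootWiki/(.*)$ /$1 break;
        proxy_pass http://192.168.1.200:8080;
    }
}
```

The point of `location = /` over `rewrite ^/$ ...` at server level is that
the exact-match location is only ever consulted for "/", so it cannot
interfere with requests for the other sub-locations.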
Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Sat Jan 27 13:31:16 2018
From: francis at daoine.org (Francis Daly)
Date: Sat, 27 Jan 2018 13:31:16 +0000
Subject: Quick successive reload makes "bind() xxxx failed, Address already in use" error
In-Reply-To: <201801241113541440632@zte.com.cn>
References: <201801241113541440632@zte.com.cn>
Message-ID: <20180127133116.GE3063@daoine.org>

On Wed, Jan 24, 2018 at 11:13:54AM +0800, he.hailong5 at zte.com.cn wrote:

Hi there,

> I have a script that runs two successive reloads: the first one removes
> a listen port from the stream block, and the second one adds the same
> port back to the stream block.

Why?

If it is "I want to see what happens when I do that", that's perfectly
fine; carry on with the research.

If it is "I want to achieve some other objective, and this is a step that
I think is necessary", then perhaps there is an alternative way to achieve
that objective.

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From jfs.world at gmail.com Sat Jan 27 14:37:57 2018
From: jfs.world at gmail.com (Jeffrey 'jf' Lim)
Date: Sat, 27 Jan 2018 22:37:57 +0800
Subject: how to trigger "proxy_next_upstream invalid_header"?
In-Reply-To: <20180125122229.GJ971@Romans-MacBook-Air.local>
References: <20180125122229.GJ971@Romans-MacBook-Air.local>
Message-ID: 

On Thu, Jan 25, 2018 at 8:22 PM, Roman Arutyunyan wrote:
> Hi Jeffrey,
>
> On Thu, Jan 25, 2018 at 05:41:50PM +0800, Jeffrey 'jf' Lim wrote:
>> This is more of a curiosity thing, I guess, than anything else, but...
>> how do you trigger a "proxy_next_upstream invalid_header" when
>> testing?
>>
>> I've tried basically sending random text from an upstream ('nc -l')...
>> but nginx holds on to the connection and ends up triggering a
>> "timeout" instead. If I send random text and then close the
>> connection, the random text still gets sent to the client, and no next
>> peer is tried.
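[For anyone reproducing this: a throwaway fake upstream whose response
header block is bigger than proxy_buffer_size is one reliable trigger. A
sketch, not from the thread — the port, padding header name, and 16 KiB
padding size are arbitrary choices:]

```python
import socket


def oversized_response(pad_len=16384):
    """Build an HTTP response whose header block exceeds pad_len bytes.

    A header section larger than proxy_buffer_size (default 4k/8k, one
    memory page) is one of the conditions nginx treats as an invalid
    header from the upstream.
    """
    return (b"HTTP/1.1 200 OK\r\n"
            b"X-Padding: " + b"a" * pad_len + b"\r\n"
            b"Content-Length: 2\r\n"
            b"\r\n"
            b"ok")


def serve_once(port=8081):
    """Accept one connection and answer it with the oversized response."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(65536)                 # read (and discard) the proxied request
    conn.sendall(oversized_response())
    conn.close()
    srv.close()
```

Pointing one upstream peer at 127.0.0.1:8081, calling serve_once(), and
then making a request should move nginx on to the next peer — provided
invalid_header is actually listed in proxy_next_upstream, since the
default is "error timeout" only.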
>
> The easiest way is to send the status line + header bigger than
> proxy_buffer_size bytes. Another way is to send a null byte somewhere in
> the response header. You can also try sending a broken line and header
> termination: CR followed by a non-LF byte.
>

thank you, Roman. I actually tried using the null byte and "CR followed by
non-LF" method first. Those, despite what I tried, unfortunately did not
work (for a null byte, I tried various places in the response headers:
part of the header name, part of the header value; no luck).

In the end, though, I managed to do it by sending a large header, and that
worked.

thanks,
-jf

From nginx-forum at forum.nginx.org Sun Jan 28 02:36:44 2018
From: nginx-forum at forum.nginx.org (leeand00)
Date: Sat, 27 Jan 2018 21:36:44 -0500
Subject: How to get nginx to redirect to another path only if the root path is requested?
In-Reply-To: <20180127132525.GD3063@daoine.org>
References: <20180127132525.GD3063@daoine.org>
Message-ID: 

I have other subfolders in my location paths, so for instance, other than
just / after my host, I also have /lang/english/grammar and
/lang/spanish/gramática.

But I figured it out:

    location = / {
        resolver 127.0.0.1 valid=300s;
        proxy_pass http://192.168.1.200:8080/;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        rewrite /(.*) /$1 break;
        access_log ./logs/root_access.log;
        error_log ./logs/root_error.log;
    }

The = / that you suggested fixed it right up, thanks!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278283,278289#msg-278289

From nginx-forum at forum.nginx.org Tue Jan 30 03:34:09 2018
From: nginx-forum at forum.nginx.org (spacerobot)
Date: Mon, 29 Jan 2018 22:34:09 -0500
Subject: Nginx auth_request pass through client SSL certificate
Message-ID: <8d388cdfa99cce891799863cc8189b80.NginxMailingListEnglish@forum.nginx.org>

We use auth_request right now and it works great. However, we are making a
change so that, in the future, the authentication server will only accept
SSL requests, and it also verifies client certificates. I couldn't find
information online about how to pass through the client SSL certificate
when using auth_request.

Current configuration:

    location = /_auth {
        internal;
        proxy_method POST;
        proxy_pass http://authentication-service;
    }

Now that the authentication service is https-only and requires client SSL
cert verification as well, changing only to

    proxy_pass https://authentication-service;

doesn't work, because it doesn't pass through the client SSL information
from the original request. I tried adding proxy_set_header for the
X-SSL-CERT header with $ssl_client_cert and it didn't work properly.

What's the best way that would allow me to continue to use the
auth_request module but allow passing through client SSL information from
the original request to the upstream?

Thanks!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278291,278291#msg-278291
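[A sketch of one possible direction, not a confirmed answer from the list:
nginx cannot forward the client's TLS handshake itself. It can, however,
present its own certificate to the https auth service with
proxy_ssl_certificate, and pass the original client's certificate along as
a header using $ssl_client_escaped_cert (available since nginx 1.13.5; the
older $ssl_client_cert variable contains embedded newlines, which tends to
break header values — possibly why the X-SSL-CERT attempt above "didn't
work properly"). The certificate paths below are illustrative:]

    location = /_auth {
        internal;
        proxy_method POST;

        # nginx's own client certificate for the mTLS handshake with
        # the auth service (paths are illustrative):
        proxy_ssl_certificate     /etc/nginx/certs/nginx-client.pem;
        proxy_ssl_certificate_key /etc/nginx/certs/nginx-client.key;

        # Forward the original client's certificate as a header; the
        # escaped variant is url-encoded onto a single line.
        proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;

        proxy_pass https://authentication-service;
    }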