From jcreek at indigital.net Thu Oct 1 18:13:54 2020 From: jcreek at indigital.net (Jeff Creek) Date: Thu, 1 Oct 2020 14:13:54 -0400 Subject: Health Check Issue Message-ID: I am trying to check the contents of an html file on upstream servers. A configuration using HTTP works. However, using the same check with HTTPS does not work. nginx version: nginx/1.19.0 (nginx-plus-r22) Upstreams are IIS. Non working config: log_format upstreamlog-giscrp '$server_name to: $upstream_addr [$request] ' 'upstream_response_time $upstream_response_time ' 'msec $msec request_time $request_time'; match giscrp_up { body ~* "IISUP"; } upstream giscrp { server 10.212.226.58:443; server 10.212.226.59:443; zone map 64k; } server { listen 443 ssl http2; server_name giscrp.vt911.net; ssl_certificate /etc/pki/tls/certs/ vt911.net/STAR_vt911_net-bundle.crt; ssl_certificate_key /etc/pki/tls/certs/ vt911.net/STAR_vt911.net.key; access_log /var/log/nginx/access-giscrp.log upstreamlog-giscrp; #proxy_ssl on; location / { proxy_set_header X-Forwarded-For $remote_addr; #Passes client IP to upstream web server proxy_set_header Host $http_host; #Passes request hostname from client in header proxy_set_header X-Forwarded-Proto $scheme; proxy_http_version 1.1; proxy_pass https://giscrp; health_check match=giscrp_up uri=/iisstatus.html; } } Working config over HTTP: log_format upstreamlog-map '$server_name to: $upstream_addr [$request] ' 'upstream_response_time $upstream_response_time ' 'msec $msec request_time $request_time'; match iis_up { body ~ "IISUP"; } server { listen 80; server_name map.vt911.net; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; access_log /var/log/nginx/access-map.log upstreamlog-map; location / { proxy_pass http://map.vt911.net; proxy_set_header X-Forwarded-For $remote_addr; proxy_http_version 1.1; proxy_set_header Connection ""; health_check match=iis_up uri=/iisstatus.html; } } upstream map.vt911.net { server 10.212.224.56:80; server 10.212.224.57:80; zone map 64k; 
} I am not sure if the health check is sending the request to the IP instead of the FQDN and the server is rejecting it or something. Any ideas would be appreciated. -- Jeff Creek INdigital -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Thu Oct 1 20:25:11 2020 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 1 Oct 2020 23:25:11 +0300 Subject: Health Check Issue In-Reply-To: References: Message-ID: <20201001202511.GA20922@FreeBSD.org.ru> Hi Jeff, thanks for the report. Since this is related to the commercial version of NGINX - NGINX Plus, I'd recommend to raise a request with 24x7 NGINX Support Team through the My F5 Portal. Thanks. -- Sergey Osokin On Thu, Oct 01, 2020 at 02:13:54PM -0400, Jeff Creek wrote: > I am trying to check the contents of an html file on upstream servers. A > configuration using HTTP works. However, using the same check with HTTPS > does not work. > > nginx version: nginx/1.19.0 (nginx-plus-r22) > > Upstreams are IIS. 
> > Non working config: > log_format upstreamlog-giscrp '$server_name to: $upstream_addr [$request] ' > 'upstream_response_time $upstream_response_time ' > 'msec $msec request_time $request_time'; > > match giscrp_up { > body ~* "IISUP"; > } > > upstream giscrp { > server 10.212.226.58:443; > server 10.212.226.59:443; > zone map 64k; > } > > server { > listen 443 ssl http2; > > server_name giscrp.vt911.net; > > > ssl_certificate /etc/pki/tls/certs/ > vt911.net/STAR_vt911_net-bundle.crt; > ssl_certificate_key /etc/pki/tls/certs/ > vt911.net/STAR_vt911.net.key; > access_log /var/log/nginx/access-giscrp.log upstreamlog-giscrp; > > > #proxy_ssl on; > > > > location / { > proxy_set_header X-Forwarded-For $remote_addr; #Passes client > IP to upstream web server > proxy_set_header Host $http_host; #Passes request hostname > from client in header > proxy_set_header X-Forwarded-Proto $scheme; > proxy_http_version 1.1; > proxy_pass https://giscrp; > health_check match=giscrp_up uri=/iisstatus.html; > } > } > > > Working config over HTTP: > log_format upstreamlog-map '$server_name to: $upstream_addr [$request] ' > 'upstream_response_time $upstream_response_time ' > 'msec $msec request_time $request_time'; > match iis_up { > body ~ "IISUP"; > } > > server { > listen 80; > server_name map.vt911.net; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > access_log /var/log/nginx/access-map.log upstreamlog-map; > > location / { > proxy_pass http://map.vt911.net; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_http_version 1.1; > proxy_set_header Connection ""; > health_check match=iis_up uri=/iisstatus.html; > } > } > > upstream map.vt911.net { > server 10.212.224.56:80; > server 10.212.224.57:80; > zone map 64k; > } > > I am not sure if the health check is sending the request to the IP instead > of the FQDN and the server is rejecting it or something. > > Any ideas would be appreciated. 
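One HTTPS-specific difference worth checking before escalating: by default nginx opens upstream TLS connections without SNI (proxy_ssl_server_name defaults to off), and the health-check probes use the same upstream TLS settings as regular proxying, so an IIS HTTPS binding that expects a matching host name can reject the probe while the plain-HTTP check passes. A hedged sketch of the relevant directives (the host name and paths are taken from the post; whether the IIS bindings actually require SNI is an assumption):

```nginx
upstream giscrp {
    # Note: the original config names this shared zone "map", which
    # collides with the zone name used by the other upstream.
    zone giscrp 64k;
    server 10.212.226.58:443;
    server 10.212.226.59:443;
}

server {
    # ... listen / ssl_certificate directives as in the post ...

    location / {
        proxy_pass https://giscrp;
        proxy_http_version 1.1;

        # Send SNI matching the certificate/binding on the IIS side;
        # these settings also apply to the health-check connections.
        proxy_ssl_server_name on;
        proxy_ssl_name giscrp.vt911.net;

        health_check match=giscrp_up uri=/iisstatus.html;
    }
}
```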
> > -- > Jeff Creek > INdigital > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Oct 5 11:39:06 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 5 Oct 2020 12:39:06 +0100 Subject: SSL routines:tls_process_client_hello:version too low In-Reply-To: <2a241f300e93b9a8825ec15dff2eea69.NginxMailingListEnglish@forum.nginx.org> References: <3DB2C8BF-F633-4EA0-9133-BA27AAF2B3CD@nginx.com> <08f877e2e877403aeaea769bc4084ef0.NginxMailingListEnglish@forum.nginx.org> <0ea415173df1f2171c9b65a1faf68a5e.NginxMailingListEnglish@forum.nginx.org> <2a241f300e93b9a8825ec15dff2eea69.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201005113906.GO30691@daoine.org> On Wed, Sep 30, 2020 at 04:39:03PM -0400, jriker1 wrote: Hi there, > Not sure if they are relevant but went thru the entire log. Found these > references. Guessing related but not sure they tell me personally > anything: These logs do seem to indicate that the tls-negotiation part of things is working ok, so the error message in the Subject: seems to no longer be an issue? The fact that your upstream is using WWW-Authenticate: Negotiate and RDG_OUT_DATA makes me suspect that it may be using some Microsoft-specific non-standard-HTTP things that either may not work through a reverse proxy, or may not work through nginx. I don't have a better answer for you; if you want to continue trying, are you able to re-create the client / nginx / upstream server setup that worked in the past? Then maybe change only one piece at a time to see where the first breakage happens? 
Good luck with it, f -- Francis Daly francis at daoine.org From acbeets610 at gmail.com Mon Oct 5 14:35:22 2020 From: acbeets610 at gmail.com (Anna Lewis) Date: Mon, 5 Oct 2020 08:35:22 -0600 Subject: SSL routines:tls_process_client_hello:version too low In-Reply-To: <20201005113906.GO30691@daoine.org> References: <3DB2C8BF-F633-4EA0-9133-BA27AAF2B3CD@nginx.com> <08f877e2e877403aeaea769bc4084ef0.NginxMailingListEnglish@forum.nginx.org> <0ea415173df1f2171c9b65a1faf68a5e.NginxMailingListEnglish@forum.nginx.org> <2a241f300e93b9a8825ec15dff2eea69.NginxMailingListEnglish@forum.nginx.org> <20201005113906.GO30691@daoine.org> Message-ID: Hi, Can you please remove me from this mailing list? I'm not sure how I got added. Thanks, Anna On Mon, Oct 5, 2020 at 5:39 AM Francis Daly wrote: > On Wed, Sep 30, 2020 at 04:39:03PM -0400, jriker1 wrote: > > Hi there, > > > Not sure if they are relevant but went thru the entire log. Found these > > references. Guessing related but not sure they tell me personally > > anything: > > These logs do seem to indicate that the tls-negotiation part of things > is working ok, so the error message in the Subject: seems to no longer > be an issue? > > The fact that your upstream is using WWW-Authenticate: Negotiate and > RDG_OUT_DATA makes me suspect that it may be using some Microsoft-specific > non-standard-HTTP things that either may notwork through a reverse proxy, > or may not work through nginx. > > I don't have a better answer for you; if you want to continue trying, > are you able to re-create the client / nginx / upstream server setup > that worked in the past? Then maybe change only one piece at a time to > see where the first breakage happens? 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Anna Lewis RN, BSN (913) 424-7531 acbeets610 at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Mon Oct 5 22:25:31 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 5 Oct 2020 15:25:31 -0700 Subject: CVE-2019-20372 Message-ID: Hi, CVE-2019-20372 mentioned a security vulnerability, but I don't see it in http://nginx.org/en/security_advisories.html CVE-2019-20372 did say a fix in nginx 1.17.7. When I check the CHANGES , I see bugfix: *) Bugfix: requests with bodies were handled incorrectly when returning redirections with the "error_page" directive; the bug had appeared in 0.7.12. Are those the same thing from this commit ? Is this really a vulnerability? under what conditions? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanbgould at gmail.com Wed Oct 7 05:02:16 2020 From: ryanbgould at gmail.com (Ryan Gould) Date: Tue, 6 Oct 2020 22:02:16 -0700 Subject: HTTP/3 getsockname Bad file descriptor Message-ID: <11ce6e4f-2d29-2cd5-cb46-88d090f095a2@gmail.com> hello all you amazing developers, i found some old 2013 references to this error relating to SPDY, but have not seen anything recently. 
i am building on a debian 9 box using the latest code from here: https://hg.nginx.org/nginx-quic using build instructions from https://quic.nginx.org/readme.html i get the following errors when i use firefox nightly (82.0b8), Chrome/85, or when i use command-line curl (built using quiche): 2020/10/07 04:35:04 [alert] 14263#0: *78 getsockname() failed (9: Bad file descriptor), client: X.X.X.X, server: example.com, request: "HEAD / HTTP/3" 2020/10/07 04:35:04 [alert] 14263#0: *78 getsockname() failed (9: Bad file descriptor) while sending response to client, client: X.X.X.X, server: example.com, request: "HEAD / HTTP/3" i dont know if it is related, but i am also not having any success getting 0-RTT or passing the QUIC test (but HTTP/3 passes) on https://www.http3check.net/ otherwise, firefox and chrome think they are using HTTP/3 successfully. if i can help with logs, please let me know. From ryanbgould at gmail.com Wed Oct 7 05:02:27 2020 From: ryanbgould at gmail.com (Ryan Gould) Date: Tue, 6 Oct 2020 22:02:27 -0700 Subject: HTTP/3 getsockname Bad file descriptor Message-ID: <6271890b-e4de-7a96-3bc4-403510100c2a@gmail.com> hello all you amazing developers, i found some old 2013 references to this error relating to SPDY, but have not seen anything recently. 
i am building on a debian 9 box using the latest code from here: https://hg.nginx.org/nginx-quic using build instructions from https://quic.nginx.org/readme.html i get the following errors when i use firefox nightly (82.0b8), Chrome/85, or when i use command-line curl (built using quiche): 2020/10/07 04:35:04 [alert] 14263#0: *78 getsockname() failed (9: Bad file descriptor), client: X.X.X.X, server: example.com, request: "HEAD / HTTP/3" 2020/10/07 04:35:04 [alert] 14263#0: *78 getsockname() failed (9: Bad file descriptor) while sending response to client, client: X.X.X.X, server: example.com, request: "HEAD / HTTP/3" i dont know if it is related, but i am also not having any success getting 0-RTT or passing the QUIC test (but HTTP/3 passes) on https://www.http3check.net/ otherwise, firefox and chrome think they are using HTTP/3 successfully. if i can help with logs, please let me know. From nginx-forum at forum.nginx.org Wed Oct 7 08:29:50 2020 From: nginx-forum at forum.nginx.org (tored) Date: Wed, 07 Oct 2020 04:29:50 -0400 Subject: Keepalived Connections Reset after reloading the configuration (HUP Signal) In-Reply-To: <62159cd3128af693dfbe0a4b2f912438.NginxMailingListEnglish@forum.nginx.org> References: <56774ba55034a4075b78189191a29a62.NginxMailingListEnglish@forum.nginx.org> <62159cd3128af693dfbe0a4b2f912438.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1f925cd832be67e2e4abac131e2e6b07.NginxMailingListEnglish@forum.nginx.org> Hi, I'm looking into the same issue; how to improve graceful shutdown when using keep-alive connections. It seems nginx at some point had support for doing graceful shutdown (if i read the code correctly): http://hg.nginx.org/nginx/rev/03f1133f24e8 But it was removed at a later stage: http://hg.nginx.org/nginx/rev/5e6142609e48 The feature was removed due to negative impact on CPU usage, maybe the process could be handled in ngx_http_finalize_connection(ngx_http_request_t *r) ? 
That may be a better option as it should only trigger when ngx_terminate and ngx_quit flags are active: diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c index 2a0528c6..0b8e05fa 100644 --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -2735,6 +2735,15 @@ ngx_http_finalize_connection(ngx_http_request_t *r) return; } + if (ngx_terminate + || ngx_quit + && r->keepalive) + { + r->keepalive = 0; + ngx_http_close_request(r, 0); + return; + } + if (clcf->lingering_close == NGX_HTTP_LINGERING_ALWAYS || (clcf->lingering_close == NGX_HTTP_LINGERING_ON && (r->lingering_close Does this seem something that could work ? One other option could be to expand the function ngx_close_idle_connections(ngx_cycle_t *cycle) to check if the connection is using keepalive, if so, set keeplive to 0. I believe that would ensure that during graceful shutdown, an keepalive connection would be able to perform one request, then asked to close the connection: diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c index c082d0da..df90c4f1 100644 --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -1334,8 +1334,10 @@ ngx_close_idle_connections(ngx_cycle_t *cycle) { ngx_uint_t i; ngx_connection_t *c; + ngx_http_request_t *r; c = cycle->connections; + r = cycle-> for (i = 0; i < cycle->connection_n; i++) { @@ -1345,6 +1347,11 @@ ngx_close_idle_connections(ngx_cycle_t *cycle) c[i].close = 1; c[i].read->handler(c[i].read); } + + if (r->keepalive) { + r->keepalive = 0; + } + } } I understand that the second example is not a fully working example. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,197927,289673#msg-289673 From pluknet at nginx.com Wed Oct 7 09:33:16 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 7 Oct 2020 10:33:16 +0100 Subject: HTTP/3 getsockname Bad file descriptor In-Reply-To: <6271890b-e4de-7a96-3bc4-403510100c2a@gmail.com> References: <6271890b-e4de-7a96-3bc4-403510100c2a@gmail.com> Message-ID: > On 7 Oct 2020, at 06:02, Ryan Gould wrote: > > hello all you amazing developers, > > i found some old 2013 references to this error relating to SPDY, but have not seen anything recently. i am building on a debian 9 box using the latest code from here: https://hg.nginx.org/nginx-quic using build instructions from https://quic.nginx.org/readme.html > > i get the following errors when i use firefox nightly (82.0b8), Chrome/85, or when i use command-line curl (built using quiche): > > 2020/10/07 04:35:04 [alert] 14263#0: *78 getsockname() failed (9: Bad file descriptor), client: X.X.X.X, server: example.com, request: "HEAD / HTTP/3" > 2020/10/07 04:35:04 [alert] 14263#0: *78 getsockname() failed (9: Bad file descriptor) while sending response to client, client: X.X.X.X, server: example.com, request: "HEAD / HTTP/3" Thank you for your report. This is a (known) issue related to how connections are used in the ngx_http_v3_module. > i dont know if it is related, but i am also not having any success getting 0-RTT or passing the QUIC test (but HTTP/3 passes) on https://www.http3check.net/ > > otherwise, firefox and chrome think they are using HTTP/3 successfully. This one is unrelated to alerts. Using 0-RTT requires support in clients. I am not aware of HTTP/3 0-RTT support in curl (though, it supports HTTP/3). You may want to try 0-RTT with different clients such as ngtcp2 or kwik. It should also be explicitly enabled on server with "ssl_early_data on". 
-- Sergey Kandaurov From mdounin at mdounin.ru Wed Oct 7 13:54:28 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 7 Oct 2020 16:54:28 +0300 Subject: Keepalived Connections Reset after reloading the configuration (HUP Signal) In-Reply-To: <1f925cd832be67e2e4abac131e2e6b07.NginxMailingListEnglish@forum.nginx.org> References: <56774ba55034a4075b78189191a29a62.NginxMailingListEnglish@forum.nginx.org> <62159cd3128af693dfbe0a4b2f912438.NginxMailingListEnglish@forum.nginx.org> <1f925cd832be67e2e4abac131e2e6b07.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201007135428.GT1136@mdounin.ru> Hello! On Wed, Oct 07, 2020 at 04:29:50AM -0400, tored wrote: > I'm looking into the same issue; how to improve graceful shutdown when using > keep-alive connections. > > It seems nginx at some point had support for doing graceful shutdown (if i > read the code correctly): > http://hg.nginx.org/nginx/rev/03f1133f24e8 > > But it was removed at a later stage: > http://hg.nginx.org/nginx/rev/5e6142609e48 You aren't reading the code correctly. Starting with nginx 0.5.15 (revision 03f1133f24e8), nginx closes keepalive connnections when it receives the reconfiguration signal. Quoting CHANGES: *) Feature: now the keep-alive connections are closed just after receiving the reconfiguration signal. This feature wasn't removed in 5e6142609e48, but rather it was changed how things work: instead of doing a full scan over all connections on each event loop iteration, nginx now does it only once, and also makes sure no new idle connections are added after the shutdown signal. [...] 
-- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Wed Oct 7 13:57:27 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 7 Oct 2020 14:57:27 +0100 Subject: HTTP/3 getsockname Bad file descriptor In-Reply-To: References: <6271890b-e4de-7a96-3bc4-403510100c2a@gmail.com> Message-ID: > On 7 Oct 2020, at 10:33, Sergey Kandaurov wrote: > >> >> On 7 Oct 2020, at 06:02, Ryan Gould wrote: >> >> hello all you amazing developers, >> >> i found some old 2013 references to this error relating to SPDY, but have not seen anything recently. i am building on a debian 9 box using the latest code from here: https://hg.nginx.org/nginx-quic using build instructions from https://quic.nginx.org/readme.html >> >> i get the following errors when i use firefox nightly (82.0b8), Chrome/85, or when i use command-line curl (built using quiche): >> >> 2020/10/07 04:35:04 [alert] 14263#0: *78 getsockname() failed (9: Bad file descriptor), client: X.X.X.X, server: example.com, request: "HEAD / HTTP/3" >> 2020/10/07 04:35:04 [alert] 14263#0: *78 getsockname() failed (9: Bad file descriptor) while sending response to client, client: X.X.X.X, server: example.com, request: "HEAD / HTTP/3" > > Thank you for your report. > This is a (known) issue related to how connections are used > in the ngx_http_v3_module. Now fixed in https://hg.nginx.org/nginx-quic/rev/d57cfdebe301 -- Sergey Kandaurov From marcin.wanat at gmail.com Wed Oct 7 16:50:30 2020 From: marcin.wanat at gmail.com (Marcin Wanat) Date: Wed, 7 Oct 2020 18:50:30 +0200 Subject: Nginx bug when mixing map and try_files ? Message-ID: Hi, i am doing simple webp client support check using map and then using try_files to check if file exists and serve it. 
Nginx 1.18, my complete nginx.conf: events { use epoll; worker_connections 128; } http { # Check if client is capable of handling webp map $http_accept $webp_suffix { default ""; "~*webp" ".webp"; } server { listen *:8888; server_name test; root /srv; location ~ ^/imgs/([0-9]*)/(.*)$ { add_header X-webp $webp_suffix; try_files /imgs/$1$webp_suffix /imgs/$1.jpg =404; } } } Now i am opening: http://test:8888/imgs/10/whatever And it results in error 404. Files /srv/imgs/10.jpg and /srv/imgs/10.webp do exist. When i spoof my client and remove webp from http accept list, then everything works ok and serve .jpg. When i change order of try_files arguments from: /imgs/$1$webp_suffix /imgs/$1.jpg to: /imgs/$1.jpg /imgs/$1$webp_suffix than it works too and serve .jpg I have added "add_header" to check if map webp detection works and it results in: X-webp .webp header when webp support enabled in client, so map is working as expected. What am i missing ? 1) Why it does not serve .webp file at all ? 2) Why when try_files has webp check on first position it DO NOT serve .jpg file but 404 and when i swap order of parameters i DO serve .jpg ? Regards, Marcin Wanat -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Oct 7 18:08:36 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Oct 2020 19:08:36 +0100 Subject: Nginx bug when mixing map and try_files ? In-Reply-To: References: Message-ID: <20201007180836.GP30691@daoine.org> On Wed, Oct 07, 2020 at 06:50:30PM +0200, Marcin Wanat wrote: Hi there, > i am doing simple webp client support check using map and then using > try_files to check if file exists and serve it. $1 may not mean what you want it to mean, when more than one regex-thing is involved. And "map" can be a regex-thing. 
> location ~ ^/imgs/([0-9]*)/(.*)$ { > add_header X-webp $webp_suffix; > try_files /imgs/$1$webp_suffix > /imgs/$1.jpg =404; > } If you change the location regex to whatever your engine's version of "save this in a named variable" is, then use that named variable in the try_files line, you may have better luck. Perhaps: location ~ ^/imgs/(?P<numbers>[0-9]*)/(.*)$ { and use $numbers instead of $1. > What am i missing ? > > 1) Why it does not serve .webp file at all ? > 2) Why when try_files has webp check on first position it DO NOT serve .jpg > file but 404 and when i swap order of parameters i DO serve .jpg ? When nginx needs to know the value of $webp_suffix, the "map" is used, which messes with your $1. Then when nginx substitutes in the value of $1, it is not what it was from the regex location. So it all Just Works when $webp_suffix does not need to be read. f -- Francis Daly francis at daoine.org From marcin.wanat at gmail.com Wed Oct 7 18:35:20 2020 From: marcin.wanat at gmail.com (Marcin Wanat) Date: Wed, 7 Oct 2020 20:35:20 +0200 Subject: Nginx bug when mixing map and try_files ? In-Reply-To: <20201007180836.GP30691@daoine.org> References: <20201007180836.GP30691@daoine.org> Message-ID: Hi, On Wed, Oct 7, 2020 at 8:08 PM Francis Daly wrote: > $1 may not mean what you want it to mean, when more than one regex-thing > is involved. And "map" can be a regex-thing. > > If you change the location regex to whatever your engine's version of > "save this in a named variable" is, then use that named variable in the > try_files line, you may have better luck. > > Perhaps: > > location ~ ^/imgs/(?P<numbers>[0-9]*)/(.*)$ { > > and use $numbers instead of $1. > > When nginx needs to know the value of $webp_suffix, the "map" is used, > which messes with your $1. > > Then when nginx substitutes in the value of $1, it is not what it was > from the regex location. > > So it all Just Works when $webp_suffix does not need to be read. > > This is exactly the problem! Thank you! 
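Putting that advice back into the original config, the working shape looks like this (named capture `numbers` as suggested above; everything else is from the first post):

```nginx
map $http_accept $webp_suffix {
    default  "";
    "~*webp" ".webp";
}

server {
    listen *:8888;
    server_name test;
    root /srv;

    # Evaluating $webp_suffix runs the map's regex, which resets $1;
    # a named capture keeps its value across that evaluation.
    location ~ ^/imgs/(?P<numbers>[0-9]*)/(.*)$ {
        add_header X-webp $webp_suffix;
        try_files /imgs/$numbers$webp_suffix /imgs/$numbers.jpg =404;
    }
}
```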
Regards, Marcin Wanat -------------- next part -------------- An HTML attachment was scrubbed... URL: From showgammer at gmail.com Wed Oct 7 18:43:09 2020 From: showgammer at gmail.com (Mr Alex) Date: Wed, 7 Oct 2020 21:43:09 +0300 Subject: Nginx bug when mixing map and try_files ? In-Reply-To: References: <20201007180836.GP30691@daoine.org> Message-ID: stfu On Wed, Oct 7, 2020 at 9:35 PM Marcin Wanat wrote: > Hi, > > On Wed, Oct 7, 2020 at 8:08 PM Francis Daly wrote: > > >> $1 may not mean what you want it to mean, when more than one regex-thing >> is involved. And "map" can be a regex-thing. >> >> If you change the location regex to whatever your engine's version of >> "save this in a named variable" is, then use that named variable in the >> try_files line, you may have better luck. >> >> Perhaps: >> >> location ~ ^/imgs/(?P[0-9]*)/(.*)$ { >> >> and use $numbers instead of $1. >> >> When nginx needs to know the value of $webp_suffix, the "map" is used, >> which messes with your $1. >> >> Then when nginx substitutes in the value of $1, it is not what it was >> from the regex location. >> >> So it all Just Works when $webp_suffix does not need to be read. >> >> > This is exactly the problem! Thank you! > > Regards, > Marcin Wanat > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Oct 8 07:31:57 2020 From: nginx-forum at forum.nginx.org (tored) Date: Thu, 08 Oct 2020 03:31:57 -0400 Subject: Keepalived Connections Reset after reloading the configuration (HUP Signal) In-Reply-To: <20201007135428.GT1136@mdounin.ru> References: <20201007135428.GT1136@mdounin.ru> Message-ID: Thanks Maxim to taking the time to respond. 
> This feature wasn't removed in 5e6142609e48, but rather it was > changed how things work: instead of doing a full scan over all > connections on each event loop iteration, nginx now does it only > once, and also makes sure no new idle connections are added after > the shutdown signal. > I don't fully understand what happens to non-idle keep-alive connections after the shutdown signal is sent. If I understand correctly, non-idle keep-alive connections will continue to serve requests after a graceful shutdown, until they are terminated by "normal" events, such as: * connection is closed/terminated by the client * or by the server, e.g. if (r->connection->requests >= clcf->keepalive_requests) * or if keepalive_timeout is met * any other event that would close the connection during normal operation From reading the code, it looks like this is the expected flow after a graceful shutdown. I read somewhere that they expected nginx to return "Connection: close" headers to keepalive connections after graceful shutdown signal is sent. But I don't see that this is true, and not expected either from reading the code. Thanks, Tore Posted at Nginx Forum: https://forum.nginx.org/read.php?2,197927,289684#msg-289684 From lukasz at tasz.eu Thu Oct 8 09:35:37 2020 From: lukasz at tasz.eu (Łukasz Tasz) Date: Thu, 8 Oct 2020 11:35:37 +0200 Subject: keepalive seems not to work Message-ID: Hi all, can I expect that proxy_pass will keep connection to remote server that is being proxied? 
when I'm using setup client -> proxy -> server it looks to work but when I'm using: client -> 1stProxy_upstream -> proxy -> server connection between 1stProxy and proxy is being kept thanks to keepalive 100, but proxy makes new connection every new request, very simple setup: http { server { listen 8080; location / { keepalive_disable none; keepalive_requests 1000; keepalive_timeout 300s; proxy_cache proxy-cache; proxy_cache_valid 200 302 301 30m; proxy_cache_valid any 1m; proxy_cache_key $scheme://$http_host$request_uri; proxy_pass $scheme://$http_host$request_uri; proxy_http_version 1.1; proxy_set_header Connection ""; } } } I would expect that when client connects proxy and it works then it should also work when proxy connects upstream proxy.... any ideas? thanks in advance Łukasz Tasz RTKW -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.wanat at gmail.com Thu Oct 8 09:42:35 2020 From: marcin.wanat at gmail.com (Marcin Wanat) Date: Thu, 8 Oct 2020 11:42:35 +0200 Subject: keepalive seems not to work In-Reply-To: References: Message-ID: On Thu, Oct 8, 2020 at 11:36 AM Łukasz Tasz wrote: > Hi all, > > can I expect that proxy_pass will keep connection to remote server that is > being proxied? 
> > when I'm using setup client -> proxy -> server it looks to work > but when I'm using: > client -> 1stProxy_upstream -> proxy -> server > connection between 1stProxy and proxy is being kept thanks to keepalive > 100, but proxy makes new connection every new request, very simple setup: > > http { > server { > listen 8080; > location / { > keepalive_disable none; > keepalive_requests 1000; > keepalive_timeout 300s; > proxy_cache proxy-cache; > proxy_cache_valid 200 302 301 30m; > proxy_cache_valid any 1m; > proxy_cache_key $scheme://$http_host$request_uri; > proxy_pass $scheme://$http_host$request_uri; > proxy_http_version 1.1; > proxy_set_header Connection ""; > } > } > } > I would expect that when client connects proxy and it works then it should > also works when proxy connects upstream proxy.... > For keepalive in upstream proxy you shoud use upstream configuration block and configure keepalive in it: upstream backend { zone backend 1m; server your-server.com; keepalive 128; } server { listen 8080; location / { proxy_cache proxy-cache; proxy_cache_valid 200 302 301 30m; proxy_cache_valid any 1m; proxy_cache_key $scheme://$http_host$request_uri; proxy_pass $scheme://backend$request_uri; proxy_http_version 1.1; proxy_set_header Connection ""; } } -- Marcin Wanat -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukasz at tasz.eu Thu Oct 8 10:16:21 2020 From: lukasz at tasz.eu (=?UTF-8?B?xYF1a2FzeiBUYXN6?=) Date: Thu, 8 Oct 2020 12:16:21 +0200 Subject: keepalive seems not to work In-Reply-To: References: Message-ID: Hi, sucha setup is on 1stproxy, there I have upstream defined to second proxy and it works - connection is reused. problem is that it is chain of forward proxy with caching, and your-server.com including port is different - service:port is dynamic. 
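Some background on why the chained hop behaves differently: nginx only caches upstream connections inside a named `upstream{}` block (via the `keepalive` directive); with `proxy_pass $scheme://$http_host$request_uri;` there is no such block, so every request opens a fresh connection to the next hop. When the set of backends is known in advance, one hedged workaround (the backend names, hosts, and ports below are illustrative, not from the thread) is to map the Host header onto named upstreams:

```nginx
upstream backend_a { server a.example.com:8080; keepalive 16; }
upstream backend_b { server b.example.com:8080; keepalive 16; }

map $http_host $pool {
    a.example.com backend_a;
    b.example.com backend_b;
}

server {
    listen 8080;
    location / {
        # A variable proxy_pass that resolves to an upstream *name*
        # still uses that upstream's keepalive connection cache.
        proxy_pass http://$pool$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```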
I'm asking for it, because with firefox (set global proxy to my second proxy) I go to some blabla.your-server.com connection is kept in a meaning that on server side I see only one established connection, and all of requests that I make from firefox are made over this kept connection. Problem starts when I will use chain of proxies (all nginx). regards Łukasz Tasz RTKW czw., 8 paź 2020 o 11:43 Marcin Wanat napisał(a): > > On Thu, Oct 8, 2020 at 11:36 AM Łukasz Tasz wrote: > >> Hi all, >> >> can I expect that proxy_pass will keep connection to remote server that >> is being proxied? >> >> when I'm using setup client -> proxy -> server it looks to work >> but when I'm using: >> client -> 1stProxy_upstream -> proxy -> server >> connection between 1stProxy and proxy is being kept thanks to keepalive >> 100, but proxy makes new connection every new request, very simple setup: >> >> http { >> server { >> listen 8080; >> location / { >> keepalive_disable none; >> keepalive_requests 1000; >> keepalive_timeout 300s; >> proxy_cache proxy-cache; >> proxy_cache_valid 200 302 301 30m; >> proxy_cache_valid any 1m; >> proxy_cache_key $scheme://$http_host$request_uri; >> proxy_pass $scheme://$http_host$request_uri; >> proxy_http_version 1.1; >> proxy_set_header Connection ""; >> } >> } >> } >> I would expect that when client connects proxy and it works then it >> should also works when proxy connects upstream proxy.... 
> > For keepalive in upstream proxy you shoud use upstream configuration block > and configure keepalive in it: > > upstream backend { > zone backend 1m; > server your-server.com; > keepalive 128; > } > > server { > listen 8080; > location / { > proxy_cache proxy-cache; > proxy_cache_valid 200 302 301 30m; > proxy_cache_valid any 1m; > proxy_cache_key $scheme://$http_host$request_uri; > proxy_pass $scheme://backend$request_uri; > proxy_http_version 1.1; > proxy_set_header Connection ""; > } > } > > -- > Marcin Wanat > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Oct 8 21:17:04 2020 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 09 Oct 2020 00:17:04 +0300 Subject: Unit 1.20.0 release Message-ID: <5104730.Sb9uPGUboI@vbart-laptop> Hi, I'm glad to announce a new release of NGINX Unit. It is yet another big release, featuring ASGI support for Python and a long list of other improvements and bug fixes. ASGI 3.0 is a modern standardized interface that enables writing natively asynchronous web applications making use of the async/await feature available in latest versions of Python. Now, Unit fully supports it along with WSGI. Even more, Unit automatically detects the interface your Python app is using (ASGI or WSGI); the configuration experience remains the same, though. Also, our take on ASGI relies on Unit's native high-perf capabilities to implement WebSockets. 
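To illustrate the interface the release adds support for (this snippet is not from the announcement): an ASGI 3.0 application is just an async callable taking `scope`, `receive`, and `send`. Driving one by hand, outside any server, shows the message flow a server such as Unit exchanges with the app:

```python
import asyncio

# A minimal ASGI 3.0 application: one async callable.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello, ASGI"})

# Drive the app with hand-rolled receive/send callables to show
# the event messages a server would exchange with it.
async def run_once():
    sent = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return sent

messages = asyncio.run(run_once())
print(messages[0]["status"], messages[1]["body"])  # 200 b'Hello, ASGI'
```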
To learn more about the new feature, check out the documentation: - https://unit.nginx.org/configuration/#python In addition, we've prepared for you a couple of howtos on configuring popular ASGI-based frameworks with Unit: - Quart: https://unit.nginx.org/howto/quart/ (note a simple WebSocket app) - Starlette: https://unit.nginx.org/howto/starlette/ Finally, we've updated the Django howto to include the ASGI alternative: - https://unit.nginx.org/howto/django/ Changes with Unit 1.20.0 08 Oct 2020 *) Change: the PHP module is now initialized before chrooting; this enables loading all extensions from the host system. *) Change: AVIF and APNG image formats added to the default MIME type list. *) Change: functional tests migrated to the pytest framework. *) Feature: the Python module now fully supports applications that use the ASGI 3.0 server interface. *) Feature: the Python module now has a built-in WebSocket server implementation for applications, compatible with the HTTP & WebSocket ASGI Message Format 2.1 specification. *) Feature: automatic mounting of an isolated "/tmp" file system into chrooted application environments. *) Feature: the $host variable contains a normalized "Host" request value. *) Feature: the "callable" option sets Python application callable names. *) Feature: compatibility with PHP 8 RC 1. Thanks to Remi Collet. *) Feature: the "automount" option in the "isolation" object allows to turn off the automatic mounting of language module dependencies. *) Bugfix: "pass"-ing requests to upstreams from a route was broken; the bug had appeared in 1.19.0. Thanks to 洪志道 (Hong Zhi Dao) for discovering and fixing it. *) Bugfix: the router process could crash during reconfiguration. *) Bugfix: a memory leak occurring in the router process; the bug had appeared in 1.18.0. *) Bugfix: the "!" (non-empty) pattern was matched incorrectly; the bug had appeared in 1.19.0.
*) Bugfix: fixed building on platforms without sendfile() support, notably NetBSD; the bug had appeared in 1.16.0. I would very much like to highlight one of these changes. Perhaps the least noticeable, it is still important for the entire project: our functional tests moved to a more feature-rich pytest framework from the native Python unittest module that we've used previously. This change should enable us to write more sophisticated tests, boosting the overall quality of our future releases. All in all, this is a genuinely solid release, but I'm still more excited about the things yet to come. Yes, even more great features are coming our way very shortly! Right now, we are tinkering with route matching patterns to support regular expressions; working on keepalive connection caching; adding multithreading to application modules; and finally, fabricating the metrics API! We encourage you to follow our roadmap on GitHub, where your ideas and requests are always more than welcome: - https://github.com/orgs/nginx/projects/1 Stay tuned! wbr, Valentin V. Bartenev From ryanbgould at gmail.com Sat Oct 10 22:09:47 2020 From: ryanbgould at gmail.com (Ryan Gould) Date: Sat, 10 Oct 2020 15:09:47 -0700 Subject: HTTP3 php-fpm fastcgi_pass 500 problems Message-ID: hello awesome nginx developers, i am having some odd behavior issues when in http3 / quic mode. i am using a completely-up-to-date nginx-quic build on a debian 9 box. my nginx php-fpm configuration has been solid and unchanging for years and years. if i disable http3 / quic, things work great in http2 mode. if i enable http3 / quic mode, something is causing all php7.3-fpm pages to generate "500 Internal Server Error" pages. static html pages render perfectly. 
despite adding all the extra log arguments to nginx and to php7.3-fpm (display_errors and the rest) i am only seeing the following in the nginx error log: 2020/10/10 21:19:47 [alert] 10614#0: *37 epoll_ctl(1, -1) failed (9: Bad file descriptor), client: X.X.X, server: example.com, request: "GET /php_test_page HTTP/3" there is nothing to be seen in any other browser or server log. the server does send the default / implied favicon.ico perfectly. the php test page is only a phpinfo() command. i have always used the fastcgi_pass socket setup, but i have also tried the port version (same problem). the same exact thing happens when i try it with a completely-up-to-date nginx-quiche build on a different server. i have not had much luck tracking down info on "epoll_ctl". any thoughts or suggestions? if i can help with logs, please let me know. From pluknet at nginx.com Sat Oct 10 23:03:02 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Sun, 11 Oct 2020 00:03:02 +0100 Subject: HTTP3 php-fpm fastcgi_pass 500 problems In-Reply-To: References: Message-ID: > On 10 Oct 2020, at 23:09, Ryan Gould wrote: > > hello awesome nginx developers, > > i am having some odd behavior issues when in http3 / quic mode. > > i am using a completely-up-to-date nginx-quic build on a debian 9 box. > > my nginx php-fpm configuration has been solid and unchanging for years and years. > > if i disable http3 / quic, things work great in http2 mode. if i enable http3 / quic mode, something is causing all php7.3-fpm pages to generate "500 Internal Server Error" pages. static html pages render perfectly. > > despite adding all the extra log arguments to nginx and to php7.3-fpm (display_errors and the rest) i am only seeing the following in the nginx error log: > > 2020/10/10 21:19:47 [alert] 10614#0: *37 epoll_ctl(1, -1) failed (9: Bad file descriptor), client: X.X.X, server: example.com, request: "GET /php_test_page HTTP/3" Hello. 
It would be helpful if you'll be able to provide more details, such as "nginx -V" output, revision of the nginx-quic branch, debug log messages for a particular request causing 500 error, and relevant configuration details. How to obtain debug log: http://nginx.org/en/docs/debugging_log.html -- Sergey Kandaurov From ryanbgould at gmail.com Sun Oct 11 19:58:56 2020 From: ryanbgould at gmail.com (Ryan Gould) Date: Sun, 11 Oct 2020 12:58:56 -0700 Subject: HTTP3 php-fpm fastcgi_pass 500 problems In-Reply-To: References: Message-ID: <335540a4-eacf-4e91-d176-d52300b8311f@gmail.com> > Hello. > It would be helpful if you'll be able to provide more details, > such as "nginx -V" output, revision of the nginx-quic branch, > debug log messages for a particular request causing 500 error, > and relevant configuration details. How to obtain debug log: > http://nginx.org/en/docs/debugging_log.htm this is the nginx branch i am using: https://hg.nginx.org/nginx-quic this is the boringssl i am using: https://github.com/google/boringssl/archive/master.zip here is a debug log: https://quic.spoi.dev/quic.error-01.log.gz server config: i have pagespeed included in the build, but disabled for these tests. using unix fastcgi_pass php configuration. 
here is the "nginx -V" output: nginx version: nginx/1.19.3 built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) built with OpenSSL 1.1.1 (compatible; BoringSSL) (running with BoringSSL) TLS SNI support enabled configure arguments: --with-debug --with-cc-opt=-I../boringssl/include --with-ld-opt='-L../boringssl/build/ssl -L../boringssl/build/crypto' --with-http_v3_module --with-http_quic_module --with-stream_quic_module --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=www-data --group=www-data --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-file-aio --add-module=../headers-more-nginx-module-master --add-module=../pagespeed --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-openssl=../boringssl On 10/10/2020 3:09 PM, Ryan Gould wrote: > hello awesome nginx developers, > > i am having some odd behavior issues when in http3 / quic mode. > > i am using a completely-up-to-date nginx-quic build on a debian 9 box. > > my nginx php-fpm configuration has been solid and unchanging for years > and years. > > if i disable http3 / quic, things work great in http2 mode. if i enable > http3 / quic mode, something is causing all php7.3-fpm pages to generate > "500 Internal Server Error" pages. static html pages render perfectly.
> > despite adding all the extra log arguments to nginx and to php7.3-fpm > (display_errors and the rest) i am only seeing the following in the > nginx error log: > > 2020/10/10 21:19:47 [alert] 10614#0: *37 epoll_ctl(1, -1) failed (9: Bad > file descriptor), client: X.X.X, server: example.com, request: "GET > /php_test_page HTTP/3" > > there is nothing to be seen in any other browser or server log. the > server does send the default / implied favicon.ico perfectly. the php > test page is only a phpinfo() command. > > i have always used the fastcgi_pass socket setup, but i have also tried > the port version (same problem). > > the same exact thing happens when i try it with a completely-up-to-date > nginx-quiche build on a different server. > > i have not had much luck tracking down info on "epoll_ctl". > > any thoughts or suggestions? > > if i can help with logs, please let me know. From pluknet at nginx.com Mon Oct 12 11:58:53 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 12 Oct 2020 12:58:53 +0100 Subject: HTTP3 php-fpm fastcgi_pass 500 problems In-Reply-To: <335540a4-eacf-4e91-d176-d52300b8311f@gmail.com> References: <335540a4-eacf-4e91-d176-d52300b8311f@gmail.com> Message-ID: <2F36BC86-061D-4BFB-A4BC-B3C012CBC3FA@nginx.com> > On 11 Oct 2020, at 20:58, Ryan Gould wrote: > > > > Hello. > > It would be helpful if you'll be able to provide more details, > > such as "nginx -V" output, revision of the nginx-quic branch, > > debug log messages for a particular request causing 500 error, > > and relevant configuration details.
How to obtain debug log: > > http://nginx.org/en/docs/debugging_log.htm > > this is the nginx branch i am using: > https://hg.nginx.org/nginx-quic > > this is the boringssl i am using: > https://github.com/google/boringssl/archive/master.zip > > here is a debug log: > https://quic.spoi.dev/quic.error-01.log.gz Ok, so it looks like a longstanding problem in http upstream init routine caused by that QUIC streams in nginx do not have real event descriptors. This results in harmless alerts when using kqueue but not so with epoll. Please try this patch. # HG changeset patch # User Sergey Kandaurov # Date 1602503831 -3600 # Mon Oct 12 12:57:11 2020 +0100 # Branch quic # Node ID d791f11d1625d6f99d0d0c3272fd4c98d4816f21 # Parent d14e15c33548a4432b682b9bbb4b6ba8df82c0b3 QUIC: fixed ngx_http_upstream_init() much like HTTP/2 connections. diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -523,6 +523,13 @@ ngx_http_upstream_init(ngx_http_request_ } #endif +#if (NGX_HTTP_QUIC) + if (c->qs) { + ngx_http_upstream_init_request(r); + return; + } +#endif + if (c->read->timer_set) { ngx_del_timer(c->read); } -- Sergey Kandaurov From mdounin at mdounin.ru Mon Oct 12 20:14:37 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Oct 2020 23:14:37 +0300 Subject: Keepalived Connections Reset after reloading the configuration (HUP Signal) In-Reply-To: References: <20201007135428.GT1136@mdounin.ru> Message-ID: <20201012201437.GZ1136@mdounin.ru> Hello! On Thu, Oct 08, 2020 at 03:31:57AM -0400, tored wrote: > Thanks Maxim to taking the time to respond. > > > This feature wasn't removed in 5e6142609e48, but rather it was > > changed how things work: instead of doing a full scan over all > > connections on each event loop iteration, nginx now does it only > > once, and also makes sure no new idle connections are added after > > the shutdown signal. 
> > > > I don't fully understand what happens to non-idle keep-alive connections > after the shutdown signal is sent. > > If I understand correctly, non-idle keep-alive connections will continue to > serve requests after a graceful shutdown, until they are terminated by > "normal" events, such as: > * connection is closed/terminated by the client > * or by the server, e.g if (r->connection->requests >= > clcf->keepalive_requests) > * or if keepalive_timeout is meet > * any other event that would close the connection during normal operation No. Connections won't be allowed to go into the keepalive state after a graceful shutdown was initiated, see the following check in the ngx_http_finalize_connection() function: if (!ngx_terminate && !ngx_exiting && r->keepalive && clcf->keepalive_timeout > 0) { ngx_http_set_keepalive(r); return; } That is, additional requests will be only processed on a connection if keepalive is enabled and nginx worker is not shutting down. If nginx worker is shutting down, the connection will be closed. -- Maxim Dounin http://mdounin.ru/ From f.flueckiger at computech-rz.ch Tue Oct 13 13:08:57 2020 From: f.flueckiger at computech-rz.ch (=?iso-8859-1?Q?Fabian_Fl=FCckiger?=) Date: Tue, 13 Oct 2020 13:08:57 +0000 Subject: Malformed login packet Message-ID: <03fc48ab70ca458fbf8688a87fd3a212@computech-rz.ch> Hi all, I am using NGINX as a mailproxy and recently discovered, that wireshark detects IMAP-LOGIN messages sent by nginx as "Malformed". The message contains: Line: 3 LOGIN {18}\r\n (HEX: 0000 33 20 4c 4f 47 49 4e 20 7b 31 38 7d 0d 0a) Some imap servers react to this with "BAD UNKNOWN Command" and close the connection. Here the full communication between nginx and backend: 3 LOGIN {18} + go ahead {9} + go ahead 3 OK LOGIN completed 4 CAPABILITY BAD UNKNOWN Command any ideas? best regards Fabian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Tue Oct 13 14:06:13 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Oct 2020 17:06:13 +0300 Subject: Malformed login packet In-Reply-To: <03fc48ab70ca458fbf8688a87fd3a212@computech-rz.ch> References: <03fc48ab70ca458fbf8688a87fd3a212@computech-rz.ch> Message-ID: <20201013140613.GA1136@mdounin.ru> Hello! On Tue, Oct 13, 2020 at 01:08:57PM +0000, Fabian Flückiger wrote: > I am using NGINX as a mailproxy and recently discovered, that wireshark detects IMAP-LOGIN messages sent by nginx as "Malformed". > > The message contains: Line: 3 LOGIN {18}\r\n (HEX: 0000 33 20 4c 4f 47 49 4e 20 7b 31 38 7d 0d 0a) This certainly isn't malformed (though incomplete). Most likely, Wireshark fails to recognize IMAP string literal correctly. > Some imap servers react to this with "BAD UNKNOWN Command" and close the connection. Define "some imap servers". > Here the full communication between nginx and backend: > > 3 LOGIN {18} > > + go ahead > > {9} > > + go ahead > > > > 3 OK LOGIN completed > > 4 CAPABILITY > > > BAD UNKNOWN Command > > > any ideas? The error seems to be returned to the "4 CAPABILITY" command. This is not a command nginx sends, rather something it proxies from the client. Nevertheless, this seems to be incorrect behaviour of the backend server: the CAPABILITY command looks perfectly valid. Moreover, even for unrecognized commands correct response would be "* BAD ...", that is, an untagged response, prefixed with the token "*".
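For reference, the `{18}` token Maxim mentions is an RFC 3501 string literal: it announces how many octets follow once the server sends its `+` continuation. A small sketch of how such a command line is composed (the username is invented, chosen only so that it is 18 bytes long):

```python
def imap_literal(octets: bytes) -> bytes:
    # RFC 3501 string literal: "{<octet-count>}" CRLF announces that
    # exactly <octet-count> bytes follow, after the server's "+" go-ahead.
    return b"{%d}\r\n" % len(octets)

# Composing the first line of the exchange quoted above. The name is
# purely illustrative; any 18-byte login would produce {18}.
username = b"fabian@example.net"          # 18 octets
first_line = b"3 LOGIN " + imap_literal(username)
# first_line is b"3 LOGIN {18}\r\n" -- a complete, valid command line;
# the username octets themselves are sent only after "+ go ahead".
```

So the line Wireshark flagged is well-formed on its own; the rest of the command legitimately arrives in later segments.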
-- Maxim Dounin http://mdounin.ru/ From limit.usus at gmail.com Tue Oct 13 16:13:43 2020 From: limit.usus at gmail.com (Tomoya Kabe) Date: Wed, 14 Oct 2020 01:13:43 +0900 Subject: ProxyProtocol with SSL client verification failure does not log client's address Message-ID: Hello, I placed nginx behind an AWS NLB with the PROXY protocol enabled, and configured it to log the client's "real" IP: listen 443 ssl proxy_protocol; set_real_ip_from xxx.xxx.xxx.xxx; real_ip_header proxy_protocol; real_ip_recursive on; I also need to verify client certificates, so ssl_verify_client on; is in my config. With valid clients, i.e. clients with valid certificates, the log is as expected: the client's real IP is logged. However, the load balancer's address is logged when the client does not present a client certificate. I would expect nginx to log the real IP even if client verification fails, because the PROXY protocol has nothing to do with client verification. Is there anything I should check or fix in my configuration, or is this a bug in nginx? Note: * I'm using the nginx:1.19.3 docker image in an AWS Fargate service. * I enabled/disabled http2 in the listen directive and the result was the same. * I logged $remote_addr and $realip_remote_addr but these are the same value when client verification fails. -- Tomoya KABE Mail : limit.usus at gmail.com -------------- next part -------------- An HTML attachment was scrubbed...
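As background to the question above: the PROXY protocol header is a plain-text line the load balancer sends before any TLS bytes, so the original client address is in principle available before certificate verification even starts; whether nginx applies the realip substitution before logging a failed handshake is the separate question being reported. A toy parser of the version 1 header (illustrative only, not nginx's implementation):

```python
def parse_proxy_v1(header: bytes) -> dict:
    # PROXY protocol v1 is a single human-readable line sent ahead of
    # the TLS handshake:
    #   PROXY TCP4 <src-addr> <dst-addr> <src-port> <dst-port>\r\n
    parts = header.rstrip(b"\r\n").split(b" ")
    if len(parts) != 6 or parts[0] != b"PROXY" or parts[1] not in (b"TCP4", b"TCP6"):
        raise ValueError("not a PROXY protocol v1 header")
    return {
        "src_addr": parts[2].decode("ascii"),
        "dst_addr": parts[3].decode("ascii"),
        "src_port": int(parts[4]),
        "dst_port": int(parts[5]),
    }
```

The addresses below are documentation ranges, used only to show the shape of the header.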
URL: From nginx-forum at forum.nginx.org Wed Oct 14 09:22:17 2020 From: nginx-forum at forum.nginx.org (electrotwelve) Date: Wed, 14 Oct 2020 05:22:17 -0400 Subject: Not able to install nginx on AWS AMI Message-ID: <2dcf2ae912aa07bc3127dd8a0342eeb7.NginxMailingListEnglish@forum.nginx.org> Hi, I spun up an AWS AMI and followed this guide to install nginx: http://nginx.org/en/linux_packages.html#RHEL-CentOS However, when I try to install I get the following error: Loaded plugins: extras_suggestions, langpacks, priorities, update-motd amzn2-core | 3.7 kB 00:00:00 amzn2extra-docker | 3.0 kB 00:00:00 http://nginx.org/packages/centos/2/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. One of the configured repositories failed (nginx stable repo), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. Not sure why it's trying to access the centos/2 folder. I see that version 8 is the most recent one. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289734,289734#msg-289734 From thresh at nginx.com Wed Oct 14 09:38:06 2020 From: thresh at nginx.com (Konstantin Pavlov) Date: Wed, 14 Oct 2020 12:38:06 +0300 Subject: Not able to install nginx on AWS AMI In-Reply-To: <2dcf2ae912aa07bc3127dd8a0342eeb7.NginxMailingListEnglish@forum.nginx.org> References: <2dcf2ae912aa07bc3127dd8a0342eeb7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, It seems you've launched Amazon Linux 2 instead of a CentOS 8 AMI. We don't provide nginx packages for that operating system on nginx.org.
14.10.2020 12:22, electrotwelve wrote: > Hi, I spun up an AWS AMI and followed this guide to install nginx: > http://nginx.org/en/linux_packages.html#RHEL-CentOS > > However, when I try to install I get the following error: > > Loaded plugins: extras_suggestions, langpacks, priorities, update-motd > amzn2-core > > | 3.7 kB 00:00:00 > amzn2extra-docker > > | 3.0 kB 00:00:00 > http://nginx.org/packages/centos/2/x86_64/repodata/repomd.xml: [Errno 14] > HTTP Error 404 - Not Found > Trying other mirror. > -- Konstantin Pavlov https://www.nginx.com/ From nginx-forum at forum.nginx.org Wed Oct 14 11:15:10 2020 From: nginx-forum at forum.nginx.org (electrotwelve) Date: Wed, 14 Oct 2020 07:15:10 -0400 Subject: Not able to install nginx on AWS AMI In-Reply-To: References: Message-ID: <799faab3b1433fe72a61c0f7db41faed.NginxMailingListEnglish@forum.nginx.org> There doesn't seem to be a CentOS 8 AMI but there is a Red Hat Enterprise Linux 8 AMI. Would it work with that? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289734,289736#msg-289736 From nmilas at noa.gr Wed Oct 14 14:37:58 2020 From: nmilas at noa.gr (Nikolaos Milas) Date: Wed, 14 Oct 2020 17:37:58 +0300 Subject: Experiences with pagespeed repo? Message-ID: Hello, I would like to ask for your opinions, experiences and advice on using the pagespeed repo on CentOS (I am particularly interested on CentOS 8) in production servers. Would you opt to install nginx directly from pagespeed repo rather than from the project nginx repo? (I would think that, if pagespeed repo is to be used, nginx main package and modules should be all installed from there, avoiding a mix.) I would appreciate your contribution! Thanks in advance. Cheers, Nick From 0815 at lenhardt.in Thu Oct 15 16:25:48 2020 From: 0815 at lenhardt.in (0815 at lenhardt.in) Date: Thu, 15 Oct 2020 18:25:48 +0200 Subject: fastcgi_pass and caching with request rewrite Message-ID: Hi! This is the first time I am doing rewrites with a fastcgi backend (php-fpm). 
This is my fpm location which is working fine on a ubuntu 18.04 VM: # fpm-config location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/run/php/php-fpm-typo3.sock; fastcgi_param HTTPS 'on'; fastcgi_read_timeout 240; # cache settings fastcgi_cache html_cache; fastcgi_cache_valid 200 404 60m; fastcgi_cache_bypass $no_cache_allowed; fastcgi_cache_bypass $cookie_be_typo_user; fastcgi_cache_bypass $eID_search_no_cache; } in the http section: http { ... ... fastcgi_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=html_cache:10m max_size=5120m inactive=60m use_temp_path=off; fastcgi_cache_key "$scheme$request_method$host$request_uri"; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; ... } After adding this location in the fpm vhost the caching is not working as expected and delivers the same page for each request: location ~ ^/hotel/([0-9]+)/.* { rewrite ^/hotel/([0-9]+)/.* /index.php?id=3238&user_kuoniibefe_pi3[iff]=$1 break; include snippets/fastcgi-php.conf; fastcgi_param REQUEST_URI $fastcgi_script_name?$query_string; fastcgi_pass unix:/run/php/php-fpm-typo3.sock; fastcgi_param HTTPS 'on'; fastcgi_read_timeout 240; # cache settings include /etc/nginx/conf.d/fcgi_cache_settings.inc; } The location block is used to rewrite URLs like /hotel/1234/bla?x=1&... to /index.php?h=1234&..... 
but in this case the cache key is missing all the query parameters: 2020/10/15 17:27:51 [debug] 825#825: *1 http2 request line: "GET /hotel/3/bl-zzz-xxxccccresort-spa-antigua/?ddate=2020-11-01&rdate=2021-04-30&adult=2&aid=3&dur=13,15&ibe=package HTTP/2.0" 2020/10/15 17:27:51 [debug] 825#825: *1 http cache key: "httpsGETdev1.restplatzboerse.at" with other URLs the cache key is correct: 2020/10/15 17:27:30 [debug] 32644#32644: *761 http2 request line: "GET /index.php?id=3238&user_kuoniibefe_pi3[iff]=3&ddate=2020-11-01&rdate=2021-04-30&adult=2&aid=3&dur=13,15&ibe=package HTTP/2.0" 2020/10/15 17:27:30 [debug] 32644#32644: *761 http cache key: "httpsGETdev1.restplatzboerse.at/index.php?id=3238&user_kuoniibefe_pi3[iff]=3&ddate=2020-11-01&rdate=2021-04-30&adult=2&aid=3&dur=13,15&ibe=package" My question is: why is the cache key incomplete for requests that are rewritten? How can I get the correct cache key after rewriting the /hotel/1234/bla request? The fastcgi_cache_key parameter is allowed only once in the http block, so I can not use another fastcgi_cache_key parameter in the rewrite location block. br, Marco From 0815 at lenhardt.in Thu Oct 15 16:48:47 2020 From: 0815 at lenhardt.in (0815 at lenhardt.in) Date: Thu, 15 Oct 2020 18:48:47 +0200 Subject: fastcgi_pass and caching with request rewrite In-Reply-To: References: Message-ID: On 15.10.20 18:25, 0815 at lenhardt.in wrote: > [...] > My question is: why is the cache key incomplete for requests that are > rewritten? How can I get the correct cache key after rewriting the > /hotel/1234/bla request? I found an include file in my vhost that removes some unwanted query parameters (so they are not part of the cache key). That include was also in the /hotel rewrite location and removed all the query parameters. So I removed it, and the cache key now looks ok. set $c_uri $args; # e.g.
"param1=true&param4=false" # get url path into variable for cache_key set $u_path $request_uri; if ($request_uri ~ (.*)(\?.*)) { set $u_path $1; } # remove unwanted get params if ($c_uri ~ (.*)(?:&|^)utm_source=[^&]*(.*)) { set $c_uri $1$2; } if ($c_uri ~ (.*)(?:&|^)utm_term=[^&]*(.*)) { set $c_uri $1$2; } if ($c_uri ~ (.*)(?:&|^)utm_campaign=[^&]*(.*)) { set $c_uri $1$2; } set $c_uri $is_args$c_uri; if ($c_uri ~ ^\?$) { set $c_uri ""; } # finally we have stripped out the utm params and have a clean cache key set $c_uri $u_path$c_uri; # DEBUG (disable in prd env) #add_header X-RURI $request_uri; #add_header X-CACHE-KEY $c_uri; #add_header X-U-Path $u_path; # set cleaned $c_uri as cache_key fastcgi_cache_key "$scheme$request_method$host$c_uri"; 2020/10/15 18:42:24 [debug] 10179#10179: *66 http2 request line: "GET /hotel/3/bl-zzz-xxxccccccresort-spa-antigua/?ddate=2020-11-01&rdate=2021-04-30&adult=2&aid=3&dur=13,15&ibe=package HTTP/2.0" 2020/10/15 18:42:24 [debug] 10179#10179: *66 http cache key: "httpsGETdev1.restplatzboerse.at/hotel/3/bl-zzz-xxxccccccresort-spa-antigua/?ddate=2020-11-01&rdate=2021-04-30&adult=2&aid=3&dur=13,15&ibe=package" br, Marco > > br, > Marco > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From xserverlinux at gmail.com Fri Oct 16 02:54:44 2020 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Thu, 15 Oct 2020 20:54:44 -0600 Subject: nginx module mysql o other Message-ID: Hi List, I am developing an application in MySQL + PHP where I need to use the geo module / directive to block by IP or network. I'm not sure if there is any way that nginx can integrate with MySQL, making some kind of connection via an include-mysql.conf. The idea is that users can block an IP or a network segment through this application to protect it; that is easier from an interface than modifying the nginx config files in a shell on the server.
I hope that is clear in this email. -- rickygm http://gnuforever.homelinux.com From xserverlinux at gmail.com Sat Oct 17 19:15:58 2020 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Sat, 17 Oct 2020 13:15:58 -0600 Subject: nginx module mysql o other In-Reply-To: References: Message-ID: Hi, any idea? On Thu, Oct 15, 2020 at 8:54 PM, Rick Gutierrez < xserverlinux at gmail.com> wrote: > Hi List, I am developing an application in Mysql + php where I need to > use the geo module / directive to be able to block by ip or networks, > but I'm not sure if there is any way that nginx can integrate with > mysql making some kind of connection with some include-mysql.conf, the > idea is that users can do some blocking through this application even > IP or network segment and can protect the application, it is easier > from an interface before modifying the nginx files in the bash of the > server. > > > I hope that is clear in this email. > > -- > rickygm > > http://gnuforever.homelinux.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Sat Oct 17 19:43:03 2020 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Sat, 17 Oct 2020 22:43:03 +0300 Subject: nginx module mysql o other In-Reply-To: References: Message-ID: <20201017194303.GA55720@FreeBSD.org.ru> Hi there, here's a third-party mysql module for nginx, https://github.com/openresty/drizzle-nginx-module It's possible to compile it as a dynamic module for both versions of nginx.
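One way to get the effect Rick describes without a direct nginx-to-MySQL link is to have the application itself materialize the blocklist as an nginx include file and reload nginx. A sketch of the generation step; the variable name `$blocked`, the include path, and the table layout are all assumptions:

```python
def render_geo_include(blocked_networks):
    # Emit an nginx "geo" block mapping each blocked CIDR (or single
    # address) to 1 and everything else to 0.
    lines = ["geo $blocked {", "    default 0;"]
    for network in blocked_networks:
        lines.append("    %s 1;" % network)
    lines.append("}")
    return "\n".join(lines) + "\n"

# The PHP/MySQL application would fetch rows from its blocklist table,
# write this text to an include file (e.g. conf.d/blocked.conf, a
# made-up path), and run `nginx -s reload`; the server block then only
# needs: include conf.d/blocked.conf; and if ($blocked) { return 403; }
```

This keeps nginx itself free of database dependencies: the only moving part at request time is a static `geo` lookup, and the database is consulted only when the blocklist changes.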
-- Sergey On Thu, Oct 15, 2020 at 08:54:44PM -0600, Rick Gutierrez wrote: > Hi List, I am developing an application in Mysql + php where I need to > use the geo module / directive to be able to block by ip or networks, > but I'm not sure if there is any way that nginx can integrate with > mysql making some kind of connection with some include-mysql.conf, the > idea is that users can do some blocking through this application even > IP or network segment and can protect the application, it is easier > from an interface before modifying the nginx files in the bash of the > server. > > > I hope that is clear in this email. > > -- > rickygm > > http://gnuforever.homelinux.com > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From xserverlinux at gmail.com Sat Oct 17 20:09:29 2020 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Sat, 17 Oct 2020 14:09:29 -0600 Subject: nginx module mysql o other In-Reply-To: <20201017194303.GA55720@FreeBSD.org.ru> References: <20201017194303.GA55720@FreeBSD.org.ru> Message-ID: On Sat, Oct 17, 2020 at 1:43 PM, Sergey A. Osokin () wrote: > > Hi there, > > here's a third-party mysql module for nginx, > https://github.com/openresty/drizzle-nginx-module > It's possible to compile it as a dynamic module for both versions > of nginx. > OK, I'm going to read it and see how it could help me, or try another language that nginx speaks and make the connection with another DB -- rickygm http://gnuforever.homelinux.com From nginx-forum at forum.nginx.org Sat Oct 17 20:41:08 2020 From: nginx-forum at forum.nginx.org (jriker1) Date: Sat, 17 Oct 2020 16:41:08 -0400 Subject: SSL routines:tls_process_client_hello:version too low In-Reply-To: <20201005113906.GO30691@daoine.org> References: <20201005113906.GO30691@daoine.org> Message-ID: <498ab5fc3e84b25011fc5adf833f11c9.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply.
Only thing I can do to get it back where it was, which I honestly really can't do, is remove whatever patches on my Windows servers have installed since 8 months ago when it worked, and try and revert to a prior version of NGINX which I'm really not sure what version I was on. Other than that and replacing the certs which show valid all around, I didn't change any physical settings. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289572,289751#msg-289751 From lukasz at tasz.eu Mon Oct 19 07:32:54 2020 From: lukasz at tasz.eu (=?UTF-8?B?xYF1a2FzeiBUYXN6?=) Date: Mon, 19 Oct 2020 09:32:54 +0200 Subject: keepalive seems not to work In-Reply-To: References: Message-ID: Hi all, I fixed my issue with a simple application. Solution could be named "not named upstream support". idea is that there is proxy_pass to upstream, and upstream is forwarding request to my local service (upstream server 127.0.0.1:1981) Under localhost 127.0.0.1:1981 my simple application is making requests to any server, and keeps connection alive. Control of all connection is under nginx - keepalive #value; keepalive_requests... app looks like: > #!/bin/env python3 > import sanic > import aiohttp > app = sanic.Sanic("gadula") > app.config.REQUEST_TIMEOUT = 60 > app.config.KEEP_ALIVE_TIMEOUT = 60 > aiohttp_timeout = aiohttp.ClientTimeout( > total=None, > connect=None, > sock_connect=10.0, > sock_read=120.0 > ) > @app.route("/") > async def router(request, path): > try: > async with app.session.get(request.url, raise_for_status=True, > timeout=aiohttp_timeout) as r: > # TODO: Consider using Sanic's FileStreaming feature. 
> return sanic.response.raw(await r.read()) > except aiohttp.ClientResponseError as e: > return sanic.response.json({ > 'url': request.url, > 'error': str(e), > }, status=e.status) > except Exception as e: > return sanic.response.json({ > 'url': request.url, > 'error': str(e), > }, status=500) > @app.listener('before_server_start') > async def setup(app, loop): > app.session = aiohttp.ClientSession() > @app.listener('after_server_stop') > async def setup(app, loop): > await app.session.close() > app.run(host="127.0.0.1", port=1981, debug=False, access_log=False) > app is written in asynchronous mode, so can serve many nginx workers and generally it does the job, now I have nginx configuration which keeps konnection from *any* clients to *any* server. question, why it cannot be done without any intermediate application?? any comments, suggestions are welcome!! regards ?ukasz Tasz RTKW czw., 8 pa? 2020 o 12:16 ?ukasz Tasz napisa?(a): > Hi, > sucha setup is on 1stproxy, there I have upstream defined to second proxy > and it works - connection is reused. > problem is that it is chain of forward proxy with caching, and > your-server.com including port is different - service:port is dynamic. > I'm asking for it, because with firefox (set global proxy to my second > proxy) I go to some blabla.your-server.com connection is kept > in a meaning that on server side I see only one established connection, > and all of requests that I make from firefox are made over this keept > connection. Problem starts when I will use chain of proxies (all nginx). > > regards > ?ukasz Tasz > RTKW > > > czw., 8 pa? 2020 o 11:43 Marcin Wanat napisa?(a): > >> >> On Thu, Oct 8, 2020 at 11:36 AM ?ukasz Tasz wrote: >> >>> Hi all, >>> >>> can I expect that proxy_pass will keep connection to remote server that >>> is being proxied? 
>>> >>> when I'm using setup client -> proxy -> server it looks to work >>> but when I'm using: >>> client -> 1stProxy_upstream -> proxy -> server >>> connection between 1stProxy and proxy is being kept thanks to keepalive >>> 100, but proxy makes new connection every new request, very simple setup: >>> >>> http { >>> server { >>> listen 8080; >>> location / { >>> keepalive_disable none; >>> keepalive_requests 1000; >>> keepalive_timeout 300s; >>> proxy_cache proxy-cache; >>> proxy_cache_valid 200 302 301 30m; >>> proxy_cache_valid any 1m; >>> proxy_cache_key $scheme://$http_host$request_uri; >>> proxy_pass $scheme://$http_host$request_uri; >>> proxy_http_version 1.1; >>> proxy_set_header Connection ""; >>> } >>> } >>> } >>> I would expect that when client connects proxy and it works then it >>> should also works when proxy connects upstream proxy.... >>> >> >> For keepalive in upstream proxy you shoud use upstream configuration >> block and configure keepalive in it: >> >> upstream backend { >> zone backend 1m; >> server your-server.com; >> keepalive 128; >> } >> >> server { >> listen 8080; >> location / { >> proxy_cache proxy-cache; >> proxy_cache_valid 200 302 301 30m; >> proxy_cache_valid any 1m; >> proxy_cache_key $scheme://$http_host$request_uri; >> proxy_pass $scheme://backend$request_uri; >> proxy_http_version 1.1; >> proxy_set_header Connection ""; >> } >> } >> >> -- >> Marcin Wanat >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sca at andreasschulze.de Mon Oct 19 09:23:23 2020 From: sca at andreasschulze.de (A. 
Schulze) Date: Mon, 19 Oct 2020 11:23:23 +0200 Subject: remote_addr variable Message-ID: <27e4d63c-3da9-8425-f71f-e56960322091@andreasschulze.de> Hello, I like to display (using ssi) if a client's remote address is ipv4 or ipv6 Is there a variable available that indicate the current transport protocol? Any hint is appreciated! Thanks, Andreas From francis at daoine.org Mon Oct 19 09:31:13 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 19 Oct 2020 10:31:13 +0100 Subject: Auth_request and multiple cookies from the authentication server In-Reply-To: References: Message-ID: <20201019093113.GS30691@daoine.org> On Thu, Sep 24, 2020 at 09:01:38AM +0300, Hannu Shemeikka wrote: Hi there, I'm afraid I don't have a great answer for you... > It seems that the variable $upstream_http_set_cookie only contains the > first cookie and not all cookies set by the upstream server. > > Is this variable's behavior feature or is it a bug? Is there a > workaround for this? I think that that is "the current implementation", and has been that way for long enough that it is unlikely to change in the near future. A closely-related issue is at https://trac.nginx.org/nginx/ticket/1316 -- basically, the same thing but for $http_X headers/variables. There seems to be a lua-based function to get the request http headers; I do not know if there is something similar to get the upstream response ones. So, for workarounds -- I do not think it is possible in pure-nginx config. If you are willing and able to change the upstream server to return the Set-Cookie headers as a single header, maybe that would work? But it does appear to be a weakness in current nginx. It appears to have been that way for a long time, and therefore has not been important enough to any one person to design and implement a fix. Perhaps everyone who came across it, adapted their upstreams; or switched to using something other than nginx. 
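To illustrate the "single header" workaround mentioned above: the authentication server could fold all of its cookies into one custom response header, which nginx can then read as a single variable (e.g. $upstream_http_x_auth_cookies). A minimal sketch of the folding step in Python; the delimiter, header idea, and function names are all invented for illustration, not an nginx or HTTP convention:

```python
def fold_set_cookie(cookies):
    """Fold several Set-Cookie values into one custom header value.

    A comma is unsafe as a delimiter because Expires dates contain
    commas, so use a token that cannot appear in cookie data.
    """
    delim = "\x1f"  # ASCII unit separator
    return delim.join(cookies)


def unfold_set_cookie(value):
    """Split a folded header value back into individual cookie strings."""
    return value.split("\x1f")


cookies = ["sid=abc123; Path=/; HttpOnly", "theme=dark; Path=/"]
folded = fold_set_cookie(cookies)
assert unfold_set_cookie(folded) == cookies
```

Whatever finally emits the Set-Cookie headers to the client has to unfold the value again, which is the awkward part of this workaround and why changing the upstream to send one combined header may be simpler.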
All the best, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Oct 19 11:24:40 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Mon, 19 Oct 2020 07:24:40 -0400 Subject: how to enable non root user to execute nginx reload Message-ID: A non-root process needs to signal reload to the nginx master (running as root) without sudo. I've tried using setcap and setpriv with CAP_KILL; neither works. # getcap nginx/sbin/nginx nginx/sbin/nginx = cap_kill+ip #su user01 -s /bin/sh -c 'nginx/sbin/nginx -s reload' nginx: [alert] kill(68, 1) failed (1: Operation not permitted) #setpriv --inh-caps +cap_5 --ambient-caps +cap_5 su user001 -s /bin/sh -c 'nginx/sbin/nginx -s reload' nginx: [alert] kill(68, 1) failed (1: Operation not permitted) I don't know if this is specific to nginx or if I misused the Linux capability. Looking forward to your help. BR, Allen Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289755,289755#msg-289755 From iippolitov at nginx.com Mon Oct 19 14:10:09 2020 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Mon, 19 Oct 2020 15:10:09 +0100 Subject: how to enable non root user to execute nginx reload In-Reply-To: References: Message-ID: <027a5b49-9f54-1046-47f9-90924ab06467@nginx.com> Hello Allen. Capabilities for a binary without the ambient flag won't work for a non-root user, if I understand the manuals correctly. So it looks like you are on the way to success with '--ambient-caps'. It looks like 'su' drops all capabilities, though. You may want to have a look at libpam_cap, which may solve this problem for you. Other than this the approach should work. Best regards, Igor. On 19.10.2020 12:24, allenhe wrote: > A non root process needs to signal reload to nginx master (as root) without > sudo > > I've tried using setcap and setpriv with CAP_KILL, both not work.
> > > # getcap nginx/sbin/nginx > nginx/sbin/nginx = cap_kill+ip > #su user01 -s /bin/sh -c 'nginx/sbin/nginx -s reload' > nginx: [alert] kill(68, 1) failed (1: Operation not permitted) > > > #setpriv --inh-caps +cap_5 --ambient-caps +cap_5 su user001 -s /bin/sh -c > 'nginx/sbin/nginx -s reload' > nginx: [alert] kill(68, 1) failed (1: Operation not permitted) > > > I don't konw if this is specifc to nginx only or I mis used the linux > capability? > looking foward for the help > > > BR, > Allen > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289755,289755#msg-289755 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From alex at grande.coffee Wed Oct 21 01:39:35 2020 From: alex at grande.coffee (Alexander Huynh) Date: Wed, 21 Oct 2020 01:39:35 +0000 Subject: [Solved] IPv6 connectivity issue to `nginx.org` due to tunnel MTU Message-ID: <204938CD-7BED-41E2-82BE-882522801114@grande.coffee> Hello, I think this is a repeat of "packages.nginx.org IPv6 SSL is broken" last month [0]. I solved the problem for myself, so I'm posting here in hopes it can help others down the line. I was trying to connect to `https://nginx.org/`, and I found the website hung before I was impatient enough to give up browsing to it [3]. I did a bit of digging and found that it occurs only on IPv6. My setup is similar to Sergio's: * no native IPv6, but rather via an HE.net tunnel on tunnelbroker.net * tunnel MTU set to 1480 on tunnelbroker.net Additionally, I have this relevant setting on my EdgeRouter, matching the remote end on HE: * # show interfaces tunnel tun0 mtu ? mtu 1480 After getting primed on MTU and MSS, I found some documentation [1] on how to clamp MSS on my router. 
I tried and nothing changed, so I fiddled around on my local machine and found that locally setting the MTU to 1480 worked, via macOS's `networksetup` commands: * original local MTU is 1500 according to: networksetup -getMTU en0 * [3] suddenly worked after executing: networksetup -setMTU en0 1480 * [3] stopped working after executing: networksetup -setMTU en0 1481 More searching yielded that there are two MSS clamping options on my EdgeRouter: one for IPv4 and one for IPv6 [2]. Deleting the erroneous v4 commands and adding the following TCP MSS clamping options directed the router to rewrite the MSS field, allowing the subsequent smaller packets to fit within my tunnel: * set firewall options mss-clamp6 mss 1420 * set firewall options mss-clamp6 interface-type all The 1420 is calculated as: the tunnel MTU of 1480 bytes, minus 40 bytes for the IPv6 header, minus 20 bytes for the TCP header. Things started working again from there, though with my local interface MTU incorrectly set to 1500, I've only solved IPv6 TCP. Anywho, I do have to thank the nginx website for strictly adhering to standards. If it weren't for trying to maximize MTU, this problem wouldn't have been exposed on my end. Hopefully this information helps someone further down the line, if they run into a similar issue. Thanks for reading! -- Alex [0] http://mailman.nginx.org/pipermail/nginx/2020-September/059964.html [1] https://community.ui.com/questions/EdgeRouter-and-MTU-Setting/54051cd0-38fc-4de9-a499-32af37a851b3 [2] https://docs.vyos.io/en/latest/routing/mss-clamp.html [3] % date; curl -svJLo /dev/null --connect-to '::[2a05:d014:edb:5704::6]:443' 'https://nginx.org'; date Tue Oct 20 20:03:22 EDT 2020 * Connecting to hostname: 2a05:d014:edb:5704::6 * Connecting to port: 443 * Trying 2a05:d014:edb:5704::6...
* TCP_NODELAY set * Connected to 2a05:d014:edb:5704::6 (2a05:d014:edb:5704::6) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/cert.pem CApath: none * TLSv1.2 (OUT), TLS handshake, Client hello (1): } [223 bytes data] ### hangs here * LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to nginx.org:443 * Closing connection 0 Tue Oct 20 20:06:24 EDT 2020 From alex at grande.coffee Wed Oct 21 03:23:11 2020 From: alex at grande.coffee (Alexander Huynh) Date: Wed, 21 Oct 2020 03:23:11 +0000 Subject: [Solved] IPv6 connectivity issue to `nginx.org` due to tunnel MTU In-Reply-To: <204938CD-7BED-41E2-82BE-882522801114@grande.coffee> References: <204938CD-7BED-41E2-82BE-882522801114@grande.coffee> Message-ID: <49BF7626-3221-431B-95EB-53FFF60B1976@grande.coffee> Quick follow-up: after more digging I've found that I can solve it another way: using router advertisements to set the MTU. Here's a tcpdump showing the problem, note how the MSS is 1440 bidirectionally: IP6 (flowlabel 0x47dd6, hlim 64, next-header TCP (6) payload length: 44) 2001:db8::1.55652 > 2a05:d014:edb:5704::6.443: Flags [S], cksum 0xffc3 (correct), seq 3046951728, win 65535, options [mss 1440,nop,wscale 6,nop,nop,TS val 1569227211 ecr 0,sackOK,eol], length 0 IP6 (flowlabel 0x68d1d, hlim 41, next-header TCP (6) payload length: 40) 2a05:d014:edb:5704::6.443 > 2001:db8::1.55652: Flags [S.], cksum 0x1350 (correct), seq 2416054786, ack 3046951729, win 8192, options [mss 1440,sackOK,TS val 3541526609 ecr 1569227211,nop,wscale 0], length 0 And here's using MSS clamping to have the router rewrite MSS fields it sees, resulting in a smaller return MSS: IP6 (flowlabel 0xf55e4, hlim 64, next-header TCP (6) payload length: 44) 2001:db8::1.55198 > 2a05:d014:edb:5704::6.443: Flags [S], cksum 0xe188 (correct), seq 2163962289, win 65535, options [mss 1440,nop,wscale 6,nop,nop,TS val 1569009648 ecr 0,sackOK,eol], length 0 IP6 (flowlabel 0x18b17, 
hlim 41, next-header TCP (6) payload length: 40) 2a05:d014:edb:5704::6.443 > 2001:db8::1.55198: Flags [S.], cksum 0x3197 (correct), seq 3274704991, ack 2163962290, win 8192, options [mss 1420,sackOK,TS val 3541303899 ecr 1569009648,nop,wscale 0], length 0 By instead using router advertisements, a co-operating OS doesn't have to rely on the router rewrites, and can just do the right thing? from the start: IP6 (flowlabel 0x8ce90, hlim 64, next-header TCP (6) payload length: 44) 2001:db8::1.55859 > 2a05:d014:edb:5704::6.443: Flags [S], cksum 0xf73a (correct), seq 3178375029, win 65535, options [mss 1420,nop,wscale 6,nop,nop,TS val 1569335937 ecr 0,sackOK,eol], length 0 IP6 (flowlabel 0x8e947, hlim 41, next-header TCP (6) payload length: 40) 2a05:d014:edb:5704::6.443 > 2001:db8::1.55859: Flags [S.], cksum 0x314c (correct), seq 2240678375, ack 3178375030, win 8192, options [mss 1440,sackOK,TS val 2237309444 ecr 1569335937,nop,wscale 0], length 0 Note the initial lower MSS of 1420. Given the two options, I think using router advertisements is better than MSS clamping, for multiple reasons: * the problem packets aren't even generated at all, vs. being constructed and rewritten * MTU applies to more than just TCP, resolving potential UDP / ICMP / etc. issues * RAs are more granular, and is more intuitive to apply to different interfaces, including VLANs * fewer numbers need to be memorized, i.e. translating MTU to MSS If you have an EdgeRouter and wish to use RAs, then replace the following MSS clamping commands: > * set firewall options mss-clamp6 mss 1420 > * set firewall options mss-clamp6 interface-type all With the following RA MTU commands: * set interfaces switch switch0 ipv6 router-advert link-mtu 1480 Replace `switch switch0` with `ethernet ethX` for an ethernet interface. Thanks again for reading, and I hope this helps someone down the line. -- Alex From vbart at nginx.com Wed Oct 21 11:18:54 2020 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Wed, 21 Oct 2020 14:18:54 +0300 Subject: NGINX Unit team is hiring C devs Message-ID: <2743688.e9J7NaK4W3@vbart-laptop> We have tremendous ambition to improve NGINX Unit, our new open-source server and reverse proxy with a dynamic API, so we're looking for talented C developers willing to join our global team at F5. US: https://f5.recsolu.com/jobs/3XYNuNfpgUp3S3RE1K_hTA Europe: https://ffive.wd5.myworkdayjobs.com/NGINX/job/Cork-NGINX/Software-Engineer_RP1018735 More information about the project: https://unit.nginx.org/ wbr, Valentin V. Bartenev From rejaine at bhz.jamef.com.br Wed Oct 21 18:50:03 2020 From: rejaine at bhz.jamef.com.br (Rejaine Silveira Monteiro) Date: Wed, 21 Oct 2020 15:50:03 -0300 Subject: Deny all location ... except Message-ID: Hi list.. I want to deny all access from external network to /prod and /hml locations, except from some arguments like "?TEST" or "=TEST" Examples: https:/domain.com/prod/* (allow to localnet, deny to external users), but... https:/domain.com/prod/INDEX.apw?op=01&SERVICE=TEST (allo to all) https:/domain.com/prod/INDEX.apw?TEST=123 (allow to all) https:/domain.com/prod/something/TEST/index.html (allow to all) how to do this? I tried to use something like " if ($args = TEST) { allow all;}", but Nginx gives error " directive is not allowed here" location /prod { allow localnet; deny all; proxy_pass http://web1.domain:8080/prod; } location /hml { allow localnet; deny all proxy_pass http://web1.domain:8081/hml; } I appreciate any help -- *Esta mensagem pode conter informa??es confidenciais ou privilegiadas, sendo seu sigilo protegido por lei. Se voc? n?o for o destinat?rio ou a pessoa autorizada a receber esta mensagem, n?o pode usar, copiar ou divulgar as informa??es nela contidas ou tomar qualquer a??o baseada nessas informa??es. Se voc? recebeu esta mensagem por engano, por favor avise imediatamente ao remetente, respondendo o e-mail e em seguida apague-o. 
Agradecemos sua coopera??o.* From ghkgupta at gmail.com Thu Oct 22 12:33:26 2020 From: ghkgupta at gmail.com (Hari Kumar Gupta) Date: Thu, 22 Oct 2020 14:33:26 +0200 Subject: Provide Grafana dashboard display for Nginx upstream servers health check! In-Reply-To: References: Message-ID: > Hi, > Thank you for your help in advance. > > I was very new to Nginx but I learnt and configured many upstream and > virtual servers. The learning was adventure and really interesting as I > move on. > > I had a thought to show upstream health checks on Grafana dashboard but > unfortunately, I could not find any leads or documentation that could help > me. How Grafana can be integrated with Nginx for monitoring health check? > > What I was planning to do, > > The configured upstream servers return different formats of data like > JSON, XML, simple text and so on. My requirement is that, a job that runs > on scheduled interval, fires a configured request to the upstream server > (each upstream server has predefined request in properties file or some > where), validate the output and display the status as health check on > Grafana dashboard. > > Please help and advise > BR Hari -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Thu Oct 22 13:16:04 2020 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 22 Oct 2020 16:16:04 +0300 Subject: Provide Grafana dashboard display for Nginx upstream servers health check! In-Reply-To: References: Message-ID: <20201022131604.GD55720@FreeBSD.org.ru> Hi Hari, not to much but several articles and blog posts are available in the Internet, and here is the one of them: https://medium.com/@shevtsovav/ready-for-scraping-nginx-metrics-nginx-vts-exporter-prometheus-grafana-26c14816ae7c Here's the link on third-party vts module on GH, https://github.com/vozlt/nginx-module-vts Hope that helps a lot. -- Sergey A. 
Osokin On Thu, Oct 22, 2020 at 02:33:26PM +0200, Hari Kumar Gupta wrote: > Hi, > Thank you for your help in advance. > > I was very new to Nginx but I learnt and configured many upstream and > virtual servers. The learning was adventure and really interesting as I > move on. > > I had a thought to show upstream health checks on Grafana dashboard but > unfortunately, I could not find any leads or documentation that could help > me. How Grafana can be integrated with Nginx for monitoring health check? > > What I was planning to do, > > The configured upstream servers return different formats of data like > JSON, XML, simple text and so on. My requirement is that, a job that runs > on scheduled interval, fires a configured request to the upstream server > (each upstream server has predefined request in properties file or some > where), validate the output and display the status as health check on > Grafana dashboard. > > Please help and advise > BR Hari From rewindblu79 at gmail.com Fri Oct 23 12:04:03 2020 From: rewindblu79 at gmail.com (FlashBlog) Date: Fri, 23 Oct 2020 14:04:03 +0200 Subject: support Message-ID: i need to configure reverse proxy for an iptv list, can anyone help me? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghkgupta at gmail.com Fri Oct 23 14:11:45 2020 From: ghkgupta at gmail.com (Hari Kumar Gupta) Date: Fri, 23 Oct 2020 16:11:45 +0200 Subject: Adding nginx_vts_module to existing nginx on Windows 10 platform! Message-ID: Hi, I already have installed and running successfully (with well configured upstream and virtual servers) Nginx 1.11.11.1 Lion version on my windows 10 machine. Can you tell me how to add nginx_vts_module to Nginx? Referred to some articles that it may not be possible to add modules on existing Nginx running setup. Br, *Hari* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Oct 23 17:22:48 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Oct 2020 20:22:48 +0300 Subject: Adding nginx_vts_module to existing nginx on Windows 10 platform! In-Reply-To: References: Message-ID: <20201023172248.GE1136@mdounin.ru> Hello! On Fri, Oct 23, 2020 at 04:11:45PM +0200, Hari Kumar Gupta wrote: > I already have installed and running successfully (with well configured > upstream and virtual servers) Nginx 1.11.11.1 Lion version on my windows 10 > machine. > > Can you tell me how to add nginx_vts_module to Nginx? > > Referred to some articles that it may not be possible to add modules on > existing Nginx running setup. Right now on Windows it is only possible to compile modules statically, so you have to recompile nginx with the module in question to add it. See here for some basic instructions on how to compile nginx for Windows: http://nginx.org/en/docs/howto_build_on_win32.html -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sun Oct 25 08:24:48 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sun, 25 Oct 2020 04:24:48 -0400 Subject: Adding nginx_vts_module to existing nginx on Windows 10 platform! In-Reply-To: References: Message-ID: <22f03d16172ea5245a01981f62c35796.NginxMailingListEnglish@forum.nginx.org> Please note that Nginx 1.11.11.1 Lion (nearly 4 years old) is not a nginx product, either compile your own (and spend time to fix bugs) or upgrade to nginx 1.19.3.1 Unicorn. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289793,289796#msg-289796 From francis at daoine.org Sun Oct 25 11:20:23 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 25 Oct 2020 11:20:23 +0000 Subject: remote_addr variable In-Reply-To: <27e4d63c-3da9-8425-f71f-e56960322091@andreasschulze.de> References: <27e4d63c-3da9-8425-f71f-e56960322091@andreasschulze.de> Message-ID: <20201025112023.GW30691@daoine.org> On Mon, Oct 19, 2020 at 11:23:23AM +0200, A. 
Schulze wrote: Hi there, > I like to display (using ssi) if a client's remote address is ipv4 or ipv6 > Is there a variable available that indicate the current transport protocol? I'm not aware of a ready-made variable for this; but you can probably use "map" to make your own. For example -- if you are happy to say that when $remote_addr includes a : it is IPv6, otherwise it is IPv4; then at http{} level you could do something like map $remote_addr $this_transport_is { ~: IPv6; default IPv4; } and then use $this_transport_is where you want it. (Note: I have tested this with return 200 "Transport: $this_transport_is\n"; but I have not tried ssi.) > Any hint is appreciated! Note - you may prefer to say "if $remote_addr includes a dot, it is IPv4, else it is IPv6. And you may or may not care about listening on unix domain sockets. Adjust the "map" to match whatever definition you like. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Oct 26 12:33:00 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 26 Oct 2020 12:33:00 +0000 Subject: support In-Reply-To: References: Message-ID: <20201026123300.GX30691@daoine.org> On Fri, Oct 23, 2020 at 02:04:03PM +0200, FlashBlog wrote: Hi there, > i need to configure reverse proxy for an iptv list, can anyone help me? The usual way is to "proxy_pass" within a "location" that you want to handle the request. http://nginx.org/r/proxy_pass http://nginx.org/r/location Do you get an error message when you make a request? Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Oct 26 13:00:10 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 26 Oct 2020 13:00:10 +0000 Subject: Deny all location ... 
except In-Reply-To: References: Message-ID: <20201026130010.GY30691@daoine.org> On Wed, Oct 21, 2020 at 03:50:03PM -0300, Rejaine Silveira Monteiro wrote: Hi there, > I want to deny all access from external network to /prod and /hml > locations, except from some arguments like "?TEST" or "=TEST" I don't see a straightforward way to do this, I'm afraid. For a non-straightforward way: It is possible to use "map" to create a variable to indicate whether or not you want this request to be blocked; and then use that variable instead of your allow/deny lists. But that will only be useful if you can find patterns that correctly match everything you want in the block/no-block decision. (And it is sort-of doing in config, what the application already has facilities to do -- but your special use case might make that worthwhile.) > https:/domain.com/prod/* (allow to localnet, deny to external users), but... > https:/domain.com/prod/INDEX.apw?op=01&SERVICE=TEST (allo to all) > https:/domain.com/prod/INDEX.apw?TEST=123 (allow to all) > https:/domain.com/prod/something/TEST/index.html (allow to all) For example, if your version of "localnet" can be expressed in a small number of regex patterns, then you could try something like map "$remote_addr-$request_uri" $block_this_request { default 1; ~^127\.0\.0\.1-/ 0; } which, when used, would have the effect of allowing everything from 127.0.0.1 and blocking everything else. Your case might want something like ~^192\.168\. to allow addresses that match that pattern. (If your "localnet" is not simple, then "geo" might be usable to set an always-allow variable based on source IP instead; and *that* could be used in the continuation of this example.) Next, that same "map" should include whatever patterns you want to allow from anywhere -- perhaps the string TEST anywhere in the url, or the string TEST= after a ? 
in the url; or even a complete list of urls or url prefixes that you want to allow; and set the $block_this_request variable to 0 in those cases. For example, for "/TEST/" or "TEST=" or "=TEST" anywhere in the url, you could add the three lines ~/TEST/ 0; ~TEST= 0; ~=TEST 0; inside the map. > I tried to use something like " if ($args = TEST) { allow all;}", > but Nginx gives error " directive is not allowed here" Now that you have set $block_this_request: in the two location{}s that you want it to take effect, remove the "allow" and "deny" lines, and instead add if ($block_this_request) { return 403; } which should have the same effect as the previous deny. > location /prod { > allow localnet; > deny all; > proxy_pass http://web1.domain:8080/prod; > } > location /hml { > allow localnet; > deny all > proxy_pass http://web1.domain:8081/hml; > } > > I appreciate any help It does not strike me as especially elegant; but if it does what you want, efficiently enough for your use case, it may be adequate. At least until someone suggests an alternative. Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Oct 27 15:26:06 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Oct 2020 18:26:06 +0300 Subject: nginx-1.19.4 Message-ID: <20201027152606.GD50919@mdounin.ru> Changes with nginx 1.19.4 27 Oct 2020 *) Feature: the "ssl_conf_command", "proxy_ssl_conf_command", "grpc_ssl_conf_command", and "uwsgi_ssl_conf_command" directives. *) Feature: the "ssl_reject_handshake" directive. *) Feature: the "proxy_smtp_auth" directive in mail proxy. 
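For readers wondering what the new 1.19.4 directives look like in practice, here is a sketch based on the documented syntax; the server name and certificate paths are placeholders:

```nginx
# Refuse TLS handshakes for names no server block is prepared to serve.
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    # Pass settings straight through to OpenSSL's configuration API.
    ssl_conf_command Options      PrioritizeChaCha;
    ssl_conf_command Ciphersuites TLS_CHACHA20_POLY1305_SHA256;
}
```

Note that with ssl_reject_handshake enabled, the catch-all server does not need a certificate of its own.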
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Oct 28 04:28:04 2020 From: nginx-forum at forum.nginx.org (bouvierh) Date: Wed, 28 Oct 2020 00:28:04 -0400 Subject: upstream SSL certificate does not match "x.x.x.x" Message-ID: Hello, I have an nginx proxy server "NGINX_SERVER" configured as follows: listen 443 ssl default_server; chunked_transfer_encoding on; ssl_certificate server.crt; ssl_certificate_key private_key_server.pem; ssl_client_certificate trustedCA.crt; #ssl_verify_depth 7; ssl_verify_client optional_no_ca; location / { proxy_http_version 1.1; resolver 127.0.0.11; proxy_ssl_trusted_certificate trustedCA.crt; proxy_ssl_verify_depth 7; proxy_ssl_verify on; proxy_pass https://13.78.229.75:443; } The server "13.78.229.75" has a server certificate generated for an IP. When I do curl --cacert trustedCA.crt https://13.78.229.75:443 -v from "NGINX_SERVER", everything works fine. So the server certificate from "13.78.229.75" should be good. Additionally, openssl s_client -connect 13.78.229.75:443 -showcerts -verify 9 -CAfile trustedCA.crt is good too. However, when I try to curl my "NGINX_SERVER": curl https://NGINX_SERVER I get: *110 upstream SSL certificate does not match "13.78.229.75" while SSL handshaking to upstream, client: 13.78.128.54, server: , request: Looking at the server certificate, everything looks ok: Subject: CN = 13.78.229.75 X509v3 Subject Alternative Name: IP Address:13.78.229.75, DNS:iotedgeapiproxy I am at a loss. How can curl/openssl tell me my server cert is valid while nginx tells me it is wrong? What am I doing wrong? Thank you!
Hugues Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289813,289813#msg-289813 From hershil at gmail.com Wed Oct 28 08:44:40 2020 From: hershil at gmail.com (Vikas Kumar) Date: Wed, 28 Oct 2020 14:14:40 +0530 Subject: Nginx logging phase Message-ID: I'm writing an Nginx plugin (using Openresty Lua) which increments a counter when a request is received (in ACCESS phase) and decrements the counter when request is processed (in LOG phase) in order to keep track of in-flight requests. I've seen some cases where the counter increments but does not decrement and reaches a very high value, but can't reproduce. The core of my logic depends on the accurate value of the in-flight requests counter. I wanted to ask if there are any cases where, for a request, ACCESS phase is called and LOG phase is not called. I can paste the relevant code if required. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Oct 28 13:04:02 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 28 Oct 2020 13:04:02 +0000 Subject: upstream SSL certificate does not match "x.x.x.x" In-Reply-To: References: Message-ID: <20201028130402.GA29865@daoine.org> On Wed, Oct 28, 2020 at 12:28:04AM -0400, bouvierh wrote: Hi there, it looks to me like you've come across a case that the current nginx code does not handle in the way that you want it to. Maybe the nginx code could be changed to handle this case; or maybe it will be decided that what nginx does is correct. Either way -- until a code change is made on the nginx side, you will have to either use something other than nginx, or change something on your side to work within the current nginx implementation. 
> location / { > proxy_http_version 1.1; > resolver 127.0.0.11; > proxy_ssl_trusted_certificate trustedCA.crt; > proxy_ssl_verify_depth 7; > proxy_ssl_verify on; > proxy_pass https://13.78.229.75:443; > } > However when I try to curl my "NGINX_SERVER": > curl https://"NGINX_SERVER > I get: > *110 upstream SSL certificate does not match "13.78.229.75" while SSL > handshaking to upstream, client: 13.78.128.54, server: , request: > > Looking at the server certificate, everything looks ok: > Subject: CN = 13.78.229.75 > X509v3 Subject Alternative Name: > IP Address:13.78.229.75, DNS:iotedgeapiproxy > > I am at loss. How can curl/openssl tell me my server cert is valid while > nginx telling me it is wrong. What am I doing wrong? What nginx currently does (at least: looking at the 1.17.2 code, which I happen to have to hand), is: * if there is a Subject Alternative Name defined, then it is looked at and Subject is ignored. * within Subject Alternative Name, only DNS values are looked at (not IP Address values). And your certificate happens to mean that nginx-as-implemented-today will not accept it as valid for the IP address. Possibly adding proxy_ssl_name iotedgeapiproxy; to your config will make things Just Work. Alternatively, changing the proxy_pass to refer to https://iotedgeapiproxy, and making sure that that resolves to the IP address (by using an "upstream" definition, or by having your system resolver respond appropriately at nginx startup) should work, but that might have consequences for returned Location: headers and the like. The other options would involve not verifying the certificate (bad!), or re-issuing the certificate either with no Subject Alternative Name, or with an extra value in the "DNS" part of the Subject Alternative Name that is the IP address, just to work around the nginx implementation. Hopefully the "proxy_ssl_name" addition will be enough. 
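Concretely, that first suggestion would be a one-line addition to the location block quoted above (a sketch only; the name is the DNS entry from the certificate's Subject Alternative Name):

```nginx
location / {
    proxy_http_version 1.1;
    resolver 127.0.0.11;
    proxy_ssl_trusted_certificate trustedCA.crt;
    proxy_ssl_verify_depth 7;
    proxy_ssl_verify on;
    # Verify the upstream certificate against its DNS SAN entry
    # instead of the IP address taken from proxy_pass:
    proxy_ssl_name iotedgeapiproxy;
    proxy_pass https://13.78.229.75:443;
}
```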
Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Oct 28 13:29:27 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Oct 2020 16:29:27 +0300 Subject: Nginx logging phase In-Reply-To: References: Message-ID: <20201028132927.GQ50919@mdounin.ru> Hello! On Wed, Oct 28, 2020 at 02:14:40PM +0530, Vikas Kumar wrote: > I'm writing an Nginx plugin (using Openresty Lua) which increments a > counter when a request is received (in ACCESS phase) and decrements the > counter when request is processed (in LOG phase) in order to keep track of > in-flight requests. > > I've seen some cases where the counter increments but does not decrement > and reaches a very high value, but can't reproduce. The core of my logic > depends on the accurate value of the in-flight requests counter. > > I wanted to ask if there are any cases where, for a request, ACCESS phase > is called and LOG phase is not called. The access phase handlers might be called more than once, for example, after internal redirects. Note well that access phase handlers might not be called at all if the request is finalized at earlier stages. -- Maxim Dounin http://mdounin.ru/ From sca at andreasschulze.de Wed Oct 28 16:58:39 2020 From: sca at andreasschulze.de (A. Schulze) Date: Wed, 28 Oct 2020 17:58:39 +0100 Subject: remote_addr variable In-Reply-To: <20201025112023.GW30691@daoine.org> References: <27e4d63c-3da9-8425-f71f-e56960322091@andreasschulze.de> <20201025112023.GW30691@daoine.org> Message-ID: Am 25.10.20 um 12:20 schrieb Francis Daly: > map $remote_addr $this_transport_is { > ~: IPv6; > default IPv4; > } > > and then use $this_transport_is where you want it. > > (Note: I have tested this with > > return 200 "Transport: $this_transport_is\n"; > > but I have not tried ssi.) Hello Francis! thanks for that hint. It works very well for me. 
I can confirm the variable '$this_transport_is' is accessible via ssi, too: Andreas From hahiwa at gmail.com Wed Oct 28 21:40:39 2020 From: hahiwa at gmail.com (Hiwa) Date: Thu, 29 Oct 2020 00:40:39 +0300 Subject: nginx Digest, Vol 132, Issue 24 In-Reply-To: References: Message-ID: Hello dears, I want to install nginx as a proxy cache server. How can I do it, and where can I get a license? On Wed, 28 Oct 2020, 3:00 pm , wrote: > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Thu Oct 29 01:28:06 2020 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 29 Oct 2020 04:28:06 +0300 Subject: nginx Digest, Vol 132, Issue 24 In-Reply-To: References: Message-ID: <20201029012806.GA994@FreeBSD.org.ru> Hi there, to get a license for NGINX Plus please contact the NGINX Sales team, https://nginx.com/contact-sales/.
-- Sergey Osokin On Thu, Oct 29, 2020 at 12:40:39AM +0300, Hiwa wrote: > Hello dears i want to install nginx as proxy cache server how can i do it > and where i can get license.. > > On Wed, 28 Oct 2020, 3:00 pm , wrote: > > [...] > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Oct 29 08:42:33 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Thu, 29 Oct 2020 04:42:33 -0400 Subject: Nginx proxy_bind failing Message-ID: All: I'm attempting to configure nginx to reverse proxy requests from (192.168.0.2:12345) the same Internal Host Address that it's listening from (192.168.0.2:443) on separate ports using the listen and proxy_bind directives. # /opt/sbin/nginx -v nginx version: nginx/1.19.2 (x86_64-pc-linux-gnu) # cat nginx.conf user admin root; #user nobody; worker_processes 1; events { worker_connections 64; } http { # HTTPS server server { listen 192.168.0.2:443 ssl; server_name z1.fm; ssl_certificate /etc/cert.pem; ssl_certificate_key /etc/key.pem; proxy_ssl_server_name on; ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { # root html; # index index.html index.htm; resolver 103.86.99.100; # proxy_bind 192.168.0.2:12345; proxy_bind $server_addr:12345; # proxy_bind $remote_addr:12345 transparent; proxy_pass $scheme://$host; } } } I've tried changing the "user admin root;" which is the root user for this router. I've tried using different combinations of "proxy_bind 192.168.0.2;", "proxy_bind 192.168.0.2 transparent;", "proxy_bind $server_addr;", and "proxy_bind $server_addr transparent;".
None of them appear to work, when validating with tcpdump. nginx always uses the External WAN Address (100.64.8.236). Ifconfig Output: # ifconfig br0 Link encap:Ethernet HWaddr C0:56:27:D1:B8:A4 inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:10243803 errors:0 dropped:0 overruns:0 frame:0 TX packets:5440860 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14614392834 (13.6 GiB) TX bytes:860977246 (821.0 MiB) br0:0 Link encap:Ethernet HWaddr C0:56:27:D1:B8:A4 inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1 vlan2 Link encap:Ethernet HWaddr C0:56:27:D1:B8:A4 inet addr:100.64.8.236 Bcast:100.64.15.255 Mask:255.255.248.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1757588 errors:0 dropped:0 overruns:0 frame:0 TX packets:613625 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2267961441 (2.1 GiB) TX bytes:139435610 (132.9 MiB) Route Output: # route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.10.0.17 * 255.255.255.255 UH 0 0 0 tun12 89.38.98.142 100.64.8.1 255.255.255.255 UGH 0 0 0 vlan2 100.64.8.1 * 255.255.255.255 UH 0 0 0 vlan2 10.15.0.65 * 255.255.255.255 UH 0 0 0 tun11 192.168.2.1 * 255.255.255.255 UH 0 0 0 vlan3 51.68.180.4 100.64.8.1 255.255.255.255 UGH 0 0 0 vlan2 192.168.2.0 * 255.255.255.0 U 0 0 0 vlan3 192.168.0.0 * 255.255.255.0 U 0 0 0 br0 100.64.8.0 * 255.255.248.0 U 0 0 0 vlan2 127.0.0.0 * 255.0.0.0 U 0 0 0 lo default 100.64.8.1 0.0.0.0 UG 0 0 0 vlan2 Tcpdump Output: Client Remote_Addr (192.168.0.154:$port) == Request => Nginx Reverse Proxy Server - Listener (192.168.0.2:443) 07:19:06.840468 In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0 07:19:06.840468 In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 
192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0 Nginx Reverse Proxy Server - Listener (192.168.0.2:443) == Response => Client Remote_Addr (192.168.0.154:$port) 07:19:06.841377 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0 07:19:06.841411 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0 Nginx Reverse Proxy Server - Sender (100.64.8.236:12345) == Request => Upstream Destination Server - Listener (104.27.161.206:443) 07:19:11.885314 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 76: 100.64.8.236.12345 > 104.27.161.206.443: Flags [S], seq 3472185855, win 5840, options [mss 1460,sackOK,TS val 331214 ecr 0,nop,wscale 4], length 0 Upstream Destination Server - Listener (104.27.161.206:443) == Response => Nginx Reverse Proxy Server - Sender (100.64.8.236:12345) 07:19:11.887683 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 68: 104.27.161.206.443 > 100.64.8.236.12345: Flags [S.], seq 2113436779, ack 3472185856, win 65535, options [mss 1400,nop,nop,sackOK,nop,wscale 10], length 0 Note: The Nginx Reverse Proxy Server (Listener) and Nginx Reverse Proxy Server (Sender) MAC addresses are the same piece of hardware 07:19:06.840468 In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0 07:19:06.840468 In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0 07:19:06.841377 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0 07:19:06.841411 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0 07:19:11.885314 Out c0:56:27:d1:b8:a4 ethertype IPv4
(0x0800), length 76: 100.64.8.236.12345 > 104.27.161.206.443: Flags [S], seq 3472185855, win 5840, options [mss 1460,sackOK,TS val 331214 ecr 0,nop,wscale 4], length 0 07:19:11.887683 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 68: 104.27.161.206.443 > 100.64.8.236.12345: Flags [S.], seq 2113436779, ack 3472185856, win 65535, options [mss 1400,nop,nop,sackOK,nop,wscale 10], length 0 07:19:11.887948 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 100.64.8.236.12345 > 104.27.161.206.443: Flags [.], ack 1, win 365, length 0 07:19:11.888854 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 264: 100.64.8.236.12345 > 104.27.161.206.443: Flags [P.], seq 1:209, ack 1, win 365, length 208 07:19:11.890844 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 62: 104.27.161.206.443 > 100.64.8.236.12345: Flags [.], ack 209, win 66, length 0 07:19:11.893154 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 1516: 104.27.161.206.443 > 100.64.8.236.12345: Flags [.], seq 1:1461, ack 209, win 66, length 1460 07:19:11.893316 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 100.64.8.236.12345 > 104.27.161.206.443: Flags [.], ack 1461, win 548, length 0 07:19:11.893161 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 1000: 104.27.161.206.443 > 100.64.8.236.12345: Flags [P.], seq 1461:2405, ack 209, win 66, length 944 Iptables Output: # iptables -t mangle -I PREROUTING -i vlan2 -p tcp -m multiport --dport 12345 -j MARK --set-mark 0x2000/0x2000 # iptables -t mangle -I POSTROUTING -o vlan2 -p tcp -m multiport --sport 12345 -j MARK --set-mark 0x8000/0x8000 Note: Packets are matching and being marked, but not being routed to the appropriate interfaces. I'm thinking it may be too late in the pipe. 
# iptables -t mangle -L -v -n Chain PREROUTING (policy ACCEPT 5506K packets, 8051M bytes) pkts bytes target prot opt in out source destination 33 15329 MARK tcp -- vlan2 * 0.0.0.0/0 0.0.0.0/0 multiport dports 12345 MARK or 0x2000 Chain POSTROUTING (policy ACCEPT 2832K packets, 171M bytes) pkts bytes target prot opt in out source destination 30 4548 MARK tcp -- * vlan2 0.0.0.0/0 0.0.0.0/0 multiport sports 12345 MARK or 0x8000 The reverse proxied requests make it to the destination and back, but using the External WAN Address (100.64.8.236:12345) and not the Internal Host Address (192.168.0.2:12345). The proxy_bind directive just seems to be failing. Any ideas? Thanks! Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289823,289823#msg-289823 From nginx-forum at forum.nginx.org Thu Oct 29 12:43:01 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Thu, 29 Oct 2020 08:43:01 -0400 Subject: Nginx proxy_bind failing In-Reply-To: References: Message-ID: <3d19c48ffe33c6898be1ae19c248fe1c.NginxMailingListEnglish@forum.nginx.org> All: I discovered a single SYN packet being sent from 192.168.0.2:12345 (nginx worker) when initiating traffic. Nothing more. # netstat -anp|grep 12345 tcp 0 1 192.168.0.2:12345 172.64.163.36:443 SYN_SENT 14176/nginx: worker For whatever reason, that packet isn't showing up in my promiscuous tcpdump. Question: If 192.168.0.2:12345 doesn't receive a SYN,ACK will the nginx worker use the next available interface address (i.e., 100.64.8.236:12345) to establish a connection? My goal is to use iptables to MARK the 192.168.0.2:12345 packets for routing over an established OpenVPN Tunnel (tun12). Any guidance is much appreciated. 
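For what it's worth, a fwmark by itself does not change routing; it normally also needs an ip-rule pointing at a routing table whose default route is the tunnel. A sketch only, reusing the 0x2000 mark from the iptables rules quoted above, with table number 100 chosen arbitrarily (the mark, table number, and interface name must all be adapted to the actual router):

```shell
# Send packets carrying mark 0x2000 through a dedicated routing table
# whose default route is the OpenVPN tunnel interface.
ip rule add fwmark 0x2000/0x2000 table 100
ip route add default dev tun12 table 100
ip route flush cache
```

Reverse-path filtering on the tunnel interface may also need loosening (e.g. sysctl net.ipv4.conf.tun12.rp_filter=2) before replies are accepted.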
Respectfully, Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289823,289824#msg-289824 From hershil at gmail.com Thu Oct 29 12:53:59 2020 From: hershil at gmail.com (Vikas Kumar) Date: Thu, 29 Oct 2020 18:23:59 +0530 Subject: Nginx logging phase In-Reply-To: <20201028132927.GQ50919@mdounin.ru> References: <20201028132927.GQ50919@mdounin.ru> Message-ID: Do you have a recommendation on what handlers are suitable for my use case? Thanks. On Wed, Oct 28, 2020 at 6:59 PM Maxim Dounin wrote: > Hello! > > On Wed, Oct 28, 2020 at 02:14:40PM +0530, Vikas Kumar wrote: > > > I'm writing an Nginx plugin (using Openresty Lua) which increments a > > counter when a request is received (in ACCESS phase) and decrements the > > counter when request is processed (in LOG phase) in order to keep track > of > > in-flight requests. > > > > I've seen some cases where the counter increments but does not decrement > > and reaches a very high value, but can't reproduce. The core of my logic > > depends on the accurate value of the in-flight requests counter. > > > > I wanted to ask if there are any cases where, for a request, ACCESS phase > > is called and LOG phase is not called. > > The access phase handlers might be called more than once, for > example, after internal redirects. > > Note well that access phase handlers might not be called at all if > the request is finalized at earlier stages. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Oct 29 15:51:20 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Oct 2020 18:51:20 +0300 Subject: Nginx logging phase In-Reply-To: References: <20201028132927.GQ50919@mdounin.ru> Message-ID: <20201029155120.GS50919@mdounin.ru> Hello! 
On Thu, Oct 29, 2020 at 06:23:59PM +0530, Vikas Kumar wrote: > Do you have a recommendation on what handlers are suitable for my use case? In nginx itself, the proper approach to count in-flight requests would be: 1. Increment the counter only if no cleanup handler of the module is installed in the request pool. This way the counter can be incremented in any phase, which makes it possible to do this in a particular location. If the handler is called multiple times, additional calls are simply ignored. 2. Once the counter is incremented, install the module's cleanup handler to decrement the counter. This ensures that the counter is always properly decremented when the request is freed. This is more or less what the limit_conn module does (except it uses some bits in the request structure to optimize checking if the cleanup handler is installed). No idea though if something similar can be done with the Lua module you are trying to use. -- Maxim Dounin http://mdounin.ru/ From jordanvonkluck at gmail.com Thu Oct 29 18:02:57 2020 From: jordanvonkluck at gmail.com (Jordan von Kluck) Date: Thu, 29 Oct 2020 13:02:57 -0500 Subject: Transient, Load Related Slow response_time / upstream_response_time vs App Server Reported Times Message-ID: Hello - I am hoping someone on the community list can help steer me in the right direction for troubleshooting the following scenario: I am running a cluster of 4 virtualized nginx open source 1.16.0 servers with 4 vCPU cores and 4 GB of RAM each. They serve HTTP (REST API) requests to a pool of about 40 different upstream clusters, which range from 2 to 8 servers within each upstream definition. The upstream application servers themselves have multiple workers per server. I've recently started seeing an issue where the reported response_time and typically the reported upstream_response_time in the nginx access log are drastically different from the reported response times on the application servers themselves.
For example, on some requests the typical average response_time would be around 5ms with an upstream_response_time of 4ms. During these transient periods of high load (approximately 1200 -1400 rps), the reported nginx response_time and upstream_response_time spike up to somewhere around 1 second, while the application logs on the upstream servers are still reporting the same 4ms response time. The upstream definitions are very simple and look like: upstream rest-api-xyz { least_conn; server 10.1.1.33:8080 max_fails=3 fail_timeout=30; # production-rest-api-xyz01 server 10.1.1.34:8080 max_fails=3 fail_timeout=30; # production-rest-api-xyz02 } One avenue that I've considered but does not seem to be the case from the instrumentation on the app servers is that they're accepting the requests and queueing them in a TCP socket locally. However, running a packet capture on both the nginx server and the app server actually shows the http request leaving nginx at the end of the time window. I have not looked at this down to the TCP handshake to see if the actual negotiation is taking an excessive amount of time. I can produce this queueing scenario artificially, but it does not appear to be what's happening in my production environment in the scenario described above. Does anyone here have any experience sorting out something like this? The upstream_connect_time is not part of the log currently, but if that number was reporting high, I'm not entirely sure what would cause that. Similarly, if the upstream_connect_time does not account for most of the delay, is there anything else I should be looking at? Thanks Jordan -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Thu Oct 29 18:12:11 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 29 Oct 2020 23:42:11 +0530 Subject: Query on nginx. conf file regarding redirection. 
Message-ID: Hi, I have a specific query regarding the below /etc/nginx/nginx.conf file. When I hit this URL http://219.11.134.114/test/_plugin/kibana/app/kibana in the browser it does not get redirected to https://vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com/; # TEST server { listen 81; location /test { proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; fastcgi_read_timeout 240; proxy_pass https://vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com/; } error_page 404 /404.html; location = /40x.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } Similarly, when I hit this URL http://219.11.134.114/prod/_plugin/kibana/app/kibana in the browser it does not get redirected to https://vpc-lab-prod-search-9aay182kkjoisl.eu-north-1.es.amazonaws.com/ # PROD server { listen 80; location /prod { proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; fastcgi_read_timeout 240; proxy_pass https://vpc-lab-prod-search-9aay182kkjoisl.eu-north-1.es.amazonaws.com/; } error_page 404 /404.html; location = /40x.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } Any help will be highly appreciated. Thanks in Advance and I look forward to hearing from you. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From shankar.borate at workapps.com Thu Oct 29 18:35:13 2020 From: shankar.borate at workapps.com (shankar.borate at workapps.com) Date: Fri, 30 Oct 2020 00:05:13 +0530 Subject: Nginx as a forward proxy Message-ID: <00bd01d6ae22$3f5b2fb0$be118f10$@workapps.com> Dear Sir, I am using nginx as a reverse proxy. All my requests go to nginx and then to the application server. This works well. I have a requirement where, from nginx, an outbound request needs to go to an internet HTTPS proxy and then to some other service in AWS.
Request flow is as follows: Browser --> WAF --> Nginx --> corporate https proxy --> AWS S3 (s3 streaming url). My question is: is there a way nginx can proxy pass a request to S3 via the proxy server? If yes, how? Let me know some config snippet. Thanks in Advance !! Regards, Shankar Borate | CoFounder & CTO | +91-8975761692 Start a Chat with me instantly? workApps.com/110 --------- Enterprise Messaging Platform for Banks, Insurance, Financial Services, Securities and Mutual Funds From: nginx On Behalf Of Kaushal Shriyan Sent: 29 October 2020 23:42 To: nginx at nginx.org Subject: Query on nginx.conf file regarding redirection. [...] -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Thu Oct 29 19:03:23 2020 From: nginx-forum at forum.nginx.org (bouvierh) Date: Thu, 29 Oct 2020 15:03:23 -0400 Subject: upstream SSL certificate does not match "x.x.x.x" In-Reply-To: <20201028130402.GA29865@daoine.org> References: <20201028130402.GA29865@daoine.org> Message-ID: <2943217340c63811d45135d243f63459.NginxMailingListEnglish@forum.nginx.org> It worked! Thank you so much! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289813,289843#msg-289843 From robenau at gmail.com Thu Oct 29 21:23:33 2020 From: robenau at gmail.com (Robert Naundorf) Date: Thu, 29 Oct 2020 22:23:33 +0100 Subject: Session ticket renewal regarding RFC 5077 TLS session resumption Message-ID: Hello, I have a question on TLS session resumption with client-side session tickets and its implementation in nginx. RFC 5077, section 3.3, paragraph 2 reads: If the server successfully verifies the client's ticket, then it MAY renew the ticket by including a NewSessionTicket handshake message after the ServerHello in the abbreviated handshake. The client should start using the new ticket as soon as possible ... Which seems very reasonable to me. That way the session could continue without the need of a costly full handshake. It could continue virtually forever, as long as the client resumes the session within the time window configured by ssl_session_timeout. However, it appears to me that nginx will not issue a new session ticket proactively before ssl_session_timeout elapses. So session resumption works fine within ssl_session_timeout and nginx initiates a full handshake once the timeout has expired. Searching the interwebs I found an old trac issue ( https://trac.nginx.org/nginx/ticket/120) including a patch, where it was reported that clients do not seem to support this kind of behavior.
And then there is ticket 1892 (https://trac.nginx.org/nginx/ticket/1892) which is about session ticket renewal on TLS 1.3 (in my case it is TLS 1.2) but says that the setting ssl_session_ticket_key plays a role for this topic. So is my expectation and my understanding of RFC 5077 correct? And what is the current implementation in nginx? Best regards, Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Oct 29 23:05:11 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Oct 2020 23:05:11 +0000 Subject: Query on nginx. conf file regarding redirection. In-Reply-To: References: Message-ID: <20201029230511.GB29865@daoine.org> On Thu, Oct 29, 2020 at 11:42:11PM +0530, Kaushal Shriyan wrote: Hi there, > When I hit this URL http://219.11.134.114/test/_plugin/kibana/app/kibana on > the browser it does not get redirected to > https://vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com/; What does the nginx access log or error log say happened to that request? Also: what IP address does vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com have right now; and what IP address did it have when nginx was started? Your test request goes to port 80 on a specific IP address; your nginx is listening on port 81 on a non-specified IP address -- if the nginx logs do not show that this nginx got that request, then that's a thing to fix before the nginx config. And your proxy_pass is to a hostname that will be resolved once, at startup; if the remote address changes, your nginx config will not notice the change. > Similarly, when I hit this URL > http://219.11.134.114/prod/_plugin/kibana/app/kibana on the browser it does > not get redirected to > https://vpc-lab-prod-search-9aay182kkjoisl.eu-north-1.es.amazonaws.com/ Same questions; same reasons (except this nginx does listen on port 80). 
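For what it's worth, the usual sketch for making nginx re-resolve a name at request time is to combine a "resolver" with a variable in proxy_pass. This is only a sketch: the resolver address below is a placeholder (in an AWS VPC the built-in resolver normally lives at the VPC network base + 2), and note that with a variable nginx no longer strips the location prefix for you:

```nginx
# Sketch only -- 10.0.0.2 is a placeholder resolver address.
resolver 10.0.0.2 valid=30s;

location /test/ {
    # Storing the name in a variable makes nginx resolve it through
    # "resolver" at request time, instead of once at startup.
    set $es_backend vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com;

    # With a variable, proxy_pass stops rewriting the URI, so strip
    # the /test prefix explicitly before proxying.
    rewrite ^/test/(.*)$ /$1 break;
    proxy_pass https://$es_backend;
}
```

That does not answer the port-80-versus-81 question, but it removes the stale-DNS failure mode.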
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Oct 29 23:32:28 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Oct 2020 23:32:28 +0000 Subject: Nginx as a forward proxy In-Reply-To: <00bd01d6ae22$3f5b2fb0$be118f10$@workapps.com> References: <00bd01d6ae22$3f5b2fb0$be118f10$@workapps.com> Message-ID: <20201029233228.GC29865@daoine.org> On Fri, Oct 30, 2020 at 12:05:13AM +0530, shankar.borate at workapps.com wrote: Hi there, > I have requirement where from nginx, outbound request need to go to internet https proxy and then to some other service in AWS. Request flow is as follow > > Browser --> WAF--> Nginx-->corporate https proxy --> AWS S3 (s3 streaming url). > > My question is, is there a way nginx can proxy pass request to S3 via proxy server? If yes, how? No. Current stock nginx does not speak the http-via-proxy protocol as a client. I'm not aware of any third party modules that do that, either. I suspect your choices are "use something other than nginx", or "write, or encourage someone to write, the code that you want". Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Oct 30 11:15:58 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Fri, 30 Oct 2020 07:15:58 -0400 Subject: Nginx proxy_bind failing In-Reply-To: <3d19c48ffe33c6898be1ae19c248fe1c.NginxMailingListEnglish@forum.nginx.org> References: <3d19c48ffe33c6898be1ae19c248fe1c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <039583df79acccda29bd8123f70f58e1.NginxMailingListEnglish@forum.nginx.org> All: After reviewing the iptables chains workflow, I discovered that the Nginx Worker (100.64.8.236:12345) outside interface was associated with the OUTPUT chain. 
(192.168.0.2:12345) OUTPUT ==> (192.168.0.154:$port) PREROUTING ==> (100.64.8.236:12345) POSTROUTING ==> Windows Client (192.168.0.154:$port) ==> Nginx Master (192.168.0.2:443) | Nginx Worker (100.64.8.236:12345) ==> Upstream Destination Server (104.27.161.206:443) <== POSTROUTING (192.168.0.2:443) <== PREROUTING (104.27.161.206:443) After adding the appropriate iptables OUTPUT rule, using the correct interface (vlan2), the packets leaving the Nginx Worker (100.64.8.236:12345) were then appropriately MARKed and routed to the OpenVPN Tunnel. # iptables -t mangle -I OUTPUT -o vlan2 -p tcp -m multiport --sport 12345 -j MARK --set-mark 0x2000/0x2000 Now, I just need to figure out the Nginx SSL Client CA Trust configuration and we should be in business. Hope this helps someone in the future. Respectfully, Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289823,289847#msg-289847 From nginx-forum at forum.nginx.org Fri Oct 30 15:29:39 2020 From: nginx-forum at forum.nginx.org (bobfang_sqp) Date: Fri, 30 Oct 2020 11:29:39 -0400 Subject: ip_hash and multiple clients on the same host Message-ID: Hi, I am using nginx as a load balancer and I require sticky sessions (the same client always goes to the same server); for this purpose I use the following setup: upstream backend { ip_hash; server host1:8000; server host2:8000; } The `ip_hash` directive is what I learned from online docs, but I am just wondering: does this mean that all the requests from the same host will be routed to the same upstream server? Since I may have multiple clients running on the same host, I want to still distribute these requests to different upstream servers, but if I am using ip_hash does that imply that if two clients are on the same host they will be routed to the same server because they share the same IP? There seem to be some more advanced configs to achieve sticky sessions, but those would require NGINX Plus, which I don't have...
My intention is that even if two clients come from same host they should probably go to different server, but after they are assigned an upstream server they should stick to that one. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289848,289848#msg-289848 From kaushalshriyan at gmail.com Fri Oct 30 17:15:14 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Fri, 30 Oct 2020 22:45:14 +0530 Subject: Query on nginx. conf file regarding redirection. In-Reply-To: <20201029230511.GB29865@daoine.org> References: <20201029230511.GB29865@daoine.org> Message-ID: On Fri, Oct 30, 2020 at 4:35 AM Francis Daly wrote: > On Thu, Oct 29, 2020 at 11:42:11PM +0530, Kaushal Shriyan wrote: > > Hi there, > > > When I hit this URL http://219.11.134.114/test/_plugin/kibana/app/kibana > on > > the browser it does not get redirected to > > https://vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com/ > ; > > What does the nginx access log or error log say happened to that request? > > Also: what IP address does > vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com have > right now; and what IP address did it have when nginx was started? > > > Your test request goes to port 80 on a specific IP address; your nginx > is listening on port 81 on a non-specified IP address -- if the nginx > logs do not show that this nginx got that request, then that's a thing > to fix before the nginx config. > > And your proxy_pass is to a hostname that will be resolved once, > at startup; if the remote address changes, your nginx config will not > notice the change. > > > Similarly, when I hit this URL > > http://219.11.134.114/prod/_plugin/kibana/app/kibana on the browser it > does > > not get redirected to > > https://vpc-lab-prod-search-9aay182kkjoisl.eu-north-1.es.amazonaws.com/ > > Same questions; same reasons (except this nginx does listen on port 80). 
> > Cheers, > > > Hi Francis, I am seeing this below message when I hit http://219.11.134.114/test/_plugin/kibana/app/kibana ==> /var/log/nginx/error.log <== 2020/10/30 17:10:57 [error] 9616#0: *20 open() "/usr/share/nginx/html/test/_plugin/kibana/app/kibana" failed (2: No such file or directory), client: 14.98.153.6, server: , request: "GET /test/_plugin/kibana/app/kibana HTTP/1.1", host: "219.11.134.114" ==> /var/log/nginx/access.log <== 14.98.153.6 - - [30/Oct/2020:17:10:57 +0000] "GET /test/_plugin/kibana/app/kibana HTTP/1.1" 404 3650 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36" "-" # TEST server { listen 80; location /test { proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; #fastcgi_read_timeout 240; proxy_connect_timeout 600; proxy_send_timeout 600; proxy_read_timeout 600; send_timeout 600; proxy_pass https://vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com/; } error_page 404 /404.html; location = /40x.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } Also, I am not sure why it is getting referenced to /usr/share/nginx/html/? Is there a way to change the default document root? Please suggest further. Best Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Fri Oct 30 20:46:46 2020 From: osa at freebsd.org.ru (Sergey A. 
Osokin) Date: Fri, 30 Oct 2020 23:46:46 +0300 Subject: ip_hash and multiple clients on the same host In-Reply-To: References: Message-ID: <20201030204646.GE55720@FreeBSD.org.ru> On Fri, Oct 30, 2020 at 11:29:39AM -0400, bobfang_sqp wrote: > Hi I am using nginx as a load balacer and I require sticky session (same > client always go to the same server), for this purpose I use this following > setup: > > > upstream backend { > ip_hash; > server host1:8000; > server host2:8000; > } > > The `ip_hash` directive is what I learned from online docs, but I am just > wondering does this mean that all the requests from the same host will be > routed to the same upstream server? Since I may have multiple clients > running on the same host, I want to still distribute these requests to > different upstream servers but if I am using ip_hash does that imply if two > clients are on the same host they will be routed to the same server because > they share the same ip? There seems to be some more advanced configs to > achieve sticky session but seems that would require nginx plus which I dont > have... My intention is that even if two clients come from same host they > should probably go to different server, but after they are assigned an > upstream server they should stick to that one. Thanks! Hi, a couple of options in this case: - the sticky session feature is available as part of a commercial subscription with NGINX Plus; - the NGINX Sticky module for NGINX OSS is available on GitHub; here's the link to one of the forks: https://github.com/ayty-adrianomartins/nginx-sticky-module-ng -- Sergey From mdounin at mdounin.ru Fri Oct 30 21:01:20 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 31 Oct 2020 00:01:20 +0300 Subject: Transient, Load Related Slow response_time / upstream_response_time vs App Server Reported Times In-Reply-To: References: Message-ID: <20201030210120.GW50919@mdounin.ru> Hello!
On Thu, Oct 29, 2020 at 01:02:57PM -0500, Jordan von Kluck wrote: > I am hoping someone on the community list can help steer me in the right > direction for troubleshooting the following scenario: > > I am running a cluster of 4 virtualized nginx open source 1.16.0 servers > with 4 vCPU cores and 4 GB of RAM each. They serve HTTP (REST API) requests > to a pool of about 40 different upstream clusters, which range from 2 to 8 > servers within each upstream definition. The upstream application servers > themselves have multiple workers per server. > > I've recently started seeing an issue where the reported response_time and > typically the reported upstream_response_time in the nginx access log are > drastically different from the reported response on the application servers > themselves. For example, on some requests the typical average response_time > would be around 5ms with an upstream_response_time of 4ms. During these > transient periods of high load (approximately 1200-1400 rps), the reported > nginx response_time and upstream_response_time spike up to somewhere around > 1 second, while the application logs on the upstream servers are still > reporting the same 4ms response time. > > The upstream definitions are very simple and look like: > upstream rest-api-xyz { > least_conn; > server 10.1.1.33:8080 max_fails=3 fail_timeout=30; # > production-rest-api-xyz01 > server 10.1.1.34:8080 max_fails=3 fail_timeout=30; # > production-rest-api-xyz02 > } > > One avenue that I've considered but does not seem to be the case from the > instrumentation on the app servers is that they're accepting the requests > and queueing them in a TCP socket locally. However, running a packet > capture on both the nginx server and the app server actually shows the http > request leaving nginx at the end of the time window. I have not looked at > this down to the TCP handshake to see if the actual negotiation is taking > an excessive amount of time.
I can produce this queueing scenario > artificially, but it does not appear to be what's happening in my > production environment in the scenario described above. > > Does anyone here have any experience sorting out something like this? The > upstream_connect_time is not part of the log currently, but if that number > was reporting high, I'm not entirely sure what would cause that. Similarly, > if the upstream_connect_time does not account for most of the delay, is > there anything else I should be looking at? Spikes to 1 second suggest that this might be SYN retransmit timeouts. Most likely, this is what happens: your backend cannot cope with load, so the listen queue on the backend overflows. Default behaviour on most Linux boxes is to drop SYN packets on listen queue overflows (net.ipv4.tcp_abort_on_overflow=0). Dropped SYN packets eventually - after an initial RTO, initial retransmission timeout, which is 1s on modern Linux systems - result in retransmission and the connection being finally established, but with a 1s delay. Consider looking at network stats to see if there are actual listen queue overflows on your backends; something like "nstat -az TcpExtListenDrops" should be handy. You can also use "ss -nlt" to see listen queue sizes in real time. In many cases such occasional queue overflows under load simply mean that the listen queue size is too low, so minor load fluctuations might occasionally result in overflows. In this case, using a larger listen queue might help. Also, if the backend servers in question are solely the backend ones, and there are multiple load-balanced servers as your configuration suggests, it might be a good idea to configure these servers to send RST on listen queue overflows, that is, to set net.ipv4.tcp_abort_on_overflow to 1. This way nginx will immediately know that the backend's listen queue is full and will be able to try the next upstream server instead.
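As a concrete sketch, those knobs on a dedicated backend box would look something like the fragment below. The backlog value is only an illustrative guess, and the application must also request a matching backlog in its own listen() call:

```nginx
# /etc/sysctl.d/90-backend-listen.conf -- sketch for a dedicated backend.
# Send RST instead of silently dropping SYNs when the listen queue
# overflows, so the load balancer can retry the next upstream at once.
net.ipv4.tcp_abort_on_overflow = 1
# Raise the kernel cap on listen() backlogs (illustrative value).
net.core.somaxconn = 4096
```

Apply with "sysctl --system", then watch "nstat -az TcpExtListenDrops" under load to confirm the drops stop.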
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Fri Oct 30 21:55:19 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 31 Oct 2020 00:55:19 +0300 Subject: Session ticket renewal regarding RFC 5077 TLS session resumption In-Reply-To: References: Message-ID: <20201030215519.GX50919@mdounin.ru> Hello! On Thu, Oct 29, 2020 at 10:23:33PM +0100, Robert Naundorf wrote: > I have a question on TLS session resumption with client-side session > tickets and its implementation in nginx. > > RFC 5077, section 3.3, paragraph 2 reads: > If the server successfully verifies the client's ticket, then it MAY renew > the ticket by including a NewSessionTicket handshake message after the > ServerHello in the abbreviated handshake. The client should start using the > new ticket as soon as possible ... > > Which seems very reasonable to me. That way the session could continue > without the need of a costly full handshake. It could continue virtually > forever, as long as the client resumes the session within the time window > configured by ssl_session_timeout. This is not how session timeouts work. Session timeout means that parties have to re-check if the other side still have access to the certificate presented and its private key, certificates haven't been revoked and so on. That is, once session timeout expires, there should be a full handshake. In RFC 5077, relevant section is 5.6. Ticket Lifetime (https://tools.ietf.org/html/rfc5077#section-5.6): The TLS server controls the lifetime of the ticket. Servers determine the acceptable lifetime based on the operational and security requirements of the environments in which they are deployed. The ticket lifetime may be longer than the 24-hour lifetime recommended in [RFC4346]. TLS clients may be given a hint of the lifetime of the ticket. Since the lifetime of a ticket may be unspecified, a client has its own local policy that determines when it discards tickets. 
It refers to RFC 4346, which further explains where the 24-hour limit comes from (https://tools.ietf.org/html/rfc4346#appendix-F.1.4): Sessions cannot be resumed unless both the client and server agree. If either party suspects that the session may have been compromised, or that certificates may have expired or been revoked, it should force a full handshake. An upper limit of 24 hours is suggested for session ID lifetimes, since an attacker who obtains a master_secret may be able to impersonate the compromised party until the corresponding session ID is retired. Applications that may be run in relatively insecure environments should not write session IDs to stable storage. Renewing tickets is not about avoiding full handshakes, but rather about returning an updated ticket to the client - for example, to update the key used to encrypt tickets. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Fri Oct 30 23:43:38 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Oct 2020 23:43:38 +0000 Subject: Query on nginx. conf file regarding redirection. In-Reply-To: References: <20201029230511.GB29865@daoine.org> Message-ID: <20201030234338.GD29865@daoine.org> On Fri, Oct 30, 2020 at 10:45:14PM +0530, Kaushal Shriyan wrote: > On Fri, Oct 30, 2020 at 4:35 AM Francis Daly wrote: Hi there, > > What does the nginx access log or error log say happened to that request?
> I am seeing this below message when I hit > http://219.11.134.114/test/_plugin/kibana/app/kibana > > ==> /var/log/nginx/error.log <== > 2020/10/30 17:10:57 [error] 9616#0: *20 open() > "/usr/share/nginx/html/test/_plugin/kibana/app/kibana" failed (2: No such > file or directory), client: 14.98.153.6, server: , request: "GET > /test/_plugin/kibana/app/kibana HTTP/1.1", host: "219.11.134.114" > > ==> /var/log/nginx/access.log <== > 14.98.153.6 - - [30/Oct/2020:17:10:57 +0000] "GET > /test/_plugin/kibana/app/kibana HTTP/1.1" 404 3650 "-" "Mozilla/5.0 > (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) > Chrome/86.0.4240.111 Safari/537.36" "-" When a request comes to nginx, first it decides which server{} you have configured to handle the request; then it decides which location{} in that server{} you have configured to handle the request (more or less). In this case, the request is being handled by looking at the filesystem; that means that it is handled in a location{} that does not have a proxy_pass or other similar handler defined. > # TEST > server { If you have shown the complete server{} block, then this is not the server{} block that the running nginx is using to handle this request. Maybe there is another location{} in the same server{} block? Or maybe there is another server{} in the same configuration? Or maybe the running nginx is using a different configuration? Some variant of "ps" should show your running nginx and any "-c" argument it has; if you copy that much and add "-T", then find the "server" and "listen" lines, it might help identify which server{} block is actually configured to be used for this request. Something like /usr/local/sbin/nginx -c /etc/nginx.conf -T | grep -e 'server\|listen' where the first three words match whatever your system is doing, will probably be helpful. (Do read the output, and edit any information you consider private, before pasting into email.) 
> Also, I am not sure why it is getting referenced to /usr/share/nginx/html/? > Is there a way to change the default document root? Please suggest further. Yes, there is -- root (http://nginx.org/r/root). But that is not the problem here. Cheers, f -- Francis Daly francis at daoine.org From kaushalshriyan at gmail.com Sat Oct 31 01:18:00 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Sat, 31 Oct 2020 06:48:00 +0530 Subject: Query on nginx. conf file regarding redirection. In-Reply-To: <20201030234338.GD29865@daoine.org> References: <20201029230511.GB29865@daoine.org> <20201030234338.GD29865@daoine.org> Message-ID: On Sat, Oct 31, 2020 at 5:13 AM Francis Daly wrote: > On Fri, Oct 30, 2020 at 10:45:14PM +0530, Kaushal Shriyan wrote: > > On Fri, Oct 30, 2020 at 4:35 AM Francis Daly wrote: > > Hi there, > > > > What does the nginx access log or error log say happened to that > request? > > > I am seeing this below message when I hit > > http://219.11.134.114/test/_plugin/kibana/app/kibana > > > > ==> /var/log/nginx/error.log <== > > 2020/10/30 17:10:57 [error] 9616#0: *20 open() > > "/usr/share/nginx/html/test/_plugin/kibana/app/kibana" failed (2: No such > > file or directory), client: 14.98.153.6, server: , request: "GET > > /test/_plugin/kibana/app/kibana HTTP/1.1", host: "219.11.134.114" > > > > ==> /var/log/nginx/access.log <== > > 14.98.153.6 - - [30/Oct/2020:17:10:57 +0000] "GET > > /test/_plugin/kibana/app/kibana HTTP/1.1" 404 3650 "-" "Mozilla/5.0 > > (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like > Gecko) > > Chrome/86.0.4240.111 Safari/537.36" "-" > > When a request comes to nginx, first it decides which server{} you > have configured to handle the request; then it decides which location{} > in that server{} you have configured to handle the request (more or less). 
> > In this case, the request is being handled by looking at the filesystem; > that means that it is handled in a location{} that does not have a > proxy_pass or other similar handler defined. > > > # TEST > > server { > > If you have shown the complete server{} block, then this is not the > server{} block that the running nginx is using to handle this request. > > Maybe there is another location{} in the same server{} block? Or maybe > there is another server{} in the same configuration? Or maybe the running > nginx is using a different configuration? > > > Some variant of "ps" should show your running nginx and any "-c" argument > it has; if you copy that much and add "-T", then find the "server" and > "listen" lines, it might help identify which server{} block is actually > configured to be used for this request. > > Something like > > /usr/local/sbin/nginx -c /etc/nginx.conf -T | grep -e 'server\|listen' > > where the first three words match whatever your system is doing, will > probably be helpful. > > (Do read the output, and edit any information you consider private, > before pasting into email.) > > > Also, I am not sure why it is getting referenced to > /usr/share/nginx/html/? > > Is there a way to change the default document root? Please suggest > further. > > Yes, there is -- root (http://nginx.org/r/root). > > > Hi Francis, I am sharing the output of cat /etc/nginx/nginx.conf OS version : CentOS Linux release 7.8.2003 (Core) nginx version: nginx/1.16.1 cat /etc/nginx/nginx.conf > # For more information on configuration, see: > # * Official English Documentation: http://nginx.org/en/docs/ > # * Official Russian Documentation: http://nginx.org/ru/docs/ > user nginx; > worker_processes auto; > error_log /var/log/nginx/error.log; > pid /run/nginx.pid; > # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. 
> include /usr/share/nginx/modules/*.conf; > events { > worker_connections 1024; > } > http { > log_format main '$remote_addr - $remote_user [$time_local] > "$request" ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > access_log /var/log/nginx/access.log main; > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > include /etc/nginx/mime.types; > default_type application/octet-stream; > # Load modular configuration files from the /etc/nginx/conf.d > directory. > # See http://nginx.org/en/docs/ngx_core_module.html#include > # for more information. > include /etc/nginx/conf.d/*.conf; > > # PROD > # server { > # listen 80; > # location /prod { > # proxy_set_header X-Forwarded-For $remote_addr; > # proxy_set_header Host $http_host; > # # fastcgi_read_timeout 240; > # proxy_connect_timeout 600; > # proxy_send_timeout 600; > # proxy_read_timeout 600; > # send_timeout 600; > # proxy_pass > https://vpc-lab-prod-search-zvf5bfbabstbb7gi5sklqh7ll4.eu-west-1.es.amazonaws.com/ > ; > # } > # error_page 404 /404.html; > # location = /40x.html { > # } > # > # # location = /img { > # # index index.html index.htm index.php; > # # root /var/www/html/images/; > # # } > # > # location /img { > # root html; > # index index.php index.html index.htm; > # } > # error_page 500 502 503 504 /50x.html; > # location = /50x.html { > # } > # } > # TEST > server { > listen 80; > location /test { > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host $http_host; > #fastcgi_read_timeout 240; > proxy_connect_timeout 600; > proxy_send_timeout 600; > proxy_read_timeout 600; > send_timeout 600; > proxy_pass > https://vpc-lab-test-search-3nrzc66u2ffd3n4ywapz7jkrde.eu-west-1.es.amazonaws.com/ > ; > } > error_page 404 /404.html; > location = /40x.html { > } > error_page 500 502 503 504 /50x.html; > location = /50x.html { > } > } > } #nginx -c /etc/nginx/nginx.conf -T | grep -e 'server\|listen' 
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful # server { # listen 80; server { listen 80; Please let me know if you need more details. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Sat Oct 31 08:09:02 2020 From: peter_booth at me.com (Peter Booth) Date: Sat, 31 Oct 2020 12:09:02 +0400 Subject: Nginx proxy_bind failing In-Reply-To: <039583df79acccda29bd8123f70f58e1.NginxMailingListEnglish@forum.nginx.org> References: <039583df79acccda29bd8123f70f58e1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5B017222-2632-4EEC-A1A6-3D59501A7238@me.com> Gary, This was interesting to read. There was one thing that wasn't obvious to me, however. What was the high-level problem that you were solving with this specific configuration? Curiously, Peter Sent from my iPhone > On Oct 30, 2020, at 3:16 PM, garycnew at yahoo.com wrote: > > All: > > After reviewing the iptables chains workflow, I discovered that the Nginx > Worker (100.64.8.236:12345) outside interface was associated with the OUTPUT > chain. > > > (192.168.0.2:12345) OUTPUT ==> > (192.168.0.154:$port) PREROUTING ==> > (100.64.8.236:12345) POSTROUTING ==> > Windows Client (192.168.0.154:$port) ==> Nginx Master (192.168.0.2:443) | > Nginx Worker (100.64.8.236:12345) ==> Upstream Destination Server > (104.27.161.206:443) > <== POSTROUTING (192.168.0.2:443) > <== PREROUTING (104.27.161.206:443) > > After adding the appropriate iptables OUTPUT rule, using the correct > interface (vlan2), the packets leaving the Nginx Worker (100.64.8.236:12345) > were then appropriately MARKed and routed to the OpenVPN Tunnel. > > # iptables -t mangle -I OUTPUT -o vlan2 -p tcp -m multiport --sport 12345 -j > MARK --set-mark 0x2000/0x2000 > Now, I just need to figure out the Nginx SSL Client CA Trust configuration > and we should be in business.
> > Hope this helps someone in the future. > > Respectfully, > > Gary > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289823,289847#msg-289847 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Oct 31 12:16:41 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Sat, 31 Oct 2020 08:16:41 -0400 Subject: Nginx proxy_bind failing In-Reply-To: <5B017222-2632-4EEC-A1A6-3D59501A7238@me.com> References: <5B017222-2632-4EEC-A1A6-3D59501A7238@me.com> Message-ID: <83618e43a6bdc328f39ba365ae451d5a.NginxMailingListEnglish@forum.nginx.org> Hi Peter! The high-level problem was to install Nginx on an Asuswrt-Merlin router to reverse proxy certain websites through an established OpenVPN Split-Tunnel. To do that, I had to ensure the Nginx Workers were using a specified Source IP and/or Ephemeral Port which could be MARKed by iptables for routing through the established OpenVPN Split-Tunnel. I was able to get it working, but ended up modifying the iptables OUTPUT rule to match only on the Source IP, as Nginx was choking with a single Ephemeral Port defined. Now, all I have to do is update my dnsmasq rule when I want to add a new site to reverse proxy through the OpenVPN Split-Tunnel. It's the BOMB! Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289823,289857#msg-289857 From francis at daoine.org Sat Oct 31 22:56:37 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 31 Oct 2020 22:56:37 +0000 Subject: Query on nginx. conf file regarding redirection. In-Reply-To: References: <20201029230511.GB29865@daoine.org> <20201030234338.GD29865@daoine.org> Message-ID: <20201031225637.GE29865@daoine.org> On Sat, Oct 31, 2020 at 06:48:00AM +0530, Kaushal Shriyan wrote: > On Sat, Oct 31, 2020 at 5:13 AM Francis Daly wrote: Hi there, thanks for the extra information.
> > Something like > > > > /usr/local/sbin/nginx -c /etc/nginx.conf -T | grep -e 'server\|listen' > > > > where the first three words match whatever your system is doing, will > > probably be helpful. > I am sharing the output of cat /etc/nginx/nginx.conf > > include /etc/nginx/conf.d/*.conf; That line *might* add lots more config, but... > #nginx -c /etc/nginx/nginx.conf -T | grep -e 'server\|listen' > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is successful > # server { > # listen 80; > server { > listen 80; ..that output shows that there is no other config used here. What error message, if any, do you get when you do nginx -c /etc/nginx/nginx.conf ? (I suspect it will complain that it can't bind to port 80, because another nginx is listening there.) As a test, if you kill any running nginx process, and do nginx -c /etc/nginx/nginx.conf what happens if you repeat your test request? Cheers, f -- Francis Daly francis at daoine.org