From mdounin at mdounin.ru Mon Jun 1 15:42:32 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Jun 2020 18:42:32 +0300 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: References: Message-ID: <20200601154232.GU12747@mdounin.ru> Hello! On Fri, May 29, 2020 at 07:09:45PM -0700, PGNet Dev wrote: > ?I'm running > > nginx -V > nginx version: nginx/1.19.0 (pgnd Build) > built with OpenSSL 1.1.1g 21 Apr 2020 > TLS SNI support enabled > ... > > It serves as front-end SSL termination, site host, and reverse-proxy to backend apps. > > I'm trying to get a backend app to proxy_ssl_verify the proxy connection to it. > > I have two self-signed certs: > > One for "TLS Web Client Authentication, E-mail Protection" > > openssl x509 -in test.example.com.client.crt -text | egrep "Subject.*CN|DNS|TLS" > Subject: C = US, ST = NY, L = New_York, O = example2.com, OU = myCA, CN = test.example.com, emailAddress = ssl at example2.com > TLS Web Client Authentication, E-mail Protection > DNS:test.example.com, DNS:www.test.example.com, DNS:localhost > > and the other, for "TLS Web Server Authentication" > > openssl x509 -in test.example.com.server.crt -text | egrep "Subject.*CN|DNS|TLS" > Subject: C = US, ST = NY, L = New_York, O = example2.com, OU = myCA, CN = test.example.com, emailAddress = ssl at example2.com > TLS Web Server Authentication > DNS:test.example.com, DNS:www.test.example.com, DNS:localhost > > The certs 'match' CN & SAN, differing in "X509v3 Extended Key Usage". 
> > Both are verified "OK" with my local CA cert > > openssl verify -CAfile myCA.crt.pem test.example.com.server.crt > test.example.com.server.crt: OK > > openssl verify -CAfile /myCA.crt.pem test.example.com.client.crt > test.example.com.client.crt: OK > > My main nginx config includes, > > upstream test.example.com { > server test.example.com:11111; > } > server { > > listen 10.10.10.1:443 ssl http2; > server_name example.com; > ... > > ssl_verify_client on; > ssl_client_certificate "/etc/ssl/nginx/myCA.crt"; > ssl_verify_depth 2; > ssl_certificate "/etc/ssl/nginx/example.com.server.crt"; > ssl_certificate_key "/etc/ssl/nginx/example.com.server.key"; > ssl_trusted_certificate "/etc/ssl/nginx/myCA.crt"; > > location /app1 { > proxy_pass https://test.example.com; > proxy_ssl_certificate "/etc/ssl/nginx/test.example.com.client.crt"; > proxy_ssl_certificate_key "/etc/ssl/nginx/test.example.com.client.key"; > proxy_ssl_trusted_certificate "/etc/ssl/nginx/myCA.crt"; > proxy_ssl_verify on; > proxy_ssl_verify_depth 2; > include includes/reverse-proxy.inc; > } > } > > and the upstream config, > > server { > listen 127.0.0.1:11111 ssl http2; > server_name test.example.com; > > root /data/webapps/demo_app/; > index index.php; > expires -1; > > ssl_certificate "/etc/ssl/nginx/test.example.com.server.crt"; > ssl_certificate_key "/etc/ssl/nginx/test.example.com.server.key"; > > ssl_client_certificate "/etc/ssl/nginx/myCA.crt"; > ssl_verify_client optional; > ssl_verify_depth 2; > > location ~ \.php { > try_files $uri =404; > fastcgi_pass phpfpm; > fastcgi_index index.php; > fastcgi_param PATH_INFO $fastcgi_script_name; > include fastcgi_params; > } > > } > > access to > > https://example.com/app1 > > responds, > > 502 Bad Gateway > > logs, show an SSL handshake fail > > ... 
> 2020/05/29 19:00:06 [debug] 29419#29419: *7 SSL: TLSv1.3, cipher: "TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD" > 2020/05/29 19:00:06 [debug] 29419#29419: *7 http upstream ssl handshake: "/app1/?" > 2020/05/29 19:00:06 [debug] 29419#29419: *7 X509_check_host(): no match > 2020/05/29 19:00:06 [error] 29419#29419: *7 upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream, client: 10.10.10.73, server: example.com, request: "GET /app1/ HTTP/2.0", upstream: "https://127.0.0.1:11111/app1/", host: "example.com" > 2020/05/29 19:00:06 [debug] 29419#29419: *7 http next upstream, 2 > ... > > If I toggle > > - ssl_verify_client on; > + ssl_verify_client off; > > then I'm able to connect to the backend site, as expected. > > What exactly is NOT matching in the handshake? CN & SAN do ... > > &/or, is there a config problem above? Most likely the problem is that the certificate returned depends on the name provided via Server Name Indication (SNI). That is, that the server block in the upstream server configuration is not the default one, and the default one returns a different certificate. Usage of Server Name Indication for upstream SSL connections isn't enabled by default, and this isn't switched on in your configuration. Try proxy_ssl_server_name on; to see if it helps. See http://nginx.org/r/proxy_ssl_server_name for details. You may also try the following patch to provide somewhat better debug logging when checking upstream server SSL certificates: # HG changeset patch # User Maxim Dounin # Date 1591025575 -10800 # Mon Jun 01 18:32:55 2020 +0300 # Node ID eaa39944438dbb10507760890bddc45c19a5ad6f # Parent 8cadaf7e7231865f2f81c03cb785c045dda6bf8b SSL: added verify callback to ngx_ssl_trusted_certificate(). This ensures that certificate verification is properly logged to debug log during upstream server certificate verification. This should help with debugging various certificate issues. 
diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -920,6 +920,8 @@ ngx_int_t ngx_ssl_trusted_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert, ngx_int_t depth) { + SSL_CTX_set_verify(ssl->ctx, SSL_VERIFY_PEER, ngx_ssl_verify_callback); + SSL_CTX_set_verify_depth(ssl->ctx, depth); if (cert->len == 0) { -- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Tue Jun 2 04:43:20 2020 From: pgnet.dev at gmail.com (PGNet Dev) Date: Mon, 1 Jun 2020 21:43:20 -0700 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: <20200601154232.GU12747@mdounin.ru> References: <20200601154232.GU12747@mdounin.ru> Message-ID: On 6/1/20 8:42 AM, Maxim Dounin wrote: > > proxy_ssl_server_name on; > > to see if it helps. See http://nginx.org/r/proxy_ssl_server_name > for details. enabling it _has_ an effect. now, access to https://example.com/app1 responds, - 502 Bad Gateway + 421 Misdirected Request > > You may also try the following patch to provide somewhat better > debug logging when checking upstream server SSL certificates: I'll get this in place & see what i learn ... From pgnet.dev at gmail.com Tue Jun 2 04:58:26 2020 From: pgnet.dev at gmail.com (PGNet Dev) Date: Mon, 1 Jun 2020 21:58:26 -0700 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? 
In-Reply-To: References: <20200601154232.GU12747@mdounin.ru> Message-ID: <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> with patch applied, and 'proxy_ssl_server_name on;' this is where the problem appears 2020/06/02 00:50:08 [debug] 20166#20166: *3 verify:1, error:0, depth:2, subject:"/O=example.com/OU=example.com_CA/L=New_York/ST=NY/C=US/emailAddress=admin at example.com/CN=example.com_CA", issuer:"/O=example.com/OU=example.com_CA/L=New_York/ST=NY/C=US/emailAddress=admin at example.com/CN=example.com_CA" 2020/06/02 00:50:08 [debug] 20166#20166: *3 verify:1, error:0, depth:1, subject:"/C=US/ST=NY/O=example.com/OU=example.com_CA/CN=example.com_CA_INTERMEDIATE/emailAddress=admin at example.com", issuer:"/O=example.com/OU=example.com_CA/L=New_York/ST=NY/C=US/emailAddress=admin at example.com/CN=example.com_CA" 2020/06/02 00:50:08 [debug] 20166#20166: *3 verify:1, error:0, depth:0, subject:"/C=US/ST=NY/L=New_York/O=example.com/OU=example.com_CA/CN=test.example.net/emailAddress=admin at example.com", issuer:"/C=US/ST=NY/O=example.com/OU=example.com_CA/CN=example.com_CA_INTERMEDIATE/emailAddress=admin at example.com" 2020/06/02 00:50:08 [debug] 20166#20166: *3 ssl new session: 0E2A0672:32:1105 2020/06/02 00:50:08 [debug] 20166#20166: *3 ssl new session: 31C878D7:32:1104 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL_do_handshake: 1 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL: TLSv1.3, cipher: "TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD" 2020/06/02 00:50:08 [debug] 20166#20166: *3 reusable connection: 1 2020/06/02 00:50:08 [debug] 20166#20166: *3 http wait request handler 2020/06/02 00:50:08 [debug] 20166#20166: *3 malloc: 0000555967A0B2E0:1024 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL_read: 772 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL_read: -1 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL_get_error: 2 2020/06/02 00:50:08 [debug] 20166#20166: *3 reusable connection: 0 2020/06/02 00:50:08 [debug] 
20166#20166: *3 posix_memalign: 00005559678F6460:4096 @16 2020/06/02 00:50:08 [debug] 20166#20166: *3 posix_memalign: 00005559675113A0:4096 @16 2020/06/02 00:50:08 [debug] 20166#20166: *3 http process request line 2020/06/02 00:50:08 [debug] 20166#20166: *3 http request line: "GET /app1 HTTP/1.1" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http uri: "/app1" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http args: "" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http exten: "" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http process request header line 2020/06/02 00:50:08 [info] 20166#20166: *3 client attempted to request the server name different from the one that was negotiated while reading client request headers, client: 127.0.0.1, server: test.example.net, request: "GET /app1 HTTP/1.1", host: "example.net" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http finalize request: 421, "/app1?" a:1, c:1 2020/06/02 00:50:08 [debug] 20166#20166: *3 event timer del: 50: 3334703 2020/06/02 00:50:08 [debug] 20166#20166: *3 http special response: 421, "/app1?" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http set discard body 2020/06/02 00:50:08 [debug] 20166#20166: *3 headers more header filter, uri "/app1" 2020/06/02 00:50:08 [debug] 20166#20166: *3 lua capture header filter, uri "/app1" 2020/06/02 00:50:08 [debug] 20166#20166: *3 xslt filter header 2020/06/02 00:50:08 [debug] 20166#20166: *3 charset: "" > "utf-8" 2020/06/02 00:50:08 [debug] 20166#20166: *3 HTTP/1.1 421 Misdirected Request noting 2020/06/02 00:50:08 [info] 20166#20166: *3 client attempted to request the server name different from the one that was negotiated while reading client request headers, client: 127.0.0.1, server: test.example.net, request: "GET /app1 HTTP/1.1", host: "example.net" now, need to stare at this and try to figure out 'why?' 
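[Editorial note: the X509_check_host() "no match" seen in the earlier debug log is OpenSSL comparing the name nginx expects against the certificate's subjectAltName DNS entries. A simplified sketch of that matching logic follows — a toy illustration only, not OpenSSL's actual implementation, which additionally handles CN fallback, IDNA, and stricter wildcard placement per RFC 6125:]

```python
# Simplified hostname-vs-SAN matching in the spirit of X509_check_host().
# Toy sketch: exact match, or a wildcard as the entire leftmost label
# that matches exactly one label ("*.example.com" does NOT match
# "example.com" or "a.b.example.com").

def name_matches(hostname: str, san_dns_names: list[str]) -> bool:
    host = hostname.lower().rstrip(".")
    for pattern in san_dns_names:
        pat = pattern.lower().rstrip(".")
        if pat == host:
            return True  # exact SAN match
        if pat.startswith("*.") and "." in host:
            # compare everything after the first label to the wildcard base
            if host.split(".", 1)[1] == pat[2:]:
                return True
    return False
```

With the SANs from the certificates quoted earlier, `name_matches("test.example.com", ["test.example.com", "www.test.example.com", "localhost"])` is true — which is why the "no match" points at a different certificate being served, not at these SAN entries.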
From pluknet at nginx.com Tue Jun 2 09:51:55 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 2 Jun 2020 12:51:55 +0300 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> References: <20200601154232.GU12747@mdounin.ru> <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> Message-ID: <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> > On 2 Jun 2020, at 07:58, PGNet Dev wrote: > > 2020/06/02 00:50:08 [info] 20166#20166: *3 client attempted to request the server name different from the one that was negotiated while reading client request headers, client: 127.0.0.1, server: test.example.net, request: "GET /app1 HTTP/1.1", host: "example.net" > > now, need to stare at this and try to figure out 'why?' That means client provided TLS "server_name" extension (SNI), then requested a different origin in the Host header. In your case, the mangled name "test.example.net" (via SNI) didn't match another mangled name "example.net" (in Host). For the formal specification, see the last paragraph in RFC 6066, section-3: If an application negotiates a server name using an application protocol and then upgrades to TLS, and if a server_name extension is sent, then the extension SHOULD contain the same name that was negotiated in the application protocol. If the server_name is established in the TLS session handshake, the client SHOULD NOT attempt to request a different server name at the application layer. 421 is defined for such cases in HTTP. 
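[Editorial note: the RFC 6066 rule quoted above can be sketched as a toy consistency check — an illustration of the idea only, not nginx's actual virtual-server logic, which compares the requested server block against the one negotiated via SNI:]

```python
# Toy version of the RFC 6066 consistency rule: the Host header should
# name the same server that was negotiated via the TLS SNI extension.
# Comparison is case-insensitive; a port suffix in Host is ignored.
# (Simplified: does not handle bracketed IPv6 literals.)

def is_misdirected(sni_name: str, host_header: str) -> bool:
    host = host_header.rsplit(":", 1)[0] if ":" in host_header else host_header
    return sni_name.lower() != host.lower()
```

Applied to the log above, `is_misdirected("test.example.net", "example.net")` is true — exactly the mismatch that nginx reports as a 421.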
-- Sergey Kandaurov From francis at daoine.org Tue Jun 2 15:27:28 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 2 Jun 2020 16:27:28 +0100 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? 
In-Reply-To: <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> References: <20200601154232.GU12747@mdounin.ru> <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> Message-ID: <20200602152728.GU20939@daoine.org> On Tue, Jun 02, 2020 at 12:51:55PM +0300, Sergey Kandaurov wrote: Hi there, > That means client provided TLS "server_name" extension (SNI), > then requested a different origin in the Host header. That suggests that if you choose to use "proxy_ssl_server_name on;", then you almost certainly do not want to add your own "proxy_set_header Host" value. The nginx code probably should not try to check for (and reject) that combination of directives-and-values; but might it be worth adding a note to http://nginx.org/r/proxy_ssl_server_name to say that that other directive is probably a bad idea, especially if you get a http 421 response from your upstream? Cheers, f -- Francis Daly francis at daoine.org From pgnet.dev at gmail.com Tue Jun 2 19:10:45 2020 From: pgnet.dev at gmail.com (PGNet Dev) Date: Tue, 2 Jun 2020 12:10:45 -0700 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: <20200602152728.GU20939@daoine.org> References: <20200601154232.GU12747@mdounin.ru> <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> <20200602152728.GU20939@daoine.org> Message-ID: <1eb5e1ab-e14c-02dd-1d66-75e8354f40cc@gmail.com> On 6/2/20 8:27 AM, Francis Daly wrote: > That suggests that if you choose to use "proxy_ssl_server_name on;", > then you almost certainly do not want to add your own "proxy_set_header > Host" value. 
> > The nginx code probably should not try to check for (and reject) that > combination of directives-and-values; but might it be worth adding a > note to http://nginx.org/r/proxy_ssl_server_name to say that that other > directive is probably a bad idea, especially if you get a http 421 response > from your upstream? trying to simplify/repeat, i've vhost config, upstream test-upstream { server test.example.com:11111; } server { listen 10.10.10.1:443 ssl http2; server_name example.com; ... location /app1 { proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_certificate "/etc/ssl/nginx/test.client.crt"; proxy_ssl_certificate_key "/etc/ssl/nginx/test.client.key"; proxy_ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; proxy_pass https://test-upstream/; proxy_ssl_server_name on; proxy_ssl_name test.example.com; } } and, upstream config server { listen 127.0.0.1:11111 ssl http2; server_name test.example.com; root /srv/www/test; index index.php; expires -1; ssl_certificate "/etc/ssl/nginx/test.server.crt"; ssl_certificate_key "/etc/ssl/nginx/test.server.key"; ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; ssl_verify_client off; ssl_verify_depth 2; ssl_client_certificate "/etc/ssl/nginx/ca_int.crt"; location ~ \.php { try_files $uri =404; fastcgi_pass phpfpm; fastcgi_index index.php; fastcgi_param PATH_INFO $fastcgi_script_name; include includes/fastcgi/fastcgi_params; } error_log /var/log/nginx/test.error.log info; } on access to https://example.com/app1 still get 421 Misdirected Request in log ==> /var/log/nginx/test.error.log <== 2020/06/02 11:52:13 [info] 8713#8713: *18 client attempted to request the server name different from the one that was negotiated while reading client request headers, client: 127.0.0.1, server: test.example.com, request: "GET / HTTP/1.0", host: "test-upstream" Is that host: "test-upstream" to be expected? it's an upstream name, not an actual host. Still unable to wrap my head around where this mis-match is coming from ... 
I have a nagging suspicion I'm missing something *really* obvious :-/ From mdounin at mdounin.ru Tue Jun 2 19:22:06 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jun 2020 22:22:06 +0300 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: <20200602152728.GU20939@daoine.org> References: <20200601154232.GU12747@mdounin.ru> <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> <20200602152728.GU20939@daoine.org> Message-ID: <20200602192206.GX12747@mdounin.ru> Hello! On Tue, Jun 02, 2020 at 04:27:28PM +0100, Francis Daly wrote: > On Tue, Jun 02, 2020 at 12:51:55PM +0300, Sergey Kandaurov wrote: > > Hi there, > > > That means client provided TLS "server_name" extension (SNI), > > then requested a different origin in the Host header. > > That suggests that if you choose to use "proxy_ssl_server_name on;", > then you almost certainly do not want to add your own "proxy_set_header > Host" value. > > The nginx code probably should not try to check for (and reject) that > combination of directives-and-values; but might it be worth adding a > note to http://nginx.org/r/proxy_ssl_server_name to say that that other > directive is probably a bad idea, especially if you get a http 421 response > from your upstream? Not exactly. The 421 Misdirected Request error is only returned when one tries to access a virtual server with SSL client certificate verification enabled, and used a different server name during the SSL handshake. Normally one can use a Host header which is different from the SNI server name, and this often happens in real life (e.g., connection reuse in HTTP/2 implies requests to multiple hostnames via one connection). That's more about being careful when configuring things, especially when configuring SSL. 
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jun 2 19:34:23 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Jun 2020 22:34:23 +0300 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: <1eb5e1ab-e14c-02dd-1d66-75e8354f40cc@gmail.com> References: <20200601154232.GU12747@mdounin.ru> <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> <20200602152728.GU20939@daoine.org> <1eb5e1ab-e14c-02dd-1d66-75e8354f40cc@gmail.com> Message-ID: <20200602193423.GY12747@mdounin.ru> Hello! On Tue, Jun 02, 2020 at 12:10:45PM -0700, PGNet Dev wrote: > On 6/2/20 8:27 AM, Francis Daly wrote: > > That suggests that if you choose to use "proxy_ssl_server_name on;", > > then you almost certainly do not want to add your own "proxy_set_header > > Host" value. > > > > The nginx code probably should not try to check for (and reject) that > > combination of directives-and-values; but might it be worth adding a > > note to http://nginx.org/r/proxy_ssl_server_name to say that that other > > directive is probably a bad idea, especially if you get a http 421 response > > from your upstream? > > trying to simplify/repeat, i've > > vhost config, > > upstream test-upstream { > server test.example.com:11111; > } > > server { > listen 10.10.10.1:443 ssl http2; > server_name example.com; > > ... 
> location /app1 { > > proxy_ssl_verify on; > proxy_ssl_verify_depth 2; > proxy_ssl_certificate "/etc/ssl/nginx/test.client.crt"; > proxy_ssl_certificate_key "/etc/ssl/nginx/test.client.key"; > proxy_ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; > > proxy_pass https://test-upstream/; > proxy_ssl_server_name on; > proxy_ssl_name test.example.com; > > } > } > > and, upstream config > > server { > listen 127.0.0.1:11111 ssl http2; > server_name test.example.com; > > root /srv/www/test; > index index.php; > expires -1; > > ssl_certificate "/etc/ssl/nginx/test.server.crt"; > ssl_certificate_key "/etc/ssl/nginx/test.server.key"; > ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; > > ssl_verify_client off; > ssl_verify_depth 2; > ssl_client_certificate "/etc/ssl/nginx/ca_int.crt"; > > location ~ \.php { > try_files $uri =404; > fastcgi_pass phpfpm; > fastcgi_index index.php; > fastcgi_param PATH_INFO $fastcgi_script_name; > include includes/fastcgi/fastcgi_params; > } > > error_log /var/log/nginx/test.error.log info; > } > > on access to > > https://example.com/app1 > > still get > > 421 Misdirected Request > > in log > > ==> /var/log/nginx/test.error.log <== > 2020/06/02 11:52:13 [info] 8713#8713: *18 client attempted to request the server name different from the one that was negotiated while reading client request headers, client: 127.0.0.1, server: test.example.com, request: "GET / HTTP/1.0", host: "test-upstream" > > Is that > > host: "test-upstream" > > to be expected? it's an upstream name, not an actual host. Yes, it is expected. Quoting http://nginx.org/r/proxy_set_header: : By default, only two fields are redefined: : : proxy_set_header Host $proxy_host; : proxy_set_header Connection close; That is, the name you've written in the proxy_pass directive is the actual hostname, and it will be used in the Host header when creating requests to upstream server. 
And it is also used in the proxy_ssl_name, so it will be used during SSL handshake for SNI and certificate verification. It's not just "an upstream name". If you want it to be only an upstream name, you'll have to redefine at least proxy_ssl_name and "proxy_set_header Host". (Well, not really, since $proxy_host is also used at least in the proxy_cache_key, but this is probably not that important.) Alternatively, you may want to use the real name, and define an upstream{} block with that name. This way you won't need to redefine anything. > Still unable to wrap my head around where this mis-match is > coming from ... I have a nagging suspicion I'm missing something > *really* obvious :-/ The mis-match comes from trying to redefine the name in some parts of the configuration but not others. Hope the above explanation helps. -- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Tue Jun 2 20:01:18 2020 From: pgnet.dev at gmail.com (PGNet Dev) Date: Tue, 2 Jun 2020 13:01:18 -0700 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: <20200602193423.GY12747@mdounin.ru> References: <20200601154232.GU12747@mdounin.ru> <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> <20200602152728.GU20939@daoine.org> <1eb5e1ab-e14c-02dd-1d66-75e8354f40cc@gmail.com> <20200602193423.GY12747@mdounin.ru> Message-ID: <4e71ba4e-8376-b8ed-d5e0-3dae596e751f@gmail.com> On 6/2/20 12:34 PM, Maxim Dounin wrote: > The mis-match comes from trying to redefine the name in some parts > of the configuration but not others. Hope the above explanation > helps. I've reread your comment That is, the name you've written in the proxy_pass directive is the actual hostname, and it will be used in the Host header when creating requests to upstream server. 
And it is also used in the proxy_ssl_name, so it will be used during SSL handshake for SNI and certificate verification. It's not just "an upstream name". If you want it to be only an upstream name, you'll have to redefine at least proxy_ssl_name and "proxy_set_header Host". (Well, not really, since $proxy_host is also used at least in the proxy_cache_key, but this is probably not that important.) a bunch of times. Still can't grasp it clearly. Which is the source of the pebkac :-/ Otoh, simply _doing_ Alternatively, you may want to use the real name, and define an upstream{} block with that name. This way you won't need to redefine anything. i.e., changing to EITHER case (1): vhost config, - upstream test-upstream { + upstream test.example.com { server test.example.com:11111; } server { listen 10.10.10.1:443 ssl http2; server_name example.com; ... location /app1 { proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_certificate "/etc/ssl/nginx/test.client.crt"; proxy_ssl_certificate_key "/etc/ssl/nginx/test.client.key"; proxy_ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; - proxy_pass https://test-upstream/; + proxy_pass https://test.example.com/; proxy_ssl_server_name on; proxy_ssl_name test.example.com; } } and, upstream config server { listen 127.0.0.1:11111 ssl http2; server_name test.example.com; root /srv/www/test; index index.php; expires -1; ssl_certificate "/etc/ssl/nginx/test.server.crt"; ssl_certificate_key "/etc/ssl/nginx/test.server.key"; ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; - ssl_verify_client off; + ssl_verify_client on; ssl_verify_depth 2; ssl_client_certificate "/etc/ssl/nginx/ca_int.crt"; location ~ \.php { try_files $uri =404; fastcgi_pass phpfpm; fastcgi_index index.php; fastcgi_param PATH_INFO $fastcgi_script_name; include includes/fastcgi/fastcgi_params; } error_log /var/log/nginx/test.error.log info; } or case (2): vhost config, - upstream test-upstream { + upstream JUNK { server test.example.com:11111; } server { 
listen 10.10.10.1:443 ssl http2; server_name example.com; ... location /app1 { proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_certificate "/etc/ssl/nginx/test.client.crt"; proxy_ssl_certificate_key "/etc/ssl/nginx/test.client.key"; proxy_ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; - proxy_pass https://test-upstream/; + proxy_pass https://test.example.com:11111/; proxy_ssl_server_name on; proxy_ssl_name test.example.com; } } and, upstream config server { listen 127.0.0.1:11111 ssl http2; server_name test.example.com; root /srv/www/test; index index.php; expires -1; ssl_certificate "/etc/ssl/nginx/test.server.crt"; ssl_certificate_key "/etc/ssl/nginx/test.server.key"; ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; - ssl_verify_client off; + ssl_verify_client on; ssl_verify_depth 2; ssl_client_certificate "/etc/ssl/nginx/ca_int.crt"; location ~ \.php { try_files $uri =404; fastcgi_pass phpfpm; fastcgi_index index.php; fastcgi_param PATH_INFO $fastcgi_script_name; include includes/fastcgi/fastcgi_params; } error_log /var/log/nginx/test.error.log info; } now, in _either_ case, access to https://example.com/app1 https://example.com/app1/ _does_ return my 'test' app correctly i _do_ see in logs in case (2), a single error instance, 2020/06/02 12:51:11 [debug] 6140#6140: *3 reusable connection: 1 2020/06/02 12:51:11 [debug] 6140#6140: *3 http wait request handler 2020/06/02 12:51:11 [debug] 6140#6140: *3 malloc: 0000563CDA76DF10:1024 2020/06/02 12:51:11 [debug] 6140#6140: *3 SSL_read: 345 2020/06/02 12:51:11 [debug] 6140#6140: *3 SSL_read: -1 ??? 
2020/06/02 12:51:11 [debug] 6140#6140: *3 SSL_get_error: 2 2020/06/02 12:51:11 [debug] 6140#6140: *3 reusable connection: 0 2020/06/02 12:51:11 [debug] 6140#6140: *3 posix_memalign: 0000563CDA2963A0:4096 @16 2020/06/02 12:51:11 [debug] 6140#6140: *3 posix_memalign: 0000563CDA650060:4096 @16 & in case (1), a double error instance 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_read_early_data: 2, 0 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_do_handshake: 1 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL: TLSv1.2, cipher: "ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD" 2020/06/02 12:53:46 [debug] 6267#6267: *6 reusable connection: 1 2020/06/02 12:53:46 [debug] 6267#6267: *6 http wait request handler 2020/06/02 12:53:46 [debug] 6267#6267: *6 malloc: 0000563C0F2ADAB0:1024 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_read: -1 ??? 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_get_error: 2 2020/06/02 12:53:46 [debug] 6267#6267: *6 free: 0000563C0F2ADAB0 2020/06/02 12:53:46 [debug] 6267#6267: *6 http wait request handler 2020/06/02 12:53:46 [debug] 6267#6267: *6 malloc: 0000563C0F2ADAB0:1024 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_read: 339 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_read: -1 ??? 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_get_error: 2 2020/06/02 12:53:46 [debug] 6267#6267: *6 reusable connection: 0 2020/06/02 12:53:46 [debug] 6267#6267: *6 posix_memalign: 0000563C0F18FA60:4096 @16 2020/06/02 12:53:46 [debug] 6267#6267: *6 posix_memalign: 0000563C0EDD4B10:4096 @16 2020/06/02 12:53:46 [debug] 6267#6267: *6 http process request line but that error doesn't seem to be fatal. any idea what's causing those^^ errors? 
From mdounin at mdounin.ru Tue Jun 2 23:12:41 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jun 2020 02:12:41 +0300 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: <4e71ba4e-8376-b8ed-d5e0-3dae596e751f@gmail.com> References: <20200601154232.GU12747@mdounin.ru> <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> <20200602152728.GU20939@daoine.org> <1eb5e1ab-e14c-02dd-1d66-75e8354f40cc@gmail.com> <20200602193423.GY12747@mdounin.ru> <4e71ba4e-8376-b8ed-d5e0-3dae596e751f@gmail.com> Message-ID: <20200602231241.GB12747@mdounin.ru> Hello! On Tue, Jun 02, 2020 at 01:01:18PM -0700, PGNet Dev wrote: > On 6/2/20 12:34 PM, Maxim Dounin wrote: > > The mis-match comes from trying to redefine the name in some parts > > of the configuration but not others. Hope the above explanation > > helps. > > I've reread your comment > > That is, the name you've written in the proxy_pass directive is > the actual hostname, and it will be used in the Host header when > creating requests to upstream server. And it is also used in the > proxy_ssl_name, so it will be used during SSL handshake for SNI > and certificate verification. > > It's not just "an upstream name". If you want it to be only an > upstream name, you'll have to redefine at least proxy_ssl_name and > "proxy_set_header Host". (Well, not really, since $proxy_host is > also used at least in the proxy_cache_key, but this is probably > not that important.) > > a bunch of times. Still can't grasp it clearly. 
Which is the source of the pebkac :-/

Read: if you want to use an internal upstream name in proxy_pass, consider using _both_ "proxy_ssl_name" and "proxy_set_header Host", for example:

    proxy_pass https://test-upstream;
    proxy_set_header Host test.example.com;
    proxy_ssl_name test.example.com;

There are a few other places where the hostname from the proxy_pass directive is used, but they probably aren't that important.

> Otoh, simply _doing_
>
> Alternatively, you may want to use the real name, and define an
> upstream{} block with that name. This way you won't need to
> redefine anything.
>
> i.e., changing to EITHER [...]
> now, in _either_ case, access to
>
> https://example.com/app1
> https://example.com/app1/
>
> _does_ return my 'test' app correctly

So everything is fine, as expected.

> i _do_ see in logs
>
> in case (2), a single error instance,
>
> 2020/06/02 12:51:11 [debug] 6140#6140: *3 reusable connection: 1
> 2020/06/02 12:51:11 [debug] 6140#6140: *3 http wait request handler
> 2020/06/02 12:51:11 [debug] 6140#6140: *3 malloc: 0000563CDA76DF10:1024
> 2020/06/02 12:51:11 [debug] 6140#6140: *3 SSL_read: 345
> 2020/06/02 12:51:11 [debug] 6140#6140: *3 SSL_read: -1
> ???
2020/06/02 12:51:11 [debug] 6140#6140: *3 SSL_get_error: 2 > 2020/06/02 12:51:11 [debug] 6140#6140: *3 reusable connection: 0 > 2020/06/02 12:51:11 [debug] 6140#6140: *3 posix_memalign: 0000563CDA2963A0:4096 @16 > 2020/06/02 12:51:11 [debug] 6140#6140: *3 posix_memalign: 0000563CDA650060:4096 @16 > > & > > in case (1), a double error instance > > 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_read_early_data: 2, 0 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_do_handshake: 1 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL: TLSv1.2, cipher: "ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD" > 2020/06/02 12:53:46 [debug] 6267#6267: *6 reusable connection: 1 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 http wait request handler > 2020/06/02 12:53:46 [debug] 6267#6267: *6 malloc: 0000563C0F2ADAB0:1024 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_read: -1 > ??? 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_get_error: 2 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 free: 0000563C0F2ADAB0 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 http wait request handler > 2020/06/02 12:53:46 [debug] 6267#6267: *6 malloc: 0000563C0F2ADAB0:1024 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_read: 339 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_read: -1 > ??? 2020/06/02 12:53:46 [debug] 6267#6267: *6 SSL_get_error: 2 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 reusable connection: 0 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 posix_memalign: 0000563C0F18FA60:4096 @16 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 posix_memalign: 0000563C0EDD4B10:4096 @16 > 2020/06/02 12:53:46 [debug] 6267#6267: *6 http process request line > > > but that error doesn't seem to be fatal. > > any idea what's causing those^^ errors? These aren't errors, these are debug messages. The SSL_get_error() return code 2 means SSL_ERROR_WANT_READ, that is, SSL_read() consumed all the data from the socket and needs more data to read further. 
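As a rough illustration outside nginx (an assumed example, not nginx source): Python's ssl module surfaces the same OpenSSL condition as SSLWantReadError when a TLS engine driven over in-memory BIOs needs more data from the peer:

```python
import ssl

# A client-side TLS engine with no socket attached.  The first
# do_handshake() writes a ClientHello into the outgoing BIO and then
# raises SSLWantReadError -- OpenSSL's SSL_ERROR_WANT_READ -- because
# the server's reply hasn't arrived yet.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

incoming = ssl.MemoryBIO()   # bytes received from the peer would go here
outgoing = ssl.MemoryBIO()   # bytes to send to the peer accumulate here
tls = ctx.wrap_bio(incoming, outgoing)

try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    # Not an error: the engine just needs more input, exactly like the
    # "SSL_read: -1" / "SSL_get_error: 2" pair in the nginx debug log.
    print("want read; ClientHello queued:", outgoing.pending > 0)
```

The event loop (nginx's, or your own) is expected to wait for the socket to become readable and retry; nothing has gone wrong.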
These messages are perfectly normal and expected. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Thu Jun 4 15:19:15 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 4 Jun 2020 16:19:15 +0100 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? In-Reply-To: <20200602192206.GX12747@mdounin.ru> References: <20200601154232.GU12747@mdounin.ru> <47a6af58-ae89-fdd2-c531-1249e726fc7b@gmail.com> <20838249-4958-4B5E-93C4-F9FA07E9A92E@nginx.com> <20200602152728.GU20939@daoine.org> <20200602192206.GX12747@mdounin.ru> Message-ID: <20200604151915.GV20939@daoine.org> On Tue, Jun 02, 2020 at 10:22:06PM +0300, Maxim Dounin wrote: > On Tue, Jun 02, 2020 at 04:27:28PM +0100, Francis Daly wrote: Hi there, Thanks for the extra information. > > That suggests that if you choose to use "proxy_ssl_server_name on;", > > then you almost certainly do not want to add your own "proxy_set_header > > Host" value. > Not exactly. > > The 421 Misdirected Request error is only returned > when one tries to access a virtual server with SSL client > certificate verification enabled, and used a different server name > during the SSL handshake. Is the "client certificate verification" part important there, in the general case? The upstream server could be anything, so could be more picky about matching SNI name and Host header, I guess. So based on the other mails in the thread, I'll update my own notes to say: if you use "proxy_ssl_server_name on;", then probably make sure that "proxy_set_header Host" and "proxy_ssl_name" have the same value; by default they do, but if you change only one there may be problems. > Normally one can use Host header which > is different from the SNI server name, and this is often happens > in real life (e.g., connection reuse in HTTP/2 implies requests to > multiple hostnames via one connection). 
True; web searches do reveal some people reporting the 421 error in normal use cases, but they seem mainly down to badly configured servers.

> That's more about being careful when configuring things,
> especially when configuring SSL.

Agreed. Since the cases where special consideration is needed are not trivial to enumerate, adding a note for only one of them is not the most useful thing to do.

Cheers,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Sat Jun 6 22:51:24 2020
From: nginx-forum at forum.nginx.org (andersonsserra)
Date: Sat, 06 Jun 2020 18:51:24 -0400
Subject: Load Balancing TCP directive mail {}
Message-ID: 

Hi folks,

I'm trying to do round-robin load balancing for outgoing connections from my MTA servers. At first I tried it as follows:

[root at proxy-lb02 email]# pwd
/etc/nginx/email
[root at proxy-lb02 email]# cat balanceador.conf
stream {
    upstream stream_backend_mail {
        least_conn;
        server mta-01.srvmail.com.br:26 max_fails=2 fail_timeout=15s;
        server mta-02.srvmail.com.br:26 max_fails=2 fail_timeout=15s;
        server mta-03.srvmail.com.br:26 max_fails=2 fail_timeout=15s;
        server mta-04.srvmail.com.br:26 max_fails=2 fail_timeout=15s;
        server mta-05.srvmail.com.br:26 max_fails=2 fail_timeout=15s;
    }

    server {
        listen 0.0.0.0:25;
        proxy_pass stream_backend_mail;
    }
}

[root at proxy-lb02 nginx]# pwd
/etc/nginx
[root at proxy-lb02 nginx]# cat nginx.conf

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

mail {
    include /etc/nginx/email/*.conf;
}

[root at proxy-lb02 nginx]# systemctl restart
nginx
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.

[root at proxy-lb02 nginx]# systemctl status nginx -l
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sáb 2020-06-06 18:18:19 EDT; 58s ago
     Docs: http://nginx.org/en/docs/
  Process: 51695 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
  Process: 52777 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=1/FAILURE)
 Main PID: 51686 (code=exited, status=0/SUCCESS)

Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: Starting nginx - high performance web server...
Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br nginx[52777]: nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/email/balanceador.conf:1
Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: nginx.service: control process exited, code=exited status=1
Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: Failed to start nginx - high performance web server.
Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: Unit nginx.service entered failed state.
Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: nginx.service failed.

[root at proxy-lb02 nginx]# nginx -v
nginx version: nginx/1.18.0

Could someone help me find the solution to this error?

Thanks.
Anderson Serra Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288282,288282#msg-288282 From teward at thomas-ward.net Sat Jun 6 23:50:58 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Sat, 6 Jun 2020 19:50:58 -0400 Subject: Load Balancing TCP directive mail {} In-Reply-To: References: Message-ID: <4f14623c-79b2-47fe-1210-c1902d386195@thomas-ward.net> That's a pretty self-explanatory error actually: Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br nginx[52777]: nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/email/balanceador.conf:1 Your mail configuration file is imported inside a mail block. That won't work. Stream operates on the same level as an http or mail block - that is, it's not *part* of the mail{} block but instead its own stream block. You would need to import the stream function directly at /etc/nginx/nginx.conf root level and NOT as part of the mail{} block. Details on *that* are in the nginx documentation: http://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream Basically, you're trying to include the stream configuration at the wrong level - the stream{} block you are configuring needs to be at the nginx.conf base level and NOT as part of the mail{} block as your nginx.conf is trying to do. Thomas On 6/6/20 6:51 PM, andersonsserra wrote: > Hi folks, > > I'm trying to do a round-robin load balancing for outgoing connections from > my MTA servers. 
At first I tried it as follows: > > [root at proxy-lb02 email]# pwd > /etc/nginx/email > [root at proxy-lb02 email]# cat balanceador.conf > stream { > upstream stream_backend_mail { > least_conn; > server mta-01.srvmail.com.br:26 max_fails=2 > fail_timeout=15s; > server mta-02.srvmail.com.br:26 max_fails=2 > fail_timeout=15s; > server mta-03.srvmail.com.br:26 max_fails=2 > fail_timeout=15s; > server mta-04.srvmail.com.br:26 max_fails=2 > fail_timeout=15s; > server mta-05.srvmail.com.br:26 max_fails=2 > fail_timeout=15s; > } > > > server { > listen 0.0.0.0:25; > proxy_pass stream_backend_mail; > } > } > > [root at proxy-lb02 nginx]# pwd > /etc/nginx > [root at proxy-lb02 nginx]# cat nginx.conf > > user nginx; > worker_processes 1; > > error_log /var/log/nginx/error.log warn; > pid /var/run/nginx.pid; > > > events { > worker_connections 1024; > } > > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > log_format main '$remote_addr - $remote_user [$time_local] "$request" > ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > access_log /var/log/nginx/access.log main; > > sendfile on; > #tcp_nopush on; > > keepalive_timeout 65; > > #gzip on; > > include /etc/nginx/conf.d/*.conf; > > > > > } > > mail { > > include /etc/nginx/email/*.conf; > > } > > > [root at proxy-lb02 nginx]# systemctl restart nginx > Job for nginx.service failed because the control process exited with error > code. See "systemctl status nginx.service" and "journalctl -xe" for > details. > > [root at proxy-lb02 nginx]# systemctl status nginx -l > ? 
nginx.service - nginx - high performance web server > Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor > preset: disabled) > Active: failed (Result: exit-code) since S?b 2020-06-06 18:18:19 EDT; 58s > ago > Docs: http://nginx.org/en/docs/ > Process: 51695 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, > status=0/SUCCESS) > Process: 52777 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf > (code=exited, status=1/FAILURE) > Main PID: 51686 (code=exited, status=0/SUCCESS) > > Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: Starting nginx - > high performance web server... > Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br nginx[52777]: nginx: [emerg] > "stream" directive is not allowed here in > /etc/nginx/email/balanceador.conf:1 > Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: nginx.service: > control process exited, code=exited status=1 > Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: Failed to start > nginx - high performance web server. > Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: Unit nginx.service > entered failed state. > Jun 06 18:18:19 proxy-lb02.srvmail.ma.gov.br systemd[1]: nginx.service > failed. > > > [root at proxy-lb02 nginx]# nginx -v > nginx version: nginx/1.18.0 > > Could someone help me find the solution to this error? > > Thanks. > > Anderson Serra > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288282,288282#msg-288282 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Jun 7 02:23:34 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 7 Jun 2020 05:23:34 +0300 Subject: TLSv1.3 by default? 
In-Reply-To: <19af5d70a7e1196d09a9c07e152dcf8c.NginxMailingListEnglish@forum.nginx.org> References: <20181123165100.GF99070@mdounin.ru> <19af5d70a7e1196d09a9c07e152dcf8c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200607022334.GJ12747@mdounin.ru> Hello! On Sun, May 17, 2020 at 12:13:20PM -0400, Olaf van der Spek wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > On Fri, Nov 23, 2018 at 08:43:03AM -0500, Olaf van der Spek wrote: > > > > > > Why isn't 1.3 enabled by default (when available)? > > > > > > Syntax: ssl_protocols [SSLv2] [SSLv3] [TLSv1] [TLSv1.1] [TLSv1.2] > > > [TLSv1.3]; > > > Default: > > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > > > > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols > > > > The main reason is that when it was implemented, TLSv1.3 RFC > > wasn't yet finalized, and TLSv1.3 was only available via various > > drafts, and only with pre-release versions of OpenSSL. > > > > Now with RFC 8446 published and OpenSSL 1.1.1 with TLSv1.3 > > released this probably can be reconsidered. On the other hand, > > Has this been reconsidered yet? Not yet. Blockers listed in the original message, notably "ssl_ciphers aNULL;" being non-functional with TLSv1.3 (https://trac.nginx.org/nginx/ticket/195), still apply. 
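Until the default changes, TLSv1.3 has to be opted into explicitly via the ssl_protocols directive. A minimal sketch, with the server name and certificate paths as placeholders:

```nginx
server {
    listen 443 ssl;
    server_name example.com;          # placeholder

    # TLSv1.3 is not in the default ssl_protocols set on this nginx
    # version, so list it explicitly alongside the protocols you still
    # want to accept (OpenSSL 1.1.1+ required for TLSv1.3).
    ssl_protocols TLSv1.2 TLSv1.3;

    ssl_certificate     /etc/ssl/nginx/example.com.crt;   # placeholder
    ssl_certificate_key /etc/ssl/nginx/example.com.key;   # placeholder
}
```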
--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Sun Jun 7 10:31:18 2020
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Sun, 07 Jun 2020 06:31:18 -0400
Subject: Load Balancing TCP directive mail {}
In-Reply-To: 
References: 
Message-ID: 

andersonsserra Wrote:
-------------------------------------------------------
> nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/email/balanceador.conf:1

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288282,288291#msg-288291

From nginx-forum at forum.nginx.org Mon Jun 8 13:54:05 2020
From: nginx-forum at forum.nginx.org (andersonsserra)
Date: Mon, 08 Jun 2020 09:54:05 -0400
Subject: Load Balancing TCP directive mail {}
In-Reply-To: <4f14623c-79b2-47fe-1210-c1902d386195@thomas-ward.net>
References: <4f14623c-79b2-47fe-1210-c1902d386195@thomas-ward.net>
Message-ID: <71b38699957566ec0d4272e11eb5fb89.NginxMailingListEnglish@forum.nginx.org>

Thomas,

Thanks for the answer! I made the corrections and the nginx service worked. Thank you.

Although it worked, I saw that the backend servers are receiving the requests from the proxy's IP. Is there any way to preserve the original source IP and port on the proxied connection?

Take a look at this example: from a test server with IP 10.22.51.16, I make a request on port 25 to the proxy server at 10.22.8.153, as follows:

anderson at support-seati:~$ telnet 10.22.8.153 25
Trying 10.22.8.153 ...
Connected to 10.22.8.153.
Escape character is '^]'.
220 mta-01.example.com
502 5.5.2 Error: command not recognized

Look how it appears on the destination server:

root at mta-03:~# tail -f /var/log/mail.log | egrep -e "(10.22.51.16|10.22.8.153)"
Jun 8 10:49:21 mta-03 postfix/smtpd[12607]: connect from unknown [10.22.8.153]

How can I preserve the source port and IP address?

Regards.
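For reference: a stream proxy originates the upstream connection from its own address, so the client IP is lost unless it is carried explicitly. nginx's stream module offers two directives for this; a hedged sketch reusing the upstream name from this thread (whether either option fits depends on the MTA and the network setup):

```nginx
stream {
    upstream stream_backend_mail {
        least_conn;
        server mta-01.srvmail.com.br:26 max_fails=2 fail_timeout=15s;
    }

    server {
        listen 0.0.0.0:25;

        # Option 1: prepend a PROXY protocol header carrying the client's
        # real IP/port.  The backend MTA must be configured to expect it
        # (e.g. Postfix: postscreen_upstream_proxy_protocol = haproxy).
        proxy_protocol on;

        # Option 2 (instead of the above): IP-transparent proxying --
        # originate the upstream connection from the client's address.
        # Requires elevated privileges and routing so that the backend's
        # replies return through the nginx host.
        # proxy_bind $remote_addr transparent;

        proxy_pass stream_backend_mail;
    }
}
```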
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288282,288294#msg-288294

From alan at chandlerfamily.org.uk Mon Jun 8 19:57:56 2020
From: alan at chandlerfamily.org.uk (Alan Chandler)
Date: Mon, 8 Jun 2020 20:57:56 +0100
Subject: Why does nginx strip trailing headers from a proxied backend? How can I prevent it?
Message-ID: <83c36622-0d0c-45e3-bba3-6fd4162a22f7@chandlerfamily.org.uk>

I have nginx acting as the static file server for a single page web app I am developing. It acts as a proxy server for the "/api" portion of my url space.

The backend server is running on a different port on localhost and is nodejs based. I'm using nginx as an http2 front end and using HTTP/1.1 between nginx and the backend. In the main this is working well.

But I have one problem. I would like to make use of a trailing header. My outgoing request has the header "TE: trailers", and the response has a header "Trailer: API-Status"; then after the body it adds the trailer (using nodejs response.addTrailers({'API-Status': 'OK'})).

But nginx is stripping them out.
I can use curl to prove it curl -b "MBFMVISIT=emailverify; expires=Sun, 07 Jun 2020 13:14:06 GMT;path=/;" -H "Content-Type: application/json" -H "TE: trailers" -X GET -c cookie.jar -i https://footdev.chandlerfamily.org.uk/api/config/config goes via nginx and outputs the response (including the initial 'Trailers: API-Status' header, but not the trailing header HTTP/2 200 server: nginx/1.18.0 date: Mon, 08 Jun 2020 19:39:41 GMT content-type: application/json trailer: API-Status cache-control: no-cache {"dcid":17,"pointsMap":"[1,2,4,6,8,12,16]","underdogMap":"[0,1,2,4,6,8]","playoffMap":"[1,2,4,6,8]","bonusMap":"[1,2,4,6,8,12,16]","defaultBonus":2,"clientLog":"ALL","clientLogUid":0,"version":"v4.0.0-alpha3","copyrightYear":2020,"cookieName":"MBBall","cookieVisitName":"MBFMVISIT","mainMenuIcon":"menu","status":true} curl -b "MBFMVISIT=emailverify; expires=Sun, 07 Jun 2020 13:14:06 GMT;path=/;" -H "Content-Type: application/json" -H "TE: trailers" -X GET -c cookie.jar -i http://localhost:2040/api/config/config goes directly to the backend. 
in this curl outputs the initial headers, the response and then after the response the trailing header 'API-Status: OK' HTTP/1.1 200 OK Trailer: API-Status Content-Type: application/json Cache-Control: no-cache Date: Mon, 08 Jun 2020 19:40:14 GMT Connection: keep-alive Transfer-Encoding: chunked {"dcid":17,"pointsMap":"[1,2,4,6,8,12,16]","underdogMap":"[0,1,2,4,6,8]","playoffMap":"[1,2,4,6,8]","bonusMap":"[1,2,4,6,8,12,16]","defaultBonus":2,"clientLog":"ALL","clientLogUid":0,"version":"v4.0.0-alpha3","copyrightYear":2020,"cookieName":"MBBall","cookieVisitName":"MBFMVISIT","mainMenuIcon":"menu","status":true}API-Status: OK (The API-Status: OK is bolded by curl along with the pre reply headers) My nginx config for the proxy is location /api/ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_pass http://localhost:2040; proxy_redirect default; proxy_buffering on; proxy_cache off; } So how do I tell nginx to pass the trailing header?? I have buffering on, but doesn't seem to have changed anything - it didn't work when I had it off. From jfs.world at gmail.com Mon Jun 8 20:35:21 2020 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Tue, 9 Jun 2020 04:35:21 +0800 Subject: in search of the complete 444 Message-ID: I've been trying and scratching my head over this for some time now. I've always set up a default server to return 444, but I've not been able to make it do the 444 *always*. If I get an invalid response, nginx "skips" the 444 to return 400 instead. I'd rather nginx do the 444, and not return 400. I've searched and tried various things (like setting "error_page 400" to some location, and then returning 444 for that location), but I have not found anything that really works. Is there just no way to have a "complete" 444 response? What will it take to do this? 
thanks,
-jf

--
He who settles on the idea of the intelligent man as a static entity
only shows himself to be a fool.

From moshe at ymkatz.net Mon Jun 8 20:53:22 2020
From: moshe at ymkatz.net (Moshe Katz)
Date: Mon, 8 Jun 2020 16:53:22 -0400
Subject: in search of the complete 444
In-Reply-To: 
References: 
Message-ID: 

I found the same question asked on StackOverflow a few years ago: https://stackoverflow.com/questions/41421111/http-444-no-response-instead-of-404-403-error-pages

The accepted answer says to do it this way:

```
error_page 400 =444 @blackhole;

location @blackhole {
    return 444;
}
```

The key that you missed is the "=444" in the error_page directive. It seems like you need BOTH that and the `return 444` in the location block.

Moshe

On Mon, Jun 8, 2020 at 4:35 PM Jeffrey 'jf' Lim wrote:
> I've been trying and scratching my head over this for some time now.
> I've always set up a default server to return 444, but I've not been
> able to make it do the 444 *always*. If I get an invalid response,
> nginx "skips" the 444 to return 400 instead. I'd rather nginx do the
> 444, and not return 400.
>
> I've searched and tried various things (like setting "error_page 400"
> to some location, and then returning 444 for that location), but I
> have not found anything that really works. Is there just no way to
> have a "complete" 444 response? What will it take to do this?
>
> thanks,
> -jf
>
> --
> He who settles on the idea of the intelligent man as a static entity
> only shows himself to be a fool.
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Mon Jun 8 20:58:10 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 8 Jun 2020 23:58:10 +0300
Subject: Why does nginx strip trailing headers from a proxied backend? How can I prevent it?
In-Reply-To: <83c36622-0d0c-45e3-bba3-6fd4162a22f7@chandlerfamily.org.uk> References: <83c36622-0d0c-45e3-bba3-6fd4162a22f7@chandlerfamily.org.uk> Message-ID: <20200608205810.GM12747@mdounin.ru> Hello! On Mon, Jun 08, 2020 at 08:57:56PM +0100, Alan Chandler wrote: > I have nginx acting as the static file server for a single page > web app I am developing. It acts as a proxy server for the > "/api" portion on my url space. > > The backend server is running on a different port on local host > and is nodejs based.. I'm using nginx as an http2 front end and > using http 1/1 between nginx > and the backend. In the main this is working well. > > But I have one problem. I would like to make use of a trailing > header. My outgoing request has the header "TE: trailers", and > the response has a header > "Trailers: API-Status" and then after the body it adds (using > nodejs response.addTrailers({'API-Status': 'OK'})). > > But nginx is stripping them out. [...] > My nginx config for the proxy is > > location /api/ { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $http_host; > proxy_set_header X-NginX-Proxy true; > proxy_http_version 1.1; > proxy_set_header Connection ""; > proxy_pass http://localhost:2040; > proxy_redirect default; > proxy_buffering on; > proxy_cache off; > } > > So how do I tell nginx to pass the trailing header? Trailers are only supported in gRPC proxying (grpc_pass), where they are required for gRPC. Trailers are not supported by proxy_pass. -- Maxim Dounin http://mdounin.ru/ From alan at chandlerfamily.org.uk Mon Jun 8 21:29:23 2020 From: alan at chandlerfamily.org.uk (Alan Chandler) Date: Mon, 8 Jun 2020 22:29:23 +0100 Subject: Why does nginx strip trailing headers from a proxied backend? How can I prevent it? 
In-Reply-To: <20200608205810.GM12747@mdounin.ru> References: <83c36622-0d0c-45e3-bba3-6fd4162a22f7@chandlerfamily.org.uk> <20200608205810.GM12747@mdounin.ru> Message-ID: <7f9e88ef-4824-f2de-df8f-9aab4cc7c830@chandlerfamily.org.uk> On 08/06/2020 21:58, Maxim Dounin wrote: > Hello! > > On Mon, Jun 08, 2020 at 08:57:56PM +0100, Alan Chandler wrote: > >> I have nginx acting as the static file server for a single page >> web app I am developing. It acts as a proxy server for the >> "/api" portion on my url space. > Trailers are only supported in gRPC proxying (grpc_pass), where > they are required for gRPC. Trailers are not supported by > proxy_pass. > That is a shame. Time to think up an alternative way From jfs.world at gmail.com Tue Jun 9 00:39:49 2020 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Tue, 9 Jun 2020 08:39:49 +0800 Subject: in search of the complete 444 In-Reply-To: References: Message-ID: Thanks, Moshe. I've tried that, but I've found that if you send anything that's invalid at the HTTP layer by nginx, like talking http to a https server, or sending invalid http (random junk), you'll get either 400 or 500. It's still not "complete", unfortunately. -jf -- He who settles on the idea of the intelligent man as a static entity only shows himself to be a fool. On Tue, Jun 9, 2020 at 4:54 AM Moshe Katz wrote: > > I found the same question asked on StackOverflow a few years ago: https://stackoverflow.com/questions/41421111/http-444-no-response-instead-of-404-403-error-pages > > The accepted answer says to do it this way: > > ``` > error_page 400 =444 @blackhole; > > location @blackhole { > return 444; > } > ``` > > They key that you missed is the "=444" in the error_page directive. It seems like you need BOTH that and the `return 444` in the location block. > > Moshe > > > > On Mon, Jun 8, 2020 at 4:35 PM Jeffrey 'jf' Lim wrote: >> >> I've been trying and scratching my head over this for some time now. 
>> I've always set up a default server to return 444, but I've not been >> able to make it do the 444 *always*. If I get an invalid response, >> nginx "skips" the 444 to return 400 instead. I'd rather nginx do the >> 444, and not return 400. >> >> I've searched and tried various things (like setting "error_page 400" >> to some location, and then returning 444 for that location), but I >> have not found anything that really works. Is there just no way to >> have a "complete" 444 response? What will it take to do this? >> >> thanks, >> -jf >> >> -- >> He who settles on the idea of the intelligent man as a static entity >> only shows himself to be a fool. >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From kohenkatz at gmail.com Tue Jun 9 01:14:54 2020 From: kohenkatz at gmail.com (Moshe Katz) Date: Mon, 8 Jun 2020 21:14:54 -0400 Subject: in search of the complete 444 In-Reply-To: References: Message-ID: Sorry, I wasn't actually in front of a server where I could check it before I sent that. I just spent some time playing around with it on one of my servers, and I found that the second answer there does seem to work: ``` location / { return 444; } error_page 400 500 =444 /444.html; location = /444.html { return 444; } ``` I tested this using curl (using "curl -k https://example.com/%" as my bad request to trigger the 400) and it seems to work as desired in HTTP 1.0 and 1.1. However, when using HTTP2, curl just hangs instead of showing an error that the connection is closed. If your site doesn't respond to HTTP2 (which is fine since it's a do-nothing site anyway), then you don't have to worry about it. Moshe On Mon, Jun 8, 2020 at 8:40 PM Jeffrey 'jf' Lim wrote: > Thanks, Moshe. 
I've tried that, but I've found that if you send > anything that's invalid at the HTTP layer by nginx, like talking http > to a https server, or sending invalid http (random junk), you'll get > either 400 or 500. It's still not "complete", unfortunately. > > -jf > > -- > He who settles on the idea of the intelligent man as a static entity > only shows himself to be a fool. > > On Tue, Jun 9, 2020 at 4:54 AM Moshe Katz wrote: > > > > I found the same question asked on StackOverflow a few years ago: > https://stackoverflow.com/questions/41421111/http-444-no-response-instead-of-404-403-error-pages > > > > The accepted answer says to do it this way: > > > > ``` > > error_page 400 =444 @blackhole; > > > > location @blackhole { > > return 444; > > } > > ``` > > > > They key that you missed is the "=444" in the error_page directive. It > seems like you need BOTH that and the `return 444` in the location block. > > > > Moshe > > > > > > > > On Mon, Jun 8, 2020 at 4:35 PM Jeffrey 'jf' Lim > wrote: > >> > >> I've been trying and scratching my head over this for some time now. > >> I've always set up a default server to return 444, but I've not been > >> able to make it do the 444 *always*. If I get an invalid response, > >> nginx "skips" the 444 to return 400 instead. I'd rather nginx do the > >> 444, and not return 400. > >> > >> I've searched and tried various things (like setting "error_page 400" > >> to some location, and then returning 444 for that location), but I > >> have not found anything that really works. Is there just no way to > >> have a "complete" 444 response? What will it take to do this? > >> > >> thanks, > >> -jf > >> > >> -- > >> He who settles on the idea of the intelligent man as a static entity > >> only shows himself to be a fool. 
> >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfs.world at gmail.com Tue Jun 9 01:30:14 2020 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Tue, 9 Jun 2020 09:30:14 +0800 Subject: in search of the complete 444 In-Reply-To: References: Message-ID: No problem, Moshe! Thank you so much for testing this out for me! This does take care of the case of "not HTTP" being sent (which is what 'curl -k https://localhost/%' used to give me)... BUT, unfortunately I still get a 400 with 'curl http://localhost:443'. I believe you should get the same if you were to send http to the https server? -jf On Tue, Jun 9, 2020 at 9:15 AM Moshe Katz wrote: > > Sorry, I wasn't actually in front of a server where I could check it before I sent that. > > I just spent some time playing around with it on one of my servers, and I found that the second answer there does seem to work: > > ``` > location / { > return 444; > } > > error_page 400 500 =444 /444.html; > > location = /444.html { > return 444; > } > ``` > > I tested this using curl (using "curl -k https://example.com/%" as my bad request to trigger the 400) and it seems to work as desired in HTTP 1.0 and 1.1. However, when using HTTP2, curl just hangs instead of showing an error that the connection is closed. If your site doesn't respond to HTTP2 (which is fine since it's a do-nothing site anyway), then you don't have to worry about it. > > Moshe > > > > On Mon, Jun 8, 2020 at 8:40 PM Jeffrey 'jf' Lim wrote: >> >> Thanks, Moshe. 
I've tried that, but I've found that if you send >> anything that's invalid at the HTTP layer by nginx, like talking http >> to a https server, or sending invalid http (random junk), you'll get >> either 400 or 500. It's still not "complete", unfortunately. >> >> -jf >> >> -- >> He who settles on the idea of the intelligent man as a static entity >> only shows himself to be a fool. >> >> On Tue, Jun 9, 2020 at 4:54 AM Moshe Katz wrote: >> > >> > I found the same question asked on StackOverflow a few years ago: https://stackoverflow.com/questions/41421111/http-444-no-response-instead-of-404-403-error-pages >> > >> > The accepted answer says to do it this way: >> > >> > ``` >> > error_page 400 =444 @blackhole; >> > >> > location @blackhole { >> > return 444; >> > } >> > ``` >> > >> > They key that you missed is the "=444" in the error_page directive. It seems like you need BOTH that and the `return 444` in the location block. >> > >> > Moshe >> > >> > >> > >> > On Mon, Jun 8, 2020 at 4:35 PM Jeffrey 'jf' Lim wrote: >> >> >> >> I've been trying and scratching my head over this for some time now. >> >> I've always set up a default server to return 444, but I've not been >> >> able to make it do the 444 *always*. If I get an invalid response, >> >> nginx "skips" the 444 to return 400 instead. I'd rather nginx do the >> >> 444, and not return 400. >> >> >> >> I've searched and tried various things (like setting "error_page 400" >> >> to some location, and then returning 444 for that location), but I >> >> have not found anything that really works. Is there just no way to >> >> have a "complete" 444 response? What will it take to do this? >> >> >> >> thanks, >> >> -jf >> >> >> >> -- >> >> He who settles on the idea of the intelligent man as a static entity >> >> only shows himself to be a fool. 
>> >> _______________________________________________
>> >> nginx mailing list
>> >> nginx at nginx.org
>> >> http://mailman.nginx.org/mailman/listinfo/nginx
>> >
>> > _______________________________________________
>> > nginx mailing list
>> > nginx at nginx.org
>> > http://mailman.nginx.org/mailman/listinfo/nginx
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From moshe at ymkatz.net Tue Jun 9 01:50:38 2020
From: moshe at ymkatz.net (Moshe Katz)
Date: Mon, 8 Jun 2020 21:50:38 -0400
Subject: in search of the complete 444
In-Reply-To: 
References: 
Message-ID: 

Have you tried adding response code 497 to your `error_page` list?

I can't test now because I'm away from my nginx machines again at the moment, but the documentation for that case is here: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors

Moshe

On Mon, Jun 8, 2020 at 9:30 PM Jeffrey 'jf' Lim wrote:

> No problem, Moshe! Thank you so much for testing this out for me! This
> does take care of the case of "not HTTP" being sent (which is what
> 'curl -k https://localhost/%' used to give me)... BUT, unfortunately I
> still get a 400 with 'curl http://localhost:443'. I believe you should
> get the same if you were to send http to the https server?
>
> -jf
>
> On Tue, Jun 9, 2020 at 9:15 AM Moshe Katz wrote:
> >
> > Sorry, I wasn't actually in front of a server where I could check it
> before I sent that.
> > > > I just spent some time playing around with it on one of my servers, and > I found that the second answer there does seem to work: > > > > ``` > > location / { > > return 444; > > } > > > > error_page 400 500 =444 /444.html; > > > > location = /444.html { > > return 444; > > } > > ``` > > > > I tested this using curl (using "curl -k https://example.com/%" as my > bad request to trigger the 400) and it seems to work as desired in HTTP 1.0 > and 1.1. However, when using HTTP2, curl just hangs instead of showing an > error that the connection is closed. If your site doesn't respond to HTTP2 > (which is fine since it's a do-nothing site anyway), then you don't have to > worry about it. > > > > Moshe > > > > > > > > On Mon, Jun 8, 2020 at 8:40 PM Jeffrey 'jf' Lim > wrote: > >> > >> Thanks, Moshe. I've tried that, but I've found that if you send > >> anything that's invalid at the HTTP layer by nginx, like talking http > >> to a https server, or sending invalid http (random junk), you'll get > >> either 400 or 500. It's still not "complete", unfortunately. > >> > >> -jf > >> > >> -- > >> He who settles on the idea of the intelligent man as a static entity > >> only shows himself to be a fool. > >> > >> On Tue, Jun 9, 2020 at 4:54 AM Moshe Katz wrote: > >> > > >> > I found the same question asked on StackOverflow a few years ago: > https://stackoverflow.com/questions/41421111/http-444-no-response-instead-of-404-403-error-pages > >> > > >> > The accepted answer says to do it this way: > >> > > >> > ``` > >> > error_page 400 =444 @blackhole; > >> > > >> > location @blackhole { > >> > return 444; > >> > } > >> > ``` > >> > > >> > They key that you missed is the "=444" in the error_page directive. > It seems like you need BOTH that and the `return 444` in the location block. > >> > > >> > Moshe > >> > > >> > > >> > > >> > On Mon, Jun 8, 2020 at 4:35 PM Jeffrey 'jf' Lim > wrote: > >> >> > >> >> I've been trying and scratching my head over this for some time now. 
> >> >> I've always set up a default server to return 444, but I've not been > >> >> able to make it do the 444 *always*. If I get an invalid response, > >> >> nginx "skips" the 444 to return 400 instead. I'd rather nginx do the > >> >> 444, and not return 400. > >> >> > >> >> I've searched and tried various things (like setting "error_page 400" > >> >> to some location, and then returning 444 for that location), but I > >> >> have not found anything that really works. Is there just no way to > >> >> have a "complete" 444 response? What will it take to do this? > >> >> > >> >> thanks, > >> >> -jf > >> >> > >> >> -- > >> >> He who settles on the idea of the intelligent man as a static entity > >> >> only shows himself to be a fool. > >> >> _______________________________________________ > >> >> nginx mailing list > >> >> nginx at nginx.org > >> >> http://mailman.nginx.org/mailman/listinfo/nginx > >> > > >> > _______________________________________________ > >> > nginx mailing list > >> > nginx at nginx.org > >> > http://mailman.nginx.org/mailman/listinfo/nginx > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jun 9 01:51:23 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Jun 2020 04:51:23 +0300 Subject: Why does nginx strip trailing headers from a proxied backend? How can I prevent it? 
In-Reply-To: <7f9e88ef-4824-f2de-df8f-9aab4cc7c830@chandlerfamily.org.uk> References: <83c36622-0d0c-45e3-bba3-6fd4162a22f7@chandlerfamily.org.uk> <20200608205810.GM12747@mdounin.ru> <7f9e88ef-4824-f2de-df8f-9aab4cc7c830@chandlerfamily.org.uk> Message-ID: <20200609015123.GN12747@mdounin.ru> Hello! On Mon, Jun 08, 2020 at 10:29:23PM +0100, Alan Chandler wrote: > On 08/06/2020 21:58, Maxim Dounin wrote: > > > > On Mon, Jun 08, 2020 at 08:57:56PM +0100, Alan Chandler wrote: > > > > > I have nginx acting as the static file server for a single page > > > web app I am developing. It acts as a proxy server for the > > > "/api" portion on my url space. > > Trailers are only supported in gRPC proxying (grpc_pass), where > > they are required for gRPC. Trailers are not supported by > > proxy_pass. > > > That is a shame. You are welcome to work on this if you think this is needed. This should be relatively easy given that trailers infrastructure was added a while ago to support gRPC proxying. Note though that there are security concerns about HTTP trailers in general, and these shouldn't be passed by default unless explicitly enabled in the configuration. Just in case, an overview of trailers support in browsers can be found here (TL;DR: Firefox supports Server-Timing in trailers since 2018, and that's all): https://stackoverflow.com/questions/13371367/do-any-browsers-support-trailers-sent-in-chunked-encoding-responses Some lengthy discussion on trailers support happened in the past here (TL;DR: net effect is that there are no trailers support in the Fetch Standard now): https://github.com/whatwg/fetch/issues/34 -- Maxim Dounin http://mdounin.ru/ From jfs.world at gmail.com Tue Jun 9 02:03:22 2020 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Tue, 9 Jun 2020 10:03:22 +0800 Subject: in search of the complete 444 In-Reply-To: References: Message-ID: Wow, Moshe. Thank you; I've honestly never seen this. This is great! 
It looks like my 444 might actually be "complete" :) I'll give it some time to see if I get any more traffic that escapes the 444, but this might really be it... thanks! -jf On Tue, Jun 9, 2020 at 9:51 AM Moshe Katz wrote: > > Have you tried adding response code 497 to your `error_pages` list? > > I can't test now because I'm away from my nginx machines again at the moment, but the documentation for that case is here: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors > > Moshe > > > > On Mon, Jun 8, 2020 at 9:30 PM Jeffrey 'jf' Lim wrote: >> >> No problem, Moshe! Thank you so much for testing this out for me! This >> does take care of the case of "not HTTP" being sent (which is what >> 'curl -k https://localhost/%' used to give me)... BUT, unfortunately I >> still get a 400 with 'curl http://localhost:443'. I believe you should >> get the same if you were to send http to the https server? >> >> -jf >> >> On Tue, Jun 9, 2020 at 9:15 AM Moshe Katz wrote: >> > >> > Sorry, I wasn't actually in front of a server where I could check it before I sent that. >> > >> > I just spent some time playing around with it on one of my servers, and I found that the second answer there does seem to work: >> > >> > ``` >> > location / { >> > return 444; >> > } >> > >> > error_page 400 500 =444 /444.html; >> > >> > location = /444.html { >> > return 444; >> > } >> > ``` >> > >> > I tested this using curl (using "curl -k https://example.com/%" as my bad request to trigger the 400) and it seems to work as desired in HTTP 1.0 and 1.1. However, when using HTTP2, curl just hangs instead of showing an error that the connection is closed. If your site doesn't respond to HTTP2 (which is fine since it's a do-nothing site anyway), then you don't have to worry about it. >> > >> > Moshe >> > >> > >> > >> > On Mon, Jun 8, 2020 at 8:40 PM Jeffrey 'jf' Lim wrote: >> >> >> >> Thanks, Moshe. 
I've tried that, but I've found that if you send >> >> anything that's invalid at the HTTP layer by nginx, like talking http >> >> to a https server, or sending invalid http (random junk), you'll get >> >> either 400 or 500. It's still not "complete", unfortunately. >> >> >> >> -jf >> >> >> >> -- >> >> He who settles on the idea of the intelligent man as a static entity >> >> only shows himself to be a fool. >> >> >> >> On Tue, Jun 9, 2020 at 4:54 AM Moshe Katz wrote: >> >> > >> >> > I found the same question asked on StackOverflow a few years ago: https://stackoverflow.com/questions/41421111/http-444-no-response-instead-of-404-403-error-pages >> >> > >> >> > The accepted answer says to do it this way: >> >> > >> >> > ``` >> >> > error_page 400 =444 @blackhole; >> >> > >> >> > location @blackhole { >> >> > return 444; >> >> > } >> >> > ``` >> >> > >> >> > They key that you missed is the "=444" in the error_page directive. It seems like you need BOTH that and the `return 444` in the location block. >> >> > >> >> > Moshe >> >> > >> >> > >> >> > >> >> > On Mon, Jun 8, 2020 at 4:35 PM Jeffrey 'jf' Lim wrote: >> >> >> >> >> >> I've been trying and scratching my head over this for some time now. >> >> >> I've always set up a default server to return 444, but I've not been >> >> >> able to make it do the 444 *always*. If I get an invalid response, >> >> >> nginx "skips" the 444 to return 400 instead. I'd rather nginx do the >> >> >> 444, and not return 400. >> >> >> >> >> >> I've searched and tried various things (like setting "error_page 400" >> >> >> to some location, and then returning 444 for that location), but I >> >> >> have not found anything that really works. Is there just no way to >> >> >> have a "complete" 444 response? What will it take to do this? >> >> >> >> >> >> thanks, >> >> >> -jf >> >> >> >> >> >> -- >> >> >> He who settles on the idea of the intelligent man as a static entity >> >> >> only shows himself to be a fool. 
>> >> >> _______________________________________________ >> >> >> nginx mailing list >> >> >> nginx at nginx.org >> >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> > >> >> > _______________________________________________ >> >> > nginx mailing list >> >> > nginx at nginx.org >> >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> >> nginx mailing list >> >> nginx at nginx.org >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From alan at chandlerfamily.org.uk Tue Jun 9 12:11:10 2020 From: alan at chandlerfamily.org.uk (Alan Chandler) Date: Tue, 9 Jun 2020 13:11:10 +0100 Subject: Why does nginx strip trailing headers from a proxied backend? How can I prevent it? In-Reply-To: <20200609015123.GN12747@mdounin.ru> References: <83c36622-0d0c-45e3-bba3-6fd4162a22f7@chandlerfamily.org.uk> <20200608205810.GM12747@mdounin.ru> <7f9e88ef-4824-f2de-df8f-9aab4cc7c830@chandlerfamily.org.uk> <20200609015123.GN12747@mdounin.ru> Message-ID: On 09/06/2020 02:51, Maxim Dounin wrote: > Hello! > > On Mon, Jun 08, 2020 at 10:29:23PM +0100, Alan Chandler wrote: > >> On 08/06/2020 21:58, Maxim Dounin wrote: >>> On Mon, Jun 08, 2020 at 08:57:56PM +0100, Alan Chandler wrote: >>> >>>> I have nginx acting as the static file server for a single page >>>> web app I am developing. It acts as a proxy server for the >>>> "/api" portion on my url space. >>> Trailers are only supported in gRPC proxying (grpc_pass), where >>> they are required for gRPC. 
Trailers are not supported by
>>> proxy_pass.
>>>
>> That is a shame.
> You are welcome to work on this if you think this is needed.

HA: I'm still working on a Single Page Application I started in 2013 to move a client from Microsoft Access/Sqlserver app to a web app (still with sqlserver). Until I do so I cannot retire, and that client is the only one I have left and I am currently 69.

The work I was trying to use trailers for is a hobby, but with a deadline of September this year. Trailers was just me exploring options to support my api handler throwing an error mid stream. I can think of other options to handle my use case.

Besides it has been almost 40 years (early 1980s) since I last coded in C or C++

> Some lengthy discussion on trailers support happened in the past
> here (TL;DR: net effect is that there are no trailers support in
> the Fetch Standard now):
>
> https://github.com/whatwg/fetch/issues/34
>
Having read through that, trailers is definitely not a rabbit hole I want to go down.

Alan Chandler

From mailinglist at unix-solution.de Tue Jun 9 15:14:20 2020
From: mailinglist at unix-solution.de (basti)
Date: Tue, 9 Jun 2020 17:14:20 +0200
Subject: Location for any Host/ Server
Message-ID: <25c0b168-252d-2f7a-3ada-297d3c571f6a@unix-solution.de>

Hello,

i want to setup a location match for any hostname/servername like in apache:

cat /etc/apache2/conf-enabled/git.conf
RedirectMatch 404 /\.git

In nginx I try

cat /etc/nginx/conf.d/git.conf
server {
## Disable .htaccess and other hidden files
location ~ /\.(?!well-known).* {
deny all;
access_log off;
log_not_found off;
}
}

But this does not match.
When I remove server {} i get nginx: [emerg] "location" directive is not
allowed here in /etc/nginx/conf.d/git.conf:2

I do not want to include my file into any server directive. It is asking
for trouble, how fast can you forget to add this?
best regards

From teward at thomas-ward.net Tue Jun 9 15:16:45 2020
From: teward at thomas-ward.net (Thomas Ward)
Date: Tue, 9 Jun 2020 11:16:45 -0400
Subject: Location for any Host/ Server
In-Reply-To: <25c0b168-252d-2f7a-3ada-297d3c571f6a@unix-solution.de>
References: <25c0b168-252d-2f7a-3ada-297d3c571f6a@unix-solution.de>
Message-ID: <720ae25c-3215-10a8-fc91-7475046dc4e0@thomas-ward.net>

server {
    listen 80 default_server;
    server_name _;

    ...
}


The above should do what you're after. Specifies a default-server
listener on port 80 and it matches that special catch-all that accepts
all server_name results. (though, default_server will match anything
that doesn't match any other server_name Host so...)


Thomas

On 6/9/20 11:14 AM, basti wrote:
> Hello,
>
> i want to setup a location match for any hostname/servername like in apache:
>
> cat /etc/apache2/conf-enabled/git.conf
> RedirectMatch 404 /\.git
>
> In nginx I try
>
> cat /etc/nginx/conf.d/git.conf
> server {
> ## Disable .htaccess and other hidden files
> location ~ /\.(?!well-known).* {
> deny all;
> access_log off;
> log_not_found off;
> }
> }
>
> But this does not match.
> When I remove server {} i get nginx: [emerg] "location" directive is not
> allowed here in /etc/nginx/conf.d/git.conf:2
>
> I do not want to include my file into any server directive. It is asking
> for trouble, how fast can you forget to add this?
>
> best regards
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mailinglist at unix-solution.de Tue Jun 9 15:47:02 2020
From: mailinglist at unix-solution.de (basti)
Date: Tue, 9 Jun 2020 17:47:02 +0200
Subject: Location for any Host/ Server
In-Reply-To: <720ae25c-3215-10a8-fc91-7475046dc4e0@thomas-ward.net>
References: <25c0b168-252d-2f7a-3ada-297d3c571f6a@unix-solution.de> <720ae25c-3215-10a8-fc91-7475046dc4e0@thomas-ward.net>
Message-ID: 

Does not work.

cat /etc/nginx/conf.d/git.conf
## Disable .htaccess and other hidden files
server {
listen 80 default_server;
server_name _;

location ~ /\.git {
return 404;
}

location ~ /\.(?!well-known).* {
deny all;
access_log off;
log_not_found off;
}
}

Result is, that http://example.com/test/.git/config is accessible.

On 09.06.20 17:16, Thomas Ward wrote:
> server {
>     listen 80 default_server;
>     server_name _;
>
>     ...
> }
>
>
> The above should do what you're after. Specifies a default-server
> listener on port 80 and it matches that special catch-all that accepts
> all server_name results. (though, default_server will match anything
> that doesn't match any other server_name Host so...)
>
>
> Thomas
>
> On 6/9/20 11:14 AM, basti wrote:
>> Hello,
>>
>> i want to setup a location match for any hostname/servername like in apache:
>>
>> cat /etc/apache2/conf-enabled/git.conf
>> RedirectMatch 404 /\.git
>>
>> In nginx I try
>>
>> cat /etc/nginx/conf.d/git.conf
>> server {
>> ## Disable .htaccess and other hidden files
>> location ~ /\.(?!well-known).* {
>> deny all;
>> access_log off;
>> log_not_found off;
>> }
>> }
>>
>> But this does not match.
>> When I remove server {} i get nginx: [emerg] "location" directive is not
>> allowed here in /etc/nginx/conf.d/git.conf:2
>>
>> I do not want to include my file into any server directive. It is asking
>> for trouble, how fast can you forget to add this?
>>
>> best regards
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>

From teward at thomas-ward.net Tue Jun 9 16:04:08 2020
From: teward at thomas-ward.net (Thomas Ward)
Date: Tue, 9 Jun 2020 12:04:08 -0400
Subject: Location for any Host/ Server
In-Reply-To: 
References: <25c0b168-252d-2f7a-3ada-297d3c571f6a@unix-solution.de> <720ae25c-3215-10a8-fc91-7475046dc4e0@thomas-ward.net> 
Message-ID: 

I misread what you were trying to achieve thanks to no coffee this morning.

> I do not want to include my file into any server directive.

You have **no choice** but to specifically include your location snippets where you want them. Location blocks can't be applied 'globally' unless you have it in the proper server blocks, so you are kind of wanting two separate disparate things you can't do.

If you don't want to include your location snippets in server blocks then you need to hardcode the location snippets in the server blocks themselves. There's not much you can do for this in that case.

Thomas

On 6/9/20 11:47 AM, basti wrote:
> Does not work.
>
> cat /etc/nginx/conf.d/git.conf
> ## Disable .htaccess and other hidden files
> server {
> listen 80 default_server;
> server_name _;
>
> location ~ /\.git {
> return 404;
> }
>
> location ~ /\.(?!well-known).* {
> deny all;
> access_log off;
> log_not_found off;
> }
> }
>
> Result is, that http://example.com/test/.git/config is accessible.
>
> On 09.06.20 17:16, Thomas Ward wrote:
>> server {
>>     listen 80 default_server;
>>     server_name _;
>>
>>     ...
>> }
>>
>>
>> The above should do what you're after. Specifies a default-server
>> listener on port 80 and it matches that special catch-all that accepts
>> all server_name results. (though, default_server will match anything
>> that doesn't match any other server_name Host so...
>>
>> Thomas
>>
>> On 6/9/20 11:14 AM, basti wrote:
>>> Hello,
>>>
>>> i want to setup a location match for any hostname/servername like in apache:
>>>
>>> cat /etc/apache2/conf-enabled/git.conf
>>> RedirectMatch 404 /\.git
>>>
>>> In nginx I try
>>>
>>> cat /etc/nginx/conf.d/git.conf
>>> server {
>>> ## Disable .htaccess and other hidden files
>>> location ~ /\.(?!well-known).* {
>>> deny all;
>>> access_log off;
>>> log_not_found off;
>>> }
>>> }
>>>
>>> But this does not match.
>>> When I remove server {} i get nginx: [emerg] "location" directive is not
>>> allowed here in /etc/nginx/conf.d/git.conf:2
>>>
>>> I do not want to include my file into any server directive. It is asking
>>> for trouble, how fast can you forget to add this?
>>>
>>> best regards
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From c at tunnel53.net Sun Jun 14 12:39:57 2020
From: c at tunnel53.net (=?UTF-8?Q?Carl_Winb=C3=A4ck?=)
Date: Sun, 14 Jun 2020 14:39:57 +0200
Subject: Force Nginx to log error?
Message-ID: 

Hi folks,

Is there any surefire way to force Nginx to log an error? Perhaps some carefully crafted GET request or similar.

The reason I'm asking is that I'm doing a lab with Nginx's error log. Therefore I would like to find a way so that I can force Nginx to log an error.

E.g. if I specify "error_log /srv/nginx/error.log warn;" then I would like to verify that errors end up in the file that I specified.

Best regards, Carl

From lists at lazygranch.com Sun Jun 14 13:53:13 2020
From: lists at lazygranch.com (lists)
Date: Sun, 14 Jun 2020 06:53:13 -0700
Subject: Force Nginx to log error?
In-Reply-To: 
Message-ID: 

I'm not sure I understand the question, but how does this sound? I use a map to catch requests that I don't want. For instance I return a 444 if I receive a "wget".

- Original Message -
From: c at tunnel53.net
Sent: June 14, 2020 5:40 AM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: Force Nginx to log error?

Hi folks,

Is there any surefire way to force Nginx to log an error? Perhaps some carefully crafted GET request or similar.

The reason I'm asking is that I'm doing a lab with Nginx's error log. Therefore I would like to find a way so that I can force Nginx to log an error.

E.g. if I specify "error_log /srv/nginx/error.log warn;" then I would like to verify that errors end up in the file that I specified.

Best regards, Carl
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From c at tunnel53.net Sun Jun 14 15:34:31 2020
From: c at tunnel53.net (=?UTF-8?Q?Carl_Winb=C3=A4ck?=)
Date: Sun, 14 Jun 2020 17:34:31 +0200
Subject: Force Nginx to log error?
In-Reply-To: 
References: 
Message-ID: 

> I'm not sure I understand the question, but how does this sound? I
> use a map to catch requests that I don't want. For instance I return
> a 444 if I receive a "wget".

No, I don't mean status codes on the HTTP level.

Status code 444 is not an error per se that would be sent to the error log, it is a valid status code sent from the server to the client. To the client that might be considered an error, but not to Nginx.

I mean errors on a lower level, i.e. the level of the nginx daemon itself. E.g.
stuff like this:

2020/06/14 13:13:59 [alert] 700#700: *998 open socket #13 left in connection 11
2020/06/14 13:13:59 [alert] 700#700: *4118 open socket #4 left in connection 13
2020/06/14 13:13:59 [alert] 700#700: *4169 open socket #17 left in connection 16
2020/06/14 13:13:59 [alert] 700#700: aborting

It would be useful to me to be able to trigger such messages, so that I can verify that they are sent to the right destination.

E.g. if I have the following stanza in my config:

error_log /srv/nginx/error.log warn;

Then messages of the warn/error/crit/alert/emerg levels should be logged and written to the file /srv/nginx/error.log (a file which is separate from the access log).

I hope it is clearer now what I meant :)

From lists at lazygranch.com Sun Jun 14 17:00:48 2020
From: lists at lazygranch.com (lists)
Date: Sun, 14 Jun 2020 10:00:48 -0700
Subject: Force Nginx to log error?
In-Reply-To: 
Message-ID: <8bk1qch1cnu2ddi4li6t63ob.1592154048864@lazygranch.com>

That clears it up. Most of what I see in the error log is stuff I have no idea how to fix. I will Google some errors and see what is fixable. The deal is my websites work for me and I get no complaints.

Reading questions on the interwebs, most people get error messages when they use curl on their website. Now I never saw a post where someone intentionally causes an error with curl, but why not. Basically use curl to do something a bad browser would do.

Oh yeah I trap curl too. I used to see strange stuff in the log file from curl use so I put it in the map.

Maybe there is a test suite for this on GitHub.

- Original Message -
From: c at tunnel53.net
Sent: June 14, 2020 8:34 AM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: Re: Force Nginx to log error?

> I'm not sure I understand the question, but how does this sound? I
> use a map to catch requests that I don't want. For instance I return
> a 444 if I receive a "wget".

No, I don't mean status codes on the HTTP level.
Status code 444 is not an error per se that would be sent to the error log, it is a valid status code sent from the server to the client. To the client that might be considered an error, but not to Nginx. I mean errors on a lower level, i.e. the level of the nginx daemon itself. E.g. stuff like this: 2020/06/14 13:13:59 [alert] 700#700: *998 open socket #13 left in connection 11 2020/06/14 13:13:59 [alert] 700#700: *4118 open socket #4 left in connection 13 2020/06/14 13:13:59 [alert] 700#700: *4169 open socket #17 left in connection 16 2020/06/14 13:13:59 [alert] 700#700: aborting It would be useful to me to be able to trigger such messages, so that I can verify that they are sent to the right destination. E.g. if I have the following stanza in my config: error_log /srv/nginx/error.log warn; Then messages of the warn/error/crit/alert/emerg levels should be logged and written to the file /srv/nginx/error.log (a file which is separate from the access log). I hope it is clearer now what I meant :) _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From themadbeaker at gmail.com Sun Jun 14 21:50:02 2020 From: themadbeaker at gmail.com (J.R.) Date: Sun, 14 Jun 2020 16:50:02 -0500 Subject: Force Nginx to log error? Message-ID: Mmmm... If you set it to debug you would probably get something to pop up sooner rather than later.... My error log level is set to 'error' and I typically see some ocsp cert timeouts and the occasional client exceeding my request (rate) limit settings... Not a lot ends up in the nginx error log (at least not at the level I use)... Setting up rate limiting is just a few lines in the config, then you can hit a page with curl or wget and force it to trigger which would end up in the error log... If you have something invalid in your config and do a restart, it will also write that into the error log... 
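
To make the rate-limiting suggestion above concrete, here is a minimal lab sketch; the zone name, port, and log path are made-up values for the exercise, not anything standard:

```
# Hypothetical /etc/nginx/conf.d/lab.conf
# Allow 1 request/second per client IP. Requests over the limit are
# rejected with 503 and logged at "error" level (the default for
# limit_req_log_level), so they land in the error log.
limit_req_zone $binary_remote_addr zone=lab:1m rate=1r/s;

server {
    listen 8080;
    error_log /srv/nginx/error.log warn;

    location / {
        limit_req zone=lab;   # no burst, so a quick second request is refused
        return 200 "ok\n";
    }
}
```

Two quick requests in a row (e.g. running `curl localhost:8080/` twice) should then leave a "limiting requests, excess: ..." line in /srv/nginx/error.log. A deliberately broken directive in the config is the other easy trigger: reloading nginx with it produces [emerg] entries in the error log.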
From nginx-forum at forum.nginx.org Mon Jun 15 07:31:25 2020
From: nginx-forum at forum.nginx.org (divya.jain@philips.com)
Date: Mon, 15 Jun 2020 03:31:25 -0400
Subject: connect() failed (110: Connection timed out) while connecting to upstream
Message-ID: <112cd1ce5acc2690e0df9d1f65c5adfe.NginxMailingListEnglish@forum.nginx.org>

Hi,

I have an nginx deployment as a reverse proxy. Everything seems to be fine when I start my service, but after a few days we start getting the error below:

connect() failed (110: Connection timed out) while connecting to upstream

We get this error ~2 minutes after sending the request; the upstream server only receives the request after those 2 minutes (before that we don't see any request on the upstream server), and we get response code 200 and not 503 or 502. The problem goes away after we restart our nginx server.

Can somebody please help me with this.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288345,288345#msg-288345

From kaushalshriyan at gmail.com Mon Jun 15 17:01:15 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Mon, 15 Jun 2020 22:31:15 +0530
Subject: redirect from http (port 80) to https (port 443) not working.
Message-ID: 

Hi,

I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core). When I hit https://marketplace.mydomain.com it works perfectly fine, whereas http://marketplace.mydomain.com (port 80) does not get redirected to https://marketplace.mydomain.com (port 443). I have the below nginx.conf.
server {
> listen 443 ssl default_server;
> #listen 80 default_server;
> #server_name _;
> server_name marketplace.mydomain.com;
> ssl_protocols TLSv1.2;
> ssl_certificate /etc/ssl/certs/marketplace.mydomain.com/fullchain1.pem;
> ssl_certificate_key /etc/ssl/certs/marketplace.mydomain.com/privkey1.pem;
> if ($scheme = http) {
> return 301 https://$server_name$request_uri;
> }
> ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
> ssl_prefer_server_ciphers on;
> ssl_dhparam /etc/ssl/certs/marketplace.mydomain.com/dhparam.pem;
> client_max_body_size 100M;
> root /var/www/drupal/marketplace-v2/mpV2/web;
> access_log /var/log/nginx/access.log;
> error_log /var/log/nginx/error.log;
> # Load configuration files for the default server block.
> include /etc/nginx/default.d/*.conf;
> location = /favicon.ico {
> log_not_found off;
> access_log off;
> }

I will appreciate it if someone can pitch in for help. Please let me know if you need any additional configurations and I look forward to hearing from you. Thanks in Advance.

Best Regards,

Kaushal
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doctor at doctor.nl2k.ab.ca Mon Jun 15 16:59:07 2020
From: doctor at doctor.nl2k.ab.ca (The Doctor)
Date: Mon, 15 Jun 2020 10:59:07 -0600
Subject: CDN server
Message-ID: <20200615165907.GC29259@doctor.nl2k.ab.ca>

Question: has anyone set up a CDN server in nginx using Apache mod_cdn clients?

-- 
Member - Liberal International This is doctor@@nl2k.ab.ca Ici doctor@@nl2k.ab.ca
Yahweh, Queen & country!Never Satan President Republic!Beware AntiChrist rising!
nk.ca started 1 June 1995 . https://www.empire.kred/ROOTNK?t=94a1f39b
Only the mediocre are always at their best.
-Jean Giraudoux

From r at roze.lv Mon Jun 15 18:20:39 2020
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 15 Jun 2020 21:20:39 +0300
Subject: redirect from http (port 80) to https (port 443) not working.
In-Reply-To: References: Message-ID: <000a01d64341$af5a8a50$0e0f9ef0$@roze.lv>

> I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core). When I hit https://marketplace.mydomain.com it works perfectly fine whereas when I hit http://marketplace.mydomain.com
> (port 80) does not get redirected to https://marketplace.mydomain.com (port 443). I have the below nginx.conf.
>
> server {
> listen 443 ssl default_server;
> #listen 80 default_server;

Either reenable

listen 80 default_server;

or add another server block:

server {
    listen 80 default_server;
    server_name marketplace.mydomain.com;
    return 301 return 301 https://$server_name$request_uri;
}

Because if you don't listen on port 80, the only way $scheme will be 'http' (and the if condition true) is if the client opens http://yoursite:443, which won't normally happen.

rr

From r at roze.lv Mon Jun 15 18:24:50 2020
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 15 Jun 2020 21:24:50 +0300
Subject: redirect from http (port 80) to https (port 443) not working.
In-Reply-To: <000a01d64341$af5a8a50$0e0f9ef0$@roze.lv>
References: <000a01d64341$af5a8a50$0e0f9ef0$@roze.lv>
Message-ID: <000b01d64342$41c154a0$c543fde0$@roze.lv>

> return 301 return 301 https://$server_name$request_uri;

Obviously a typo, just a single return 301.
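Putting Reinis's suggestion together with the original post, the intended setup can be sketched in full as follows (a sketch only; the hostname, certificate paths and root come from the poster's config, and the remaining TLS details are elided):

```nginx
# Plain-HTTP server: its only job is to redirect everything to HTTPS.
server {
    listen 80 default_server;
    server_name marketplace.mydomain.com;
    return 301 https://$server_name$request_uri;
}

# HTTPS server: serves the actual site; no redirect logic needed here.
server {
    listen 443 ssl default_server;
    server_name marketplace.mydomain.com;
    ssl_certificate     /etc/ssl/certs/marketplace.mydomain.com/fullchain1.pem;
    ssl_certificate_key /etc/ssl/certs/marketplace.mydomain.com/privkey1.pem;
    root /var/www/drupal/marketplace-v2/mpV2/web;
}
```

With this split there is no need for the `if ($scheme = http)` check at all, since the port-80 server redirects unconditionally.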
rr

From nginx-forum at forum.nginx.org Tue Jun 16 07:42:04 2020
From: nginx-forum at forum.nginx.org (shaharmor)
Date: Tue, 16 Jun 2020 03:42:04 -0400
Subject: worker_connections allocates a lot of memory
Message-ID: <07faf68e5e52520de9a3ef04388a498f.NginxMailingListEnglish@forum.nginx.org>

Hi,

I noticed that while worker_connections is defined as the maximum number of connections per worker, nginx pre-allocates enough memory to handle all possible worker_connections, even before they are actually needed.

For example, setting worker_connections to 10485760 causes nginx to take 4.3GB of memory upon init.

Is this how it's supposed to be? Is there a way to tell nginx to only allocate memory as needed?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288358,288358#msg-288358

From marcin.wanat at gmail.com Tue Jun 16 08:33:31 2020
From: marcin.wanat at gmail.com (Marcin Wanat)
Date: Tue, 16 Jun 2020 10:33:31 +0200
Subject: worker_connections allocates a lot of memory
In-Reply-To: <07faf68e5e52520de9a3ef04388a498f.NginxMailingListEnglish@forum.nginx.org>
References: <07faf68e5e52520de9a3ef04388a498f.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

> Hi,
>
> I noticed that while worker_connections is defined as the maximum number of
> connections per worker, nginx pre-allocates enough memory to handle all
> possible worker_connections, even before they are actually needed.
>
> For example, setting worker_connections to 10485760 causes nginx to take
> 4.3GB of memory upon init.
>
> Is this how it's supposed to be?
> Is there a way to tell nginx to only allocate memory as needed?

I think you do not need to worry about it. You will run out of available ports, file descriptors or the system memory needed to allocate these resources long before you notice this.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From kaushalshriyan at gmail.com Tue Jun 16 12:17:43 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Tue, 16 Jun 2020 17:47:43 +0530 Subject: redirect from http (port 80) to https (port 443) not working. In-Reply-To: <000b01d64342$41c154a0$c543fde0$@roze.lv> References: <000a01d64341$af5a8a50$0e0f9ef0$@roze.lv> <000b01d64342$41c154a0$c543fde0$@roze.lv> Message-ID: On Mon, Jun 15, 2020 at 11:55 PM Reinis Rozitis wrote: > > return 301 return 301 https://$server_name$request_uri; > > Obviously a typo just a single return 301. > > rr > > > Thanks Reinis for the email and much appreciated. It worked like a charm. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pizwer88 at wp.pl Tue Jun 16 14:31:32 2020 From: pizwer88 at wp.pl (=?UTF-8?Q?pizwer88=40wp=2Epl?=) Date: Tue, 16 Jun 2020 16:31:32 +0200 Subject: Nginx potentially leaking real filenames? Message-ID: <4cad3a83c0c04521ad6e0de79de4ec26@grupawp.pl> Hi, I am experimenting with various ways of annoying bots and automated vulnerability scanners that reach my service. In one instance I am serving a recursive decompression bomb for all requests for .php files. Since none of my services run PHP, and never have, all such traffic can be safely assumed malicious. Recently (a couple of months since first deployment) I started seeing repeated requests to the server trying to fetch the recursive decompression bomb by its real file name, which should have never been exposed anywhere. Is it possible for nginx to leak the real file name? Through misconfiguration or other means? I am using nginx (version 1.14.2-2+deb10u1) as a reverse proxy and for SSL termination. The custom application behind it is not aware of the existence of the decompression bomb and lives in its own completely separate directory tree. It never reads nor serves any files from the local server, all its data is in physically separate database and cache servers. 
While I cannot prove absence of vulnerabilities in this custom app, I have not found any evidence of it being used to (nor leaking) local directory contents. The decompression bomb does not contain its file name in its contents. The decompression bomb file <redacted-payload-filename> exists and is properly served in response to .php file requests.

Given the above, I believe something in my nginx setup leaked the real file name of the decompression bomb. I've tried using all request methods (GET, HEAD, PUT, POST, DELETE, CONNECT, OPTIONS, TRACE, PATCH) on the server from curl like the following:

    $ curl --verbose -X <method> <redacted>.com/index.php

and (as expected) none of the responses leaked the file name in any of the headers nor contents.

Below is a redacted and inlined version of my nginx configuration. There is only one server defined; the Debian default server config has been removed. The error code mapping is there to avoid triggering high error rate alerts when hit by hundreds of consecutive bot requests. I would appreciate any help in figuring out what I am doing wrong and how the <redacted-payload-filename> could have been leaked.

Thanks,
Pizab

# nginx.conf
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 165;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    limit_req_zone "php" zone=attackzone:10m rate=1r/s;

    ssl_certificate <redacted>;
    ssl_certificate_key <redacted>;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    server_tokens off;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    client_body_buffer_size 1M;

    server {
        listen 443 default_server ssl;
        listen 80;

        server_name <redacted>.com;
        rewrite_log on;

        location /.well-known/acme-challenge {
            alias /var/www/html/.well-known/acme-challenge;
        }

        location / {
            access_log /<redacted>/logs/nginx_access.log;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_connect_timeout 60;
            proxy_read_timeout 160;
            proxy_pass http://localhost:10000;
        }

        error_page 429 =229 /error429;

        location ~ \.php$ {
            limit_rate_after 1k;
            limit_rate 2k;
            limit_req zone=attackzone burst=2;
            limit_req_status 429;
            keepalive_timeout 0;
            root /var/www/html/<redacted>/;
            default_type "application/xml";
            add_header Content-Encoding "br";
            try_files /<redacted-payload-filename> =400;
        }

        location = /error429 {
            return 229 "Too many requests.";
        }
    }
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kaushalshriyan at gmail.com Tue Jun 16 14:46:28 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Tue, 16 Jun 2020 20:16:28 +0530
Subject: Testing the performance of NGINX for both http and https traffic.
Message-ID:

Hi, I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core) and have hosted my website on nginx for both http (port 80) and https (port 443) traffic. Is there a way to find out the below mentioned performance metrics?

1. Measure Nginx webserver performance for both http and https traffic.
2. Measure system resources like CPU, Memory (RAM) and Network bandwidth consumed by Nginx web server to handle both http and https traffic.

Please let me know if you need any additional configurations and I look forward to hearing from you. Thanks in Advance.

Best Regards,

Kaushal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Tue Jun 16 22:20:43 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 17 Jun 2020 01:20:43 +0300
Subject: worker_connections allocates a lot of memory
In-Reply-To: <07faf68e5e52520de9a3ef04388a498f.NginxMailingListEnglish@forum.nginx.org>
References: <07faf68e5e52520de9a3ef04388a498f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200616222043.GX12747@mdounin.ru>

Hello!

On Tue, Jun 16, 2020 at 03:42:04AM -0400, shaharmor wrote:

> I noticed that while worker_connections is defined as the maximum number of
> connections per worker, nginx pre-allocates enough memory to handle all
> possible worker_connections, even before they are actually needed.
>
> For example, setting worker_connections to 10485760 causes nginx to take
> 4.3GB of memory upon init.
>
> Is this how it's supposed to be?

Yes.

> Is there a way to tell nginx to only allocate memory as needed?

No. Connection structures are small and cannot be freed, so nginx allocates them all on startup. This way it avoids memory fragmentation, and makes connection management easier and faster.

Note well that you cannot really use connection structures in nginx without corresponding kernel structures such as sockets and appropriate buffers, so using an arbitrarily large worker_connections value does not make sense. If you are seeing that it takes a significant amount of memory, most likely you've configured it incorrectly.

For example, some relevant numbers can be seen in the following blog post by WhatsApp:

https://blog.whatsapp.com/1-million-is-so-2011

The numbers in it suggest that about 64G of memory is needed to handle 2 million connections. Scaling this to 10 million as in your example gives about 256G, so 4G for worker_connections shouldn't be noticeable.
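As a back-of-the-envelope check on the figures in this thread (a sketch; the 4.3GB number is the one reported above, and the derived per-connection sizes are estimates, not measurements):

```python
# Userspace cost implied by the report above:
# 10485760 worker_connections -> ~4.3 GB resident at startup.
worker_connections = 10485760
startup_bytes = 4.3 * 2**30

per_conn_bytes = startup_bytes / worker_connections
print(round(per_conn_bytes))  # ~440 bytes per pre-allocated connection slot

# Kernel-side cost dwarfs this: the WhatsApp post needed ~64 GB
# for ~2 million connections, i.e. tens of kilobytes per connection.
kernel_per_conn_kb = 64 * 2**30 / 2_000_000 / 1024
print(round(kernel_per_conn_kb))  # ~34 KB per connection
```

So the few hundred bytes nginx pre-allocates per connection slot is roughly two orders of magnitude smaller than the kernel-side memory each real connection needs.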
--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Wed Jun 17 08:26:20 2020
From: nginx-forum at forum.nginx.org (JohnSmithers)
Date: Wed, 17 Jun 2020 04:26:20 -0400
Subject: HLS stream issue
Message-ID: <931744c78ef916fb2aefa1395699c4b3.NginxMailingListEnglish@forum.nginx.org>

QUESTION: What steps do I need to take to ensure .m3u8 will appear on JWPlayer?

Details:
1. I have set up the nginx-rtmp module with the current latest version following https://docs.peer5.com/guides/setting-up-hls-live-streaming-server-using-nginx/
2. I have connected successfully to the server and nginx was receiving a stream and successfully restreaming via hls. I received the .m3u8 restream on my browser.
3. I left it for one week. When I returned I found I could no longer receive .m3u8 streams.
4. I have tried many different pathways of installation but the most successful is still the one I followed above. Every new installation attempt fails at the hls stream pushing from the server.
5. JW Player supplies error code 232011 - Cannot load M3U8: Crossdomain access denied
6. Output from nginx hls (proving rtmp stream being received and converted):

total 12812
drwxr-xr-x 2 nobody www-data 4096 Jun 16 16:04 .
drwxr-xr-x 3 www-data www-data 4096 Jun 16 15:58 ..
-rw-r--r-- 1 nobody nogroup 837916 Jun 16 16:02 test-0.ts -rw-r--r-- 1 nobody nogroup 835848 Jun 16 16:02 test-1.ts -rw-r--r-- 1 nobody nogroup 1007116 Jun 16 16:04 test-10.ts -rw-r--r-- 1 nobody nogroup 1012380 Jun 16 16:04 test-11.ts -rw-r--r-- 1 nobody nogroup 1004860 Jun 16 16:04 test-12.ts -rw-r--r-- 1 nobody nogroup 1013508 Jun 16 16:04 test-13.ts -rw-r--r-- 1 nobody nogroup 242896 Jun 16 16:04 test-14.ts -rw-r--r-- 1 nobody nogroup 850512 Jun 16 16:03 test-2.ts -rw-r--r-- 1 nobody nogroup 826636 Jun 16 16:03 test-3.ts -rw-r--r-- 1 nobody nogroup 849948 Jun 16 16:03 test-4.ts -rw-r--r-- 1 nobody nogroup 830208 Jun 16 16:03 test-5.ts -rw-r--r-- 1 nobody nogroup 837540 Jun 16 16:03 test-6.ts -rw-r--r-- 1 nobody nogroup 902588 Jun 16 16:03 test-7.ts -rw-r--r-- 1 nobody nogroup 1011440 Jun 16 16:03 test-8.ts -rw-r--r-- 1 nobody nogroup 1012756 Jun 16 16:04 test-9.ts -rw-r--r-- 1 nobody nogroup 448 Jun 16 16:04 test.m3u8 7. Configuration file: ++++++++++++++ worker_processes auto; events { worker_connections 1024; } # RTMP configuration rtmp { server { listen 1935; # Listen on standard RTMP port chunk_size 4000; application show { live on; # Turn on HLS hls on; hls_path /nginx/hls/; hls_fragment 3; hls_playlist_length 60; # disable consuming the stream from nginx as rtmp deny play all; } } } http { sendfile off; tcp_nopush on; # aio on; directio 512; default_type application/octet-stream; server { listen 8080; location / { # Disable cache add_header 'Cache-Control' 'no-cache'; # CORS setup add_header 'Access-Control-Allow-Origin' '*' always; add_header 'Access-Control-Expose-Headers' 'Content-Length'; # allow CORS preflight requests if ($request_method = 'OPTIONS') { add_header 'Access-Control-Allow-Origin' '*'; add_header 'Access-Control-Max-Age' 1728000; add_header 'Content-Type' 'text/plain charset=UTF-8'; add_header 'Content-Length' 0; return 204; } types { application/dash+xml mpd; application/vnd.apple.mpegurl m3u8; video/mp2t ts; } root /nginx/; } } } 
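On the "Crossdomain access denied" error in point 5: one commonly suggested cause with JW Player is Flash-based playback falling back to Adobe's cross-domain policy check, which fetches /crossdomain.xml from the media server. Whether that applies here depends on the player setup, but a hedged sketch of serving a permissive policy from the 8080 server block above would be:

```nginx
# Hypothetical addition to the existing "listen 8080" server block:
# Flash-based players request /crossdomain.xml before loading
# cross-origin media; returning a permissive policy inline avoids
# keeping an extra file on disk.
location = /crossdomain.xml {
    default_type text/xml;
    return 200 '<?xml version="1.0"?><cross-domain-policy><allow-access-from domain="*" /></cross-domain-policy>';
}
```

A wide-open policy like this is only reasonable for public media; tighten the domain attribute otherwise.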
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288371,288371#msg-288371 From nginx-forum at forum.nginx.org Wed Jun 17 12:00:31 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 17 Jun 2020 08:00:31 -0400 Subject: HLS stream issue In-Reply-To: <931744c78ef916fb2aefa1395699c4b3.NginxMailingListEnglish@forum.nginx.org> References: <931744c78ef916fb2aefa1395699c4b3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4035c4662fe833a30478b68977225fbf.NginxMailingListEnglish@forum.nginx.org> Drop "Cannot load M3U8: Crossdomain access denied" in Google, plenty of solutions. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288371,288374#msg-288374 From k.bicknell at f5.com Thu Jun 18 00:47:57 2020 From: k.bicknell at f5.com (Kevin Bicknell) Date: Thu, 18 Jun 2020 00:47:57 +0000 Subject: Welcome to the nginx@nginx.org mailing list! Message-ID: <07BE105D-0B9C-479F-9066-CE882E77F66E@f5.com> Looking forward to being part of this list! From kaushalshriyan at gmail.com Thu Jun 18 02:40:06 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 18 Jun 2020 08:10:06 +0530 Subject: Testing the performance of NGINX for both http and https traffic. In-Reply-To: References: Message-ID: On Tue, Jun 16, 2020 at 8:16 PM Kaushal Shriyan wrote: > Hi, > > I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 > (Core) and have hosted my website on nginx for both http (port 80) and > https (port 443) traffic. Is there a way to find out the below mentioned > performance metrics. > > 1. Measure Nginx webserver performance for both http and https > traffic. > 2. Measure system resources like CPU, Memory (RAM) and Network > bandwidth consumed by Nginx web server to handle both http and https > traffic. > > Please let me know if you need any additional configurations and I look > forward to hearing from you. Thanks in Advance. 
> Best Regards,
>
> Kaushal
>

Hi, I would appreciate it if someone can pitch in on my earlier post to this mailing list. Thanks in advance and I look forward to hearing from you.

Best Regards,

Kaushal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pizwer88 at wp.pl Thu Jun 18 11:08:17 2020
From: pizwer88 at wp.pl (Pizab W)
Date: Thu, 18 Jun 2020 13:08:17 +0200
Subject: Nginx potentially leaking real filenames? (hopefully properly formatted)
Message-ID:

Hi,

I am experimenting with various ways of annoying bots and automated vulnerability scanners that reach my service. In one instance I am serving a recursive decompression bomb for all requests for .php files. Since none of my services run PHP, and never have, all such traffic can be safely assumed malicious.

Recently (a couple of months since first deployment) I started seeing repeated requests to the server trying to fetch the recursive decompression bomb by its real file name, which should have never been exposed anywhere.

Is it possible for nginx to leak the real file name? Through misconfiguration or other means?

I am using nginx (version 1.14.2-2+deb10u1) as a reverse proxy and for SSL termination. The custom application behind it is not aware of the existence of the decompression bomb and lives in its own completely separate directory tree. It never reads nor serves any files from the local server; all its data is in physically separate database and cache servers. While I cannot prove absence of vulnerabilities in this custom app, I have not found any evidence of it being used to (nor leaking) local directory contents. The decompression bomb does not contain its file name in its contents.

Given the above, I believe something in my nginx setup leaked the real file name of the decompression bomb.

I've tried using all request methods (GET, HEAD, PUT, POST, DELETE, CONNECT, OPTIONS, TRACE, PATCH) on the server from curl like the following:

    $ curl --verbose -X <method> <redacted>.com/index.php

and (as expected) none of the responses leaked the file name in any of the headers nor contents.

Below is a redacted and inlined version of my nginx configuration. There is only one server defined; the Debian default server config has been removed. The error code mapping is there to avoid triggering high error rate alerts when hit by hundreds of consecutive bot requests. I would appreciate any help in figuring out what I am doing wrong and how the <redacted-payload-filename> could have been leaked.

Thanks,
Pizab

# nginx.conf
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 165;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    limit_req_zone "php" zone=attackzone:10m rate=1r/s;

    ssl_certificate <redacted>;
    ssl_certificate_key <redacted>;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    server_tokens off;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    client_body_buffer_size 1M;

    server {
        listen 443 default_server ssl;
        listen 80;

        server_name <redacted>.com;
        rewrite_log on;

        location /.well-known/acme-challenge {
            alias /var/www/html/.well-known/acme-challenge;
        }

        location / {
            access_log /<redacted>/logs/nginx_access.log;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_connect_timeout 60;
            proxy_read_timeout 160;
            proxy_pass http://localhost:10000;
        }

        error_page 429 =229 /error429;

        location ~ \.php$ {
            limit_rate_after 1k;
            limit_rate 2k;
            limit_req zone=attackzone burst=2;
            limit_req_status 429;
            keepalive_timeout 0;
            root /var/www/html/<redacted>/;
            default_type "application/xml";
            add_header Content-Encoding "br";
            try_files /<redacted-payload-filename> =400;
        }

        location = /error429 {
            return 229 "Too many requests.";
        }
    }
}

From jay at gooby.org Thu Jun 18 12:58:37 2020
From: jay at gooby.org (Jay Caines-Gooby)
Date: Thu, 18 Jun 2020 13:58:37 +0100
Subject:
Curious problem with nginx, NFS and Chrome; Edge, Firefox & Safari are all fine.
Message-ID:

I wrote it up in some detail here: https://serverfault.com/questions/1021932/why-does-setting-nfs-sync-option-for-aws-efs-cause-nginx-chrome-to-chunk-or-brea

Intrigued if anyone can help me understand why

--
Jay Caines-Gooby
http://jay.gooby.org
jay at gooby.org
+44 (0)7956 182625
twitter, skype & aim: jaygooby
gtalk: jaygooby at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at lazygranch.com Thu Jun 18 15:04:07 2020
From: lists at lazygranch.com (lists)
Date: Thu, 18 Jun 2020 08:04:07 -0700
Subject: Nginx potentially leaking real filenames? (hopefully properly formatted)
In-Reply-To:
Message-ID: <4tducin2thg14liu3abvs6u3.1592492647569@lazygranch.com>

In theory not a problem, but look at the text on this page about placing root in location blocks.

https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/

I saw your first post and thought it was entertaining. Somebody needs to annoy those hackers. Since I don't use php I trap those requests in a map and return a 444. That way I can use some scripts and pull the hacker IPs out of the log file. If they come from a server, the IP space of that host gets blocked. There are no eyeballs at servers. But I do like the idea of feeding the hacker something unpleasant.

Original Message
From: pizwer88 at wp.pl
Sent: June 18, 2020 4:08 AM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: Nginx potentially leaking real filenames? (hopefully properly formatted)

Hi,

I am experimenting with various ways of annoying bots and automated vulnerability scanners that reach my service. In one instance I am serving a recursive decompression bomb for all requests for .php files. Since none of my services run PHP, and never have, all such traffic can be safely assumed malicious.

Recently (a couple of months since first deployment) I started seeing repeated requests to the server trying to fetch the recursive decompression bomb by its real file name, which should have never been exposed anywhere.

Is it possible for nginx to leak the real file name? Through misconfiguration or other means?

I am using nginx (version 1.14.2-2+deb10u1) as a reverse proxy and for SSL termination. The custom application behind it is not aware of the existence of the decompression bomb and lives in its own completely separate directory tree. It never reads nor serves any files from the local server; all its data is in physically separate database and cache servers. While I cannot prove absence of vulnerabilities in this custom app, I have not found any evidence of it being used to (nor leaking) local directory contents. The decompression bomb does not contain its file name in its contents.

Given the above, I believe something in my nginx setup leaked the real file name of the decompression bomb. I've tried using all request methods (GET, HEAD, PUT, POST, DELETE, CONNECT, OPTIONS, TRACE, PATCH) on the server from curl like the following:

    $ curl --verbose -X <method> <redacted>.com/index.php

and (as expected) none of the responses leaked the file name in any of the headers nor contents.

Below is a redacted and inlined version of my nginx configuration. There is only one server defined; the Debian default server config has been removed. The error code mapping is there to avoid triggering high error rate alerts when hit by hundreds of consecutive bot requests. I would appreciate any help in figuring out what I am doing wrong and how the <redacted-payload-filename> could have been leaked.

Thanks,
Pizab

# nginx.conf
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 165;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    limit_req_zone "php" zone=attackzone:10m rate=1r/s;

    ssl_certificate <redacted>;
    ssl_certificate_key <redacted>;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    server_tokens off;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    client_body_buffer_size 1M;

    server {
        listen 443 default_server ssl;
        listen 80;

        server_name <redacted>.com;
        rewrite_log on;

        location /.well-known/acme-challenge {
            alias /var/www/html/.well-known/acme-challenge;
        }

        location / {
            access_log /<redacted>/logs/nginx_access.log;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_connect_timeout 60;
            proxy_read_timeout 160;
            proxy_pass http://localhost:10000;
        }

        error_page 429 =229 /error429;

        location ~ \.php$ {
            limit_rate_after 1k;
            limit_rate 2k;
            limit_req zone=attackzone burst=2;
            limit_req_status 429;
            keepalive_timeout 0;
            root /var/www/html/<redacted>/;
            default_type "application/xml";
            add_header Content-Encoding "br";
            try_files /<redacted-payload-filename> =400;
        }

        location = /error429 {
            return 229 "Too many requests.";
        }
    }
}
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From marcin.wanat at gmail.com Thu Jun 18 15:19:06 2020
From: marcin.wanat at gmail.com (Marcin Wanat)
Date: Thu, 18 Jun 2020 17:19:06 +0200
Subject: Nginx potentially leaking real filenames? (hopefully properly formatted)
In-Reply-To: <4tducin2thg14liu3abvs6u3.1592492647569@lazygranch.com>
References: <4tducin2thg14liu3abvs6u3.1592492647569@lazygranch.com>
Message-ID:

On Thu, Jun 18, 2020 at 5:04 PM lists wrote:

> In theory not a problem, but look at the text on this page about placing
> root in location blocks.
>
> https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
>
> I saw your first post and thought it was entertaining. Somebody needs to
> annoy those hackers. Since I don't use php I trap those requests in a map
> and return a 444. That way I can use some scripts and pull the hacker IPs
> out of the log file. If they come from a server, the IP space of that host
> gets blocked. There are no eyeballs at servers. But I do like the idea of
> feeding the hacker something unpleasant.

If you have enough resources you can send them a 200 OK response with a 10MB html file with an enormous DOM tree, using chunked transfer and no content-length header. They will lose a huge amount of resources downloading and parsing this (probably multiple times).
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From L.Meren at f5.com Thu Jun 18 17:58:07 2020
From: L.Meren at f5.com (Libby Meren)
Date: Thu, 18 Jun 2020 17:58:07 +0000
Subject: Nginx user survey
Message-ID: <03159C28-CB19-4CD2-8142-611249C3A8F8@f5.com>

Hello NGINX community,

Welcome to the 2020 NGINX User Survey.
Over the past six years, you've helped us improve our solutions and evolve our product roadmap. Please continue to share your experiences and ideas with us; we value your feedback.

https://nkadmin.typeform.com/to/dMGPBn#source=email

Thank you for helping shape the future of NGINX.

Regards,
The NGINX team
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Fri Jun 19 07:29:56 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Fri, 19 Jun 2020 03:29:56 -0400
Subject: TCP SSL termination issue on Nginx - for JDBC client
Message-ID: <35dcb1b0dac127b68f2ecae949ee4d6a.NginxMailingListEnglish@forum.nginx.org>

Hi there, I am exploring the features of Nginx and doing a POC with all the possible use cases. If all goes well, there will probably be a huge investment in Nginx to use it in our cloud-based architecture. Currently I am exploring TCP SSL termination on Nginx for an SSL connection from a Java JDBC client. I am facing issues, and any guidance would speed up my POC and help me complete it. I'm using nginx on Windows 10 and using the opensource version.
Error.log:
###################
2020/06/19 11:51:51 [debug] 12568#16420: timer delta: 17
2020/06/19 11:51:51 [debug] 12568#16420: posted event 03004310
2020/06/19 11:51:51 [debug] 12568#16420: *1 delete posted event 03004310
2020/06/19 11:51:51 [debug] 12568#16420: *1 SSL handshake handler: 0
2020/06/19 11:51:51 [debug] 12568#16420: *1 SSL_do_handshake: -1
2020/06/19 11:51:51 [debug] 12568#16420: *1 SSL_get_error: 5
2020/06/19 11:51:51 [info] 12568#16420: *1 peer closed connection in SSL handshake while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:1592
2020/06/19 11:51:51 [debug] 12568#16420: *1 finalize stream session: 500
2020/06/19 11:51:51 [debug] 12568#16420: *1 stream log handler
2020/06/19 11:51:51 [debug] 12568#16420: *1 close stream connection: 368
2020/06/19 11:51:51 [debug] 12568#16420: *1 event timer del: 368: 3409871779
2020/06/19 11:51:51 [debug] 12568#16420: *1 select del event fd:368 ev:768

Error from JDBC Client:
###################
.....
.....
trigger seeding of SecureRandom
done seeding SecureRandom
Using SSLEngineImpl.
SQL State: 08006
IO Error: The Network Adapter could not establish the connection

Java code:
###################
....
....
String url = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=localhost)(PORT=1592))(CONNECT_DATA=(SERVICE_NAME=xe)))";
String user = "sys as sysdba";
String pwd = "1234";
Properties props = new Properties();
props.setProperty("url", url);
props.setProperty("user", user);
props.setProperty("password", pwd);
props.setProperty("oracle.net.ssl_cipher_suites", "(TLS_DH_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256)");
.....
.....
try (Connection conn = DriverManager.getConnection(url, props)) { // failing on this line of code
....
....

Nginx.conf:
###################
upstream db_backend {
    server localhost:1521; # Local database server, which is not SSL enabled.
}
server {
    listen 1592 ssl;
    listen [::]:1592 ssl;
    proxy_pass db_backend;
    ssl_certificate C:/Users/SivaPannier/Documents/Siva/IBM/Software/openSSL/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key C:/Users/SivaPannier/Documents/Siva/IBM/Software/openSSL/ssl/nginx-selfsigned.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 4h;
    ssl_handshake_timeout 30s;
}

Thanks,
Siva P

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288400,288400#msg-288400

From nginx-forum at forum.nginx.org Fri Jun 19 13:11:28 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Fri, 19 Jun 2020 09:11:28 -0400
Subject: Nginx Opensource API feature?
Message-ID: <768e852b02cda701fff461bc7c495f1d.NginxMailingListEnglish@forum.nginx.org>

Hi.. I am looking for APIs on Nginx Opensource, to monitor, get the status of, and dynamically configure nginx.conf files. Does the opensource version have it? Please confirm.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288402,288402#msg-288402

From francis at daoine.org Fri Jun 19 13:17:45 2020
From: francis at daoine.org (Francis Daly)
Date: Fri, 19 Jun 2020 14:17:45 +0100
Subject: Testing the performance of NGINX for both http and https traffic.
In-Reply-To: References: Message-ID: <20200619131745.GX20939@daoine.org>

On Tue, Jun 16, 2020 at 08:16:28PM +0530, Kaushal Shriyan wrote:

Hi there,

> Is there a way to find out the below mentioned
> performance metrics.
>
> 1. Measure Nginx webserver performance for both http and https traffic.
> 2. Measure system resources like CPU, Memory (RAM) and Network bandwidth
> consumed by Nginx web server to handle both http and https traffic.

There is not much nginx-specific in the question here. For measuring webserver performance, your favourite web search engine will probably give lots of links. "ab" and "siege" have been around for a while.
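As an illustration only (the URL and request counts below are placeholders, not from the original message), typical invocations of those tools look like:

```shell
# ab: 1000 requests, 10 concurrent, once against plain HTTP and once
# against HTTPS; compare the "Requests per second" lines of the two runs.
ab -n 1000 -c 10 http://www.example.com/
ab -n 1000 -c 10 https://www.example.com/

# siege takes a similar shape: 10 concurrent users, 100 repetitions each.
siege -c 10 -r 100 https://www.example.com/
```

Both tools print latency percentiles and throughput, which covers the "webserver performance" half of the question; resource usage still has to be sampled on the server side as described below.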
For measuring system resource usage, search for generic tools suitable for your operating system. You could periodically run commands like "uptime" and "free" and see how they change as the load changes. Or use something like "sar" to handle the "periodic" part. "network traffic" during a web request is probably most easily counted by the client. If you do care about network retries and the like, then monitoring the interface traffic using something like "bmon" or well-known snmp polling may be useful. Overall - measuring performance or resource metrics is mostly unrelated to the thing being tested. The hard part is usually deciding what you do and do not want to measure. Then find (or build) a tool that does what you want and does not do what you do not want. So you may find it useful to investigate measuring tools for your system, while waiting to see if there is a more specific answer provided on the list. Good luck with it, f -- Francis Daly francis at daoine.org From r at roze.lv Fri Jun 19 15:17:26 2020 From: r at roze.lv (Reinis Rozitis) Date: Fri, 19 Jun 2020 18:17:26 +0300 Subject: Nginx Opensource API feature? In-Reply-To: <768e852b02cda701fff461bc7c495f1d.NginxMailingListEnglish@forum.nginx.org> References: <768e852b02cda701fff461bc7c495f1d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000b01d6464c$be000720$3a001560$@roze.lv> > I am looking for APIs on Nginx Opensource. To monitor, get status and > dynamic configuration of nginx.conf files. > > Does the opensource version has it, please confirm? 
For the open source version there is the stub_status module http://nginx.org/en/docs/http/ngx_http_stub_status_module.html

There are several 3rd-party modules (like nginx-module-vts) which give more detailed statistics about upstreams/requests (https://www.nginx.com/resources/wiki/modules/)

Depending on your needs for dynamic configuration, you might also look at Unit (https://unit.nginx.org/configuration/#configuration-mgmt )

rr

From nginx-forum at forum.nginx.org Fri Jun 19 16:55:32 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Fri, 19 Jun 2020 12:55:32 -0400
Subject: Nginx Opensource API feature?
In-Reply-To: <000b01d6464c$be000720$3a001560$@roze.lv>
References: <000b01d6464c$be000720$3a001560$@roze.lv>
Message-ID: <4f226a39f35fc93dfd31624f2bc59f3f.NginxMailingListEnglish@forum.nginx.org>

Thanks for your inputs.. I will go through the APIs provided in the wiki/modules.

Can Unit be used as a reverse proxy server like what we do with Nginx?

I want to update my Nginx reverse proxy server dynamically (& automatically) without any downtime, whenever the underlying services scale up & down automatically. I understand that with NGINX Plus it is possible with the support of the APIs it provides. I am looking for a complete opensource solution.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288402,288409#msg-288409

From r at roze.lv Fri Jun 19 19:05:16 2020
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 19 Jun 2020 22:05:16 +0300
Subject: Nginx Opensource API feature?
In-Reply-To: <4f226a39f35fc93dfd31624f2bc59f3f.NginxMailingListEnglish@forum.nginx.org>
References: <000b01d6464c$be000720$3a001560$@roze.lv> <4f226a39f35fc93dfd31624f2bc59f3f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <001b01d6466c$91bd1a70$b5374f50$@roze.lv>

> Can Unit be used as a reverse proxy server like what we do with Nginx?

It can.
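For illustration (not part of the original reply), a minimal Unit configuration that proxies every request to a single backend might look roughly like this; the listener port and backend address are placeholders, and the exact schema should be checked against the Unit documentation for your version:

```json
{
    "listeners": {
        "*:8080": { "pass": "routes" }
    },
    "routes": [
        { "action": { "proxy": "http://127.0.0.1:9000" } }
    ]
}
```

The config is uploaded over Unit's control socket, e.g. something along the lines of `curl -X PUT --unix-socket /var/run/control.unit.sock -d @config.json http://localhost/config` (the socket path varies by packaging).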
> I want to update my Nginx reverse proxy server dynamically (&
> automatically) without any downtime, whenever the underlying services
> scale up & down automatically.

In general nginx reloads configuration gracefully, so one option is just to write the config files with an application/script and do reloads. Another way is to find an appropriate module which allows dynamic upstream changes; for example, there are modules which allow backends to be determined via dns (nginx itself can also use host-based upstreams, but in a more static way, as the dns resolution is done only on startup and config reload (there are some hacks with variables, but it's not as elegant)). The next level would be to use Openresty and something like https://github.com/openresty/lua-nginx-module/#balancer_by_lua_block where you can do whatever comes into your mind (as far as you learn to code Lua a bit).

> I understand that with nginx + it is possible with the support of APIs it
> provides. I am looking for a complete opensource solution.

Yes, the commercial version has an inbuilt dynamic backend change feature.

rr

From josemar at dosul.digital Fri Jun 19 19:13:17 2020
From: josemar at dosul.digital (Josemar Odia)
Date: Fri, 19 Jun 2020 16:13:17 -0300
Subject: Centos 7 + Nginx + PageSpeed
Message-ID: <6f6d5053-b6a0-53e5-8e31-f085b55753d9@dosul.digital>

Hello, I'm new to nginx. Forgive me if I'm sending the question in the wrong place.

I performed the compilation and installation of Nginx, following this tutorial: https://www.linode.com/docs/web-servers/nginx/build-nginx-with-pagespeed-from-source/

At first it seemed to be all right, but when checking the status of nginx I have the following problem:

$ sudo systemctl status nginx
● nginx.service - The NGINX HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/nginx.service.d
           └─override.conf
   Active: active (running) since Fri 2020-06-19 15:44:21 -03; 4s ago
  Process: 5717 ExecStop=/bin/kill -s QUIT $MAINPID (code=exited, status=0/SUCCESS)
  Process: 5722 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 5721 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
 Main PID: 5724 (nginx)
   CGroup: /system.slice/nginx.service
           ├─5724 nginx: master process /usr/sbin/nginx
           └─5725 nginx: worker process

Jun 19 15:44:21 brain01.dosuldigital.com.br systemd[1]: Stopped The NGINX HTTP and reverse proxy server.
Jun 19 15:44:21 brain01.dosuldigital.com.br systemd[1]: Starting The NGINX HTTP and reverse proxy server...
Jun 19 15:44:21 brain01.dosuldigital.com.br nginx[5721]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Jun 19 15:44:21 brain01.dosuldigital.com.br nginx[5721]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Jun 19 15:44:21 brain01.dosuldigital.com.br systemd[1]: Failed to parse PID from file /run/nginx.pid: Invalid argument
Jun 19 15:44:21 brain01.dosuldigital.com.br systemd[1]: Started The NGINX HTTP and reverse proxy server.

Could you help me solve it?

From c at tunnel53.net Sat Jun 20 08:06:39 2020
From: c at tunnel53.net (=?UTF-8?Q?Carl_Winb=C3=A4ck?=)
Date: Sat, 20 Jun 2020 10:06:39 +0200
Subject: Force Nginx to log error?
In-Reply-To: References: Message-ID:

On Sun, 14 Jun 2020 at 17:34, Carl Winbäck wrote:

> It would be useful to me to be able to trigger such messages, so that
> I can verify that they are sent to the right destination.

I found a simple way to trigger a message to the error log, so I thought I'd share it in case someone also has use for this.

If the root of your site is /foo/mysite/ create an empty directory there, e.g. /foo/mysite/xyzzy

Do *not* create any index file there, e.g. no index.html or index.php

Now you request that URL via any client of your choice, e.g.
curl:

curl https://mysite.example.com/xyzzy/

That request will trigger a message to the error log that looks like this:

2020/06/20 07:30:04 [error] 694#694: *835 directory index of "/foo/mysite/xyzzy/" is forbidden, client: 172.16.3.248, server: , request: "GET / HTTP/1.0"

Best regards,
Carl

From nginx-forum at forum.nginx.org Mon Jun 22 01:41:03 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Sun, 21 Jun 2020 21:41:03 -0400
Subject: Nginx Opensource API feature?
In-Reply-To: <001b01d6466c$91bd1a70$b5374f50$@roze.lv>
References: <001b01d6466c$91bd1a70$b5374f50$@roze.lv>
Message-ID:

Thanks rr!!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288402,288424#msg-288424

From nginx-forum at forum.nginx.org Mon Jun 22 01:42:32 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Sun, 21 Jun 2020 21:42:32 -0400
Subject: TCP SSL termination issue on Nginx - for JDBC client
In-Reply-To: <35dcb1b0dac127b68f2ecae949ee4d6a.NginxMailingListEnglish@forum.nginx.org>
References: <35dcb1b0dac127b68f2ecae949ee4d6a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <70d615c9818ee9b3901fd07b54e6e099.NginxMailingListEnglish@forum.nginx.org>

Hi.. Can someone please guide me on this? Thanks..

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288400,288425#msg-288425

From r at roze.lv Mon Jun 22 11:56:55 2020
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 22 Jun 2020 14:56:55 +0300
Subject: TCP SSL termination issue on Nginx - for JDBC client
In-Reply-To: <35dcb1b0dac127b68f2ecae949ee4d6a.NginxMailingListEnglish@forum.nginx.org>
References: <35dcb1b0dac127b68f2ecae949ee4d6a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <000201d6488c$39f11310$add33930$@roze.lv>

I'm not very into Java, but you might get more details if you add -Djavax.net.debug=SSL,handshake or -Djavax.net.debug=all

The current error is not very explanatory (at least to me), and from the nginx side the client just closes the connection.
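For reference, a sketch of how that debug flag is passed when launching the client (the jar and class names here are placeholders, not from the thread; the JSSE documentation lists the option values in lowercase, so lowercase is the safer form):

```shell
# Re-run the failing JDBC client with TLS handshake tracing enabled;
# the trace is printed to the client's stdout/stderr.
java -Djavax.net.debug=ssl,handshake -cp app.jar:ojdbc8.jar com.example.JdbcSslTest
```

The handshake trace usually shows exactly which step fails (e.g. no shared cipher suite, or an untrusted server certificate), which narrows down whether the problem is on the nginx or the JDBC side.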
You could test the nginx side with cipherscan https://github.com/mozilla/cipherscan (not sure if there is an alternative for windows, but maybe it's possible to run it in WSL) to see if the problem is with nginx or the JDBC client.

Also I would try without the DHE ciphers (and widen the available ones, e.g. add TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA)

rr

From nginx-forum at forum.nginx.org Mon Jun 22 17:21:00 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Mon, 22 Jun 2020 13:21:00 -0400
Subject: TCP SSL termination issue on Nginx - for JDBC client
In-Reply-To: <000201d6488c$39f11310$add33930$@roze.lv>
References: <000201d6488c$39f11310$add33930$@roze.lv>
Message-ID: <56ca70409c197691d984cd8df873fbac.NginxMailingListEnglish@forum.nginx.org>

Thanks a lot rr! for your suggestions.. my problem was solved.. I added the cipher suites as the ones you gave..

props.setProperty("oracle.net.ssl_cipher_suites", "(TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA)");

Also imported the server certificate to 'cacerts' with the below command and it worked after that.. :)

keytool -import -alias localhost -file C:/Users//openSSL/ssl/certs/nginx-selfsigned.crt -storetype JKS -keystore cacerts

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288400,288437#msg-288437

From nginx-forum at forum.nginx.org Thu Jun 25 15:33:29 2020
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Thu, 25 Jun 2020 11:33:29 -0400
Subject: Removing Null Character from Query Parameter
Message-ID:

The Nginx upstream is returning 400 Bad Request if a null character is passed in the request as part of the uri or query params.

Is there a way the null character can be removed from the request before proxying it to the upstream?

It's only known from the access logs that a null character is being passed in the request as \x00 and causing the failure.

How to identify the null character and remove it?
Tried the below options, but it's not able to identify the null character:

if ($args ~* (.*)(\x00)(.*)) {
    set $args $1$3;
}

Nginx returns the below error.

Error Log
2020/06/25 20:20:43 [info] 19838#19838: *11985 client sent invalid request while reading client request line, client: 10.49.120.61, server: test.com, request: "HEAD /folder/Test.m3u8?uid=abc123 HTTP/1.0"

Access log
10.49.120.61 | - | test.com | [25/Jun/2020:20:20:43 +0530] | - | "HEAD /folder/Test.m3u8?uid=abc123\x00 HTTP/1.0" | 400 | 0 | "-" | "-" | 0.001 | - | - | - | "- - - -" | http | - | -| "-"

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288455,288455#msg-288455

From mdounin at mdounin.ru Thu Jun 25 16:48:20 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 25 Jun 2020 19:48:20 +0300
Subject: Removing Null Character from Query Parameter
In-Reply-To: References: Message-ID: <20200625164820.GP12747@mdounin.ru>

Hello!

On Thu, Jun 25, 2020 at 11:33:29AM -0400, anish10dec wrote:

> Nginx Upstream returning 400 Bad Request if null character is being passed
> in the request as part of uri or query params.
>
> Is there a way Null Character can be removed from request before proxying
> it to upstream.
>
> Its only known from access logs that null character is being passed in
> request as \x00 and causing the failure

The null character is not allowed in the HTTP request line, and hence nginx returns a 400 (Bad Request) error.

> How to identify the Null Character and remove it ?

You can't. Instead, consider fixing the client to generate HTTP requests correctly.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Thu Jun 25 18:02:35 2020
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Thu, 25 Jun 2020 14:02:35 -0400
Subject: Removing Null Character from Query Parameter
In-Reply-To: <20200625164820.GP12747@mdounin.ru>
References: <20200625164820.GP12747@mdounin.ru>
Message-ID:

Thanks Maxim

Actually, the null character is not being generated by the client.
We are using the below module to validate the tokens: https://github.com/kaltura/nginx-akamai-token-validate-module

This is being caused by the akamai_token_validate_strip_token directive, which strips the token and forwards the request to the upstream server. While stripping the token and passing the remaining request to the upstream, it appends a null character at the end. If there is no additional query param in the request apart from the token, then there is no issue in handling.

http://10.49.120.61/folder/Test.m3u8?token=st=1593095161~exp=1593112361~acl=/*~hmac=60d9c29a65d837b203225318d1c69e205037580a08bf4417d4a1e237e5a2f5b6&uid=abc123

The request passed to the upstream is as below, which is causing the problem:

GET /folder/Test.m3u8?uid=abc123\x00

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288455,288462#msg-288462

From mdounin at mdounin.ru Thu Jun 25 18:18:28 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 25 Jun 2020 21:18:28 +0300
Subject: Removing Null Character from Query Parameter
In-Reply-To: References: <20200625164820.GP12747@mdounin.ru>
Message-ID: <20200625181828.GR12747@mdounin.ru>

Hello!

On Thu, Jun 25, 2020 at 02:02:35PM -0400, anish10dec wrote:

> Thanks Maxim
>
> Actually null character is not being generated by Client .
>
> We are using below module to validate the tokens
> https://github.com/kaltura/nginx-akamai-token-validate-module
>
> This is being caused by akamai_token_validate_strip_token directive which
> strips the token and forwards request to upstream server.
>
> While striping the token and passing the remaining request to upstream
> stream its appending null character at the end.
> If there is no any additional query param in request apart from token , then
> there is no issue in handling.
> > http://10.49.120.61/folder/Test.m3u8?token=st=1593095161~exp=1593112361~acl=/*~hmac=60d9c29a65d837b203225318d1c69e205037580a08bf4417d4a1e237e5a2f5b6&uid=abc123 > > Request passed to upstream is as below which is causing problem > > GET /folder/Test.m3u8?uid=abc123\x00 So the module is broken and needs to be fixed. -- Maxim Dounin http://mdounin.ru/ From jeff.dyke at gmail.com Fri Jun 26 01:21:23 2020 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Thu, 25 Jun 2020 21:21:23 -0400 Subject: Removing Null Character from Query Parameter In-Reply-To: <20200625181828.GR12747@mdounin.ru> References: <20200625164820.GP12747@mdounin.ru> <20200625181828.GR12747@mdounin.ru> Message-ID: no offense to the OP, but i love Maxim. Direct and to the point, and in this case, as usual, he is correct. You should not look at what the requester wants, before understanding what the sender should provide. On Thu, Jun 25, 2020 at 2:18 PM Maxim Dounin wrote: > Hello! > > On Thu, Jun 25, 2020 at 02:02:35PM -0400, anish10dec wrote: > > > Thanks Maxim > > > > Actually null character is not being generated by Client . > > > > We are using below module to validate the tokens > > https://github.com/kaltura/nginx-akamai-token-validate-module > > > > This is being caused by akamai_token_validate_strip_token directive which > > strips the token and forwards request to upstream server. > > > > While striping the token and passing the remaining request to upstream > > stream its appending null character at the end. > > If there is no any additional query param in request apart from token , > then > > there is no issue in handling. > > > > > http://10.49.120.61/folder/Test.m3u8?token=st=1593095161~exp=1593112361~acl=/*~hmac=60d9c29a65d837b203225318d1c69e205037580a08bf4417d4a1e237e5a2f5b6&uid=abc123 > > > > Request passed to upstream is as below which is causing problem > > > > GET /folder/Test.m3u8?uid=abc123\x00 > > So the module is broken and needs to be fixed. 
> > --
> > Maxim Dounin
> > http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Fri Jun 26 05:59:04 2020
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Fri, 26 Jun 2020 01:59:04 -0400
Subject: Removing Null Character from Query Parameter
In-Reply-To: References: Message-ID: <2cd97568a02f023b67ebd3da82442041.NginxMailingListEnglish@forum.nginx.org>

Thanks Maxim

Will fix the module; I was just looking for a workaround, in case it could be handled by simply removing the null character.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288455,288472#msg-288472

From lathaxyz012 at gmail.com Fri Jun 26 10:48:59 2020
From: lathaxyz012 at gmail.com (Latha Appanna)
Date: Fri, 26 Jun 2020 16:18:59 +0530
Subject: How to call a named location in nginx upon 200 status from upstream server
Message-ID:

Hello,

I want to call a REST endpoint after a successful (200/201) status from the upstream server. For error cases, I know we can use error_page/try_files etc., but for success cases I did not find any directive to use. I read about `mirror`, which can be used to call another server before making the call to the upstream server; I'm looking for something opposite to mirror, which can be used to make a call at the end, after a successful response from the upstream server.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Fri Jun 26 15:11:24 2020
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Fri, 26 Jun 2020 11:11:24 -0400
Subject: Removing Null Character from Query Parameter
In-Reply-To: <20200625181828.GR12747@mdounin.ru>
References: <20200625181828.GR12747@mdounin.ru>
Message-ID:

The module is fixed now: https://github.com/kaltura/nginx-akamai-token-validate-module/issues/18

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288455,288478#msg-288478

From alex.zonimi at gmail.com Sat Jun 27 21:49:51 2020
From: alex.zonimi at gmail.com (Zonimi)
Date: Sat, 27 Jun 2020 18:49:51 -0300
Subject: .htaccess
Message-ID:

Hello, I would like to know how to convert this to nginx:

RewriteEngine On
RewriteCond %{REQUEST_URI} !(/$|\.)
RewriteRule (.*) %{REQUEST_URI}/ [R=301,L]

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d

RewriteRule . index.php [L]

Already tried:

location / { try_files $uri $uri/ /index.php; }

and

if ($uri !~ "(/$|\.)"){
    set $rule_0 1$rule_0;
}
if ($rule_0 = "1"){
    rewrite /(.*) $uri/ permanent;
}
if (!-f $request_filename){
    set $rule_1 1$rule_1;
}
if (!-d $request_filename){
    set $rule_1 2$rule_1;
}
if ($rule_1 = "21"){
    rewrite /. /index.php;
}
location / { try_files $uri $uri/ /index.php; }

And none of them works.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Mon Jun 29 08:43:53 2020
From: nginx-forum at forum.nginx.org (stu_cambridge)
Date: Mon, 29 Jun 2020 04:43:53 -0400
Subject: Problem with nginx rate limiting not working when using white listing
Message-ID: <56f06fec5cfaa1e5df2ea391209e4f4c.NginxMailingListEnglish@forum.nginx.org>

I have nginx rate-limiting working when using the following:

limit_req_zone $binary_remote_addr zone=mylimit:20m rate=50r/m;

I now want to apply it to certain IPs, so I've changed it to:

geo $limit {
    default 1;
    1.2.3.4/32 0;
}

map $limit $mylimit {
    0 "";
    1 $binary_remote_addr;
}

limit_req_zone $my_limit zone=mylimit:20m rate=50r/m;

Following the example here: https://www.nginx.com/blog/rate-limiting-nginx/

But the rate limit is ignored even when coming from a different IP than the one in the config.

This is using nginx version: nginx/1.14.0 (Ubuntu)

In the server block I have limit_req zone=mylimit burst=15 nodelay; which was working before.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288487,288487#msg-288487

From igor at sysoev.ru Mon Jun 29 13:51:04 2020
From: igor at sysoev.ru (Igor Sysoev)
Date: Mon, 29 Jun 2020 16:51:04 +0300
Subject: Please take 10 minutes to give us your feedback
Message-ID: <690C6788-3CC7-482D-B793-C1C65AF594E4@sysoev.ru>

Dear community member,

As you probably know, we've run an annual user survey for NGINX for the past six years. Your feedback directly impacts the future roadmap and vision for NGINX, NGINX Unit, and NGINX Plus. We truly value your thoughts and opinions, and look forward to hearing from you each year.

The NGINX 2020 User Survey is open for a few more days. If you've already completed the survey, my sincere thanks. If you haven't yet had a chance, please continue to share your experience with us.

Please take the 10 minute survey: https://emails.nginx.com/XKT03CM34800Cwg9E028SI0

Thank you for shaping the future of NGINX.
--
Igor Sysoev
http://nginx.com

From themadbeaker at gmail.com Mon Jun 29 14:43:22 2020
From: themadbeaker at gmail.com (J.R.)
Date: Mon, 29 Jun 2020 09:43:22 -0500
Subject: Problem with nginx rate limiting not working when using white listing
Message-ID:

One place you have $mylimit and another is $my_limit (with the underscore).

From nginx-forum at forum.nginx.org Mon Jun 29 19:50:11 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Mon, 29 Jun 2020 15:50:11 -0400
Subject: Nginx as reverse proxy in Openshift Cluster
Message-ID: <60024e3db2ecf2ffd1c893a2325447c3.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am new to Nginx, and I could validate some of the reverse proxy scenarios on Windows and Ubuntu machines successfully. However, I am facing challenges validating them on the Openshift cluster platform.

I am new to Docker/Kubernetes/Openshift. I am able to deploy the below Nginx image in Openshift and hit the welcome page url successfully.

"https://github.com/sclorg/nginx-ex"

Now I want to achieve the below things on the image.

1) Want to know what is the nginx version deployed in the above image.
2) Want to customize the nginx.conf file frequently and update my container with the latest changes.
3) Want to use the latest version of nginx.
4) Want to know how to locate the nginx.conf file and other nginx files deployed through the above image.
5) When I open a POD terminal, I can see Nginx 1.12 files are deployed in /etc/nginx. Can I upgrade this to the latest version? Are these files created by the deployment of the above image?

Please clarify & help me on the above queries.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288492,288492#msg-288492

From al-nginx at none.at Mon Jun 29 23:32:13 2020
From: al-nginx at none.at (Aleksandar Lazic)
Date: Tue, 30 Jun 2020 01:32:13 +0200
Subject: Nginx as reverse proxy in Openshift Cluster
In-Reply-To: <60024e3db2ecf2ffd1c893a2325447c3.NginxMailingListEnglish@forum.nginx.org>
References: <60024e3db2ecf2ffd1c893a2325447c3.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hi,

On 29.06.20 21:50, siva.pannier wrote:

> Hi,
>
> I am new to Nginx and I could validate some of the reverse proxy scenarios
> in Windows and Ubuntu machine successfully. However I am facing challenges
> on validating them on Openshift cluster platform.
>
> I am new to Docker/Kubernetes/Openshift. I am able to deploy the below Nginx
> image in Openshift and hit the welcome page url successfully.
>
> "https://github.com/sclorg/nginx-ex"
>
> Now I want to achieve the below things on the image.
>
> 1) Want to know what is the nginx version deployed on the above image.

You can see the image version in this line in the template: https://github.com/sclorg/nginx-ex/blob/master/openshift/templates/nginx.json#L232-L236

If you deployed nginx with the README command, then you will have nginx 1.12:

`oc new-app centos/nginx-112-centos7~https://github.com/sclorg/nginx-ex`
              ^^^^^^^^^^

> 2) Want to customize the nginx.conf file frequently and update my container
> with the latest changes.

My suggestion is to use a configmap and mount it to the nginx conf dir. The commands are for the version above:

oc create configmap nginx-conf --from-file=nginx.conf=/path/to/your/local/nginx.conf

oc set volume dc/ --add --name=nginx-conf \
  --mount-path=/etc/opt/rh/rh-nginx112/nginx/nginx.conf --type=configmap \
  --configmap-name=nginx-conf

> 3) Want to use the latest version of nginx.

This requires some more tasks. My suggestion:

* Create a Dockerfile with a "FROM nginx"
* Create a nginx conf and a configmap as described above.
* Put everything into a git repository
* Call `oc new-app git-repo`

OpenShift will then create a BC/DC/SVC for you. It is necessary to create a route to access the nginx setup:

`oc create route edge nginx --service=`

https://docs.okd.io/3.11/dev_guide/routes.html

This could be a good starting point for further adoption.

> 4) Want to know how to locate the nginx.conf file and other nginx files
> deployed through the above image.

This could be done via configmaps. The configmaps are limited to ~1MB afaik.

> 5) When I open a POD terminal, I can see Nginx 1.12 files are deployed in
> /etc/nginx, can I upgrade this to latest version? are these files created by
> the deployment of the above image?

As described above, I would recommend creating your own image with your requirements.

> Please clarify & help me on the above queries.

OpenShift has quite detailed documentation. I strongly recommend taking a look there.

https://docs.okd.io/3.11/dev_guide/index.html
https://docs.openshift.com/container-platform/3.11/dev_guide/index.html
https://docs.openshift.com/container-platform/4.4/builds/understanding-image-builds.html

> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288492,288492#msg-288492
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org Tue Jun 30 09:51:09 2020
From: francis at daoine.org (Francis Daly)
Date: Tue, 30 Jun 2020 10:51:09 +0100
Subject: .htaccess
In-Reply-To: References: Message-ID: <20200630095109.GC20939@daoine.org>

On Sat, Jun 27, 2020 at 06:49:51PM -0300, Zonimi wrote:

Hi there,

> Hello, I would like to know how do I convert this to nginx;
>
> RewriteEngine On
> RewriteCond %{REQUEST_URI} !(/$|\.)
> RewriteRule (.*) %{REQUEST_URI}/ [R=301,L]
>
> RewriteCond %{REQUEST_FILENAME} !-f
> RewriteCond %{REQUEST_FILENAME} !-d
>
> RewriteRule . index.php [L]

Can you describe how you want requests to be handled?
It's not immediately clear to me what that apache config is trying to do.

> Already tried;
>
> location / { try_files $uri $uri/ /index.php; }

That looks like it is probably the "standard" close-enough equivalent; but obviously some part of it does not work in this case.

> And none works

What request do you make? What response do you get? What response do you want to get instead?

Thanks,

f
--
Francis Daly
francis at daoine.org

From nginx-forum at forum.nginx.org Tue Jun 30 20:59:25 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Tue, 30 Jun 2020 16:59:25 -0400
Subject: Nginx as reverse proxy in Openshift Cluster
In-Reply-To: References: Message-ID: <74878e8780ad417da7186b2117a57e58.NginxMailingListEnglish@forum.nginx.org>

Thank you so much for the guidance. I did the deployment of the below image via the Openshift console. As per the JSON, it should have picked version 1.16; however, it deployed the older version 1.12. Not sure what's wrong here.

"https://github.com/sclorg/nginx-ex"

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288492,288506#msg-288506
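Not part of the thread, but as a sketch of how one might confirm which nginx a running pod actually contains (the pod name below is a placeholder, and this assumes the nginx binary is on the container's PATH):

```shell
# Print the nginx version inside the running pod (nginx -v writes to stderr).
oc exec nginx-ex-1-abcde -- nginx -v

# Inspect which image the pod was actually resolved to, to check whether
# the 1.12 or 1.16 imagestream tag was picked up.
oc describe pod nginx-ex-1-abcde | grep -i image
```

If the image line still points at a nginx-112 tag, the template's imagestream reference (rather than the running pod) is where the older version is coming from.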