From lagged at gmail.com Tue Jul 2 09:12:22 2019 From: lagged at gmail.com (Andrei) Date: Tue, 2 Jul 2019 04:12:22 -0500 Subject: set_real_ip_from behavior Message-ID: Hello, I'm having some issues with getting X-Forwarded-For set consistently for upstream proxy requests. The server runs Nginx/OpenResty in front of Apache, and has domains hosted behind Cloudflare as well as direct. The ones behind Cloudflare show the correct X-Forwarded-For header being set, using (snippet): http { set_real_ip_from 167.114.56.190/32; [..] set_real_ip_from 167.114.56.191/32; real_ip_header X-Forwarded-For; server { location ~ .* { [..] proxy_set_header X-Forwarded-For $http_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; } } However, when I receive a direct request, which does not include X-Forwarded-For, $http_x_forwarded_for, $proxy_add_x_forwarded_for, $http_x_real_ip are empty, and I'm unable to set the header to $remote_addr (which shows the correct IP). If I try adding this in the server {} block: if ($http_x_forwarded_for = '') { set $http_x_forwarded_for $remote_addr; } I get: nginx: [emerg] the duplicate "http_x_forwarded_for" variable in /usr/local/openresty/nginx/conf/nginx.conf:131 nginx: configuration file /usr/local/openresty/nginx/conf/nginx.conf test failed The above works to set $http_x_real_ip, but then I end up with direct connections passing Apache the client IP through X-Real-IP, and proxied connections (from Cloudflare) set X-Forwarded-For. 
The log format I'm using to verify both $http_x_forwarded_for and $http_x_real_ip is: log_format json_combined escape=json '{' '"id":"$zid",' '"upstream_cache_status":"$upstream_cache_status",' '"remote_addr":"$remote_addr",' '"remote_user":"$remote_user",' '"stime":"$msec",' '"timestamp":"$time_local",' '"host":"$host",' '"server_addr":"$server_addr",' '"server_port":"$proxy_port",' '"request":"$request",' '"status": "$status",' '"body_bytes_sent":"$body_bytes_sent",' '"http_referer":"$http_referer",' '"http_user_agent":"$http_user_agent",' '"http_x_forwarded_for":"$http_x_forwarded_for",' '"http_x_real_ip":"$http_x_real_ip",' '"request_type":"$request_type",' '"upstream_addr":"$upstream_addr",' '"upstream_status":"$upstream_status",' '"upstream_connect_time":"$upstream_connect_time",' '"upstream_header_time":"$upstream_header_time",' '"upstream_response_time":"$upstream_response_time",' '"country":"$country_code",' '"request_time":"$request_time"' '}'; How can I consistently pass the backend service an X-Forwarded-For header, with the client IP, regardless of it being a direct request or proxied through Cloudflare/some other CDN? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Jul 2 10:58:46 2019 From: r at roze.lv (Reinis Rozitis) Date: Tue, 2 Jul 2019 13:58:46 +0300 Subject: set_real_ip_from behavior In-Reply-To: References: Message-ID: <000001d530c5$1f39a620$5dacf260$@roze.lv> > I'm having some issues with getting X-Forwarded-For set consistently for upstream proxy requests. The server runs Nginx/OpenResty in front of > Apache, and has domains hosted behind Cloudflare as well as direct. The ones behind Cloudflare show the correct X-Forwarded-For header being > set, using (snippet): Imo your approach is too complicated (unless I missed something). 
If your setup is Cloudflare -> nginx -> apache, then if you configure the real IP module on nginx you can just always pass $remote_addr to the Apache backend: http { set_real_ip_from 167.114.56.190/32; [..] set_real_ip_from 167.114.56.191/32; real_ip_header X-Forwarded-For; proxy_set_header X-Forwarded-For $remote_addr; In case the request is direct, $remote_addr will contain the client IP (and it will be passed to Apache); if the request comes from trusted proxies, the realip module will automatically overwrite the $remote_addr variable with the one in the X-Forwarded-For header (if you still want to log the original client IP you can use $realip_remote_addr (http://nginx.org/en/docs/http/ngx_http_realip_module.html#variables) ). rr From nginx-forum at forum.nginx.org Tue Jul 2 14:54:34 2019 From: nginx-forum at forum.nginx.org (bmacphee) Date: Tue, 02 Jul 2019 10:54:34 -0400 Subject: auth_request with grpc In-Reply-To: References: Message-ID: <37d8bce337bbb5738123dbc765602b39.NginxMailingListEnglish@forum.nginx.org> I was about to ask a related question. Here is a sample of my config. The only issue is that the gRPC client gets a StatusCode.Cancelled when authorization fails. In this scenario, the auth service at http://auth:5000 is a simple Flask application performing the auth with a 3rd-party identity provider. You may not need all the variables I am pushing around here, but hopefully this gives you an idea. 
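Reinis's realip suggestion earlier in the thread can be condensed into a minimal sketch (the set_real_ip_from addresses are the ones from the thread; the Apache upstream address is hypothetical):

```nginx
http {
    # Trust X-Forwarded-For only when the connection comes from these proxies
    set_real_ip_from 167.114.56.190/32;
    set_real_ip_from 167.114.56.191/32;
    real_ip_header   X-Forwarded-For;

    server {
        location / {
            # $remote_addr is now always the client IP: untouched for
            # direct requests, rewritten by the realip module for
            # requests arriving from the trusted proxies above
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_pass http://127.0.0.1:8080;  # Apache backend (address hypothetical)
        }
    }
}
```

With this shape no if block is needed: both direct and Cloudflare-proxied requests reach Apache with the same two headers carrying the real client IP.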
server { location /some_grpc_api { grpc_pass grpc://internal_service:50051; grpc_set_header x-grpc-user $auth_resp_x_grpc_user; } # send all requests to the `/validate` endpoint for authorization auth_request /validate; auth_request_set $auth_resp_x_grpc_user $upstream_http_x_grpc_user; location = /validate { proxy_pass http://auth:5000; # the auth service acts only on the request headers proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284427,284716#msg-284716 From nginx-forum at forum.nginx.org Tue Jul 2 15:19:54 2019 From: nginx-forum at forum.nginx.org (bmacphee) Date: Tue, 02 Jul 2019 11:19:54 -0400 Subject: request authorization with grpc (failure status code) Message-ID: I have an nginx configuration that passes gRPC API requests to other services, with an authorization endpoint that is used in conjunction. This works great when authorization is successful (my HTTP1 authorization endpoint returns HTTP 2xx status codes). When authorization fails (it returns 401), the gRPC connection initiated by the client receives a gRPC Cancelled(1) status code, rather than what would be ideal for the client - an Unauthorized (16) status code. The status message appears to be populated by nginx indicating the 401 failure. Is there a way to control the status code returned to the gRPC channel during failed auth? I tried and failed at doing this with the below configuration. Any non-200 code returned by the auth failure handling results in the same cancelled status code even after trying to set the status code manually. If I override the return with a 200 series code, it treats authorization as successful (which is also bad). 
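For context on why any non-200 HTTP code surfaces as Cancelled: gRPC transports its real status in grpc-status/grpc-message trailers of an HTTP 200 response, so a hedged sketch of a fail page that returns code 16 (UNAUTHENTICATED in gRPC terms) might look like this (a sketch, not a tested recipe; add_trailer requires nginx 1.13.2 or later):

```nginx
error_page 401 = /grpc_auth_fail_page;

location = /grpc_auth_fail_page {
    internal;
    # gRPC clients read the status from trailers of an HTTP 200
    # response, so return 200 and set the trailers rather than
    # trying to send an HTTP-level 401
    default_type application/grpc;
    add_trailer grpc-status 16;               # UNAUTHENTICATED
    add_trailer grpc-message "Unauthenticated";
    return 200;
}
```

The key design point is that the HTTP status and the gRPC status are separate layers: nginx must answer 200 at the HTTP layer for the client's gRPC stack to parse the trailers at all.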
server { location /some_grpc_api { grpc_pass grpc://internal_service:50051; grpc_set_header x-grpc-user $auth_resp_x_grpc_user; } # send all requests to the `/validate` endpoint for authorization auth_request /validate; auth_request_set $auth_resp_x_grpc_user $upstream_http_x_grpc_user; location = /validate { proxy_pass http://auth:5000; # the auth service acts only on the request headers proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # attempt to customize grpc error code proxy_intercept_errors on; error_page 401 /grpc_auth_fail_page; } # attempt to customize grpc error code location = /grpc_auth_fail_page { internal; grpc_set_header grpc-status 16; grpc_set_header grpc-message "Unauthorized"; return 401; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284718,284718#msg-284718 From mdounin at mdounin.ru Wed Jul 3 00:40:43 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jul 2019 03:40:43 +0300 Subject: Nginx 1.17.0 doesn't change the content-type header In-Reply-To: References: Message-ID: <20190703004043.GK1877@mdounin.ru> Hello! On Sat, Jun 29, 2019 at 10:49:00PM +0000, Andrew Andonopoulos wrote: > I have the following config in the http: > > include mime.types; > default_type application/octet-stream; > > > also i have this in the location: > > types { > application/vnd.apple.mpegurl m3u8; > video/mp2t ts; > } > > > But when i send a request, i am getting these headers: > > Request URL: > https://example.com/hls/5d134afe91b970.80939375/1024_576_1500_5d134afe91b970.80939375_00169.ts [...] > ETag: > "7ba4b759c57dbffbca650ce6a290f524" [...] > For some reason, Nginx doesn't change the Content-Type The ETag header format suggests that the response is not returned by nginx, but rather by a backend server. Check your backend server. 
-- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Wed Jul 3 00:55:01 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Tue, 2 Jul 2019 17:55:01 -0700 Subject: effect of bcrypt hash $cost on HTTP Basic authentication's login performance? In-Reply-To: <20190703002325.GI1877@mdounin.ru> References: <20190703002325.GI1877@mdounin.ru> Message-ID: <28b27876-a8a3-e001-1a31-208041e504fc@gmail.com> > (And no, it does not look like an appropriate question for the > nginx-devel@ list. Consider using nginx@ instead.) k. On 7/2/19 5:23 PM, Maxim Dounin wrote: > On Sat, Jun 29, 2019 at 09:48:01AM -0700, PGNet Dev wrote: > >> When generating hashed data for "HTTP Basic" login auth >> protection, using bcrypt as the hash algorithm, one can vary the >> resultant hash strength by varying bcrypt's $cost, e.g. > > [...] > >> For site login usage, does *client* login time vary at all with >> the hash $cost? >> >> Other than the initial, one-time hash generation, is there any >> login-performance reason NOT to use the highest hash $cost? > > With Basic HTTP authentication, hashing happens on every user > request. That is, with high costs you are likely to make your site > completely unusable. Noted. *ARE* there authentication mechanisms available that do NOT hash on every request? Perhaps via some mode of secure caching? AND, that still maintain a high algorithmic cost to prevent breach attempts, or at least maximize attackers' efforts? From mdounin at mdounin.ru Wed Jul 3 01:17:12 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jul 2019 04:17:12 +0300 Subject: request authorization with grpc (failure status code) In-Reply-To: References: Message-ID: <20190703011712.GM1877@mdounin.ru> Hello! On Tue, Jul 02, 2019 at 11:19:54AM -0400, bmacphee wrote: > I have an nginx configuration that passes gRPC API requests to other > services an authorization endpoint that is used in conjunction. 
> > This works great when authorization is successful (my HTTP1 authorization > endpoint returns HTTP 2xx status codes). > > When authorization fails (it returns 401), the gRPC connection initiated by > the client receives a gRPC Cancelled(1) status code, rather than what would > be ideal for the client - an Unauthorized (16) status code. The status > message appears to be populated by nginx indicating the 401 failure. > > Is there a way to control the status code returned to the gRPC channel > during failed auth? > > I tried and failed at doing this with the below configuration. Any non-200 > code returned by the auth failure handling results in the same cancelled > status code even after trying to set the status code manually. If I > override the return with a 200 series code, it treats authorization as > successful (which it also bad). [...] > # attempt to customize grpc error code > proxy_intercept_errors on; > error_page 401 /grpc_auth_fail_page; > } > > # attempt to customize grpc error code > location = /grpc_auth_fail_page { > internal; > grpc_set_header grpc-status 16; > grpc_set_header grpc-message "Unauthorized"; > return 401; The "grpc_set_header" directive controls headers sent to the backend server with grpc_pass. In your setup you need to control headers returned to the client, so you have to use "add_header" instead. Or, given that gRPC uses trailers as long as there is a response body, you may have to use "add_trailer". Additionally, gRPC requires error code 200 for all responses. That is, you may have to use something like error_page 401 = /grpc_auth_fail_page; location = /grpc_auth_fail_page { ... return 200 ""; } to return status code 200. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Jul 3 01:35:53 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jul 2019 04:35:53 +0300 Subject: effect of bcrypt hash $cost on HTTP Basic authentication's login performance? 
In-Reply-To: <28b27876-a8a3-e001-1a31-208041e504fc@gmail.com> References: <20190703002325.GI1877@mdounin.ru> <28b27876-a8a3-e001-1a31-208041e504fc@gmail.com> Message-ID: <20190703013553.GN1877@mdounin.ru> Hello! On Tue, Jul 02, 2019 at 05:55:01PM -0700, PGNet Dev wrote: > On 7/2/19 5:23 PM, Maxim Dounin wrote: > > On Sat, Jun 29, 2019 at 09:48:01AM -0700, PGNet Dev wrote: > > > >> When generating hashed data for "HTTP Basic" login auth > >> protection, using bcrypt as the hash algorithm, one can vary the > >> resultant hash strength by varying bcrypt's $cost, e.g. > > > > [...] > > > >> For site login usage, does *client* login time vary at all with > >> the hash $cost? > >> > >> Other than the initial, one-time hash generation, is there any > >> login-performance reason NOT to use the highest hash $cost? > > > > With Basic HTTP authentication, hashing happens on every user > > request. That is, with high costs you are likely to make your site > > completely unusable. > > Noted. > > *ARE* there authentication mechanisms available that do NOT hash on > every request? Perhaps via some mode of secure caching? > > AND, that still maintain a high algorithmic cost to prevent breach > attempts, or at least maximize their efforts? In nginx itself, the only authentication available is Basic HTTP authentication, and it implies hashing on every (authenticated) request. To avoid hashing on every request one has to maintain a session, so hashing can only happen once per session, and this is not something nginx provides. You can, however, implement it yourself, for example, using auth_request. Note though that algorithmic cost might not be the best solution to prevent "breach attempts". The only case when algorithmic cost indeed matters is when hashes are leaked and available for offline attacks (and if this happens, you have a problem anyway). 
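A minimal sketch of the auth_request session idea mentioned above (the session service, ports, and cookie handling are all hypothetical; the expensive bcrypt check would run once at login, after which the backend only validates a session cookie):

```nginx
server {
    listen 443 ssl;

    location / {
        # every request is gated by the subrequest below; the auth
        # backend checks a session cookie, not the password hash
        auth_request /_session_check;
        proxy_pass http://127.0.0.1:8080;
    }

    location = /_session_check {
        internal;
        proxy_pass http://127.0.0.1:9000/check;   # hypothetical session service
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header Cookie $http_cookie;     # session id travels here
    }

    # login endpoint: the only place the costly bcrypt verification
    # runs; the backend issues the session cookie on success
    location = /login {
        proxy_pass http://127.0.0.1:9000/login;
    }
}
```

The session service returns 2xx for a valid cookie and 401/403 otherwise, which is exactly the contract auth_request expects.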
In most cases you care about online attacks, and these can be effectively mitigated by limit_req (http://nginx.org/r/limit_req). -- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Wed Jul 3 03:53:37 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Tue, 2 Jul 2019 20:53:37 -0700 Subject: how to force/send TLS Certificate Request for all client connections, in client-side ssl-verification? Message-ID: I've setup my nginx server with self-signed SSL server-side certs, using my own/local CA. Without client-side verifications, i.e. just an unverified-TLS connection, all's good. If I enable client-side SSL cert verification with, ssl_certificate "ssl/example.com.server.crt.pem"; ssl_certificate_key "ssl/example.com.server.key.pem"; ssl_verify_client on; ssl_client_certificate "ssl_cert_dir/CA_intermediate.crt.pem"; ssl_verify_depth 2; , a connecting android app is failing on connect, receiving FROM the nginx server, HTTP RESPONSE: Response{protocol=http/1.1, code=400, message=Bad Request, url=https://proxy.example.com/dav/myuser%40example.com/3d75dc22-8afc-1946-5b3f-4d84e9b28432/} 400 No required SSL certificate was sent

400 Bad Request

No required SSL certificate was sent

nginx
I've been unsuccessful so far using tshark/ssldump to decrypt the SSL handshake; I suspect (?) it's because my certs are ec signed. Still working on that ... In 'debug' level nginx logs, I see 2019/06/30 21:58:14 [debug] 41777#41777: *7 s:0 in:'35:5' 2019/06/30 21:58:14 [debug] 41777#41777: *7 s:0 in:'2F:/' 2019/06/30 21:58:14 [debug] 41777#41777: *7 http uri: "/dav/myuser at example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http args: "" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http exten: "" 2019/06/30 21:58:14 [debug] 41777#41777: *7 posix_memalign: 0000558C35B3C840:4096 @16 2019/06/30 21:58:14 [debug] 41777#41777: *7 http process request header line 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Depth: 0" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Content-Type: application/xml; charset=utf-8" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Content-Length: 241" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Host: proxy.example.com" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Connection: Keep-Alive" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Accept-Encoding: gzip" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Accept-Language: en-US, en;q=0.7, *;q=0.5" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Authorization: Basic 1cC5...WUVi" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http header done 2019/06/30 21:58:14 [info] 41777#41777: *7 client sent no required SSL certificate while reading client request headers, client: 10.0.1.235, server: proxy.example.com, request: "PROPFIND /dav/myuser%40example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/ HTTP/1.1", host: "proxy.example.com" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http finalize request: 496, "/dav/myuser at example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/?" 
a:1, c:1 2019/06/30 21:58:14 [debug] 41777#41777: *7 event timer del: 15: 91237404 2019/06/30 21:58:14 [debug] 41777#41777: *7 http special response: 496, "/dav/myuser at example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/?" 2019/06/30 21:58:14 [debug] 41777#41777: *7 http set discard body 2019/06/30 21:58:14 [debug] 41777#41777: *7 headers more header filter, uri "/dav/myuser at example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/" 2019/06/30 21:58:14 [debug] 41777#41777: *7 charset: "" > "utf-8" 2019/06/30 21:58:14 [debug] 41777#41777: *7 HTTP/1.1 400 Bad Request Date: Mon, 01 Jul 2019 04:58:14 GMT Content-Type: text/html; charset=utf-8 Content-Length: 230 Connection: close Secure: Groupware Server X-Content-Type-Options: nosniff In comms with the app vendor, I was asked Does your proxy send TLS Certificate Request https://tools.ietf.org/html/rfc5246#section-7.4.4? ... the TLS stack which is used ... won't send certificates preemptively, but only when they're requested. In my tests, client certificates are working as expected, but ONLY if the server explicitly requests them. I don't recognize the preemptive request above. DOES nginx send such a TLS Certificate Request by default? Is there a required, additional config to force that request? From andre8525 at hotmail.com Wed Jul 3 06:17:46 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Wed, 3 Jul 2019 06:17:46 +0000 Subject: Nginx 1.17.0 doesn't change the content-type header In-Reply-To: <20190703004043.GK1877@mdounin.ru> References: , <20190703004043.GK1877@mdounin.ru> Message-ID: Hello, I missed the cache status header in my previous email. This is another response with all the headers: https://example.com/hls/5d0f9398852b84.49917460/1280_720_1300_5d0f9398852b84.49917460_00004.ts 1. Accept-Ranges: bytes 2. Access-Control-Allow-Credentials: true 3. 
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token 4. Access-Control-Allow-Methods: OPTIONS, GET 5. Access-Control-Allow-Origin: * 6. Cache-Control: max-age=31536000 7. Connection: keep-alive 8. Content-Length: 114304 9. Content-Type: application/octet-stream 10. Date: Wed, 03 Jul 2019 06:15:02 GMT 11. ETag: "1b744341828e22daea781982614ffc74" 12. Last-Modified: Mon, 24 Jun 2019 22:24:35 GMT 13. Server: nginx/1.17.0 14. X-Cache-Status: HIT 15. X-Proxy-Cache: HIT Nginx delivers the file from the cache, but the Content-Type comes from the default in the http block rather than from the location in the server block. Thanks Andrew ________________________________ From: nginx on behalf of Maxim Dounin Sent: Wednesday, July 3, 2019 12:40 AM To: nginx at nginx.org Subject: Re: Nginx 1.17.0 doesn't change the content-type header Hello! On Sat, Jun 29, 2019 at 10:49:00PM +0000, Andrew Andonopoulos wrote: > I have the following config in the http: > > include mime.types; > default_type application/octet-stream; > > > also i have this in the location: > > types { > application/vnd.apple.mpegurl m3u8; > video/mp2t ts; > } > > > But when i send a request, i am getting these headers: > > Request URL: > https://example.com/hls/5d134afe91b970.80939375/1024_576_1500_5d134afe91b970.80939375_00169.ts [...] > ETag: > "7ba4b759c57dbffbca650ce6a290f524" [...] > For some reason, Nginx doesn't change the Content-Type The ETag header format suggests that the response is not returned by nginx, but rather by a backend server. Check your backend server. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Wed Jul 3 12:22:21 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 3 Jul 2019 13:22:21 +0100 Subject: Nginx 1.17.0 doesn't change the content-type header In-Reply-To: References: <20190703004043.GK1877@mdounin.ru> Message-ID: <20190703122221.4leen4jqbmmoq3n6@daoine.org> On Wed, Jul 03, 2019 at 06:17:46AM +0000, Andrew Andonopoulos wrote: Hi there, I think that the point is: nginx does not change the content-type header from the upstream server. If you want your nginx to do that, you have to configure your nginx to do that -- probably using "add_header". > Nginx deliver the file from the cache but the content-type is from > the default in the HTTP rather than the location in the Server The "types" configuration applies when nginx serves a file from the filesystem. If you proxy_pass, nginx does not serve a file from the filesystem. The "correct" fix is for you to ensure that your upstream server sends the content-type header that you want. The alternate fix is for you to configure your nginx server to send the content-type header that you want; you will need to tell nginx how to know what that header value is, for each response that you make. 
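One hedged way to implement the alternate fix Francis describes above — telling nginx the header value for each response while proxying — is a map on the URI plus proxy_hide_header/add_header (a sketch, assuming the file extension alone determines the type; the upstream name is hypothetical):

```nginx
# derive the desired Content-Type from the request URI
map $uri $hls_content_type {
    ~\.m3u8$  application/vnd.apple.mpegurl;
    ~\.ts$    video/mp2t;
    default   application/octet-stream;
}

server {
    location /hls/ {
        proxy_pass http://upstream_backend;   # hypothetical upstream
        # drop the upstream's Content-Type and substitute the mapped one
        proxy_hide_header Content-Type;
        add_header Content-Type $hls_content_type;
    }
}
```

Note the caveat that add_header only applies to 2xx/3xx (and, with "always", error) responses, which is another reason fixing the upstream is the cleaner route.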
Good luck with it, f -- Francis Daly francis at daoine.org From andre8525 at hotmail.com Wed Jul 3 12:25:53 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Wed, 3 Jul 2019 12:25:53 +0000 Subject: Nginx 1.17.0 doesn't change the content-type header In-Reply-To: <20190703122221.4leen4jqbmmoq3n6@daoine.org> References: <20190703004043.GK1877@mdounin.ru> , <20190703122221.4leen4jqbmmoq3n6@daoine.org> Message-ID: Thanks Francis, will modify the upstream server ________________________________ From: nginx on behalf of Francis Daly Sent: Wednesday, July 3, 2019 12:22 PM To: nginx at nginx.org Subject: Re: Nginx 1.17.0 doesn't change the content-type header On Wed, Jul 03, 2019 at 06:17:46AM +0000, Andrew Andonopoulos wrote: Hi there, I think that the point is: nginx does not change the content-type header from the upstream server. If you want your nginx to do that, you have to configure your nginx to do that -- probably using "add_header". > Nginx deliver the file from the cache but the content-type is from > the default in the HTTP rather than the location in the Server The "types" configuration applies when nginx serves a file from the filesystem. If you proxy_pass, nginx does not serve a file from the filesystem. The "correct" fix is for you to ensure that your upstream server sends the content-type header that you want. The alternate fix is for you to configure your nginx server to send the content-type header that you want; you will need to tell nginx how to know what that header value is, for each response that you make. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zeev at initech.co.il Wed Jul 3 15:49:57 2019 From: zeev at initech.co.il (Zeev Tarantov) Date: Wed, 3 Jul 2019 18:49:57 +0300 Subject: TLS 1.3 support in nginx-1.17.1 binary for Ubuntu 18.04 "bionic" provided by nginx.org Message-ID: I've installed the nginx package provided by nginx.org ( https://nginx.org/en/linux_packages.html#Ubuntu) specifically the binary provided by https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/nginx_1.17.1-1~bionic_amd64.deb and it doesn't have TLS 1.3 support. According to https://mailman.nginx.org/pipermail/nginx/2019-January/057402.html this would be because it was built on an Ubuntu 18.04 "bionic" that was not fully updated. Ubuntu 18.04 "bionic" switched from openssl 1.1.0 to openssl 1.1.1 recently and I hoped the newer releases would be compiled with openssl 1.1.1 and support TLS 1.3. When I build that package myself (using apt-get source nginx ; cd nginx-1.17.1/ ; debuild -i -us -uc -b) on a fully updated Ubuntu 18.04 "bionic", it does support TLS 1.3. I ask that the build environment is set up such that the next release will support TLS 1.3, or better yet, that 1.16.0 and 1.17.1 packages for Ubuntu 18.04 "bionic" are updated to include TLS 1.3 support. Unless such packages won't work on a non-updated Ubuntu 18.04 system? (Why?) Or does anyone know of a workaround that does not involve building the packages myself? -------------- next part -------------- An HTML attachment was scrubbed... URL: From om at mykarmaapp.com Thu Jul 4 07:37:04 2019 From: om at mykarmaapp.com (Om Prakash Shanmugam) Date: Thu, 4 Jul 2019 13:07:04 +0530 Subject: Nginx request processing is slow when logging disabled Message-ID: Hello All, I have an Nginx reverse proxy connected to uWSGI backend. 
I have configured nginx to log to a centralized remote syslog service as: *error_log syslog:server=example.com:514 ,tag=nginx_error debug;* The problem here is that, when I remove the above line from my nginx.conf, the request processing time becomes very high and it leads to client timeouts ( returns HTTP 460 ). When I enable logging in my nginx.conf, I do not get HTTP 460 at all. But there's an extra overhead introduced which increases the CPU utilization. What I suspect is that nginx is sending the HTTP requests to my uWSGI backend a little slowly, and my uWSGI backend is able to handle them gracefully and write the response back to nginx successfully. The average response time of the backend also spikes to 5x when logging is enabled. Once I disable logging, the CPU utilization decreases while the requests are flooded to the uWSGI backend, and the backend takes time to return the response within the defined client timeout period. If the request takes time to process and the client ( Android/iOS app) hasn't received the response, it aborts the connection either when the timeout is reached or if the user cancels the request. I'd like to know whether I have to add a proxy buffer to my nginx to queue up requests and send them to my backend instead of overflooding it. Or any other solutions are appreciated. I've also attached the log files below. Any help is appreciated. Thanks in advance. Om -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: error.log Type: application/octet-stream Size: 2621 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Sat Jul 6 11:59:10 2019 From: nginx-forum at forum.nginx.org (BeyondEvil) Date: Sat, 06 Jul 2019 07:59:10 -0400 Subject: SSL_ERROR_BAD_CERT_DOMAIN with multiple domains In-Reply-To: <20190626082607.ya6yio3ezzuemdeo@daoine.org> References: <20190626082607.ya6yio3ezzuemdeo@daoine.org> Message-ID: <963804883d08850e4f5a98a0549b3aa5.NginxMailingListEnglish@forum.nginx.org> Hi Francis! Thank you so much for your answer! I really appreciate it! And I apologize for taking this long to reply. > As I understand things: > > * you need one nginx listening on port 80 for http and 443 for https > * you want to handle two server names (differently) Well, sort of. I have two servers, and both are running nginx, which I think is the key to this problem. Server A (macmini) has an nginx server under my direct control. Server B (the synology NAS) has an nginx server NOT under my direct control. > I am not clear on whether you want to "redirect" or "proxy_pass" to > the service on the other ports -- "redirect" would involve the client > issuing a new request to https://something:5001; while "proxy_pass" > would involve the client continuing to request https://something, and > nginx ensuring that the response from :5001 gets to the client. I thought what I wanted was to "proxy_pass", but what I needed to do was to "redirect". Sadly, that doesn't work - and I _think_ I might understand why. I have two domains - one related to Server A and one related to Server B. Server A domain is certified using Let's Encrypt (LE) and I own that domain. Server B domain is also certified using LE, but I DON'T own that domain - Synology does. It's part of their "internal" DDNS system to help users expose their NAS reliably to the internet. 
And herein lies the problem, it seems: from what I can gather, HTTPS is terminated and checked/validated on Server A and fails for requests to the Server B domain, since the certificates on Server A are not the correct ones for the Server B domain - only for the Server A domain. So the redirect works - but you get the "not valid certificates" warning(s) in the browser. :( > two server{} blocks with different server_name directives, and SNI > enabled > in your nginx, and the correct ssl_certificate available in each > server{}. So that ^^ is basically the problem and why it fails. The certificates can't be in that server block, because they reside in the server block in the nginx running on Server B. > Good luck with it, Thanks again! :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284630,284764#msg-284764 From nginx-forum at forum.nginx.org Mon Jul 8 02:39:52 2019 From: nginx-forum at forum.nginx.org (allenhe) Date: Sun, 07 Jul 2019 22:39:52 -0400 Subject: Multiple master processes per reloading Message-ID: <109629381873787176a1c0b3e4ff3c7c.NginxMailingListEnglish@forum.nginx.org> Hi, Per my understanding, the reloading would only replace the old workers with new ones, while during testing (constantly reloading), I found the output of "ps -ef" shows multiple masters and shutting down workers which would fade away very quickly, so I guess the master process may undergo the same replacement. Could some experts help confirm this? What's strange is that in a production env, those masters and shutting down workers stay there forever and never go away. In this env, there is a running script that would reload the nginx every 5 minutes. Any idea what's going on here? 
Thanks and Regards, Allen Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284771,284771#msg-284771 From 201904-nginx at jslf.app Mon Jul 8 03:32:40 2019 From: 201904-nginx at jslf.app (Patrick) Date: Mon, 8 Jul 2019 11:32:40 +0800 Subject: Multiple master processes per reloading In-Reply-To: <109629381873787176a1c0b3e4ff3c7c.NginxMailingListEnglish@forum.nginx.org> References: <109629381873787176a1c0b3e4ff3c7c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190708033240.GA5959@haller.ws> On 2019-07-07 22:39, allenhe wrote: > Per my understanding, the reloading would only replace the old workers with > new ones, while during testing (constantly reloading), I found the output of > "ps -ef" shows multiple masters and shutting down workers which would fade > away very quickly, so I guess the master process may undergo the same > replacement. > Could some experts help confirm this? It depends on what exactly is done to "reload" nginx. Different operating systems will do different things. In production, you should probably be using some form of commit / rollback, e.g. something like nginx-graceful[0] which automates the recommended zero-downtime restart[1]. 
Patrick [0] https://github.com/patrickhaller/nginx-graceful [1] https://www.nginx.com/resources/wiki/start/topics/tutorials/commandline/ From nginx-forum at forum.nginx.org Mon Jul 8 06:24:53 2019 From: nginx-forum at forum.nginx.org (allenhe) Date: Mon, 08 Jul 2019 02:24:53 -0400 Subject: Multiple master processes per reloading In-Reply-To: <20190708033240.GA5959@haller.ws> References: <20190708033240.GA5959@haller.ws> Message-ID: <3896bf87e0ccd648ea81c59876d7a3ed.NginxMailingListEnglish@forum.nginx.org> Patrick Wrote: ------------------------------------------------------- > On 2019-07-07 22:39, allenhe wrote: > > Per my understanding, the reloading would only replace the old > workers with > > new ones, while during testing (constantly reloading), I found the > output of > > "ps -ef" shows multiple masters and shutting down workers which > would fade > > away very quickly, so I guess the master process may undergo the > same > > replacement. > > Could some experts help confirm this? > > It depends on what exactly is done to "reload" nginx? Different > operating systems will do different things. > The "reload" script just contains one command line: /usr/bin/nginx -s reload And this happens on Ubuntu Linux x86_64 Curious that in the official doc It does not mention that the "-s reload" would impact the master > In production, you should probably be using some form of commit / > rollback, e.g. something like nginx-graceful[0] which automates the > recommended zero-downtime restart[1]. > It is about "restart" while I'm concerned with reloading to apply configuration changes. 
> > Patrick > [0] https://github.com/patrickhaller/nginx-graceful > [1] > https://www.nginx.com/resources/wiki/start/topics/tutorials/commandlin > e/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284771,284773#msg-284773 From mdounin at mdounin.ru Mon Jul 8 12:51:07 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Jul 2019 15:51:07 +0300 Subject: Nginx request processing is slow when logging disabled In-Reply-To: References: Message-ID: <20190708125107.GQ1877@mdounin.ru> Hello! On Thu, Jul 04, 2019 at 01:07:04PM +0530, Om Prakash Shanmugam wrote: > Hello All, > > I have an Nginx reverse proxy connected to uWSGI backend. I have configured > nginx to log to a centralized remote syslog service as: > > *error_log syslog:server=example.com:514 > ,tag=nginx_error debug;* > > The problem here is that, when I remove the above line from my nginx.conf, > the request processing time is becoming very high and It leads to client > timeout ( returns HTTP 460 ). > > When I enable logging in my nginx.conf, I do not get HTTP 460 at all. But, > there's an extra overhead introduced which Increases the CPU Utilization. > What I suspect is that the nginx is sending the HTTP requests to my uWSGI > backend little slowly and my uWSGI backend is able to handle them > gracefully, write the response back to nginx successfully. The average > response time of the backend also spikes to 5x when logging is enabled. > > Once I disable logging, the CPU Utilization decreases while the requests > are flooded to uWSGI backend and the backend takes time to return the > response within the defined client timeout period. If the request takes > time to process and if the client ( Android/iOS app) hasn't received the > response, It aborts the connection either when the timeout is reached or if > the user cancels the request. 
> > I'd like to know whether I have to add a proxy buffer to my nginx to queue > up requests and send it to my backend instead of overflooding it. Or any > other solutions are appreciated. So, from your description it looks like the problem is that your backend can't cope with load. You may want to configure your backend to better handle many requests - in particular, consider configuring a reasonable number of worker processes / threads, and make sure this number is low enough for your server to handle. To control number of requests queued to your backend, consider tuning your backend's listening queue. Alternatively, you may want to add more backends to handle the load. -- Maxim Dounin http://mdounin.ru/ From kelsey.dannels at nginx.com Mon Jul 8 22:22:37 2019 From: kelsey.dannels at nginx.com (Kelsey Dannels) Date: Mon, 8 Jul 2019 15:22:37 -0700 Subject: 2019 NGINX User Survey: Give us feedback and be part of our future Message-ID: Hello- Reaching out because it?s that time of year for the annual NGINX User Survey. We're always eager to hear about your experiences to help us evolve, improve and shape our product roadmap. Please take ten minutes to share your thoughts: https://nkadmin.typeform.com/to/nSuOmW?source=email Best, Kelsey -- Kelsey Dannels San Francisco -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Jul 9 06:09:47 2019 From: nginx-forum at forum.nginx.org (kirti maindargikar) Date: Tue, 09 Jul 2019 02:09:47 -0400 Subject: FIPS support in nginx? In-Reply-To: <20190617090020.GA11414@vlpc> References: <20190617090020.GA11414@vlpc> Message-ID: <6c611038dae3f3dde1c254ca86e1022a.NginxMailingListEnglish@forum.nginx.org> Hi, We are using 1.10.3 nginx in FIPS mode. As discussed above we already have FIPS enabled on RHEL and we have recompiled nginx with OpenSSL FIPS. 
However we still see that nginx is using MD5 algorithms (which are not allowed in FIPS mode) when we use proxy_cache to cache pictures. It looks like nginx uses an MD5 hash to create the name of the cached image file, as given in this link http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key Syntax: proxy_cache_path path [levels=levels] [use_temp_path=on|off] keys_zone=name:size [inactive=time] [max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time] [loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on|off] [purger_files=number] [purger_sleep=time] [purger_threshold=time]; "Sets the path and other parameters of a cache. Cache data are stored in files. The file name in a cache is a result of applying the MD5 function to the cache key. The levels parameter defines hierarchy levels of a cache: from 1 to 3, each level accepts values 1 or 2. For example, in the following configuration" proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m; file names in a cache will look like this: /data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c As nginx is using MD5 here, which is not supported in FIPS, we are getting the openssl error "md5_dgst.c(82): OpenSSL internal error, assertion failed: Digest MD5 forbidden in FIPS mode!" Is there a way to configure nginx to use FIPS-compliant algorithms like SHA-256 instead of MD5 in the proxy cache? Or does it need a code fix in nginx? If so, which file/module may require a code fix here? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284539,284788#msg-284788 From nginx-forum at forum.nginx.org Tue Jul 9 06:13:28 2019 From: nginx-forum at forum.nginx.org (kirti maindargikar) Date: Tue, 09 Jul 2019 02:13:28 -0400 Subject: FIPS support in nginx?
In-Reply-To: <6c611038dae3f3dde1c254ca86e1022a.NginxMailingListEnglish@forum.nginx.org> References: <20190617090020.GA11414@vlpc> <6c611038dae3f3dde1c254ca86e1022a.NginxMailingListEnglish@forum.nginx.org> Message-ID: This is the entry in the nginx.conf which is using proxy cache . I dont see any option here to configure hashing algorithm location /nginx-picture { internal; proxy_buffering on; proxy_cache media; proxy_cache_key $uri$args; proxy_cache_valid 200 43200s; proxy_ignore_headers Expires; proxy_ignore_headers Cache-Control; add_header X-Cache-Status $upstream_cache_status; proxy_pass http://acs_backend$request_uri; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284539,284789#msg-284789 From webert.boss at gmail.com Tue Jul 9 07:41:21 2019 From: webert.boss at gmail.com (Webert de Souza Lima) Date: Tue, 9 Jul 2019 09:41:21 +0200 Subject: deny vs limit_req Message-ID: Hi, I have a few `deny` rules set in global scope, sometimes I add spammers there to block annoying attacks. I also have a couple of `limit_req` rules in global scope, and 1 in a local scope, that is more restrictive and I put it inside a `location` directive. Last time an attack happened the limit_req was kicking in for this location but after I put the IP addr on the `deny` rules it didn't do anything. So my question is: is this a matter of precedence? The limit_req inside a location would suppress any global deny rules? Thanks Regards, Webert Lima *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 9 09:10:00 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Jul 2019 12:10:00 +0300 Subject: FIPS support in nginx? 
In-Reply-To: <6c611038dae3f3dde1c254ca86e1022a.NginxMailingListEnglish@forum.nginx.org> References: <20190617090020.GA11414@vlpc> <6c611038dae3f3dde1c254ca86e1022a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190709091000.GW1877@mdounin.ru> Hello! On Tue, Jul 09, 2019 at 02:09:47AM -0400, kirti maindargikar wrote: > Hi, We are using 1.10.3 nginx in FIPS mode. As discussed above we already > have FIPS enabled on RHEL and we have recompiled nginx with OpenSSL FIPS. > However we still see that Nginx is using MD5 algorithms ( which is not > allowed in FIPS mode ) when we use proxy_cache to cache pictures . > Looks like nginx uses MD5 hash to create the name of the cached image file. Yes, it does. It is, however, used for non-security purpose, and this has nothing to do with FIPS. [...] > As nginx is using MD5 here, which is not supported in FIPS, we are getting > openssl error > > "md5_dgst.c(82): OpenSSL internal error, assertion failed: Digest MD5 > forbidden in FIPS mode!" Upgrade to nginx 1.11.2 or later. Starting with this version, nginx will use internal MD5 implementation for hashing cache keys, so using RHEL with FIPS enabled won't cause errors. Note well that nginx 1.10.3 is obsolete for more than two years now, so you may want to upgrade anyway. Latest nginx version is 1.17.1, latest stable is 1.16.0. -- Maxim Dounin http://mdounin.ru/ From thresh at nginx.com Tue Jul 9 10:35:59 2019 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 9 Jul 2019 13:35:59 +0300 Subject: TLS 1.3 support in nginx-1.17.1 binary for Ubuntu 18.04 "bionic" provided by nginx.org In-Reply-To: References: Message-ID: Hi Zeev, 03.07.2019 18:49, Zeev Tarantov wrote: > I've installed the nginx package provided by nginx.org > (https://nginx.org/en/linux_packages.html#Ubuntu) > specifically the binary provided by? > https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/nginx_1.17.1-1~bionic_amd64.deb > and it doesn't have TLS 1.3 support. 
> According to > https://mailman.nginx.org/pipermail/nginx/2019-January/057402.html this > would be because it was built on an Ubuntu 18.04 "bionic" that was not > fully updated. > Ubuntu 18.04 "bionic" switched from openssl 1.1.0 to openssl 1.1.1 > recently and I hoped the newer releases would be compiled with openssl > 1.1.1 and support TLS 1.3. > When I build that package myself (using apt-get source nginx ; cd > nginx-1.17.1/ ; debuild -i -us -uc -b) on a fully updated Ubuntu 18.04 > "bionic", it does support TLS 1.3. > I ask that?the build environment is set up such that the next release > will support TLS 1.3, or better yet, that 1.16.0 and 1.17.1 packages for > Ubuntu 18.04 "bionic" are updated to include TLS 1.3 support. > Unless such packages won't work on a non-updated Ubuntu 18.04 system? (Why?) > Or does anyone know of a workaround that does not involve building the > packages myself? Thanks for the heads up on the openssl version change in 18.04 - it definitely is on our roadmap to provide prebuilt packages based on openssl 1.1.1! Indeed, new packages built with openssl 1.1.1 will not work on the older Ubuntu 18.04 point releases (non-updated), so this means the users will have to update when they update nginx. We definitely will not be changing the already released binaries, as this is likely to break existing setups that rely on the specific environments. The next nginx release however will be built using the newer Ubuntu 18.04 base with openssl 1.1.1. There's no ETA for it yet as far as I can tell. 
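In the meantime, here is one way to check what a given setup supports (a sketch; example.com is a placeholder, and the client-side probe needs OpenSSL 1.1.1+ installed):

```
# Which OpenSSL was the nginx binary built with?
nginx -V 2>&1 | grep -o 'OpenSSL [^ ]*'

# Does a server actually negotiate TLS 1.3?
openssl s_client -connect example.com:443 -tls1_3 </dev/null 2>/dev/null \
    | grep Protocol
```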
Thanks, -- Konstantin Pavlov https://www.nginx.com/ From b.jeyamurugan at gmail.com Tue Jul 9 12:25:39 2019 From: b.jeyamurugan at gmail.com (Jeya Murugan) Date: Tue, 9 Jul 2019 17:55:39 +0530 Subject: How to configure Nginx LB IP-Transparency for custom UDP application In-Reply-To: References: Message-ID: Hi all, I am using *NGINX 1.13.5 as a Load Balancer for one of my CUSTOM-APPLICATION *which will listen on* UDP port 2231,67 and 68.* I am trying for Load Balancing with IP-Transparency. When I using the proxy_protocol method the packets received from a remote client is modified and send to upstream by NGINX LB not sure why/how the packet is modified and also the remote client IP is NOT as source IP. When I using proxy_bind, the packet is forwarded to configured upstream but the source IP is not updated with Remote Client IP. *Basically, in both methods, the remote client address was not used as a source IP. I hope I missed some minor parts. Can someone help to resolve this issue?* The following are the detailed configuration for your reference. 
*Method 1 :- proxy_protocol* *Configuration:* user *root;* worker_processes 1; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 1024; } stream { server { listen 10.43.18.107:2231 udp; proxy_protocol on; proxy_pass 10.43.18.172:2231; } server { listen 10.43.18.107:67 udp; proxy_protocol on; proxy_pass 10.43.18.172:67; } server { listen 10.43.18.107:68 udp; proxy_protocol on; proxy_pass 10.43.18.172:68; } } *TCPDUMP O/P :* *From LB:* 10:05:07.284259 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 10:05:07.284555 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 *From upstream[Custom application]:* 10:05:07.284442 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 *Method 2:- [ proxy_bind ]* *Configuration:* user root; worker_processes 1; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 1024; } stream { server { listen 10.43.18.107:2231 udp; proxy_bind $remote_addr:2231 transparent; proxy_pass 10.43.18.172:2231; } server { listen 10.43.18.107:67 udp; proxy_bind $remote_addr:67 transparent; proxy_pass 10.43.18.172:67; } server { listen 10.43.18.107:68 udp; proxy_bind $remote_addr:68 transparent; proxy_pass 10.43.18.172:68; } } *Also, added the below rules :* ip rule add fwmark 1 lookup 100 ip route add local 0.0.0.0/0 dev lo table 100 iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 2231 -j MARK --set-xmark 0x1/0xffffffff iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 67 -j MARK --set-xmark 0x1/0xffffffff iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 68 -j MARK --set-xmark 0x1/0xffffffff However, still, the packet is sent from NGINX LB with its own IP, not with the remote client IP address. 
*TCPDUMP O/P from LB:* 11:49:51.999829 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 11:49:52.000161 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 *TPCDUM O/P from Upstream:* 11:49:52.001155 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 *Note:* I have followed the below link. https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Tue Jul 9 15:11:08 2019 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 9 Jul 2019 18:11:08 +0300 Subject: How to configure Nginx LB IP-Transparency for custom UDP application In-Reply-To: References: Message-ID: <20190709151108.GB61550@Romans-MacBook-Air.local> Hi, On Tue, Jul 09, 2019 at 05:55:39PM +0530, Jeya Murugan wrote: > Hi all, > > > I am using *NGINX 1.13.5 as a Load Balancer for one of my > CUSTOM-APPLICATION *which will listen on* UDP port 2231,67 and 68.* > > I am trying for Load Balancing with IP-Transparency. > > > > When I using the proxy_protocol method the packets received from a remote > client is modified and send to upstream by NGINX LB not sure why/how the > packet is modified and also the remote client IP is NOT as source IP. The proxy_protocol directive adds a PROXY protocol header to the datagram, that's why it's modified. The directive does not change the source address. Instead, the remote client address is passed in the PROXY protocol header. > When I using proxy_bind, the packet is forwarded to configured upstream but > the source IP is not updated with Remote Client IP. What is the reason for the port next to $remote_addr in proxy_bind? Also make sure nginx master runs with sufficient privileges. > *Basically, in both methods, the remote client address was not used as a > source IP. I hope I missed some minor parts. Can someone help to resolve > this issue?* > > > > The following are the detailed configuration for your reference. 
> > > > *Method 1 :- proxy_protocol* > > > > *Configuration:* > > > > user *root;* > worker_processes 1; > error_log /var/log/nginx/error.log debug; > pid /var/run/nginx.pid; > events { > worker_connections 1024; > > } > > stream { > server { > listen 10.43.18.107:2231 udp; > proxy_protocol on; > proxy_pass 10.43.18.172:2231; > } > server { > listen 10.43.18.107:67 udp; > proxy_protocol on; > proxy_pass 10.43.18.172:67; > } > server { > listen 10.43.18.107:68 udp; > proxy_protocol on; > proxy_pass 10.43.18.172:68; > } > } > > *TCPDUMP O/P :* > > > > *From LB:* > > 10:05:07.284259 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 > > 10:05:07.284555 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 > > > > *From upstream[Custom application]:* > > 10:05:07.284442 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 > > > > *Method 2:- [ proxy_bind ]* > > > > *Configuration:* > > > > user root; > worker_processes 1; > error_log /var/log/nginx/error.log debug; > pid /var/run/nginx.pid; > events { > worker_connections 1024; > } > > stream { > server { > listen 10.43.18.107:2231 udp; > proxy_bind $remote_addr:2231 transparent; > proxy_pass 10.43.18.172:2231; > } > server { > listen 10.43.18.107:67 udp; > proxy_bind $remote_addr:67 transparent; > proxy_pass 10.43.18.172:67; > } > server { > listen 10.43.18.107:68 udp; > proxy_bind $remote_addr:68 transparent; > proxy_pass 10.43.18.172:68; > } > > } > > > > *Also, added the below rules :* > > > > ip rule add fwmark 1 lookup 100 > > ip route add local 0.0.0.0/0 dev lo table 100 > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 2231 -j > MARK --set-xmark 0x1/0xffffffff > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 67 -j MARK > --set-xmark 0x1/0xffffffff > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 68 -j MARK > --set-xmark 0x1/0xffffffff > > > > However, still, the packet is sent from NGINX LB with its own IP, not with > the remote client IP 
address. > > > > *TCPDUMP O/P from LB:* > > > > 11:49:51.999829 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 > > 11:49:52.000161 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 > > > > *TPCDUM O/P from Upstream:* > > > > 11:49:52.001155 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 > > > > *Note:* I have followed the below link. > > > > https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From Bernard.Quick at polaris.com Tue Jul 9 15:34:12 2019 From: Bernard.Quick at polaris.com (Bernie Quick) Date: Tue, 9 Jul 2019 15:34:12 +0000 Subject: How to properly log a bug Message-ID: Hi, I have been working with NGINX for about a year now. I have some 40 instances of NGINX running and I am running into a core dump with 2 new ones. I have a repeatable process that generates my .conf and my .map files. I have powershell scripts that runs and read from a database and generate the .conf and .map files. There is something about the .map files for these two that is causing a core dump / segmentation fault. So I built an NGINX box from code with debugging on and I have the error log files with all the debugging, but it means nothing to me. I just don't know exactly where or to whom to report this issue. Thanks, -Bernie Bernard Quick | Polaris Industries | Staff DevOps Architect 9955 59th Ave N | Plymouth, MN 55442 | p:763.417.2204 | c:612.963.7742 | e:Bernard.Quick at polaris.com CONFIDENTIAL: The information contained in this email communication is confidential information intended only for the use of the addressee. Unauthorized use, disclosure or copying of this communication is strictly prohibited and may be unlawful. 
If you have received this communication in error, please notify us immediately by return email and destroy all copies of this communication, including all attachments. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Terry.Lemons at dell.com Tue Jul 9 18:40:06 2019 From: Terry.Lemons at dell.com (Lemons, Terry) Date: Tue, 9 Jul 2019 18:40:06 +0000 Subject: Does nginx use unique session identifiers In-Reply-To: <3D3D64550867DA40B2DA7D9F013709C0B355990F@MX308CL03.corp.emc.com> References: <3D3D64550867DA40B2DA7D9F013709C0B355990F@MX308CL03.corp.emc.com> Message-ID: <3D3D64550867DA40B2DA7D9F013709C0B355CB47@MX308CL03.corp.emc.com> Hi Our product uses nginx to front-end inbound web access. To enhance our product's security posture, we have been examining the rules in the DISA Web Server Security Requirements Guide. One of the rules (https://www.stigviewer.com/stig/web_server_security_requirements_guide/2014-11-17/finding/V-41807) states, "The web server must generate unique session identifiers that cannot be reliably reproduced." I searched the nginx documentation, but wasn't able to confirm that unique session identifiers are used. Are they? Thanks tl Terry Lemons [DellEMC_Logo_Hz_Blue_rgb_10percent] Data Protection Division 176 South Street, MS 2/B-34 Hopkinton MA 01748 terry.lemons at dell.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 2117 bytes Desc: image001.png URL: From nginx-forum at forum.nginx.org Tue Jul 9 19:32:20 2019 From: nginx-forum at forum.nginx.org (tlemons) Date: Tue, 09 Jul 2019 15:32:20 -0400 Subject: FIPS support in nginx? In-Reply-To: <20190617090020.GA11414@vlpc> References: <20190617090020.GA11414@vlpc> Message-ID: <7dfee366468aa5b0e190c12288211d5f.NginxMailingListEnglish@forum.nginx.org> Thanks for this reply, Vladimir! 
Where can I find nginx' use of openssl explained in the nginx documentation? I searched but didn't find it. Also, kirti mentioned re-compiling nginx to achieve a FIPS-compliant environment; is that necessary? Thanks! tl Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284539,284807#msg-284807 From nginx-forum at forum.nginx.org Wed Jul 10 06:52:37 2019 From: nginx-forum at forum.nginx.org (jbalasubramanian) Date: Wed, 10 Jul 2019 02:52:37 -0400 Subject: IP Transparency in NGINX Message-ID: <425cd65d485a0760508731a9f5426b9e.NginxMailingListEnglish@forum.nginx.org> Hi all, I am using NGINX 1.13.5 as a Load Balancer for one of my CUSTOM-APPLICATION which will listen on UDP port 2231,67 and 68. I am trying for Load Balancing with IP-Transparency. When I using the proxy_protocol method the packets received from a remote client is modified and send to upstream by NGINX LB not sure why/how the packet is modified and also the remote client IP is NOT as source IP. When I using proxy_bind, the packet is forwarded to configured upstream but the source IP is not updated with Remote Client IP. Basically, in both methods, the remote client address was not used as a source IP. I hope I missed some minor parts. Can someone help to resolve this issue? The following are the detailed configuration for your reference. 
Method 1 :- proxy_protocol Configuration: user root; worker_processes 1; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 1024; } stream { server { listen 10.43.18.107:2231 udp; proxy_protocol on; proxy_pass 10.43.18.172:2231; } server { listen 10.43.18.107:67 udp; proxy_protocol on; proxy_pass 10.43.18.172:67; } server { listen 10.43.18.107:68 udp; proxy_protocol on; proxy_pass 10.43.18.172:68; } } TCPDUMP O/P : >From LB: 10:05:07.284259 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 10:05:07.284555 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 >From upstream[Custom application]: 10:05:07.284442 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 Method 2:- [ proxy_bind ] Configuration: user root; worker_processes 1; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 1024; } stream { server { listen 10.43.18.107:2231 udp; proxy_bind $remote_addr:2231 transparent; proxy_pass 10.43.18.172:2231; } server { listen 10.43.18.107:67 udp; proxy_bind $remote_addr:67 transparent; proxy_pass 10.43.18.172:67; } server { listen 10.43.18.107:68 udp; proxy_bind $remote_addr:68 transparent; proxy_pass 10.43.18.172:68; } } Also, added the below rules : ip rule add fwmark 1 lookup 100 ip route add local 0.0.0.0/0 dev lo table 100 iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 2231 -j MARK --set-xmark 0x1/0xffffffff iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 67 -j MARK --set-xmark 0x1/0xffffffff iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 68 -j MARK --set-xmark 0x1/0xffffffff However, still, the packet is sent from NGINX LB with its own IP, not with the remote client IP address. 
TCPDUMP O/P from LB: 11:49:51.999829 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 11:49:52.000161 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 TCPDUMP O/P from Upstream: 11:49:52.001155 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 Note: I have followed the below link. https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284810,284810#msg-284810 From al-nginx at none.at Wed Jul 10 11:16:02 2019 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 10 Jul 2019 13:16:02 +0200 Subject: Does nginx use unique session identifiers In-Reply-To: <3D3D64550867DA40B2DA7D9F013709C0B355CB47@MX308CL03.corp.emc.com> References: <3D3D64550867DA40B2DA7D9F013709C0B355990F@MX308CL03.corp.emc.com> <3D3D64550867DA40B2DA7D9F013709C0B355CB47@MX308CL03.corp.emc.com> Message-ID: <802fb7db-512d-ceb6-19e1-25503a1de8ba@none.at> Hi. Am 09.07.2019 um 20:40 schrieb Lemons, Terry: > Hi > > Our product uses nginx to front-end inbound web access. To enhance our product's > security posture, we have been examining the rules in the DISA Web Server > Security Requirements Guide > . > One of the rules > (https://www.stigviewer.com/stig/web_server_security_requirements_guide/2014-11-17/finding/V-41807) > states, "The web server must generate unique session identifiers that cannot be > reliably reproduced." I searched the nginx documentation, but wasn't able to > confirm that unique session identifiers are used. > > Are they? Maybe you can use the variable `request_id`. https://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_id In the following blog post you can find an example of how it can be used. https://www.nginx.com/blog/application-tracing-nginx-plus/ => Tracing Requests End-to-End > Thanks > > tl Hth Aleks > *Terry Lemons* > > > > DellEMC_Logo_Hz_Blue_rgb_10percent > > Data Protection Division > >
> > 176 South Street, MS 2/B-34 > Hopkinton MA 01748 > terry.lemons at dell.com > > ? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Wed Jul 10 22:25:25 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Jul 2019 23:25:25 +0100 Subject: SSL_ERROR_BAD_CERT_DOMAIN with multiple domains In-Reply-To: <963804883d08850e4f5a98a0549b3aa5.NginxMailingListEnglish@forum.nginx.org> References: <20190626082607.ya6yio3ezzuemdeo@daoine.org> <963804883d08850e4f5a98a0549b3aa5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190710222525.xjfhdkkqvijtfad2@daoine.org> On Sat, Jul 06, 2019 at 07:59:10AM -0400, BeyondEvil wrote: Hi there, > Server A (macmini) has an nginx server under my direct control. > Server B (the synology NAS) has an nginx server NOT under my direct > control. ...and you have exactly 1 public IP address, and you would like to be able to access the content on both of them. If you are happy to test things, I have two suggestions which might work for you. The first is a "proxy_pass" where your users will never talk directly to server B, and will never use the server B domain name. Depending on what server B requires, this may not work. But if it does -- you get a new hostname, "nas.domainA", for example, and get a certificate for it. Then do the normal nginx two-ssl-servers thing with SNI, and the one with "server_name nas.domainA" does "proxy_pass https://server-B". The second involves using "stream" instead of "http" on the public-facing ip:port. In that case, you use stream with ssl preread, documented at http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html If the SNI name relates to the NAS, you proxy_pass to that IP:port; else you proxy_pass to the IP:port that your nginx https listener is on (possibly 127.0.0.1:443, if you have the stream listener on the same machine). 
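A minimal sketch of that second, ssl_preread-based setup (hostnames and addresses are placeholders; "nas.domainB" stands for whatever name the NAS certificate covers):

```
stream {
    # route on the SNI name the client sent, without terminating TLS
    map $ssl_preread_server_name $backend {
        nas.domainB  192.0.2.20:443;   # Server B (the NAS) terminates TLS itself
        default      127.0.0.1:8443;   # your own http{} ssl listener
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```

In that layout the http{} server{} for www.domainA listens on 127.0.0.1:8443 ssl rather than on *:443 directly.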
> And herein lies the problem as it seems, from what I can gather HTTPS is > terminated and checked/validated in Server A and fails for requests to > Server B domain, since the certificates in Server A are not the correct ones > for Server B domain - only for Server A domain. In the first new case above, https is terminated on "your" nginx server, either with the www.domainA cert or the nas.domainA cert, so the client is happy. In the second new case above, https is terminated either on your server with the www.domainA cert, or on the other server with the domainB cert; so the client is still happy. Maybe one of those will suit you. f -- Francis Daly francis at daoine.org From qi.zheng at intel.com Thu Jul 11 01:04:04 2019 From: qi.zheng at intel.com (Zheng, Qi) Date: Thu, 11 Jul 2019 01:04:04 +0000 Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats? Message-ID: <0DD381DBF8F68D419C32ACFCEB28EB255E6C2A59@SHSMSX101.ccr.corp.intel.com> Hi, I am now using nginx 1.14. I am planning to upgrade it to the latest 1.17 version. My question is do nginx 1.14 and 1.17 have compatible configuration file formats? Can I use whatever I configured before for 1.14 on 1.17 version? Thanks. Best Regards SSP->LSE->Clear Linux Engineering (Shanghai) -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jul 11 12:41:56 2019 From: nginx-forum at forum.nginx.org (tuank19) Date: Thu, 11 Jul 2019 08:41:56 -0400 Subject: help allow post method url Message-ID: Hi all, I have a static website ( html ) with domain : abc.name.com , in html file have form and use post method to laravel cms with domain : xyz.qwe.com. Now i want config only domain abc.name.com post data to my cms laravel. what can i do in nginx ?
thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284822,284822#msg-284822 From al-nginx at none.at Thu Jul 11 14:00:46 2019 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 11 Jul 2019 16:00:46 +0200 Subject: help allow post method url In-Reply-To: References: Message-ID: Hi. Am 11.07.2019 um 14:41 schrieb tuank19: > Hi all, > > I have a static website ( html ) with domain : abc.name.com , in html file > have form and use post method to laravel cms with domain : xyz.qwe.com. > Now i want config only domain abc.name.com post data to my cms laravel. > what can i do in nginx ? > thanks I would try it with maps. https://nginx.org/en/docs/http/ngx_http_map_module.html ```code # untested map $request_method $dest_url { default http://abc.name.com; POST http://xyz.qwe.com; } location / { # when proxy_pass contains variables, the scheme must be part of the # value and a resolver is needed for hostnames proxy_pass $dest_url; } ``` Hth Aleks > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284822,284822#msg-284822 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Thu Jul 11 15:12:47 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Jul 2019 16:12:47 +0100 Subject: How to properly log a bug In-Reply-To: References: Message-ID: <20190711151247.hlgutbrxu3eznvew@daoine.org> On Tue, Jul 09, 2019 at 03:34:12PM +0000, Bernie Quick wrote: Hi there, > I just don't know exactly where or to whom to report this issue. I was all set to describe some debugging tasks, and then suggest that you either send a mail to this list, or open a ticket on https://trac.nginx.org/nginx/ And then I looked, and saw that you found that site, and have opened the ticket, and got the answer which has led to a code fix that will presumably be in the next version of nginx. So, yay. All looks good.
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Jul 11 15:22:52 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Jul 2019 16:22:52 +0100 Subject: Does nginx use unique session identifiers In-Reply-To: <3D3D64550867DA40B2DA7D9F013709C0B355CB47@MX308CL03.corp.emc.com> References: <3D3D64550867DA40B2DA7D9F013709C0B355990F@MX308CL03.corp.emc.com> <3D3D64550867DA40B2DA7D9F013709C0B355CB47@MX308CL03.corp.emc.com> Message-ID: <20190711152252.gjlhi4a24inbrafg@daoine.org> On Tue, Jul 09, 2019 at 06:40:06PM +0000, Lemons, Terry wrote: Hi there, > One of the rules (https://www.stigviewer.com/stig/web_server_security_requirements_guide/2014-11-17/finding/V-41807) states, "The web server must generate unique session identifiers that cannot be reliably reproduced." I searched the nginx documentation, but wasn't able to confirm that unique session identifiers are used. > > Are they? I think that that rule is intended as something like: if session identifiers are generated, then they must not be guessable. And I think that nginx does not generate session identifiers, unless you ask it to. If you do ask it to, then you possibly will use the "userid" directive (http://nginx.org/r/userid, plus the rest of that page). If you use "userid", then what it does is in the file ./src/http/modules/ngx_http_userid_filter_module.c The main "hopefully unguessable" part there seems to be "the number of microseconds past the second, at the instant that this code ran". But you shouldn't trust my interpretation of it, when you can read it yourself. 
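For illustration only - a minimal userid configuration built from that module's documented directives could look like the following sketch. The cookie name, domain, and expiry here are arbitrary example values, not recommendations:

```code
http {
    userid         on;
    userid_name    uid;           # cookie name (example)
    userid_domain  example.com;   # example value
    userid_path    /;
    userid_expires 365d;

    # $uid_got / $uid_set expose the identifier for logging:
    # $uid_got is set when the client presented the cookie,
    # $uid_set when nginx issued a new one on this request
    log_format with_uid '$remote_addr [$uid_got] [$uid_set] "$request"';
}
```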
Cheers, f -- Francis Daly francis at daoine.org From b.jeyamurugan at gmail.com Fri Jul 12 18:14:22 2019 From: b.jeyamurugan at gmail.com (Jeya Murugan) Date: Fri, 12 Jul 2019 23:44:22 +0530 Subject: How to configure Nginx LB IP-Transparency for custom UDP application In-Reply-To: <20190709151108.GB61550@Romans-MacBook-Air.local> References: <20190709151108.GB61550@Romans-MacBook-Air.local> Message-ID: On Tue, Jul 9, 2019 at 8:41 PM Roman Arutyunyan wrote: > Hi, > > On Tue, Jul 09, 2019 at 05:55:39PM +0530, Jeya Murugan wrote: > > Hi all, > > > > > > I am using *NGINX 1.13.5 as a Load Balancer for one of my > > CUSTOM-APPLICATION *which will listen on* UDP port 2231,67 and 68.* > > > > I am trying for Load Balancing with IP-Transparency. > > > > > > > > When I using the proxy_protocol method the packets received from a remote > > client is modified and send to upstream by NGINX LB not sure why/how the > > packet is modified and also the remote client IP is NOT as source IP. > > The proxy_protocol directive adds a PROXY protocol header to the datagram, > that's why it's modified. The directive does not change the source > address. > Instead, the remote client address is passed in the PROXY protocol header. > > : Okay. Do we have any options to send remote client IP as source > address? Due to additional proxy header the packet is dropped by the > application running in the upstream. How can the proxy header can be > stripped in the upstream end? Do we need to do configuration/rules on the upstream end? > > When I using proxy_bind, the packet is forwarded to configured upstream > but > > the source IP is not updated with Remote Client IP. > > What is the reason for the port next to $remote_addr in proxy_bind? > Also make sure nginx master runs with sufficient privileges. > : Yes, application running with root privilege as specified in the conf file Also, the proxy_bind syntax is referred in the below link.' 
https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/#proxy_bind proxy_bind $remote_addr:$remote_port transparent; > > > *Basically, in both methods, the remote client address was not used as a > > source IP. I hope I missed some minor parts. Can someone help to resolve > > this issue?* > > > > > > > > The following are the detailed configuration for your reference. > > > > > > > > *Method 1 :- proxy_protocol* > > > > > > > > *Configuration:* > > > > > > > > user *root;* > > worker_processes 1; > > error_log /var/log/nginx/error.log debug; > > pid /var/run/nginx.pid; > > events { > > worker_connections 1024; > > > > } > > > > stream { > > server { > > listen 10.43.18.107:2231 udp; > > proxy_protocol on; > > proxy_pass 10.43.18.172:2231; > > } > > server { > > listen 10.43.18.107:67 udp; > > proxy_protocol on; > > proxy_pass 10.43.18.172:67; > > } > > server { > > listen 10.43.18.107:68 udp; > > proxy_protocol on; > > proxy_pass 10.43.18.172:68; > > } > > } > > > > *TCPDUMP O/P :* > > > > > > > > *From LB:* > > > > 10:05:07.284259 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 > > > > 10:05:07.284555 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 > > > > > > > > *From upstream[Custom application]:* > > > > 10:05:07.284442 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 > > > > > > > > *Method 2:- [ proxy_bind ]* > > > > > > > > *Configuration:* > > > > > > > > user root; > > worker_processes 1; > > error_log /var/log/nginx/error.log debug; > > pid /var/run/nginx.pid; > > events { > > worker_connections 1024; > > } > > > > stream { > > server { > > listen 10.43.18.107:2231 udp; > > proxy_bind $remote_addr:2231 transparent; > > proxy_pass 10.43.18.172:2231; > > } > > server { > > listen 10.43.18.107:67 udp; > > proxy_bind $remote_addr:67 transparent; > > proxy_pass 10.43.18.172:67; > > } > > server { > > listen 10.43.18.107:68 udp; > > proxy_bind $remote_addr:68 transparent; > > proxy_pass 
10.43.18.172:68; > > } > > > > } > > > > > > > > *Also, added the below rules :* > > > > > > > > ip rule add fwmark 1 lookup 100 > > > > ip route add local 0.0.0.0/0 dev lo table 100 > > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 2231 -j > > MARK --set-xmark 0x1/0xffffffff > > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 67 -j > MARK > > --set-xmark 0x1/0xffffffff > > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 68 -j > MARK > > --set-xmark 0x1/0xffffffff > > > > > > > > However, still, the packet is sent from NGINX LB with its own IP, not > with > > the remote client IP address. > > > > > > > > *TCPDUMP O/P from LB:* > > > > > > > > 11:49:51.999829 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 > > > > 11:49:52.000161 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 > > > > > > > > *TPCDUM O/P from Upstream:* > > > > > > > > 11:49:52.001155 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 > > > > > > > > *Note:* I have followed the below link. > > > > > > > > > https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/ > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From nginx-forum at forum.nginx.org Sat Jul 13 13:50:50 2019 From: nginx-forum at forum.nginx.org (heythisisom) Date: Sat, 13 Jul 2019 09:50:50 -0400 Subject: Nginx request processing is slow when logging disabled In-Reply-To: <20190708125107.GQ1877@mdounin.ru> References: <20190708125107.GQ1877@mdounin.ru> Message-ID: <796d8a1d140209bff10ccb91eef5dc33.NginxMailingListEnglish@forum.nginx.org>

Hi Maxim,

The nginx reverse proxy and uWSGI runs on the same host. Each nginx reverse proxies are connected to only one single Instance of the uWSGI backend. But in the uWSGI backend, I'm running 4 workers in total based on the configuration 2 workers can be handled by 1 VCPU. Essentially the Instance I run has 2 VCPUs hence It translates to 4 workers. The listen queue length of my backend is sufficiently high i.e. 4096 and I have set my somaxconn parameter to 32768. So I think everything with respect to the backend seems fine.

Only when I disable logging in nginx I could see this Issue happen. Once I enable it, my hosts never raised Timeout error at all. Also, note that this Issue happens more often when the server is in an Idle state and not when the server is in peak.

-- Om

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284757,284847#msg-284847

From qi.zheng at intel.com Mon Jul 15 08:45:56 2019 From: qi.zheng at intel.com (Zheng, Qi) Date: Mon, 15 Jul 2019 08:45:56 +0000 Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats? In-Reply-To: <0DD381DBF8F68D419C32ACFCEB28EB255E6C2A59@SHSMSX101.ccr.corp.intel.com> References: <0DD381DBF8F68D419C32ACFCEB28EB255E6C2A59@SHSMSX101.ccr.corp.intel.com> Message-ID: <0DD381DBF8F68D419C32ACFCEB28EB255E6C35D6@SHSMSX101.ccr.corp.intel.com>

Hi, Can someone help answer my question? Or any other channel I can throw the question to? Many thanks.

Best Regards SSP->LSE->Clear Linux Engineering (Shanghai)

From: Zheng, Qi Sent: Thursday, July 11, 2019 9:04 AM To: nginx at nginx.org Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats?

Hi, I am now using nginx 1.14. I am planning to upgrade it to the latest 1.17 version. My question is do nginx 1.14 and 1.17 have compatible configuration file formats? Can I use whatever I configured before for 1.14 on 1.17 version? Thanks.

Best Regards SSP->LSE->Clear Linux Engineering (Shanghai) -------------- next part -------------- An HTML attachment was scrubbed... URL:

From lucas at slcoding.com Mon Jul 15 08:47:11 2019 From: lucas at slcoding.com (Lucas Rolff) Date: Mon, 15 Jul 2019 08:47:11 +0000 Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats?
In-Reply-To: <0DD381DBF8F68D419C32ACFCEB28EB255E6C35D6@SHSMSX101.ccr.corp.intel.com> References: <0DD381DBF8F68D419C32ACFCEB28EB255E6C2A59@SHSMSX101.ccr.corp.intel.com> <0DD381DBF8F68D419C32ACFCEB28EB255E6C35D6@SHSMSX101.ccr.corp.intel.com> Message-ID: <9274D3F8-0CB6-4B3F-B4F3-C53B7E8A9F36@slcoding.com> I would say, install a box with 1.17, copy your config, and do a config test to see if it works ? From: nginx on behalf of "Zheng, Qi" Reply-To: "nginx at nginx.org" Date: Monday, 15 July 2019 at 10.46 To: "'nginx at nginx.org'" Subject: RE: Do nginx 1.14 and 1.17 have compatible configuration file formats? Hi, Can someone help answer my question? Or any other channel I can throw the question to? Many thanks. Best Regards SSP->LSE->Clear Linux Engineering (Shanghai) From: Zheng, Qi Sent: Thursday, July 11, 2019 9:04 AM To: nginx at nginx.org Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats? Hi, I am now using nginx 1.14. I am planning to upgrade it to the latest 1.17 version. My question is do nginx 1.14 and 1.17 have compatible configuration file formats? Can I use whatever I configured before for 1.14 on 1.17 version? Thanks. Best Regards SSP->LSE->Clear Linux Engineering (Shanghai) -------------- next part -------------- An HTML attachment was scrubbed... URL: From qi.zheng at intel.com Tue Jul 16 00:53:41 2019 From: qi.zheng at intel.com (Zheng, Qi) Date: Tue, 16 Jul 2019 00:53:41 +0000 Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats? In-Reply-To: <9274D3F8-0CB6-4B3F-B4F3-C53B7E8A9F36@slcoding.com> References: <0DD381DBF8F68D419C32ACFCEB28EB255E6C2A59@SHSMSX101.ccr.corp.intel.com> <0DD381DBF8F68D419C32ACFCEB28EB255E6C35D6@SHSMSX101.ccr.corp.intel.com> <9274D3F8-0CB6-4B3F-B4F3-C53B7E8A9F36@slcoding.com> Message-ID: <0DD381DBF8F68D419C32ACFCEB28EB255E6C387C@SHSMSX101.ccr.corp.intel.com> Thanks Lucas. Basic test showed it work on 1.17 with my old 1.14 config. 
But is there any official announcement that the nginx has backward compatibility from X version to Y version? Best Regards SSP->LSE->Clear Linux Engineering (Shanghai) From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Lucas Rolff Sent: Monday, July 15, 2019 4:47 PM To: nginx at nginx.org Subject: Re: Do nginx 1.14 and 1.17 have compatible configuration file formats? I would say, install a box with 1.17, copy your config, and do a config test to see if it works ? From: nginx > on behalf of "Zheng, Qi" > Reply-To: "nginx at nginx.org" > Date: Monday, 15 July 2019 at 10.46 To: "'nginx at nginx.org'" > Subject: RE: Do nginx 1.14 and 1.17 have compatible configuration file formats? Hi, Can someone help answer my question? Or any other channel I can throw the question to? Many thanks. Best Regards SSP->LSE->Clear Linux Engineering (Shanghai) From: Zheng, Qi Sent: Thursday, July 11, 2019 9:04 AM To: nginx at nginx.org Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats? Hi, I am now using nginx 1.14. I am planning to upgrade it to the latest 1.17 version. My question is do nginx 1.14 and 1.17 have compatible configuration file formats? Can I use whatever I configured before for 1.14 on 1.17 version? Thanks. Best Regards SSP->LSE->Clear Linux Engineering (Shanghai) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 16 09:26:24 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Jul 2019 12:26:24 +0300 Subject: Nginx request processing is slow when logging disabled In-Reply-To: <796d8a1d140209bff10ccb91eef5dc33.NginxMailingListEnglish@forum.nginx.org> References: <20190708125107.GQ1877@mdounin.ru> <796d8a1d140209bff10ccb91eef5dc33.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190716092624.GQ1877@mdounin.ru> Hello! On Sat, Jul 13, 2019 at 09:50:50AM -0400, heythisisom wrote: > Hi Maxim, > > The nginx reverse proxy and uWSGI runs on the same host. 
Each nginx reverse > proxies are connected to only one single Instance of the uWSGI backend. > > But in the uWSGI backend, I'm running 4 workers in total based on the > configuration 2 workers can be handled by 1 VCPU. Essentially the Instance I > run has 2 VCPUs hence It translates to 4 workers.

4 uWSGI workers means that only 4 requests can be handled in parallel. As long as your code uses external resources, such as databases or external requests, this may be way too low.

> The listen queue length of > my backend is sufficiently high i.e. 4096 and I have set my somaxconn > parameter to 32768. So I think everything with respect to the backend seems > fine.

Too large a listen queue length might be the reason which actually causes the 499 errors you've observed. For example, assuming processing each request requires 1 second on your backend, and given you have 4 backend workers, a listen queue length of 4096 translates to roughly 1000 seconds of delay due to queueing.

You may want to monitor the actual number of queued connection requests in the listening socket queue when the issue happens. Assuming you are using Linux, try "ss -nlt" to see queue sizes and the number of currently queued connection requests.

> Only when I disable logging in nginx I could see this Issue happen. Once I > enable it, my hosts never raised Timeout error at all. Also, note that this > Issue happens more often when the server is in an Idle state and not when > the server is in peak.

Disabling/enabling logging implies a slightly different load pattern, might change various timings and/or OS scheduler behaviour, and hence change the observed results - either by triggering bugs in various places (including kernel, nginx, and your backend) or simply by making things less efficient. What exactly happens requires further investigation. You may start with looking at the "ss -nlt" numbers as suggested above.
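(Side note for anyone following along: in "ss -nlt" output for a LISTEN socket, the Recv-Q column is the current number of queued connection requests and Send-Q is the configured backlog. An illustrative way to pull the two numbers out of a captured line - the sample line and port below are made up:)

```shell
# Illustrative only: parse one sample line of "ss -nlt" output.
# For LISTEN sockets, field 2 (Recv-Q) is the current accept-queue depth
# and field 3 (Send-Q) is the configured backlog.
line='LISTEN  128  4096  127.0.0.1:8000  0.0.0.0:*'
echo "$line" | awk '{ print "queued=" $2, "backlog=" $3 }'
# prints: queued=128 backlog=4096
```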
-- Maxim Dounin http://mdounin.ru/ From b.jeyamurugan at gmail.com Tue Jul 16 11:29:21 2019 From: b.jeyamurugan at gmail.com (Jeya Murugan) Date: Tue, 16 Jul 2019 16:59:21 +0530 Subject: How to configure Nginx LB IP-Transparency for custom UDP application In-Reply-To: References: <20190709151108.GB61550@Romans-MacBook-Air.local> Message-ID: > > @all : Can someone help /point-out what i have missed in proxy_protocol >> here? >> >> > I am using *NGINX 1.13.5 as a Load Balancer for one of my >> > CUSTOM-APPLICATION *which will listen on* UDP port 2231,67 and 68.* >> > >> > I am trying for Load Balancing with IP-Transparency. >> > >> > >> > >> > When I using the proxy_protocol method the packets received from a >> remote >> > client is modified and send to upstream by NGINX LB not sure why/how the >> > packet is modified and also the remote client IP is NOT as source IP. >> >> The proxy_protocol directive adds a PROXY protocol header to the datagram, >> that's why it's modified. The directive does not change the source >> address. >> Instead, the remote client address is passed in the PROXY protocol header. >> >> : Okay. Do we have any options to send remote client IP as source >> address? Due to additional proxy header the packet is dropped by the >> application running in the upstream. How can the proxy header can be >> stripped in the upstream end? > > Do we need to do configuration/rules on the upstream > end? > > >> > When I using proxy_bind, the packet is forwarded to configured upstream >> but >> > the source IP is not updated with Remote Client IP. >> >> What is the reason for the port next to $remote_addr in proxy_bind? >> Also make sure nginx master runs with sufficient privileges. >> > > : Yes, application running with root privilege as specified in the > conf file > > Also, the proxy_bind syntax is referred in the below link.' 
> > https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/#proxy_bind > > proxy_bind $remote_addr:$remote_port transparent; > >> >> > *Basically, in both methods, the remote client address was not used as a >> > source IP. I hope I missed some minor parts. Can someone help to resolve >> > this issue?* >> > >> > >> > >> > The following are the detailed configuration for your reference. >> > >> > >> > >> > *Method 1 :- proxy_protocol* >> > >> > >> > >> > *Configuration:* >> > >> > >> > >> > user *root;* >> > worker_processes 1; >> > error_log /var/log/nginx/error.log debug; >> > pid /var/run/nginx.pid; >> > events { >> > worker_connections 1024; >> > >> > } >> > >> > stream { >> > server { >> > listen 10.43.18.107:2231 udp; >> > proxy_protocol on; >> > proxy_pass 10.43.18.172:2231; >> > } >> > server { >> > listen 10.43.18.107:67 udp; >> > proxy_protocol on; >> > proxy_pass 10.43.18.172:67; >> > } >> > server { >> > listen 10.43.18.107:68 udp; >> > proxy_protocol on; >> > proxy_pass 10.43.18.172:68; >> > } >> > } >> > >> > *TCPDUMP O/P :* >> > >> > >> > >> > *From LB:* >> > >> > 10:05:07.284259 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 >> > >> > 10:05:07.284555 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length >> 91 >> > >> > >> > >> > *From upstream[Custom application]:* >> > >> > 10:05:07.284442 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length >> 91 >> > >> > >> > >> > *Method 2:- [ proxy_bind ]* >> > >> > >> > >> > *Configuration:* >> > >> > >> > >> > user root; >> > worker_processes 1; >> > error_log /var/log/nginx/error.log debug; >> > pid /var/run/nginx.pid; >> > events { >> > worker_connections 1024; >> > } >> > >> > stream { >> > server { >> > listen 10.43.18.107:2231 udp; >> > proxy_bind $remote_addr:2231 transparent; >> > proxy_pass 10.43.18.172:2231; >> > } >> > server { >> > listen 10.43.18.107:67 udp; >> > proxy_bind $remote_addr:67 transparent; >> > proxy_pass 10.43.18.172:67; >> > } >> > 
server { >> > listen 10.43.18.107:68 udp; >> > proxy_bind $remote_addr:68 transparent; >> > proxy_pass 10.43.18.172:68; >> > } >> > >> > } >> > >> > >> > >> > *Also, added the below rules :* >> > >> > >> > >> > ip rule add fwmark 1 lookup 100 >> > >> > ip route add local 0.0.0.0/0 dev lo table 100 >> > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 2231 >> -j >> > MARK --set-xmark 0x1/0xffffffff >> > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 67 -j >> MARK >> > --set-xmark 0x1/0xffffffff >> > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 68 -j >> MARK >> > --set-xmark 0x1/0xffffffff >> > >> > >> > >> > However, still, the packet is sent from NGINX LB with its own IP, not >> with >> > the remote client IP address. >> > >> > >> > >> > *TCPDUMP O/P from LB:* >> > >> > >> > >> > 11:49:51.999829 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 >> > >> > 11:49:52.000161 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 >> > >> > >> > >> > *TPCDUM O/P from Upstream:* >> > >> > >> > >> > 11:49:52.001155 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 >> > >> > >> > >> > *Note:* I have followed the below link. >> > >> > >> > >> > >> https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/ >> >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> -- >> Roman Arutyunyan >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From nginx-forum at forum.nginx.org Tue Jul 16 12:48:08 2019 From: nginx-forum at forum.nginx.org (heythisisom) Date: Tue, 16 Jul 2019 08:48:08 -0400 Subject: Nginx request processing is slow when logging disabled In-Reply-To: <20190716092624.GQ1877@mdounin.ru> References: <20190716092624.GQ1877@mdounin.ru> Message-ID:

Hi Maxim,

Thank you for your suggestion. I understand that enabling/disabling logging introduces extra CPU overhead. However, I will start to monitor the listen queue with the ss command and debug the Issue further.

Thanks, Om

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284757,284868#msg-284868

From peter_booth at me.com Tue Jul 16 18:24:35 2019 From: peter_booth at me.com (Peter Booth) Date: Tue, 16 Jul 2019 14:24:35 -0400 Subject: Nginx request processing is slow when logging disabled In-Reply-To: References: <20190716092624.GQ1877@mdounin.ru> Message-ID: <1382005E-CAA4-4F60-BD78-B2577535D4C2@me.com>

I'd suggest that you use wrk2, httperf, ab or similar to run a synthetic test. Can your site handle one request every five seconds? One request every second? Five every second? ... is your backend configured to log service times? Is your nginx configured to log service times? What do you see? By slowly ramping up traffic you will be able to see when and how the site transitions from healthy to unhealthy.

Sent from my iPhone

> On Jul 16, 2019, at 8:48 AM, heythisisom wrote: > > Hi Maxim, > > Thank you for your suggestion. I understand that enabling/disabling logging > introduces extra CPU overhead. However, I will start to monitor the listen > queue with the ss command and debug the Issue further.
> > Thanks, > Om > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284757,284868#msg-284868 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Jul 16 20:23:17 2019 From: nginx-forum at forum.nginx.org (bmacphee) Date: Tue, 16 Jul 2019 16:23:17 -0400 Subject: request authorization with grpc (failure status code) In-Reply-To: <20190703011712.GM1877@mdounin.ru> References: <20190703011712.GM1877@mdounin.ru> Message-ID: <535606f5c447a243b7156c8e0c245977.NginxMailingListEnglish@forum.nginx.org> I appreciate the suggestion but it doesn't look like this is possible to solve with these modules. The authentication part happens as a sub-request, and the response provided by sub request influences how the gRPC part is handled at the top level. Unless I can figure out some way to pass variables from the sub request and handle things differently... I don't know. If I return 200, the request proceeds as if it were authorized. This is bad. If I return 401 and try to set the headers/trailers, I don't think it affects the top level request/response. I always get grpc.CANCELLED at the client. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284718,284876#msg-284876 From jeff.dyke at gmail.com Wed Jul 17 00:54:35 2019 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Tue, 16 Jul 2019 20:54:35 -0400 Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats? 
In-Reply-To: <0DD381DBF8F68D419C32ACFCEB28EB255E6C387C@SHSMSX101.ccr.corp.intel.com> References: <0DD381DBF8F68D419C32ACFCEB28EB255E6C2A59@SHSMSX101.ccr.corp.intel.com> <0DD381DBF8F68D419C32ACFCEB28EB255E6C35D6@SHSMSX101.ccr.corp.intel.com> <9274D3F8-0CB6-4B3F-B4F3-C53B7E8A9F36@slcoding.com> <0DD381DBF8F68D419C32ACFCEB28EB255E6C387C@SHSMSX101.ccr.corp.intel.com> Message-ID: As an old dog in this world, i don't think you should ever take release notes over config tests and further web tests (siege, wrk, ab). Nginx has become such a versatile server starting with web and any proxy , then with openresty and unit etc ...how can you provide proof of an upgrade path. This is work that only you can finish. FWIW, i have only had to change my conf files for my benefit over the 1.14 to 1.17, but if you don't do this yourself, you need to ask yourself why and how you can that is correct for your environment. Best Jeff On Mon, Jul 15, 2019 at 8:53 PM Zheng, Qi wrote: > Thanks Lucas. > > Basic test showed it work on 1.17 with my old 1.14 config. > > But is there any official announcement that the nginx has backward > compatibility from X version to Y version? > > > > > > Best Regards > > > > SSP->LSE->Clear Linux Engineering (Shanghai) > > > > *From:* nginx [mailto:nginx-bounces at nginx.org] *On Behalf Of *Lucas Rolff > *Sent:* Monday, July 15, 2019 4:47 PM > *To:* nginx at nginx.org > *Subject:* Re: Do nginx 1.14 and 1.17 have compatible configuration file > formats? > > > > I would say, install a box with 1.17, copy your config, and do a config > test to see if it works ? > > > > *From: *nginx on behalf of "Zheng, Qi" < > qi.zheng at intel.com> > *Reply-To: *"nginx at nginx.org" > *Date: *Monday, 15 July 2019 at 10.46 > *To: *"'nginx at nginx.org'" > *Subject: *RE: Do nginx 1.14 and 1.17 have compatible configuration file > formats? > > > > Hi, > > > > Can someone help answer my question? > > Or any other channel I can throw the question to? > > Many thanks. 
> > > Best Regards > > > > SSP->LSE->Clear Linux Engineering (Shanghai) > > > > *From:* Zheng, Qi > *Sent:* Thursday, July 11, 2019 9:04 AM > *To:* nginx at nginx.org > *Subject:* Do nginx 1.14 and 1.17 have compatible configuration file > formats? > > > > Hi, > > > > I am now using nginx 1.14. > > I am planning to upgrade it to the latest 1.17 version. > > My question is do nginx 1.14 and 1.17 have compatible configuration file > formats? > > Can I use whatever I configured before for 1.14 on 1.17 version? > > Thanks. > > > > Best Regards > > > > SSP->LSE->Clear Linux Engineering (Shanghai) > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Wed Jul 17 11:33:52 2019 From: nginx-forum at forum.nginx.org (jko98) Date: Wed, 17 Jul 2019 07:33:52 -0400 Subject: Client ip adress gets lost (nginx authentication via sub-request) Message-ID: <1f0b7dfffcae05a1a37ece9df625efdf.NginxMailingListEnglish@forum.nginx.org>

Hi! I am using the http-auth-request module on an nginx proxy to authenticate against another server. This other server's authentication mechanism evaluates the requesting client's IP address for authentication purposes. However, this server only receives the IP address of the proxy server, regardless of which client accesses the proxied URL. How can I pass the original client IP address via the proxy server on to the "authentication server"?

Workflow illustration: https://i.stack.imgur.com/4wNSd.png

...
server {
    listen *:80;

    location /restricted {
        auth_request /auth;
        proxy_pass http://proxied-server/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /auth {
        internal;
        proxy_pass http://authentication-server/;
    }
}
...
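(One commonly suggested approach - untested here - is to set the client-address header inside the /auth location itself, since the proxy_set_header directives in /restricted apply only to that location and not to the auth subrequest:)

```code
location /auth {
    internal;
    proxy_pass              http://authentication-server/;
    # forward the original client address to the auth server
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $remote_addr;
    # auth subrequests usually should not forward the request body
    proxy_pass_request_body off;
    proxy_set_header        Content-Length  "";
}
```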
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284877,284877#msg-284877 From mdounin at mdounin.ru Wed Jul 17 11:49:03 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Jul 2019 14:49:03 +0300 Subject: request authorization with grpc (failure status code) In-Reply-To: <535606f5c447a243b7156c8e0c245977.NginxMailingListEnglish@forum.nginx.org> References: <20190703011712.GM1877@mdounin.ru> <535606f5c447a243b7156c8e0c245977.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190717114903.GT1877@mdounin.ru> Hello! On Tue, Jul 16, 2019 at 04:23:17PM -0400, bmacphee wrote: > I appreciate the suggestion but it doesn't look like this is possible to > solve with these modules. The authentication part happens as a sub-request, > and the response provided by sub request influences how the gRPC part is > handled at the top level. Unless I can figure out some way to pass > variables from the sub request and handle things differently... I don't > know. > > If I return 200, the request proceeds as if it were authorized. This is > bad. The gRPC protocol fails to follow HTTP status code semantics, and all responses in gRPC use status code 200. For details see protocol specification here: https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md How a particular response will be handled by a client depends on the "grpc-status" response header. You can add one with the "add_header" nginx configuration directive. Note that it might be non-trivial to find out which grpc-status codes should be used, as the "specification" in question fails to define them. 
Status codes seem to be listed - and with some descriptions - here in the grpc sources: https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/status.h

-- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Wed Jul 17 12:22:34 2019 From: nginx-forum at forum.nginx.org (bmacphee) Date: Wed, 17 Jul 2019 08:22:34 -0400 Subject: request authorization with grpc (failure status code) In-Reply-To: <20190717114903.GT1877@mdounin.ru> References: <20190717114903.GT1877@mdounin.ru> Message-ID:

Yes, I was trying various combinations of the following, with no success.

location @grpc_auth_fail {
    add_trailer grpc-status 16 always;
    add_header grpc-status 16 always;
    add_trailer grpc-message Unauthorized always;
    add_header grpc-message Unauthorized always;
    return 401;
    #return 200;
}

The choice of 16 for the status was based on this documentation (which is similar to the one you linked): https://grpc.github.io/grpc/core/md_doc_statuscodes.html

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284718,284879#msg-284879

From nginx-forum at forum.nginx.org Wed Jul 17 12:32:33 2019 From: nginx-forum at forum.nginx.org (bmacphee) Date: Wed, 17 Jul 2019 08:32:33 -0400 Subject: request authorization with grpc (failure status code) In-Reply-To: References: Message-ID: <0223e087b5a9df54867b1b401c8b3d77.NginxMailingListEnglish@forum.nginx.org>

I had some success doing the intercept at the next level above the auth proxy location like this: (using grpc_intercept_errors) server { listen 443 ssl http2; include grpc_servers.conf; # send all requests to the `/validate` endpoint for authorization auth_request /validate; grpc_intercept_errors on; error_page 401 @grpc_auth_fail; location = /validate { proxy_pass http://auth:5000; #proxy_intercept_errors on; #error_page 401 @grpc_auth_fail; } location @grpc_auth_fail { add_trailer grpc-status 16 always; add_header grpc-status 16 always; add_trailer grpc-message Unauthorized always; add_header grpc-message
Unauthorized always; return 200; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284718,284880#msg-284880 From nginx-forum at forum.nginx.org Wed Jul 17 12:35:36 2019 From: nginx-forum at forum.nginx.org (bmacphee) Date: Wed, 17 Jul 2019 08:35:36 -0400 Subject: request authorization with grpc (failure status code) In-Reply-To: <20190717114903.GT1877@mdounin.ru> References: <20190717114903.GT1877@mdounin.ru> Message-ID: <1706fe86c45015c2d290172df4c26fd0.NginxMailingListEnglish@forum.nginx.org> Thanks for your input. I think I found a solution that will work, so I replied to my original question with the config. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284718,284881#msg-284881 From pgnet.dev at gmail.com Wed Jul 17 14:24:37 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Wed, 17 Jul 2019 07:24:37 -0700 Subject: Nextcloud 16 on Nginx 1.17.1 -- "Status: 500 Internal Server Error" & "Something is wrong with your openssl setup" ? Message-ID: <5369f435-621a-2f14-6e0a-b4db2dfc52bd@gmail.com> I run nginx/1.17.1 + PHP 7.4.0-dev on linux/64. It's an in-production setup, with lots of directly hosted, as well as proxied, SSL-secured webapps. I've now installed Nextcloud v16.0.3. For the moment, directly hosted on Nginx, not-yet proxied. It installs to DB with no errors. &, The site's accessible hosted on Nginx; I can get to the app's login page -- securely. But when I enter login credentials & submit, I get an nginx/fastcgi http fastcgi header: "Status: 500 Internal Server Error" and in Nextcloud logs, "Something is wrong with your openssl setup: error:02001002:system library:fopen:No such file or directory," I've posted an issue, with config & error details, here: https://github.com/nextcloud/server/issues/16378 Since I've got lots of other webapps running securely with no issues, and I _am_ able to get to Nextcloud's secure login page with my Nginx-served SSL cert, I suspect the problem's in Nextcloud -- NOT nginx. 
But thought I'd check-in here ... anyone successfully using Nextcloud on Nginx that can suggest what the problem is, or a fix? OR, *is* this an Nginx issue that I simply haven't recognized ? Header I've missed, or misconfigured? Not immediately clear to me why it wouldn't surface on the login page at 1st connect ... From postmaster at palvelin.fi Thu Jul 18 17:03:24 2019 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Thu, 18 Jul 2019 10:03:24 -0700 Subject: SSL_write() failed errors Message-ID: Hi, we're getting random SSL_write() failed errors on seemingly legitimate requests. The common denominator seems to be they are all for static files (images, js, etc.). Can anyone help me debug the issue? Here's a debug log paste for one incident: https://pastebin.com/ZsbLuD5N Our architecture is: Amazon ALB > Nginx 1.14 > PHP-FPM 7.3 Some of our possibly relevant nginx config parameters: upstream php73 { server unix:/run/php/php7.3-fpm.sock max_fails=20 fail_timeout=60; } ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:20m; ssl_session_timeout 120m; ssl_dhparam /etc/nginx/dhparam.pem; ssl_ciphers !aNULL:!eNULL:FIPS@STRENGTH; # http://blog.chrismeller.com/configuring-and-optimizing-php-fpm-and-nginx-on-ubuntu-or-debian # Caching configuration using 30 days expiration delay for static served files.
location ~ \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpe?g|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf|cur)$ { set $location_name static; # Location name expires 30d; log_not_found off; access_log off; } From andre8525 at hotmail.com Thu Jul 18 22:37:07 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Thu, 18 Jul 2019 22:37:07 +0000 Subject: Nginx cache-control headers issue Message-ID: Hello, I have an nginx proxy which is suddenly adding 2 Cache-Control headers, and the Last-Modified time is always the current time: curl -I https://example.com/hls/5d15498d3b4e13.57348983/1280_720_3200_5d15498d3b4e13.57348983.m3u8?token=st=1563488654~exp=1563575054~acl=/hls/5d15498d3b4e13.57348983/*~hmac=863d655766652601b77c0ba1fc94a60039c4c800d9ac7097b68edfa77b9c1cdb HTTP/1.1 200 OK Server: nginx/1.17.0 Date: Thu, 18 Jul 2019 22:28:34 GMT Content-Type: application/vnd.apple.mpegurl Last-Modified: Thu, 18 Jul 2019 22:28:34 GMT Connection: keep-alive Expires: Thu, 18 Jul 2019 23:28:34 GMT Cache-Control: private, max-age=3600, max-stale=0 Cache-Control: public max-age=31536000 s-maxage=31536000 Access-Control-Allow-Origin: * Access-Control-Allow-Credentials: true Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token Access-Control-Allow-Methods: OPTIONS, GET and this is the config: server { listen 443 ssl; server_name example.com; # security for bypass so localhost can empty cache if ($remote_addr ~ "^(127.0.0.1)$") { set $bypass $http_secret_header; } location '/.well-known/acme-challenge' { root /usr/local/www/example.com; allow all; default_type "text/plain"; } if ($arg_token) { set $test_token $arg_token; } if ($cookie_token) { set $test_token $cookie_token; } location / { #Proxy related config proxy_cache s3_cache; proxy_http_version 1.1; proxy_read_timeout 10s; proxy_send_timeout 10s;
proxy_connect_timeout 10s; proxy_cache_methods GET HEAD; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_lock on; proxy_cache_revalidate on; proxy_intercept_errors on; proxy_cache_lock_age 10s; proxy_cache_lock_timeout 1h; proxy_cache_background_update on; proxy_cache_valid 200 301 302 30d; proxy_pass https://s3/; proxy_cache_bypass $cookie_nocache $arg_nocache; proxy_cache_key "$scheme$host$request_uri"; #Proxy Buffers proxy_buffering on; proxy_buffer_size 1k; proxy_buffers 24 4k; proxy_busy_buffers_size 8k; proxy_max_temp_file_size 2048m; proxy_temp_file_write_size 32k; #Add Headers add_header Cache-Control 'public max-age=31536000 s-maxage=31536000'; add_header X-Cache-Status $upstream_cache_status always; add_header X-Proxy-Cache $upstream_cache_status; add_header 'Access-Control-Allow-Origin' '*'; add_header 'Access-Control-Allow-Credentials' 'true'; add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token'; add_header 'Access-Control-Allow-Methods' 'OPTIONS, GET'; #Header related config proxy_set_header Connection ""; proxy_set_header Authorization ''; proxy_set_header Host 'xxxxx.s3.amazonaws.com'; proxy_hide_header x-amz-id-2; proxy_hide_header x-amz-request-id; proxy_hide_header x-amz-meta-server-side-encryption; proxy_hide_header x-amz-server-side-encryption; proxy_hide_header Set-Cookie; proxy_hide_header x-amz-storage-class; proxy_ignore_headers Set-Cookie; proxy_ignore_headers Cache-Control; proxy_ignore_headers X-Accel-Expires Expires; # enable thread bool aio threads=default; akamai_token_validate $test_token; akamai_token_validate_key xxxxxxxxxxxxxxx; secure_token $token; secure_token_types text/xml application/vnd.apple.mpegurl; secure_token_content_type_f4m text/xml; secure_token_expires_time 100d; secure_token_query_token_expires_time 1h; 
secure_token_tokenize_segments on; } } I don't know why it is adding this control header: Cache-Control: private, max-age=3600, max-stale=0 I don't have this in the config. Also, I re-installed Nginx but still getting the same issue Thanks Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From pgnet.dev at gmail.com Fri Jul 19 15:39:15 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Fri, 19 Jul 2019 08:39:15 -0700 Subject: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ? Message-ID: I run nginx nginx -v nginx version: nginx/1.17.1 on linux/64. I've installed which openssl /usr/local/openssl/bin/openssl openssl version OpenSSL 1.1.1c 28 May 2019 nginx is built with/linked to this version ldd `which nginx` | grep ssl libssl.so.1.1 => /usr/local/openssl/lib64/libssl.so.1.1 (0x00007f95bdc09000) libcrypto.so.1.1 => /usr/local/openssl/lib64/libcrypto.so.1.1 (0x00007f95bd6f9000) I'm currently working setting up a local-only server, attempting to get it to use TLSv1.3/CHACHA20 only. I've tightened down restrictions in nginx config. With my attempted restrictions in place, I've found that I'm apparently NOT using TLSv1.3/CHACHA20.
With this nginx config server { listen 10.0.1.20:443 ssl http2; server_name test.dev.lan; root /data/webapps/nulldir; index index.html; rewrite_log on; access_log /var/log/nginx/access.log main; error_log /var/log/nginx/error.log info; ssl_protocols TLSv1.3 TLSv1.2; ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; ssl_ecdh_curve X25519:prime256v1:secp384r1; ssl_prefer_server_ciphers on; ssl_trusted_certificate "/usr/local/etc/ssl/myCA/myCA.chain.crt.pem"; ssl_certificate "/usr/local/etc/ssl/test/test.ec.crt.pem"; ssl_certificate_key "/usr/local/etc/ssl/test/test.ec.key.pem"; location / { } } config check is ok, nginxconfcheck nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful and I see a TLS 1.3 handshake, openssl s_client -connect 10.0.1.20:443 -CAfile /usr/local/etc/ssl/myCA/myCA.chain.crt.pem CONNECTED(00000003) Can't use SSL_get_servername depth=2 O = dev.lan, OU = myCA, L = NewYork, ST = NY, C = US, emailAddress = admin at dev.lan, CN = myCA_ROOT verify return:1 depth=1 C = US, ST = NY, O = dev.lan, OU = myCA, CN = myCA_INT, emailAddress = admin at dev.lan verify return:1 depth=0 C = US, ST = NY, L = NewYork, O = dev.lan, OU = myCA, CN = test.dev.lan, emailAddress = admin at dev.lan verify return:1 --- Certificate chain 0 s:C = US, ST = NY, L = NewYork, O = dev.lan, OU = myCA, CN = test.dev.lan, emailAddress = admin at dev.lan i:C = US, ST = NY, O = dev.lan, OU = myCA, CN = myCA_INT, emailAddress = admin at dev.lan --- Server certificate -----BEGIN CERTIFICATE----- MIIEhjCCBAygAwIBAgICELAwCgYIKoZIzj0EAwIwgbAxCzAJBgNVBAYTAlVTMQsw ... 
VHldKgTNpiGuFA== -----END CERTIFICATE----- subject=C = US, ST = NY, L = NewYork, O = dev.lan, OU = myCA, CN = test.dev.lan, emailAddress = admin at dev.lan issuer=C = US, ST = NY, O = dev.lan, OU = myCA, CN = myCA_INT, emailAddress = admin at dev.lan --- No client certificate CA names sent Peer signing digest: SHA384 Peer signature type: ECDSA Server Temp Key: X25519, 253 bits --- SSL handshake has read 1565 bytes and written 373 bytes Verification: OK --- New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384 Server public key is 384 bit Secure Renegotiation IS NOT supported No ALPN negotiated Early data was not sent Verify return code: 0 (ok) --- --- Post-Handshake New Session Ticket arrived: SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: CA79B0596A2CCF19BBA9A49E086F99E7F811FAC8349888E37531E46B17FE35A9 Session-ID-ctx: Resumption PSK: 9966170E5086490D231260B15CDA6852D0CCDED661D1C075BF0DE3334C89472B158F2524282DD5F1175381B4317D8DC9 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 300 (seconds) TLS session ticket: 0000 - 1e 49 9a 75 97 46 90 9c-8a ec 1b 8d ac 90 5a a6 .I.u.F........Z. ... 00d0 - 49 e4 e0 50 62 3b 45 a5-10 f9 9e 2e 43 09 41 40 I..Pb;E.....C.A@ Start Time: 1563419052 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no Max Early Data: 0 --- read R BLOCK --- Post-Handshake New Session Ticket arrived: SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: 1B65B9377224E89FA226C7DC8103E3A57C13798F9FAA0B909BC36E436EE95DC9 Session-ID-ctx: Resumption PSK: FEDFC913674474BC83DBE17F4290CA744C92E0763B450C6C489724442E2B2C6F14849A6910356B7ADFFEA3D03D2E7931 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 300 (seconds) TLS session ticket: 0000 - 1e 49 9a 75 97 46 90 9c-8a ec 1b 8d ac 90 5a a6 .I.u.F........Z. ... 00d0 - c9 d0 19 a1 00 6d 72 37-f7 f4 39 6b dd 48 4d cf .....mr7..9k.HM. 
Start Time: 1563419052 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no Max Early Data: 0 --- read R BLOCK closed but the cipher used is TLS_AES_256_GCM_SHA384 NOT either of the CHACHA20 options, TLS-CHACHA20-POLY1305-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305 And, if I change nginx to be 'TLSv1.3-only', - ssl_protocols TLSv1.3 TLSv1.2; - ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; + ssl_protocols TLSv1.3; + ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256"; even the webserver config check FAILs, nginxconfcheck TLS13-AES-128-GCM-SHA256") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match) nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed and the server fails to start. So I _see_ two issues, (1) when the webserver config passes, with not-just-TLS1.3 ciphers enabled in the config, I get an SSL connection, using TLS1.3, but NOT the hoped-for CHACHA20 ciphers. (2) when I list ONLY TLS1.3 ciphers, the config check fails, and the server won't start. What's preventing the use of a just TLSv1.3 cipherlist? & specifically the usage of CHACHA20 ciphers in connection? From mdounin at mdounin.ru Fri Jul 19 16:29:41 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jul 2019 19:29:41 +0300 Subject: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ? In-Reply-To: References: Message-ID: <20190719162941.GE1877@mdounin.ru> Hello! On Fri, Jul 19, 2019 at 08:39:15AM -0700, PGNet Dev wrote: > I run nginx > > nginx -v > nginx version: nginx/1.17.1 > > on linux/64.
> > I've installed > > which openssl > /usr/local/openssl/bin/openssl > openssl version > OpenSSL 1.1.1c 28 May 2019 > > nginx is built with/linked to this version > > ldd `which nginx` | grep ssl > libssl.so.1.1 => /usr/local/openssl/lib64/libssl.so.1.1 (0x00007f95bdc09000) > libcrypto.so.1.1 => /usr/local/openssl/lib64/libcrypto.so.1.1 (0x00007f95bd6f9000) > > I'm currently working setting up a local-only server, attempting to get it to use TLSv1.3/CHACHA20 only. > > I've tightened down restrictions in nginx config. > With my attempted restrictions in place, I've found that I'm apparently NOT using TLSv1.3/CHACHA20. > > With this nginx config > > server { > > listen 10.0.1.20:443 ssl http2; > > server_name test.dev.lan; > root /data/webapps/nulldir; > index index.html; > > rewrite_log on; > access_log /var/log/nginx/access.log main; > error_log /var/log/nginx/error.log info; > > ssl_protocols TLSv1.3 TLSv1.2; > ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; TLS 1.3 ciphers cannot be controlled via the traditional SSL_CTX_set_cipher_list() interface - rather, OpenSSL enables all TLS 1.3 ciphers unconditionally. This was done somewhere at OpenSSL 1.1.1-pre4 to prevent people from disabling all TLS 1.3 ciphers by using traditional cipher strings. (Further, TLS 1.3 ciphers are named differently, but it doesn't really matter as they are not controlled by the ssl_ciphers anyway.) Try $ openssl ciphers -v to find out which ciphers will be enabled. Further details can be found here: https://trac.nginx.org/nginx/ticket/1529 [...] > but the cipher used is > > TLS_AES_256_GCM_SHA384 > > NOT either of the CHACHA20 options, > > TLS-CHACHA20-POLY1305-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305 That's expected, as all TLSv1.3 ciphers are enabled, see above. 
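The behaviour described here is easy to verify with any OpenSSL 1.1.1+ build, for instance through Python's `ssl` module, whose `set_ciphers()` wraps the same `SSL_CTX_set_cipher_list()` call (an illustrative sketch):

```python
import ssl

# A server-side TLS context; restrict the "classic" (TLS 1.2 and below)
# cipher list to a single suite.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE-ECDSA-AES256-GCM-SHA384")

names = [c["name"] for c in ctx.get_ciphers()]

# The TLS 1.3 suites are still enabled: SSL_CTX_set_cipher_list() does
# not touch them, OpenSSL keeps them on unconditionally.
print("TLS_AES_256_GCM_SHA384" in names)  # True on OpenSSL 1.1.1+

# A string naming only (draft-named) TLS 1.3 suites matches no classic
# ciphers at all -- the same "no cipher match" error nginx reports.
try:
    ctx.set_ciphers("TLS13-CHACHA20-POLY1305-SHA256")
except ssl.SSLError as err:
    print("no cipher match:", err)
```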
> And, if I change nginx to be 'TLSv1.3-only', > > - ssl_protocols TLSv1.3 TLSv1.2; > - ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; > + ssl_protocols TLSv1.3; > + ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256"; > > even the webserver config check FAILs, > > nginxconfcheck > TLS13-AES-128-GCM-SHA256") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match) > nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed > > and the server fails to start. That's because the cipher string listed contains no valid ciphers. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Fri Jul 19 16:59:40 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jul 2019 19:59:40 +0300 Subject: SSL_write() failed errors In-Reply-To: References: Message-ID: <20190719165940.GF1877@mdounin.ru> Hello! On Thu, Jul 18, 2019 at 10:03:24AM -0700, Palvelin Postmaster via nginx wrote: > we're getting random SSL_write() failed errors on seemingly > legitimate requests. The common denominator seems to be they are > all for static files (images, js, etc.). > > Can anyone help me debug the issue? > > Here's a debug log paste for one incident: > https://pastebin.com/ZsbLuD5N > > Our architecture is: Amazon ALB > Nginx 1.14 > PHP-FPM 7.3 The following debug log: 2019/07/18 19:27:25 [debug] 1840#1840: *2037 SSL_write: -1 2019/07/18 19:27:25 [debug] 1840#1840: *2037 SSL_get_error: 6 2019/07/18 19:27:25 [crit] 1840#1840: *2037 SSL_write() failed (SSL:) while sending response to client... suggests that this is due to error 6, that is, SSL_ERROR_ZERO_RETURN. This looks strange, as we haven't seen this error being returned from SSL_write(), but might be legitimate. In theory this can happen if nginx got a close notify SSL alert while writing a response, and probably has something to do with Amazon ALB before nginx. 
Just in case, could you please provide details about OpenSSL library you are using ("nginx -V" should contain enough details)? -- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Fri Jul 19 17:52:55 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Fri, 19 Jul 2019 10:52:55 -0700 Subject: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ? In-Reply-To: <20190719162941.GE1877@mdounin.ru> References: <20190719162941.GE1877@mdounin.ru> Message-ID: >> And, if I change nginx to be 'TLSv1.3-only', >> >> - ssl_protocols TLSv1.3 TLSv1.2; >> - ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; >> + ssl_protocols TLSv1.3; >> + ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256"; >> >> even the webserver config check FAILs, >> >> nginxconfcheck >> TLS13-AES-128-GCM-SHA256") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match) >> nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed >> >> and the server fails to start. > > That's because the cipher string listed contains no valid ciphers. Sorry, I'm missing something :-/ What's specifically "invalid" about the 3, listed ciphers? 
TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 as stated here https://www.openssl.org/blog/blog/2018/02/08/tlsv1.3/ OpenSSL has implemented support for five TLSv1.3 ciphersuites as follows: TLS13-AES-256-GCM-SHA384 TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-128-GCM-SHA256 TLS13-AES-128-CCM-8-SHA256 TLS13-AES-128-CCM-SHA256 for openssl ciphers -stdname -s -V 'TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-256-GCM-SHA384:ECDHE:!AES128:!SHA1:!SHA256:!SHA384:!COMPLEMENTOFDEFAULT' 0x13,0x02 - TLS_AES_256_GCM_SHA384 - TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD 0x13,0x03 - TLS_CHACHA20_POLY1305_SHA256 - TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD 0x13,0x01 - TLS_AES_128_GCM_SHA256 - TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD 0xC0,0x2C - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 - ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD 0xC0,0x30 - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD 0xCC,0xA9 - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD 0xCC,0xA8 - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 - ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD using the alias TLSv1.3 ciphersuite names is also fine, openssl ciphers -stdname -s -V 'TLS-CHACHA20-POLY1305-SHA256:TLS-AES-128-GCM-SHA256:TLS-AES-256-GCM-SHA384:ECDHE:!AES128:!SHA1:!SHA256:!SHA384:!COMPLEMENTOFDEFAULT' 0x13,0x02 - TLS_AES_256_GCM_SHA384 - TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD 0x13,0x03 - TLS_CHACHA20_POLY1305_SHA256 - TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD 0x13,0x01 - TLS_AES_128_GCM_SHA256 - TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD 
0xC0,0x2C - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 - ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD 0xC0,0x30 - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD 0xCC,0xA9 - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD 0xCC,0xA8 - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 - ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD if in nginx config, ssl_protocols TLSv1.3 TLSv1.2; ssl_ciphers "TTLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-256-GCM-SHA384:ECDHE:!AES128:!SHA1:!SHA256:!SHA384:!COMPLEMENTOFDEFAULT"; ssllabs.com/ssltest reports Configuration Protocols TLS 1.3 Yes TLS 1.2 Yes TLS 1.1 No TLS 1.0 No SSL 3 No SSL 2 No For TLS 1.3 tests, we only support RFC 8446. Cipher Suites # TLS 1.3 (suites in server-preferred order) TLS_AES_256_GCM_SHA384 (0x1302) ECDH x25519 (eq. 3072 bits RSA) FS 256 TLS_CHACHA20_POLY1305_SHA256 (0x1303) ECDH x25519 (eq. 3072 bits RSA) FS 256 TLS_AES_128_GCM_SHA256 (0x1301) ECDH x25519 (eq. 3072 bits RSA) FS 128 # TLS 1.2 (suites in server-preferred order) TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02c) ECDH x25519 (eq. 3072 bits RSA) FS 256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH x25519 (eq. 3072 bits RSA) FS 256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca9) ECDH x25519 (eq. 3072 bits RSA) FS 256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) ECDH x25519 (eq. 
3072 bits RSA) FS 256 on connect to the site, I see connection handshake, Protocol TLS 1.3 Cipher Suite TLS_AES_256_GCM_SHA384 Key Exchange Group x25519 Signature Scheme ECDSA-P384-SHA384 switching nginx config ssl_protocols TLSv1.3; BREAKS ssllabs testing/reporting (their issue, apparently), but the site itself still works, with, again, handshake Protocol TLS 1.3 Cipher Suite TLS_AES_256_GCM_SHA384 Key Exchange Group x25519 Signature Scheme ECDSA-P384-SHA384 OTOH, if I enable ONLY TLSv1.3 ciphersuites, ssl_protocols TLSv1.3; ssl_ciphers "TTLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-256-GCM-SHA384"; nginxconfcheck indeed FAILs nginxconfcheck nginx: [emerg] SSL_CTX_set_cipher_list("TTLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-256-GCM-SHA384") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match) nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed IIUC due to openssl itself apparently being 'unhappy' with that string, openssl ciphers -stdname -s -V 'TLS-CHACHA20-POLY1305-SHA256:TLS-AES-128-GCM-SHA256:TLS-AES-256-GCM-SHA384' Error in cipher list 139817124520384:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549: WHY it's unhappy with that string is an openssl issue; I've asked 'over there' abt that ... From mdounin at mdounin.ru Fri Jul 19 18:02:37 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jul 2019 21:02:37 +0300 Subject: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ? In-Reply-To: References: <20190719162941.GE1877@mdounin.ru> Message-ID: <20190719180237.GG1877@mdounin.ru> Hello! 
On Fri, Jul 19, 2019 at 10:52:55AM -0700, PGNet Dev wrote: > >> And, if I change nginx to be 'TLSv1.3-only', > >> > >> - ssl_protocols TLSv1.3 TLSv1.2; > >> - ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; > >> + ssl_protocols TLSv1.3; > >> + ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256"; > >> > >> even the webserver config check FAILs, > >> > >> nginxconfcheck > >> TLS13-AES-128-GCM-SHA256") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match) > >> nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed > >> > >> and the server fails to start. > > > > That's because the cipher string listed contains no valid ciphers. > > > Sorry, I'm missing something :-/ > > What's specifically "invalid" about the 3, listed ciphers? > > TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 There are no such ciphers in the OpenSSL. Try it yourself: $ openssl ciphers TLS13-CHACHA20-POLY1305-SHA256 Error in cipher list 0:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549: [...] -- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Fri Jul 19 18:24:36 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Fri, 19 Jul 2019 11:24:36 -0700 Subject: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ? In-Reply-To: <20190719180237.GG1877@mdounin.ru> References: <20190719162941.GE1877@mdounin.ru> <20190719180237.GG1877@mdounin.ru> Message-ID: <6d46c9f5-f12a-def1-5a39-5102a0581146@gmail.com> On 7/19/19 11:02 AM, Maxim Dounin wrote: > Hello! 
> > On Fri, Jul 19, 2019 at 10:52:55AM -0700, PGNet Dev wrote: > >>>> And, if I change nginx to be 'TLSv1.3-only', >>>> >>>> - ssl_protocols TLSv1.3 TLSv1.2; >>>> - ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; >>>> + ssl_protocols TLSv1.3; >>>> + ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256"; >>>> >>>> even the webserver config check FAILs, >>>> >>>> nginxconfcheck >>>> TLS13-AES-128-GCM-SHA256") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match) >>>> nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed >>>> >>>> and the server fails to start. >>> >>> That's because the cipher string listed contains no valid ciphers. >> >> >> Sorry, I'm missing something :-/ >> >> What's specifically "invalid" about the 3, listed ciphers? >> >> TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 > > There are no such ciphers in the OpenSSL. > Try it yourself: > > $ openssl ciphers TLS13-CHACHA20-POLY1305-SHA256 > Error in cipher list > 0:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549: > > [...] > Then what are these lists? https://wiki.openssl.org/index.php/TLS1.3 Ciphersuites OpenSSL has implemented support for five TLSv1.3 ciphersuites as follows: TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256 TLS_AES_128_GCM_SHA256 TLS_AES_128_CCM_8_SHA256 TLS_AES_128_CCM_SHA256 https://www.openssl.org/blog/blog/2017/05/04/tlsv1.3/ Ciphersuites OpenSSL has implemented support for five TLSv1.3 ciphersuites as follows: TLS13-AES-256-GCM-SHA384 TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-128-GCM-SHA256 TLS13-AES-128-CCM-8-SHA256 TLS13-AES-128-CCM-SHA256 "$ openssl ciphers -s -v ECDHE Will list all the ciphersuites for TLSv1.2 and below that support ECDHE and additionally all of the default TLSv1.3 ciphersuites." 
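As a side check (not from the thread): the naming discrepancy between the two quoted lists can be confirmed from Python's ssl bindings, assuming a CPython linked against OpenSSL 1.1.1 or newer with TLS 1.3 enabled — the final RFC 8446 underscore names exist, the pre-release "TLS13-..." spellings from the 2017 blog post do not:

```python
import ssl

# A default server context enables the TLS 1.3 suites automatically;
# get_ciphers() reports them with the final underscore names.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
names = {c["name"] for c in ctx.get_ciphers()}

print(sorted(n for n in names if n.startswith("TLS_")))
# The old blog-post spelling is not a recognized cipher name:
print("TLS13-AES-256-GCM-SHA384" in names)
```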
openssl ciphers -s -v ECDHE >> TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD >> TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD >> TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD ... openssl ciphers -tls1_3 >> TLS_AES_256_GCM_SHA384: >> TLS_CHACHA20_POLY1305_SHA256: >> TLS_AES_128_GCM_SHA256: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA openssl ciphers TLS13-CHACHA20-POLY1305-SHA256 Error in cipher list 140418731745728:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549: openssl ciphers TLS-CHACHA20-POLY1305-SHA256 Error in cipher list 140126717628864:error:1410D0B9:SSL 
routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549: openssl ciphers TLS13_CHACHA20_POLY1305_SHA256 Error in cipher list 139978279444928:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549: openssl ciphers TLS_CHACHA20_POLY1305_SHA256 Error in cipher list 139921842241984:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549: If your argument for TLSv1.3 usage in nginx is as-correctly-used in openssl, that's fine. Can you provide a correct nginx example of TLS13-only usage of CHACHA20-POLY1305-SHA256 cipher? From postmaster at palvelin.fi Fri Jul 19 18:35:44 2019 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Fri, 19 Jul 2019 11:35:44 -0700 Subject: SSL_write() failed errors In-Reply-To: <20190719165940.GF1877@mdounin.ru> References: <20190719165940.GF1877@mdounin.ru> Message-ID: > On 19 Jul 2019, at 9.59, Maxim Dounin wrote: > > Hello! > > On Thu, Jul 18, 2019 at 10:03:24AM -0700, Palvelin Postmaster via nginx wrote: > >> we're getting random SSL_write() failed errors on seemingly >> legitimate requests. The common denominator seems to be they are >> all for static files (images, js, etc.). >> >> Can anyone help me debug the issue? >> >> Here's a debug log paste for one incident: >> https://pastebin.com/ZsbLuD5N >> >> Our architecture is: Amazon ALB > Nginx 1.14 > PHP-FPM 7.3 > > The following debug log: > > 2019/07/18 19:27:25 [debug] 1840#1840: *2037 SSL_write: -1 > 2019/07/18 19:27:25 [debug] 1840#1840: *2037 SSL_get_error: 6 > 2019/07/18 19:27:25 [crit] 1840#1840: *2037 SSL_write() failed (SSL:) while sending response to client... > > suggests that this is due to error 6, that is, > SSL_ERROR_ZERO_RETURN. This looks strange, as we haven't seen > this error being returned from SSL_write(), but might be > legitimate.
In theory this can happen if nginx got a close notify > SSL alert while writing a response, and probably have something to > do with Amazon ALB before nginx. > > Just in case, could you please provide details about OpenSSL > library you are using ("nginx -V" should contain enough details)? Certainly: nginx version: nginx/1.14.0 (Ubuntu) built with OpenSSL 1.1.0g 2 Nov 2017 (running with OpenSSL 1.1.1c 28 May 2019) TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-FIJPpj/nginx-1.14.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_flv_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_secure_link_module --with-http_sub_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-headers-more-filter 
--add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-auth-pam --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-cache-purge --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-dav-ext --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-ndk --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-echo --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-fancyindex --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/nchan --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-lua --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/rtmp --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-uploadprogress --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-upstream-fair --add-dynamic-module=/build/nginx-FIJPpj/nginx-1.14.0/debian/modules/http-subs-filter From mdounin at mdounin.ru Fri Jul 19 18:41:25 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Jul 2019 21:41:25 +0300 Subject: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ? In-Reply-To: <6d46c9f5-f12a-def1-5a39-5102a0581146@gmail.com> References: <20190719162941.GE1877@mdounin.ru> <20190719180237.GG1877@mdounin.ru> <6d46c9f5-f12a-def1-5a39-5102a0581146@gmail.com> Message-ID: <20190719184125.GH1877@mdounin.ru> Hello! On Fri, Jul 19, 2019 at 11:24:36AM -0700, PGNet Dev wrote: > On 7/19/19 11:02 AM, Maxim Dounin wrote: > > Hello! 
> > > > On Fri, Jul 19, 2019 at 10:52:55AM -0700, PGNet Dev wrote: > > > >>>> And, if I change nginx to be 'TLSv1.3-only', > >>>> > >>>> - ssl_protocols TLSv1.3 TLSv1.2; > >>>> - ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; > >>>> + ssl_protocols TLSv1.3; > >>>> + ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256"; > >>>> > >>>> even the webserver config check FAILs, > >>>> > >>>> nginxconfcheck > >>>> TLS13-AES-128-GCM-SHA256") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match) > >>>> nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed > >>>> > >>>> and the server fails to start. > >>> > >>> That's because the cipher string listed contains no valid ciphers. > >> > >> > >> Sorry, I'm missing something :-/ > >> > >> What's specifically "invalid" about the 3, listed ciphers? > >> > >> TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 > > > > There are no such ciphers in the OpenSSL. > > Try it yourself: > > > > $ openssl ciphers TLS13-CHACHA20-POLY1305-SHA256 > > Error in cipher list > > 0:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549: > > > > [...] > > > > Then what are these lists? You may want to re-read my initial answer and the ticket it links to. [...] -- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Fri Jul 19 18:54:59 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Fri, 19 Jul 2019 11:54:59 -0700 Subject: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ? In-Reply-To: <20190719184125.GH1877@mdounin.ru> References: <20190719162941.GE1877@mdounin.ru> <20190719180237.GG1877@mdounin.ru> <6d46c9f5-f12a-def1-5a39-5102a0581146@gmail.com> <20190719184125.GH1877@mdounin.ru> Message-ID: > You may want to re-read my initial answer and the ticket it links to. 
If that were _clear_, neither I nor others would STILL be spending time/effort trying to understand & clarify this. Nevermind. From francis at daoine.org Fri Jul 19 19:43:20 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 19 Jul 2019 20:43:20 +0100 Subject: How to configure Nginx LB IP-Transparency for custom UDP application In-Reply-To: References: <20190709151108.GB61550@Romans-MacBook-Air.local> Message-ID: <20190719194320.pepwtr6oxnoqzmiu@daoine.org> On Fri, Jul 12, 2019 at 11:44:22PM +0530, Jeya Murugan wrote: > On Tue, Jul 9, 2019 at 8:41 PM Roman Arutyunyan wrote: Hi there, > > > I am using *NGINX 1.13.5 as a Load Balancer for one of my > > > CUSTOM-APPLICATION *which will listen on* UDP port 2231,67 and 68.* > > > > > > I am trying for Load Balancing with IP-Transparency. > > > When I using the proxy_protocol method the packets received from a remote > > > client is modified and send to upstream by NGINX LB not sure why/how the > > > packet is modified and also the remote client IP is NOT as source IP. proxy_protocol is not IP-Transparency. The source IP in the packet sent from nginx is nginx's. If you use nginx as the proxy_protocol client, then your "backend" service must run the proxy_protocol server -- which is basically "modify the backend code to read a few extra bytes at the start of each connection, before it does its own normal thing". (For udp, "each connection" might be "each packet".) You probably do not want to do that. > > > When I using proxy_bind, the packet is forwarded to configured upstream > > but > > > the source IP is not updated with Remote Client IP. That should work -- in as much as "nginx asks the operating system to change the source address of the outgoing packet". If your operating system does not co-operate, there's not a lot nginx can do. > > > *Configuration:* Note that the web page that you reference does suggest that "proxy_responses 1;" is needed. I don't know if that will influence what you are seeing, though.
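(For reference, the pattern from the nginx.com transparent-proxy article that this thread keeps circling around, with the proxy_responses note folded in, looks roughly like the sketch below. It uses the poster's addresses, is untested, and still requires the article's ip-rule/iptables setup and a sufficiently privileged master process.)

```nginx
stream {
    server {
        listen 10.43.18.107:2231 udp;
        # UDP has no connection close; tell nginx how many response
        # datagrams to expect per client datagram.
        proxy_responses 1;
        # Spoof the client's own address *and* port as the source of the
        # upstream packet; "transparent" needs CAP_NET_ADMIN plus the
        # TPROXY-style routing rules from the article.
        proxy_bind $remote_addr:$remote_port transparent;
        proxy_pass 10.43.18.172:2231;
    }
}
```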
What operating system are you running on? "uname -a" should say; and will give the kernel version involved. That might indicate a problem. Although I guess that if your nginx was reporting "transparent proxying is not supported on this platform", you would have seen it. Note also that you seem to be testing with the client, nginx, and the backend server all on the same subnet. That might cause some confusion when it comes to the response packet; I don't know if it would interfere with the nginx operating system changing the packet source IP address, or with the iptables mangling. And, you use: > > > proxy_bind $remote_addr:2231 transparent; which may well work, but is not exactly what the document you refer to uses. In principle, there is no reason why the udp traffic to port 2231 must come from port 2231; if you use $remote_port like the document shows, it removes one more place where your config differs from theirs. So, I don't have an answer for you; but maybe the above points at some things you can check or change, to see if it improves for you. Good luck with it, f -- Francis Daly francis at daoine.org From al-nginx at none.at Fri Jul 19 20:49:48 2019 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 19 Jul 2019 22:49:48 +0200 Subject: How to configure Nginx LB IP-Transparency for custom UDP application In-Reply-To: References: <20190709151108.GB61550@Romans-MacBook-Air.local> Message-ID: <1223f4e1-d7cf-cd2d-f54f-61b5a7c0c023@none.at> On 16.07.2019 at 13:29, Jeya Murugan wrote: > @all : Can someone help /point-out what i have missed in proxy_protocol > here? The PROXY protocol is only designed for TCP, not UDP. > > I am using *NGINX 1.13.5 as a Load Balancer for one of my > > CUSTOM-APPLICATION *which will listen on* UDP port 2231,67 and 68.* > > > > I am trying for Load Balancing with IP-Transparency.
> > When I using the proxy_protocol method the packets received from a remote > > client is modified and send to upstream by NGINX LB not sure why/how the > > packet is modified and also the remote client IP is NOT as source IP. > > The proxy_protocol directive adds a PROXY protocol header to the datagram, > that's why it's modified. The directive does not change the source address. > Instead, the remote client address is passed in the PROXY protocol header. > > : Okay. Do we have any options to send remote client IP as source > address? Due to additional proxy header the packet is dropped by the > application running in the upstream. How can the proxy header be > stripped in the upstream end? > > Do we need to do configuration/rules on the upstream end? > > > When I using proxy_bind, the packet is forwarded to configured > upstream but > > the source IP is not updated with Remote Client IP. > > What is the reason for the port next to $remote_addr in proxy_bind? > Also make sure nginx master runs with sufficient privileges. > > : Yes, application running with root privilege as specified in the > conf file > > Also, the proxy_bind syntax is referred in the below link. > > https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/#proxy_bind > > proxy_bind $remote_addr:$remote_port transparent; > > > *Basically, in both methods, the remote client address was not used as a > > source IP. I hope I missed some minor parts. Can someone help to resolve > > this issue?* > > The following are the detailed configuration for your reference. > > *Method 1 :- proxy_protocol* > > *Configuration:* > > user *root;* > > worker_processes 1; > > error_log /var/log/nginx/error.log debug; > > pid /var/run/nginx.pid; > > events { > > worker_connections 1024; > > } > > stream { > > server { > > listen 10.43.18.107:2231 udp; > > proxy_protocol on; > > proxy_pass 10.43.18.172:2231 ; > > } > > server { > > listen 10.43.18.107:67 udp; > > proxy_protocol on; > > proxy_pass 10.43.18.172:67 ; > > } > > server { > > listen 10.43.18.107:68 udp; > > proxy_protocol on; > > proxy_pass 10.43.18.172:68 ; > > } > > } > > *TCPDUMP O/P :* > > *From LB:* > > 10:05:07.284259 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 > > 10:05:07.284555 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 > > *From upstream[Custom application]:* > > 10:05:07.284442 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91 > > *Method 2:- [ proxy_bind ]* > > *Configuration:* > > user root; > > worker_processes 1; > > error_log /var/log/nginx/error.log debug; > > pid /var/run/nginx.pid; > > events { > > worker_connections 1024; > > } > > stream { > > server { > > listen 10.43.18.107:2231 udp; > > proxy_bind $remote_addr:2231 transparent; > > proxy_pass 10.43.18.172:2231 ; > > } > > server { > > listen 10.43.18.107:67 udp; > > proxy_bind $remote_addr:67 transparent; > > proxy_pass 10.43.18.172:67 ; > > } > > server { > > listen 10.43.18.107:68 udp; > > proxy_bind $remote_addr:68 transparent; > > proxy_pass 10.43.18.172:68 ; > > } > > } > > *Also, added the below rules :* > > ip rule add fwmark 1 lookup 100 > > ip route add local 0.0.0.0/0 dev lo table 100 > > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 2231 -j > > MARK --set-xmark 0x1/0xffffffff > > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 67 -j MARK > > --set-xmark 0x1/0xffffffff > > iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 68 -j MARK > > --set-xmark 0x1/0xffffffff > > However, still, the packet is sent from NGINX LB with its own IP, not with > > the remote client IP address. > > *TCPDUMP O/P from LB:* > > 11:49:51.999829 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43 > > 11:49:52.000161 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 > > *TCPDUMP O/P from Upstream:* > > 11:49:52.001155 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43 > > *Note:* I have followed the below link. > > https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/ > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > -- > Roman Arutyunyan From nginx-forum at forum.nginx.org Fri Jul 19 21:16:34 2019 From: nginx-forum at forum.nginx.org (manuelcorona) Date: Fri, 19 Jul 2019 17:16:34 -0400 Subject: Avoid creating a temp file on Nginx host Message-ID: <492687a5f2318f7cc7ee8c8b532e1682.NginxMailingListEnglish@forum.nginx.org> I'm using Nginx in proxy pass mode to serve an application.
We have had some issues where the host running Nginx doesn't have enough space to host some uploaded files. Is there a way to stream these files to the backend server without creating a temporary file on the Nginx host? Based on the post here: https://serverfault.com/questions/768693/nginx-how-to-completely-disable-request-body-buffering?newreg=990eae88df904d448be4555c0589b7ed I have tried setting these config variables: proxy_http_version 1.1; proxy_request_buffering off; client_max_body_size 0; And all the different combinations of those config variables, but I always see the temp file being written on the Nginx host. Is there a way to pass the file without buffering? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284929,284929#msg-284929 From francis at daoine.org Fri Jul 19 22:47:15 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 19 Jul 2019 23:47:15 +0100 Subject: Nginx cache-control headers issue In-Reply-To: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org> References: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190719224715.orupm3jbijnkjh5g@daoine.org> On Thu, Jul 18, 2019 at 06:44:13PM -0400, andregr-jp wrote: Hi there, > I have an nginx proxy which suddenly adding 2 cache-control headers and the > last modified time is always the current time: I suspect that whatever is being reverse-proxied changed recently to send these headers. > #Add Headers > add_header Cache-Control 'public max-age=31536000 > s-maxage=31536000'; add_header is "please send this in the nginx response, as well as everything else". > proxy_hide_header x-amz-id-2; proxy_hide_header is "please do not send this, from upstream to the client".
> proxy_ignore_headers Cache-Control; proxy_ignore_headers is "don't use these special headers". > I don't know why is adding this control header: Cache-Control: private, > max-age=3600, max-stale=0 You probably want to add "proxy_hide_header Cache-Control"; or to change back whatever changed on your upstream which made it claim that things are public. You can look at the response from upstream (e.g., $upstream_http_cache_control) to confirm whether the header is set there. f -- Francis Daly francis at daoine.org From andre8525 at hotmail.com Sat Jul 20 00:33:24 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Sat, 20 Jul 2019 00:33:24 +0000 Subject: Nginx cache-control headers issue In-Reply-To: <20190719224715.orupm3jbijnkjh5g@daoine.org> References: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org>, <20190719224715.orupm3jbijnkjh5g@daoine.org> Message-ID: Hi Francis, Thanks for the response, I checked multiple scenarios and when I removed the token I got the correct header. Looks like when the token is active, I am getting wrong headers. Also "upstream" you mean the Origin for nginx? which is in my case is S3 For example, this is a token-based request: Request URL: https://example.com/hls/nickelback/Nickelback-Lullaby_960_540_9000000011.ts?token=st=1563581722~exp=1563668122~acl=/hls/nickelback/*~hmac=88ebce1fa4cca0a30b5cb5395bf3c04cde1018cbbfaa1c23506ebbf70e920e3a Response header: 1. Accept-Ranges: bytes 2. Access-Control-Allow-Credentials: true 3. Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token 4. Access-Control-Allow-Methods: OPTIONS, GET 5. Access-Control-Allow-Origin: * 6. Cache-Control: public, max-age=8640000, max-stale=0, public max-age=31536000 7. Connection: keep-alive 8. Content-Length: 2535932 9. Content-Type: video/MP2T 10. Date: Sat, 20 Jul 2019 00:15:58 GMT 11. 
ETag: "9660239489c3a42342fc2fff979f3658" 12. Expires: Mon, 28 Oct 2019 00:15:58 GMT 13. Last-Modified: Sun, 19 Nov 2000 08:52:00 GMT 14. Pragma: public 15. Server: nginx/1.17.0 16. X-Cache-Status: MISS 17. X-Proxy-Cache: MISS and this is a request without token and all headers are correct: Request URL: https://example.com/hls/nickelback/Nickelback-Lullaby_960_540_9000000000.ts Response header: 1. Accept-Ranges: bytes 2. Access-Control-Allow-Credentials: true 3. Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token 4. Access-Control-Allow-Methods: OPTIONS, GET 5. Access-Control-Allow-Origin: * 6. Cache-Control: public max-age=31536000 7. Connection: keep-alive 8. Content-Length: 3275712 9. Content-Type: video/MP2T 10. Date: Sat, 20 Jul 2019 00:24:48 GMT 11. ETag: "cb86d50c9544c5382d854420c807aa86" 12. Last-Modified: Fri, 19 Jul 2019 20:15:31 GMT 13. Pragma: public 14. Server: nginx/1.17.0 15. X-Cache-Status: HIT 16. X-Proxy-Cache: HIT Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Friday, July 19, 2019 10:47 PM To: nginx at nginx.org Subject: Re: Nginx cache-control headers issue On Thu, Jul 18, 2019 at 06:44:13PM -0400, andregr-jp wrote: Hi there, > I have an nginx proxy which suddenly adding 2 cache-control headers and the > last modified time is always the current time: I suspect that whatever is being reverse-proxied changed recently to send these headers. > #Add Headers > add_header Cache-Control 'public max-age=31536000 > s-maxage=31536000'; add_header is "please send this in the nginx response, as well as everything else". > proxy_hide_header x-amz-id-2; proxy_hide_header is "please do not send this, from upstream to the client". > proxy_ignore_headers Cache-Control; proxy_ignore_headers is "don't use these special headers". 
> I don't know why is adding this control header: Cache-Control: private, > max-age=3600, max-stale=0 You probably want to add "proxy_hide_header Cache-Control"; or to change back whatever changed on your upstream which made it claim that things are public. You can look at the response from upstream (e.g., $upstream_http_cache_control) to confirm whether the header is set there. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From andre8525 at hotmail.com Sat Jul 20 01:47:14 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Sat, 20 Jul 2019 01:47:14 +0000 Subject: Nginx cache-control headers issue In-Reply-To: References: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org>, <20190719224715.orupm3jbijnkjh5g@daoine.org>, Message-ID: I tried with upstream cache control and this is the results: Request URL: https://example.com/hls/nickelback/Nickelback-Lullaby_1280_720_13000000011.ts?token=st=1563586913~exp=1563673313~acl=/hls/nickelback/*~hmac=bad8f13314c29ec41312b6f10b9106a2f1f024fdfbfce090d9a08bd0a635928f 1. Accept-Ranges: bytes 2. Access-Control-Allow-Credentials: true 3. Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token 4. Access-Control-Allow-Methods: OPTIONS, GET 5. Access-Control-Allow-Origin: * 6. Cache-Control: public, max-age=8640000, max-stale=0, public max-age=31536000 7. Connection: keep-alive 8. Content-Length: 4399200 9. Content-Type: video/MP2T 10. Date: Sat, 20 Jul 2019 01:42:17 GMT 11. ETag: "606f6744f7a72e1ff48a2748d673ef96" 12. Expires: Mon, 28 Oct 2019 01:42:17 GMT 13. Last-Modified: Sun, 19 Nov 2000 08:52:00 GMT 14. Pragma: public 15. Server: nginx/1.17.0 16. 
X-Cache-Status: MISS 17. X-Cache-Upstream-Connect-Time: 0.000 18. X-Cache-Upstream-Header-Time: 0.143 19. X-Cache-Upstream-Response-Time: - 20. X-Proxy-Cache: MISS 21. X-Upstream-Http-Cache-Control: no-cache I don't know why the upstream cache-control return no-cache (which is S3) but this is happening only with tokens. Also when I use the following in the config, I am getting MISS for all the requests proxy_hide_header Cache-Control; proxy_ignore_headers Cache-Control; Thanks Andrew ________________________________ From: nginx on behalf of Andrew Andonopoulos Sent: Saturday, July 20, 2019 12:33 AM To: nginx at nginx.org Subject: Re: Nginx cache-control headers issue Hi Francis, Thanks for the response, I checked multiple scenarios and when I removed the token I got the correct header. Looks like when the token is active, I am getting wrong headers. Also "upstream" you mean the Origin for nginx? which is in my case is S3 For example, this is a token-based request: Request URL: https://example.com/hls/nickelback/Nickelback-Lullaby_960_540_9000000011.ts?token=st=1563581722~exp=1563668122~acl=/hls/nickelback/*~hmac=88ebce1fa4cca0a30b5cb5395bf3c04cde1018cbbfaa1c23506ebbf70e920e3a Response header: 1. Accept-Ranges: bytes 2. Access-Control-Allow-Credentials: true 3. Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token 4. Access-Control-Allow-Methods: OPTIONS, GET 5. Access-Control-Allow-Origin: * 6. Cache-Control: public, max-age=8640000, max-stale=0, public max-age=31536000 7. Connection: keep-alive 8. Content-Length: 2535932 9. Content-Type: video/MP2T 10. Date: Sat, 20 Jul 2019 00:15:58 GMT 11. ETag: "9660239489c3a42342fc2fff979f3658" 12. Expires: Mon, 28 Oct 2019 00:15:58 GMT 13. Last-Modified: Sun, 19 Nov 2000 08:52:00 GMT 14. Pragma: public 15. Server: nginx/1.17.0 16. X-Cache-Status: MISS 17. 
X-Proxy-Cache: MISS and this is a request without token and all headers are correct: Request URL: https://example.com/hls/nickelback/Nickelback-Lullaby_960_540_9000000000.ts Response header: 1. Accept-Ranges: bytes 2. Access-Control-Allow-Credentials: true 3. Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token 4. Access-Control-Allow-Methods: OPTIONS, GET 5. Access-Control-Allow-Origin: * 6. Cache-Control: public max-age=31536000 7. Connection: keep-alive 8. Content-Length: 3275712 9. Content-Type: video/MP2T 10. Date: Sat, 20 Jul 2019 00:24:48 GMT 11. ETag: "cb86d50c9544c5382d854420c807aa86" 12. Last-Modified: Fri, 19 Jul 2019 20:15:31 GMT 13. Pragma: public 14. Server: nginx/1.17.0 15. X-Cache-Status: HIT 16. X-Proxy-Cache: HIT Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Friday, July 19, 2019 10:47 PM To: nginx at nginx.org Subject: Re: Nginx cache-control headers issue On Thu, Jul 18, 2019 at 06:44:13PM -0400, andregr-jp wrote: Hi there, > I have an nginx proxy which suddenly adding 2 cache-control headers and the > last modified time is always the current time: I suspect that whatever is being reverse-proxied changed recently to send these headers. > #Add Headers > add_header Cache-Control 'public max-age=31536000 > s-maxage=31536000'; add_header is "please send this in the nginx response, as well as everything else". > proxy_hide_header x-amz-id-2; proxy_hide_header is "please do not send this, from upstream to the client". > proxy_ignore_headers Cache-Control; proxy_ignore_headers is "don't use these special headers". > I don't know why is adding this control header: Cache-Control: private, > max-age=3600, max-stale=0 You probably want to add "proxy_hide_header Cache-Control"; or to change back whatever changed on your upstream which made it claim that things are public. 
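For illustration, a minimal sketch of that suggestion — the location and the S3 hostname below are placeholders, not taken from the thread's actual config. The idea is to stop the upstream Cache-Control from reaching either the nginx cache logic or the client, and to send exactly one header chosen by nginx, while still exposing the upstream value for debugging:

```nginx
# Hypothetical location; the real config in this thread is larger.
location /hls/ {
    proxy_pass https://example-bucket.s3.amazonaws.com;

    # Don't let nginx's cache act on the upstream Cache-Control/Expires...
    proxy_ignore_headers Cache-Control Expires;
    # ...and don't forward them to the client either.
    proxy_hide_header    Cache-Control;
    proxy_hide_header    Expires;

    # Send exactly one Cache-Control header of our own choosing.
    add_header Cache-Control "public, max-age=31536000";

    # Expose the upstream value for debugging instead of forwarding it.
    add_header X-Upstream-Cache-Control $upstream_http_cache_control;
}
```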
You can look at the response from upstream (e.g., $upstream_http_cache_control) to confirm whether the header is set there. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Jul 20 07:18:48 2019 From: nginx-forum at forum.nginx.org (bhagavathula) Date: Sat, 20 Jul 2019 03:18:48 -0400 Subject: Integration of gprof for Nginx dynamic module Message-ID: <961e78dc5f9c5dbb619a51e5ee812505.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying to integrate gprof for testing the performance of our nginx dynamic module developed in C. I am trying to include the -pg flag during compilation as follows: ./configure --with-debug --with-compat --add-dynamic-module= --prefix= --with-cc-opt="-I/usr/include/x86_64-linux-gnu/ -I/usr/include/ .. .. -pg" --with-ld-opt=" -pg" I am able to see gmon.out file getting created, but that does not include the profiling information of our dynamic module code. Only the Core nginx methods prof information is displayed. Can someone please help me for this. Thanks, Phanee Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284935,284935#msg-284935 From francis at daoine.org Sat Jul 20 07:38:55 2019 From: francis at daoine.org (Francis Daly) Date: Sat, 20 Jul 2019 08:38:55 +0100 Subject: Nginx cache-control headers issue In-Reply-To: References: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org> <20190719224715.orupm3jbijnkjh5g@daoine.org> Message-ID: <20190720073855.5f425332gqreo6x6@daoine.org> On Sat, Jul 20, 2019 at 12:33:24AM +0000, Andrew Andonopoulos wrote: Hi there, > I checked multiple scenarios and when I removed the token I got the correct header. Looks like when the token is active, I am getting wrong headers. There is lots going on in your config. 
I suggest it may be useful to have a test system, where you can easily remove most of the config, in order to identify which directive leads to the problem being observed. That "tokens make a difference" is useful information. Your config mentions tokens in more than one place. What code handles the tokens? Does it affect the headers that you see are wrong? Is the request made to "upstream" different, when a token is or is not included in the request to nginx? > Also "upstream" you mean the Origin for nginx? which is in my case is S3 Yes, by "upstream" I mean "whatever nginx does proxy_pass to". > For example, this is a token-based request: > > Request URL: > https://example.com/hls/nickelback/Nickelback-Lullaby_960_540_9000000011.ts?token=st=1563581722~exp=1563668122~acl=/hls/nickelback/*~hmac=88ebce1fa4cca0a30b5cb5395bf3c04cde1018cbbfaa1c23506ebbf70e920e3a > > Response header: > Cache-Control: > public, max-age=8640000, max-stale=0, public max-age=31536000 That is not "exactly what you had in your add_header directive". And - it is also not the "private, max-age=3600, max-stale=0" that you reported initially. Is your upstream changing things? Or are you making a different request each time, so you do not know what the response will be? Note that the first max-age=8640000 corresponds to 100 days. And your config has "secure_token_expires_time 100d;" which looks like it might be a candidate for where it comes from. And your config has "secure_token_query_token_expires_time 1h;", which might correspond to your original "max-age=3600". > and this is a request without token and all headers are correct: > > Request URL: > https://example.com/hls/nickelback/Nickelback-Lullaby_960_540_9000000000.ts > Cache-Control: > public max-age=31536000 That is also not "exactly what you had in your add_header directive". So I'd call it "not correct". I suggest - for testing purposes, remove as many lines of nginx config as you can.
For example -- most of the add_header lines are not needed when testing with "curl", so get rid of them to make the response smaller and easier to analyse. But also -- the configuration that you have for the third-party modules that you use appears to be the source of the response headers that you don't expect. So it probably is not "upstream changed something". Cheers, f -- Francis Daly francis at daoine.org From andre8525 at hotmail.com Sat Jul 20 10:10:39 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Sat, 20 Jul 2019 10:10:39 +0000 Subject: Nginx cache-control headers issue In-Reply-To: <20190720073855.5f425332gqreo6x6@daoine.org> References: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org> <20190719224715.orupm3jbijnkjh5g@daoine.org> , <20190720073855.5f425332gqreo6x6@daoine.org> Message-ID: Hi Francis, Thank you for the suggestion, I will start removing the config and try to find which one is the source of the problem. Also, I want to ask you, I saw that the last-modified header with token is always: Last-Modified: Sun, 19 Nov 2000 08:52:00 GMT, but there isn't line in the config forcing this date/time. Can you suggest which code forcing this modified time? Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Saturday, July 20, 2019 7:38 AM To: nginx at nginx.org Subject: Re: Nginx cache-control headers issue On Sat, Jul 20, 2019 at 12:33:24AM +0000, Andrew Andonopoulos wrote: Hi there, > I checked multiple scenarios and when I removed the token I got the correct header. Looks like when the token is active, I am getting wrong headers. There is lots going on in your config. I suggest it may be useful to have a test system, where you can easily remove most of the config, in order to identify which directive leads to the problem being observed. That "tokens make a difference" is useful information. You config mentions tokens in more than one place. What code handles the tokens? 
Does it affect the headers that you see are wrong? Is the request made to "upstream" different, when a token is or is not included in the request to nginx? > Also "upstream" you mean the Origin for nginx? which is in my case is S3 Yes, by "upstream" I mean "whatever nginx does proxy_pass to". > For example, this is a token-based request: > > Request URL: > https://example.com/hls/nickelback/Nickelback-Lullaby_960_540_9000000011.ts?token=st=1563581722~exp=1563668122~acl=/hls/nickelback/*~hmac=88ebce1fa4cca0a30b5cb5395bf3c04cde1018cbbfaa1c23506ebbf70e920e3a > > Response header: > Cache-Control: > public, max-age=8640000, max-stale=0, public max-age=31536000 That is not "exactly what you had in your add_header directive". And - it is also not the "private, max-age=3600, max-stale=0" that you reported initially. Is your upstream changing things? Or are you making different request each time, so you do not know what the response will be? Note that the first max-age=8640000 corresponds to 100 days. And your config has "secure_token_expires_time 100d;" which looks like it might be a candidate for where it comes from. And your config has "secure_token_query_token_expires_time 1h;", which might correspond to your original "max-age=3600". > and this is a request without token and all headers are correct: > > Request URL: > https://example.com/hls/nickelback/Nickelback-Lullaby_960_540_9000000000.ts > Cache-Control: > public max-age=31536000 That is also not "exactly what you had in your add_header directive". So I'd call it "not correct". I suggest - for testing purposes, remove as many lines of nginx config as you can. For example -- most of the add_header lines are not needed when testing with "curl", so get rid of them to make the response smaller and easier to analyse. But also -- the configuration that you have for the third-party modules that you use appears to be the source of the response headers that you don't expect. 
So it probably is not "upstream changed something". Cheers, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcooper at coopfire.com Sat Jul 20 11:22:54 2019 From: mcooper at coopfire.com (Michael Cooper) Date: Sat, 20 Jul 2019 07:22:54 -0400 Subject: Reverse Proxy Message-ID: Hello Guys, First time poster to nginx list. I have successfully created an nginx proxy but i am only using http at the moment: The following works perfectly # Coopfire.com Website # server { server_name www.coopfire.com; location / { # www.coopfire.com reverse proxy follow proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://xxx.xxx.xxx.xxx:80; } } am trying to add https: to this because I have a need for it for my blog app it is as follows: server { server_name blog.coopfire.com; location / { # coopfire.com reverse proxy follow proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://xxx.xxx.xxx.xxx:2368; *<- This works fine* # proxy_pass https://xxx.xxx.xxx.xxx:444; *<- This does not work in browser with ip it does* } } So I have seen a few different configurations where the ssl cert has a path that appears to be local to the proxy server, Does this mean I put the certs on the proxy instead of the backend server? 
server { listen 80; return 301 https://$host$request_uri; } server { listen 443; server_name jenkins.domain.com; ssl_certificate /etc/nginx/cert.crt; ssl_certificate_key /etc/nginx/cert.key; ssl on; ssl_session_cache builtin:1000 shared:SSL:10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; ssl_prefer_server_ciphers on; access_log /var/log/nginx/jenkins.access.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Fix the "It appears that your reverse proxy set up is broken" error. proxy_pass http://localhost:8080; proxy_read_timeout 90; proxy_redirect http://localhost:8080 https://jenkins.domain.com; } } Also I see on the top it is redirecting all http requests to https. Do certs need to be added to all the sites? Thanks, -- Michael A Cooper Linux Certified Zerto Certified http://www.coopfire.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Jul 21 12:26:34 2019 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Jul 2019 13:26:34 +0100 Subject: Nginx cache-control headers issue In-Reply-To: References: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org> <20190719224715.orupm3jbijnkjh5g@daoine.org> <20190720073855.5f425332gqreo6x6@daoine.org> Message-ID: <20190721122634.7rsaa2e746qybvsh@daoine.org> On Sat, Jul 20, 2019 at 10:10:39AM +0000, Andrew Andonopoulos wrote: Hi there, > Also, I want to ask you, I saw that the last-modified header with token is always: Last-Modified: Sun, 19 Nov 2000 08:52:00 GMT, but there isn't line in the config forcing this date/time. > Can you suggest which code forcing this modified time?
You appear to be using the third-party module documented at https://github.com/kaltura/nginx-secure-token-module That page says: == secure_token_last_modified syntax: secure_token_last_modified time default: Sun, 19 Nov 2000 08:52:00 GMT context: http, server, location Sets the value of the last-modified header of responses that are not tokenized. An empty string leaves the value of last-modified unaltered, while the string "now" sets the header to the server current time. secure_token_token_last_modified syntax: secure_token_token_last_modified time default: now context: http, server, location Sets the value of the last-modified header of responses that are tokenized (query / cookie) An empty string leaves the value of last-modified unaltered, while the string "now" sets the header to the server current time. == which seems to explain both the "that old timestamp" and the "always the current time" that you reported in the first mail. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Jul 22 11:03:00 2019 From: nginx-forum at forum.nginx.org (rambabuy) Date: Mon, 22 Jul 2019 07:03:00 -0400 Subject: upstream prematurely closed connection while reading response header from upstream In-Reply-To: <3464ee6131b281b7f1e78ac6f6853f49.NginxMailingListEnglish@forum.nginx.org> References: <54410D5E.7040407@gmail.com> <3464ee6131b281b7f1e78ac6f6853f49.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0065904a08e30647bd4033f89d0a28af.NginxMailingListEnglish@forum.nginx.org> Hi, Any solution to this Issue? I am facing similar issue. Thanks Ram Posted at Nginx Forum: https://forum.nginx.org/read.php?2,254031,284944#msg-284944 From mdounin at mdounin.ru Mon Jul 22 11:54:48 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Jul 2019 14:54:48 +0300 Subject: SSL_write() failed errors In-Reply-To: References: <20190719165940.GF1877@mdounin.ru> Message-ID: <20190722115448.GI1877@mdounin.ru> Hello! 
On Fri, Jul 19, 2019 at 11:35:44AM -0700, Palvelin Postmaster via nginx wrote: > > On 19 Jul 2019, at 9.59, Maxim Dounin wrote: > > > > Hello! > > > > On Thu, Jul 18, 2019 at 10:03:24AM -0700, Palvelin Postmaster via nginx wrote: > > > >> we're getting random SSL_write() failed errors on seemingly > >> legitimate requests. The common denominator seems to be they are > >> all for static files (images, js, etc.). > >> > >> Can anyone help me debug the issue? > >> > >> Here's a debug log paste for one incident: > >> https://pastebin.com/ZsbLuD5N > >> > >> Our architecture is: Amazon ALB > Nginx 1.14 > PHP-FPM 7.3 > > > > The following debug log: > > > > 2019/07/18 19:27:25 [debug] 1840#1840: *2037 SSL_write: -1 > > 2019/07/18 19:27:25 [debug] 1840#1840: *2037 SSL_get_error: 6 > > 2019/07/18 19:27:25 [crit] 1840#1840: *2037 SSL_write() failed (SSL:) while sending response to client... > > > > suggests that this is due to error 6, that is, > > SSL_ERROR_ZERO_RETURN. This looks strange, as we haven't seen > > this error being returned from SSL_write(), but might be > > legitimate. In theory this can happen if nginx got a close notify > > SSL alert while writing a response, and probably have something to > > do with Amazon ALB before nginx. > > > > Just in case, could you please provide details about OpenSSL > > library you are using ("nginx -V" should contain enough details)? > > Certainly: > > nginx version: nginx/1.14.0 (Ubuntu) > built with OpenSSL 1.1.0g 2 Nov 2017 (running with OpenSSL 1.1.1c 28 May 2019) You are using Ubuntu 18.04 package, correct? Could you please update to the latest package (1.14.0-0ubuntu1.3, exactly the same nginx version rebuilt against OpenSSL 1.1.1c) and report if it fixes the errors in question or not?
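As an aside, the build-time vs. run-time OpenSSL mismatch is easy to read out of that "nginx -V" line with a small filter. A sketch, shown here against the sample line quoted above — against a live server you would pipe `nginx -V 2>&1` through the same grep:

```shell
# "nginx -V" prints both OpenSSL versions when the library nginx was
# built with differs from the one it is running with. Extract the
# runtime version with grep -o (sample line taken from this thread):
sample='built with OpenSSL 1.1.0g  2 Nov 2017 (running with OpenSSL 1.1.1c  28 May 2019)'
printf '%s\n' "$sample" | grep -o 'running with OpenSSL [0-9A-Za-z.]*'
```

This prints `running with OpenSSL 1.1.1c`, making the runtime library obvious at a glance.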
-- Maxim Dounin http://mdounin.ru/ From kelsey.dannels at nginx.com Mon Jul 22 23:26:10 2019 From: kelsey.dannels at nginx.com (Kelsey Dannels) Date: Mon, 22 Jul 2019 16:26:10 -0700 Subject: Follow up- 2019 NGINX User Survey: Give us feedback and be part of our future Message-ID: Hello- Following up, if you haven't yet, please take ten minutes to fill out our annual NGINX User Survey. We want to hear about your experiences to help improve and shape our product roadmap. Link to survey: https://nkadmin.typeform.com/to/nSuOmW?source=email Best, Kelsey -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 23 12:23:03 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Jul 2019 15:23:03 +0300 Subject: nginx-1.17.2 Message-ID: <20190723122303.GQ1877@mdounin.ru> Changes with nginx 1.17.2 23 Jul 2019 *) Change: minimum supported zlib version is 1.2.0.4. Thanks to Ilya Leoshkevich. *) Change: the $r->internal_redirect() embedded perl method now expects escaped URIs. *) Feature: it is now possible to switch to a named location using the $r->internal_redirect() embedded perl method. *) Bugfix: in error handling in embedded perl. *) Bugfix: a segmentation fault might occur on start or during reconfiguration if hash bucket size larger than 64 kilobytes was used in the configuration. *) Bugfix: nginx might hog CPU during unbuffered proxying and when proxying WebSocket connections if the select, poll, or /dev/poll methods were used. *) Bugfix: in the ngx_http_xslt_filter_module. *) Bugfix: in the ngx_http_ssi_filter_module. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Jul 23 13:38:15 2019 From: nginx-forum at forum.nginx.org (kabloko) Date: Tue, 23 Jul 2019 09:38:15 -0400 Subject: Nginx window config files Message-ID: <8c4b768c55c83b10374a8a2c8bb8c94b.NginxMailingListEnglish@forum.nginx.org> Good day. 
I am trying to set up nginx in my windows machine but whatever alteration i do to my .conf files i always get the "Welcome to nginx page". I followed every guide i've found and tried to set it up multiple times to no avail. Can anyone help me with this issue? I have it detailed in a stack overflow question right here: https://stackoverflow.com/questions/57165230/nginx-configuration-setup-for-windows Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284967,284967#msg-284967 From thresh at nginx.com Tue Jul 23 15:05:45 2019 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 23 Jul 2019 18:05:45 +0300 Subject: TLS 1.3 support in nginx-1.17.1 binary for Ubuntu 18.04 "bionic" provided by nginx.org In-Reply-To: References: Message-ID: <112710ec-86a6-8b65-50ad-218e437c1591@nginx.com> Hello, 09.07.2019 13:35, Konstantin Pavlov wrote: > Thanks for the heads up on the openssl version change in 18.04 - it > definitely is on our roadmap to provide prebuilt packages based on > openssl 1.1.1! > > Indeed, new packages built with openssl 1.1.1 will not work on the older > Ubuntu 18.04 point releases (non-updated), so this means the users will > have to update when they update nginx. > > We definitely will not be changing the already released binaries, as > this is likely to break existing setups that rely on the specific > environments. The next nginx release however will be built using the > newer Ubuntu 18.04 base with openssl 1.1.1. There's no ETA for it yet > as far as I can tell. Just a heads-up: nginx 1.17.2 as available from http://nginx.org/en/linux_packages.html was built with openssl 1.1.1 on Ubuntu 18.04. 
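With packages built against OpenSSL 1.1.1, TLS 1.3 can then be enabled per server block simply by listing it in ssl_protocols. A minimal sketch — the certificate paths and server name are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/cert.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/cert.key;  # placeholder path

    # TLSv1.3 only takes effect when both nginx and the OpenSSL it runs
    # with (1.1.1 or later) support it.
    ssl_protocols TLSv1.2 TLSv1.3;
}
```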
Have a good one, -- Konstantin Pavlov https://www.nginx.com/ From edigarov at qarea.com Wed Jul 24 15:33:10 2019 From: edigarov at qarea.com (Gregory Edigarov) Date: Wed, 24 Jul 2019 18:33:10 +0300 Subject: too_many_redirects Message-ID: <22a47d2a-9901-fe56-7968-ea9565e42c01@qarea.com> Hello, Having this setup: nginx (on host) -> nginx (in docker-nginx [WP site resides here]) -> php-fpm(in docker-php) got the error: too many redirects. what could be the problem? thanks a lot in advance. config on host nginx: server { listen 80; listen 443 ssl http2; server_name example.com; if ($scheme = http) {return 301 https://$server_name$request_uri;} access_log /var/log/access.log; error_log /var/log/error.log; location / { proxy_pass http://127.0.0.1:8181; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $server_name; } } config on dockerized nginx: server { listen 80; server_name example.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; root /var/www; index index.php; gzip_vary on; gzip on; gzip_disable "msie6"; gzip_comp_level 5; # Compress the following MIME types. gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component
text/x-cross-domain-policy; location ~* ^/.well-known/ { allow all; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { # allow all; log_not_found off; access_log off; } location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { fastcgi_pass php:9000; # aka docker-php:9000 fastcgi_buffers 16 16k; fastcgi_read_timeout 300; fastcgi_buffer_size 32k; include fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param QUERY_STRING $query_string; fastcgi_intercept_errors on; } location ~* \.(jpg|jpeg|png|css|js|ico|xml|txt)$ { access_log off; log_not_found off; expires 360d; add_header Cache-Control "public"; } # Specify what happens when .ht files are requested location ~ /\.ht { deny all; } } From francis at daoine.org Wed Jul 24 22:26:29 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 24 Jul 2019 23:26:29 +0100 Subject: too_many_redirects In-Reply-To: <22a47d2a-9901-fe56-7968-ea9565e42c01@qarea.com> References: <22a47d2a-9901-fe56-7968-ea9565e42c01@qarea.com> Message-ID: <20190724222629.zi7rcfafhst4uqxb@daoine.org> On Wed, Jul 24, 2019 at 06:33:10PM +0300, Gregory Edigarov via nginx wrote: Hi there, > Having this setup: > > nginx (on host) -> nginx (in docker-nginx [WP site resides here]) -> > php-fpm(in docker-php) > > got the error: too many redirects. > > what could be the problem? What request do you make? What response do you get? What response do you want instead? The output of "curl -i https://example.com/your/request" will probably answer the first two. > config on host nginx: > > server { > listen 80; > listen 443 ssl http2; > server_name example.com; >
if ($scheme = http) {return 301 https://$server_name$request_uri;} As an aside - it is usually nicer to do that using two server{}s, one for http and one for https. > location / { > proxy_pass http://127.0.0.1:8181; > proxy_redirect off; It may be that changing that to turn a "http://" redirect into a "https://" one will make everything work as you want. The "curl -i" output may show if that is the case. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Jul 25 08:07:57 2019 From: nginx-forum at forum.nginx.org (blason) Date: Thu, 25 Jul 2019 04:07:57 -0400 Subject: How do I add multiple proxy_pass Message-ID: <12b2e98878b9c601a5ea97e5fe044d7f.NginxMailingListEnglish@forum.nginx.org> Hi, I have nginx version 1.10.1 and a scenario below which I am not able to figure out. My reverse proxy is set up as www.example.com and location / is set as location / { proxy_pass https://www.example.com:8084; Now the URL opens properly, but when I log in it diverts to port 88 on the same server, so my query is how do I add multiple proxy_pass entries for the same server, like proxy_pass https://www.example.com:88 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284983,284983#msg-284983 From nginx-forum at forum.nginx.org Thu Jul 25 09:08:30 2019 From: nginx-forum at forum.nginx.org (cello86 at gmail.com) Date: Thu, 25 Jul 2019 05:08:30 -0400 Subject: Nginx and conditional logformat Message-ID: <267aad0446536981eff69f91ab88da58.NginxMailingListEnglish@forum.nginx.org> Hi All, we tried to add some debug information into our access_log for a service with a client certificate authentication. Actually we print some information related to the clients but we would like to print into the logs the client certificate sent by the client during the handshake in case of error. We tried to put the generic logformat into the server block and another logformat into the if condition, but the first wins always versus the second.
Is it possible to config the logformat in a conditional statement? Thanks, Marcello Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284984,284984#msg-284984 From rainer at ultra-secure.de Thu Jul 25 09:59:59 2019 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Thu, 25 Jul 2019 11:59:59 +0200 Subject: Multiple server_name directives? Message-ID: <623a3dbe31ecbff29e9179c71351924f@ultra-secure.de> Hi, I found that using multiple server_name bla; server_name blu; directives seems to actually work. At least in 1.12. Can someone from @nginx comment on whether using that is a good idea? Or is that deprecated already? The documentation doesn't mention it. Best Regards Rainer From mailinglist at unix-solution.de Thu Jul 25 10:09:48 2019 From: mailinglist at unix-solution.de (basti) Date: Thu, 25 Jul 2019 12:09:48 +0200 Subject: Multiple server_name directives? In-Reply-To: <623a3dbe31ecbff29e9179c71351924f@ultra-secure.de> References: <623a3dbe31ecbff29e9179c71351924f@ultra-secure.de> Message-ID: <1d3f2326-ad23-6abd-00bd-1092327093da@unix-solution.de> You can also use multiple names in one line. http://nginx.org/en/docs/http/server_names.html On 25.07.19 11:59, rainer at ultra-secure.de wrote: > Hi, > > > I found that using multiple > > server_name bla; > server_name blu; > > directives seems to actually work. > > At least in 1.12. > > > Can someone from @nginx comment on whether using that is a good idea? > Or is that deprecated already? > > The documentation doesn't mention it. > > > > Best Regards > Rainer > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rainer at ultra-secure.de Thu Jul 25 10:16:25 2019 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Thu, 25 Jul 2019 12:16:25 +0200 Subject: Multiple server_name directives? 
In-Reply-To: <1d3f2326-ad23-6abd-00bd-1092327093da@unix-solution.de> References: <623a3dbe31ecbff29e9179c71351924f@ultra-secure.de> <1d3f2326-ad23-6abd-00bd-1092327093da@unix-solution.de> Message-ID: On 2019-07-25 12:09, basti wrote: > You can also use multiple names in one line. > > http://nginx.org/en/docs/http/server_names.html Yes, that is also what I would consider the default. I just came across the other format and I was honestly surprised it actually worked. From andre8525 at hotmail.com Thu Jul 25 12:02:27 2019 From: andre8525 at hotmail.com (Andrew Andonopoulos) Date: Thu, 25 Jul 2019 12:02:27 +0000 Subject: Nginx cache-control headers issue In-Reply-To: <20190721122634.7rsaa2e746qybvsh@daoine.org> References: <3cb9e736e154956bdbd93dac89b536cd.NginxMailingListEnglish@forum.nginx.org> <20190719224715.orupm3jbijnkjh5g@daoine.org> <20190720073855.5f425332gqreo6x6@daoine.org> , <20190721122634.7rsaa2e746qybvsh@daoine.org> Message-ID: Hi Francis, Does nginx decide which content to cache based on the configuration under "Location" plus the cache key? For example, I have proxy_cache, which means it will cache everything that matches the specific location? I don't know yet why I am getting cache misses for all the token-based requests (m3u8 & ts), but I am wondering if it is related to the cache key and whether I will need to instruct nginx to check the token first and then cache it? Can this be done? Thanks Andrew ________________________________ From: nginx on behalf of Francis Daly Sent: Sunday, July 21, 2019 12:26 PM To: nginx at nginx.org Subject: Re: Nginx cache-control headers issue On Sat, Jul 20, 2019 at 10:10:39AM +0000, Andrew Andonopoulos wrote: Hi there, > Also, I want to ask you, I saw that the last-modified header with token is always: Last-Modified: Sun, 19 Nov 2000 08:52:00 GMT, but there isn't line in the config forcing this date/time. > Can you suggest which code forcing this modified time?
You appear to be using the third-party module documented at https://github.com/kaltura/nginx-secure-token-module That page says: == secure_token_last_modified syntax: secure_token_last_modified time default: Sun, 19 Nov 2000 08:52:00 GMT context: http, server, location Sets the value of the last-modified header of responses that are not tokenized. An empty string leaves the value of last-modified unaltered, while the string "now" sets the header to the server current time. secure_token_token_last_modified syntax: secure_token_token_last_modified time default: now context: http, server, location Sets the value of the last-modified header of responses that are tokenized (query / cookie) An empty string leaves the value of last-modified unaltered, while the string "now" sets the header to the server current time. == which seems to explain both the "that old timestamp" and the "always the current time" that you reported in the first mail. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jul 25 13:19:31 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Jul 2019 16:19:31 +0300 Subject: Nginx and conditional logformat In-Reply-To: <267aad0446536981eff69f91ab88da58.NginxMailingListEnglish@forum.nginx.org> References: <267aad0446536981eff69f91ab88da58.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190725131931.GU1877@mdounin.ru> Hello! On Thu, Jul 25, 2019 at 05:08:30AM -0400, cello86 at gmail.com wrote: > Hi All, > we tried to add some debug information into our access_log for a service > with a client certificate authentication. Actually we print some information > related to the clients but we would print into the logs the client > certificate sent by the client during the handshake in case of error. 
We > tried to put the generic logformat into the server block and another > logformat into the if condition, but the first always wins over the > second. > > Is it possible to configure the log format in a conditional statement? Yes, check the "if=" parameter of the access_log directive. See here for details: http://nginx.org/r/access_log -- Maxim Dounin http://mdounin.ru/ From cello86 at gmail.com Fri Jul 26 13:49:05 2019 From: cello86 at gmail.com (Marcello Lorenzi) Date: Fri, 26 Jul 2019 15:49:05 +0200 Subject: Nginx and conditional logformat In-Reply-To: <20190725131931.GU1877@mdounin.ru> References: <267aad0446536981eff69f91ab88da58.NginxMailingListEnglish@forum.nginx.org> <20190725131931.GU1877@mdounin.ru> Message-ID: Hi Maxim, I tried to configure the location with this example: server { access_log logs/access_log sslclient; location / { if ($ssl_client_verify != "SUCCESS") { set $loggingcert 1; } access_log logs/access_log sslclientfull if=$loggingcert; } } I noticed that all the requests inherit the first access_log configuration and not the conditional one. Marcello On Thu, Jul 25, 2019 at 3:19 PM Maxim Dounin wrote: > Hello! > > On Thu, Jul 25, 2019 at 05:08:30AM -0400, cello86 at gmail.com wrote: > > > Hi All, > > we tried to add some debug information into our access_log for a service > > with client certificate authentication. Currently we print some > information > > related to the clients, but we would like to print into the logs the client > > certificate sent by the client during the handshake in case of error. We > > tried to put the generic logformat into the server block and another > > logformat into the if condition, but the first always wins over the > > second. > > > > Is it possible to configure the log format in a conditional statement? > > Yes, check the "if=" parameter of the access_log directive.
See > here for details: > > http://nginx.org/r/access_log > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From postmaster at palvelin.fi Sat Jul 27 16:08:14 2019 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Sat, 27 Jul 2019 09:08:14 -0700 Subject: SSL_write() failed errors In-Reply-To: <20190722115448.GI1877@mdounin.ru> References: <20190719165940.GF1877@mdounin.ru> <20190722115448.GI1877@mdounin.ru> Message-ID: <058AF186-9E57-48BB-9F28-E899F8D07A64@palvelin.fi> > On 22 Jul 2019, at 4:54, Maxim Dounin wrote: > > Hello! > > On Fri, Jul 19, 2019 at 11:35:44AM -0700, Palvelin Postmaster via nginx wrote: > >>> On 19 Jul 2019, at 9.59, Maxim Dounin wrote: >>> >>> Hello! >>> >>> On Thu, Jul 18, 2019 at 10:03:24AM -0700, Palvelin Postmaster via nginx wrote: >>> >>>> we're getting random SSL_write() failed errors on seemingly >>>> legitimate requests. The common denominator seems to be they are >>>> all for static files (images, js, etc.). >>>> >>>> Can anyone help me debug the issue? >>>> >>>> Here's a debug log paste for one incident: >>>> https://pastebin.com/ZsbLuD5N >>>> >>>> Our architecture is: Amazon ALB > Nginx 1.14 > PHP-FPM 7.3 >>> >>> The following debug log: >>> >>> 2019/07/18 19:27:25 [debug] 1840#1840: *2037 SSL_write: -1 >>> 2019/07/18 19:27:25 [debug] 1840#1840: *2037 SSL_get_error: 6 >>> 2019/07/18 19:27:25 [crit] 1840#1840: *2037 SSL_write() failed (SSL:) while sending response to client... >>> >>> suggests that this is due to error 6, that is, >>> SSL_ERROR_ZERO_RETURN. This looks strange, as we haven't seen >>> this error being returned from SSL_write(), but might be >>> legitimate.
In theory this can happen if nginx got a close notify >>> SSL alert while writing a response, and probably has something to >>> do with Amazon ALB before nginx. >>> >>> Just in case, could you please provide details about the OpenSSL >>> library you are using ("nginx -V" should contain enough details)? >> >> Certainly: >> >> nginx version: nginx/1.14.0 (Ubuntu) >> built with OpenSSL 1.1.0g 2 Nov 2017 (running with OpenSSL 1.1.1c 28 May 2019) > > You are using the Ubuntu 18.04 package, correct? > > Could you please update to the latest package (1.14.0-0ubuntu1.3, > exactly the same nginx version rebuilt against OpenSSL 1.1.1c) and > report if it fixes the errors in question or not? Yes, using the Ubuntu 18.04 package. I upgraded the package to the latest and have been following the error log for a few days. I'm still getting one or two errors a day, but the frequency is now small enough to become uninteresting. :) -- Palvelin.fi Hostmaster postmaster at palvelin.fi From robn at fastmailteam.com Mon Jul 29 05:03:09 2019 From: robn at fastmailteam.com (=?UTF-8?Q?Rob_N_=E2=98=85?=) Date: Mon, 29 Jul 2019 15:03:09 +1000 Subject: Crash in mail module during SMTP setup Message-ID: I'm using the mail module for IMAP/POP/SMTP proxying (at Fastmail). Lately (last few weeks) we've heard reports of connections dropping (particularly IMAP IDLE), but it wasn't until this morning that I understood the source of it: nginx is segfaulting. I haven't got a direct reproduction, but I find that if I just attach gdb to a running worker, most of the time it will eventually fail (sometimes they run to completion). This is only happening on our production frontends, never in development or test, which suggests it's related to load. The machines aren't under any particular memory or CPU pressure, and peak around 150K active simultaneous connections, but these crashes don't appear to correlate with daily load variations.
Probably, this has become more noticeable as the number of active connections has increased in the last few weeks (as we've taken on more customers). This was happening with 1.15.8, which we've had running since February. I upgraded to 1.17.2 this morning just in case it had already been fixed, but it still happens. It's slow gathering crashes, so I only have this one (that I've seen three times): Program received signal SIGSEGV, Segmentation fault. 0x0000000000555dbe in ngx_mail_smtp_resolve_addr_handler (ctx=0x4c4a430) at src/mail/ngx_mail_smtp_handler.c:107 107 ngx_log_error(NGX_LOG_ERR, c->log, 0, (gdb) bt #0 0x0000000000555dbe in ngx_mail_smtp_resolve_addr_handler (ctx=0x4c4a430) at src/mail/ngx_mail_smtp_handler.c:107 #1 0x000000000047f0c4 in ngx_resolver_timeout_handler (ev=0x6cecb30) at src/core/ngx_resolver.c:4047 #2 0x00000000004879e7 in ngx_event_expire_timers () at src/event/ngx_event_timer.c:94 #3 0x0000000000485307 in ngx_process_events_and_timers (cycle=0x1fec320) at src/event/ngx_event.c:256 #4 0x00000000004949f7 in ngx_worker_process_cycle (cycle=0x1fec320, data=0x2) at src/os/unix/ngx_process_cycle.c:757 #5 0x0000000000491121 in ngx_spawn_process (cycle=0x1fec320, proc=0x494909 , data=0x2, name=0x78ea93 "worker process", respawn=2) at src/os/unix/ngx_process.c:199 #6 0x0000000000494499 in ngx_reap_children (cycle=0x1fec320) at src/os/unix/ngx_process_cycle.c:629 #7 0x000000000049304f in ngx_master_process_cycle (cycle=0x1fec320) at src/os/unix/ngx_process_cycle.c:182 #8 0x000000000044f962 in main (argc=3, argv=0x7ffe5cfba9b8) at src/core/nginx.c:382 (gdb) f 107 #0 0x0000000000000000 in ?? 
() (gdb) f 0 #0 0x0000000000555dbe in ngx_mail_smtp_resolve_addr_handler (ctx=0x4c4a430) at src/mail/ngx_mail_smtp_handler.c:107 107 ngx_log_error(NGX_LOG_ERR, c->log, 0, (gdb) l 107 102 103 s = ctx->data; 104 c = s->connection; 105 106 if (ctx->state) { 107 ngx_log_error(NGX_LOG_ERR, c->log, 0, 108 "%V could not be resolved (%i: %s)", 109 &c->addr_text, ctx->state, 110 ngx_resolver_strerror(ctx->state)); 111 (gdb) p ctx $1 = (ngx_resolver_ctx_t *) 0x4c4a430 (gdb) p ctx->state $2 = 110 (gdb) p *ctx $3 = {next = 0x0, resolver = 0x2020a60, node = 0x66efd90, ident = -1, state = 110, name = {len = 0, data = 0x0}, service = {len = 0, data = 0x0}, valid = 0, naddrs = 0, addrs = 0x0, addr = {sockaddr = 0x5dc46f0, socklen = 16, name = {len = 0, data = 0x0}, priority = 0, weight = 0}, sin = {sin_family = 0, sin_port = 0, sin_addr = { s_addr = 0}, sin_zero = "\000\000\000\000\000\000\000"}, count = 0, nsrvs = 0, srvs = 0x0, handler = 0x555d7e , data = 0x47aeaf0, timeout = 30000, quick = 0, async = 1, cancelable = 0, recursion = 0, event = 0x6cecb30} An earlier one on 1.15.8 looks more like this (I don't have the complete gdb session unfortunately): Program received signal SIGSEGV, Segmentation fault. 0x000000000055427b in ngx_event_add_timer (ev=0x321c980, timer=60000) at src/event/ngx_event_timer.h:80 80 src/event/ngx_event_timer.h: No such file or directory. (gdb) l 80 75 ngx_del_timer(ev); 76 } 77 78 ev->timer.key = key; 79 80 ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0, 81 "event timer add: %d: %M:%M", 82 ngx_event_ident(ev->data), timer, ev->timer.key); 83 84 ngx_rbtree_insert(&ngx_event_timer_rbtree, &ev->timer); (gdb) p ev->data $3 = (void *) 0x100 I've lost the full trace on that but again, it was during SMTP auth. I don't have a good intuition of how to debug this further. What other information could I give you to help figure it out? Cheers, Rob N Fastmail. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Jul 29 13:02:20 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Jul 2019 16:02:20 +0300 Subject: Nginx and conditional logformat In-Reply-To: References: <267aad0446536981eff69f91ab88da58.NginxMailingListEnglish@forum.nginx.org> <20190725131931.GU1877@mdounin.ru> Message-ID: <20190729130220.GY1877@mdounin.ru> Hello! On Fri, Jul 26, 2019 at 03:49:05PM +0200, Marcello Lorenzi wrote: > Hi Maxim, > I tried to configure the location with this example: > > server { > access_log logs/access_log sslclient; > > location / { > > if ($ssl_client_verify != "SUCCESS") { > set $loggingcert 1; > } > > access_log logs/access_log sslclientfull if=$loggingcert; > > } > > } > > I noticed that all the requests inherit the first access_log configuration > and not the conditional one. In the configuration in question, all requests handled in "location /" will either use the "sslclientfull" log format, or won't be logged at all. If this is not what you observe, most likely you've missed something and requests are either not handled in the server in question, or not handled in "location /". For example, this may happen if you are using "ssl_verify_client on;" and all requests without proper client certificates are terminated with error 400 in the server context before any processing, hence they are not handled in "location /". -- Maxim Dounin http://mdounin.ru/ From kworthington at gmail.com Mon Jul 29 14:31:02 2019 From: kworthington at gmail.com (Kevin Worthington) Date: Mon, 29 Jul 2019 10:31:02 -0400 Subject: [nginx-announce] nginx-1.17.2 In-Reply-To: <20190723122309.GR1877@mdounin.ru> References: <20190723122309.GR1877@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.17.2 for Windows https://kevinworthington.com/nginxwin1172 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin-based builds of Nginx. Officially supported native Windows binaries are at nginx.org.
Announcements are also available here: Twitter http://twitter.com/kworthington Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington On Tue, Jul 23, 2019 at 8:23 AM Maxim Dounin wrote: > Changes with nginx 1.17.2 23 Jul > 2019 > > *) Change: minimum supported zlib version is 1.2.0.4. > Thanks to Ilya Leoshkevich. > > *) Change: the $r->internal_redirect() embedded perl method now expects > escaped URIs. > > *) Feature: it is now possible to switch to a named location using the > $r->internal_redirect() embedded perl method. > > *) Bugfix: in error handling in embedded perl. > > *) Bugfix: a segmentation fault might occur on start or during > reconfiguration if hash bucket size larger than 64 kilobytes was > used > in the configuration. > > *) Bugfix: nginx might hog CPU during unbuffered proxying and when > proxying WebSocket connections if the select, poll, or /dev/poll > methods were used. > > *) Bugfix: in the ngx_http_xslt_filter_module. > > *) Bugfix: in the ngx_http_ssi_filter_module. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jul 29 15:25:08 2019 From: nginx-forum at forum.nginx.org (fredr) Date: Mon, 29 Jul 2019 11:25:08 -0400 Subject: Resident memory not released Message-ID: <1b7e135761792e5f2748e54c980770c4.NginxMailingListEnglish@forum.nginx.org> Hi all, We are using the kubernetes nginx-ingress for websocket connections in front of one of our applications. We have added automatic scaling based on the resident memory, as that seems to be a good scaling metric when dealing with persistent connections. 
But we noticed that the memory seems to never be released, and thus it only scales up and never down. I've done some testing locally using this dockerfile and nginx.conf with vanilla nginx: https://gist.github.com/fredr/d58f8221b813e4fdcf7bbfc08df30afa and it seems to have the same behaviour. Starting the docker image, nginx has allocated 211Mb in the resident memory column in htop. Then I connected 20K websocket connections, memory rose to about 540Mb, and when I disconnected all the connections, the used memory stayed at 540Mb. If I reconnect all the 20K websocket connections, it seems to reuse the already allocated memory. So I wonder: is this by design? Is it somehow configurable, so that nginx would release this memory? Or is it the wrong value to base scaling on? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285025,285025#msg-285025 From mdounin at mdounin.ru Mon Jul 29 17:02:29 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Jul 2019 20:02:29 +0300 Subject: Resident memory not released In-Reply-To: <1b7e135761792e5f2748e54c980770c4.NginxMailingListEnglish@forum.nginx.org> References: <1b7e135761792e5f2748e54c980770c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190729170228.GD1877@mdounin.ru> Hello! On Mon, Jul 29, 2019 at 11:25:08AM -0400, fredr wrote: > Hi all, > > We are using the kubernetes nginx-ingress for websocket connections in front > of one of our applications. We have added automatic scaling based on the > resident memory, as that seems to be a good scaling metric when dealing with > persistent connections. But we noticed that the memory seems to never be > released, and thus it only scales up and never down. > > I've done some testing locally using this dockerfile and nginx.conf with > vanilla nginx: > https://gist.github.com/fredr/d58f8221b813e4fdcf7bbfc08df30afa and it seems > to have the same behaviour. > > Starting the docker image, nginx has allocated 211Mb in the resident memory > column in htop.
> Then I connected 20K websocket connections, memory rose to about 540Mb, > and when I disconnected all the connections, the used memory stayed at > 540Mb. > > If I reconnect all the 20K websocket connections, it seems to reuse the > already allocated memory. > So I wonder: is this by design? Is it somehow configurable, so that nginx > would release this memory? Or is it the wrong value to base scaling on? Whether or not allocated (and then freed) memory will be returned to the OS depends mostly on your system allocator and its settings. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Jul 29 17:25:24 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Jul 2019 20:25:24 +0300 Subject: Multiple server_name directives? In-Reply-To: <623a3dbe31ecbff29e9179c71351924f@ultra-secure.de> References: <623a3dbe31ecbff29e9179c71351924f@ultra-secure.de> Message-ID: <20190729172524.GE1877@mdounin.ru> Hello! On Thu, Jul 25, 2019 at 11:59:59AM +0200, rainer at ultra-secure.de wrote: > I found that using multiple > > server_name bla; > server_name blu; > > directives seems to actually work. > > At least in 1.12. > > > Can someone from @nginx comment on whether using that is a good idea? > Or is that deprecated already? > > The documentation doesn't mention it. This is something that more or less universally works for directives which accept an arbitrary number of parameters, such as "server_name" or "index", as well as bitmask-style directives like "ssl_protocols" and "proxy_next_upstream". This is not something documented though, and I personally would rather avoid configuring things this way - mostly because I suspect this can be accidentally broken if a particular directive's handling is changed for some reason. On the other hand, there are no plans to remove and/or deprecate such syntax intentionally.
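For reference, the documented way to achieve the same result is a single server_name directive listing all the names; a minimal sketch, using the same placeholder names as the question:

```nginx
# Documented form: one server_name directive with multiple names
# (see http://nginx.org/en/docs/http/server_names.html).
server {
    listen       80;
    server_name  bla blu;
}
```

Both spellings make the server block match either name; only the multi-directive variant is undocumented.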
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Jul 29 18:26:42 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Jul 2019 21:26:42 +0300 Subject: Crash in mail module during SMTP setup In-Reply-To: References: Message-ID: <20190729182642.GF1877@mdounin.ru> Hello! On Mon, Jul 29, 2019 at 03:03:09PM +1000, Rob N ★ wrote: > I'm using the mail module for IMAP/POP/SMTP proxying (at > Fastmail). Lately (last few weeks) we've heard reports of > connections dropping (particularly IMAP IDLE) but it wasn't > until this morning I understood the source of it: nginx is > segfaulting. > > I haven't got a direct reproduction, but I find that if I just > attach gdb to a running worker, most of the time it will > eventually fail (sometimes they run to completion). > > This is only happening on our production frontends, never in > development or test, which suggests it's related to load. The > machines aren't under any particular memory or CPU pressure, and > peak around 150K active simultaneous connections, but these > crashes don't appear to correlate with daily load variations. > Probably, this has become more noticeable as the number of > active connections has increased in the last few weeks (as we've > taken on more customers). > > This was happening with 1.15.8, which we've had running since > February. I upgraded to 1.17.2 this morning just in case it had > already been fixed, but it still happens. > > > It's slow gathering crashes, so I only have this one (that I've seen three times): > > Program received signal SIGSEGV, Segmentation fault.
> 0x0000000000555dbe in ngx_mail_smtp_resolve_addr_handler (ctx=0x4c4a430) at src/mail/ngx_mail_smtp_handler.c:107 > 107 ngx_log_error(NGX_LOG_ERR, c->log, 0, > (gdb) bt > #0 0x0000000000555dbe in ngx_mail_smtp_resolve_addr_handler (ctx=0x4c4a430) at src/mail/ngx_mail_smtp_handler.c:107 > #1 0x000000000047f0c4 in ngx_resolver_timeout_handler (ev=0x6cecb30) at src/core/ngx_resolver.c:4047 > #2 0x00000000004879e7 in ngx_event_expire_timers () at src/event/ngx_event_timer.c:94 > #3 0x0000000000485307 in ngx_process_events_and_timers (cycle=0x1fec320) at src/event/ngx_event.c:256 > #4 0x00000000004949f7 in ngx_worker_process_cycle (cycle=0x1fec320, data=0x2) at src/os/unix/ngx_process_cycle.c:757 > #5 0x0000000000491121 in ngx_spawn_process (cycle=0x1fec320, proc=0x494909 , data=0x2, > name=0x78ea93 "worker process", respawn=2) at src/os/unix/ngx_process.c:199 > #6 0x0000000000494499 in ngx_reap_children (cycle=0x1fec320) at src/os/unix/ngx_process_cycle.c:629 > #7 0x000000000049304f in ngx_master_process_cycle (cycle=0x1fec320) at src/os/unix/ngx_process_cycle.c:182 > #8 0x000000000044f962 in main (argc=3, argv=0x7ffe5cfba9b8) at src/core/nginx.c:382 > (gdb) f 107 > #0 0x0000000000000000 in ?? 
() > (gdb) f 0 > #0 0x0000000000555dbe in ngx_mail_smtp_resolve_addr_handler (ctx=0x4c4a430) at src/mail/ngx_mail_smtp_handler.c:107 > 107 ngx_log_error(NGX_LOG_ERR, c->log, 0, > (gdb) l 107 > 102 > 103 s = ctx->data; > 104 c = s->connection; > 105 > 106 if (ctx->state) { > 107 ngx_log_error(NGX_LOG_ERR, c->log, 0, > 108 "%V could not be resolved (%i: %s)", > 109 &c->addr_text, ctx->state, > 110 ngx_resolver_strerror(ctx->state)); > 111 > (gdb) p ctx > $1 = (ngx_resolver_ctx_t *) 0x4c4a430 > (gdb) p ctx->state > $2 = 110 > (gdb) p *ctx > $3 = {next = 0x0, resolver = 0x2020a60, node = 0x66efd90, ident = -1, state = 110, name = {len = 0, data = 0x0}, > service = {len = 0, data = 0x0}, valid = 0, naddrs = 0, addrs = 0x0, addr = {sockaddr = 0x5dc46f0, socklen = 16, > name = {len = 0, data = 0x0}, priority = 0, weight = 0}, sin = {sin_family = 0, sin_port = 0, sin_addr = { > s_addr = 0}, sin_zero = "\000\000\000\000\000\000\000"}, count = 0, nsrvs = 0, srvs = 0x0, > handler = 0x555d7e , data = 0x47aeaf0, timeout = 30000, quick = 0, async = 1, > cancelable = 0, recursion = 0, event = 0x6cecb30} Looking at "p *c" and "p *s" might be also interesting. > An earlier one on 1.15.8 looks more like this (I don't have the complete gdb session unfortunately): > > Program received signal SIGSEGV, Segmentation fault. > 0x000000000055427b in ngx_event_add_timer (ev=0x321c980, timer=60000) at src/event/ngx_event_timer.h:80 > 80 src/event/ngx_event_timer.h: No such file or directory. > > (gdb) l 80 > 75 ngx_del_timer(ev); > 76 } > 77 > 78 ev->timer.key = key; > 79 > 80 ngx_log_debug3(NGX_LOG_DEBUG_EVENT, ev->log, 0, > 81 "event timer add: %d: %M:%M", > 82 ngx_event_ident(ev->data), timer, ev->timer.key); > 83 > 84 ngx_rbtree_insert(&ngx_event_timer_rbtree, &ev->timer); > > (gdb) p ev->data > $3 = (void *) 0x100 > > > I've lost the full trace on that but again, it was during SMTP auth. > > > I don't have a good intuition of how to debug this further. 
What > other information could I give you to help figure it out? Any changes to nginx code and/or additional modules? Additionally, consider configuring debug logging. Given that it's slow gathering cores, normal debug logging might not be an option, though configuring large enough memory buffer might work, see here: http://nginx.org/en/docs/debugging_log.html#memory Note that given the segfault looks related to DNS resolution timeouts - at least the one from 1.17.2 - and your configuration uses 30 seconds timeout - memory buffer needs to be big enough to include at least 30 seconds of debug logs. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Jul 29 18:52:47 2019 From: nginx-forum at forum.nginx.org (aledbf) Date: Mon, 29 Jul 2019 14:52:47 -0400 Subject: Resident memory not released In-Reply-To: <20190729170228.GD1877@mdounin.ru> References: <20190729170228.GD1877@mdounin.ru> Message-ID: <8583e275a38b5b5202f18b7bf648e0fe.NginxMailingListEnglish@forum.nginx.org> > on your system allocator and its settings. Do you have a suggestion to enable this behavior (release of memory) using a particular allocator or setting? Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285025,285031#msg-285031 From mdounin at mdounin.ru Mon Jul 29 19:10:13 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Jul 2019 22:10:13 +0300 Subject: Resident memory not released In-Reply-To: <8583e275a38b5b5202f18b7bf648e0fe.NginxMailingListEnglish@forum.nginx.org> References: <20190729170228.GD1877@mdounin.ru> <8583e275a38b5b5202f18b7bf648e0fe.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190729191013.GG1877@mdounin.ru> Hello! On Mon, Jul 29, 2019 at 02:52:47PM -0400, aledbf wrote: > > on your system allocator and its settings. > > Do you have a suggestion to enable this behavior (release of memory) using a > particular allocator or setting? > Thanks! 
On FreeBSD and/or on any system with jemalloc(), I would expect memory to be returned to the OS more or less effectively. On Linux with standard glibc allocator, consider tuning MALLOC_MMAP_THRESHOLD_ and MALLOC_TRIM_THRESHOLD_ environment variables, as documented here: http://man7.org/linux/man-pages/man3/mallopt.3.html Note that you may need to use the "env" directive to make sure these are passed to nginx worker processes as well. -- Maxim Dounin http://mdounin.ru/ From robn at fastmailteam.com Tue Jul 30 12:39:56 2019 From: robn at fastmailteam.com (=?UTF-8?Q?Rob_N_=E2=98=85?=) Date: Tue, 30 Jul 2019 22:39:56 +1000 Subject: Crash in mail module during SMTP setup In-Reply-To: <20190729182642.GF1877@mdounin.ru> References: <20190729182642.GF1877@mdounin.ru> Message-ID: <3a8fd0ea-463a-4018-92f3-5a9082297e61@www.fastmail.com> On Tue, 30 Jul 2019, at 4:26 AM, Maxim Dounin wrote: > Looking at "p *c" and "p *s" might be also interesting. Program received signal SIGSEGV, Segmentation fault. 
0x00000000005562f2 in ngx_mail_smtp_resolve_name_handler (ctx=0x7bcaa40) at src/mail/ngx_mail_smtp_handler.c:215 215 ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, (gdb) p *c $14 = {data = 0x30, read = 0x111, write = 0xc2cfff0, fd = 263201712, recv = 0xfb023c0, send = 0x0, recv_chain = 0xb0, send_chain = 0x350cf90, listening = 0x0, sent = 55627856, log = 0x0, pool = 0x350cff0, type = -1242759166, sockaddr = 0x0, socklen = 7, addr_text = {len = 0, data = 0x2c4e8fc ""}, proxy_protocol_addr = {len = 0, data = 0x54eb79 "UH\211\345H\203\354 at H\211}\330H\211u\320H\211U\310H\213E\330H\213@@H\205\300tCH\213E\330H\213P at H\213u\310H\213E\320H\211\321\272\234\064z"}, proxy_protocol_port = 53344, ssl = 0x484cb1 , udp = 0x2018d20, local_sockaddr = 0x7a414a, local_socklen = 0, buffer = 0x33312e32322e3438, queue = {prev = 0x3031312e36, next = 0x0}, number = 204275712, requests = 139872032560632, buffered = 0, log_error = 0, timedout = 0, error = 0, destroyed = 0, idle = 0, reusable = 0, close = 1, shared = 0, sendfile = 1, sndlowat = 1, tcp_nodelay = 2, tcp_nopush = 0, need_last_buf = 0} (gdb) p *s $15 = {signature = 155588656, connection = 0x350cf80, out = {len = 35, data = 0x20ae3e0 "220 smtp.fastmail.com ESMTP ready\r\n250 smtp.fastmail.com\r\n250-smtp.fastmail.com\r\n250-PIPELINING\r\n250-SIZE 71000000\r\n250-ENHANCEDSTATUSCODES\r\n250-8BITMIME\r\n250-AUTH PLAIN LOGIN\r\n250 AUTH=PLAIN LOGIN\r\n2"...}, buffer = 0x0, ctx = 0xfb02470, main_conf = 0x2015218, srv_conf = 0x202af60, resolver_ctx = 0x0, proxy = 0x0, mail_state = 0, protocol = 2, blocked = 0, quit = 0, quoted = 0, backslash = 0, no_sync_literal = 0, starttls = 0, esmtp = 0, auth_method = 0, auth_wait = 0, login = {len = 0, data = 0x0}, passwd = {len = 0, data = 0x0}, salt = {len = 0, data = 0x0}, tag = {len = 0, data = 0x0}, tagged_line = {len = 0, data = 0x0}, text = {len = 0, data = 0x0}, addr_text = 0x20b0768, host = {len = 20, data = 0xfb024a8 "aldo-gw.g-service.ru"}, smtp_helo = {len = 0, data = 0x0}, 
smtp_from = {len = 0, data = 0x0}, smtp_to = {len = 0, data = 0x0}, cmd = {len = 0, data = 0x0}, command = 0, args = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, login_attempt = 0, state = 0, cmd_start = 0x0, arg_start = 0x0, arg_end = 0x0, literal_len = 384} > Any changes to nginx code and/or additional modules? This small patch set (which we've had for years): https://github.com/fastmailops/nginx/commits/1.17.2-fastmail Modules: lua(+luajit), headers_more, ndk, vts (though none of these do anything with the mail module (I know, they're in the same binary though)). > Additionally, consider configuring debug logging. Given that it's > slow gathering cores, normal debug logging might not be an option, > though configuring large enough memory buffer might work, see > here: Working on this! Rob N. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Jul 30 14:46:26 2019 From: nginx-forum at forum.nginx.org (fredr) Date: Tue, 30 Jul 2019 10:46:26 -0400 Subject: Resident memory not released In-Reply-To: <20190729191013.GG1877@mdounin.ru> References: <20190729191013.GG1877@mdounin.ru> Message-ID: <36f0c7b4598c3a60531413ffac758133.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > > Whether or not allocated (and then freed) memory will be returned > to the OS depends mostly on your system allocator and its > settings. That is very interesting! I had no idea, thanks! Maxim Dounin Wrote: ------------------------------------------------------- > On Linux with standard glibc allocator, consider tuning > MALLOC_MMAP_THRESHOLD_ and MALLOC_TRIM_THRESHOLD_ environment > variables, as documented here: > > http://man7.org/linux/man-pages/man3/mallopt.3.html I've been playing around with MALLOC_MMAP_THRESHOLD_ and MALLOC_TRIM_THRESHOLD_ without much success. 
I noticed that when setting a low value on MALLOC_TRIM_THRESHOLD_, nginx would allocate more memory, and then when disconnecting release about half of that. So a bit of progress, I guess. I then tried setting MALLOC_CHECK=1, and that magically solved it, it seems. When disconnecting the websockets, all memory was reclaimed by the OS. But I don't understand why; from reading the man pages you linked, I thought it would only trigger some logging of memory-related errors. I haven't gotten it to work with the kubernetes nginx ingress yet; it seems the environment variables aren't passed to the nginx processes for some reason. But I'm working on that. Thanks for your help! If anyone knows more about MALLOC_CHECK and whether it is safe to set in a production environment, please let me know. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285025,285036#msg-285036 From mdounin at mdounin.ru Tue Jul 30 15:32:43 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Jul 2019 18:32:43 +0300 Subject: Crash in mail module during SMTP setup In-Reply-To: <3a8fd0ea-463a-4018-92f3-5a9082297e61@www.fastmail.com> References: <20190729182642.GF1877@mdounin.ru> <3a8fd0ea-463a-4018-92f3-5a9082297e61@www.fastmail.com> Message-ID: <20190730153243.GJ1877@mdounin.ru> Hello! On Tue, Jul 30, 2019 at 10:39:56PM +1000, Rob N ★ wrote: > On Tue, 30 Jul 2019, at 4:26 AM, Maxim Dounin wrote: > > Looking at "p *c" and "p *s" might be also interesting. > > Program received signal SIGSEGV, Segmentation fault.
> 0x00000000005562f2 in ngx_mail_smtp_resolve_name_handler (ctx=0x7bcaa40) > at src/mail/ngx_mail_smtp_handler.c:215 > 215 ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, > > (gdb) p *c > $14 = {data = 0x30, read = 0x111, write = 0xc2cfff0, fd = 263201712, > recv = 0xfb023c0, send = 0x0, recv_chain = 0xb0, send_chain = 0x350cf90, > listening = 0x0, sent = 55627856, log = 0x0, pool = 0x350cff0, > type = -1242759166, sockaddr = 0x0, socklen = 7, addr_text = {len = 0, > data = 0x2c4e8fc ""}, proxy_protocol_addr = {len = 0, > data = 0x54eb79 "UH\211\345H\203\354 at H\211}\330H\211u\320H\211U\310H\213E\330H\213@@H\205\300tCH\213E\330H\213P at H\213u\310H\213E\320H\211\321\272\234\064z"}, proxy_protocol_port = 53344, > ssl = 0x484cb1 , udp = 0x2018d20, > local_sockaddr = 0x7a414a, local_socklen = 0, buffer = 0x33312e32322e3438, > queue = {prev = 0x3031312e36, next = 0x0}, number = 204275712, > requests = 139872032560632, buffered = 0, log_error = 0, timedout = 0, > error = 0, destroyed = 0, idle = 0, reusable = 0, close = 1, shared = 0, > sendfile = 1, sndlowat = 1, tcp_nodelay = 2, tcp_nopush = 0, > need_last_buf = 0} It looks like "c" points to garbage. > > (gdb) p *s > $15 = {signature = 155588656, connection = 0x350cf80, out = {len = 35, Signature should be 0x4C49414D ("MAIL") == 1279869261, so this looks like garbage too. And this explains why "c" points to garbage. > data = 0x20ae3e0 "220 smtp.fastmail.com ESMTP ready\r\n250 smtp.fastmail.com\r\n250-smtp.fastmail.com\r\n250-PIPELINING\r\n250-SIZE 71000000\r\n250-ENHANCEDSTATUSCODES\r\n250-8BITMIME\r\n250-AUTH PLAIN LOGIN\r\n250 AUTH=PLAIN LOGIN\r\n2"...}, buffer = 0x0, ctx = 0xfb02470, main_conf = 0x2015218, Except there are some seemingly valid fields - it looks like s->out is set to sscf->greeting. So it looks like this might be an already closed and partially overwritten session. 
Given that "s->out = sscf->greeting;" is expected to happen after client address resolution, likely this is a duplicate handler call from the resolver. I think I see the problem - when using SMTP with SSL and resolver, read events might be enabled during address resolving, leading to duplicate ngx_mail_ssl_handshake_handler() calls if something arrives from the client, and duplicate session initialization - including starting another resolving. The following patch should resolve this: # HG changeset patch # User Maxim Dounin # Date 1564500680 -10800 # Tue Jul 30 18:31:20 2019 +0300 # Node ID 63604bfd60a09c7c91ce62c89df468a6e54d2f1c # Parent e7181cfe9212de7f67df805bb746519c059b490b Mail: fixed duplicate resolving. When using SMTP with SSL and resolver, read events might be enabled during address resolving, leading to duplicate ngx_mail_ssl_handshake_handler() calls if something arrives from the client, and duplicate session initialization - including starting another resolving. This can lead to a segmentation fault if the session is closed after first resolving finished. Fix is to block read events while resolving. Reported by Robert Norris, http://mailman.nginx.org/pipermail/nginx/2019-July/058204.html. 
diff --git a/src/mail/ngx_mail_smtp_handler.c b/src/mail/ngx_mail_smtp_handler.c
--- a/src/mail/ngx_mail_smtp_handler.c
+++ b/src/mail/ngx_mail_smtp_handler.c
@@ -15,6 +15,7 @@
 static void ngx_mail_smtp_resolve_addr_handler(ngx_resolver_ctx_t *ctx);
 static void ngx_mail_smtp_resolve_name(ngx_event_t *rev);
 static void ngx_mail_smtp_resolve_name_handler(ngx_resolver_ctx_t *ctx);
+static void ngx_mail_smtp_block_reading(ngx_event_t *rev);
 static void ngx_mail_smtp_greeting(ngx_mail_session_t *s, ngx_connection_t *c);
 static void ngx_mail_smtp_invalid_pipelining(ngx_event_t *rev);
 static ngx_int_t ngx_mail_smtp_create_buffer(ngx_mail_session_t *s,
@@ -91,6 +92,9 @@ ngx_mail_smtp_init_session(ngx_mail_sess
     if (ngx_resolve_addr(ctx) != NGX_OK) {
         ngx_mail_close_connection(c);
     }
+
+    s->resolver_ctx = ctx;
+    c->read->handler = ngx_mail_smtp_block_reading;
 }


@@ -172,6 +176,9 @@ ngx_mail_smtp_resolve_name(ngx_event_t *
     if (ngx_resolve_name(ctx) != NGX_OK) {
         ngx_mail_close_connection(c);
     }
+
+    s->resolver_ctx = ctx;
+    c->read->handler = ngx_mail_smtp_block_reading;
 }


@@ -239,6 +246,38 @@ found:

 static void
+ngx_mail_smtp_block_reading(ngx_event_t *rev)
+{
+    ngx_connection_t    *c;
+    ngx_mail_session_t  *s;
+    ngx_resolver_ctx_t  *ctx;
+
+    c = rev->data;
+    s = c->data;
+
+    ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0,
+                   "smtp reading blocked");
+
+    if (ngx_handle_read_event(rev, 0) != NGX_OK) {
+        if (s->resolver_ctx) {
+            ctx = s->resolver_ctx;
+
+            if (ctx->handler == ngx_mail_smtp_resolve_addr_handler) {
+                ngx_resolve_addr_done(ctx);
+
+            } else if (ctx->handler == ngx_mail_smtp_resolve_name_handler) {
+                ngx_resolve_name_done(ctx);
+            }
+
+            s->resolver_ctx = NULL;
+        }
+
+        ngx_mail_close_connection(c);
+    }
+}
+
+
+static void
 ngx_mail_smtp_greeting(ngx_mail_session_t *s, ngx_connection_t *c)
 {
     ngx_msec_t  timeout;
@@ -258,6 +297,10 @@ ngx_mail_smtp_greeting(ngx_mail_session_
         ngx_mail_close_connection(c);
     }

+    if (c->read->ready) {
+        ngx_post_event(c->read, &ngx_posted_events);
+    }
+
     if
(sscf->greeting_delay) {
         c->read->handler = ngx_mail_smtp_invalid_pipelining;
         return;

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Tue Jul 30 16:11:32 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 30 Jul 2019 19:11:32 +0300
Subject: Crash in mail module during SMTP setup
In-Reply-To: <20190730153243.GJ1877@mdounin.ru>
References: <20190729182642.GF1877@mdounin.ru> <3a8fd0ea-463a-4018-92f3-5a9082297e61@www.fastmail.com> <20190730153243.GJ1877@mdounin.ru>
Message-ID: <20190730161132.GK1877@mdounin.ru>

Hello!

On Tue, Jul 30, 2019 at 06:32:43PM +0300, Maxim Dounin wrote:

> Hello!
>
> On Tue, Jul 30, 2019 at 10:39:56PM +1000, Rob N ★ wrote:
>
> > On Tue, 30 Jul 2019, at 4:26 AM, Maxim Dounin wrote:
> > > Looking at "p *c" and "p *s" might be also interesting.
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x00000000005562f2 in ngx_mail_smtp_resolve_name_handler (ctx=0x7bcaa40)
> >     at src/mail/ngx_mail_smtp_handler.c:215
> > 215             ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0,
> >
> > (gdb) p *c
> > $14 = {data = 0x30, read = 0x111, write = 0xc2cfff0, fd = 263201712,
> >   recv = 0xfb023c0, send = 0x0, recv_chain = 0xb0, send_chain = 0x350cf90,
> >   listening = 0x0, sent = 55627856, log = 0x0, pool = 0x350cff0,
> >   type = -1242759166, sockaddr = 0x0, socklen = 7, addr_text = {len = 0,
> >     data = 0x2c4e8fc ""}, proxy_protocol_addr = {len = 0,
> >     data = 0x54eb79 "UH\211\345H\203\354@H\211}\330H\211u\320H\211U\310H\213E\330H\213@@H\205\300tCH\213E\330H\213P@H\213u\310H\213E\320H\211\321\272\234\064z"}, proxy_protocol_port = 53344,
> >   ssl = 0x484cb1 , udp = 0x2018d20,
> >   local_sockaddr = 0x7a414a, local_socklen = 0, buffer = 0x33312e32322e3438,
> >   queue = {prev = 0x3031312e36, next = 0x0}, number = 204275712,
> >   requests = 139872032560632, buffered = 0, log_error = 0, timedout = 0,
> >   error = 0, destroyed = 0, idle = 0, reusable = 0, close = 1, shared = 0,
> >   sendfile = 1, sndlowat = 1, tcp_nodelay = 2, tcp_nopush = 0,
> >   need_last_buf = 0}
>
> It looks like "c" points to garbage.
>
> > (gdb) p *s
> > $15 = {signature = 155588656, connection = 0x350cf80, out = {len = 35,
>
> Signature should be 0x4C49414D ("MAIL") == 1279869261, so this
> looks like garbage too.  And this explains why "c" points to
> garbage.
>
> > data = 0x20ae3e0 "220 smtp.fastmail.com ESMTP ready\r\n250 smtp.fastmail.com\r\n250-smtp.fastmail.com\r\n250-PIPELINING\r\n250-SIZE 71000000\r\n250-ENHANCEDSTATUSCODES\r\n250-8BITMIME\r\n250-AUTH PLAIN LOGIN\r\n250 AUTH=PLAIN LOGIN\r\n2"...}, buffer = 0x0, ctx = 0xfb02470, main_conf = 0x2015218,
>
> Except there are some seemingly valid fields - it looks like
> s->out is set to sscf->greeting.  So it looks like this might be
> an already closed and partially overwritten session.
>
> Given that "s->out = sscf->greeting;" is expected to happen after
> client address resolution, likely this is a duplicate handler call
> from the resolver.
>
> I think I see the problem - when using SMTP with SSL and resolver,
> read events might be enabled during address resolving, leading to
> duplicate ngx_mail_ssl_handshake_handler() calls if something
> arrives from the client, and duplicate session initialization -
> including starting another resolving.
>
> The following patch should resolve this:
>
> # HG changeset patch
> # User Maxim Dounin
> # Date 1564500680 -10800
> #      Tue Jul 30 18:31:20 2019 +0300
> # Node ID 63604bfd60a09c7c91ce62c89df468a6e54d2f1c
> # Parent  e7181cfe9212de7f67df805bb746519c059b490b
> Mail: fixed duplicate resolving.
>
> When using SMTP with SSL and resolver, read events might be enabled
> during address resolving, leading to duplicate ngx_mail_ssl_handshake_handler()
> calls if something arrives from the client, and duplicate session
> initialization - including starting another resolving.  This can lead
> to a segmentation fault if the session is closed after first resolving
> finished.  Fix is to block read events while resolving.
>
> Reported by Robert Norris,
> http://mailman.nginx.org/pipermail/nginx/2019-July/058204.html.
>
> diff --git a/src/mail/ngx_mail_smtp_handler.c b/src/mail/ngx_mail_smtp_handler.c
> --- a/src/mail/ngx_mail_smtp_handler.c
> +++ b/src/mail/ngx_mail_smtp_handler.c
> @@ -15,6 +15,7 @@
>  static void ngx_mail_smtp_resolve_addr_handler(ngx_resolver_ctx_t *ctx);
>  static void ngx_mail_smtp_resolve_name(ngx_event_t *rev);
>  static void ngx_mail_smtp_resolve_name_handler(ngx_resolver_ctx_t *ctx);
> +static void ngx_mail_smtp_block_reading(ngx_event_t *rev);
>  static void ngx_mail_smtp_greeting(ngx_mail_session_t *s, ngx_connection_t *c);
>  static void ngx_mail_smtp_invalid_pipelining(ngx_event_t *rev);
>  static ngx_int_t ngx_mail_smtp_create_buffer(ngx_mail_session_t *s,
> @@ -91,6 +92,9 @@ ngx_mail_smtp_init_session
>      if (ngx_resolve_addr(ctx) != NGX_OK) {
>          ngx_mail_close_connection(c);
>      }
> +
> +    s->resolver_ctx = ctx;
> +    c->read->handler = ngx_mail_smtp_block_reading;
>  }
>
>
> @@ -172,6 +176,9 @@ ngx_mail_smtp_resolve_name(ngx_event_t *
>      if (ngx_resolve_name(ctx) != NGX_OK) {
>          ngx_mail_close_connection(c);
>      }
> +
> +    s->resolver_ctx = ctx;
> +    c->read->handler = ngx_mail_smtp_block_reading;
>  }

Err, this should be before ngx_resolve_addr()/ngx_resolve_name().

Updated patch:

# HG changeset patch
# User Maxim Dounin
# Date 1564502955 -10800
#      Tue Jul 30 19:09:15 2019 +0300
# Node ID 9744505242b6ba59f0a8752e52c6e73050dd1cc6
# Parent  d11673c35dc184a7030c9b678d3ad89376dd3079
Mail: fixed duplicate resolving.

When using SMTP with SSL and resolver, read events might be enabled
during address resolving, leading to duplicate
ngx_mail_ssl_handshake_handler() calls if something arrives from the
client, and duplicate session initialization - including starting
another resolving.  This can lead to a segmentation fault if the
session is closed after first resolving finished.  Fix is to block
read events while resolving.
Reported by Robert Norris,
http://mailman.nginx.org/pipermail/nginx/2019-July/058204.html.

diff --git a/src/mail/ngx_mail_smtp_handler.c b/src/mail/ngx_mail_smtp_handler.c
--- a/src/mail/ngx_mail_smtp_handler.c
+++ b/src/mail/ngx_mail_smtp_handler.c
@@ -15,6 +15,7 @@
 static void ngx_mail_smtp_resolve_addr_handler(ngx_resolver_ctx_t *ctx);
 static void ngx_mail_smtp_resolve_name(ngx_event_t *rev);
 static void ngx_mail_smtp_resolve_name_handler(ngx_resolver_ctx_t *ctx);
+static void ngx_mail_smtp_block_reading(ngx_event_t *rev);
 static void ngx_mail_smtp_greeting(ngx_mail_session_t *s, ngx_connection_t *c);
 static void ngx_mail_smtp_invalid_pipelining(ngx_event_t *rev);
 static ngx_int_t ngx_mail_smtp_create_buffer(ngx_mail_session_t *s,
@@ -88,6 +89,9 @@ ngx_mail_smtp_init_session(ngx_mail_sess
     ctx->data = s;
     ctx->timeout = cscf->resolver_timeout;

+    s->resolver_ctx = ctx;
+    c->read->handler = ngx_mail_smtp_block_reading;
+
     if (ngx_resolve_addr(ctx) != NGX_OK) {
         ngx_mail_close_connection(c);
     }
@@ -169,6 +173,9 @@ ngx_mail_smtp_resolve_name(ngx_event_t *
     ctx->data = s;
     ctx->timeout = cscf->resolver_timeout;

+    s->resolver_ctx = ctx;
+    c->read->handler = ngx_mail_smtp_block_reading;
+
     if (ngx_resolve_name(ctx) != NGX_OK) {
         ngx_mail_close_connection(c);
     }
@@ -239,6 +246,38 @@ found:

 static void
+ngx_mail_smtp_block_reading(ngx_event_t *rev)
+{
+    ngx_connection_t    *c;
+    ngx_mail_session_t  *s;
+    ngx_resolver_ctx_t  *ctx;
+
+    c = rev->data;
+    s = c->data;
+
+    ngx_log_debug0(NGX_LOG_DEBUG_MAIL, c->log, 0,
+                   "smtp reading blocked");
+
+    if (ngx_handle_read_event(rev, 0) != NGX_OK) {
+        if (s->resolver_ctx) {
+            ctx = s->resolver_ctx;
+
+            if (ctx->handler == ngx_mail_smtp_resolve_addr_handler) {
+                ngx_resolve_addr_done(ctx);
+
+            } else if (ctx->handler == ngx_mail_smtp_resolve_name_handler) {
+                ngx_resolve_name_done(ctx);
+            }
+
+            s->resolver_ctx = NULL;
+        }
+
+        ngx_mail_close_connection(c);
+    }
+}
+
+
+static void
 ngx_mail_smtp_greeting(ngx_mail_session_t *s, ngx_connection_t *c)
 {
     ngx_msec_t  timeout;
@@ -258,6 +297,10 @@ ngx_mail_smtp_greeting(ngx_mail_session_
         ngx_mail_close_connection(c);
     }

+    if (c->read->ready) {
+        ngx_post_event(c->read, &ngx_posted_events);
+    }
+
     if (sscf->greeting_delay) {
         c->read->handler = ngx_mail_smtp_invalid_pipelining;
         return;

-- 
Maxim Dounin
http://mdounin.ru/

From jlmuir at imca-cat.org Tue Jul 30 21:20:41 2019
From: jlmuir at imca-cat.org (J. Lewis Muir)
Date: Tue, 30 Jul 2019 16:20:41 -0500
Subject: Implicit root location?
Message-ID: <20190730212041.szj6lk5bbh27tc3i@mink.imca.aps.anl.gov>

Hello, all!

I have a minimal nginx.conf with one server block that sets the root directory but has *no* location directives, yet for a request of "/", it serves "/index.html".  Why?  With no locations specified, I expected it to return 404 or similar for any request.

Here's the server block (entire nginx.conf at end of message):

----
server {
    listen 127.0.0.1:80;
    listen [::1]:80;
    server_name localhost "" 127.0.0.1 [::1];
    root /srv/www/localhost;
}
----

Here's the contents of /srv/www/localhost:

----
$ ls -al /srv/www/localhost
total 4
drwxr-xr-x. 2 root root  24 Jul 30 15:50 .
drwxr-xr-x. 3 root root  23 Jun 26 21:34 ..
-rw-r--r--. 1 root root 140 Jun 26 22:22 index.html
----

And here's the curl invocation:

----
$ curl 'http://localhost/'
localhost

[index.html contents; the HTML markup was scrubbed by the list archiver - the surviving page text reads "localhost"]
----

I know that the default index directive is

----
index index.html;
----

That explains how it knows to try index.html, but what makes it try the root when there are no location directives?  Is there an implicit location directive?  There is no default listed for the location directive:

https://nginx.org/en/docs/http/ngx_http_core_module.html#location

And I couldn't find this behavior stated in "How nginx processes a request:"

https://nginx.org/en/docs/http/request_processing.html

Thank you!

Lewis

---- Complete nginx.conf ----

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 127.0.0.1:80;
        listen [::1]:80;
        server_name localhost "" 127.0.0.1 [::1];
        root /srv/www/localhost;
    }
}

----

From jlmuir at imca-cat.org Tue Jul 30 22:12:01 2019
From: jlmuir at imca-cat.org (J. Lewis Muir)
Date: Tue, 30 Jul 2019 17:12:01 -0500
Subject: Why 301 permanent redirect with appended slash?
Message-ID: <20190730221201.lypgmwxq5kg7ydxo@mink.imca.aps.anl.gov>

Hello, all!

I have a minimal nginx.conf with one server block that sets the root directory and one location with a prefix string of "/foo/", and for a request of "/foo", it returns a 301 permanent redirect to "/foo/".  Why?  I expected it to return 404 or similar.  I also tried a prefix string of "/foo", but that also results in the same 301.
Here's the server block (entire nginx.conf at end of message):

----
server {
    listen 127.0.0.1:80;
    listen [::1]:80;
    server_name localhost "" 127.0.0.1 [::1];
    root /srv/www/localhost;

    location /foo/ {
    }
}
----

And here's the curl invocation:

----
$ curl -I 'http://localhost/foo'
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.2
Date: Tue, 30 Jul 2019 21:54:44 GMT
Content-Type: text/html
Content-Length: 185
Location: http://localhost/foo/
Connection: keep-alive
----

I've read in

https://nginx.org/en/docs/http/ngx_http_core_module.html#location

where it says

    If a location is defined by a prefix string that ends with the slash
    character, and requests are processed by one of proxy_pass, fastcgi_pass,
    uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass, then the special
    processing is performed.  In response to a request with URI equal to
    this string, but without the trailing slash, a permanent redirect with
    the code 301 will be returned to the requested URI with the slash
    appended.

But in my case, I don't believe the request is being processed by any of those *_pass directives.

Thank you!
Lewis

---- Complete nginx.conf ----

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 127.0.0.1:80;
        listen [::1]:80;
        server_name localhost "" 127.0.0.1 [::1];
        root /srv/www/localhost;

        location /foo/ {
        }
    }
}

----

From kuroishi at iij.ad.jp Wed Jul 31 01:13:38 2019
From: kuroishi at iij.ad.jp (Kuroishi Mitsuo)
Date: Wed, 31 Jul 2019 10:13:38 +0900 (JST)
Subject: handling cookie
Message-ID: <20190731.101338.2096696124554281829.kuroishi@iij.ad.jp>

Hi,

I'm developing a module that handles the Cookie header for Nginx.  One awkward case, though, is that a cookie sometimes contains the same key name more than once.  For example:

Cookie: a=xxx; a=yyy

Currently I use ngx_http_parse_multi_header_lines() like below.

ngx_str_t buf;
ngx_str_t key = ngx_string("a");
ngx_http_parse_multi_header_lines(&r->headers_in.cookies, &key, &buf);

But the function only seems to be able to get the first value.  Is there any way to get the second one?  Any idea is welcome.

Thanks in advance.
--
Kuroishi Mitsuo

From nginx-forum at forum.nginx.org Wed Jul 31 03:25:36 2019
From: nginx-forum at forum.nginx.org (blason)
Date: Tue, 30 Jul 2019 23:25:36 -0400
Subject: Need help on Oauth-2.0 Token with Nginx reverse proxy
Message-ID: 

Hi Folks,

I am trying to set up a reverse proxy on nginx with a server at the backend; from the HAR file I understand it uses an OAuth 2.0 token with the POST method.

However, I am unable to get this working and am seeking help here.
Assume my origin server here is

https://test.example.net:9084

For OAuth, from the HAR file I can see the request goes to

https://test.example.net:99/connect/token

Here is my config
*********************************
server {
    listen 443 ssl;
    listen 8084;
    listen 88;
    server_name test.example.net;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_certificate /etc/nginx/certs/star_xxxx.com.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    gzip on;
    gzip_proxied any;
    gzip_types text/plain text/xml text/css application/x-javascript;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    access_log /var/log/nginx/test/access.log;
    error_log /var/log/nginx/test/error.log;

    location / {
        proxy_pass https://test.example.net:9084;
        proxy_redirect https://test.example.net:99/ /;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        #proxy_redirect off;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_connect_timeout 30s;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header Referrer-Policy "no-referrer-when-downgrade";
        add_header X-Frame-Options "SAMEORIGIN" always;
    }

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285048,285048#msg-285048

From nginx-forum at forum.nginx.org Wed Jul 31 03:27:46 2019
From: nginx-forum at forum.nginx.org (blason)
Date: Tue, 30 Jul 2019 23:27:46 -0400
Subject: Need help on Oauth-2.0 Token with Nginx reverse proxy
In-Reply-To:
References:
Message-ID: <038b34678f89970bd1648e716c0b8960.NginxMailingListEnglish@forum.nginx.org>

blason Wrote:
-------------------------------------------------------
> Hi Folks,
>
> I am trying to setup a reverse proxy on nginx with server at backend
> and from HAR file I understand it uses Oauth-Token-2.0 with POST
> method.
>
> However I am unable to set the stuff and seeking help here.
>
> My original server here is assuming
>
> https://test.example.net:9084
> And for Outh from har file I can see the request goes to
> https://test.example.net:99/connect/token
>
> Here is my config
> *********************************
> server {
>     listen 443 ssl;
>     listen 8084;
>     listen 88;
>     server_name test.example.net;
>     ssl_protocols TLSv1.1 TLSv1.2;
>     ssl_certificate /etc/nginx/certs/star_xxxx.com.crt;
>     ssl_certificate_key /etc/nginx/certs/server.key;
>     ssl on;
>     ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
>     gzip on;
>     gzip_proxied any;
>     gzip_types text/plain text/xml text/css application/x-javascript;
>     gzip_vary on;
>     gzip_comp_level 6;
>     gzip_buffers 16 8k;
>     gzip_http_version 1.1;
>     gzip_min_length 256;
>     gzip_disable "MSIE [1-6]\.(?!.*SV1)";
>     ssl_prefer_server_ciphers on;
>     ssl_session_cache shared:SSL:10m;
>     access_log /var/log/nginx/test/access.log;
>     error_log /var/log/nginx/test/error.log;
>
>     location / {
>         proxy_pass https://test.example.net:9084;
>         proxy_redirect https://test.example.net:99/ /;
>         client_max_body_size 10m;
>         client_body_buffer_size 128k;
>         #proxy_redirect off;
>         proxy_send_timeout 90;
>         proxy_read_timeout 90;
>         proxy_buffer_size 128k;
>         proxy_buffers 4 256k;
>         proxy_busy_buffers_size 256k;
>         proxy_temp_file_write_size 256k;
>         proxy_connect_timeout 30s;
>         proxy_set_header Host $host;
>         proxy_set_header X-Real-IP $remote_addr;
>         proxy_set_header X-Forwarded-Proto $scheme;
>         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>         add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
>         add_header X-Content-Type-Options nosniff;
>         add_header X-XSS-Protection "1; mode=block";
>         add_header Referrer-Policy "no-referrer-when-downgrade";
>         add_header X-Frame-Options "SAMEORIGIN" always;
>     }

Here are the HAR file headers:

Response Headers
Date Tue, 30 Jul 2019 07:56:26 GMT
Strict-Transport-Security max-age=31536000; includeSubDomains
X-Content-Type-Options nosniff
X-AspNet-Version 4.0.30319
X-Powered-By ASP.NET
Connection keep-alive
Content-Length 919
X-XSS-Protection 1; mode=block
Pragma no-cache
Referrer-Policy no-referrer-when-downgrade
Server nginx
X-Frame-Options SAMEORIGIN
Access-Control-Allow-Methods *
Content-Type application/json; charset=utf-8
Access-Control-Allow-Origin *
Cache-Control no-store, no-cache, max-age=0, private
Access-Control-Allow-Headers Origin, X-Requested-With, Content-Type, Accept

Request Headers
Accept application/json, text/plain, */*
Referer https://test.example.net/
Origin https://test.example.net
User-Agent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36
Content-Type application/x-www-form-urlencoded

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285048,285049#msg-285049

From nginx-forum at forum.nginx.org Wed Jul 31 03:41:13 2019
From: nginx-forum at forum.nginx.org (blason)
Date: Tue, 30 Jul 2019 23:41:13 -0400
Subject: Need help on Oauth-2.0 Token with Nginx reverse proxy
In-Reply-To:
References:
Message-ID: <9bddaf562495e2e880f28491e1994a22.NginxMailingListEnglish@forum.nginx.org>

Here are the error messages I am seeing in access.log:

1.2.3.4 - - [31/Jul/2019:10:07:58 +0530] "POST /connect/token HTTP/1.1" 400 80 "https://test.example.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36"
1.2.3.4 - - [31/Jul/2019:10:07:58 +0530] "POST /AdsvaluAPI/api/Authentication/UpdateLoginAttemptFailed HTTP/1.1" 201 132 "https://test.example.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36"

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285048,285050#msg-285050

From nginx-forum at forum.nginx.org Wed Jul 31 06:46:26 2019
From: nginx-forum at forum.nginx.org (mightbeanyone)
Date: Wed, 31 Jul 2019 02:46:26 -0400
Subject: Nginx proxy_pass URL-encoding with "unsafe" characters is not working
Message-ID: 

Hi all,

on my Nginx (1.16.0) I noticed the following behavior regarding "unsafe" characters in the URL when using the proxy_pass directive: some of the "unsafe" characters described in RFC 1738 ("These characters are "{", "}", "|", "\", "^", "~", "[", "]", and "`"") are encoded and some are not by the time they arrive at the Tomcat backend.

Using the Nginx default configuration and a simple proxy config:

location / {
    proxy_pass http://localhost:8080;
}

I'm forwarding the request to a Tomcat server running on the same host.  I analysed the incoming traffic on the Tomcat port.

a) Request: GET /app/sample/| HTTP/1.1
   Tomcat:  GET /app/sample/| HTTP/1.1

b) Request: GET /app/sample/{ HTTP/1.1
   Tomcat:  GET /app/sample/%7B HTTP/1.1

Apache HTTP Server apparently encodes all of the above "unsafe" characters; Nginx only some:

Encoded: "{", "}", "\", "^", "`"
Not encoded: "|", "~", "[", "]"

Is there a logical explanation for this, or is it incorrect behavior?  Can URL encoding be enforced?

Regards

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285051,285051#msg-285051

From francis at daoine.org Wed Jul 31 11:54:16 2019
From: francis at daoine.org (Francis Daly)
Date: Wed, 31 Jul 2019 12:54:16 +0100
Subject: Implicit root location?
In-Reply-To: <20190730212041.szj6lk5bbh27tc3i@mink.imca.aps.anl.gov>
References: <20190730212041.szj6lk5bbh27tc3i@mink.imca.aps.anl.gov>
Message-ID: <20190731115416.xyaanln2tahrsxby@daoine.org>

On Tue, Jul 30, 2019 at 04:20:41PM -0500, J.
Lewis Muir wrote:

Hi there,

> I have a minimal nginx.conf with one server block that sets the root
> directory but has *no* location directives, yet for a request of "/", it
> serves "/index.html".  Why?  With no locations specified, I expected it
> to return 404 or similar for any request.

As you've seen: if there is not a best-match location{} for this request, then nginx uses the server{}-level config.  Without other configuration, that will eventually default to serving from the filesystem with the defined root directory.  If *that* directory or file is not there, you'll get a 404.  (And there is a compile-time root directory, if you do not set "root" explicitly.)

In the common case, you want a "location / {}" so that there will always be a best-match location{}.

Perhaps this "no-location-matched" case could be documented more clearly?

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Wed Jul 31 12:05:27 2019
From: francis at daoine.org (Francis Daly)
Date: Wed, 31 Jul 2019 13:05:27 +0100
Subject: Why 301 permanent redirect with appended slash?
In-Reply-To: <20190730221201.lypgmwxq5kg7ydxo@mink.imca.aps.anl.gov>
References: <20190730221201.lypgmwxq5kg7ydxo@mink.imca.aps.anl.gov>
Message-ID: <20190731120527.6nrhmod7u5z4ekq7@daoine.org>

On Tue, Jul 30, 2019 at 05:12:01PM -0500, J. Lewis Muir wrote:

Hi there,

> I have a minimal nginx.conf with one server block that sets the root
> directory and one location with a prefix string of "/foo/", and for a
> request of "/foo", it returns a 301 permanent redirect to "/foo/".  Why?
> I expected it to return 404 or similar.  I also tried a prefix string of
> "/foo", but that also results in the same 301.

I get a 301 if the directory "foo" exists, and a 404 if it does not.

Do you get something different?

> But in my case, I don't believe the request is being processed by any of
If you request "/directory", and "directory" exists, then the filesystem handler will issue a 301 to "/directory/", which I think is what you are seeing. As in: your request for "/foo" does not match any location{}, and so is handled at server-level, which runs the filesystem handler and returns 200 if the file "foo" exists, 301 if the directory "foo" exists, and 404 otherwise. Change your config to be location /foo/ { return 200 "location /foo/\n"; } and you will see when that location is used. If your config has location /foo {} then a similar consideration applies, except the request "/foo" is now handled in that location which, per the above configuration, uses the filesystem handler. f -- Francis Daly francis at daoine.org From mouseless at free.fr Wed Jul 31 15:29:37 2019 From: mouseless at free.fr (Vincent M.) Date: Wed, 31 Jul 2019 17:29:37 +0200 Subject: Setting Charset on Nginx PHP virtual host Message-ID: <16fc46cb-e503-9cdd-686a-80920d4ab711@free.fr> Hello, I tried to set the Charset of a virtual host like this: server { ??????? root /var/www/mywebsite.com; ... ??????? charset iso-8859-1; ??????? override_charset on; ... ??????? location ~ \.php$ { ??????????????? include snippets/fastcgi-php.conf; ??????????????? fastcgi_pass unix:/var/run/php/php7.2-fpm.sock; ??????????????? charset iso-8859-1; ??????????????? override_charset on; ??????? } } I have specified charset and overried_charset on both server and location and yet, it was still sending headers in UTF-8. I had to modify the php.ini file from /etc/php/7.2/fpm/php.ini to specify in it default_charset = "iso-8859-1". But I would like to let my php set to UTF-8 and specify on Nginx only for only one virtual host iso-8859-1. On Apache we can do: ... ??? Header set Content-Type "text/html; charset=iso-8859-1" How to do the same on Nginx? Thanks, Vincent. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From jlmuir at imca-cat.org Wed Jul 31 16:45:58 2019
From: jlmuir at imca-cat.org (J. Lewis Muir)
Date: Wed, 31 Jul 2019 11:45:58 -0500
Subject: Why 301 permanent redirect with appended slash?
In-Reply-To: <20190731120527.6nrhmod7u5z4ekq7@daoine.org>
References: <20190730221201.lypgmwxq5kg7ydxo@mink.imca.aps.anl.gov> <20190731120527.6nrhmod7u5z4ekq7@daoine.org>
Message-ID: <20190731164558.dcyoyi3n2shgwgaq@mink.imca.aps.anl.gov>

On 07/31, Francis Daly wrote:
> On Tue, Jul 30, 2019 at 05:12:01PM -0500, J. Lewis Muir wrote:
>
> Hi there,
>
> > I have a minimal nginx.conf with one server block that sets the root
> > directory and one location with a prefix string of "/foo/", and for a
> > request of "/foo", it returns a 301 permanent redirect to "/foo/".  Why?
> > I expected it to return 404 or similar.  I also tried a prefix string of
> > "/foo", but that also results in the same 301.
>
> I get a 301 if the directory "foo" exists; and a 404 if it does not.
>
> Do you get something different?

No, I get exactly what you described.  Sorry, I failed to state in my initial email that the directory "foo" exists (as well as the file "foo/index.html", which probably doesn't matter here).

> > But in my case, I don't believe the request is being processed by any of
> > those *_pass directives.
>
> If you request "/directory", and "directory" exists, then the filesystem
> handler will issue a 301 to "/directory/", which I think is what you
> are seeing.

Indeed.

> As in: your request for "/foo" does not match any location{}, and so is
> handled at server-level, which runs the filesystem handler and returns
> 200 if the file "foo" exists, 301 if the directory "foo" exists, and
> 404 otherwise.

Yes, thank you very much for the explanation!  That all makes sense.  I couldn't find this behavior documented anywhere; is it documented somewhere that I've missed?
> Change your config to be
>
>     location /foo/ { return 200 "location /foo/\n"; }
>
> and you will see when that location is used.

Nice; that's useful for testing!

> If your config has
>
>     location /foo {}
>
> then a similar consideration applies, except the request "/foo" is now
> handled in that location which, per the above configuration, uses the
> filesystem handler.

Understood.

Thanks!

Lewis

From robn at fastmailteam.com Wed Jul 31 23:32:20 2019
From: robn at fastmailteam.com (Rob N ★)
Date: Thu, 01 Aug 2019 09:32:20 +1000
Subject: Crash in mail module during SMTP setup
In-Reply-To: <20190730161132.GK1877@mdounin.ru>
References: <20190729182642.GF1877@mdounin.ru> <3a8fd0ea-463a-4018-92f3-5a9082297e61@www.fastmail.com> <20190730153243.GJ1877@mdounin.ru> <20190730161132.GK1877@mdounin.ru>
Message-ID: <676c08b5-d10f-4cf5-a756-a98494f46d5f@www.fastmail.com>

On Wed, 31 Jul 2019, at 2:11 AM, Maxim Dounin wrote:

> > I think I see the problem - when using SMTP with SSL and resolver,
> > read events might be enabled during address resolving, leading to
> > duplicate ngx_mail_ssl_handshake_handler() calls if something
> > arrives from the client, and duplicate session initialization -
> > including starting another resolving.

That neatly explains why the problem became more noticeable as the number of connections went up.  With the load a little higher, DNS resolution could conceivably take a little longer, making it more likely that the bug would be triggered.

> > The following patch should resolve this:

I've been running the second patch you posted for ~22hrs with no crashes, compared to one every 10-20mins previously.  So I think you got it!  Thank you so much!

Cheers,
Rob N.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: