From gk at leniwiec.biz Tue Nov 3 06:37:32 2020 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Tue, 3 Nov 2020 07:37:32 +0100 Subject: $request_id version per subrequest Message-ID: <949cbf49-63e6-b0c3-79c9-9086eea6d7c2@leniwiec.biz> Hello, Currently $request_id seems to be per main request, and any subrequests (SSI includes and other cases) reuse the same id. Would it be possible to add a second variable with a per-subrequest version of $request_id? Thank you in advance. -- Grzegorz Kulewski From sathish046 at gmail.com Tue Nov 3 08:51:21 2020 From: sathish046 at gmail.com (Sathishkumar Pannerselvam) Date: Tue, 3 Nov 2020 14:21:21 +0530 Subject: Nginx - Hide Proxy Server url + Header Message-ID: Hello Team, I am very new to Nginx. For the past 2 days I have been learning Nginx from the open forum, so I am not familiar with most of the terms and keywords; sorry for that. I need your support with the case below. I am using "www.ebay.com" as the proxied server. When I access nginx using my public IP from another machine on port 80, I can see the ebay welcome page. But when I visit any sub-page of ebay via a hyperlink on the ebay home page, the browser no longer shows my nginx IP; it shows the ebay hostname instead. My expectation is that the browser should always show the nginx server IP instead of the proxied server's hostname. Below is the configuration; your help is really appreciated and would be a great relief. worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name localhost; location / { proxy_pass https://www.ebay.com; index index.html index.htm; } # end location } # end server } -- Thanks, Sathish Kumar Pannerselvam -------------- next part -------------- An HTML attachment was scrubbed... 
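[Editor's note, not part of the original message: the ebay hostname appears because the upstream returns absolute URLs in redirects and in the HTML body. A common starting point is the sketch below, using the standard proxy_redirect and ngx_http_sub_module directives; it is not a guaranteed fix, since large sites also build URLs in JavaScript and set cookies that this does not cover.]

```nginx
location / {
    proxy_pass https://www.ebay.com;
    proxy_set_header Host www.ebay.com;

    # Rewrite "Location:" headers on upstream redirects back to this server.
    proxy_redirect https://www.ebay.com/ /;

    # Rewrite absolute links inside HTML bodies (ngx_http_sub_module).
    proxy_set_header Accept-Encoding "";   # request uncompressed bodies so sub_filter can edit them
    sub_filter_once off;
    sub_filter "https://www.ebay.com" "http://$host";
}
```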
URL: From rejaine at bhz.jamef.com.br Tue Nov 3 14:01:55 2020 From: rejaine at bhz.jamef.com.br (Rejaine Silveira Monteiro) Date: Tue, 3 Nov 2020 11:01:55 -0300 Subject: upstream problem Message-ID: Hi, I'm trying to set up load balancing as follows (example): upstream loadbalance { server server:9091; server server:9092; server server:9093; } location / { proxy_set_header Host $server_name; proxy_request_buffering off; proxy_buffering off; proxy_redirect off; proxy_read_timeout 30s; proxy_connect_timeout 75s; proxy_pass http://loadbalance; } When I access http://nginxserver/webservice it works perfectly if I use one of the webservices at a time (e.g. only port 9091, 9092 or 9093), but when I use the 3 together it works intermittently ("sorry, the page you are looking for is currently unavailable...") even though all the services are up (http://nginxserver:909x/webservice are running). What am I doing wrong? -- *This message may contain confidential or privileged information, and its secrecy is protected by law. If you are not the addressee or the person authorized to receive this message, you may not use, copy or disclose the information contained in it, nor take any action based on it. If you received this message in error, please notify the sender immediately by replying to this e-mail, then delete it. Thank you for your cooperation.* From nginx-forum at forum.nginx.org Thu Nov 5 22:18:38 2020 From: nginx-forum at forum.nginx.org (meniem) Date: Thu, 05 Nov 2020 17:18:38 -0500 Subject: SSL error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert Message-ID: <4ee4acb4f4583acd856ee30f9e7e67a6.NginxMailingListEnglish@forum.nginx.org> I'm trying to set up an Nginx reverse proxy which redirects to a specific host that requires a client certificate to function properly. 
But I get this error when I hit the endpoint from the browser: 2020/11/05 19:55:21 [error] 6334#6334: *111317 SSL_do_handshake() failed (SSL: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert n$ Here is the nginx configuration file: server { listen 443 ssl; listen [::]:443 ssl; ssl_certificate /home/ubuntu/appname.com.pem; ssl_certificate_key /home/ubuntu/appname.com.key; server_name appname.com; ssl_protocols TLSv1.2; set $target_server targetapp.com:443; location /api/ { rewrite ^/api(/.*) $1 break; proxy_pass https://$target_server/$uri$is_args$args; proxy_set_header X-Forwarded-Host $server_name; proxy_set_header Host appname.com; error_log /var/log/nginx/target_server.log debug; proxy_set_header Accept-Encoding text/xml; proxy_ssl_certificate /home/ubuntu/target_server_client.pem; proxy_ssl_certificate_key /home/ubuntu/target_server_key.pem; proxy_ssl_trusted_certificate /home/ubuntu/target_server_CA.pem; proxy_ssl_verify off; proxy_ssl_verify_depth 1; proxy_ssl_server_name on; } } I tried to enable/disable both `proxy_ssl_server_name` and `proxy_ssl_verify`, but neither fixed the issue. When I SSH into that server and try the curl command below, I get the expected correct response; the failure only happens when I hit the endpoint through the proxy from the browser: curl -vv --cert target_server_client.pem --key target_server_key.pem --cacert target_server_CA.pem --url https://targetapp.com/api 2>&1|less I'm not sure what the issue could be. I suspect the Nginx proxy is using the IP address instead of the host name for the upstream endpoint, and that's why SSL verification fails, because it works properly via the curl command. I also tried enabling proxy_ssl_server_name, but it didn't help. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289880,289880#msg-289880 From jonathan.morrison at gmail.com Thu Nov 5 22:47:16 2020 From: jonathan.morrison at gmail.com (Jonathan Morrison) Date: Thu, 5 Nov 2020 15:47:16 -0700 Subject: How do I call a subrequest on every request? Message-ID: Currently with auth_request I can set an expiration, but I also need to check to see if a user has been manually disabled before that expiration time. Is there a way to force auth_request to be called every time? Currently if it is successful it doesn't hit that endpoint again. -Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Fri Nov 6 00:56:01 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 6 Nov 2020 00:56:01 +0000 Subject: SSL error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert In-Reply-To: <4ee4acb4f4583acd856ee30f9e7e67a6.NginxMailingListEnglish@forum.nginx.org> References: <4ee4acb4f4583acd856ee30f9e7e67a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <446DC99D-8CEA-4152-B48A-DD6F0E4DA9F2@nginx.com> > On 5 Nov 2020, at 22:18, meniem wrote: > > I'm trying to setup Nginx reserve proxy which redirect to a specific host > that requires certificate for proper functionality. But I get this error > when I hit the endpoint from the browser: > > > 2020/11/05 19:55:21 [error] 6334#6334: *111317 SSL_do_handshake() > failed (SSL: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert > unknown ca:SSL alert n$ That means that the proxied HTTPS server could not build a full certificate chain combined from what you have specified in the proxy_ssl_certificate directive and their own CA certificate(s). Hence, it aborts the handshake by sending the "unknown_ca" alert. 
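[Editor's note: since the alert means the proxied server cannot build a chain from the presented client certificate, one common remedy (an assumption here, not something confirmed in the thread) is to make the file given to proxy_ssl_certificate a bundle: the client certificate first, followed by its intermediate CA certificates. The file names below are illustrative.]

```shell
# Illustrative names: client.crt and intermediate_ca.crt are assumptions.
# nginx sends every certificate found in the proxy_ssl_certificate file,
# so a leaf-first bundle lets the server complete the chain.
cat client.crt intermediate_ca.crt > /home/ubuntu/target_server_client.pem

# Sanity check: print the subject/issuer of every certificate in the bundle.
openssl crl2pkcs7 -nocrl -certfile /home/ubuntu/target_server_client.pem \
    | openssl pkcs7 -print_certs -noout
```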
> > Here is the nginx configuration file: > > server { > listen 443 ssl; > listen [::]:443 ssl; > > ssl_certificate /home/ubuntu/appname.com.pem; > ssl_certificate_key /home/ubuntu/appname.com.key; > > server_name appname.com; > > ssl_protocols TLSv1.2; > > set $target_server targetapp.com:443; > > location /api/ { > rewrite ^/api(/.*) $1 break; > proxy_pass https://$target_server/$uri$is_args$args; > proxy_set_header X-Forwarded-Host $server_name; > proxy_set_header Host appname.com; > error_log /var/log/nginx/target_server.log debug; > proxy_set_header Accept-Encoding text/xml; > proxy_ssl_certificate /home/ubuntu/target_server_client.pem; > proxy_ssl_certificate_key /home/ubuntu/target_server_key.pem; > proxy_ssl_trusted_certificate > /home/ubuntu/target_server_CA.pem; > proxy_ssl_verify off; > proxy_ssl_verify_depth 1; > proxy_ssl_server_name on; > } > } > > > > > I tried to enable/disable both `proxy_ssl_server_name` and > `proxy_ssl_verify`, but both didn't fix the issue. proxy_ssl_verify works in the opposite direction and would barely help. It's used to verify the upstream server certificate, disabled by default. > > When I SSH into that server and try the below curl command, I can get the > expected correct response, it's only when try to hit the endpoint from the > browser: > > > curl -vv --cert target_server_client.pem --key target_server_key.pem > --cacert target_server_CA.pem --url https://targetapp.com/api 2>&1|less > If proxy_ssl_certificate / proxy_ssl_certificate_key paths match those specified in the curl command, then the problem can be somewhere else. It could be that the behaviour depends on what the server name is sent through SNI. In your case it depends on what's set in $target_server (which also requires resolver), here SNI value will be "targetapp.com". The name is otherwise specified in the proxy_ssl_name directive. 
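[Editor's note: in configuration terms, pinning the upstream name as Sergey describes looks like the fragment below; "targetapp.com" stands in for the real upstream name.]

```nginx
# Inside the location /api/ block: send a fixed name in SNI and use it
# for verification, regardless of what the $target_server variable holds.
proxy_ssl_server_name on;
proxy_ssl_name targetapp.com;
```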
> I'm not sure what could be the issue, I suspect it would be that the Nginx > proxy is using the IP address instead of host name in the endpoint, that's > why it's giving an SSL verification issue. Because it's working by curl > command propely. I also tried to enable the proxy_ssl_server_name, but > didn't help. I'd check what's actually sent in SNI (upstream SSL server name). You may want to explore debug messages for further insights. http://nginx.org/en/docs/debugging_log.html -- Sergey Kandaurov From jordanvonkluck at gmail.com Fri Nov 6 02:06:06 2020 From: jordanvonkluck at gmail.com (Jordan von Kluck) Date: Thu, 5 Nov 2020 20:06:06 -0600 Subject: Transient, Load Related Slow response_time / upstream_response_time vs App Server Reported Times In-Reply-To: <20201030210120.GW50919@mdounin.ru> References: <20201030210120.GW50919@mdounin.ru> Message-ID: Maxim - You were pretty much entirely correct here - although it was actually the firewall (which sits logically between the reverse proxies and the upstreams) which wasn't removing state from the connection table quickly enough. Following the advice here https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/ resolved the issue for now. The suggestion to configure the upstreams to send RST on tcp queue overflows is a really helpful one. Thanks again for the help and guidance here. Jordan On Fri, Oct 30, 2020 at 4:01 PM Maxim Dounin wrote: > Hello! > > On Thu, Oct 29, 2020 at 01:02:57PM -0500, Jordan von Kluck wrote: > > > I am hoping someone on the community list can help steer me in the right > > direction for troubleshooting the following scenario: > > > > I am running a cluster of 4 virtualized nginx open source 1.16.0 servers > > with 4 vCPU cores and 4 GB of RAM each. They serve HTTP (REST API) > requests > > to a pool of about 40 different upstream clusters, which range from 2 to > 8 > > servers within each upstream definition. 
The upstream application servers > > themselves have multiple workers per server. > > > > I've recently started seeing an issue where the reported response_time > and > > typically the reported upstream_response_time the nginx access log are > > drastically different from the reported response on the application > servers > > themselves. For example, on some requests the typical average > response_time > > would be around 5ms with an upstream_response_time of 4ms. During these > > transient periods of high load (approximately 1200 -1400 rps), the > reported > > nginx response_time and upstream_response_time spike up to somewhere > around > > 1 second, while the application logs on the upstream servers are still > > reporting the same 4ms response time. > > > > The upstream definitions are very simple and look like: > > upstream rest-api-xyz { > > least_conn; > > server 10.1.1.33:8080 max_fails=3 fail_timeout=30; # > > production-rest-api-xyz01 > > server 10.1.1.34:8080 max_fails=3 fail_timeout=30; # > > production-rest-api-xyz02 > > } > > > > One avenue that I've considered but does not seem to be the case from the > > instrumentation on the app servers is that they're accepting the requests > > and queueing them in a TCP socket locally. However, running a packet > > capture on both the nginx server and the app server actually shows the > http > > request leaving nginx at the end of the time window. I have not looked at > > this down to the TCP handshake to see if the actual negotiation is taking > > an excessive amount of time. I can produce this queueing scenario > > artificially, but it does not appear to be what's happening in my > > production environment in the scenario described above. > > > > Does anyone here have any experience sorting out something like this? The > > upstream_connect_time is not part of the log currently, but if that > number > > was reporting high, I'm not entirely sure what would cause that. 
> Similarly, > > if the upstream_connect_time does not account for most of the delay, is > > there anything else I should be looking at? > > Spikes to 1 second suggests that this might be SYN retransmit > timeouts. > > Most likely, this is what happens: your backend cannot cope with > load, so listen queue on the backend overflows. Default behaviour > on most Linux boxes is to drop SYN packets on listen queue > overflows (tcp.ipv4.abort_on_overflow=0). Dropped SYN packets > eventually - after an initial RTO, initial retransmission timeout, > which is 1s on modern Linux systems - result in retransmission and > connection being finally established, but with 1s delay. > > Consider looking at network stats to see if there are actual > listen queue overflows on your backends, something like "nstat -az > TcpExtListenDrops" should be handy. You can also use "ss -nlt" to > see listen queue sizes in real time. > > In many cases such occasional queue overflows under load simply > mean that listen queue size is too low, so minor load fluctuations > might occasionally result in overflows. In this case, using a > larger listen queue might help. > > Also, if the backend servers in question are solely the backend > ones, and there are multiple load-balanced servers as your > configuration suggests, it might be a good idea to configure these > servers to send RST on listen queue overflows, that is, to set > tcp.ipv4.abort_on_overflow to 1. This way nginx will immediately > know that the backend'd listen queue is full and will be able to > try the next upstream server instead. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
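[Editor's note: for reference, the checks Maxim describes look like the following on a Linux backend host. Note the sysctl's actual spelling is net.ipv4.tcp_abort_on_overflow; this is a diagnostic sketch, and the last command requires root.]

```shell
# Cumulative counters: non-zero ListenOverflows/ListenDrops confirm the theory.
nstat -az TcpExtListenOverflows TcpExtListenDrops

# Live view: for listening sockets, Recv-Q is the current accept-queue
# length and Send-Q is its configured maximum (the backlog).
ss -nlt

# Send RST instead of silently dropping SYNs on overflow, so nginx
# fails over to the next upstream immediately.
sysctl -w net.ipv4.tcp_abort_on_overflow=1
```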
URL: From nginx-forum at forum.nginx.org Fri Nov 6 09:35:43 2020 From: nginx-forum at forum.nginx.org (meniem) Date: Fri, 06 Nov 2020 04:35:43 -0500 Subject: SSL error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert In-Reply-To: <446DC99D-8CEA-4152-B48A-DD6F0E4DA9F2@nginx.com> References: <446DC99D-8CEA-4152-B48A-DD6F0E4DA9F2@nginx.com> Message-ID: Thanks Sergey for your quick reply. I have checked the debug logs for the SNI (upstream SSL server name), and it seems to be correct.I also used the "proxy_ssl_name" directive that set to the proxied_server_name. Below is the debug output when I hit the endpoint: 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http cleanup add: 000F8E3FFB8 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http upstream resolve: "/abc" 2020/11/06 09:14:36 [debug] 30370#30370: *113140 name was resolved to 1.2.3.4 2020/11/06 09:14:36 [debug] 30370#30370: *113140 get rr peer, try: 1 2020/11/06 09:14:36 [debug] 30370#30370: *113140 stream socket 13 2020/11/06 09:14:36 [debug] 30370#30370: *113140 epoll add connection: fd:13 ev:8002005 2020/11/06 09:14:36 [debug] 30370#30370: *113140 connect to 1.2.3.4:443, fd:13 #11343 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http upstream connect: -2 2020/11/06 09:14:36 [debug] 30370#30370: *113140 posix_memalign: 003FFB8:128 @16 2020/11/06 09:14:36 [debug] 30370#30370: *113140 event timer add: 13: 60000:1604656507 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http finalize request: -4, "/abc" a:1, c:2 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http request count:2 blk:0 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http run request: "/abc" 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http upstream check client, write event:1, "/abc" 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http upstream request: "/abc" 2020/11/06 09:14:36 [debug] 30370#30370: *113140 http upstream send request handler 2020/11/06 09:14:36 [debug] 30370#30370: *113140 malloc: 
00007F8EF805E0:72 2020/11/06 09:14:36 [debug] 30370#30370: *113140 upstream SSL server name: "targetapp.com" 2020/11/06 09:14:36 [debug] 30370#30370: *113140 tcp_nodelay 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_do_handshake: -1 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_get_error: 2 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL handshake handler: 0 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_do_handshake: -1 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_get_error: 2 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL handshake handler: 1 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_do_handshake: -1 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_get_error: 2 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL handshake handler: 0 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_do_handshake: -1 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_get_error: 2 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL handshake handler: 1 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_do_handshake: -1 2020/11/06 09:14:36 [debug] 30370#30370: *113140 SSL_get_error: 2 2020/11/06 09:14:37 [debug] 30370#30370: *113140 SSL handshake handler: 0 2020/11/06 09:14:37 [debug] 30370#30370: *113140 SSL_do_handshake: 0 2020/11/06 09:14:37 [debug] 30370#30370: *113140 SSL_get_error: 1 2020/11/06 09:14:37 [error] 30370#30370: *113140 SSL_do_handshake() failed (SSL: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert$ 2020/11/06 09:14:37 [debug] 30370#30370: *113140 http next upstream, 2 2020/11/06 09:14:37 [debug] 30370#30370: *113140 free rr peer 1 4 2020/11/06 09:14:37 [debug] 30370#30370: *113140 finalize http upstream request: 502 2020/11/06 09:14:37 [debug] 30370#30370: *113140 finalize http proxy request 2020/11/06 09:14:37 [debug] 30370#30370: *113140 close http upstream connection: 13 2020/11/06 09:14:37 [debug] 30370#30370: *113140 free: 0007F8EF0E0 2020/11/06 09:14:37 [debug] 
30370#30370: *113140 free: 0007F8EFA2A0, unused: 32 2020/11/06 09:14:37 [debug] 30370#30370: *113140 event timer del: 13: 104613507 2020/11/06 09:14:37 [debug] 30370#30370: *113140 reusable connection: 0 2020/11/06 09:14:37 [debug] 30370#30370: *113140 http finalize request: 502, "/abc" a:1, c:1 2020/11/06 09:14:37 [debug] 30370#30370: *113140 http special response: 502, "/abc" 2020/11/06 09:14:37 [debug] 30370#30370: *113140 xslt filter header 2020/11/06 09:14:37 [debug] 30370#30370: *113140 HTTP/1.1 502 Bad Gateway Server: nginx/1.12.2 Server: nginx/1.12.2 Date: Fri, 06 Nov 2020 09:14:37 GMT Content-Type: text/html Content-Length: 173 Connection: keep-alive Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289880,289884#msg-289884 From francis at daoine.org Fri Nov 6 12:18:18 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 6 Nov 2020 12:18:18 +0000 Subject: How do I call a subrequest on every request? In-Reply-To: References: Message-ID: <20201106121818.GF29865@daoine.org> On Thu, Nov 05, 2020 at 03:47:16PM -0700, Jonathan Morrison wrote: Hi there, > Is there a way to force auth_request to be called every time? Currently if > it is successful it doesn't hit that endpoint again. Why do you think that auth_request is not called for every relevant request? Have you logs to show it not being called? Are you doing any caching of its response which would lead to it not being called sometimes? Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Nov 7 06:42:36 2020 From: nginx-forum at forum.nginx.org (paravz) Date: Sat, 07 Nov 2020 01:42:36 -0500 Subject: nginx modules development with vscode Message-ID: <3384a80bc1a218138bee671a2ec1ddba.NginxMailingListEnglish@forum.nginx.org> A while back i went to nginx modules development training (thanks arut and vl-homutov) and ended up setting up my dev environment in VS Code on Linux. 
Hope someone finds it useful: https://github.com/paravz/nginx-dev-examples Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289903,289903#msg-289903 From nginx-forum at forum.nginx.org Sat Nov 7 10:47:28 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Sat, 07 Nov 2020 05:47:28 -0500 Subject: Nginx Download MP3 206 Partial Content HTTP Response Message-ID: <44b9a01d18516dfa6d0c1b391857866f.NginxMailingListEnglish@forum.nginx.org> All: I am successfully able to browse an MP3 website and play the MP3 streams without issue through Nginx (1.19.2). However, when attempting to download an MP3 through Nginx, I'm receiving a 206 Partial Content HTTP Response: 192.168.0.154 - - [07/Nov/2020:10:25:22 +0000] "GET music.mp3 HTTP/1.1" 206 1982193 "http://domain.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 Edg/86.0.622.38" A Client-Side Packet Trace shows the 206 Partial Content HTTP Response with a RST from Nginx: 3390 42.119998 192.168.0.154 192.168.0.2 TCP 54 61978 ? 80 [ACK] Seq=526 Ack=2293125 Win=1629440 Len=0 3391 42.120434 192.168.0.2 192.168.0.154 HTTP 347 HTTP/1.1 206 Partial Content (audio/mpeg) 3392 42.120449 192.168.0.154 192.168.0.2 TCP 54 61978 ? 80 [ACK] Seq=526 Ack=2293418 Win=1629184 Len=0 4375 69.116574 192.168.0.154 192.168.0.2 TCP 54 [TCP Window Update] 61978 ? 80 [ACK] Seq=526 Ack=2293418 Win=4219392 Len=0 4984 87.122995 192.168.0.154 192.168.0.2 TCP 55 [TCP Keep-Alive] 61978 ? 80 [ACK] Seq=525 Ack=2293418 Win=4219392 Len=1 4985 87.123324 192.168.0.2 192.168.0.154 TCP 66 [TCP Keep-Alive ACK] 80 ? 61978 [ACK] Seq=2293418 Ack=526 Win=6912 Len=0 SLE=525 SRE=526 5761 117.117822 192.168.0.2 192.168.0.154 TCP 60 80 ? 61978 [FIN, ACK] Seq=2293418 Ack=526 Win=6912 Len=0 5762 117.117911 192.168.0.154 192.168.0.2 TCP 54 61978 ? 80 [ACK] Seq=526 Ack=2293419 Win=4219392 Len=0 7291 162.122574 192.168.0.154 192.168.0.2 TCP 55 [TCP Keep-Alive] 61978 ? 
80 [ACK] Seq=525 Ack=2293419 Win=4219392 Len=1 7292 162.123048 192.168.0.2 192.168.0.154 TCP 60 [TCP Keep-Alive ACK] 80 ? 61978 [ACK] Seq=2293419 Ack=526 Win=6912 Len=0 7591 173.888730 192.168.0.154 192.168.0.2 TCP 54 61978 ? 80 [FIN, ACK] Seq=526 Ack=2293419 Win=4219392 Len=0 7594 173.889906 192.168.0.2 192.168.0.154 TCP 60 80 ? 61978 [RST] Seq=2293419 Win=0 Len=0 I've tried several different browsers (i.e., Chrome, Edge, etc) with the same issue. The download is successful when browsing directly and not using Nginx. Any idea why the MP3 download is failing using Nginx? Much Appreciated. Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289905,289905#msg-289905 From nginx-forum at forum.nginx.org Sun Nov 8 08:42:05 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Sun, 08 Nov 2020 03:42:05 -0500 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: <44b9a01d18516dfa6d0c1b391857866f.NginxMailingListEnglish@forum.nginx.org> References: <44b9a01d18516dfa6d0c1b391857866f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2f029ec3d41fcea3d32ad2d1768861c2.NginxMailingListEnglish@forum.nginx.org> All: I discovered that the failing request is making a subsequent, asynchronous AJAX call to port 443 of Nginx where the connection is failing with "Certificate Unknown" against my self-signed certificate. 
GET http://example.com/ajax/inc/1488440 HTTP/1.1 Host: example.com Connection: keep-alive Accept: application/json, text/javascript, */*; q=0.01 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 X-Requested-With: XMLHttpRequest Referer: http://example.com/mp3/search?keywords=california+gurls Accept-Encoding: gzip, deflate Accept-Language: en-US,en;q=0.9 Cookie: PHPSESSID=k6o4mq4np28bdr6n2g2pbgq190; zvAuth=1; zvLang=0; ZvcurrentVolume=100; nua=Mozilla%2F5.0%20(Windows%20NT%2010.0%3B%20Win64%3B%20x64)%20AppleWebKit%2F537.36%20(KHTML%2C%20like%20Gecko)%20Chrome%2F86.0.4240.75%20Safari%2F537.36; asus_token=81G3BJcZjrt06SpsxUrh; z1_n=5 HTTP/1.1 200 OK Server: nginx/1.19.2 Date: Sun, 08 Nov 2020 07:38:33 GMT Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: keep-alive Set-Cookie: __cfduid=d3d5b5d9e0cbf7321ca040f0b126eb6631604821113; expires=Tue, 08-Dec-20 07:38:33 GMT; path=/; domain=.example.com; HttpOnly; SameSite=Lax; Secure Vary: Accept-Encoding CF-Cache-Status: DYNAMIC cf-request-id: 064863f2fb00000b786e0c5000000001 Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=uoLAfVO2XqMqj6FJI%2BwyHFz52QFckDptxRfYjClxWfJvGUxnyAlsIR5Im37T5tC2j%2Big2WIgIfXajj0EWpPBMCxdTtC5ZA%3D%3D"}],"group":"cf-nel","max_age":604800} NEL: {"report_to":"cf-nel","max_age":604800} CF-RAY: 5eeda297ffb90b78-AMS Content-Encoding: gzip CONNECT example.com:443 HTTP/1.1 Host: example.com:443 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 334 7.593054 192.168.0.154 192.168.0.2 TLSv1.2 61 Alert (Level: Fatal, Description: Certificate Unknown) I'd like to force the AJAX connection over port 80 of Nginx. 
Is it possible to evaluate the Host header for :443 and if it exists change it to :80? If so, what's the most efficient way to accomplish this task? BTW... I've already implemented the proxy_redirect https:// http://; directive, which works well for the URL but not for the Host header. Thank you for your assistance. Respectfully, Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289905,289909#msg-289909 From francis at daoine.org Sun Nov 8 10:45:05 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 8 Nov 2020 10:45:05 +0000 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: <2f029ec3d41fcea3d32ad2d1768861c2.NginxMailingListEnglish@forum.nginx.org> References: <44b9a01d18516dfa6d0c1b391857866f.NginxMailingListEnglish@forum.nginx.org> <2f029ec3d41fcea3d32ad2d1768861c2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201108104505.GG29865@daoine.org> On Sun, Nov 08, 2020 at 03:42:05AM -0500, garycnew at yahoo.com wrote: Hi there, > I discovered that the failing request is making a subsequent, asynchronous > AJAX call to port 443 of Nginx where the connection is failing with > "Certificate Unknown" against my self-signed certificate. I'm not quite sure what your architecture is -- what part involves nginx, and what part involves other things. Can you show why the ajax request is going to https? As in -- what part of the previous response invites it to request https instead of the http that you want? Probably changing *that* part, will make the whole thing work better. (Or: if you are running nginx with https that remote clients should connect to, can you arrange that the certificate used is acceptable to all clients?) > GET http://example.com/ajax/inc/1488440 HTTP/1.1 That's a http request... > HTTP/1.1 200 OK ...with a normal response... > CONNECT example.com:443 HTTP/1.1 ...and then that happened. That's a http client talking to a http proxy asking to talk through to a remote https server (probably). 
Where did that come from? > I'd like to force the AJAX connection over port 80 of Nginx. Is it possible > to evaluate the Host header for :443 and if it exists change it to :80? If > so, what's the most efficient way to accomplish this task? If I understand things correctly -- by the time nginx sees this Host: header, the request has been made; so it is too late to change what the client does. You probably need to examine the previous response, to see what can be changed there. I have no specific suggestions right now; hopefully this description gives you a hint as to what you might be able to do. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Nov 8 11:49:21 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 8 Nov 2020 11:49:21 +0000 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: <20201108104505.GG29865@daoine.org> References: <44b9a01d18516dfa6d0c1b391857866f.NginxMailingListEnglish@forum.nginx.org> <2f029ec3d41fcea3d32ad2d1768861c2.NginxMailingListEnglish@forum.nginx.org> <20201108104505.GG29865@daoine.org> Message-ID: <20201108114921.GH29865@daoine.org> On Sun, Nov 08, 2020 at 10:45:05AM +0000, Francis Daly wrote: > On Sun, Nov 08, 2020 at 03:42:05AM -0500, garycnew at yahoo.com wrote: Actually... > > GET http://example.com/ajax/inc/1488440 HTTP/1.1 > > That's a http request... That's a http request to a http proxy server, not to a http server. Are you trying to use nginx as a proxy server? Because that is going to have some problems. As far as the client is concerned, nginx is a http server, not a http proxy server. So, I guess that an architecture description might well help decide the best way to do the thing that you want to do. 
Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Nov 8 13:07:07 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Sun, 08 Nov 2020 08:07:07 -0500 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: <20201108114921.GH29865@daoine.org> References: <20201108114921.GH29865@daoine.org> Message-ID: Fancis, Nginx is configured as a reverse proxy server in this architecture. It is successfully working except with this AJAX call. Client:54454 ==> NginxMaster:80 | NginxWorker:52312 ==> UpstreamServer:443 UpstreamServer:443 ==> NginxWorker:52312 | NginxMaster:80 ==> Client:54454 The requests and responses are as originally provided within the original post of this thread (in order). My guess is that the upstream server is responding with the Host header as example.com:443 and needs to be rewritten prior to the client making the subsequent, asynchronous AJAX request to the upstream server. Any idea how to evaluate and modify the Host header prior to the subsequent, asynchronous AJAX request to the upstream server? Thank you for your time and assistance. Respectfully, Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289905,289915#msg-289915 From nginx-forum at forum.nginx.org Sun Nov 8 14:51:36 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Sun, 08 Nov 2020 09:51:36 -0500 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: References: <20201108114921.GH29865@daoine.org> Message-ID: <06a93aea819b197523bb7a60c7d98fbc.NginxMailingListEnglish@forum.nginx.org> All: I've made some more progress in that when I copy/paste the AJAX URL into my browser's address-bar, the MP3 download request is successfully made and the MP3 is downloaded (opposed to the previous examples when I clicked on the MP3 download link). Interestingly, the copy/paste method yields an initial 302 response opposed to a 200 response with the click link method. 
Copy/Paste Method: GET http://example.com/ajax/inc/283544 HTTP/1.1 Host: example.com Connection: keep-alive Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9 Accept-Encoding: gzip, deflate Accept-Language: en-US,en;q=0.9 Cookie: PHPSESSID=k6o4mq4np28bdr6n2g2pbgq190; zvAuth=1; zvLang=0; ZvcurrentVolume=100; nua=Mozilla%2F5.0%20(Windows%20NT%2010.0%3B%20Win64%3B%20x64)%20AppleWebKit%2F537.36%20(KHTML%2C%20like%20Gecko)%20Chrome%2F86.0.4240.75%20Safari%2F537.36; asus_token=81G3BJcZjrt06SpsxUrh; _zvBoobs_=%2F%2F_-%29 HTTP/1.1 302 Found Server: nginx/1.19.2 Date: Sun, 08 Nov 2020 14:27:53 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Set-Cookie: __cfduid=d2f42248bc953328459ea277d77ee62671604845673; expires=Tue, 08-Dec-20 14:27:53 GMT; path=/; domain=.example.com; HttpOnly; SameSite=Lax; Secure Location: http://example.com/download/283544 CF-Cache-Status: DYNAMIC cf-request-id: 0649dab4e200000b6bce8a6000000001 Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=SRYrhqPuwCUwe1MPbJ4RGW%2F8yqt4t8UD19zHwUrcNqX94%2FD8VZ6EW1vl2dogVCCaFkeDh3%2BCwogueN4i3K6Gc5SMenGqRg%3D%3D"}],"group":"cf-nel","max_age":604800} NEL: {"report_to":"cf-nel","max_age":604800} CF-RAY: 5eeffa349c0d0b6b-AMS Content-Length: 0 GET http://example.com/download/283544 HTTP/1.1 Host: example.com Connection: keep-alive Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9 
Accept-Encoding: gzip, deflate Accept-Language: en-US,en;q=0.9 Cookie: PHPSESSID=k6o4mq4np28bdr6n2g2pbgq190; zvAuth=1; zvLang=0; ZvcurrentVolume=100; nua=Mozilla%2F5.0%20(Windows%20NT%2010.0%3B%20Win64%3B%20x64)%20AppleWebKit%2F537.36%20(KHTML%2C%20like%20Gecko)%20Chrome%2F86.0.4240.75%20Safari%2F537.36; asus_token=81G3BJcZjrt06SpsxUrh; _zvBoobs_=%2F%2F_-%29 HTTP/1.1 307 Temporary Redirect Server: nginx/1.19.2 Date: Sun, 08 Nov 2020 14:27:54 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Set-Cookie: __cfduid=db70304f5ae41939e5d51647d5b3dcc261604845674; expires=Tue, 08-Dec-20 14:27:54 GMT; path=/; domain=.example.com; HttpOnly; SameSite=Lax; Secure X-Robots-Tag: noindex, nofollow X-Frame-Options: SAMEORIGIN X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block Set-Cookie: _zvBoobs_=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; Max-Age=0; path=/ Set-Cookie: _zvBoobs_=%2F%2F_-%29; expires=Mon, 09-Nov-2020 02:27:54 GMT; Max-Age=43200; path=/; domain=.example.com Location: http://st1.example.com/music/9/68/katy_perry_feat._snoop_dogg_-_california_gurls_(mstrkrft_remix_radio)_(zvukoff.ru).mp3?download=force CF-Cache-Status: DYNAMIC cf-request-id: 0649dab7760000faa08984d000000001 Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=0RmfU9QT43xEgj9rH4LUrpCFAVXYh6gMubnObVJWjNxnSn4CJl5zkJVoeoD6uoEOkvVzgUOlwy%2F7KbFbat6NF8Qj0b64Ig%3D%3D"}],"group":"cf-nel","max_age":604800} NEL: {"report_to":"cf-nel","max_age":604800} CF-RAY: 5eeffa38785bfaa0-AMS Content-Length: 0 GET http://st1.example.com/music/9/68/katy_perry_feat._snoop_dogg_-_california_gurls_(mstrkrft_remix_radio)_(zvukoff.ru).mp3?download=force HTTP/1.1 Host: st1.example.com Connection: keep-alive Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9 Accept-Encoding: gzip, deflate Accept-Language: en-US,en;q=0.9 Cookie: zvAuth=1; zvLang=0; _zvBoobs_=%2F%2F_-%29 HTTP/1.1 200 OK Server: nginx/1.19.2 Date: Sun, 08 Nov 2020 14:28:00 GMT Content-Type: application/force-download Content-Length: 6634727 Connection: keep-alive Last-Modified: Thu, 26 Jul 2012 13:19:08 GMT ETag: "501143cc-653ce7" Content-Disposition: attachment; filename=katy_perry_feat._snoop_dogg_-_california_gurls_(mstrkrft_remix_radio)_(zvukoff.ru).mp3 Accept-Ranges: bytes Nginx Access Logs (Click Link Method - Fails): 192.168.0.154 - - [08/Nov/2020:14:27:00 +0000] "GET /ajax/inc/283544 HTTP/1.1" 200 94 "http://example.com/mp3/search?keywords=california+gurls" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 Edg/86.0.622.38" Nginx Access Logs (Copy/Paste Method - Success): 192.168.0.154 - - [08/Nov/2020:14:27:53 +0000] "GET /ajax/inc/283544 HTTP/1.1" 302 5 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36" 192.168.0.154 - - [08/Nov/2020:14:27:54 +0000] "GET /download/283544 HTTP/1.1" 307 5 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36" 192.168.0.154 - - [08/Nov/2020:14:28:31 +0000] "GET /music/9/68/katy_perry_feat._snoop_dogg_-_california_gurls_(mstrkrft_remix_radio)_(zvukoff.ru).mp3?download=force HTTP/1.1" 200 6634727 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36" I could use an extra set of eyes to review the requests/responses and logs to confirm whether I'm missing something. The Click Link and the Copy/Paste Methods are both going through the Nginx Reverse Proxy. Much Appreciated. 
Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289905,289916#msg-289916 From ryanbgould at gmail.com Sun Nov 8 15:42:23 2020 From: ryanbgould at gmail.com (Ryan Gould) Date: Sun, 08 Nov 2020 07:42:23 -0800 Subject: HTTP/3 and php POST Message-ID: hello team, i have found that https://hg.nginx.org/nginx-quic (current as of 06 Nov 2020) is having some trouble properly POSTing back to PayPal using php 7.3.24 on a Debian Buster box. things work as expected using current mainline nginx or current quiche. i have verified that PageSpeed is not causing the problem. the PayPal IPN php script is one of those things that has been working so long it has cobwebs on it. it gets a POST, it adds a key / value to the payload and POSTs it back. it is instantaneous and thoughtless. the PHP script is getting a 200 return code from the return POST and everything seems great on my side. but PayPal is complaining about the return POST and they can't tell me why. i have enabled logging on the script and in PHP and don't see anything out of the ordinary. the only thing i see is when i use Postman 7.34.0 i am getting a "Parse Error: Invalid character in chunk size" error, which i am having no luck tracking down information on. i would like to generate some --debug logs for you but won't because of the sensitive PayPal customer payment information. any thoughts or suggestions?
nginx -V nginx version: nginx/1.19.4 built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) built with OpenSSL 1.1.1 (compatible; BoringSSL) (running with BoringSSL) TLS SNI support enabled configure arguments: --with-cc-opt=-I../boringssl/include --with-ld-opt='-L../boringssl/build/ssl -L../boringssl/build/crypto' --with-http_v3_module --with-http_quic_module --with-stream_quic_module --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=www-data --group=www-data --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-file-aio --add-module=../../headers-more-nginx-module --add-module=../../pagespeed --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-openssl=../boringssl as always, thank you for being awesome. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Nov 8 19:27:49 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 8 Nov 2020 19:27:49 +0000 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: References: <20201108114921.GH29865@daoine.org> Message-ID: <20201108192749.GI29865@daoine.org> On Sun, Nov 08, 2020 at 08:07:07AM -0500, garycnew at yahoo.com wrote: Hi there, > Nginx is configured as a reverse proxy server in this architecture. It is > successfully working except with this AJAX call. 
In general, nginx does not care whether a request to it comes from AJAX, or clicking a link, or by any other means. But your description seems to indicate that http requests to nginx work, while https requests to nginx are failed by the client, because the client does not like the certificate that nginx presents. (There's not a lot nginx can do to fix that, other than to arrange that the client does like the certificate that nginx presents.) > My guess is that the upstream server is responding with the Host header as > example.com:443 and needs to be rewritten prior to the client making the > subsequent, asynchronous AJAX request to the upstream server. I'm not sure what that means, I'm afraid. Responses don't have a Host: header, in general. And the specific responses that you have shown in this thread do not have Host: headers. > Any idea how to evaluate and modify the Host header prior to the subsequent, > asynchronous AJAX request to the upstream server? Is there any chance that the response body content from the upstream includes links to https urls? In general, nginx will not modify the response body content from an upstream -- it is much easier to reverse-proxy a web service if any included links do not start with "http://" or "https://"; instead starting with "/" or (better) anything else, including "../". > Thank you for your time and assistance.
> > Respectfully, > > > Gary > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289905,289915#msg-289915 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Francis Daly francis at daoine.org From francis at daoine.org Sun Nov 8 19:54:38 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 8 Nov 2020 19:54:38 +0000 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: <06a93aea819b197523bb7a60c7d98fbc.NginxMailingListEnglish@forum.nginx.org> References: <20201108114921.GH29865@daoine.org> <06a93aea819b197523bb7a60c7d98fbc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201108195438.GJ29865@daoine.org> On Sun, Nov 08, 2020 at 09:51:36AM -0500, garycnew at yahoo.com wrote: Hi there, > Interestingly, the copy/paste method yields an initial 302 response opposed > to a 200 response with the click link method. You have two requests to the same url, getting different responses. Does your nginx config include anything that might show why "the same" request gets a different response? If not -- does your upstream do anything to explain that? Perhaps something wants certain http request headers to be present, or to be absent? > GET http://example.com/ajax/inc/283544 HTTP/1.1 > Host: example.com > Connection: keep-alive > Upgrade-Insecure-Requests: 1 That request header invites the server to redirect to https, if available. Maybe that's where the https part comes in. > HTTP/1.1 302 Found > Server: nginx/1.19.2 ... > Set-Cookie: __cfduid=d2f42248bc953328459ea277d77ee62671604845673; > expires=Tue, 08-Dec-20 14:27:53 GMT; path=/; domain=.example.com; HttpOnly; > SameSite=Lax; Secure > Location: http://example.com/download/283544 But the response that the client gets is to http. I think you said that you had configured your nginx to proxy_redirect from https to http? 
The "Set-Cookie" line there (from the upstream) has the "Secure" flag, so the client is unlikely to send it with a subsequent http request (only https). > GET http://example.com/download/283544 HTTP/1.1 > Host: example.com > Connection: keep-alive > Upgrade-Insecure-Requests: 1 This request also invites the server to redirect to https, if appropriate. > HTTP/1.1 307 Temporary Redirect > Server: nginx/1.19.2 ... > Location: > http://st1.example.com/music/9/68/katy_perry_feat._snoop_dogg_-_california_gurls_(mstrkrft_remix_radio)_(zvukoff.ru).mp3?download=force ...and the response that the client gets is to http on a different server. So this next request goes to that different server (same nginx instance, perhaps?) > GET > http://st1.example.com/music/9/68/katy_perry_feat._snoop_dogg_-_california_gurls_(mstrkrft_remix_radio)_(zvukoff.ru).mp3?download=force > HTTP/1.1 > Host: st1.example.com > HTTP/1.1 200 OK > Server: nginx/1.19.2 > Date: Sun, 08 Nov 2020 14:28:00 GMT > Content-Type: application/force-download > Content-Length: 6634727 and presumably gets the expected response of "the desired content". > Nginx Access Logs (Click Link Method - Fails): > > 192.168.0.154 - - [08/Nov/2020:14:27:00 +0000] "GET /ajax/inc/283544 > HTTP/1.1" 200 94 "http://example.com/mp3/search?keywords=california+gurls" > "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like > Gecko) Chrome/86.0.4240.75 Safari/537.36 Edg/86.0.622.38" > > Nginx Access Logs (Copy/Paste Method - Success): > > 192.168.0.154 - - [08/Nov/2020:14:27:53 +0000] "GET /ajax/inc/283544 > HTTP/1.1" 302 5 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) > AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36" What is your nginx config for the request /ajax/inc/283544? Does it explain why the responses differ? Does it care about the http Referer: header, or does it care about the http User-Agent: header? (They seem to differ in these two requests.) 
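One way to pin down which request headers matter is to log them explicitly; a hypothetical debugging snippet (the format name and log path are made up for illustration):

```nginx
# log_format belongs in the http{} block; access_log can go in server{} or location{}.
# Records the headers that differ between the two requests, for side-by-side comparison.
log_format hdr_debug '$remote_addr "$request" $status '
                     'referer="$http_referer" ua="$http_user_agent" '
                     'upgrade_insecure="$http_upgrade_insecure_requests"';

access_log /var/log/nginx/hdr_debug.log hdr_debug;
```

Comparing the two resulting log lines then shows exactly which header values accompanied the 200 and which accompanied the 302.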
> I could use an extra set of eyes to review the requests/responses and logs > to confirm whether I'm missing something. It sounds like you want the request for /ajax/inc/283544 to get a 302 response. If nginx handles the two requests for it the same way, then there must be something different in what upstream is doing. Can you see that, or change it? Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Nov 9 08:50:38 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Mon, 09 Nov 2020 03:50:38 -0500 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: <20201108195438.GJ29865@daoine.org> References: <20201108195438.GJ29865@daoine.org> Message-ID: Francis, I found the following in the body of the Click-Link 200 HTTP Response: {"url":"https:\/\/example.com\/download\/2770587","isSuccess":1} To me, it appears to be a Javascript redirect that Nginx is unaware of and in which the https protocol doesn't get rewritten. Is it possible for Nginx to evaluate the body of a response and rewrite a given string (i.e., https => http)? I think I might be able to use GreaseMonkey or the like to validate my theory. Thanks, again, for your time and interest. Respectfully, Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289905,289921#msg-289921 From nginx-forum at forum.nginx.org Mon Nov 9 09:45:31 2020 From: nginx-forum at forum.nginx.org (garycnew@yahoo.com) Date: Mon, 09 Nov 2020 04:45:31 -0500 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: References: <20201108195438.GJ29865@daoine.org> Message-ID: <973256681ba7508b21047cbdce540a43.NginxMailingListEnglish@forum.nginx.org> Francis, That was it! The subsequent, asynchronous AJAX call was responding with a Javascript redirect that was remedied using Nginx's sub_filter directive.
location / {
    resolver 103.86.99.100;
    proxy_bind $server_addr;
    proxy_pass https://$host$request_uri;
    proxy_redirect https:// http://;
    proxy_set_header Accept-Encoding ""; # Needed by sub_filter to disable gzip compression
    sub_filter_types text/javascript text/css text/xml;
    sub_filter 'https:' 'http:';
    sub_filter_once off;
}

The combination of disabling gzip compression, adding sub_filter_types text/javascript, and creating sub_filter 'https:' 'http:' rewrote the protocol of the Javascript redirect. {"url":"http:\/\/z1.fm\/download\/3298838","isSuccess":1} Then, I was able to download MP3's by simply clicking the AJAX link. Hope this helps someone in the future. Respectfully, Gary Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289905,289922#msg-289922 From sathya.prasad at automationanywhere.com Mon Nov 9 12:36:12 2020 From: sathya.prasad at automationanywhere.com (Sathya Prasad H R) Date: Mon, 9 Nov 2020 12:36:12 +0000 Subject: Using Nginx as a proxy server Message-ID: Hello Team, We have been configuring Nginx as a proxy server for one of our module servers. The flow works like below [cid:image001.png at 01D6B6C2.4E20A040] * Request comes from the Module1 server to nginx * Nginx receives it and acts as a proxy server to forward the received data from the Module1 server to the Module2 server We are facing an issue while writing the proxy server configuration. The URL is an SSL-enabled URI with an open SSL certificate, which doesn't have a private key. When we tested the scenario shown in the image with the proxy server configuration listed in the official doc, we ran into an issue. The logs say that the upstream connection was being made on port 80, but it is supposed to connect on port 443. I need a bit of information on why it has been connecting to port 80 and what configuration I am missing. Please help us to unblock it. -- Regards, Sathya -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 2171 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 2039 bytes Desc: nginx.conf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx2.conf Type: application/octet-stream Size: 1814 bytes Desc: nginx2.conf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: access.log Type: application/octet-stream Size: 786 bytes Desc: access.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error.log Type: application/octet-stream Size: 5272 bytes Desc: error.log URL: From gk at leniwiec.biz Mon Nov 9 14:47:13 2020 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Mon, 9 Nov 2020 15:47:13 +0100 Subject: Matching of special characters in location Message-ID: Hello, Is there any (sane) way to match things like: %e2%80%8b in URL in location? Thank you in advance. -- Grzegorz Kulewski From mdounin at mdounin.ru Mon Nov 9 19:12:46 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Nov 2020 22:12:46 +0300 Subject: SSL error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert In-Reply-To: References: <446DC99D-8CEA-4152-B48A-DD6F0E4DA9F2@nginx.com> Message-ID: <20201109191246.GG1147@mdounin.ru> Hello! On Fri, Nov 06, 2020 at 04:35:43AM -0500, meniem wrote: > Thanks Sergey for your quick reply.
> > I have checked the debug logs for the SNI (upstream SSL server name), and it > seems to be correct. I also used the "proxy_ssl_name" directive that is set to > the proxied_server_name. Below is the debug output when I hit the endpoint: [...] > 2020/11/06 09:14:36 [debug] 30370#30370: *113140 connect to 1.2.3.4:443, fd:13 #11343 [...] > 2020/11/06 09:14:36 [debug] 30370#30370: *113140 upstream SSL server name: "targetapp.com" [...] > 2020/11/06 09:14:37 [error] 30370#30370: *113140 SSL_do_handshake() failed (SSL: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert$ The error is clear enough: the upstream server sent the "unknown CA" alert. It is defined as follows (https://tools.ietf.org/html/rfc5246#section-7.2.2): unknown_ca A valid certificate chain or partial chain was received, but the certificate was not accepted because the CA certificate could not be located or couldn't be matched with a known, trusted CA. This message is always fatal. That is, the upstream server got the certificate, but it does not know the Certificate Authority used to sign the certificate. As long as the IP address of the server and the SNI name are correct, and the same certificate works with curl, this might happen due to lack of some intermediate certificates. These certificates are added by curl automatically (as long as they are present in the list of CA certificates provided to curl). In contrast, nginx does not add any certificates automatically. If intermediate certs are indeed required by your upstream server, you can provide them by placing them into the proxy_ssl_certificate file following the certificate itself, much like additional intermediate certificates for the server certificate in the ssl_certificate file. Alternatively, consider reconfiguring your upstream server to not require intermediate certs from the client.
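As a sketch in configuration terms, such a setup might look like the following (the paths are placeholders; targetapp.com is the SNI name from the debug log above):

```nginx
location / {
    proxy_pass https://targetapp.com;

    # Send SNI matching the upstream server certificate.
    proxy_ssl_server_name on;
    proxy_ssl_name targetapp.com;

    # client_chain.pem: the client certificate first, followed by each
    # intermediate CA certificate, concatenated in order.
    proxy_ssl_certificate     /etc/nginx/ssl/client_chain.pem;
    proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
}
```

The point here is only the layout of the proxy_ssl_certificate file; the surrounding directives mirror a typical proxied location.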
Providing all required intermediate certificates on the server rather than asking clients to send them along with their client certificates is believed to be a better practice. -- Maxim Dounin http://mdounin.ru/ From osa at freebsd.org.ru Mon Nov 9 20:10:44 2020 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Mon, 9 Nov 2020 23:10:44 +0300 Subject: Matching of special characters in location In-Reply-To: References: Message-ID: <20201109201044.GA31693@FreeBSD.org.ru> On Mon, Nov 09, 2020 at 03:47:13PM +0100, Grzegorz Kulewski wrote: > Hello, > > Is there any (sane) way to match things like: %e2%80%8b in URL in location? > Thank you in advance. Hi Grzegorz, here is the code snippet (not tested):

location ~ ^/\xE2\x80\x8B {
    return 200 "%e2%80%8b matched\n";
}

-- Sergey From nginx-forum at forum.nginx.org Mon Nov 9 20:48:08 2020 From: nginx-forum at forum.nginx.org (meniem) Date: Mon, 09 Nov 2020 15:48:08 -0500 Subject: SSL error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert In-Reply-To: <20201109191246.GG1147@mdounin.ru> References: <20201109191246.GG1147@mdounin.ru> Message-ID: Thanks Maxim for your feedback. Yeah, I believe it's an issue with the intermediate certificates. So, can you please let me know how I can obtain these intermediate certificates so that I can append them to the certificate itself. I also can't change this on the upstream server, as we are getting those from one of our providers. Currently I have the Certificate, Key and CA files only. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289880,289929#msg-289929 From teward at thomas-ward.net Mon Nov 9 21:08:17 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Mon, 9 Nov 2020 16:08:17 -0500 Subject: SSL error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert In-Reply-To: References: <20201109191246.GG1147@mdounin.ru> Message-ID: On 11/9/20 3:48 PM, meniem wrote: > Thanks Maxim for your feedback.
> > Yeah, I believe it's an issue with the intermediate certificates. So, can > you please let me know how I can obtain these intermediate certificates so > that I can append them to the certificate itself. You will need to reach out to the certificate issuer/provider to get the proper intermediate certificates. There is no way for us on the nginx mailing list or forums to provide you intermediate certificates. > I also can't change this on the upstream server, as we are getting those > from one of our providers. > > Currently I have the Certificate, Key and CA files only. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289880,289929#msg-289929 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 9 21:19:15 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Nov 2020 00:19:15 +0300 Subject: SSL error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert In-Reply-To: References: <20201109191246.GG1147@mdounin.ru> Message-ID: <20201109211915.GI1147@mdounin.ru> Hello! On Mon, Nov 09, 2020 at 03:48:08PM -0500, meniem wrote: > Thanks Maxim for your feedback. > > Yeah, I believe it's an issue with the intermediate certificates. So, can > you please let me know how I can obtain these intermediate certificates so > that I can append them to the certificate itself. > > I also can't change this on the upstream server, as we are getting those > from one of our providers. > > Currently I have the Certificate, Key and CA files only. Likely the CA file contains the needed intermediate certificate. A quick-and-dirty test would be to simply add all the CA file contents to the proxy_ssl_certificate file, much like when configuring certificate chains (http://nginx.org/en/docs/http/configuring_https_servers.html#chains).
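As a concrete sketch of that quick-and-dirty test in shell (all file names here are stand-ins; substitute the real client certificate and the CA bundle from the provider):

```shell
# Placeholder PEM files standing in for the real client certificate
# and the CA bundle received from the provider.
printf -- '-----BEGIN CERTIFICATE-----\n(client certificate)\n-----END CERTIFICATE-----\n' > client.crt
printf -- '-----BEGIN CERTIFICATE-----\n(intermediate CA)\n-----END CERTIFICATE-----\n' > ca-bundle.crt

# Build the file that proxy_ssl_certificate will point at:
# the client certificate first, then the CA file contents after it.
cat client.crt ca-bundle.crt > proxy_client_chain.pem

# Sanity check: the chain file should now hold both certificates.
grep -c 'BEGIN CERTIFICATE' proxy_client_chain.pem   # prints 2
```

The key file referenced by proxy_ssl_certificate_key stays unchanged; only the certificate file gains the appended CA contents.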
For more details, consider looking into the certificate itself and all certificates in the CA file by using the following command: $ openssl x509 -subject -issuer -noout -in /path/to/cert Results should allow you to build a chain from the certificate to the self-signed root CA. You'll need the first certificates from this chain, including the certificate itself, to be in the proxy_ssl_certificate file. Most likely the certificate itself and the intermediate CA certificate as listed in the certificate issuer would be enough. Note that the CA file likely contains more than one certificate, while openssl only shows information about the first certificate in a file. You'll have to save each of them to a separate file for openssl to be able to see them. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Mon Nov 9 21:42:19 2020 From: francis at daoine.org (Francis Daly) Date: Mon, 9 Nov 2020 21:42:19 +0000 Subject: Nginx Download MP3 206 Partial Content HTTP Response In-Reply-To: <973256681ba7508b21047cbdce540a43.NginxMailingListEnglish@forum.nginx.org> References: <20201108195438.GJ29865@daoine.org> <973256681ba7508b21047cbdce540a43.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201109214219.GK29865@daoine.org> On Mon, Nov 09, 2020 at 04:45:31AM -0500, garycnew at yahoo.com wrote: Hi there, > That was it! The subsequent, asynchronous AJAX call was responding with a > Javascript redirect that was remedied using Nginx's sub_filter directive. Great that you found the problem, and found the fix that lets you use the web site the way you want to. > The combination of disabling gzip compression, adding sub_filter_types > text/javascript, and creating sub_filter 'https:' 'http:' rewrote the > protocol of the Javascript redirect. Thanks for sharing the resolution.
Cheers, f -- Francis Daly francis at daoine.org From gk at leniwiec.biz Mon Nov 9 23:11:28 2020 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Tue, 10 Nov 2020 00:11:28 +0100 Subject: Matching of special characters in location In-Reply-To: <20201109201044.GA31693@FreeBSD.org.ru> References: <20201109201044.GA31693@FreeBSD.org.ru> Message-ID: <203c88ba-d133-71c6-60ff-37ea86dfc782@leniwiec.biz> W dniu 09.11.2020 o?21:10, Sergey A. Osokin pisze: > On Mon, Nov 09, 2020 at 03:47:13PM +0100, Grzegorz Kulewski wrote: >> Hello, >> >> Is there any (sane) way to match things like: %e2%80%8b in URL in location? >> Thank you in advance. > > Hi Grzegorz, > > here is the code snippet (not tested): > > location ~ ^/\xE2\x80\x8E { > return 200 "%e2%80%8b matched\n"/; > } Thank you. It works. They key seems to be using regexp match. Regular match doesn't seem to understand escapes. Not sure if (where) it is documented. -- Grzegorz Kulewski From francis at daoine.org Tue Nov 10 00:19:34 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Nov 2020 00:19:34 +0000 Subject: Matching of special characters in location In-Reply-To: <203c88ba-d133-71c6-60ff-37ea86dfc782@leniwiec.biz> References: <20201109201044.GA31693@FreeBSD.org.ru> <203c88ba-d133-71c6-60ff-37ea86dfc782@leniwiec.biz> Message-ID: <20201110001934.GL29865@daoine.org> On Tue, Nov 10, 2020 at 12:11:28AM +0100, Grzegorz Kulewski wrote: > W dniu 09.11.2020 o?21:10, Sergey A. Osokin pisze: > > On Mon, Nov 09, 2020 at 03:47:13PM +0100, Grzegorz Kulewski wrote: Hi there, > >> Is there any (sane) way to match things like: %e2%80%8b in URL in location? > > here is the code snippet (not tested): > > > > location ~ ^/\xE2\x80\x8E { > > return 200 "%e2%80%8b matched\n"/; > > } > > Thank you. It works. > > They key seems to be using regexp match. Regular match doesn't seem to understand escapes. Not sure if (where) it is documented. 
> Regex match is straightforward here -- you use whatever your regex-engine supports to match the octets, which probably includes a straight swap of \x for % from the url. Non-regex match does work too, though; the key there is that nginx does that match against the non-url-encoded characters. %e2%80%8b is the url-encoding of three octets; in a utf-8 world, they represent the utf-8 encoding of the unicode code point U+200B (ZERO WIDTH SPACE). So if you want to prefix-match on a string including that character, you'll need to include that character directly in your config file. Your text editor should have some way of letting you do that -- for example, in "vim" in insert mode, the six-character sequence control-V, u, 2, 0, 0, b will do the right thing. In my case, it displays as <200b> and represents a single character. So my config file can include (e.g.) location ^~/<200b> { return 200 "match /zwsp ($uri, $request_uri)\n"; } (except there is only one "character" between the / and the space); and then any request that starts with /%e2%80%8b should be handled in that location; and any request that does not, should not be. Cheers, f -- Francis Daly francis at daoine.org From gk at leniwiec.biz Tue Nov 10 01:54:26 2020 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Tue, 10 Nov 2020 02:54:26 +0100 Subject: Matching of special characters in location In-Reply-To: <20201110001934.GL29865@daoine.org> References: <20201109201044.GA31693@FreeBSD.org.ru> <203c88ba-d133-71c6-60ff-37ea86dfc782@leniwiec.biz> <20201110001934.GL29865@daoine.org> Message-ID: W dniu 10.11.2020 o?01:19, Francis Daly pisze: > On Tue, Nov 10, 2020 at 12:11:28AM +0100, Grzegorz Kulewski wrote: >> W dniu 09.11.2020 o?21:10, Sergey A. Osokin pisze: >>> On Mon, Nov 09, 2020 at 03:47:13PM +0100, Grzegorz Kulewski wrote: >>>> Is there any (sane) way to match things like: %e2%80%8b in URL in location? 
> >>> here is the code snippet (not tested): >>> >>> location ~ ^/\xE2\x80\x8E { >>> return 200 "%e2%80%8b matched\n"/; >>> } >> >> Thank you. It works. >> >> They key seems to be using regexp match. Regular match doesn't seem to understand escapes. Not sure if (where) it is documented. > > Regex match is straightforward here -- you use whatever your regex-engine > supports to match the octets, which probably includes a straight swap > of \x for % from the url. > > Non-regex match does work too, though; the key there is that nginx does > that match against the non-url-encoded characters. This is documented. > In my case, it displays as <200b> and represents a single character. > > So my config file can include (e.g.) > > location ^~/<200b> { return 200 "match /zwsp ($uri, $request_uri)\n"; } > > (except there is only one "character" between the / and the space); > and then any request that starts with /%e2%80%8b should be handled in > that location; and any request that does not, should not be. Including raw UTF-8 special characters in nginx config isn't my ideal sane solution. :) I think I would prefer to be able to use escapes in non-regexp matches too... -- Grzegorz Kulewski From me at tomlebreux.com Tue Nov 10 03:08:41 2020 From: me at tomlebreux.com (Tom Lebreux) Date: Mon, 9 Nov 2020 22:08:41 -0500 Subject: How to detect rest of c->recv Message-ID: <41021870-50f8-c9b0-94da-4eb6021a4526@tomlebreux.com> Hi, I have been working my way around the code base and developing some modules purely for fun[0]. I am now building a core module that acts as an echo server[1]. This is to learn how to actually work with the event loop and clients, etc. One question I have is: How do I know if there is more data to be recv()'d? Here is an example code taken from [1]. 
size = 10;

b = c->buffer;

if (b == NULL) {
    b = ngx_create_temp_buf(c->pool, size);
    if (b == NULL) {
        ngx_echo_close_connection(c);
        return;
    }

    c->buffer = b;
}

n = c->recv(c, b->last, size);

if (n == NGX_AGAIN) {

In this case, I'm using a small buffer for testing purposes. What is the idiomatic way of knowing if there is more data coming in? Also, what is the proper way of "waiting" again on the next received data? I tried simply returning, but the read handler does not run again (until the client inputs more data.) Here's the scenario I am describing:

1. client sends "Hello, world!", which is 13 bytes.
2. read handler reads 10 bytes, stores that into another buffer (ngx_str_t in this case)
3. here we want to run the read handler again, probably increase the buffer size
4. echo back data to client

Thanks!

[0]: https://git.sr.ht/~tomleb/nginx-stream-upstream-time-module
[1]: https://git.sr.ht/~tomleb/nginx-echo-module

From lilihongbeast at 163.com Tue Nov 10 09:05:28 2020 From: lilihongbeast at 163.com (lilihongbeast at 163.com) Date: Tue, 10 Nov 2020 17:05:28 +0800 Subject: How to encrypt request body content by nginx plugin Message-ID: <2020111017052680389718@163.com> Hello Team, I learned that request body content and response body content can be encrypted through the encrypted-session-nginx-module (https://github.com/openresty/encrypted-session-nginx-module) in OpenResty. But I need to use an nginx plugin, not OpenResty (it is an old system, and replacing nginx is not allowed). I referred to nginx-http-concat (https://github.com/alibaba/nginx-http-concat), which solves encrypting the response body content. How can request body content be encrypted by an nginx plugin? In which HTTP phase can it be processed? Is it NGX_HTTP_CONTENT_PHASE? Are there any examples of such plugins? Thanks! -------------- next part -------------- An HTML attachment was scrubbed...
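Returning to the c->recv question earlier in this digest: the NGX_AGAIN result corresponds to recv() on a non-blocking socket finding the kernel buffer empty. A minimal Python sketch of that underlying behaviour (not nginx code; a local socketpair stands in for the client connection):

```python
import socket

# On a non-blocking socket, recv() raises BlockingIOError (EAGAIN) once the
# kernel buffer is drained -- the moment nginx's c->recv returns NGX_AGAIN
# and the read handler should simply re-arm the read event and return.
a, b = socket.socketpair()
b.setblocking(False)

a.sendall(b"Hello, world!")          # 13 bytes pending, like the scenario above

chunks = []
while True:
    try:
        data = b.recv(10)            # small read buffer, like size = 10
    except BlockingIOError:          # nothing left to read: the "NGX_AGAIN" case
        break
    if not data:                     # peer closed the connection
        break
    chunks.append(data)

received = b"".join(chunks)          # all 13 bytes, collected across reads
```

The loop shows the idiom: keep calling recv() until it signals EAGAIN, accumulating partial reads, then wait for the event loop to call the handler again.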
URL: From nginx-forum at forum.nginx.org Tue Nov 10 12:56:36 2020 From: nginx-forum at forum.nginx.org (aagrawal) Date: Tue, 10 Nov 2020 07:56:36 -0500 Subject: grpc-go client disconnect in 60 seconds Message-ID: Hi, I am using the grpc-go client for a grpc subscription, and every 60 seconds after subscription it is disconnecting. I ran nginx in debug mode and found the following logs, where I see that a "408" error happened because client_body_timeout is at its default value of 60 seconds. I am using nginx 1.16.1.

2020/11/10 10:57:05 [debug] 2670#0: *1 event timer del: 3: 12588176
2020/11/10 10:57:05 [debug] 2670#0: *1 http run request: "/gnmi.gNMI/Subscribe?"
2020/11/10 10:57:05 [debug] 2670#0: *1 http upstream read request handler
2020/11/10 10:57:05 [debug] 2670#0: *1 finalize http upstream request: 408
2020/11/10 10:57:05 [debug] 2670#0: *1 finalize grpc request
2020/11/10 10:57:05 [debug] 2670#0: *1 free rr peer 1 0
2020/11/10 10:57:05 [debug] 2670#0: *1 close http upstream connection: 25
2020/11/10 10:57:05 [debug] 2670#0: *1 run cleanup: 097FE508
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 097FE4E0, unused: 60
2020/11/10 10:57:05 [debug] 2670#0: *1 event timer del: 25: 16184227
2020/11/10 10:57:05 [debug] 2670#0: *1 reusable connection: 0
2020/11/10 10:57:05 [debug] 2670#0: *1 http finalize request: 408, "/gnmi.gNMI/Subscribe?" a:1, c:1
2020/11/10 10:57:05 [debug] 2670#0: *1 http terminate request count:1
2020/11/10 10:57:05 [debug] 2670#0: *1 http terminate cleanup count:1 blk:0
2020/11/10 10:57:05 [debug] 2670#0: *1 http posted request: "/gnmi.gNMI/Subscribe?"
2020/11/10 10:57:05 [debug] 2670#0: *1 http terminate handler count:1
2020/11/10 10:57:05 [debug] 2670#0: *1 http request count:1 blk:0
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 close stream 1, queued 0, processing 1, pushing 0
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 send RST_STREAM frame sid:1, status:1
2020/11/10 10:57:05 [debug] 2670#0: *1 http close request
2020/11/10 10:57:05 [debug] 2670#0: *1 http log handler
2020/11/10 10:57:05 [debug] 2670#0: *1 run cleanup: 09852B5C
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 09853E38
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 098A3518
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 09851E00, unused: 0
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 09852E20, unused: 2045
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 097FDF40, unused: 847
2020/11/10 10:57:05 [debug] 2670#0: *1 post event 097FA190
2020/11/10 10:57:05 [debug] 2670#0: *1 delete posted event 097FA190
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 handle connection handler
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 frame out: 09831A48 sid:0 bl:0 len:4
2020/11/10 10:57:05 [debug] 2670#0: *1 SSL buf copy: 13
2020/11/10 10:57:05 [debug] 2670#0: *1 SSL to write: 13
2020/11/10 10:57:05 [debug] 2670#0: *1 SSL_write: 13
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 frame sent: 09831A48 sid:0 bl:0 len:4
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 09831A20, unused: 3672
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 098B3520
2020/11/10 10:57:05 [debug] 2670#0: *1 reusable connection: 1
2020/11/10 10:57:05 [debug] 2670#0: *1 event timer add: 3: 180000:12768251
2020/11/10 10:57:06 [debug] 2670#0: *1 http2 idle handler
2020/11/10 10:57:06 [debug] 2670#0: *1 reusable connection: 0
2020/11/10 10:57:06 [debug] 2670#0: *1 posix_memalign: 09831A20:4096 @16
2020/11/10 10:57:06 [debug] 2670#0: *1 http2 read handler
2020/11/10 10:57:06 [debug] 2670#0: *1 SSL_read: -1
2020/11/10 10:57:06 [debug] 2670#0: *1 SSL_get_error: 5
2020/11/10 10:57:06 [debug] 2670#0: *1 peer shutdown SSL cleanly
2020/11/10 10:57:06 [debug] 2670#0: *1 close http connection: 3
2020/11/10 10:57:06 [debug] 2670#0: *1 SSL_shutdown: 1
2020/11/10 10:57:06 [debug] 2670#0: *1 event timer del: 3: 12768251
2020/11/10 10:57:06 [debug] 2670#0: *1 reusable connection: 0
2020/11/10 10:57:06 [debug] 2670#0: *1 run cleanup: 097F5670
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 09831A20, unused: 4056
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 00000000
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 097FCEF8
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 097FE8B0
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 098458E8
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 097F55A0, unused: 4
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 09845CE0, unused: 4
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 09830E40, unused: 136

Following is my nginx conf file:

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    # '$status $body_bytes_sent "$http_referer" '
    # '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log /tmp/access.log main;
    access_log /tmp/access.log;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 180;
    proxy_read_timeout 120s;
    proxy_send_timeout 120s;
    client_body_timeout 360s;

    limit_conn_zone localhost zone=servers:10m;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

I tried increasing this client_body_timeout directive to 360 seconds, and I see that the grpc-go client then disconnects in 360 seconds. The same issue was not observed when I ran a grpc-java client or grpc-python client. Can you please help me to know why this issue may happen? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289945,289945#msg-289945 From mdounin at mdounin.ru Tue Nov 10 13:41:50 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Nov 2020 16:41:50 +0300 Subject: grpc-go client disconnect in 60 seconds In-Reply-To: References: Message-ID: <20201110134150.GN1147@mdounin.ru> Hello!
On Tue, Nov 10, 2020 at 07:56:36AM -0500, aagrawal wrote: > Hi, > I am using grpc-go client for grpc subscription, and in every 60 seconds > after subscription, it is disconnecting. > I run nginx in debug mode, and found following logs where i see that "408" > error happened, because "client_body_timeout is default value 60 seconds." [...] > I tried increasing this client_body_timeout directive to 360 seconds, i see > that grpc-go client disconnect in 360 seconds. > Same issue was not observed when i run grpc-java client or grpc-python > client. > > Can you please help me to know why this issue may happen? As long as your gRPC call uses client-to-server streaming, from an HTTP point of view it essentially sends the request body in small chunks. As long as no data are sent for a long time, the client body timeout will occur. This is what you seem to observe in your tests. Similarly, grpc_read_timeout might occur for server-to-client streaming if no data are sent for a long time. To fix this, make sure that at least some data are sent in gRPC streams periodically, and configure timeouts to something larger than the period in question. Alternatively, consider avoiding gRPC streams and/or make sure streams are properly closed when they are no longer needed. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Nov 10 13:59:54 2020 From: nginx-forum at forum.nginx.org (aagrawal) Date: Tue, 10 Nov 2020 08:59:54 -0500 Subject: grpc-go client disconnect in 60 seconds In-Reply-To: <20201110134150.GN1147@mdounin.ru> References: <20201110134150.GN1147@mdounin.ru> Message-ID: Thanks for the reply. I can see that when I start a grpc-go client insecure connection, an http request is received by the grpc server and a response is also sent from the grpc server every 10 seconds. But still, after 60 seconds the timeout happens. So is it correct to say that, because of some issue, the grpc-go client request body is not proper or something is missing in it? Please advise.
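For reference, the advice above about raising the timeouts maps onto directives like the following. This is an illustrative sketch only — the upstream address and the timeout values are made up, and the right numbers depend on how often the gRPC streams actually carry data:

```nginx
server {
    listen 443 ssl http2;

    location / {
        grpc_pass grpc://127.0.0.1:50051;

        # Body of a long-lived client-to-server stream:
        # raise the 60s default that produced the 408 above.
        client_body_timeout 15m;

        # Server-to-client stream that may be silent for a while.
        grpc_read_timeout 15m;
        grpc_send_timeout 15m;
    }
}
```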
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289945,289948#msg-289948 From mdounin at mdounin.ru Tue Nov 10 14:31:15 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Nov 2020 17:31:15 +0300 Subject: grpc-go client disconnect in 60 seconds In-Reply-To: References: <20201110134150.GN1147@mdounin.ru> Message-ID: <20201110143115.GP1147@mdounin.ru> Hello! On Tue, Nov 10, 2020 at 08:59:54AM -0500, aagrawal wrote: > Thanks for reply, > So i can see that when i start grpc-go client insecure connection, http > request is received by grpc server and response also sent from grpcserver in > every 10 seconds. > But still after 60 seconds timeout happens. > > So is it correct to say that because of some issue grpc-go client request > body is not proper or something is missing in it. > Please advise. As far as I understand, the gRPC call in question uses the client-to-server streaming (as well as server-to-client one). Given the timeout, no data are sent in the client-to-server stream, yet the stream is not closed. For timeout to not happen, either some data should be sent periodically in the stream, or the stream should be closed. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Wed Nov 11 00:34:07 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Nov 2020 00:34:07 +0000 Subject: Matching of special characters in location In-Reply-To: References: <20201109201044.GA31693@FreeBSD.org.ru> <203c88ba-d133-71c6-60ff-37ea86dfc782@leniwiec.biz> <20201110001934.GL29865@daoine.org> Message-ID: <20201111003407.GM29865@daoine.org> On Tue, Nov 10, 2020 at 02:54:26AM +0100, Grzegorz Kulewski wrote: > W dniu 10.11.2020 o?01:19, Francis Daly pisze: Hi there, > > So my config file can include (e.g.) 
> > > > location ^~/<200b> { return 200 "match /zwsp ($uri, $request_uri)\n"; } > > > > (except there is only one "character" between the / and the space); > > and then any request that starts with /%e2%80%8b should be handled in > > that location; and any request that does not, should not be. > > Including raw UTF-8 special characters in nginx config isn't my ideal sane solution. :) That's reasonable; but I think I'd disagree. I'd suggest that if you can't write the character, you probably shouldn't be using it in your web site such that it is needed in this part of the config file :-) > I think I would prefer to be able to use escapes in non-regexp matches too... That's a reasonable preference. Current stock-nginx does not support it, as far as I know. But you can add it yourself, if it is important enough. If you can decide on an escape syntax that you like (and: most will probably break some current valid config, so you'll want to choose one that works for you), then you can either try to get the code to handle it added to nginx; or (more likely, in the short term at least) you can write a pre-processor for you that turns your nginx.conf.in (with escaped characters in "location" values that start with something other than ~ or @) into an nginx.conf that includes the un-escaped characters in the format that current-nginx will read. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Nov 11 17:55:30 2020 From: nginx-forum at forum.nginx.org (unoobee) Date: Wed, 11 Nov 2020 12:55:30 -0500 Subject: using $upstream* variables inside map directive In-Reply-To: <20140507124424.GB46973@lo0.su> References: <20140507124424.GB46973@lo0.su> Message-ID: <480dba8cf2ddf27af70ff9f0ca895896.NginxMailingListEnglish@forum.nginx.org> Ruslan, could you send that patch for "map"? I would like to check it. 
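The pre-processor idea from the location-matching thread above can be sketched in a few lines. This assumes a hypothetical `\xNN` escape syntax inside an `nginx.conf.in` template — none of this is nginx functionality, and a real tool would have to restrict itself to non-regex location prefixes as Francis notes:

```python
import re

def unescape_conf(template: str) -> bytes:
    """Turn hypothetical \\xNN escapes in an nginx.conf.in template into raw
    octets, so the generated nginx.conf contains the literal characters that
    nginx's non-regex location matching compares against."""
    out = re.sub(r"\\x([0-9A-Fa-f]{2})",
                 lambda m: chr(int(m.group(1), 16)), template)
    # Encode as latin-1 so each \xNN escape becomes exactly one output byte.
    return out.encode("latin-1")
```

For example, a template line `location ^~ /\xE2\x80\x8B {` would come out as the `/` followed by the three raw UTF-8 octets of U+200B, which is what a prefix match needs.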
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,249880,289960#msg-289960 From ru at nginx.com Wed Nov 11 19:12:13 2020 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 11 Nov 2020 22:12:13 +0300 Subject: using $upstream* variables inside map directive In-Reply-To: <480dba8cf2ddf27af70ff9f0ca895896.NginxMailingListEnglish@forum.nginx.org> References: <20140507124424.GB46973@lo0.su> <480dba8cf2ddf27af70ff9f0ca895896.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201111191213.GA58579@lo0.su> On Wed, Nov 11, 2020 at 12:55:30PM -0500, unoobee wrote: > Ruslan, could you send that patch for "map"? I would like to check it. The "volatile" parameter of the "map" directive is available since nginx version 1.11.7. From arut at nginx.com Wed Nov 11 21:29:37 2020 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 11 Nov 2020 21:29:37 +0000 Subject: HTTP/3 and php POST In-Reply-To: References: Message-ID: Hello, > On 8 Nov 2020, at 15:42, Ryan Gould wrote: > > hello team, > > i have found that https://hg.nginx.org/nginx-quic (current as of 06 Nov 2020) > is having some trouble properly POSTing back to PayPal using php 7.3.24 on > a Debian Buster box. things work as expected using current mainline nginx > or current quiche. i have verified that PageSpeed is not causing the problem. > > the PayPal IPN php script is one of those things that has been working so > long it has cobwebs on it. it gets a POST, it adds a key / value to the > payload and POSTs it back. it is instantaneous and thoughtless. the PHP > script is getting a 200 return code from the return POST and everything > seems great on my side. but PayPal is complaining about the return POST > and they cant tell me why. i have enabled logging on the script and in > PHP and dont see anything out of the ordinary. the only thing i see is > when i use Postman 7.34.0 i am getting a "Parse Error: Invalid character > in chunk size" error, which i am having no luck tracking down information > on. 
> > i would like to generate some --debug logs for you but wont because of the > sensitive PayPal customer payment information. > > any thoughts or suggestions? Thanks for reporting this. We have committed a change that can potentially fix this issue: https://hg.nginx.org/nginx-quic/rev/ef83990f0e25 Could you please update the source code and report if the issue is indeed fixed for you? > nginx -V > nginx version: nginx/1.19.4 > built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) > built with OpenSSL 1.1.1 (compatible; BoringSSL) (running with BoringSSL) > TLS SNI support enabled > configure arguments: --with-cc-opt=-I../boringssl/include --with-ld-opt='-L../boringssl/build/ssl -L../boringssl/build/crypto' --with-http_v3_module --with-http_quic_module --with-stream_quic_module --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=www-data --group=www-data --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-file-aio --add-module=../../headers-more-nginx-module --add-module=../../pagespeed --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-openssl=../boringssl > > as always, thank you for being awesome. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
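An aside on the Postman message quoted above: "Parse Error: Invalid character in chunk size" refers to the HTTP/1.1 chunked transfer coding, where each chunk of the body is preceded by a hexadecimal size line. A toy parser in Python illustrates what such a validator checks (this is only an illustration of the framing, not nginx's or Postman's actual code):

```python
def parse_chunk_size(line: bytes) -> int:
    """Parse an HTTP/1.1 chunk-size line, e.g. b'1a;ext=1' followed by CRLF.

    The size is hex digits, optionally followed by ';' and chunk extensions.
    """
    size, _, _ext = line.rstrip(b"\r\n").partition(b";")
    size = size.strip()
    if not size or any(c not in b"0123456789abcdefABCDEF" for c in size):
        # This is the condition a client reports as
        # "Parse Error: Invalid character in chunk size".
        raise ValueError("invalid character in chunk size: %r" % size)
    return int(size, 16)
```

A malformed or garbled size line (for example, stray bytes between chunks) is what typically triggers that client-side error.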
URL: From nginx-forum at forum.nginx.org Thu Nov 12 07:33:49 2020 From: nginx-forum at forum.nginx.org (unoobee) Date: Thu, 12 Nov 2020 02:33:49 -0500 Subject: using $upstream* variables inside map directive In-Reply-To: <20201111191213.GA58579@lo0.su> References: <20201111191213.GA58579@lo0.su> Message-ID: I tried using $upstream_http_content_length inside the map directive with the "volatile" parameter to specify the proxy_cache behavior, but the map still uses the default value. Is there any way to set the proxy_cache behavior depending on $upstream_http_content_length via the map directive? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,249880,289963#msg-289963 From francis at daoine.org Thu Nov 12 08:34:06 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 12 Nov 2020 08:34:06 +0000 Subject: using $upstream* variables inside map directive In-Reply-To: References: <20201111191213.GA58579@lo0.su> Message-ID: <20201112083406.GA9236@daoine.org> On Thu, Nov 12, 2020 at 02:33:49AM -0500, unoobee wrote: Hi there, > I tried using $upstream_http_content_length inside the map directive with > the "volatile" parameter to specify the proxy_cache behavior, but the map > still uses the default value. What's your config? > Is there any way to set the proxy_cache behavior depending on > $upstream_http_content_length via the map directive? What proxy_cache behavior do you want to set? You could reasonably set proxy_no_cache, because that only applies after accessing upstream. You can't usefully set proxy_cache, or proxy_cache_bypass, or proxy_cache_key, because they are all consulted before the decision on whether or not to access upstream has been made. 
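The proxy_no_cache point above can be illustrated with a config sketch. The cache zone name and the size threshold here are made up; the important part is that proxy_no_cache is evaluated when nginx decides whether to *store* the response, after the upstream headers (and so $upstream_http_content_length) are available:

```nginx
# Sketch only: "my_cache" and the digit threshold are hypothetical.
map $upstream_http_content_length $skip_cache {
    volatile;
    default      0;
    ""           1;    # no Content-Length from upstream: do not store
    "~^\d{7,}$"  1;    # 1,000,000 bytes or more: do not store
}

server {
    location / {
        proxy_pass      https://backend;
        proxy_cache     my_cache;
        proxy_no_cache  $skip_cache;   # consulted after the upstream response
    }
}
```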
Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Nov 12 09:58:31 2020 From: nginx-forum at forum.nginx.org (unoobee) Date: Thu, 12 Nov 2020 04:58:31 -0500 Subject: using $upstream* variables inside map directive In-Reply-To: <20201112083406.GA9236@daoine.org> References: <20201112083406.GA9236@daoine.org> Message-ID: My configuration looks like this:

proxy_cache_path /cache/ssd keys_zone=ssd_cache:10m levels=1:2 inactive=600s max_size=100m;
proxy_cache_path /cache/hdd keys_zone=hdd_cache:10m levels=1:2 inactive=600s max_size=100m;

upstream backend {
    server www.test.com:443;
}

server {
    listen 80;
    server_name test.com;

    location / {
        proxy_pass https://backend;
        proxy_redirect https://backend/ /;
        proxy_set_header Host $host;
        proxy_cache $cache;
    }
}

map $upstream_http_content_length $cache {
    volatile;
    ~^\d\d\b ssd_cache;
    default hdd_cache;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,249880,289965#msg-289965 From emilio.fernandes70 at gmail.com Thu Nov 12 11:10:40 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Thu, 12 Nov 2020 13:10:40 +0200 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: Hi Konstantin, El vie., 29 may. 2020 a las 12:24, Konstantin Pavlov () escribió: > Hello Emilio, > > 29.05.2020 10:23, Emilio Fernandes wrote: > > Hi Konstantin, > > > > I guess you follow the GitHub issue but just in case: Mike Crute just > > announced a beta AMI for > > Alpine: https://github.com/mcrute/alpine-ec2-ami/issues/28#issuecomment-635618625 > > If there are no major issues he will release an official one next week. > > Indeed, we do follow this issue - rest assured we're going to use the > release when it happens.
That being said, it seems the needed kernel changes for the AMI to boot will only be there for 3.12, which means we're going to be limited to that Alpine version for ARM builds if not backported to previous releases. > The AMI for Alpine 3.12.1 has been released few days ago: - https://github.com/mcrute/alpine-ec2-ami/issues/28#issuecomment-723401582 - https://github.com/mcrute/alpine-ec2-ami/blob/master/releases/README.md And I see you already have updated https://nginx.org/en/linux_packages.html ! Many thanks! Emilio! > Thanks! > > -- > Konstantin Pavlov > https://www.nginx.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Thu Nov 12 13:47:46 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 12 Nov 2020 19:17:46 +0530 Subject: Forbid web.config page from the browser as in https://mydomain.com/web.config Message-ID: Hi, I am running the Nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core). I am trying to forbid/prevent web.config file to download it from the browser. When I hit https://mydomain.com/web.config it is allowing me to download instead of forbidding the page ( 403 Forbidden). I am sharing the below nginx.conf file for your reference.

server {
> server_name _;
> root /var/www/html/apcv3/docroot; ## <-- Your only path reference.
> location /dacv3 {
> alias /var/www/html/apcv3/docroot;
> index index.php;
> location ~ \.php$ {
> include fastcgi_params;
> # Block httpoxy attacks. See https://httpoxy.org/.
> fastcgi_param HTTP_PROXY "";
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_param PATH_INFO $fastcgi_path_info;
> fastcgi_param QUERY_STRING $query_string;
> fastcgi_intercept_errors on;
> fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
> }
> }
> location = /favicon.ico {
> log_not_found off;
> access_log off;
> }
> location = /robots.txt {
> allow all;
> log_not_found off;
> access_log off;
> }
> # Very rarely should these ever be accessed outside of your lan
> location ~* \.(txt|log)$ {
> allow 192.168.0.0/16;
> deny all;
> }
> location ~ \..*/.*\.php$ {
> return 403;
> }
> location ~ ^/sites/.*/private/ {
> return 403;
> }
> # Block access to scripts in site files directory
> location ~ ^/sites/[^/]+/files/.*\.php$ {
> deny all;
> }
> # Allow "Well-Known URIs" as per RFC 5785
> location ~* ^/.well-known/ {
> allow all;
> }
> # Block access to "hidden" files and directories whose names begin with a
> # period. This includes directories used by version control systems such
> # as Subversion or Git to store control files.
> location ~ (^|/)\. {
> return 403;
> }
> location / {
> # try_files $uri @rewrite; # For Drupal <= 6
> try_files $uri /index.php?$query_string; # For Drupal >= 7
> }
> location @rewrite {
> rewrite ^/(.*)$ /index.php?q=$1;
> }
> # Don't allow direct access to PHP files in the vendor directory.
> location ~ /vendor/.*\.php$ {
> deny all;
> return 404;
> }
> # Protect files and directories from prying eyes.
> location ~* \.(engine|inc|install|make|module|profile|po|sh|.*sql|theme|twig|tpl(\.php)?|xtmpl|yml)(~|\.sw[op]|\.bak|\.orig|\.save)?$|^(\.(?!well-known).*|Entries.*|Repository|Root|Tag|Template|composer\.(json|lock)|web\.config)$|^#.*#$|\.php(~|\.sw[op]|\.bak|\.orig|\.save)$ {
> deny all;
> return 404;
> }
> location ^~ /web.config {
> deny all;
> }
> # In Drupal 8, we must also match new paths where the '.php' appears in
> # the middle, such as update.php/selection. The rule we use is strict,
> # and only allows this pattern with the update.php front controller.
> # This allows legacy path aliases in the form of
> # blog/index.php/legacy-path to continue to route to Drupal nodes. If
> # you do not have any paths like that, then you might prefer to use a
> # laxer rule, such as:
> # location ~ \.php(/|$) {
> # The laxer rule will continue to work if Drupal uses this new URL
> # pattern with front controllers other than update.php in a future
> # release.
> location ~ '\.php$|^/update.php' {
> fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
> # Security note: If you're running a version of PHP older than the
> # latest 5.3, you should have "cgi.fix_pathinfo = 0;" in php.ini.
> # See http://serverfault.com/q/627903/94922 for details.
> include fastcgi_params;
> # Block httpoxy attacks. See https://httpoxy.org/.
> fastcgi_param HTTP_PROXY "";
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_param PATH_INFO $fastcgi_path_info;
> fastcgi_param QUERY_STRING $query_string;
> fastcgi_intercept_errors on;
> # PHP 5 socket location.
> #fastcgi_pass unix:/var/run/php5-fpm.sock;
> # PHP 7 socket location.
> fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
> }
> # Fighting with Styles? This little gem is amazing.
> # location ~ ^/sites/.*/files/imagecache/ { # For Drupal <= 6
> location ~ ^/sites/.*/files/styles/ { # For Drupal >= 7
> try_files $uri @rewrite;
> }
> # Handle private files through Drupal. Private file's path can come
> # with a language prefix.
> location ~ ^(/[a-z\-]+)?/system/files/ { # For Drupal >= 7
> try_files $uri /index.php?$query_string;
> }
> location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
> try_files $uri @rewrite;
> expires max;
> log_not_found off;
> }
> }

Please let me know if I am missing anything in the Nginx config file. Thanks in advance and I look forward to hearing from you. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Thu Nov 12 14:34:55 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 12 Nov 2020 14:34:55 +0000 Subject: using $upstream* variables inside map directive In-Reply-To: References: <20201112083406.GA9236@daoine.org> Message-ID: <20201112143455.GB9236@daoine.org> On Thu, Nov 12, 2020 at 04:58:31AM -0500, unoobee wrote: Hi there, > My configuration looks like this: Thanks for this. It looks like you are setting "proxy_cache" to always try to read from "hdd_cache"; but you want it to sometimes write to "ssd_cache" instead. And you are reporting that it does not ever write to "ssd_cache". Is that correct? If so -- given that it will only ever read from "hdd_cache", what would be the benefit in writing to somewhere else? I'm not certain what you're trying to achieve. Perhaps describing that, might make it clear whether it can be done? f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Nov 12 14:43:43 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 12 Nov 2020 14:43:43 +0000 Subject: Forbid web.config page from the browser as in https://mydomain.com/web.config In-Reply-To: References: Message-ID: <20201112144343.GC9236@daoine.org> On Thu, Nov 12, 2020 at 07:17:46PM +0530, Kaushal Shriyan wrote: Hi there, > I am running the Nginx version: nginx/1.16.1 on CentOS Linux release > 7.8.2003 (Core). I am trying to forbid/prevent web.config file to > download it from the browser. When I hit > https://mydomain.com/web.config it is allowing me to download instead of > forbidding the page ( 403 Forbidden). When I use this config, it works for me (I get the http 403 response). Are you sure that the config file with this server{} block is read by your running nginx? Are there any other server{} blocks with the same (implicit) "listen" directive, that might mean that this server{} block is never used? 
What do you get if you do curl -i -H Host:_ http://your-server/web.config where the "Host:_" part is an attempt to match the server_name that you set in this server{} block. (Change "your-server" to be a name or IP that your client can use to get at the web service.) Cheers, f -- Francis Daly francis at daoine.org From ryanbgould at gmail.com Thu Nov 12 15:51:20 2020 From: ryanbgould at gmail.com (Ryan Gould) Date: Thu, 12 Nov 2020 07:51:20 -0800 Subject: HTTP/3 and php POST In-Reply-To: References: Message-ID: Thanks for reporting this. We have committed a change that can potentially fix this issue: https://hg.nginx.org/nginx-quic/rev/ef83990f0e25 Could you please update the source code and report if the issue is indeed fixed for you? thank you for solving the problem! i can confirm that PayPal doesnt seem to have a problem anymore and that Postman 7.34.0 also seems happy now. it might be completely un-related to this problem, but i have noticed that now a large number of php pages are having some difficulty being consistently displayed. if i spam the reload key in the browser, it feels like every third reload will hang and 200 with no content returned. Edge seems to handle it gracefully and moves the connection back to h2. Firefox will hang the tab until it times out (very odd), but then it might work the next tab reload. hello team, i have found that https://hg.nginx.org/nginx-quic (current as of 06 Nov 2020) is having some trouble properly POSTing back to PayPal using php 7.3.24 on a Debian Buster box. things work as expected using current mainline nginx or current quiche. i have verified that PageSpeed is not causing the problem. the PayPal IPN php script is one of those things that has been working so long it has cobwebs on it. it gets a POST, it adds a key / value to the payload and POSTs it back. it is instantaneous and thoughtless. the PHP script is getting a 200 return code from the return POST and everything seems great on my side.
but PayPal is complaining about the return POST and they cant tell me why.? i have enabled logging on the script and in PHP and dont see anything out of the ordinary.? the only thing i see is when i use Postman 7.34.0 i am getting a "Parse Error: Invalid character in chunk size" error, which i am having no luck tracking down information on. i would like to generate some --debug logs for you but wont because of the sensitive PayPal customer payment information. any thoughts or suggestions? nginx -V nginx version: nginx/1.19.4 built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) built with OpenSSL 1.1.1 (compatible; BoringSSL) (running with BoringSSL) TLS SNI support enabled configure arguments: --with-cc-opt=-I../boringssl/include --with-ld-opt='-L../boringssl/build/ssl -L../boringssl/build/crypto' --with-http_v3_module --with-http_quic_module --with-stream_quic_module --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=www-data --group=www-data --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-file-aio --add-module=../../headers-more-nginx-module --add-module=../../pagespeed --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-openssl=../boringssl as always, thank you for being awesome. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Thu Nov 12 16:00:09 2020 From: nginx-forum at forum.nginx.org (unoobee) Date: Thu, 12 Nov 2020 11:00:09 -0500 Subject: using $upstream* variables inside map directive In-Reply-To: <20201112143455.GB9236@daoine.org> References: <20201112143455.GB9236@daoine.org> Message-ID: <401ab7a95344d80940a82a0868fc54a4.NginxMailingListEnglish@forum.nginx.org> > And you are reporting that it does not ever write to "ssd_cache". Yes, this is correct. I want to choose the cache location based on the size of the cached file. I want to get the behavior described in the article, but only with the file size in the map directive; I assume I need $sent_http_content_length or $upstream_http_content_length. https://www.nginx.com/blog/cache-placement-strategies-nginx-plus/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,249880,289971#msg-289971
From francis at daoine.org Thu Nov 12 16:48:34 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 12 Nov 2020 16:48:34 +0000 Subject: using $upstream* variables inside map directive In-Reply-To: <401ab7a95344d80940a82a0868fc54a4.NginxMailingListEnglish@forum.nginx.org> References: <20201112143455.GB9236@daoine.org> <401ab7a95344d80940a82a0868fc54a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201112164834.GD9236@daoine.org> On Thu, Nov 12, 2020 at 11:00:09AM -0500, unoobee wrote: Hi there, > > And you are reporting that it does not ever write to "ssd_cache". > Yes, this is correct. I want to choose the cache location based on the size > of the cached file. Logically, you can't. You can only choose the cache location based on something in the request; not on something in the response. > I want to get the behavior described in the article, but only with the file > size in the map directive, I assume I need $sent_http_content_length or > $upstream_http_content_length.
You could try changing all your urls to include the file size; and then choose a cache location based on the sizes in the request url. But that is unlikely to be convenient. Good luck with it, f -- Francis Daly francis at daoine.org From yichun at openresty.com Thu Nov 12 23:38:43 2020 From: yichun at openresty.com (Yichun Zhang) Date: Thu, 12 Nov 2020 15:38:43 -0800 Subject: [ANN] OpenResty 1.19.3.1 released Message-ID: Hi folks, I am happy to announce the new formal release, 1.19.3.1, of our OpenResty web platform based on NGINX and LuaJIT. The full announcement, download links, and change logs can be found below: https://openresty.org/en/ann-1019003001.html OpenResty is a high performance and dynamic web platform based on our enhanced version of Nginx core, our enhanced version of LuaJIT, and many powerful Nginx modules and Lua libraries. See OpenResty's homepage for details: https://openresty.org/en/ Enjoy! Best, Yichun -- Yichun Zhang Founder and CEO of OpenResty Inc. https://openresty.com/ From kaushalshriyan at gmail.com Fri Nov 13 00:21:57 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Fri, 13 Nov 2020 05:51:57 +0530 Subject: Forbid web.config page from the browser as in https://mydomain.com/web.config In-Reply-To: <20201112144343.GC9236@daoine.org> References: <20201112144343.GC9236@daoine.org> Message-ID: On Thu, Nov 12, 2020 at 8:13 PM Francis Daly wrote: > On Thu, Nov 12, 2020 at 07:17:46PM +0530, Kaushal Shriyan wrote: > > Hi there, > > > I am running the Nginx version: nginx/1.16.1 on CentOS Linux release > > 7.8.2003 (Core). I am trying to forbid/prevent web.config file to > > download it from the browser. When I hit > > https://mydomain.com/web.config it is allowing me to download instead of > > forbidding the page ( 403 Forbidden). > > When I use this config, it works for me (I get the http 403 response). > > Are you sure that the config file with this server{} block is read by > your running nginx? 
> > Are there any other server{} blocks with the same (implicit) "listen" > directive, that might mean that this server{} block is never used? > > What do you get if you do > > curl -i -H Host:_ http://your-server/web.config > > where the "Host:_" part is an attempt to match the server_name that you > set in this server{} block. > > (Change "your-server" to be a name or IP that your client can use to get > at the web service.) > Hi Francis, Thanks, Francis, for the response. There are two server{} blocks, one with *listen 80 default_server* and the other with *listen 443 ssl*. I am running the website on port 443 and added the below in the server block with listen 443 ssl. It worked perfectly. Thanks a lot for pinpointing the issue; I appreciate it. location ^~ /web.config { deny all; } Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Fri Nov 13 00:33:02 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Fri, 13 Nov 2020 06:03:02 +0530 Subject: Hide HTTP headers in nginx Message-ID: Hi, As part of the security audit, I have set server_tokens off; in /etc/nginx/nginx.conf. Is there a way to hide Server: nginx, X-Powered-By and X-Generator?
To hide the below HTTP headers Server: nginx > X-Powered-By: PHP/7.2.34 > X-Generator: Drupal 8 (https://www.drupal.org) curl -i -H Host:_ https://mydomain.com HTTP/1.1 200 OK *Server: nginx* Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: keep-alive *X-Powered-By: PHP/7.2.34* Cache-Control: max-age=21600, public Date: Fri, 13 Nov 2020 00:23:38 GMT X-Drupal-Dynamic-Cache: MISS Link: ; rel="shortlink", ; rel="canonical" X-UA-Compatible: IE=edge Content-language: en X-Content-Type-Options: nosniff X-Frame-Options: SAMEORIGIN Expires: Sun, 19 Nov 1978 05:00:00 GMT Last-Modified: Fri, 13 Nov 2020 00:23:37 GMT ETag: "1605227017" Vary: Cookie *X-Generator: Drupal 8 (https://www.drupal.org )* X-XSS-Protection: 1; mode=block X-Drupal-Cache: HIT Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Nov 13 07:48:18 2020 From: nginx-forum at forum.nginx.org (praveen) Date: Fri, 13 Nov 2020 02:48:18 -0500 Subject: 499 response code for NGINX Proxy App in Cloud Foundry Message-ID: Hi, I'm using nginx as the proxy application deployed in the cloud foundry space which will route the requests to the upstream servers which are behind the Elastic Load Balancer. I noticed that after 3-4 hours, the proxy application in the cloud foundry space is going down and it's logging 499 response code in the access logs. If we restart the cf app , then everything works fine for sometime and again goes back to the error state. Can you please suggest what might be the possible reason for this? Thanks, praveen Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289976,289976#msg-289976 From ng4rrjanbiah at rediffmail.com Fri Nov 13 07:53:08 2020 From: ng4rrjanbiah at rediffmail.com (R. 
Rajesh Jeba Anbiah) Date: 13 Nov 2020 07:53:08 -0000 Subject: Performance of Nginx as reverse proxy for Hasura - 7-50x slow Message-ID: <20201113075308.21792.qmail@f5mail-224-154.rediffmail.com> Recently I noticed that proxying Hasura for https support reduces the speed by 7-50x! More information, including tcpdump output, is available in https://github.com/hasura/graphql-engine/discussions/6154 I have not noticed any performance issues with other REST APIs. Is it a known issue (perhaps due to SSL handshake)? If so, are there any known solutions for the same? TIA -------------- next part -------------- An HTML attachment was scrubbed... URL:
From r at roze.lv Fri Nov 13 10:03:14 2020 From: r at roze.lv (Reinis Rozitis) Date: Fri, 13 Nov 2020 12:03:14 +0200 Subject: Hide HTTP headers in nginx In-Reply-To: References: Message-ID: <000001d6b9a4$33f97590$9bec60b0$@roze.lv> > As part of the security audit, I have set server_tokens off; in /etc/nginx/nginx.conf. Is there a way to hide Server: nginx, X-Powered-By and X-Generator? > > To hide the below HTTP headers > > Server: nginx > X-Powered-By: PHP/7.2.34 > X-Generator: Drupal 8 (https://www.drupal.org) Afaik the Nginx header is hardcoded, so to remove it you either have to change the source/recompile or run through a proxy which can remove http headers.
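As an aside, the third-party headers-more-nginx-module (not part of stock nginx; it must be compiled or loaded as a dynamic module) can clear these headers without recompiling nginx itself. A minimal sketch, assuming the module is available in your build and with an illustrative server_name:

```nginx
server {
    listen 443 ssl;
    server_name example.com;   # illustrative

    # Directives below come from the third-party headers-more module,
    # not from stock nginx:
    more_clear_headers Server;                     # drops "Server: nginx" entirely
    more_clear_headers X-Powered-By X-Generator;   # drops the backend-set headers too
}
```

Note that server_tokens off; alone only hides the version number — stock nginx still sends "Server: nginx".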
For the php header you have to change php.ini and set: expose_php = Off For Drupal there are several modules/plugins which let you remove the header (for example https://www.drupal.org/project/remove_http_headers ) rr
From r at roze.lv Fri Nov 13 11:10:23 2020 From: r at roze.lv (Reinis Rozitis) Date: Fri, 13 Nov 2020 13:10:23 +0200 Subject: Performance of Nginx as reverse proxy for Hasura - 7-50x slow In-Reply-To: <20201113075308.21792.qmail@f5mail-224-154.rediffmail.com> References: <20201113075308.21792.qmail@f5mail-224-154.rediffmail.com> Message-ID: <000501d6b9ad$95631df0$c02959d0$@roze.lv> > Recently noted that when proxying Hasura for the https support reduces the speed to 7-50x times! More information including tcpdump available in https://github.com/hasura/graphql-engine/discussions/6154 Looking at the github discussion - you are comparing http vs https. Since you are not using keepalive, 'ab' does the ssl handshake for each request. Try with 'ab -k ...'. rr
From francis at daoine.org Fri Nov 13 11:17:22 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 13 Nov 2020 11:17:22 +0000 Subject: Hide HTTP headers in nginx In-Reply-To: References: Message-ID: <20201113111722.GE9236@daoine.org> On Fri, Nov 13, 2020 at 06:03:02AM +0530, Kaushal Shriyan wrote: Hi there, > As part of the security audit, I have set server_tokens off; > in /etc/nginx/nginx.conf. Is there a way to hide Server: nginx, > X-Powered-By and X-Generator? It's generally pointless from a security perspective to hide headers; and it is impolite to the authors to do so. Stock nginx does not provide a configuration option to remove the Server: header (but it does provide the source code and the freedom for you to do what you want with it). The other headers might be adjustable by whatever generates them; but nginx does provide directives like fastcgi_hide_header (http://nginx.org/r/fastcgi_hide_header) to adjust what is sent from a fastcgi_pass response.
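A minimal sketch of that fastcgi_hide_header approach for a PHP-FPM backend — the socket path, server_name, and location pattern here are illustrative, not a definitive setup:

```nginx
server {
    listen 80;
    server_name example.com;   # illustrative
    server_tokens off;         # hides the nginx version number in Server:

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;   # adjust to your setup

        # Strip headers set by the backend before relaying the response:
        fastcgi_hide_header X-Powered-By;
        fastcgi_hide_header X-Generator;
    }
}
```

For proxied backends, proxy_hide_header does the same job for proxy_pass responses.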
Good luck with it, f -- Francis Daly francis at daoine.org
From ng4rrjanbiah at rediffmail.com Fri Nov 13 13:51:25 2020 From: ng4rrjanbiah at rediffmail.com (R. Rajesh Jeba Anbiah) Date: 13 Nov 2020 13:51:25 -0000 Subject: Performance of Nginx as reverse proxy for Hasura - 7-50x slow In-Reply-To: <000501d6b9ad$95631df0$c02959d0$@roze.lv> Message-ID: <1605265837.S.4210.26921.f5-224-127.1605275485.20969@webmail.rediffmail.com> > > Recently noted that when proxying Hasura for the https support reduces the speed to 7-50x times! More information including tcpdump available in https://github.com/hasura/graphql-engine/discussions/6154 > > Looking at the github discussion - you are comparing http vs https. > Since you are not setting using keepalive 'ab' does the ssl handshake for each request. Try with 'ab -k ...'. Thank you so much for the insights. Will try that. Thanks again. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From nginx-forum at forum.nginx.org Mon Nov 16 17:20:31 2020 From: nginx-forum at forum.nginx.org (leeahaddad) Date: Mon, 16 Nov 2020 12:20:31 -0500 Subject: Some Questions about NGINX and F5 Message-ID: <970686129366075d8413a7129add36c0.NginxMailingListEnglish@forum.nginx.org> 1) How does NGINX fit with F5 legacy solutions? For example - and I may be wrong - certain functions in BIG-IP seem to overlap some of the functions of NGINX? Is it that BIG-IP is "more heavy duty"? 2) What happened with Heylu? I am not even sure what "Heylu" was supposed to be. Any clarity on this? 3) I do not understand so well how NGINX and Docker work together - are they complementary? I read a lot of material and saw videos, but I think I am missing something that a quick discussion would help me get it. I think my lack of understanding also extends to the Kubernetes vs Docker swarm topic.
I know there is something really exciting and big with NGINX and its combination with F5 but I need some color to move my imagination into reality Lee Haddad Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289987,289987#msg-289987
From osa at freebsd.org.ru Mon Nov 16 17:37:42 2020 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Mon, 16 Nov 2020 20:37:42 +0300 Subject: Some Questions about NGINX and F5 In-Reply-To: <970686129366075d8413a7129add36c0.NginxMailingListEnglish@forum.nginx.org> References: <970686129366075d8413a7129add36c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201116173742.GA86527@FreeBSD.org.ru> Hi Lee, hope you're doing well these days. Thanks for your questions in this public mailing list. In most cases these questions are related to the commercial version of NGINX - NGINX Plus. Given that, I'd recommend contacting the NGINX Sales Team, https://www.nginx.com/contact-sales/, to get more details about the products of interest. Thank you. -- Sergey Osokin On Mon, Nov 16, 2020 at 12:20:31PM -0500, leeahaddad wrote: > 1) How does NGINX fit with F5 legacy solutions? For example, and I may be > wrong certain functions in BIG-IP seem to overlap some of the functions of > NGINX? Is it that BIG-IP is "more heavy duty"? > 2) What happened with Heylu? I am not even sure what "Heylu" was supposed to > be. Any clarity on this? > 3) I do not understand so well how NGINX and Docker work together - are they > complementary? I read a lot of material and saw videos, but I think I am > missing something that a quick discussion would help me get it. I think my > lack of understanding is also with Kubernetes vs Docker swarm topic.
> I know there is something really exciting and big with NGINX and its > combination with F5 but I need some color to move my imagination into the > reality > > Lee Haddad > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289987,289987#msg-289987 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx
From o.garrett at f5.com Tue Nov 17 10:48:07 2020 From: o.garrett at f5.com (Owen Garrett) Date: Tue, 17 Nov 2020 10:48:07 +0000 Subject: Some Questions about NGINX and F5 In-Reply-To: <20201116173742.GA86527@FreeBSD.org.ru> References: <970686129366075d8413a7129add36c0.NginxMailingListEnglish@forum.nginx.org> <20201116173742.GA86527@FreeBSD.org.ru> Message-ID: <82F7F6A2-CC60-44D6-9F76-4518418E70D6@f5.com> Hi Lee, I can add some suggestions to help you find out more. We do try to keep the nginx mailing list for technical questions and announcements relating to NGINX open source; do DM me for further information. > 1) How does NGINX fit with F5 legacy solutions? For example, and I may be > wrong certain functions in BIG-IP seem to overlap some of the functions of > NGINX? Is it that BIG-IP is "more heavy duty"? There is overlap, and it's not correct to think of BIG-IP as more heavy-duty. NGINX can be equally or more capable. Different teams (IT and Apps) have different expectations for the products they use. BIG-IP and NGINX are complementary, and we often see both running together. Paradoxically, sometimes duplication leads to better efficiency: https://www.nginx.com/blog/deploying-application-services-in-kubernetes-part-1/ > 2) What happened with Heylu? I am not even sure what "Heylu" was supposed to > be. Any clarity on this? Take a look at NGINX Controller: https://www.nginx.com/products/nginx-controller > 3) I do not understand so well how NGINX and Docker work together - are they > complementary? Docker is a way to package and share applications in containers.
NGINX is very commonly used as part of these lightweight, portable applications. NGINX Ingress Controller https://github.com/nginxinc/kubernetes-ingress and NGINX Service Mesh https://github.com/nginxinc/nginx-service-mesh are NGINX-based solutions for docker containers running on Kubernetes and related platforms. Owen
On 16/11/2020, 17:38, "nginx on behalf of Sergey A. Osokin" wrote: EXTERNAL MAIL: nginx-bounces at nginx.org Hi Lee, hope you're doing well these days. Thanks for your questions in this public maillist. In most cases these questions are related to the commercial version of NGINX - NGINX Plus. Follow that, I'd recommend to contact the NGINX Sales Team, https://www.nginx.com/contact-sales/, to get more details about the products of interest. Thank you. -- Sergey Osokin On Mon, Nov 16, 2020 at 12:20:31PM -0500, leeahaddad wrote: > 1) How does NGINX fit with F5 legacy solutions? For example, and I may be > wrong certain functions in BIG-IP seem to overlap some of the functions of > NGINX? Is it that BIG-IP is "more heavy duty"? > 2) What happened with Heylu? I am not even sure what "Heylu" was supposed to > be. Any clarity on this? > 3) I do not understand so well how NGINX and Docker work together - are they > complementary? I read a lot of material and saw videos, but I think I am > missing something that a quick discussion would help me get it. I think my > lack of understanding is also with Kubernetes vs Docker swarm topic.
> I know there is something really exciting and big with NGINX and its > combination with F5 but I need some color to move my imagination into the > reality > > Lee Haddad > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289987,289987#msg-289987 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
From ng4rrjanbiah at rediffmail.com Tue Nov 17 14:36:21 2020 From: ng4rrjanbiah at rediffmail.com (R. Rajesh Jeba Anbiah) Date: 17 Nov 2020 14:36:21 -0000 Subject: Performance of Nginx as reverse proxy for Hasura - 7-50x slow In-Reply-To: <000501d6b9ad$95631df0$c02959d0$@roze.lv> Message-ID: <1605265837.S.4210.26921.f5-224-127.1605623781.17697@webmail.rediffmail.com> > > Recently noted that when proxying Hasura for the https support reduces the speed to 7-50x times! More information including tcpdump available in > > https://github.com/hasura/graphql-engine/discussions/6154 > > Looking at the github discussion - you are comparing http vs https. > Since you are not setting using keepalive 'ab' does the ssl handshake for each request. Try with 'ab -k ...'. (Apologies for the delay due to Diwali) As noted in https://github.com/hasura/graphql-engine/discussions/6154#discussioncomment-131629 your insights were really helpful. Keep-alive works for other REST services, but not for Hasura. (Keep-Alive requests: 0 vs Keep-Alive requests: 200 for other services). Does Keep-Alive have anything to do with the response headers of Hasura or its POST request? But, even with keep-alive, the usual performance of http vs https through nginx is around 50-100 times slower for any service. Are there any optimization approaches for the same? TIA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From r at roze.lv Tue Nov 17 16:27:03 2020 From: r at roze.lv (Reinis Rozitis) Date: Tue, 17 Nov 2020 18:27:03 +0200 Subject: Performance of Nginx as reverse proxy for Hasura - 7-50x slow In-Reply-To: <1605265837.S.4210.26921.f5-224-127.1605623781.17697@webmail.rediffmail.com> References: <000501d6b9ad$95631df0$c02959d0$@roze.lv> <1605265837.S.4210.26921.f5-224-127.1605623781.17697@webmail.rediffmail.com> Message-ID: <000001d6bcfe$7bfcd2d0$73f67870$@roze.lv> > Keep alive works for other REST services, but not working for Hasura. (Keep-Alive requests: 0 Vs Keep-Alive requests: 200 for other services). Is Keep-Alive anything to do with the response headers of Hasura or its POST request? It could be that the service/backend doesn't support (or doesn't want to, hence the header) keepalive connections. You might be better off asking Hasura support directly (I'm not familiar with the product/service). > But, even with keep-alive, usual performance of http vs https through nginx is around 50-100 times slow for any services. That is somewhat expected - depending on the protocol/ciphers and server resources, the ssl/tls handshake (for every request) can take a significant time. Here you can see some official tests (a bit dated but you'll get the picture) done by the nginx team themselves https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/ One additional thing to check for the benchmarks is what cipher 'ab' is actually using. You might get more precise (or better) results if you specify it with '-Z ciphersuite' (can be obtained from 'openssl ciphers'). Some ciphers require (significantly) more cpu resources, which can result in a slower network throughput. > Are there any optimization approaches for the same? TIA In general the default configuration values for ssl/tls in nginx are a middle ground - as in, they work for the majority.
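To illustrate what those defaults cover, a hedged sketch of the usual server-side starting points — the paths, server_name, and values here are examples to measure against on your own hardware, not tuned recommendations:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;                   # illustrative
    ssl_certificate     /etc/ssl/example.crt;  # illustrative paths
    ssl_certificate_key /etc/ssl/example.key;

    # Session reuse lets returning clients skip the full handshake:
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1h;

    # Restrict to modern protocols; benchmark ciphers with your real clients:
    ssl_protocols TLSv1.2 TLSv1.3;

    # Keepalive lets one handshake serve many requests (cf. 'ab -k'):
    keepalive_timeout 65;
}
```

Each of these trades memory or compatibility for handshake cost, which is why measuring with representative clients matters more than copying values.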
If your goal is to reduce latency/TTFB you might read https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency/ - there are custom patches which introduce some tuning options. Also, other ssl offloaders like Haproxy might be a better solution for this task. Another approach could be getting nginx commercial support. rr
From ng4rrjanbiah at rediffmail.com Wed Nov 18 05:42:02 2020 From: ng4rrjanbiah at rediffmail.com (R. Rajesh Jeba Anbiah) Date: 18 Nov 2020 05:42:02 -0000 Subject: Performance of Nginx as reverse proxy for Hasura - 7-50x slow In-Reply-To: <000001d6bcfe$7bfcd2d0$73f67870$@roze.lv> Message-ID: <1605630443.S.5698.5668.f5-224-114.1605678122.7805@webmail.rediffmail.com> <snip> Thank you so much for your wonderful insights. > In general the default configuration values for ssl/tls in nginx are a middle ground - as in those work for majority. Are these middle-ground defaults optimized for any particular hardware configuration? IOW, are there any hardware recommendations to get better performance? -------------- next part -------------- An HTML attachment was scrubbed... URL:
From nginx-forum at forum.nginx.org Wed Nov 18 06:37:29 2020 From: nginx-forum at forum.nginx.org (lujiangbin) Date: Wed, 18 Nov 2020 01:37:29 -0500 Subject: nginx-quic http3 reverse proxy problem Message-ID: hi, i am trying nginx-quic, using it as an http3 reverse proxy which sends an http request to the upstream. when i send an http3 request to the reverse proxy to retrieve a large file (1g), it does not work correctly - only part of the data is downloaded. i think there are some bugs in it; could you please check it out?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289999,289999#msg-289999 From pluknet at nginx.com Wed Nov 18 16:27:31 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 18 Nov 2020 16:27:31 +0000 Subject: nginx-quic http3 reverse proxy problem In-Reply-To: References: Message-ID: <9635A345-CD65-4FEA-B559-5B2FE9400927@nginx.com> > On 18 Nov 2020, at 06:37, lujiangbin wrote: > > hi, i am trying nginx-quic,i use it as a http3 reverse proxy, it will send a > http request to upstream. when i send http3 request to reverser proxy to > retrieve a large file(1g), it does not work fine, just download some part of > the data, i think there are some bugs in it, could you please check it out? Hi. To make progress on this, please provide some details: nginx-quic revision, debug log messages for a particular request that doesn't appear to work, and minimal nginx configuration. See how to obtain debug log: http://nginx.org/en/docs/debugging_log.html -- Sergey Kandaurov From nginx-forum at forum.nginx.org Thu Nov 19 18:28:57 2020 From: nginx-forum at forum.nginx.org (sachingp) Date: Thu, 19 Nov 2020 13:28:57 -0500 Subject: SSL Handshake Errors Message-ID: Hi - We are using Nginx as a reverse proxy with SSL as a termination point Call flow Network Load Balancer (TCP) --> Nginx(SSL Termination) --> Vertx Servers (HTTP) This is the config we use, fairly standard upstream xyz { server 127.0.0.1:8080; keepalive 4096; } server { listen 80; listen 443 ssl; ssl_certificate /etc/ssl/certs/bundle.crt; ssl_certificate_key /etc/ssl/private/nginx-digicert.key; # ssl_handshake_timeout 10s; ssl_session_cache shared:SSL:20m; ssl_session_timeout 4h; # ssl_handshake_timeout 30s; server_name _; root /usr/share/nginx/html; access_log /var/log/nginx/raps-access.log timed_combined buffer=8k flush=1m; #access_log off; # only log critical errors error_log /var/log/nginx/raps-error.log info; location / { proxy_pass http://xyz; proxy_pass_request_headers on; proxy_ssl_server_name on; 
proxy_http_version 1.1; proxy_ssl_session_reuse on; proxy_set_header Host $host; proxy_set_header Connection ""; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } We see a lot of SSL handshake errors 2020/11/19 18:28:08 [info] 5784#0: *5771518 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.196, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5786#0: *5771519 peer closed connection in SSL handshake while SSL handshaking, client: 158.85.210.39, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771520 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.201, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5786#0: *5771521 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.198, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771522 peer closed connection in SSL handshake while SSL handshaking, client: 169.54.155.4, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771524 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.202, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5784#0: *5771525 peer closed connection in SSL handshake while SSL handshaking, client: 158.85.210.39, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5784#0: *5771527 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.212, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5786#0: *5771528 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.202, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5783#0: *5771526 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.212, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771529 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.204, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771530 peer closed connection in SSL handshake 
while SSL handshaking, client: 169.54.155.82, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771531 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.216, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771533 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.201, server: 0.0.0.0:443 Mostly this code 2020/11/19 18:15:00 [debug] 5525#0: *5703427 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703640 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5525#0: *5703079 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5525#0: *5702872 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703173 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703406 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703705 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703764 SSL_get_error: 5 2020/11/19 18:15:00 [debug] 5524#0: *5703765 SSL_get_error: 5 2020/11/19 18:15:00 [debug] 5525#0: *5703766 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5525#0: *5703632 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703406 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5523#0: *5703177 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5523#0: *5703357 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703173 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5523#0: *5703627 SSL_get_error: 2 Please share your experience or thoughts asap Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290009,290009#msg-290009 From nginx-forum at forum.nginx.org Thu Nov 19 18:31:23 2020 From: nginx-forum at forum.nginx.org (sachingp) Date: Thu, 19 Nov 2020 13:31:23 -0500 Subject: SSL Handshake Errors Message-ID: <2a0f9e6791f651ce16cacc1e36934813.NginxMailingListEnglish@forum.nginx.org> Hi - We are using Nginx as a reverse proxy with SSL as a termination point Call flow Network Load Balancer (TCP) --> Nginx(SSL Termination) --> Vertx Servers (HTTP) This is the config we use, fairly standard upstream xyz { server 
127.0.0.1:8080; keepalive 4096; } server { listen 80; listen 443 ssl; ssl_certificate /etc/ssl/certs/bundle.crt; ssl_certificate_key /etc/ssl/private/nginx-digicert.key; # ssl_handshake_timeout 10s; ssl_session_cache shared:SSL:20m; ssl_session_timeout 4h; # ssl_handshake_timeout 30s; server_name _; root /usr/share/nginx/html; access_log /var/log/nginx/raps-access.log timed_combined buffer=8k flush=1m; #access_log off; # only log critical errors error_log /var/log/nginx/raps-error.log info; location / { proxy_pass http://xyz; proxy_pass_request_headers on; proxy_ssl_server_name on; proxy_http_version 1.1; proxy_ssl_session_reuse on; proxy_set_header Host $host; proxy_set_header Connection ""; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } We see a lot of SSL handshake errors 2020/11/19 18:28:08 [info] 5784#0: *5771518 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.196, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5786#0: *5771519 peer closed connection in SSL handshake while SSL handshaking, client: 158.85.210.39, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771520 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.201, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5786#0: *5771521 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.198, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771522 peer closed connection in SSL handshake while SSL handshaking, client: 169.54.155.4, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771524 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.202, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5784#0: *5771525 peer closed connection in SSL handshake while SSL handshaking, client: 158.85.210.39, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5784#0: *5771527 peer closed connection in SSL handshake while SSL 
handshaking, client: 169.53.151.212, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5786#0: *5771528 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.202, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5783#0: *5771526 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.212, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771529 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.204, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771530 peer closed connection in SSL handshake while SSL handshaking, client: 169.54.155.82, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771531 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.216, server: 0.0.0.0:443 2020/11/19 18:28:08 [info] 5785#0: *5771533 peer closed connection in SSL handshake while SSL handshaking, client: 169.53.151.201, server: 0.0.0.0:443 Mostly this code 2020/11/19 18:15:00 [debug] 5525#0: *5703427 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703640 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5525#0: *5703079 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5525#0: *5702872 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703173 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703406 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703705 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703764 SSL_get_error: 5 2020/11/19 18:15:00 [debug] 5524#0: *5703765 SSL_get_error: 5 2020/11/19 18:15:00 [debug] 5525#0: *5703766 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5525#0: *5703632 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703406 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5523#0: *5703177 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5523#0: *5703357 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5524#0: *5703173 SSL_get_error: 2 2020/11/19 18:15:00 [debug] 5523#0: *5703627 SSL_get_error: 2 Please share your experience 
or thoughts asap Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290010,290010#msg-290010 From teward at thomas-ward.net Thu Nov 19 18:52:01 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 19 Nov 2020 13:52:01 -0500 Subject: SSL Handshake Errors In-Reply-To: <2a0f9e6791f651ce16cacc1e36934813.NginxMailingListEnglish@forum.nginx.org> References: <2a0f9e6791f651ce16cacc1e36934813.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1496a85c-34ad-b55f-dc87-70079ccc7942@thomas-ward.net> Provide SSL logs from the client side - if you can, use OpenSSL and its `s_client` tool or similar to get the actual SSL handshake errors/logs. Chances are something's wrong with the handshake or your cert. (Since I can't scan your infra directly myself, you'll have to get detailed SSL connection information to the NGINX server first using some other tool.) On 11/19/20 1:31 PM, sachingp wrote: > Hi - We are using Nginx as a reverse proxy with SSL as a termination point > > Call flow > > Network Load Balancer (TCP) --> Nginx(SSL Termination) --> Vertx Servers > (HTTP) > > This is the config we use, fairly standard > > upstream xyz { > server 127.0.0.1:8080; > keepalive 4096; > } > > server { > listen 80; > listen 443 ssl; > ssl_certificate /etc/ssl/certs/bundle.crt; > ssl_certificate_key /etc/ssl/private/nginx-digicert.key; > # ssl_handshake_timeout 10s; > ssl_session_cache shared:SSL:20m; > ssl_session_timeout 4h; > # ssl_handshake_timeout 30s; > server_name _; > root /usr/share/nginx/html; > access_log /var/log/nginx/raps-access.log timed_combined buffer=8k > flush=1m; > #access_log off; > > # only log critical errors > error_log /var/log/nginx/raps-error.log info; > > location / { > proxy_pass http://xyz; > proxy_pass_request_headers on; > proxy_ssl_server_name on; > proxy_http_version 1.1; > proxy_ssl_session_reuse on; > proxy_set_header Host $host; > proxy_set_header Connection ""; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header
X-Forwarded-For $proxy_add_x_forwarded_for; > } > > > We see a lot of SSL handshake errors > > 2020/11/19 18:28:08 [info] 5784#0: *5771518 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.196, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5786#0: *5771519 peer closed connection in SSL > handshake while SSL handshaking, client: 158.85.210.39, server: 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5785#0: *5771520 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.201, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5786#0: *5771521 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.198, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5785#0: *5771522 peer closed connection in SSL > handshake while SSL handshaking, client: 169.54.155.4, server: 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5785#0: *5771524 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.202, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5784#0: *5771525 peer closed connection in SSL > handshake while SSL handshaking, client: 158.85.210.39, server: 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5784#0: *5771527 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.212, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5786#0: *5771528 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.202, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5783#0: *5771526 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.212, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5785#0: *5771529 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.204, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5785#0: *5771530 peer closed connection in SSL > handshake while SSL handshaking, client: 169.54.155.82, server: 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 
5785#0: *5771531 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.216, server: > 0.0.0.0:443 > 2020/11/19 18:28:08 [info] 5785#0: *5771533 peer closed connection in SSL > handshake while SSL handshaking, client: 169.53.151.201, server: > 0.0.0.0:443 > > > > Mostly this code > > > 2020/11/19 18:15:00 [debug] 5525#0: *5703427 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5524#0: *5703640 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5525#0: *5703079 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5525#0: *5702872 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5524#0: *5703173 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5524#0: *5703406 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5524#0: *5703705 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5524#0: *5703764 SSL_get_error: 5 > 2020/11/19 18:15:00 [debug] 5524#0: *5703765 SSL_get_error: 5 > 2020/11/19 18:15:00 [debug] 5525#0: *5703766 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5525#0: *5703632 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5524#0: *5703406 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5523#0: *5703177 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5523#0: *5703357 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5524#0: *5703173 SSL_get_error: 2 > 2020/11/19 18:15:00 [debug] 5523#0: *5703627 SSL_get_error: 2 > > > Please share your experience or thoughts asap > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290010,290010#msg-290010 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Thu Nov 19 19:03:20 2020 From: nginx-forum at forum.nginx.org (sachingp) Date: Thu, 19 Nov 2020 14:03:20 -0500 Subject: SSL Handshake Errors In-Reply-To: <1496a85c-34ad-b55f-dc87-70079ccc7942@thomas-ward.net> References: <1496a85c-34ad-b55f-dc87-70079ccc7942@thomas-ward.net> Message-ID: <9f828e394072e64c4ce344a02f80bc52.NginxMailingListEnglish@forum.nginx.org> Hi Thomas - We are using DigiCert. I don't have access to the client logs; what more can I do to dig deeper? Sachin Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290009,290013#msg-290013 From teward at thomas-ward.net Thu Nov 19 19:23:57 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 19 Nov 2020 14:23:57 -0500 Subject: SSL Handshake Errors In-Reply-To: <9f828e394072e64c4ce344a02f80bc52.NginxMailingListEnglish@forum.nginx.org> References: <1496a85c-34ad-b55f-dc87-70079ccc7942@thomas-ward.net> <9f828e394072e64c4ce344a02f80bc52.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0c4619f4-8d42-a89a-8be5-71e788e196f7@thomas-ward.net> Is your nginx system a Linux one? If so, then you can do something like this: `openssl s_client -connect localhost:443` from the nginx box and see what handshake errors you're getting. Thomas On 11/19/20 2:03 PM, sachingp wrote: > Hi Thomas - We are using DigiCert. I don't have access to the client logs; > what more can I do to dig deeper? > > Sachin > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290009,290013#msg-290013 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Thu Nov 19 19:29:13 2020 From: nginx-forum at forum.nginx.org (sachingp) Date: Thu, 19 Nov 2020 14:29:13 -0500 Subject: SSL Handshake Errors In-Reply-To: <0c4619f4-8d42-a89a-8be5-71e788e196f7@thomas-ward.net> References: <0c4619f4-8d42-a89a-8be5-71e788e196f7@thomas-ward.net> Message-ID: <43108feb2df91ee0768f1ffccf445ce8.NginxMailingListEnglish@forum.nginx.org> Thomas - Executed o ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: BEEDB7A167EC486D98CFAFBDA541DDB308B27BFCF9D5732599DEDB1A3E2D45B2 Session-ID-ctx: Master-Key: 5E7AD6C866CEAAC0AE0868858ADDE392406533185DFD5CE7BCA7E12E7FE6A520B67254B334A93CFAB88C3069EFBFAEA0 Key-Arg : None Krb5 Principal: None PSK identity: None PSK identity hint: None TLS session ticket lifetime hint: 14400 (seconds) TLS session ticket: 0000 - 54 20 6b 35 3b b3 20 b1-81 05 52 7b 23 e5 ae 63 T k5;. ...R{#..c 0010 - aa bd 85 a9 ea 74 f7 7a-80 89 da 38 92 4f 70 09 .....t.z...8.Op. 0020 - 24 e3 a1 17 07 32 1d d6-b8 77 a6 7b 35 e4 7b a9 $....2...w.{5.{. 0030 - c0 a4 ca 13 0c 4e fd 09-45 32 6e 78 df 02 5b ca .....N..E2nx..[. 0040 - b5 b4 d1 52 8d 9a cc 0b-c5 c3 dc 1f d0 70 b0 af ...R.........p.. 0050 - 75 41 d1 d2 32 99 41 23-ef 2d 11 10 21 90 f1 22 uA..2.A#.-..!.." 0060 - 04 04 c7 d5 29 8d 50 47-af ef d6 ef 77 02 25 6b ....).PG....w.%k 0070 - 59 45 6c 24 b6 1f 20 46-8a c3 60 87 9f 3e c2 5d YEl$.. F..`..>.] 0080 - 97 79 2f 24 6a a2 f1 ab-3a 7b 49 89 1b 38 74 20 .y/$j...:{I..8t 0090 - 2e e2 e6 17 07 4b 19 91-80 82 e6 5f d0 88 8a da .....K....._.... 
00a0 - c9 d9 27 d4 9c 4f 32 73-cc 73 a8 8c 1d 92 cb 7b ..'..O2s.s.....{ Start Time: 1605814076 Timeout : 300 (sec) Verify return code: 0 (ok) Further, I see issues with some percentage of requests, not all. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290009,290017#msg-290017 From gfrankliu at gmail.com Thu Nov 19 22:06:46 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 19 Nov 2020 14:06:46 -0800 Subject: nginx vulnerability Message-ID: Hi, CVE-2019-20372 mentioned a security vulnerability, but I don't see it in http://nginx.org/en/security_advisories.html Does that mean CVE-2019-20372 is not considered a security vulnerability by nginx? Or is it because nginx standard config won't be vulnerable, and users have to enable error_log in order to be vulnerable? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Nov 19 23:37:14 2020 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 20 Nov 2020 02:37:14 +0300 Subject: Unit 1.21.0 release Message-ID: <2895579.CbtlEUcBR6@vbart-laptop> Hi, I'm glad to announce a new release of NGINX Unit. Our two previous releases were thoroughly packed with new features and capabilities, but Unit 1.21.0 isn't an exception either. This is our third big release in a row, with only six weeks since the previous one! Perhaps the most notable feature of this release is the support for multithreaded request handling in application processes. Now, you can fine-tune the number of threads used for request handling in each application process; this improves scaling and optimizes memory usage. As a result, your apps can use a combination of multiple processes and multiple threads per process for truly dynamic scaling; the feature is available for any Java, Python, Perl, or Ruby apps out of the box without any need to update their code.
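[Editor's sketch] A process/thread combination like the one described above might look as follows in Unit's JSON configuration; the application name, type, and paths here are invented for illustration, and the "processes"/"threads" option names follow the Unit documentation linked in this announcement:

```json
{
    "applications": {
        "demo-app": {
            "type": "python",
            "path": "/srv/demo",
            "module": "wsgi",
            "processes": 2,
            "threads": 4
        }
    }
}
```

A layout like this would give 2 processes x 4 threads = 8 concurrent request handlers, with each process's memory shared among its threads.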
Moreover, if you make use of ASGI support in Unit (introduced in the previous release), each thread of each process of your application can run asynchronously. Pretty neat, huh? To configure the number of threads per process, use the "threads" option of the application object: - https://unit.nginx.org/configuration/#applications Yet another cool feature is the long-awaited support for regular expressions. In Unit, they enable granular request filtering and routing via our compound matching rules; now, with PCRE syntax available, your request matching capabilities are limited only by your imagination. For details and examples, see our documentation: - https://unit.nginx.org/configuration/#routes Changes with Unit 1.21.0 19 Nov 2020 *) Change: procfs is mounted by default for all languages when "rootfs" isolation is used. *) Change: any characters valid according to RFC 7230 are now allowed in HTTP header field names. *) Change: HTTP header fields with underscores ("_") are now discarded from requests by default. *) Feature: optional multithreaded request processing for Java, Python, Perl, and Ruby apps. *) Feature: regular expressions in route matching patterns. *) Feature: compatibility with Python 3.9. *) Feature: the Python module now supports ASGI 2.0 legacy applications. *) Feature: the "protocol" option in Python applications aids choice between ASGI and WSGI. *) Feature: the fastcgi_finish_request() PHP function that finalizes request processing and continues code execution without holding onto the client connection. *) Feature: the "discard_unsafe_fields" HTTP option that enables discarding request header fields with irregular (but still valid) characters in the field name. *) Feature: the "procfs" and "tmpfs" automount isolation options to disable automatic mounting of eponymous filesystems. *) Bugfix: the router process could crash when running Go applications under high load; the bug had appeared in 1.19.0. 
*) Bugfix: some language dependencies could remain mounted after using "rootfs" isolation. *) Bugfix: various compatibility issues in Java applications. *) Bugfix: the Java module built with the musl C library couldn't run applications that use "rootfs" isolation. Also, packages for Ubuntu 20.10 "Groovy" are available in our repositories: - https://unit.nginx.org/installation/#ubuntu-2010 Thanks to Sergey Osokin, the FreeBSD port of Unit now provides an almost exhaustive set of language modules: - https://www.freshports.org/www/unit/ We encourage you to follow our roadmap on GitHub, where your ideas and requests are always more than welcome: - https://github.com/orgs/nginx/projects/1 Stay tuned! wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Fri Nov 20 10:47:49 2020 From: nginx-forum at forum.nginx.org (duckyrain) Date: Fri, 20 Nov 2020 05:47:49 -0500 Subject: why nginx worker process listen in port 80, not master process? In-Reply-To: <68347349.84ec.17381016e44.Coremail.zhengyupann@163.com> References: <68347349.84ec.17381016e44.Coremail.zhengyupann@163.com> Message-ID: It's not nginx's problem?`netstat` only displayed the first filtered pid?change `ss -ltp | grep nginx` to display listen process. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288851,290022#msg-290022 From nginx-forum at forum.nginx.org Sun Nov 22 19:24:34 2020 From: nginx-forum at forum.nginx.org (leeahaddad) Date: Sun, 22 Nov 2020 14:24:34 -0500 Subject: Some Questions about NGINX and F5 In-Reply-To: <82F7F6A2-CC60-44D6-9F76-4518418E70D6@f5.com> References: <82F7F6A2-CC60-44D6-9F76-4518418E70D6@f5.com> Message-ID: Dear Owen, Thank you so much!!!!. You helped me with this. 
All the best Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289987,290024#msg-290024 From nginx-forum at forum.nginx.org Mon Nov 23 10:59:49 2020 From: nginx-forum at forum.nginx.org (meniem) Date: Mon, 23 Nov 2020 05:59:49 -0500 Subject: Nginx proxy for an endpoint that redirect automatically to another path Message-ID: <95591cd153aae4d9375d1c80174a1068.NginxMailingListEnglish@forum.nginx.org> I'm trying to set up an Nginx proxy that redirects all requests from provider.domain.com to proxy.appname.com/provider (where proxy.appname.com is the server_name in the nginx server). The configuration of the server and location blocks is fine, but the issue is that provider.domain.com automatically redirects to provider.domain.com/broker/login.php (when I hit provider.domain.com from the browser, it automatically takes me to provider.domain.com/broker/login.php), which gives an error with the nginx proxy when trying to redirect to a non-existent page (proxy.appname.com/provider redirects to proxy.appname.com/broker/login.php -> which does not exist). location /provider/ { proxy_pass https://provider.domain.com; proxy_set_header X-Forwarded-Host $server_name; proxy_set_header Host proxy.appname.com; error_log /var/log/nginx/appname.log debug; proxy_set_header Accept-Encoding text/xml; } So, how can I set this to redirect all requests from provider.domain.com automatically to proxy.appname.com/provider? (We don't need to mention the full path in proxy_pass, as we have multiple endpoints to hit from the provider link, such as api, etc.)
I have also tried to follow the redirect with the below, but it didn't work out: proxy_intercept_errors on; error_page 301 302 307 = @handle_redirects; location @handle_redirects { set $orig_loc $upstream_http_location; proxy_pass $orig_loc; } } Appreciate your help on this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290027,290027#msg-290027 From mdounin at mdounin.ru Mon Nov 23 13:59:57 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Nov 2020 16:59:57 +0300 Subject: Nginx proxy for an endpoint that redirect automatically to another path In-Reply-To: <95591cd153aae4d9375d1c80174a1068.NginxMailingListEnglish@forum.nginx.org> References: <95591cd153aae4d9375d1c80174a1068.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201123135957.GJ1147@mdounin.ru> Hello! On Mon, Nov 23, 2020 at 05:59:49AM -0500, meniem wrote: > I'm trying to setup an Nginx proxy that redirect all requests from > provider.domain.com to proxy.appname.com/provider (where proxy.appname.com > is the server_name in nginx server) > > The configuration of server and location blocks are fine, but the issue is > that the provider.domain.com is automatically redirecting to > provider.domain.com/broker/login.php (when I hit the provider.domain.com > from the browser, it's automatically taking me to > provider.domain.com/broker/login.php) which give an error with nginx proxy > when trying to redirect to a non-existent page (proxy.appname.com/provider > redirect to proxy.appname.com/broker/login.php -> which does not exist) > > > location /provider/ { > proxy_pass https://provider.domain.com; > proxy_set_header X-Forwarded-Host $server_name; > proxy_set_header Host proxy.appname.com; > error_log /var/log/nginx/appname.log debug; > proxy_set_header Accept-Encoding text/xml; > } > > > > So, how can I set this to redirect all requests from provider.domain.com > automatically to > proxy.appname.com/provider (we don't need to mention the full path in > proxy_pass; as we have multiple endpoints
to hi from the provider link such > as api?etc.) Check how redirects are returned, and configure proxy_redirect appropriately. See http://nginx.org/r/proxy_redirect for details. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Nov 23 16:17:26 2020 From: nginx-forum at forum.nginx.org (neodjandre) Date: Mon, 23 Nov 2020 11:17:26 -0500 Subject: Correct Implementation of Ddos protection Message-ID: may main /etc/nginx/nginx.conf file reads: user www-data; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 100000; events { worker_connections 2048; multi_accept on; } http { ## # Basic Settings ## client_header_buffer_size 2k; large_client_header_buffers 2 1k; client_body_buffer_size 10M; client_max_body_size 10M; client_body_timeout 12; client_header_timeout 12; keepalive_timeout 15; send_timeout 10; limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m; limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=50r/s; server { limit_conn conn_limit_per_ip 10; limit_req zone=req_limit_per_ip burst=10 nodelay; } sendfile on; tcp_nopush on; tcp_nodelay on; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } My websites server blocks are included within: include /etc/nginx/sites-enabled/*; Is the server block defined above going to supersede the server blocks in my sites-enabled so that DDOS protection will work as expected? many thanks Andrew Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290029,290029#msg-290029 From mdounin at mdounin.ru Tue Nov 24 15:18:58 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Nov 2020 18:18:58 +0300 Subject: nginx-1.19.5 Message-ID: <20201124151858.GL1147@mdounin.ru> Changes with nginx 1.19.5 24 Nov 2020 *) Feature: the -e switch. 
*) Feature: the same source files can now be specified in different modules while building addon modules. *) Bugfix: SSL shutdown did not work when lingering close was used. *) Bugfix: "upstream sent frame for closed stream" errors might occur when working with gRPC backends. *) Bugfix: in request body filters internal API. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Nov 24 15:37:54 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Nov 2020 18:37:54 +0300 Subject: nginx vulnerability In-Reply-To: References: Message-ID: <20201124153754.GP1147@mdounin.ru> Hello! On Thu, Nov 19, 2020 at 02:06:46PM -0800, Frank Liu wrote: > CVE-2019-20372 mentioned a security vulnerability, but I don't see it in > http://nginx.org/en/security_advisories.html > Does that mean CVE-2019-20372 is not considered a security vulnerability by > nginx? Or is it because nginx standard config won't be vulnerable, and > users have to enable error_log in order to be vulnerable? The CVE-2019-20372 corresponds to the following bugfix in nginx 1.17.7: *) Bugfix: requests with bodies were handled incorrectly when returning redirections with the "error_page" directive; the bug had appeared in 0.7.12. It only affects rarely used configurations with error_page returning redirects by itself, that is, configurations with "error_page ... http://...". Further, it can only have any security impact if nginx is used behind another HTTP proxy, and the configuration relies on security checks on this proxy. Given the above, it is not considered to be a security issue, but rather treated as a bug. This bug is already fixed in all supported nginx versions. -- Maxim Dounin http://mdounin.ru/ From sca at andreasschulze.de Tue Nov 24 16:46:43 2020 From: sca at andreasschulze.de (A. 
Schulze) Date: Tue, 24 Nov 2020 17:46:43 +0100 Subject: one client "floods" nginx errorlog Message-ID: <4db647ce-d46a-3116-9737-7f4d8bb78242@andreasschulze.de> Hello, I run a nginx instance handling only TLS1.2 and TLS1.3. Now I noticed a remote client hammering (OK, once per second) with an SSLv2 connection and thus filling the log: 2020/11/24 17:37:08 [info] 383#0: *11 SSL_do_handshake() failed (SSL: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol) while SSL handshaking, client: 87.138.121.xx, server: 0.0.0.0:443 That's annoying. Besides blocking that IP in a firewall, is there a smart way to just prevent the log entry? Thanks! Andreas From kaushalshriyan at gmail.com Tue Nov 24 18:10:53 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Tue, 24 Nov 2020 23:40:53 +0530 Subject: a duplicate default server for 0.0.0.0:80 in /etc/nginx/nginx.conf:39 Message-ID: Hi, I have two nginx config files: /etc/nginx/conf.d/onetest.conf and /etc/nginx/nginx.conf. Basically the first config file works without any issue while redirecting from port 80 to 443. I want to enable the port redirect from port 80 to 443 in /etc/nginx/nginx.conf.
when I add the below block in /etc/nginx/nginx.conf, I am facing *a duplicate default server for 0.0.0.0:80 in /etc/nginx/nginx.conf:39*: server { listen 80 default_server; server_name abtddeveloperportal.mydomain.com; return 301 https://$server_name$request_uri; } ####################################cat /etc/nginx/conf.d/onetest.conf###################################################################### cat /etc/nginx/conf.d/onetest.conf server { listen 80 default_server; server_name onetest.mydomain.io; return 301 https://$server_name$request_uri; } server { listen 443 ssl; server_name onetest.mydomain.io; ssl_protocols TLSv1.3 TLSv1.2; ssl_certificate /etc/ssl/onetest.mydomain.io/fullchain1.pem; ssl_certificate_key /etc/ssl/onetest.mydomain.io/privkey1.pem; ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; ssl_prefer_server_ciphers on; ssl_dhparam /etc/ssl/onetest.mydomain.io/dhparam.pem; client_max_body_size 100M; root /var/www/newtheme/testuatplace-v2/mpV2/web/; location = /favicon.ico { log_not_found off; access_log off; } ##################################################################################################################################### ####################################cat /etc/nginx/nginx.conf###################################################################### server { listen 80 default_server; server_name abtddeveloperportal.mydomain.com; return 301 https://$server_name$request_uri; } server { listen 443 ssl; server_name abtddeveloperportal.mydomain.com; ssl_protocols TLSv1.3 TLSv1.2; ssl_certificate /etc/ssl/abtddeveloperportal.mydomain.com/fullchain3.pem; ssl_certificate_key /etc/ssl/abtddeveloperportal.example.com/privkey3.pem; ssl_ciphers
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; ssl_prefer_server_ciphers on; ssl_dhparam /etc/ssl/abtddeveloperportal.mydomain.com/dhparam.pem; client_max_body_size 100M; #listen [::]:80 default_server; root /var/www/drupal/testplace-v2/mpV2/web; ####################################################################################################################################################### #nginx -t -c /etc/nginx/nginx.conf nginx: [emerg] a duplicate default server for 0.0.0.0:80 in /etc/nginx/nginx.conf:39 nginx: configuration file /etc/nginx/nginx.conf test failed Is there a way to enable the redirect from port 80 to 443 for both /etc/nginx/conf.d/onetest.conf and /etc/nginx/nginx.conf files? Any help will be highly appreciated. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From j at fuldgroup.com Tue Nov 24 21:17:59 2020 From: j at fuldgroup.com (j at fuldgroup.com) Date: Tue, 24 Nov 2020 13:17:59 -0800 Subject: Build Issue on Ubuntu and Linux following instructions on blog post https://www.nginx.com/blog/video-streaming-for-remote-learning-with-nginx/ Message-ID: <016a01d6c2a7$4991b0a0$dcb511e0$@fuldgroup.com> Hi All, Found an issue within the build of nginx.
Following James' instructions on https://www.nginx.com/blog/video-streaming-for-remote-learning-with-nginx/ : mkdir buildnginx cd buildnginx sudo git clone https://github.com/arut/nginx-rtmp-module.git sudo git clone https://github.com/nginx/nginx.git cd nginx sudo ./auto/configure --add-module=../nginx-rtmp-module sudo make sudo make install At the make command, the following error occurs: -I objs -I src/http -I src/http/modules \ -o objs/addon/nginx-rtmp-module/ngx_rtmp_eval.o \ ../nginx-rtmp-module/ngx_rtmp_eval.c ../nginx-rtmp-module/ngx_rtmp_eval.c: In function 'ngx_rtmp_eval': ../nginx-rtmp-module/ngx_rtmp_eval.c:160:17: error: this statement may fall through [-Werror=implicit-fallthrough=] 160 | switch (c) { | ^~~~~~ ../nginx-rtmp-module/ngx_rtmp_eval.c:170:13: note: here 170 | case ESCAPE: | ^~~~ cc1: all warnings being treated as errors make[1]: *** [objs/Makefile:1339: objs/addon/nginx-rtmp-module/ngx_rtmp_eval.o] Error 1 make[1]: Leaving directory '/buildnginx/nginx' make: *** [Makefile:8: build] Error 2 Reading through the ngx_rtmp_eval source code at lines 160 through 170, the switch statement will most likely fall through, so the build warning callout is correct. This occurs on both Ubuntu and Raspbian versions of Linux. Having watched James at https://www.youtube.com/watch?v=Js1OlvRNsdI, there was no problem with his build. Questions: 1. How do I resolve? * I could try make -k; * I could remove the -Werror flag in the make file (I did not see it in the make file - could be my eyes, though, so if you know where it is please tell me) * Who owns ngx_rtmp_eval, so that I can contact them to fix the source code? Any and all advice would be greatly appreciated. Thanks, JF -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pluknet at nginx.com Tue Nov 24 22:04:51 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 24 Nov 2020 22:04:51 +0000 Subject: Build Issue on Ubuntu and Linux following instructions on blog post https://www.nginx.com/blog/video-streaming-for-remote-learning-with-nginx/ In-Reply-To: <016a01d6c2a7$4991b0a0$dcb511e0$@fuldgroup.com> References: <016a01d6c2a7$4991b0a0$dcb511e0$@fuldgroup.com> Message-ID: <1F2721EE-490E-4007-96EA-4D44FA0D2D3A@nginx.com> > On 24 Nov 2020, at 21:17, j at fuldgroup.com wrote: > > Hi All, > > Found an issue within the build of nginx. > > Following James' instructions on https://www.nginx.com/blog/video-streaming-for-remote-learning-with-nginx/ : > mkdir buildnginx > cd buildnginx > sudo git clone https://github.com/arut/nginx-rtmp-module.git > sudo git clone https://github.com/nginx/nginx.git > cd nginx > sudo ./auto/configure --add-module=../nginx-rtmp-module > sudo make > sudo make install > > At the make command, the following error occurs: > > -I objs -I src/http -I src/http/modules \ > -o objs/addon/nginx-rtmp-module/ngx_rtmp_eval.o \ > ../nginx-rtmp-module/ngx_rtmp_eval.c > ../nginx-rtmp-module/ngx_rtmp_eval.c: In function 'ngx_rtmp_eval': > ../nginx-rtmp-module/ngx_rtmp_eval.c:160:17: error: this statement may fall through [-Werror=implicit-fallthrough=] > 160 | switch (c) { > | ^~~~~~ > ../nginx-rtmp-module/ngx_rtmp_eval.c:170:13: note: here > 170 | case ESCAPE: > | ^~~~ > cc1: all warnings being treated as errors > make[1]: *** [objs/Makefile:1339: objs/addon/nginx-rtmp-module/ngx_rtmp_eval.o] Error 1 > make[1]: Leaving directory '/buildnginx/nginx' > make: *** [Makefile:8: build] Error 2 > > Reading through the ngx_rtmp_eval source code at lines 160 through 170, the switch statement will most likely fall through, so the build warning callout is correct. > > This occurs on both Ubuntu and Raspbian versions of Linux.
> > Having watched James at https://www.youtube.com/watch?v=Js1OlvRNsdI, there was no problem with his build. > > Questions: > 1. How do I resolve? > * I could try make -k; > * I could remove the -Werror flag in the make file (I did not see it in the make file - could be my eyes, though, so if you know where it is please tell me) > * Who owns ngx_rtmp_eval, so that I can contact them to fix the source code? > nginx-rtmp-module is a third-party nginx module. You may want to report the build issue to the module author(s). This is a new error reported by recent gcc versions, which is why it may not trigger in other environments. To work around it, you can specify -Wno-error=implicit-fallthrough= in the --with-cc-opt= configuration option to ignore that error. -- Sergey Kandaurov From gfrankliu at gmail.com Wed Nov 25 04:23:08 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 24 Nov 2020 20:23:08 -0800 Subject: client disconnects Message-ID: Hi, When a client disconnects (initiated tcp FIN), shouldn't we see 499 in nginx logs? But sometimes I see 400, along with the below in the error log: *2314539 client prematurely closed connection, client: x.x.x.x, Since I don't see "while reading client request headers" in the error log, I assume the request is already received (or maybe "body" hasn't arrived fully yet?) when the client disconnects. When would nginx log 400 instead of 499? The error log ("info" level) is not clear about why 400, other than "client prematurely closed connection, ". Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From lrntbs at gmail.com Wed Nov 25 10:54:57 2020 From: lrntbs at gmail.com (Laurent Bois) Date: Wed, 25 Nov 2020 11:54:57 +0100 Subject: Problem while replacing a certificate by a new one, nginx sees the old one. Message-ID: <31CA72DB-31BD-4AE2-A24D-985D42B74B73@gmail.com> Hi, I have a problem on a Windows 2012 server, with an SSL certificate on nginx.
We've replaced an old self-signed certificate with a new one ready for production, and we've run into a problem: even after a restart, nginx sees the old certificate. I think this problem is similar to renewing a certificate, and I found some articles about this. But after several tries with tricks, we haven't solved the problem. What I understand is that I should store the ssl certificate under a subfolder of the nginx home and modify the path in the conf file. Am I right? Thanks for your help Laurent Bois lrntbs at gmail.com Tel: 06 61 64 30 75 From mdounin at mdounin.ru Wed Nov 25 13:34:33 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Nov 2020 16:34:33 +0300 Subject: client disconnects In-Reply-To: References: Message-ID: <20201125133433.GQ1147@mdounin.ru> Hello! On Tue, Nov 24, 2020 at 08:23:08PM -0800, Frank Liu wrote: > When a client disconnects (initiated tcp FIN), shouldn't we see 499 in > nginx logs? But sometimes I see 400, along with below in error log: > *2314539 client prematurely closed connection, client: x.x.x.x, > > Since I don't see "while reading client request headers" in the error log, > I assume the request is already received (or maybe "body" hasn't arrived fully yet?) when client disconnects. > > When would nginx log 400 instead of 499? The error log ("info" level) is > not clear about why 400, other than "client prematurely closed connection, > ". The 400 error along with "client prematurely closed connection" and without a "while ..." clause implies that the connection was closed while reading the request body. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Nov 25 14:43:51 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Nov 2020 17:43:51 +0300 Subject: one client "floods" nginx errorlog In-Reply-To: <4db647ce-d46a-3116-9737-7f4d8bb78242@andreasschulze.de> References: <4db647ce-d46a-3116-9737-7f4d8bb78242@andreasschulze.de> Message-ID: <20201125144351.GT1147@mdounin.ru> Hello!
On Tue, Nov 24, 2020 at 05:46:43PM +0100, A. Schulze wrote: > I run an nginx instance handling only TLS1.2 and TLS1.3. > Now I noticed a remote client hammering it (Ok, once per second) with an SSLv2 connection and thus filling the log: > > 2020/11/24 17:37:08 [info] 383#0: *11 SSL_do_handshake() failed (SSL: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol) while SSL handshaking, client: 87.138.121.xx, server: 0.0.0.0:443 > > That's annoying. > Besides blocking that IP in a firewall, is there a smart way to just prevent the log entry? Much like any log lines easily triggered by misbehaving clients, these can be hidden by using a higher log level, such as "notice", see http://nginx.org/r/error_log. -- Maxim Dounin http://mdounin.ru/ From gfrankliu at gmail.com Wed Nov 25 19:43:59 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 25 Nov 2020 11:43:59 -0800 Subject: client disconnects In-Reply-To: <20201125133433.GQ1147@mdounin.ru> References: <20201125133433.GQ1147@mdounin.ru> Message-ID: Hi, Thanks for the clarification! What would we see if the client disconnects in the middle of nginx sending the response? Will there be a "while .." clause in the error log? Will the http status code be reset to 499? Thanks! Frank On Wed, Nov 25, 2020 at 5:34 AM Maxim Dounin wrote: > Hello! > > On Tue, Nov 24, 2020 at 08:23:08PM -0800, Frank Liu wrote: > > > When a client disconnects (initiated tcp FIN), shouldn't we see 499 in > > nginx logs? But sometimes I see 400, along with below in error log: > > *2314539 client prematurely closed connection, client: x.x.x.x, > > > > Since I don't see "while reading client request headers" in the error > log, > > I assume the request is already received (or maybe "body" hasn't arrived > > fully yet?) when client disconnects. > > > > When would nginx log 400 instead of 499? The error log ("info" level) is > > not clear about why 400, other than "client prematurely closed > connection, > > ".
> > The 400 error along with "client prematurely closed connection" > and without an "while ..." clause implies that the connection was > closed while reading the request body. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Nov 25 21:41:05 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Nov 2020 00:41:05 +0300 Subject: client disconnects In-Reply-To: References: <20201125133433.GQ1147@mdounin.ru> Message-ID: <20201125214105.GX1147@mdounin.ru> Hello! On Wed, Nov 25, 2020 at 11:43:59AM -0800, Frank Liu wrote: > Thanks for the clarification! > What would we see if the client disconnects in the middle of nginx sending > the response? Will there be a "while .." clause in the error log? Will the > http status code be reset to 499? As long as nginx is sending the response, the response status is already set by the response, and log will say "while sending response to client". The 499 status code can be seen if client closes the connection while nginx is sending the request to upstream or waiting for an upstream response ("connecting to upstream", "sending request to upstream", "reading response header from upstream"), or when a request is delayed with limit_req. -- Maxim Dounin http://mdounin.ru/ From kaushalshriyan at gmail.com Thu Nov 26 03:43:04 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 26 Nov 2020 09:13:04 +0530 Subject: a duplicate default server for 0.0.0.0:80 in /etc/nginx/nginx.conf:39 In-Reply-To: References: Message-ID: On Tue, Nov 24, 2020 at 11:40 PM Kaushal Shriyan wrote: > Hi, > > I have two nginx.conf file /etc/nginx/conf.d/onetest.conf > and /etc/nginx/nginx.conf. Basically the first config file works without > any issue while redirecting from port 80 to 443. 
I want to enable the port > redirect from port 80 to 443 in /etc/nginx/nginx.conf. when I add the below > block in /etc/nginx/nginx.conf, I am facing *a duplicate default server > for 0.0.0.0:80 in /etc/nginx/nginx.conf:39 * I > > listen 80 default_server; > server_name abtddeveloperportal.mydomain.com; > return 301 https://$server_name$request_uri; > } > > > ####################################cat > /etc/nginx/conf.d/onetest.conf###################################################################### > cat /etc/nginx/conf.d/onetest.conf > server { > listen 80 default_server; > server_name onetest.mydomain.io; > return 301 https://$server_name$request_uri; > } > > > server { > listen 443 ssl; > server_name onetest.mydomain.io; > ssl_protocols TLSv1.3 TLSv1.2; > ssl_certificate /etc/ssl/onetest.mydomain.io/fullchain1.pem; > ssl_certificate_key /etc/ssl/onetest.mydomain.io/privkey1.pem; > ssl_ciphers > ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; > ssl_prefer_server_ciphers on; > ssl_dhparam /etc/ssl/onetest.mydomain.io/dhparam.pem; > client_max_body_size 100M; > root /var/www/newtheme/testuatplace-v2/mpV2/web/; > > > location = /favicon.ico { > log_not_found off; > access_log off; > } > > > ##################################################################################################################################### > > > ####################################cat > /etc/nginx/nginx.conf###################################################################### > > > server { > listen 80 default_server; > server_name abtddeveloperportal.mydomain.com; > return 301 https://$server_name$request_uri; > } > > > server { > listen 443 ssl; > server_name abtddeveloperportal.mydomain.com; > ssl_protocols TLSv1.3 TLSv1.2; > ssl_certificate /etc/ssl/ > abtddeveloperportal.mydomain.com/fullchain3.pem; 
ssl_certificate_key > /etc/ssl/abtddeveloperportal.example.com/privkey3.pem; > ssl_ciphers > ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; > ssl_prefer_server_ciphers on; > ssl_dhparam /etc/ssl/abtddeveloperportal.mydomain.com/dhparam.pem; > client_max_body_size 100M; > #listen [::]:80 default_server; > root /var/www/drupal/testplace-v2/mpV2/web; > > ####################################################################################################################################################### > > > #nginx -t -c /etc/nginx/nginx.conf > nginx: [emerg] a duplicate default server for 0.0.0.0:80 in > /etc/nginx/nginx.conf:39 > nginx: configuration file /etc/nginx/nginx.conf test failed > > Is there a way to enable the redirect from port 80 to 443 for both the > /etc/nginx/conf.d/onetest.conf and /etc/nginx/nginx.conf files? Any help > will be highly appreciated. > > Thanks in Advance. > > Best Regards, > > Kaushal > Hi, I would appreciate it if someone could pitch in on my earlier email to this mailing list. Thanks in Advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu Nov 26 08:46:34 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 26 Nov 2020 10:46:34 +0200 Subject: a duplicate default server for 0.0.0.0:80 in /etc/nginx/nginx.conf:39 In-Reply-To: References: Message-ID: <000001d6c3d0$a542f150$efc8d3f0$@roze.lv> > Is there a way to enable the redirect from port 80 to 443 for both the /etc/nginx/conf.d/onetest.conf and /etc/nginx/nginx.conf files? Any help will be highly appreciated. You can have only one default_server per listen port. It will be used if a client makes a request that does not match any of the hostnames in the server_name definitions (for example, a request to the server's IP without giving a hostname).
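That matching logic - exact server_name match first, then the default_server for the port, then the first server declared for the port - can be sketched in a few lines of Python (pick_server and the hostnames are invented for illustration; real nginx also matches wildcard and regex server_name entries, which this sketch ignores):

```python
# Simplified model of how nginx picks a server {} block for a request
# arriving on one listen port.  Illustration only: real nginx also
# supports wildcard and regex server_name matching.

def pick_server(servers, host):
    # 1. exact server_name match
    for s in servers:
        if host in s["names"]:
            return s
    # 2. explicit default_server for this port
    for s in servers:
        if s.get("default"):
            return s
    # 3. otherwise the first server declared for the port acts as default
    return servers[0]

servers = [
    {"names": ["onetest.example.io"], "default": False},
    {"names": ["portal.example.com"], "default": True},
]

print(pick_server(servers, "onetest.example.io")["names"][0])  # exact match
print(pick_server(servers, "203.0.113.1")["names"][0])         # default_server
```

With no block marked default, the same request would fall through to the first declared entry, which is why an unmarked redirect-only server placed first can silently become the default for the whole port.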
If there is no 'default_server', nginx will pick the first one in configuration order. So in general you don't need to specify default_server at all (unless that server is somewhere in the middle of the configuration). In your case: >cat /etc/nginx/conf.d/onetest.conf >server { > listen 80 default_server; > server_name onetest.mydomain.io; > return 301 https://$server_name$request_uri; >} You should remove default_server here (or in the nginx.conf). If you just want to force all your virtual hosts to https, you might as well use a general redirect for all of them (have a single server {} block for all the redirects): server { listen 80 default_server; server_name _; return 301 https://$host$request_uri; } rr From nginx-forum at forum.nginx.org Thu Nov 26 12:03:56 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Thu, 26 Nov 2020 07:03:56 -0500 Subject: how nginx handles websocket proxying Message-ID: <2ea8a4dc5a589efa237f8b1cfc8cc1af.NginxMailingListEnglish@forum.nginx.org> Can someone elaborate on this a little bit? "NGINX supports WebSocket by allowing a tunnel to be set up between both client and back-end servers." What is the "tunnel" here? Does it mean the client will talk to the back-end server directly after the http Upgrade handshake? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290078,290078#msg-290078 From pluknet at nginx.com Thu Nov 26 12:23:30 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 26 Nov 2020 12:23:30 +0000 Subject: how nginx handles websocket proxying In-Reply-To: <2ea8a4dc5a589efa237f8b1cfc8cc1af.NginxMailingListEnglish@forum.nginx.org> References: <2ea8a4dc5a589efa237f8b1cfc8cc1af.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5649D24E-6D1A-49FB-A821-F9639345B2C2@nginx.com> > On 26 Nov 2020, at 12:03, allenhe wrote: > > Can someone elaborate on this a little bit? > "NGINX supports WebSocket by allowing a tunnel to be set up between both > client and back-end servers." > > What is the "tunnel" here?
After upgrading HTTP/1.1 to the WebSocket protocol, a two-way communication channel is established, which is proxied by nginx in a special mode. > Does it mean the client will talk to the back-end server directly after > the http Upgrade handshake? No, WebSocket protocol messages are still proxied by nginx. See also http://nginx.org/en/docs/http/websocket.html -- Sergey Kandaurov From nginx-forum at forum.nginx.org Fri Nov 27 11:29:16 2020 From: nginx-forum at forum.nginx.org (narksu) Date: Fri, 27 Nov 2020 06:29:16 -0500 Subject: proxy_socket_keepalive how to? Message-ID: Hi, I need to enable tcp keepalive messages between nginx and the backend. There is a POST which has to be answered by the backend within about 40-50 minutes, but my session is closed by a timeout (2 hours); I think it is a firewall which drops the session from nginx to the backend due to inactivity. If I make a curl request directly to the backend from the nginx server, there are tcp keepalives in tcpdump and I receive the reply from the backend in 40 minutes. I tried to use proxy_socket_keepalive=on in the location section, but tcp keepalives do not appear in tcpdump and the session is still closed by the timeout. So what is the proper way to enable proxy_socket_keepalive? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290085,290085#msg-290085 From francis at daoine.org Fri Nov 27 11:58:40 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 27 Nov 2020 11:58:40 +0000 Subject: proxy_socket_keepalive how to? In-Reply-To: References: Message-ID: <20201127115840.GB23032@daoine.org> On Fri, Nov 27, 2020 at 06:29:16AM -0500, narksu wrote: Hi there, > If I make a curl request directly to the backend > from the nginx server, there are tcp keepalives in tcpdump and I receive the reply > from the backend in 40 minutes. I tried to use proxy_socket_keepalive=on in > the location section, but tcp keepalives do not appear in tcpdump and the session is still > closed by the timeout. So what is the proper way to enable > proxy_socket_keepalive?
The documentation at http://nginx.org/r/proxy_socket_keepalive suggests that it is proxy_socket_keepalive on; but gives no obvious way to set the idle/interval/count values, so presumably it will use your system defaults for those. Maybe your system says "don't send the first keepalive packet until two hours have passed". If you are on something linuxy, files with names like /proc/sys/net/ipv4/tcp_keepalive* or the output of something like /sbin/sysctl -ar tcp_keepalive will probably be instructive; other systems presumably have their own ways to configure things. (If it is important to you to be able to set these values for nginx only, then you could consider implementing what "listen" does, but for outgoing connections, probably in src/http/ngx_http_upstream.c and friends. But it is probably less work today, to change your system defaults to what you want your nginx to use.) Good luck with it, f -- Francis Daly francis at daoine.org From gfrankliu at gmail.com Sun Nov 29 13:35:19 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Sun, 29 Nov 2020 05:35:19 -0800 Subject: empty variable in access log Message-ID: Hi, If I create a variable, default to blank: map upstream_env $upstream_env { default ""; } and log it in access log (log_format has $upstream_env). I see a "-" in the log file, which is as expected, but for a 2-way SSL virtual host, I don't see the "-", just blank. Is that a bug? For now, if I change above map to : map upstream_env $upstream_env { default "-"; } I can then see the - in the access log for 2-way SSL virtual host. Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx at bartelt.name Sun Nov 29 15:01:07 2020 From: nginx at bartelt.name (nginx at bartelt.name) Date: Sun, 29 Nov 2020 16:01:07 +0100 Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2; " in nginx.conf config) Message-ID: Hello, I've noticed that nginx 1.18.0 always enables TLS 1.3 even if not configured to do so. I've observed this behavior on OpenBSD (nginx 1.18.0 linked against LibreSSL 3.3.0) and on Ubuntu 20.04 (nginx 1.18.0 linked against OpenSSL 1.1.1f). I don't know which release of nginx introduced this bug. From nginx.conf: ssl_protocols TLSv1.2; --> in my understanding, this config statement should only enable TLS 1.2 but not TLS 1.3. However, the observed behavior is that TLS 1.3 is implicitly enabled in addition to TLS 1.2. Best regards Andreas # nginx -V nginx version: nginx/1.18.0 built with LibreSSL 3.2.2 (running with LibreSSL 3.3.0) From teward at thomas-ward.net Sun Nov 29 16:23:58 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Sun, 29 Nov 2020 11:23:58 -0500 Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2; " in nginx.conf config) In-Reply-To: Message-ID: <4CkYb41P67z2YD8@mail.syn-ack.link> We had this problem in Ubuntu's repos until we rebuilt against a newer OpenSSL and the TLS 1.3 variables were exposed to NGINX at build time - then you could turn TLS 1.3 off in ssl_protocols by not specifying TLSv1.3. However, your case indicates that you are linked (compiled) against an older LibreSSL than the one you are running. NGINX can't know what newer items are available without a recompile. This applies to OpenSSL as well. It's not just what version of the SSL libraries you are running - it's what version of the libs you compiled with as well. Sent from my Sprint Samsung Galaxy Note10+.
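As an aside, the compile-time versus run-time distinction is easy to probe from Python's ssl module, which links against the same kind of library; capping the protocol range there is analogous to what nginx does for "ssl_protocols TLSv1.2;" (a sketch for illustration, not nginx's actual code; the versions printed depend on your local build):

```python
import ssl

# Which library the interpreter actually loaded at run time; compare this
# with the version the binary was compiled against to spot a mismatch.
print(ssl.OPENSSL_VERSION)
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)

# The effect of "ssl_protocols TLSv1.2;" is to cap the negotiable range
# so TLS 1.3 stays off even when the library implements it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

print(ctx.maximum_version == ssl.TLSVersion.TLSv1_2)  # True
```

The same idea applies to any TLS application: what the binary can offer is bounded by the library it was built against, and what it actually offers is whatever range the configuration leaves enabled.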
-------------- Original message -------- From: nginx at bartelt.name Date: 11/29/20 10:01 (GMT-05:00) To: nginx at nginx.org Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2; " in nginx.conf config) Hello, I've noticed that nginx 1.18.0 always enables TLS 1.3 even if not configured to do so. I've observed this behavior on OpenBSD (nginx 1.18.0 linked against LibreSSL 3.3.0) and on Ubuntu 20.04 (nginx 1.18.0 linked against OpenSSL 1.1.1f). I don't know which release of nginx introduced this bug. From nginx.conf: ssl_protocols TLSv1.2; --> in my understanding, this config statement should only enable TLS 1.2 but not TLS 1.3. However, the observed behavior is that TLS 1.3 is implicitly enabled in addition to TLS 1.2. Best regards Andreas # nginx -V nginx version: nginx/1.18.0 built with LibreSSL 3.2.2 (running with LibreSSL 3.3.0) _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at bartelt.name Mon Nov 30 10:52:38 2020 From: nginx at bartelt.name (Andreas Bartelt) Date: Mon, 30 Nov 2020 11:52:38 +0100 Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2; " in nginx.conf config) In-Reply-To: <4CkYb41P67z2YD8@mail.syn-ack.link> References: <4CkYb41P67z2YD8@mail.syn-ack.link> Message-ID: Thanks for your reply. I've recompiled nginx on OpenBSD in order to get rid of the LibreSSL version mismatch, which is gone now: # nginx -V nginx version: nginx/1.18.0 built with LibreSSL 3.3.0 Unfortunately, this didn't solve the problem, i.e., TLS 1.3 is still enabled on my OpenBSD/nginx setup with the same nginx.conf.
As I've previously indicated, I've also observed the same problem on a fully patched Ubuntu 20.04 system with nginx 1.18.0/OpenSSL 1.1.1f -- I'm not sure if this was a misunderstanding, since you wrote in the past tense regarding problems with Ubuntu's repos. Did you also check whether TLS 1.3 is enabled with "ssl_protocols TLSv1.2;" in nginx.conf but got different results? Best regards Andreas On 11/29/20 5:23 PM, Thomas Ward wrote: > We had this problem in Ubuntu's repos until we rebuilt against a newer OpenSSL and the TLS 1.3 variables were exposed to NGINX at build time - then you could turn TLS 1.3 off in ssl_protocols by not specifying TLSv1.3. However, your case indicates that you are linked (compiled) against an older LibreSSL than the one you are running. NGINX can't know what newer items are available without a recompile. This applies to OpenSSL as well. It's not just what version of the SSL libraries you are running - it's what version of the libs you compiled with as well. Sent from my Sprint Samsung Galaxy Note10+. > -------- Original message -------- From: nginx at bartelt.name Date: 11/29/20 10:01 (GMT-05:00) To: nginx at nginx.org Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols > TLSv1.2; " in nginx.conf config) Hello, I've noticed that nginx 1.18.0 always enables TLS 1.3 even if not configured to do so. I've observed this behavior on OpenBSD (nginx 1.18.0 linked against LibreSSL 3.3.0) and on Ubuntu 20.04 (nginx 1.18.0 linked against OpenSSL 1.1.1f). I don't know which release of nginx introduced this bug. From nginx.conf: ssl_protocols TLSv1.2; --> in my understanding, this config statement should only enable TLS 1.2 but not TLS 1.3. However, the observed behavior is that TLS 1.3 is implicitly enabled in addition to TLS 1.2. Best regards Andreas # nginx -V nginx version: nginx/1.18.0 built with LibreSSL 3.2.2 (running with LibreSSL 3.3.0) _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Mon Nov 30 13:27:50 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Nov 2020 16:27:50 +0300 Subject: empty variable in access log In-Reply-To: References: Message-ID: <20201130132750.GF1147@mdounin.ru> Hello! On Sun, Nov 29, 2020 at 05:35:19AM -0800, Frank Liu wrote: > If I create a variable, default to blank: > > map upstream_env $upstream_env { > default ""; > } > > and log it in access log (log_format has $upstream_env). I see a "-" in the > log file, which is as expected, but for a 2-way SSL virtual host, I don't > see the "-", just blank. Is that a bug? The above snippet is expected to always result in "", as the above variable has the value "". If it results in "-" being logged for you, this is certainly not something expected; please share a full configuration which demonstrates the problem. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Nov 30 15:07:59 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Nov 2020 18:07:59 +0300 Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2; " in nginx.conf config) In-Reply-To: References: Message-ID: <20201130150759.GG1147@mdounin.ru> Hello! On Sun, Nov 29, 2020 at 04:01:07PM +0100, nginx at bartelt.name wrote: > I've noticed that nginx 1.18.0 always enables TLS 1.3 even if not > configured to do so.
I've observed this behavior on OpenBSD with (nginx > 1.18.0 linked against LibreSSL 3.3.0) and on Ubuntu 20.04 (nginx 1.18.0 > linked against OpenSSL 1.1.1f). I don't know which release of nginx > introduced this bug. > > From nginx.conf: > ssl_protocols TLSv1.2; > --> in my understanding, this config statement should only enable TLS > 1.2 but not TLS 1.3. However, the observed behavior is that TLS 1.3 is > implicitly enabled in addition to TLS 1.2. As long as "ssl_protocols TLSv1.2;" is the only ssl_protocols in nginx configuration, TLSv1.3 shouldn't be enabled. Much like when there are no "ssl_protocols" at all, as TLSv1.3 isn't enabled by default (for now, at least up to and including nginx 1.19.5). If you see it enabled, please provide full "nginx -T" output on the minimal configuration you are able to reproduce the problem with, along with some tests which demonstrate that TLSv1.3 is indeed enabled. Full output of "nginx -V" and compilation details might be also helpful. -- Maxim Dounin http://mdounin.ru/ From nginx at bartelt.name Mon Nov 30 17:41:18 2020 From: nginx at bartelt.name (Andreas Bartelt) Date: Mon, 30 Nov 2020 18:41:18 +0100 Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2; " in nginx.conf config) In-Reply-To: <20201130150759.GG1147@mdounin.ru> References: <20201130150759.GG1147@mdounin.ru> Message-ID: <3bc72df5-211f-6ed8-e829-e290a70224c5@bartula.de> On 11/30/20 4:07 PM, Maxim Dounin wrote: > Hello! > > On Sun, Nov 29, 2020 at 04:01:07PM +0100, nginx at bartelt.name wrote: > >> I've noticed that nginx 1.18.0 always enables TLS 1.3 even if not >> configured to do so. I've observed this behavior on OpenBSD with (nginx >> 1.18.0 linked against LibreSSL 3.3.0) and on Ubuntu 20.04 (nginx 1.18.0 >> linked against OpenSSL 1.1.1f). I don't know which release of nginx >> introduced this bug. 
>> >> From nginx.conf: >> ssl_protocols TLSv1.2; >> --> in my understanding, this config statement should only enable TLS >> 1.2 but not TLS 1.3. However, the observed behavior is that TLS 1.3 is >> implicitly enabled in addition to TLS 1.2. > > As long as "ssl_protocols TLSv1.2;" is the only ssl_protocols in > nginx configuration, TLSv1.3 shouldn't be enabled. Much like when > there are no "ssl_protocols" at all, as TLSv1.3 isn't enabled by > default (for now, at least up to and including nginx 1.19.5). > I've just retested this with my Ubuntu 20.04 based nginx test instance from yesterday (nginx 1.18.0 linked against OpenSSL 1.1.1f) and noticed that it works there as intended (i.e., "ssl_protocols TLSv1.2;" only enables TLS 1.2 but not TLS 1.3). I don't know what I did wrong there yesterday -- sorry for this. However, the problem persists on OpenBSD current with nginx 1.18.0 (built from ports with default options which links against LibreSSL 3.3.0 from base). Setting "ssl_protocols TLSv1.2;" enables TLS 1.2 as well as TLS 1.3 there. > If you see it enabled, please provide full "nginx -T" output on > the minimal configuration you are able to reproduce the problem > with, along with some tests which demonstrate that TLSv1.3 is > indeed enabled. Full output of "nginx -V" and compilation > details might be also helpful. 
> The following output is from the OpenBSD current / nginx 1.18.0 / LibreSSL 3.3.0 instance after minimizing nginx.conf: # nginx -V nginx version: nginx/1.18.0 built with LibreSSL 3.3.0 TLS SNI support enabled configure arguments: --add-dynamic-module=/usr/ports/pobj/nginx-1.18.0/nginx-1.18.0/lua-nginx-module --add-dynamic-module=/usr/local/lib/phusion-passenger27/src/nginx_module --add-dynamic-module=/usr/ports/pobj/nginx-1.18.0/nginx-rtmp-module-1.2.1/ --prefix=/var/www --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-log-path=logs/access.log --error-log-path=logs/error.log --http-client-body-temp-path=/var/www/cache/client_body_temp --http-proxy-temp-path=/var/www/cache/proxy_temp --http-fastcgi-temp-path=/var/www/cache/fastcgi_temp --http-scgi-temp-path=/var/www/cache/scgi_temp --http-uwsgi-temp-path=/var/www/cache/uwsgi_temp --user=www --group=www --with-http_auth_request_module --with-http_dav_module --with-http_image_filter_module=dynamic --with-http_gzip_static_module --with-http_gunzip_module --with-http_perl_module=dynamic --with-http_realip_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_v2_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-stream=dynamic --with-stream_ssl_module --add-dynamic-module=/usr/ports/pobj/nginx-1.18.0/nginx-1.18.0/naxsi/naxsi_src/ --add-dynamic-module=/usr/ports/pobj/nginx-1.18.0/nginx-1.18.0/ngx_devel_kit --add-dynamic-module=/usr/ports/pobj/nginx-1.18.0/nginx-1.18.0/headers-more-nginx-module --add-dynamic-module=/usr/ports/pobj/nginx-1.18.0/nginx-1.18.0/nginx-auth-ldap --add-dynamic-module=/usr/ports/pobj/nginx-1.18.0/nginx-1.18.0/ngx_http_geoip2_module --add-dynamic-module=/usr/ports/pobj/nginx-1.18.0/nginx-1.18.0/ngx_http_hmac_secure_link_module # nginx -T nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test 
is successful # configuration file /etc/nginx/nginx.conf: user www; events { worker_connections 100; } http { server { listen 37.24.253.138:443 ssl; server_name www.bartelt.name; root /var/www/www.bartelt.name; ssl_certificate /etc/ssl/www.bartelt.name_chain.pem; ssl_certificate_key /etc/ssl/private/bartelt.name.key; ssl_protocols TLSv1.2; ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384; ssl_prefer_server_ciphers off; ssl_ecdh_curve prime256v1; } } $ openssl s_client -connect www.bartelt.name:443 -servername www.bartelt.name CONNECTED(00000003) depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3 verify return:1 depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3 verify return:1 depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 verify return:1 depth=0 CN = bartelt.name verify return:1 depth=0 CN = bartelt.name verify return:1 write W BLOCK --- Certificate chain 0 s:/CN=bartelt.name i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 i:/O=Digital Signature Trust Co./CN=DST Root CA X3 --- Server certificate -----BEGIN CERTIFICATE----- MIIEqjCCA5KgAwIBAgISBLtqQEpDJAi3a8TVwzuKd3PaMA0GCSqGSIb3DQEBCwUA MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMzAeFw0yMDExMjkxMTA5NTlaFw0y MTAyMjcxMTA5NTlaMBcxFTATBgNVBAMTDGJhcnRlbHQubmFtZTBZMBMGByqGSM49 AgEGCCqGSM49AwEHA0IABDDLZa3XObj0MBoMCQ3IRbHzEWPfyuSU9drHo6PU2M3M rW6mIlDVEoHJISehoFEKVerOyBCCM3UDPJs7IV0aukijggKGMIICgjAOBgNVHQ8B Af8EBAMCB4AwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB /wQCMAAwHQYDVR0OBBYEFAfzC26Rpd53q7GMmZWk7x2P9zM4MB8GA1UdIwQYMBaA FKhKamMEfd265tE5t6ZFZe/zqOyhMG8GCCsGAQUFBwEBBGMwYTAuBggrBgEFBQcw AYYiaHR0cDovL29jc3AuaW50LXgzLmxldHNlbmNyeXB0Lm9yZzAvBggrBgEFBQcw AoYjaHR0cDovL2NlcnQuaW50LXgzLmxldHNlbmNyeXB0Lm9yZy8wKQYDVR0RBCIw IIIMYmFydGVsdC5uYW1lghB3d3cuYmFydGVsdC5uYW1lMBEGCCsGAQUFBwEYBAUw 
AwIBBTBMBgNVHSAERTBDMAgGBmeBDAECATA3BgsrBgEEAYLfEwEBATAoMCYGCCsG AQUFBwIBFhpodHRwOi8vY3BzLmxldHNlbmNyeXB0Lm9yZzCCAQQGCisGAQQB1nkC BAIEgfUEgfIA8AB2AESUZS6w7s6vxEAH2Kj+KMDa5oK+2MsxtT/TM5a1toGoAAAB dhPo6R0AAAQDAEcwRQIgHtCa0Dw0JwNWqxtNy9VGJPkle4ngTsO/q3uZ8NEOGEoC IQD5SayDysdXj6raQ0wrNbcml8+DW/5vp5s1FYL65znWugB2APZclC/RdzAiFFQY CDCUVo7jTRMZM7/fDC8gC8xO8WTjAAABdhPo6SIAAAQDAEcwRQIhALVOCq7NUhCs 4T/FxGuGcY/hqwvJ1Z55jHlI5ZEukAd5AiAKjdQxFpZ+0YVo016+4skOR3bOKodc 3pvBPLQC0cpIWzANBgkqhkiG9w0BAQsFAAOCAQEAmuKb/dOrQO7O/nDAaKrPuT8Y EgUNEKAb27SBiSC0BkUbFFNkhW6z9wKDY6kblkhbcqzVuOrlaMTQ1IS9bxQ9MjfI V7tkBZGC39fYNXup6PQdZVI2Ko/b+ywmbDfqYXFnb/sg6G4qJgVLgs3839ksMpRH gWIhAGbmSatri3YBicVmYdoiXFG2moskH25TQDoW1pROMqwNy8MTAePICJH0LdWv aSlVgoqV6NBDRqTXMVbZlejrURf+VZ8jxt+TgKIbkmTOcsztHqh0T/5LcC+1cqxD an4zT9et1MvgsvRGHS3UYGjJ1euuJ4Itg15XODcVDxNLL0csEsPSySfAt8W5dQ== -----END CERTIFICATE----- subject=/CN=bartelt.name issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 --- No client certificate CA names sent Server Temp Key: ECDH, P-256, 256 bits --- SSL handshake has read 2878 bytes and written 737 bytes --- New, TLSv1/SSLv3, Cipher is AEAD-AES256-GCM-SHA384 Server public key is 256 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.3 Cipher : AEAD-AES256-GCM-SHA384 Session-ID: Session-ID-ctx: Master-Key: Start Time: 1606757614 Timeout : 7200 (sec) Verify return code: 0 (ok) --- ^C Best regards Andreas From gfrankliu at gmail.com Mon Nov 30 22:04:35 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 30 Nov 2020 14:04:35 -0800 Subject: empty variable in access log In-Reply-To: <20201130132750.GF1147@mdounin.ru> References: <20201130132750.GF1147@mdounin.ru> Message-ID: I may have mixed this with special upstream variables, eg: $upstream_http_something. When upstream response header doesn't existing, the variable was logged - in the nginx access logs. On Mon, Nov 30, 2020 at 5:28 AM Maxim Dounin wrote: > Hello! 
> > On Sun, Nov 29, 2020 at 05:35:19AM -0800, Frank Liu wrote: > > > If I create a variable, default to blank: > > > > map upstream_env $upstream_env { > > default ""; > > } > > > > and log it in access log (log_format has $upstream_env). I see a "-" in > the > > log file, which is as expected, but for a 2-way SSL virtual host, I don't > > see the "-", just blank. Is that a bug? > > The above snippet is expected to always result in "", as the above > variable has the value "". If it results in "-" being logged for > you, this is certainly not something expected, please share full > configuration which demonstrates the problem. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 30 22:39:15 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Dec 2020 01:39:15 +0300 Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2; " in nginx.conf config) In-Reply-To: <3bc72df5-211f-6ed8-e829-e290a70224c5@bartula.de> References: <20201130150759.GG1147@mdounin.ru> <3bc72df5-211f-6ed8-e829-e290a70224c5@bartula.de> Message-ID: <20201130223915.GH1147@mdounin.ru> Hello! On Mon, Nov 30, 2020 at 06:41:18PM +0100, Andreas Bartelt wrote: > On 11/30/20 4:07 PM, Maxim Dounin wrote: > > Hello! > > > > On Sun, Nov 29, 2020 at 04:01:07PM +0100, nginx at bartelt.name wrote: > > > >> I've noticed that nginx 1.18.0 always enables TLS 1.3 even if not > >> configured to do so. I've observed this behavior on OpenBSD with (nginx > >> 1.18.0 linked against LibreSSL 3.3.0) and on Ubuntu 20.04 (nginx 1.18.0 > >> linked against OpenSSL 1.1.1f). I don't know which release of nginx > >> introduced this bug. 
> >>
> >> From nginx.conf:
> >> ssl_protocols TLSv1.2;
> >> --> in my understanding, this config statement should only enable TLS
> >> 1.2 but not TLS 1.3. However, the observed behavior is that TLS 1.3 is
> >> implicitly enabled in addition to TLS 1.2.
> >
> > As long as "ssl_protocols TLSv1.2;" is the only ssl_protocols in
> > the nginx configuration, TLSv1.3 shouldn't be enabled. Much like when
> > there are no "ssl_protocols" at all, as TLSv1.3 isn't enabled by
> > default (for now, at least up to and including nginx 1.19.5).
> >
>
> I've just retested this with my Ubuntu 20.04 based nginx test instance
> from yesterday (nginx 1.18.0 linked against OpenSSL 1.1.1f) and noticed
> that it works there as intended (i.e., "ssl_protocols TLSv1.2;" only
> enables TLS 1.2 but not TLS 1.3). I don't know what I did wrong there
> yesterday -- sorry for this.
>
> However, the problem persists on OpenBSD current with nginx 1.18.0
> (built from ports with default options, which links against LibreSSL
> 3.3.0 from base). Setting "ssl_protocols TLSv1.2;" enables TLS 1.2 as
> well as TLS 1.3 there.

I don't see any problems when testing with LibreSSL 3.3.0 as available
on libressl.org and the very same configuration. So it's probably
something specific to your system.

Some possible reasons for the behaviour you are seeing, in no
particular order:

- Given that OpenBSD current and LibreSSL from base imply some
arbitrary version of LibreSSL, this might be something with the
changes present on your system but not in the LibreSSL 3.3.0 release.

- There may be something with the port you are using to compile nginx.
Consider testing nginx compiled manually.

- You are testing the wrong server (the name resolves to a different IP
address, or the IP address is routed to a different server). Make sure
you are seeing the connection on the nginx side; something like
"return 200 $ssl_protocol;" in the appropriate server block, combined
with a "GET / HTTP/1.0" request in s_client, would be a good test.
- The nginx version running differs from the one on disk, and you are
running an nginx version older than 1.15.6, built with an old LibreSSL
without TLSv1.3, but running with LibreSSL 3.3.0 with TLSv1.3 enabled.
Check the "Server" header in the above test.

- There might be something wrong with the headers on your system. The
behaviour observed might happen if SSL_OP_NO_TLSv1_3, TLS1_3_VERSION,
and SSL_CTX_set_min_proto_version/SSL_CTX_set_max_proto_version are
not defined, yet TLSv1.3 is present in the library.

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Mon Nov 30 22:46:06 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Dec 2020 01:46:06 +0300
Subject: empty variable in access log
In-Reply-To:
References: <20201130132750.GF1147@mdounin.ru>
Message-ID: <20201130224606.GI1147@mdounin.ru>

Hello!

On Mon, Nov 30, 2020 at 02:04:35PM -0800, Frank Liu wrote:

> I may have mixed this up with the special upstream variables, e.g.
> $upstream_http_something. When an upstream response header doesn't exist,
> the variable was logged as "-" in the nginx access logs.

When a variable value is not found, it's logged as "-"; that's
expected behaviour. That's documented in the log_format directive
description (http://nginx.org/r/log_format):

: If the variable value is not found, a hyphen ("-") will be
: logged.

This doesn't apply to found but empty values as in your example though.
--
Maxim Dounin
http://mdounin.ru/

From gfrankliu at gmail.com Mon Nov 30 23:26:59 2020
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 30 Nov 2020 15:26:59 -0800
Subject: empty variable in access log
In-Reply-To: <20201130224606.GI1147@mdounin.ru>
References: <20201130132750.GF1147@mdounin.ru> <20201130224606.GI1147@mdounin.ru>
Message-ID:

OK, for testing, I removed the variable from the map and added one line
to a 2-way SSL server config, to create a fresh variable:

set $test_var "test";

For a request without a client cert (400), I see neither "test" nor "-"
in the access log for $test_var. I only see a blank, as if $test_var
was set to "". Here is the config:

log_format custom '$remote_addr - $remote_user [$time_local] '
'"$request" $status $test_var';

server {
listen *:443 ssl;
server_name _;
ssl_certificate /opt/nginx/ssl/localhost.crt;
ssl_certificate_key /opt/nginx/ssl/localhost.key;
ssl_client_certificate /opt/nginx/ssl/localhost.crt;
ssl_verify_client on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
set $test_var "test";
access_log /tmp/access.log custom;
}

cat /tmp/access.log
127.0.0.1 - - [30/Nov/2020:23:25:12 +0000] "GET / HTTP/1.1" 400

On Mon, Nov 30, 2020 at 2:46 PM Maxim Dounin wrote:

> Hello!
>
> On Mon, Nov 30, 2020 at 02:04:35PM -0800, Frank Liu wrote:
>
> > I may have mixed this up with the special upstream variables, e.g.
> > $upstream_http_something. When an upstream response header doesn't exist,
> > the variable was logged as "-" in the nginx access logs.
>
> When a variable value is not found, it's logged as "-"; that's
> expected behaviour. That's documented in the log_format directive
> description (http://nginx.org/r/log_format):
>
> : If the variable value is not found, a hyphen ("-") will be
> : logged.
>
> This doesn't apply to found but empty values as in your example though.
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL:
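The distinction discussed in this last thread -- a variable that is not found is logged as a hyphen, while a variable that exists but holds "" is logged as nothing -- can be sketched with a minimal configuration. This is an illustration only, not taken from the thread; the listen port, log path, header name (X-Demo), and variable names are hypothetical:

```nginx
# Sketch: "-" vs. empty field in access logs (hypothetical names).
http {
    # $upstream_http_x_demo is "not found" here because no upstream is
    # involved, so it should be logged as a hyphen ("-").
    # $empty_var is defined but holds "", so it should be logged as an
    # empty field (nothing between the brackets).
    log_format demo '$remote_addr "$request" $status '
                    'hdr=$upstream_http_x_demo empty=[$empty_var]';

    server {
        listen 8080;
        set $empty_var "";
        access_log /var/log/nginx/demo.log demo;

        location / {
            return 200 "ok\n";
        }
    }
}
```

With no proxy_pass in play, a plain request to this server would be expected to produce a log line along the lines of `127.0.0.1 "GET / HTTP/1.1" 200 hdr=- empty=[]`: a hyphen for the not-found upstream variable, and a blank for the found-but-empty one, matching the log_format documentation quoted above.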