From nginx-forum at forum.nginx.org Wed Jul 1 01:19:40 2020
From: nginx-forum at forum.nginx.org (latha)
Date: Tue, 30 Jun 2020 21:19:40 -0400
Subject: X-Accel-Redirect not redirecting to named location
Message-ID: 

I have the config below. Calling /v2/test/status proxies to http://test.svc.cluster.local:9080/status, and http://test.svc.cluster.local:9080/status does return `X-Accel-Redirect: @acreate` in its response headers, but the named location `acreate` is not being called.

    location @acreate {
        internal;
        return 200 "testing";
    }

    location ~ /v2/test(.*)$ {
        proxy_method GET;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header content-type "application/json";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        set $serv http://test.svc.cluster.local:9080$1;
        proxy_pass $serv;
    }

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288509,288509#msg-288509

From evald80 at gmail.com Wed Jul 1 06:55:13 2020
From: evald80 at gmail.com (evald ibrahimi)
Date: Wed, 1 Jul 2020 08:55:13 +0200
Subject: Found Nginx 1.19.0 stopped but no idea what happened
Message-ID: 

Hello everybody,

I'm using nginx 1.19.0 with ModSecurity on CentOS 7. The other day it happened twice that nginx was stopped. I have no idea what could cause the issue, but I found the following in the error.log. So it seems the server was running out of memory and that stopped nginx?
Thank you

2020/06/29 17:23:39 [alert] 1890#1890: sendmsg() failed (9: Bad file descriptor)
2020/06/29 17:23:39 [alert] 1890#1890: sendmsg() failed (9: Bad file descriptor)
2020/06/29 17:23:39 [alert] 1890#1890: sendmsg() failed (9: Bad file descriptor)
2020/06/29 17:24:28 [alert] 1890#1890: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/06/29 17:24:28 [alert] 1890#1890: sendmsg() failed (9: Bad file descriptor)
2020/06/29 17:24:28 [alert] 1890#1890: sendmsg() failed (9: Bad file descriptor)
2020/06/29 17:24:28 [alert] 1890#1890: fork() failed while spawning "cache manager process" (12: Cannot allocate memory)
2020/06/29 17:24:28 [alert] 1890#1890: sendmsg() failed (9: Bad file descriptor)
2020/06/29 17:24:28 [alert] 1890#1890: sendmsg() failed (9: Bad file descriptor)
2020/06/29 18:10:35 [notice] 2461#2461: ModSecurity-nginx v1.0.1 (rules loaded inline/local/remote: 0/19826/0)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:443 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:443 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:443 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:443 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:443 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/29 18:10:35 [emerg] 2461#2461: still could not bind()
2020/06/29 18:11:16 [alert] 1890#1890: unlink() "/run/nginx.pid" failed (2: No such file or directory)
2020/06/29 18:11:48 [notice] 927#927: ModSecurity-nginx v1.0.1 (rules loaded inline/local/remote: 0/19826/0)

From francis at daoine.org Wed Jul 1 08:05:19 2020
From: francis at daoine.org (Francis Daly)
Date: Wed, 1 Jul 2020 09:05:19 +0100
Subject: X-Accel-Redirect not redirecting to named location
In-Reply-To: 
References: 
Message-ID: <20200701080519.GD20939@daoine.org>

On Tue, Jun 30, 2020 at 09:19:40PM -0400, latha wrote:

Hi there,

> I have the below config, calling /v2/test/status calls the
> http://test.svc.cluster.local:9080/status. But the named location `acreate`
> is not being called. http://test.svc.cluster.local:9080/status does return
> `X-Accel-Redirect: @acreate` in response header.

The config below works for me, as the tests show. Can you show the results of similar tests, to see where the difference is?

Configuration:

==
server {
    listen 6060;
    location @named {
        return 200 "In @named location 6060\n";
    }
    location ~ /v2/test(.*)$ {
        set $serv http://127.0.0.1:9080$1;
        proxy_pass $serv;
    }
}
server {
    listen 9080;
    return 200 "In 9080, request was $request_uri\n";
    add_header X-Accel-Redirect @named;
}
==

Test request/response pairs:

==
$ curl -i http://127.0.0.1:9080/direct
HTTP/1.1 200 OK
Server: nginx/1.17.2
[snip]
X-Accel-Redirect: @named

In 9080, request was /direct

$ curl -i http://127.0.0.1:6060/v2/test/status
HTTP/1.1 200 OK
Server: nginx/1.17.2
[snip]

In @named location 6060
==

You indicate that you don't get the response that you want. What response do you get?
Cheers,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Thu Jul 2 09:06:17 2020
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Thu, 02 Jul 2020 05:06:17 -0400
Subject: Multiple Cache Objects for same file
Message-ID: <79e7b14a379877e964a190732a9c442b.NginxMailingListEnglish@forum.nginx.org>

We are observing that multiple cache objects are being created for the same file in the nginx cache, which results in non-optimal use of cache storage.

We are using $uri as the cache key:

proxy_cache_key $uri;

For example, for the file with URI /content/entry/jiomags/content/719/51/51_t_0.jpg, two cache objects have been created in the cache folder. Both files have the same KEY:

-rw------- 1 nginx nginx 21023 Jun 27 16:11 ./2/95/9d78505da184e6ccd981fefe6b333952
-rw------- 1 nginx nginx 21023 Jun 27 18:16 ./f/ad/c8e1c56031a14dd4a27e538956253adf

vi ./2/95/9d78505da184e6ccd981fefe6b333952

KEY: /content/entry/jiomags/content/719/51/51_t_0.jpg
HTTP/1.1 200 OK^M
Server: nginx^M
Date: Sat, 27 Jun 2020 10:41:01 GMT^M
Content-Type: image/jpeg^M
Content-Length: 20369^M
Connection: close^M
Last-Modified: Fri, 10 Jan 2020 15:20:59 GMT^M
Vary: Accept-Encoding^M
ETag: "5e18965b-4f91"^M
Expires: Sun, 26 Jul 2020 20:17:15 GMT^M
Cache-Control: max-age=2592000^M
Access-Control-Allow-Origin: *^M
Access-Control-Expose-Headers: Content-Length,Content-Range^M
Access-Control-Allow-Headers: Range^M
Accept-Ranges: bytes^M

vi ./f/ad/c8e1c56031a14dd4a27e538956253adf

KEY: /content/entry/jiomags/content/719/51/51_t_0.jpg
HTTP/1.1 200 OK^M
Server: nginx^M
Date: Sat, 27 Jun 2020 12:46:06 GMT^M
Content-Type: image/jpeg^M
Content-Length: 20369^M
Connection: close^M
Last-Modified: Fri, 10 Jan 2020 15:20:59 GMT^M
Vary: Accept-Encoding^M
ETag: "5e18965b-4f91"^M
Expires: Mon, 27 Jul 2020 12:46:06 GMT^M
Cache-Control: max-age=2592000^M
Access-Control-Allow-Origin: *^M
Access-Control-Expose-Headers: Content-Length,Content-Range^M
Access-Control-Allow-Headers: Range^M
Accept-Ranges: bytes^M

What could be the reason for a duplicate file being cached with the same URI and KEY? Please help.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288520,288520#msg-288520

From nginx-forum at forum.nginx.org Thu Jul 2 12:28:32 2020
From: nginx-forum at forum.nginx.org (Evald80)
Date: Thu, 02 Jul 2020 08:28:32 -0400
Subject: Found Nginx 1.19.0 stopped but no idea what happened
In-Reply-To: 
References: 
Message-ID: 

Hi, it happened again, and I see that nginx is eating all the memory of the server. Could somebody help me troubleshoot this? Most probably there is a memory leak, but I don't know whether it is in nginx or in ModSecurity. There are 10 websites on this server and they are not popular at all - some clicks a day, so no super-busy website.

Thank you

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288511,288522#msg-288522

From francis at daoine.org Thu Jul 2 13:05:21 2020
From: francis at daoine.org (Francis Daly)
Date: Thu, 2 Jul 2020 14:05:21 +0100
Subject: Found Nginx 1.19.0 stopped but no idea what happened
In-Reply-To: 
References: 
Message-ID: <20200702130521.GE20939@daoine.org>

On Wed, Jul 01, 2020 at 08:55:13AM +0200, evald ibrahimi wrote:

Hi there,

I don't have answers for you; but possibly you'll be able to provide more information that will help someone to help you.

> I'm using nginx 1.19.0 with ModSec on Centos 7. The other day it
> happened twice that nginx was stopped. No idea what could cause the
> issue but i found on the error.log the following:

If unexpected things happen repeatedly, it can be useful to see if there is a pattern that causes them to happen. From your normal logs, was there anything unusual that happened shortly before things stopped, that did not happen at other times?

> 2020/06/29 17:23:39 [alert] 1890#1890: sendmsg() failed (9: Bad file descriptor)

That (I think) usually means that something has gone wrong already -- a file or socket that nginx thinks it should have access to is not accessible.
(That might be a "real" file that has changed; or it might be due to a coding bug.)

> 2020/06/29 17:24:28 [alert] 1890#1890: fork() failed while spawning
> "worker process" (12: Cannot allocate memory)

That means that your operating system was unwilling to give nginx the extra memory it asked for. It might be a resource problem on your system, or it might be a consequence of the previous problem.

> 2020/06/29 18:10:35 [notice] 2461#2461: ModSecurity-nginx v1.0.1
> (rules loaded inline/local/remote: 0/19826/0)

That possibly means that nginx has just started.

> 2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:443 failed
> (98: Address already in use)
> 2020/06/29 18:10:35 [emerg] 2461#2461: bind() to 0.0.0.0:80 failed
> (98: Address already in use)

And those mean that nginx can't start fully, because something else is already running where nginx wants to listen. (Possibly a previous nginx which has not fully shut down.)

"nginx -V" is usually very helpful for people to see what versions of what code you are using. Often third-party modules written for one version of nginx and used with another version of nginx can lead to subtle problems. ModSecurity-nginx is, I think, not a stock-nginx module. What other modules are you using?

Often the nginx config can help identify a problem too; it is not clear to me if that will help in this case.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From themadbeaker at gmail.com Thu Jul 2 13:55:44 2020
From: themadbeaker at gmail.com (J.R.)
Date: Thu, 2 Jul 2020 08:55:44 -0500
Subject: Found Nginx 1.19.0 stopped but no idea what happened
Message-ID: 

How much RAM is on your machine? Have you tried disabling modsecurity temporarily? What other (if any) 3rd party modules are you using?
From mdounin at mdounin.ru Thu Jul 2 15:42:57 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 2 Jul 2020 18:42:57 +0300
Subject: Multiple Cache Objects for same file
In-Reply-To: <79e7b14a379877e964a190732a9c442b.NginxMailingListEnglish@forum.nginx.org>
References: <79e7b14a379877e964a190732a9c442b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200702154257.GK12747@mdounin.ru>

Hello!

On Thu, Jul 02, 2020 at 05:06:17AM -0400, anish10dec wrote:

> We are observing that multiple cache object is getting created for same file
> in Nginx Cache which is resulting into non optimal use of cache storage.
>
> We are using proxy_cache_key as $uri.
>
> proxy_cache_key $uri;
>
> For example with file having URI
> /content/entry/jiomags/content/719/51/51_t_0.jpg
>
> 2 cache object has been created in cache folder. Both the files are having
> same KEY
>
> -rw------- 1 nginx nginx 21023 Jun 27 16:11
> ./2/95/9d78505da184e6ccd981fefe6b333952
> -rw------- 1 nginx nginx 21023 Jun 27 18:16
> ./f/ad/c8e1c56031a14dd4a27e538956253adf
>
> vi ./2/95/9d78505da184e6ccd981fefe6b333952

[...]

> Vary: Accept-Encoding^M

[...]

> What could be the reason for duplicate file getting cached having same URI
> and KEY.
> Please help

The reason is the "Vary: Accept-Encoding" header line returned by your backend. If you want nginx to ignore it, consider using "proxy_ignore_headers Vary;", see http://nginx.org/r/proxy_ignore_headers for details.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Thu Jul 2 16:17:03 2020
From: nginx-forum at forum.nginx.org (Evald80)
Date: Thu, 02 Jul 2020 12:17:03 -0400
Subject: Found Nginx 1.19.0 stopped but no idea what happened
In-Reply-To: 
References: 
Message-ID: 

Hello, there is not much: 1 GB. No, I did not disable ModSecurity.
The list of loaded modules:

load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;
load_module modules/ngx_http_modsecurity_module.so;
load_module modules/ngx_http_geoip2_module.so;

This is an issue that suddenly started appearing; this config and software have been installed all along. I think some kind of request is causing it, but I don't know whether it is a module or nginx itself. Can I troubleshoot it? I mean, when the issue pops up again, try to gather some info? If yes, how?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288511,288531#msg-288531

From nginx-forum at forum.nginx.org Thu Jul 2 18:20:59 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Thu, 02 Jul 2020 14:20:59 -0400
Subject: Nginx as reverse proxy mail server host
Message-ID: 

Hi,

I am trying to proxy an SMTP server through nginx using the configuration below. I want all client calls to reach the SMTP server via my proxy host, with SSL termination on nginx for the client connections to the SMTP server. When I connect, I get the exception below even before the SSL handshake. Please correct me if I am wrong anywhere.

Without the SSL directive and properties in nginx.conf, it works fine and is able to complete the SSL handshake as well - though I am not sure how it can be an SSL connection without the SSL directive and SSL properties.
Java Error
########

javax.mail.MessagingException: Could not connect to SMTP host: localhost, port: 3001, response: -1
    at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:2197)
    at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:740)
    at javax.mail.Service.connect(Service.java:388)
    at javax.mail.Service.connect(Service.java:246)
    at javax.mail.Service.connect(Service.java:195)
    at javax.mail.Transport.send0(Transport.java:254)
    at javax.mail.Transport.send(Transport.java:124)
    at com.att.client.smtp.SMTPTestClient.main(SMTPTestClient.java:50)

nginx.conf
########

stream {
    upstream smtp_backend {
        least_conn;
        server smtp.gmail.com:587;
    }
    server {
        listen 3001 ssl;
        proxy_pass smtp_backend;
        ssl_certificate C:/nginx-selfsigned.crt;
        ssl_certificate_key C:/nginx-selfsigned.key;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ALL;
        #ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_session_cache shared:SSL:20m;
        ssl_session_timeout 4h;
        ssl_handshake_timeout 30s;
    }
}

Java client code
#############

..
..
Properties prop = new Properties();
//prop.put("mail.smtp.host", "smtp.gmail.com");
prop.put("mail.smtp.host", "localhost");
//prop.put("mail.smtp.port", "587");
prop.put("mail.smtp.port", "3001");
prop.put("mail.smtp.auth", "true");
prop.put("mail.smtp.starttls.enable", "true"); //TLS
//prop.put("mail.smtp.starttls.required", "true");

Session session = Session.getInstance(prop, new javax.mail.Authenticator() {
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication(username, password);
    }
});

try {
    Message message = new MimeMessage(session);
    message.setFrom(new InternetAddress("siva.pannier at gmail.com"));
    message.setRecipients(
        Message.RecipientType.TO,
        InternetAddress.parse("siva.pannier at in.ibm.com")
    );
    message.setSubject("Testing Gmail TLS from nginx");
    message.setText("Dear Mail Crawler," + "\n\n Please do not spam my email!");
    Transport.send(message);
    System.out.println("Done");
...
....
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288532,288532#msg-288532

From francis at daoine.org Thu Jul 2 21:05:49 2020
From: francis at daoine.org (Francis Daly)
Date: Thu, 2 Jul 2020 22:05:49 +0100
Subject: Nginx as reverse proxy mail server host
In-Reply-To: 
References: 
Message-ID: <20200702210549.GF20939@daoine.org>

On Thu, Jul 02, 2020 at 02:20:59PM -0400, siva.pannier wrote:

Hi there,

> I am trying to proxy a SMTP server on Nginx using the below configuration. I
> want all the client calls to hit the SMTP server via my proxy host. I want
> the SSL termination on nginx for the client calls to the SMTP Server.

Your config has nginx as an ssl-termination point, and nginx just sends the decrypted traffic to its upstream. The simplest way to prove that this works is probably to use a well-known working client, such as "openssl s_client -connect".

> When I do the connection getting below exception even before the SSL
> handshake.. Please correct me if I am wrong anywhere.

There are two ways of doing ssl with smtp. One is to establish an ssl session, and then "speak" smtp through that -- that is what you have configured your nginx server to expect here. The other is to establish an smtp session, and then use the smtp command "starttls" to establish an ssl session -- that is what you have configured your client to do.

Things fail because nginx is expecting to see an ssl session being established, but the client is expecting to see an smtp session being established.

> Without SSL directive & Properties in nginx.conf, it works fine and able to
> do SSL handshake as well. Not sure how it would be a SSL connection, without
> the SSL directive and SSL properties.

In this case, nginx is acting as a plain tcp forwarder; it does not know or care what is in the packet, it just copies it. Now your client connects to nginx, and nginx sends the content to your upstream. Your client says "starttls" and negotiates the ssl session with your upstream, not with nginx.
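The two modes can be written down side by side. A minimal sketch, reusing the smtp_backend upstream from the posted config (the ports and certificate paths are illustrative, and the two server blocks are alternatives, not meant to be used together):

```nginx
stream {
    upstream smtp_backend {
        server smtp.gmail.com:587;
    }

    # Alternative 1: implicit TLS at the proxy. nginx decrypts, and the
    # upstream sees plain SMTP. The client must start TLS immediately,
    # i.e. must NOT use STARTTLS.
    server {
        listen 3001 ssl;
        ssl_certificate     /path/to/selfsigned.crt;
        ssl_certificate_key /path/to/selfsigned.key;
        proxy_pass smtp_backend;
    }

    # Alternative 2: plain TCP pass-through. The client speaks SMTP and
    # issues STARTTLS itself; the TLS session is end-to-end with the
    # upstream, and nginx never sees the plaintext.
    server {
        listen 3002;
        proxy_pass smtp_backend;
    }
}
```

With JavaMail, the first form matches a client configured for implicit TLS (e.g. "mail.smtp.ssl.enable" set to true), while the second matches the posted client, which sets "mail.smtp.starttls.enable".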
What you have can work; but you must make sure that your design has the client and the server speaking the same protocol with each other.

An alternative way of proxying smtp is described at https://docs.nginx.com/nginx/admin-guide/mail-proxy/mail-proxy/

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Fri Jul 3 12:38:09 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Fri, 03 Jul 2020 08:38:09 -0400
Subject: Nginx as reverse proxy mail server host
In-Reply-To: <20200702210549.GF20939@daoine.org>
References: <20200702210549.GF20939@daoine.org>
Message-ID: <3b612d9ad7eaef2462fb9c9b30acdd51.NginxMailingListEnglish@forum.nginx.org>

Thank you for your suggestions!

My understanding from your suggestions is that you do not want me to make any corrections to the client code; I just need to correct the nginx configuration as per the blog link. I am trying to understand that blog, going through it again and again. So far I understand that it creates an SSL layer first, through which it accepts the client request. The client should point to my proxy host and one of the ports listed under "mail{ ... }". The proxy server identifies the upstream host based on the username that came in the client request, and then the call is routed to the actual upstream host based on the port. Please correct me if I am wrong anywhere.

My questions are:

1) Is the significance of the line "auth_http localhost:9000/cgi-bin/nginxauth.cgi;" just to let me have my own authorization logic and return the valid upstream server host based on the username. Is that correct?

2) I want to know what "smtp_auth login plain cram-md5;" means. Does the connection to the actual upstream happen here?

Please help me with this, and also share links supporting the above configuration.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288532,288539#msg-288539

From nginx-forum at forum.nginx.org Fri Jul 3 12:57:45 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Fri, 03 Jul 2020 08:57:45 -0400
Subject: CORBA IIOP reverse proxy on nginx
Message-ID: 

Hi,

I would like to know whether nginx can work as a reverse proxy server for CORBA IIOP communication, with SSL on top of it. Please share any blogs on that configuration.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288540,288540#msg-288540

From nginx-forum at forum.nginx.org Fri Jul 3 13:12:56 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Fri, 03 Jul 2020 09:12:56 -0400
Subject: Force SSL redirection to target service host for all protocols
Message-ID: <9111fd05796c34847a99cec4213fe9a6.NginxMailingListEnglish@forum.nginx.org>

Hi,

I want all my client applications to call the service host via the proxy. The hosted services are TLSv1.2-enabled, and the clients are not in a position to upgrade. Hence I want to enforce SSL encryption when the call is routed/redirected from the proxy to the target. I have seen a few blogs that talk about HTTP-to-HTTPS redirection; I want to do the same for all protocols, such as TCPS, UDPS (DTLS), SMTPS and IIOPS. Can you please share your suggestions on this?

Thanks,
Siva

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288541,288541#msg-288541

From nginx-forum at forum.nginx.org Fri Jul 3 16:50:09 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Fri, 03 Jul 2020 12:50:09 -0400
Subject: SSL over UDP - Nginx as reverse proxy
Message-ID: <4eb436c2bf72f7bcd1908cfd1125e42e.NginxMailingListEnglish@forum.nginx.org>

Hi,

I would like to have SSL termination on nginx for UDP connections. Can you please share instructions on how to achieve it?
Thanks,
Siva

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288543,288543#msg-288543

From francis at daoine.org Sat Jul 4 08:23:54 2020
From: francis at daoine.org (Francis Daly)
Date: Sat, 4 Jul 2020 09:23:54 +0100
Subject: Nginx as reverse proxy mail server host
In-Reply-To: <3b612d9ad7eaef2462fb9c9b30acdd51.NginxMailingListEnglish@forum.nginx.org>
References: <20200702210549.GF20939@daoine.org> <3b612d9ad7eaef2462fb9c9b30acdd51.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200704082354.GG20939@daoine.org>

On Fri, Jul 03, 2020 at 08:38:09AM -0400, siva.pannier wrote:

Hi there,

> My understanding from your suggestions is that you do not want me to make
> any corrections on the client code. I just need to make corrections on the
> Nginx configuration as per the blog link.

Not quite, no.

You need to know which of the smtp-involving-ssl protocols you want your client to speak. You need to know which of the smtp-involving-ssl protocols your upstream server speaks. Then you decide how (and whether) to configure nginx to translate between the two.

From your report, your client already works with nginx using stream{} and no ssl, because your client uses smtp+starttls and your upstream server uses smtp+starttls. So maybe there is nothing that you need to change.

> I am trying to understand that blog, going through again and again. so far I
> understand that it creates a SSL layer first through which it accepts the
> client request.

Maybe. That document describes multiple possible ways of configuring things. You will want to use exactly one way. If you use the nginx mail{} with "ssl on", then what you suggest is correct. If you do not use "ssl on", then it is not correct.

> Client should point to my proxy host and one of the ports
> listed under "mail{... }". Proxy server identifies the upstream host based
> on the username came from the client request. Then the call is routed to
> actual upstream host based on the port. Please correct me if I am wrong
> anywhere.

When nginx is configured to proxy a message to an upstream server, it needs to know which upstream server to talk to.

If you use nginx stream{}, you configure the upstream using proxy_pass. If you use nginx mail{}, as this document does, you configure the upstream indirectly using auth_http. auth_http refers to a http url that is expected to return an indication of which server:port the connection should be proxied to. How it does that is up to you to write -- maybe it differs per user and per port; maybe it always gives the same response.

> My questions are
> 1) Significance of this line "auth_http
> localhost:9000/cgi-bin/nginxauth.cgi;" is just to have my own authorization
> logic and return the valid upstream server host based on the username. Is it
> correct?

http://nginx.org/r/auth_http

> 2) I want to know what does this mean "smtp_auth login plain cram-md5;".
> Does the connection to actual upstream happen here?

http://nginx.org/r/smtp_auth

The connection to upstream cannot happen until after nginx knows which upstream to connect to. And that comes from the auth_http response. The auth_http request includes the details provided by the client in response to the smtp_auth "challenge".

> Please help me on this and also share links supporting the above
> configuration.

There is a lot of information at http://nginx.org/en/docs/ -- the "ngx_mail_*" modules are grouped together.

For a lot of this, if the documentation is unclear, you may be better off building a test system and seeing what happens when you try things. If that results in the unclear documentation being made clear, that is good.
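To make the auth_http flow concrete, here is a minimal sketch; the port, listen address and auth URL are illustrative, not taken from the thread. nginx forwards each authentication attempt to the auth_http service, and that service's HTTP response headers name the upstream to proxy to:

```nginx
mail {
    # nginx sends the client's credentials (and connection details)
    # here; the HTTP response says whether to accept the login and
    # where to proxy the session.
    auth_http localhost:9000/cgi-bin/nginxauth.cgi;

    server {
        listen   587;
        protocol smtp;
        # Authentication mechanisms offered to the client; whatever the
        # client supplies is relayed to the auth_http service.
        smtp_auth login plain cram-md5;
    }
}

# A successful auth_http response is an ordinary HTTP 200 reply whose
# headers carry the routing decision, e.g.:
#
#   Auth-Status: OK
#   Auth-Server: 192.0.2.10    (upstream address to proxy to)
#   Auth-Port: 25
```

So "which upstream" is entirely a decision of the auth_http service: it can return the same Auth-Server for everyone, or vary it per user.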
Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Sat Jul 4 09:52:09 2020
From: nginx-forum at forum.nginx.org (everhardt)
Date: Sat, 04 Jul 2020 05:52:09 -0400
Subject: $ssl_client_escaped_cert does not contain intermediate client certificates
Message-ID: <9bd9d6717560c7b9e3a23ca365be2f35.NginxMailingListEnglish@forum.nginx.org>

I have the following certificate chain: root certificate > intermediate certificate > end-user certificate. I've set up nginx as an SSL termination proxy for a backend service that differentiates its actions based on the serial of the intermediate certificate and the subject of the end-user certificate. Only the root certificate is available on the (nginx) server; the client will present the intermediate + end-user certificate.

The relevant nginx configuration is as follows:

ssl_client_certificate root_cert.pem;  # so only the root certificate
ssl_verify_client on;
ssl_verify_depth 2;
proxy_set_header X-Ssl-Client-Escaped-Cert $ssl_client_escaped_cert;  # to pass it on to the backend service

Connectivity works great: nginx accepts the request if the client (I'm testing with curl) presents the intermediate + end-user certificate, and passes it on to the backend service. If the client presents only one of the certificates, nginx rightly rejects it. So I'm sure curl shares both certificates with nginx.

Where it goes wrong is when nginx passes the certificate information to the backend service: the embedded variable $ssl_client_escaped_cert only seems to contain the end-user certificate, not the intermediate one(s). I did some logging to check $ssl_client_raw_cert, but that also contains only the end-user certificate.

Is there a way to get the intermediate client certificates included in these embedded variables?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288553,288553#msg-288553

From nginx-forum at forum.nginx.org Sun Jul 5 08:14:00 2020
From: nginx-forum at forum.nginx.org (siva.pannier)
Date: Sun, 05 Jul 2020 04:14:00 -0400
Subject: Nginx pre-configured test environment with all scenarios
Message-ID: 

Hi Team,

I am assessing the capabilities of, and doing a POC on, Nginx integration as a reverse proxy. Are there any pre-configured images with all the protocols and the necessary clients to test and demo the capabilities of Nginx or Nginx Plus? Doing a self-assessment with all the necessary setup on my local machine takes a lot of time to come to a conclusion.

Also, is there any technical support team who can review our use cases/requirements and give feedback on whether they are possible with Nginx?

Looking forward to a positive response to my queries above.

Thanks,
Siva.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288558,288558#msg-288558

From francis at daoine.org Sun Jul 5 14:31:54 2020
From: francis at daoine.org (Francis Daly)
Date: Sun, 5 Jul 2020 15:31:54 +0100
Subject: CORBA IIOP reverse proxy on nginx
In-Reply-To: 
References: 
Message-ID: <20200705143154.GI20939@daoine.org>

On Fri, Jul 03, 2020 at 08:57:45AM -0400, siva.pannier wrote:

Hi there,

> I would like to know whether nginx can work as a reverse proxy server for
> CORBA IIOP communication and also SSL on top of it.

If the protocol is a single tcp connection, then the "stream" module may work with minimal extra configuration. If the protocol is more complicated, then there may need to be a protocol-specific module involved. A quick web search does not show me any obvious corba-related nginx modules.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From themadbeaker at gmail.com Sun Jul 5 16:22:07 2020
From: themadbeaker at gmail.com (J.R.)
Date: Sun, 5 Jul 2020 11:22:07 -0500
Subject: Nginx pre-configured test environment with all scenarios
Message-ID: 

> I am assessing the capabilities and doing a POC on Nignx integration as
> reverse proxy. Are there any pre-configured image with all the protocols and
> the necessary clients to test and demo the capabilities of Nignx or Nignx
> plus? Doing a self-assessment with all the necessary setup on my local
> machine takes a lot of time to come to a conclusion.

Installing nginx includes a basic configuration; however, as every case is different, there is no "generic" template that I know of for instant operation in the scope you are looking for.

> Also is there any technical support team who can review our
> use-cases/requirements and give a feedback, whether it is possible on
> Nginx?

Probably best to contact the commercial nginx sales department directly?

From francis at daoine.org Sun Jul 5 22:08:02 2020
From: francis at daoine.org (Francis Daly)
Date: Sun, 5 Jul 2020 23:08:02 +0100
Subject: SSL over UDP - Nginx as reverse proxy
In-Reply-To: <4eb436c2bf72f7bcd1908cfd1125e42e.NginxMailingListEnglish@forum.nginx.org>
References: <4eb436c2bf72f7bcd1908cfd1125e42e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20200705220802.GJ20939@daoine.org>

On Fri, Jul 03, 2020 at 12:50:09PM -0400, siva.pannier wrote:

Hi there,

> I would like to have SSL Termination on nginx for UDP connections. Can you
> please share the instructions on how to do achieve it?

The documentation for "stream" is at http://nginx.org/en/docs/stream/ngx_stream_core_module.html

I would expect that the way to do it would be to put both "udp" and "ssl" in the "listen" directive.
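Spelled out as configuration, that attempt would look something like the following sketch (the port and upstream address are illustrative):

```nginx
stream {
    server {
        # attempt to terminate TLS on a UDP listener
        listen 5000 udp ssl;
        proxy_pass 127.0.0.1:5001;
    }
}
```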
When I do that using one version of nginx, "nginx -t" reports:

[emerg] "listen" directive "ssl" parameter is incompatible with "udp"

That does match what is described at https://www.nginx.com/blog/ask-nginx-april-2019/

Note that searching the list archives does point to http://nginx.org/patches/dtls/ and an indication that that experiment was paused owing to a lack of a use case.

I suspect that if you want to report on how that patch works for you -- being aware that it was written for an older version of nginx, so possibly will not apply as-is to the current version -- and/or describe your specific use case, then there may be someone willing to update the patch.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From peter_booth at me.com Mon Jul 6 03:18:34 2020
From: peter_booth at me.com (Peter Booth)
Date: Sun, 5 Jul 2020 23:18:34 -0400
Subject: Nginx pre-configured test environment with all scenarios
In-Reply-To: 
References: 
Message-ID: <106D3D78-E6D3-4B64-B111-F4B53031D6FE@me.com>

Why are you doing an nginx POC? To be blunt, nginx is the most powerful, flexible web server/reverse proxy/application delivery software product that exists. If it has an obvious competitor it's the F5 BigIP LTM/WAF device - and F5 owns nginx.

So what does this mean? It means that if you don't have hands-on experience with these two products then you can't really appreciate what is possible from a best-of-breed web server / reverse proxy. I'm not trying to be a d1ck, but to be honest. Here's an analogy - if I were a very rich person (I'm not) and I wanted to buy a Bugatti Veyron, and I rolled up to a Bugatti dealer and asked for a test drive, the dealer would laugh at me. People buy supercars sight unseen because they know they are supercars.

In the same way, if you need a reverse proxy, then nginx is what you need. It's that simple. I don't work for nginx, and I'm not an obsessive fanboy - just a realist.

Sent from my iPhone

> On Jul 5, 2020, at 12:22 PM, J.R. wrote:
>
>> >> I am assessing the capabilities and doing a POC on Nginx integration as >> reverse proxy. Are there any pre-configured images with all the protocols and >> the necessary clients to test and demo the capabilities of Nginx or Nginx >> Plus? Doing a self-assessment with all the necessary setup on my local >> machine takes a lot of time to come to a conclusion. > > Installing nginx includes a basic configuration; however, since every > case is different, there is no "generic" template that I know of for > instant operation in the scope you are looking for. > >> Also, is there any technical support team who can review our >> use-cases/requirements and give feedback on whether it is possible with >> Nginx? > > Probably best to contact the commercial nginx sales department directly? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Jul 6 04:06:57 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Mon, 06 Jul 2020 00:06:57 -0400 Subject: Nginx pre-configured test environment with all scenarios In-Reply-To: References: Message-ID: <275cf73f9292b3a3015dae12d527ce96.NginxMailingListEnglish@forum.nginx.org> Yup, thanks, I mailed them today. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288558,288563#msg-288563 From nginx-forum at forum.nginx.org Mon Jul 6 04:09:04 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Mon, 06 Jul 2020 00:09:04 -0400 Subject: Nginx pre-configured test environment with all scenarios In-Reply-To: <106D3D78-E6D3-4B64-B111-F4B53031D6FE@me.com> References: <106D3D78-E6D3-4B64-B111-F4B53031D6FE@me.com> Message-ID: Appreciate your confidence in Nginx!!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288558,288564#msg-288564 From nginx-forum at forum.nginx.org Mon Jul 6 04:15:00 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Mon, 06 Jul 2020 00:15:00 -0400 Subject: Force SSL redirection to target service host for all protocols In-Reply-To: <9111fd05796c34847a99cec4213fe9a6.NginxMailingListEnglish@forum.nginx.org> References: <9111fd05796c34847a99cec4213fe9a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1673aa0aead76ba1be102e199b87df09.NginxMailingListEnglish@forum.nginx.org> Can somebody please comment on this? Thanks, Siva Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288541,288565#msg-288565 From nginx-forum at forum.nginx.org Mon Jul 6 04:22:00 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Mon, 06 Jul 2020 00:22:00 -0400 Subject: Nginx as reverse proxy mail server host In-Reply-To: <20200704082354.GG20939@daoine.org> References: <20200704082354.GG20939@daoine.org> Message-ID: <73e308666bdd791c8af5319bdc49aceb.NginxMailingListEnglish@forum.nginx.org> Thanks for your inputs... let me try those. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288532,288566#msg-288566 From nginx-forum at forum.nginx.org Mon Jul 6 08:17:46 2020 From: nginx-forum at forum.nginx.org (Evald80) Date: Mon, 06 Jul 2020 04:17:46 -0400 Subject: Found Nginx 1.19.0 stopped but no idea what happened In-Reply-To: References: Message-ID: <1eaff7d6a9a0ca051ffeae66b6617519.NginxMailingListEnglish@forum.nginx.org> The problem appeared again and, at the time of writing, is still present; I did not reboot the machine, which would fix it. The following are the commands I executed in order to get some info. Basically this seems to be a problem with nginx and not a library issue. First of all: is nginx running? [root at web ~]# systemctl status nginx
nginx.service - nginx - high performance web server Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled) Active: inactive (dead) since Mon 2020-07-06 00:00:06 CEST; 10h ago Docs: http://nginx.org/en/docs/ Process: 19475 ExecStop=/bin/sh -c /bin/kill -s TERM $(/bin/cat /var/run/nginx.pid) (code=exited, status=0/SUCCESS) Process: 939 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS) Main PID: 1356 (code=exited, status=0/SUCCESS) Jul 06 00:00:01 web systemd[1]: Stopping nginx - high performance web server... Jul 06 00:00:06 web systemd[1]: Stopped nginx - high performance web server. Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable. Seems not active ehm..let's try to see what is resident in memory: [root at web ~]# ps -el F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 4 S 0 1 0 0 80 0 - 11564 ep_pol ? 00:00:55 systemd 1 S 0 2 0 0 80 0 - 0 kthrea ? 00:00:00 kthreadd 1 S 0 4 2 0 60 -20 - 0 worker ? 00:00:00 kworker/0:0H 1 S 0 6 2 0 80 0 - 0 smpboo ? 00:00:10 ksoftirqd/0 1 S 0 7 2 0 -40 - - 0 smpboo ? 00:00:00 migration/0 1 S 0 8 2 0 80 0 - 0 rcu_gp ? 00:00:00 rcu_bh 1 R 0 9 2 0 80 0 - 0 - ? 00:00:34 rcu_sched 1 S 0 10 2 0 60 -20 - 0 rescue ? 00:00:00 lru-add-drain 5 S 0 11 2 0 -40 - - 0 smpboo ? 00:00:01 watchdog/0 5 S 0 13 2 0 80 0 - 0 devtmp ? 00:00:00 kdevtmpfs 1 S 0 14 2 0 60 -20 - 0 rescue ? 00:00:00 netns 1 S 0 15 2 0 80 0 - 0 watchd ? 00:00:00 khungtaskd 1 S 0 16 2 0 60 -20 - 0 rescue ? 00:00:00 writeback 1 S 0 17 2 0 60 -20 - 0 rescue ? 00:00:00 kintegrityd 1 S 0 18 2 0 60 -20 - 0 rescue ? 00:00:00 bioset 1 S 0 19 2 0 60 -20 - 0 rescue ? 00:00:00 bioset 1 S 0 20 2 0 60 -20 - 0 rescue ? 00:00:00 bioset 1 S 0 21 2 0 60 -20 - 0 rescue ? 00:00:00 kblockd 1 S 0 22 2 0 60 -20 - 0 rescue ? 00:00:00 md 1 S 0 23 2 0 60 -20 - 0 rescue ? 00:00:00 edac-poller 1 S 0 24 2 0 60 -20 - 0 rescue ? 00:00:00 watchdogd 1 S 0 30 2 0 80 0 - 0 kswapd ? 
00:04:39 kswapd0 1 S 0 31 2 0 85 5 - 0 ksm_sc ? 00:00:00 ksmd 1 S 0 32 2 0 99 19 - 0 khugep ? 00:00:02 khugepaged 1 S 0 33 2 0 60 -20 - 0 rescue ? 00:00:00 crypto 1 S 0 41 2 0 60 -20 - 0 rescue ? 00:00:00 kthrotld 1 S 0 43 2 0 60 -20 - 0 rescue ? 00:00:00 kmpath_rdacd 1 S 0 44 2 0 60 -20 - 0 rescue ? 00:00:00 kaluad 1 S 0 45 2 0 60 -20 - 0 rescue ? 00:00:00 kpsmoused 1 S 0 47 2 0 60 -20 - 0 rescue ? 00:00:00 ipv6_addrconf 1 S 0 60 2 0 60 -20 - 0 rescue ? 00:00:00 deferwq 1 S 0 97 2 0 80 0 - 0 kaudit ? 00:00:02 kauditd 1 S 0 274 2 0 60 -20 - 0 rescue ? 00:00:00 nfit 1 S 0 275 2 0 60 -20 - 0 rescue ? 00:00:00 mpt_poll_0 1 S 0 276 2 0 60 -20 - 0 rescue ? 00:00:00 mpt/0 1 S 0 277 2 0 60 -20 - 0 rescue ? 00:00:00 ata_sff 1 S 0 283 2 0 80 0 - 0 scsi_e ? 00:00:00 scsi_eh_0 1 S 0 286 2 0 60 -20 - 0 rescue ? 00:00:00 scsi_tmf_0 1 S 0 292 2 0 80 0 - 0 scsi_e ? 00:00:00 scsi_eh_1 1 S 0 295 2 0 60 -20 - 0 rescue ? 00:00:00 scsi_tmf_1 1 S 0 296 2 0 80 0 - 0 scsi_e ? 00:00:00 scsi_eh_2 1 S 0 299 2 0 60 -20 - 0 rescue ? 00:00:00 scsi_tmf_2 5 S 0 309 2 0 9 - - 0 irq_th ? 00:00:05 irq/16-vmwgfx 1 S 0 310 2 0 60 -20 - 0 rescue ? 00:00:00 ttm_swap 1 S 0 395 2 0 60 -20 - 0 rescue ? 00:00:00 kdmflush 1 S 0 396 2 0 60 -20 - 0 rescue ? 00:00:00 bioset 1 S 0 406 2 0 60 -20 - 0 rescue ? 00:00:00 kdmflush 1 S 0 407 2 0 60 -20 - 0 rescue ? 00:00:00 bioset 1 S 0 421 2 0 80 0 - 0 kjourn ? 00:00:11 jbd2/dm-0-8 1 S 0 422 2 0 60 -20 - 0 rescue ? 00:00:00 ext4-rsv-conver 4 S 0 508 1 0 80 0 - 9390 ep_pol ? 00:01:42 systemd-journal 4 S 0 529 1 0 80 0 - 50277 poll_s ? 00:00:00 lvmetad 4 S 0 544 1 0 80 0 - 12039 ep_pol ? 00:00:00 systemd-udevd 1 S 0 569 2 0 60 -20 - 0 worker ? 00:00:02 kworker/0:1H 1 S 0 621 2 0 80 0 - 0 kjourn ? 00:00:00 jbd2/sda1-8 1 S 0 622 2 0 60 -20 - 0 rescue ? 00:00:00 ext4-rsv-conver 5 S 0 654 1 0 76 -4 - 13883 ep_pol ? 00:00:12 auditd 4 S 81 677 1 0 80 0 - 16578 ep_pol ? 00:00:26 dbus-daemon 4 S 999 682 1 0 80 0 - 153062 poll_s ? 
00:00:02 polkitd 4 S 0 684 1 0 80 0 - 24922 poll_s ? 00:00:00 VGAuthService 4 S 0 685 1 0 80 0 - 6596 ep_pol ? 00:00:12 systemd-logind 4 S 0 686 1 0 80 0 - 76321 poll_s ? 00:05:54 vmtoolsd 4 S 0 692 1 0 80 0 - 31597 hrtime ? 00:00:02 crond 4 S 0 923 1 0 80 0 - 142417 ep_pol ? 00:00:22 php-fpm 4 S 0 926 1 0 80 0 - 160559 poll_s ? 00:00:21 httpd 4 S 0 928 1 0 80 0 - 143577 poll_s ? 00:00:43 tuned 4 S 0 936 1 0 80 0 - 28231 poll_s ? 00:00:23 sshd 4 S 0 938 1 0 80 0 - 66729 poll_s ? 00:00:47 rsyslogd 5 S 0 943 1 0 80 0 - 13322 inet_c ? 00:00:00 vsftpd 4 S 0 953 1 0 80 0 - 152429 poll_s ? 00:09:46 f2b/server 4 S 27 998 1 0 80 0 - 28354 do_wai ? 00:00:00 mysqld_safe 4 S 0 1069 1 0 80 0 - 27552 n_tty_ tty1 00:00:00 agetty 5 S 996 1104 923 0 80 0 - 179445 inet_c ? 00:02:31 php-fpm 5 S 996 1105 923 0 80 0 - 178437 inet_c ? 00:02:39 php-fpm 5 S 996 1108 923 0 80 0 - 178901 inet_c ? 00:02:22 php-fpm 5 S 996 1110 923 0 80 0 - 177776 inet_c ? 00:02:19 php-fpm 5 S 996 1113 923 0 80 0 - 179294 inet_c ? 00:02:29 php-fpm 0 S 27 1220 998 0 80 0 - 297171 poll_s ? 00:05:01 mysqld 5 S 0 1331 1 0 80 0 - 22426 ep_pol ? 00:00:02 master 4 S 89 1333 1331 0 80 0 - 22496 ep_pol ? 00:00:01 qmgr 5 S 996 1377 923 0 80 0 - 178856 inet_c ? 00:02:13 php-fpm 5 S 996 1380 923 0 80 0 - 173811 inet_c ? 00:02:29 php-fpm 5 S 996 1387 923 0 80 0 - 173294 inet_c ? 00:02:16 php-fpm 5 S 996 1393 923 0 80 0 - 177947 inet_c ? 00:02:24 php-fpm 5 S 996 1405 923 0 80 0 - 171703 inet_c ? 00:02:20 php-fpm 5 S 996 1406 923 0 80 0 - 177955 inet_c ? 00:02:12 php-fpm 5 S 48 12523 926 0 80 0 - 160559 inet_c ? 00:00:00 httpd 1 S 0 19606 2 0 80 0 - 0 worker ? 00:00:01 kworker/u128:2 5 S 0 19821 1 0 80 0 - 310302 sigsus ? 00:00:15 nginx 1 S 0 23270 2 0 80 0 - 0 worker ? 00:00:18 kworker/0:1 1 S 0 23271 2 0 80 0 - 0 worker ? 00:00:00 kworker/u128:0 4 S 89 27372 1331 0 80 0 - 22452 ep_pol ? 00:00:00 pickup 4 S 0 27666 936 0 80 0 - 38160 poll_s ? 
00:00:00 sshd 4 S 0 27670 27666 0 80 0 - 29151 do_wai pts/0 00:00:00 bash 1 S 0 27787 2 0 80 0 - 0 worker ? 00:00:00 kworker/0:2 1 R 0 27901 2 0 80 0 - 0 - ? 00:00:00 kworker/0:0 4 S 0 27928 936 0 80 0 - 28231 poll_s ? 00:00:00 sshd 5 S 74 27929 27928 0 80 0 - 28231 poll_s ? 00:00:00 sshd 0 R 0 27930 27670 0 80 0 - 38338 - pts/0 00:00:00 ps 5 S 48 64818 926 0 80 0 - 160559 inet_c ? 00:00:00 httpd 5 S 48 64819 926 0 80 0 - 160559 inet_c ? 00:00:00 httpd 5 S 48 64820 926 0 80 0 - 160559 inet_c ? 00:00:00 httpd 5 S 48 64821 926 0 80 0 - 160559 inet_c ? 00:00:00 httpd 5 S 48 64822 926 0 80 0 - 160559 inet_c ? 00:00:00 httpd [root at web ~]# ok so: 5 S 0 19821 1 0 80 0 - 310302 sigsus ? 00:00:15 nginx let's take the PID and see what is in memory: [root at web ~]# pmap 19821 19821: nginx: master process nginx -c /etc/nginx/nginx.conf 000055e515592000 1164K r-x-- nginx 000055e5158b5000 8K r---- nginx 000055e5158b7000 132K rw--- nginx 000055e5158d8000 128K rw--- [ anon ] 000055e51660e000 346604K rw--- [ anon ] 000055e52b889000 607984K rw--- [ anon ] 00007fda69936000 89792K rwx-- [ anon ] 00007fda6f0e6000 1024K rw-s- zero (deleted) 00007fda6f1e6000 1024K rw-s- zero (deleted) 00007fda6f2e6000 1024K rw-s- zero (deleted) 00007fda6f3e6000 1024K rw-s- zero (deleted) 00007fda6f4e6000 1024K rw-s- zero (deleted) 00007fda6f5e6000 1024K rw-s- zero (deleted) 00007fda6f6e6000 1024K rw-s- zero (deleted) 00007fda6f7e6000 1024K rw-s- zero (deleted) 00007fda6f8e6000 1024K rw-s- zero (deleted) 00007fda6f9e6000 1024K rw-s- zero (deleted) 00007fda6fae6000 1024K rw-s- zero (deleted) 00007fda6fbe6000 1024K rw-s- zero (deleted) 00007fda6fce6000 1024K rw-s- zero (deleted) 00007fda6fde6000 10240K rw-s- zero (deleted) 00007fda707e6000 1024K rw-s- zero (deleted) 00007fda708e6000 2112K rwx-- [ anon ] 00007fda70af7000 256K rwx-- [ anon ] 00007fda70b37000 2048K rwx-- [ anon ] 00007fda70d38000 256K rwx-- [ anon ] 00007fda70d78000 2112K rwx-- [ anon ] 00007fda70f89000 256K rwx-- [ anon ] 00007fda70fc9000 
2048K rwx-- [ anon ] 00007fda711ca000 256K rwx-- [ anon ] 00007fda7120a000 2112K rwx-- [ anon ] 00007fda7141b000 256K rwx-- [ anon ] 00007fda7145b000 2048K rwx-- [ anon ] 00007fda7165c000 256K rwx-- [ anon ] 00007fda7169c000 2112K rwx-- [ anon ] 00007fda718ad000 256K rwx-- [ anon ] 00007fda718ed000 2048K rwx-- [ anon ] 00007fda71aee000 256K rwx-- [ anon ] 00007fda71b2e000 2112K rwx-- [ anon ] 00007fda71d3f000 256K rwx-- [ anon ] 00007fda71d7f000 2048K rwx-- [ anon ] 00007fda71f80000 256K rwx-- [ anon ] 00007fda71fc0000 2112K rwx-- [ anon ] 00007fda721d1000 256K rwx-- [ anon ] 00007fda72211000 2048K rwx-- [ anon ] 00007fda72412000 256K rwx-- [ anon ] 00007fda72452000 2112K rwx-- [ anon ] 00007fda72663000 256K rwx-- [ anon ] 00007fda726a3000 2048K rwx-- [ anon ] 00007fda728a4000 256K rwx-- [ anon ] 00007fda728e4000 2112K rwx-- [ anon ] 00007fda72af5000 256K rwx-- [ anon ] 00007fda72b35000 2048K rwx-- [ anon ] 00007fda72d36000 256K rwx-- [ anon ] 00007fda72d76000 2112K rwx-- [ anon ] 00007fda72f87000 256K rwx-- [ anon ] 00007fda72fc7000 2048K rwx-- [ anon ] 00007fda731c8000 256K rwx-- [ anon ] 00007fda73208000 2112K rwx-- [ anon ] 00007fda73419000 256K rwx-- [ anon ] 00007fda73459000 2048K rwx-- [ anon ] 00007fda7365a000 256K rwx-- [ anon ] 00007fda7369a000 2048K rwx-- [ anon ] 00007fda7389b000 256K rwx-- [ anon ] 00007fda738db000 2112K rwx-- [ anon ] 00007fda73aec000 256K rwx-- [ anon ] 00007fda73b2c000 448K rwx-- [ anon ] 00007fda73b9c000 20K r-x-- libmaxminddb.so.0.0.7 00007fda73ba1000 2044K ----- libmaxminddb.so.0.0.7 00007fda73da0000 4K r---- libmaxminddb.so.0.0.7 00007fda73da1000 4K rw--- libmaxminddb.so.0.0.7 00007fda73da2000 12K r-x-- ngx_http_geoip2_module.so 00007fda73da5000 2044K ----- ngx_http_geoip2_module.so 00007fda73fa4000 4K r---- ngx_http_geoip2_module.so 00007fda73fa5000 4K rw--- ngx_http_geoip2_module.so 00007fda73fa6000 112K r-x-- libsasl2.so.3.0.0 00007fda73fc2000 2044K ----- libsasl2.so.3.0.0 00007fda741c1000 4K r---- libsasl2.so.3.0.0 
00007fda741c2000 4K rw--- libsasl2.so.3.0.0 00007fda741c3000 148K r-x-- liblzma.so.5.2.2 00007fda741e8000 2044K ----- liblzma.so.5.2.2 00007fda743e7000 4K r---- liblzma.so.5.2.2 00007fda743e8000 4K rw--- liblzma.so.5.2.2 00007fda743e9000 328K r-x-- libldap-2.4.so.2.10.7 00007fda7443b000 2048K ----- libldap-2.4.so.2.10.7 00007fda7463b000 8K r---- libldap-2.4.so.2.10.7 00007fda7463d000 4K rw--- libldap-2.4.so.2.10.7 00007fda7463e000 56K r-x-- liblber-2.4.so.2.10.7 00007fda7464c000 2044K ----- liblber-2.4.so.2.10.7 00007fda7484b000 4K r---- liblber-2.4.so.2.10.7 00007fda7484c000 4K rw--- liblber-2.4.so.2.10.7 00007fda7484d000 232K r-x-- libnspr4.so 00007fda74887000 2044K ----- libnspr4.so 00007fda74a86000 4K r---- libnspr4.so 00007fda74a87000 8K rw--- libnspr4.so 00007fda74a89000 8K rw--- [ anon ] 00007fda74a8b000 16K r-x-- libplc4.so 00007fda74a8f000 2044K ----- libplc4.so 00007fda74c8e000 4K r---- libplc4.so 00007fda74c8f000 4K rw--- libplc4.so 00007fda74c90000 12K r-x-- libplds4.so 00007fda74c93000 2044K ----- libplds4.so 00007fda74e92000 4K r---- libplds4.so 00007fda74e93000 4K rw--- libplds4.so 00007fda74e94000 164K r-x-- libnssutil3.so 00007fda74ebd000 2044K ----- libnssutil3.so 00007fda750bc000 28K r---- libnssutil3.so 00007fda750c3000 4K rw--- libnssutil3.so 00007fda750c4000 1176K r-x-- libnss3.so 00007fda751ea000 2048K ----- libnss3.so 00007fda753ea000 20K r---- libnss3.so 00007fda753ef000 8K rw--- libnss3.so 00007fda753f1000 8K rw--- [ anon ] 00007fda753f3000 148K r-x-- libsmime3.so 00007fda75418000 2044K ----- libsmime3.so 00007fda75617000 12K r---- libsmime3.so 00007fda7561a000 4K rw--- libsmime3.so 00007fda7561b000 332K r-x-- libssl3.so 00007fda7566e000 2048K ----- libssl3.so 00007fda7586e000 16K r---- libssl3.so 00007fda75872000 4K rw--- libssl3.so 00007fda75873000 4K rw--- [ anon ] 00007fda75874000 172K r-x-- libssh2.so.1.0.1 00007fda7589f000 2048K ----- libssh2.so.1.0.1 00007fda75a9f000 4K r---- libssh2.so.1.0.1 00007fda75aa0000 4K rw--- 
libssh2.so.1.0.1 00007fda75aa1000 200K r-x-- libidn.so.11.6.11 00007fda75ad3000 2044K ----- libidn.so.11.6.11 00007fda75cd2000 4K r---- libidn.so.11.6.11 00007fda75cd3000 4K rw--- libidn.so.11.6.11 00007fda75cd4000 84K r-x-- libgcc_s-4.8.5-20150702.so.1 00007fda75ce9000 2044K ----- libgcc_s-4.8.5-20150702.so.1 00007fda75ee8000 4K r---- libgcc_s-4.8.5-20150702.so.1 00007fda75ee9000 4K rw--- libgcc_s-4.8.5-20150702.so.1 00007fda75eea000 932K r-x-- libstdc++.so.6.0.19 00007fda75fd3000 2044K ----- libstdc++.so.6.0.19 00007fda761d2000 32K r---- libstdc++.so.6.0.19 00007fda761da000 8K rw--- libstdc++.so.6.0.19 00007fda761dc000 84K rw--- [ anon ] 00007fda761f1000 32K r-x-- libyajl.so.2.0.4 00007fda761f9000 2048K ----- libyajl.so.2.0.4 00007fda763f9000 4K r---- libyajl.so.2.0.4 00007fda763fa000 4K rw--- libyajl.so.2.0.4 00007fda763fb000 16K r-x-- libfuzzy.so.2.1.0 00007fda763ff000 2044K ----- libfuzzy.so.2.1.0 00007fda765fe000 4K r---- libfuzzy.so.2.1.0 00007fda765ff000 4K rw--- libfuzzy.so.2.1.0 00007fda76600000 176K r-x-- liblua-5.1.so 00007fda7662c000 2044K ----- liblua-5.1.so 00007fda7682b000 8K r---- liblua-5.1.so 00007fda7682d000 4K rw--- liblua-5.1.so 00007fda7682e000 80K r-x-- liblmdb.so.0.0.0 00007fda76842000 2044K ----- liblmdb.so.0.0.0 00007fda76a41000 4K r---- liblmdb.so.0.0.0 00007fda76a42000 4K rw--- liblmdb.so.0.0.0 00007fda76a43000 1404K r-x-- libxml2.so.2.9.1 00007fda76ba2000 2044K ----- libxml2.so.2.9.1 00007fda76da1000 32K r---- libxml2.so.2.9.1 00007fda76da9000 8K rw--- libxml2.so.2.9.1 00007fda76dab000 8K rw--- [ anon ] 00007fda76dad000 28K r-x-- librt-2.17.so 00007fda76db4000 2044K ----- librt-2.17.so 00007fda76fb3000 4K r---- librt-2.17.so 00007fda76fb4000 4K rw--- librt-2.17.so 00007fda76fb5000 184K r-x-- libGeoIP.so.1.5.0 00007fda76fe3000 2044K ----- libGeoIP.so.1.5.0 00007fda771e2000 4K r---- libGeoIP.so.1.5.0 00007fda771e3000 8K rw--- libGeoIP.so.1.5.0 00007fda771e5000 408K r-x-- libcurl.so.4.3.0 00007fda7724b000 2048K ----- libcurl.so.4.3.0 
00007fda7744b000 8K r---- libcurl.so.4.3.0 00007fda7744d000 4K rw--- libcurl.so.4.3.0 00007fda7744e000 4K rw--- [ anon ] 00007fda7744f000 2148K r-x-- libmodsecurity.so.3.0.4 00007fda77668000 2048K ----- libmodsecurity.so.3.0.4 00007fda77868000 192K r---- libmodsecurity.so.3.0.4 00007fda77898000 12K rw--- libmodsecurity.so.3.0.4 00007fda7789b000 20K r-x-- ngx_http_modsecurity_module.so 00007fda778a0000 2044K ----- ngx_http_modsecurity_module.so 00007fda77a9f000 4K r---- ngx_http_modsecurity_module.so 00007fda77aa0000 4K rw--- ngx_http_modsecurity_module.so 00007fda77aa1000 8K r-x-- ngx_http_brotli_static_module.so 00007fda77aa3000 2044K ----- ngx_http_brotli_static_module.so 00007fda77ca2000 4K r---- ngx_http_brotli_static_module.so 00007fda77ca3000 4K rw--- ngx_http_brotli_static_module.so 00007fda77ca4000 1028K r-x-- libm-2.17.so 00007fda77da5000 2044K ----- libm-2.17.so 00007fda77fa4000 4K r---- libm-2.17.so 00007fda77fa5000 4K rw--- libm-2.17.so 00007fda77fa6000 672K r-x-- ngx_http_brotli_filter_module.so 00007fda7804e000 2044K ----- ngx_http_brotli_filter_module.so 00007fda7824d000 4K r---- ngx_http_brotli_filter_module.so 00007fda7824e000 4K rw--- ngx_http_brotli_filter_module.so 00007fda7824f000 48K r-x-- libnss_files-2.17.so 00007fda7825b000 2044K ----- libnss_files-2.17.so 00007fda7845a000 4K r---- libnss_files-2.17.so 00007fda7845b000 4K rw--- libnss_files-2.17.so 00007fda7845c000 24K rw--- [ anon ] 00007fda78462000 144K r-x-- libselinux.so.1 00007fda78486000 2044K ----- libselinux.so.1 00007fda78685000 4K r---- libselinux.so.1 00007fda78686000 4K rw--- libselinux.so.1 00007fda78687000 8K rw--- [ anon ] 00007fda78689000 88K r-x-- libresolv-2.17.so 00007fda7869f000 2048K ----- libresolv-2.17.so 00007fda7889f000 4K r---- libresolv-2.17.so 00007fda788a0000 4K rw--- libresolv-2.17.so 00007fda788a1000 8K rw--- [ anon ] 00007fda788a3000 12K r-x-- libkeyutils.so.1.5 00007fda788a6000 2044K ----- libkeyutils.so.1.5 00007fda78aa5000 4K r---- libkeyutils.so.1.5 
00007fda78aa6000 4K rw--- libkeyutils.so.1.5 00007fda78aa7000 56K r-x-- libkrb5support.so.0.1 00007fda78ab5000 2048K ----- libkrb5support.so.0.1 00007fda78cb5000 4K r---- libkrb5support.so.0.1 00007fda78cb6000 4K rw--- libkrb5support.so.0.1 00007fda78cb7000 196K r-x-- libk5crypto.so.3.1 00007fda78ce8000 2044K ----- libk5crypto.so.3.1 00007fda78ee7000 8K r---- libk5crypto.so.3.1 00007fda78ee9000 4K rw--- libk5crypto.so.3.1 00007fda78eea000 12K r-x-- libcom_err.so.2.1 00007fda78eed000 2044K ----- libcom_err.so.2.1 00007fda790ec000 4K r---- libcom_err.so.2.1 00007fda790ed000 4K rw--- libcom_err.so.2.1 00007fda790ee000 868K r-x-- libkrb5.so.3.3 00007fda791c7000 2044K ----- libkrb5.so.3.3 00007fda793c6000 56K r---- libkrb5.so.3.3 00007fda793d4000 12K rw--- libkrb5.so.3.3 00007fda793d7000 296K r-x-- libgssapi_krb5.so.2.2 00007fda79421000 2048K ----- libgssapi_krb5.so.2.2 00007fda79621000 4K r---- libgssapi_krb5.so.2.2 00007fda79622000 8K rw--- libgssapi_krb5.so.2.2 00007fda79624000 8K r-x-- libfreebl3.so 00007fda79626000 2044K ----- libfreebl3.so 00007fda79825000 4K r---- libfreebl3.so 00007fda79826000 4K rw--- libfreebl3.so 00007fda79827000 1804K r-x-- libc-2.17.so 00007fda799ea000 2048K ----- libc-2.17.so 00007fda79bea000 16K r---- libc-2.17.so 00007fda79bee000 8K rw--- libc-2.17.so 00007fda79bf0000 20K rw--- [ anon ] 00007fda79bf5000 84K r-x-- libz.so.1.2.7 00007fda79c0a000 2044K ----- libz.so.1.2.7 00007fda79e09000 4K r---- libz.so.1.2.7 00007fda79e0a000 4K rw--- libz.so.1.2.7 00007fda79e0b000 2264K r-x-- libcrypto.so.1.0.2k 00007fda7a041000 2048K ----- libcrypto.so.1.0.2k 00007fda7a241000 112K r---- libcrypto.so.1.0.2k 00007fda7a25d000 52K rw--- libcrypto.so.1.0.2k 00007fda7a26a000 16K rw--- [ anon ] 00007fda7a26e000 412K r-x-- libssl.so.1.0.2k 00007fda7a2d5000 2048K ----- libssl.so.1.0.2k 00007fda7a4d5000 16K r---- libssl.so.1.0.2k 00007fda7a4d9000 28K rw--- libssl.so.1.0.2k 00007fda7a4e0000 384K r-x-- libpcre.so.1.2.0 00007fda7a540000 2048K ----- libpcre.so.1.2.0 
00007fda7a740000 4K r---- libpcre.so.1.2.0 00007fda7a741000 4K rw--- libpcre.so.1.2.0 00007fda7a742000 32K r-x-- libcrypt-2.17.so 00007fda7a74a000 2044K ----- libcrypt-2.17.so 00007fda7a949000 4K r---- libcrypt-2.17.so 00007fda7a94a000 4K rw--- libcrypt-2.17.so 00007fda7a94b000 184K rw--- [ anon ] 00007fda7a979000 92K r-x-- libpthread-2.17.so 00007fda7a990000 2044K ----- libpthread-2.17.so 00007fda7ab8f000 4K r---- libpthread-2.17.so 00007fda7ab90000 4K rw--- libpthread-2.17.so 00007fda7ab91000 16K rw--- [ anon ] 00007fda7ab95000 8K r-x-- libdl-2.17.so 00007fda7ab97000 2048K ----- libdl-2.17.so 00007fda7ad97000 4K r---- libdl-2.17.so 00007fda7ad98000 4K rw--- libdl-2.17.so 00007fda7ad99000 136K r-x-- ld-2.17.so 00007fda7adc6000 1600K rwx-- [ anon ] 00007fda7af57000 256K rwx-- [ anon ] 00007fda7af97000 64K rwx-- [ anon ] 00007fda7afa7000 36K rw--- [ anon ] 00007fda7afb7000 4K rw-s- zero (deleted) 00007fda7afb8000 4K rw-s- [ shmid=0x16 ] 00007fda7afb9000 4K rw--- [ anon ] 00007fda7afba000 4K r---- ld-2.17.so 00007fda7afbb000 4K rw--- ld-2.17.so 00007fda7afbc000 4K rw--- [ anon ] 00007ffe7bef6000 132K rw--- [ stack ] 00007ffe7bfea000 8K r-x-- [ anon ] ffffffffff600000 4K r-x-- [ anon ] total 1241212K [root at web ~]# the interesting part is: 000055e51660e000 346604K rw--- [ anon ] 000055e52b889000 607984K rw--- [ anon ] 00007fda69936000 89792K rwx-- [ anon ] there is almost 1.1 GB of virtual memory. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288511,288567#msg-288567 From nginx-forum at forum.nginx.org Mon Jul 6 10:49:15 2020 From: nginx-forum at forum.nginx.org (webber) Date: Mon, 06 Jul 2020 06:49:15 -0400 Subject: range_filter_module get duplicated Accept-Ranges response headers Message-ID: Hello, Recently, we found that if we use the nginx slice module, and the upstream server is a static file server, nginx will respond with duplicated `Accept-Ranges` headers if the client request does not include a Range header.
A minimal config example follows:

```
server {
    listen 80;
    server_name _;
    default_type text/html;

    location /get_file {
        slice 256k;
        proxy_set_header Range $slice_range;
        proxy_pass http://127.0.0.1:8899;
    }
}
```

Use curl to get a 1 MB file:

```
curl -s http://localhost/get_file/1mb.test -D- -o /dev/null
HTTP/1.1 200 OK
Date: Mon, 06 Jul 2020 10:32:58 GMT
Content-Type: application/octet-stream
Content-Length: 1048576
Connection: keep-alive
Last-Modified: Mon, 06 Jul 2020 07:34:23 GMT
Cache-Control: public, max-age=43200
Expires: Mon, 06 Jul 2020 22:32:58 GMT
ETag: "1594020863.76-1048576-4019326287"
Accept-Ranges: bytes
Accept-Ranges: bytes
```

But if I add a Range header to the curl request, I get the expected response. Then I reviewed ngx_http_range_filter_module; at `goto next_filter` (line 253), should we handle the NGX_HTTP_OK response? Like:

```
next_filter:

    if (r->headers_out.status == NGX_HTTP_OK) {
        r->headers_out.accept_ranges = NULL;
        return ngx_http_next_header_filter(r);
    }

    r->headers_out.accept_ranges = ngx_list_push(&r->headers_out.headers);
    if (r->headers_out.accept_ranges == NULL) {
        return NGX_ERROR;
    }

    r->headers_out.accept_ranges->hash = 1;
    ngx_str_set(&r->headers_out.accept_ranges->key, "Accept-Ranges");
    ngx_str_set(&r->headers_out.accept_ranges->value, "bytes");

    return ngx_http_next_header_filter(r);
```

I am wondering whether this is a bug? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288569,288569#msg-288569 From mdounin at mdounin.ru Mon Jul 6 15:10:23 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Jul 2020 18:10:23 +0300 Subject: $ssl_client_escaped_cert does not contain intermediate client certificates In-Reply-To: <9bd9d6717560c7b9e3a23ca365be2f35.NginxMailingListEnglish@forum.nginx.org> References: <9bd9d6717560c7b9e3a23ca365be2f35.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200706151023.GY12747@mdounin.ru> Hello!
On Sat, Jul 04, 2020 at 05:52:09AM -0400, everhardt wrote: > I have the following certificate chain: Root certificate > Intermediate > certificate > End user certificate. > > I've set up nginx as an SSL termination proxy for a backend service that > differentiates its actions based on the serial of the intermediate > certificate and the subject of the end user certificate. Only the root > certificate is available at the (nginx) server; the client will present the > intermediate + end user certificate. > > Relevant nginx configuration is as follows: > > ssl_client_certificate root_cert.pem; # so only the root certificate > ssl_verify_client on; > ssl_verify_depth 2; > > proxy_set_header X-Ssl-Client-Escaped-Cert $ssl_client_escaped_cert; # to > pass it on to the backend service > > Connectivity works great: nginx accepts the request if the client (I'm > testing with curl) presents intermediate + end user certificate and passes > it on to the backend service. If the client presents only one of the > certificates, nginx rightly rejects it. So I'm sure curl shares both > certificates with nginx. > > Where it goes wrong is when nginx passes the certificate information to the > backend service. The embedded variable $ssl_client_escaped_cert only seems > to contain the end user certificate and not the intermediate one(s). I did > some logging to check $ssl_client_raw_cert, but that also only contains the > end user certificate. > > Is there a way to get the intermediate client certificates included in these > embedded variables? No. Further, intermediate certs as sent by the client are not saved by OpenSSL into session information, so the approach you are trying to use is not going to work at all, more or less universally (or at least it won't work with session resumption). For things to work, you may want to reconsider the approach and make sure all intermediate certificates are known on the server instead.
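As a sketch of that approach (file names here are hypothetical, reusing the names from the config above): concatenate the known intermediate certificate(s) with the root into the bundle given to "ssl_client_certificate", so the server can complete the chain itself and the client only needs to send the end user certificate:

```
# build a CA bundle with the root and the known intermediates first,
# e.g. (hypothetical file names):
#     cat root_cert.pem intermediate_cert.pem > ca_bundle.pem

ssl_client_certificate ca_bundle.pem;
ssl_verify_client on;
ssl_verify_depth 2;

# the backend then only needs the end user certificate
proxy_set_header X-Ssl-Client-Escaped-Cert $ssl_client_escaped_cert;
```

With the intermediates known server-side, the backend can look up the issuing intermediate from the end user certificate's issuer field instead of relying on the client to send it.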
-- Maxim Dounin http://mdounin.ru/ From denok at yandex.com Mon Jul 6 16:52:09 2020 From: denok at yandex.com (Denis Sh.) Date: Mon, 06 Jul 2020 09:52:09 -0700 Subject: SNI support in `mail` context In-Reply-To: <1214981594053695@mail.yandex.ru> Message-ID: <1916141594054283@mail.yandex.ru> An HTML attachment was scrubbed... URL: From denok at yandex.com Mon Jul 6 17:17:31 2020 From: denok at yandex.com (Denis Sh.) Date: Mon, 06 Jul 2020 10:17:31 -0700 Subject: SNI support in `mail` context (fixed formatting) Message-ID: <1919741594053770@mail.yandex.ru> Hi! So, when proxying SMTP/IMAP, is it possible to get the Server Name that mail clients send as a part of Client Hello? Similar to Embedded Variables for ngx_http_ssl_module: $ssl_server_name returns the server name requested through SNI (1.7.0); I don't see these vars defined here https://github.com/nginx/nginx/blob/829c9d5981da1abc81dd7e2fb563da592203e54a/src/mail/ngx_mail_ssl_module.c#L229 Or should I use `stream` to proxy mail? Any ideas? Thanks, Denis From mdounin at mdounin.ru Mon Jul 6 17:31:50 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Jul 2020 20:31:50 +0300 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <1919741594053770@mail.yandex.ru> References: <1919741594053770@mail.yandex.ru> Message-ID: <20200706173150.GC12747@mdounin.ru> Hello! On Mon, Jul 06, 2020 at 10:17:31AM -0700, Denis Sh. wrote: > So, when proxying SMTP/IMAP, is it possible to get the Server > Name that mail clients send as a part of Client Hello? Currently no. > Similar to Embedded Variables for ngx_http_ssl_module: > $ssl_server_name > returns the server name requested through SNI (1.7.0); > > I don't see these vars defined here https://github.com/nginx/nginx/blob/829c9d5981da1abc81dd7e2fb563da592203e54a/src/mail/ngx_mail_ssl_module.c#L229 There are no variables in the mail module. > Or should I use `stream` to proxy mail? > > Any ideas? This depends on what you are trying to achieve.
For obvious reasons stream won't work for complex protocol-dependent things, such as STARTTLS or authentication. But if the goal is to provide different certificates to different names requested via SNI in SMTPS and IMAPS connections, proxying via the stream module with ssl_preread (http://nginx.org/r/ssl_preread) might work for you. Note though that in general there is no concept of name-based virtual hosts in mail protocols, and using name-based virtual hosts for SSL might not be a good idea either. Also, the status of SNI support by email clients varies, and is "unknown" in most cases (https://en.wikipedia.org/wiki/Comparison_of_email_clients). -- Maxim Dounin http://mdounin.ru/ From denok at yandex.com Mon Jul 6 18:06:14 2020 From: denok at yandex.com (Denis Sh.) Date: Mon, 06 Jul 2020 11:06:14 -0700 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <20200706173150.GC12747@mdounin.ru> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> Message-ID: <276181594058093@mail.yandex.ru> An HTML attachment was scrubbed... URL: From denok at yandex.com Mon Jul 6 18:07:56 2020 From: denok at yandex.com (Denis Sh.) Date: Mon, 06 Jul 2020 11:07:56 -0700 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <276181594058093@mail.yandex.ru> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <276181594058093@mail.yandex.ru> Message-ID: <8308131594058815@mail.yandex.ru> Thanks for your reply, Maxim. Sorry, I screwed up the HTML formatting! What are the chances that you would look into adding these variables into the mail module upstream? It looks like it's not very hard to do. Or is SNI for mail not considered to be a real thing? >>> But if the goal is to provide >> different certificates to different names requested via SNI in >> SMTPS and IMAPS connections I'm afraid I need to support STARTTLS and either do AUTH completely on NGINX or on the backends.
Also, I wasn't able to find a reason why NGINX intentionally doesn't support passing the AUTH through to the backend for SMTP, the same as with IMAP/POP. Yeah, I know that SNI for mail protocols is a "grey" area, but I still want to start implementing it. Denis > > 06.07.2020, 10:32, "Maxim Dounin" : >> Hello! >> >> On Mon, Jul 06, 2020 at 10:17:31AM -0700, Denis Sh. wrote: >> >>> So, when proxying SMTP/IMAP, is it possible to get the Server >>> Name that mail clients send as a part of Client Hello? >> >> Currently no. >> >>> Similar to Embedded Variables for ngx_http_ssl_module: >>> $ssl_server_name >>> returns the server name requested through SNI (1.7.0); >>> >>> I don't see these vars defined here https://github.com/nginx/nginx/blob/829c9d5981da1abc81dd7e2fb563da592203e54a/src/mail/ngx_mail_ssl_module.c#L229 >> >> There are no variables in the mail module. >> >>> Or should I use `stream` to proxy mail? >>> >>> Any ideas? >> >> This depends on what you are trying to achieve. For obvious >> reasons stream won't work for complex protocol-dependent things, >> such as STARTTLS or authentication. But if the goal is to provide >> different certificates to different names requested via SNI in >> SMTPS and IMAPS connections, proxying via the stream module with >> ssl_preread (http://nginx.org/r/ssl_preread) might work for you. >> >> Note though that in general there is no concept of name-based >> virtual hosts in mail protocols, and using name-based virtual >> hosts for SSL might not be a good idea either. Also, the status of >> SNI support by email clients varies, and is "unknown" in most cases >> (https://en.wikipedia.org/wiki/Comparison_of_email_clients).
>> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > , From linux at cmadams.net Mon Jul 6 18:20:57 2020 From: linux at cmadams.net (Chris Adams) Date: Mon, 6 Jul 2020 13:20:57 -0500 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <20200706173150.GC12747@mdounin.ru> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> Message-ID: <20200706182057.GA10523@cmadams.net> Once upon a time, Maxim Dounin said: > Note though that in general there is no concept of name-based > virtual hosts in mail protocols, and using name-based virtual > hosts for SSL might not be a good idea either. Also, status of > SNI support by email clients varies, and "unknown" in most cases > (https://en.wikipedia.org/wiki/Comparison_of_email_clients). I'm pretty sure that list is out of date - I have Dovecot handling POP/IMAP for a bunch of domains with SNI and no user complaints. I'd lean towards the SNI column being ? because it's just not being checked. -- Chris Adams From linux at cmadams.net Mon Jul 6 18:27:18 2020 From: linux at cmadams.net (Chris Adams) Date: Mon, 6 Jul 2020 13:27:18 -0500 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <8308131594058815@mail.yandex.ru> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <276181594058093@mail.yandex.ru> <8308131594058815@mail.yandex.ru> Message-ID: <20200706182718.GB10523@cmadams.net> Once upon a time, Denis Sh. said: > Also, I wasn't able to find a reason why NGINX intentionally doesn't support passing thru the AUTH to the backend for SMTP, same as with IMAP/POP? I looked at adding this, using ID for IMAP and XCLIENT for POP3 (what Dovecot supports)... didn't get the time and don't need it now, but it didn't look like it would be too difficult to implement. 
-- Chris Adams From denok at yandex.com Mon Jul 6 18:39:30 2020 From: denok at yandex.com (Denis Sh.) Date: Mon, 06 Jul 2020 11:39:30 -0700 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <20200706182718.GB10523@cmadams.net> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <276181594058093@mail.yandex.ru> <8308131594058815@mail.yandex.ru> <20200706182718.GB10523@cmadams.net> Message-ID: <1953641594060692@mail.yandex.ru> So, I think passthru AUTH for IMAP and POP works out of the box now. It's only SMTP where NGINX never even tries to AUTH against the backend. I wonder why this decision was taken? 06.07.2020, 11:27, "Chris Adams" : > Once upon a time, Denis Sh. said: >> Also, I wasn't able to find a reason why NGINX intentionally doesn't support passing thru the AUTH to the backend for SMTP, same as with IMAP/POP? > > I looked at adding this, using ID for IMAP and XCLIENT for POP3 (what > Dovecot supports)... didn't get the time and don't need it now, but it > didn't look like it would be too difficult to implement. > -- > Chris Adams > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From denok at yandex.com Mon Jul 6 18:42:41 2020 From: denok at yandex.com (Denis Sh.) Date: Mon, 06 Jul 2020 11:42:41 -0700 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <20200706182057.GA10523@cmadams.net> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <20200706182057.GA10523@cmadams.net> Message-ID: <775671594060843@mail.yandex.ru> Yeah, it's 2020 after all :) I think most modern mail clients do support SNI and send the server name in the Client Hello. So, Chris, you're saying that you successfully run Postfix and Dovecot relying on SNI in production? How big is your user base, roughly?
Thanks 06.07.2020, 11:21, "Chris Adams" : > Once upon a time, Maxim Dounin said: >> Note though that in general there is no concept of name-based >> virtual hosts in mail protocols, and using name-based virtual >> hosts for SSL might not be a good idea either. Also, status of >> SNI support by email clients varies, and "unknown" in most cases >> (https://en.wikipedia.org/wiki/Comparison_of_email_clients). > > I'm pretty sure that list is out of date - I have Dovecot handling > POP/IMAP for a bunch of domains with SNI and no user complaints. I'd > lean towards the SNI column being ? because it's just not being checked. > > -- > Chris Adams > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Jul 6 18:52:31 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Jul 2020 21:52:31 +0300 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <8308131594058815@mail.yandex.ru> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <276181594058093@mail.yandex.ru> <8308131594058815@mail.yandex.ru> Message-ID: <20200706185231.GE12747@mdounin.ru> Hello! On Mon, Jul 06, 2020 at 11:07:56AM -0700, Denis Sh. wrote: > Thanks for your reply, Maxim. Sorry, I screwed up the HTML formatting! > > What are the chances that you would look into adding these variables into the mail module upstream? > Looks like it's not very hard to do. Or is SNI for mail not considered to be a real thing? There are no variables in the mail module, and it's unlikely we'll add any in the short-term perspective. The SNI server name as sent by the client can be passed to the auth_http script if needed, along with the other Auth-SSL-* headers; this should be simple enough. But we are yet to see use cases where this is needed (and this is not going to help to provide different certificates for different names).
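[For readers unfamiliar with it, the auth_http exchange referred to above works roughly as sketched below. This is an abridged illustration, not a complete protocol listing; in particular, the Auth-SSL-Server-Name header is hypothetical and does not exist in nginx as of this thread, it is shown only to indicate where such a change would land.]

```
# Request from nginx to the auth_http endpoint (abridged):
GET /auth HTTP/1.0
Auth-Method: plain
Auth-User: user
Auth-Pass: secret
Auth-Protocol: imap
Auth-SSL: on
Auth-SSL-Verify: NONE
Auth-SSL-Server-Name: mail.example.com   <-- hypothetical, not in nginx today

# Response from the endpoint, telling nginx which backend to use:
HTTP/1.0 200 OK
Auth-Status: OK
Auth-Server: 192.0.2.10
Auth-Port: 143
```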
> >>> But if the goal is to provide > >> different certificates to different names requested via SNI in > >> SMTPS and IMAPS connections > > I'm afraid I need to support STARTTLS and either completely do AUTH on NGINX or backends. So stream certainly won't work for you. The question is still the same though: what are you trying to achieve? > Also, I wasn't able to find a reason why NGINX intentionally > doesn't support passing thru the AUTH to the backend for SMTP, > same as with IMAP/POP? SMTP is considered to be a protocol to be handled locally, with authentication details passed via XCLIENT (if at all). You don't need to proxy users to specific backend servers with their mailboxes; an SMTP server running locally (or on a dedicated machine) would be enough. There are pending patches to allow SMTP proxying to work more like the other protocols if requested, though these need someone to review them, and given the low priority of mail proxying it is unknown when this is going to happen. > Yeah, I know that SNI for mail protocols is a "grey" area, > still want to start implementing it. > > Denis > > > > 06.07.2020, 10:32, "Maxim Dounin" : > >> Hello! > >> > >> On Mon, Jul 06, 2020 at 10:17:31AM -0700, Denis Sh. wrote: > >> > >>> So, when proxying SMTP/IMAP, is it possible to get the Server > >>> Name that mail clients send as a part of Client Hello? > >> > >> Currently no. > >> > >>> Similar to Embedded Variables for ngx_http_ssl_module: > >>> $ssl_server_name > >>> returns the server name requested through SNI (1.7.0); > >>> > >>> I don't see these vars defined here https://github.com/nginx/nginx/blob/829c9d5981da1abc81dd7e2fb563da592203e54a/src/mail/ngx_mail_ssl_module.c#L229 > >> > >> There are no variables in the mail module. > >> > >>> Or should I use `stream` to proxy mail? > >>> > >>> Any ideas? > >> > >> This depends on what you are trying to achieve.
For obvious > >> reasons stream won't work for complex protocol-dependent things, > >> such as STARTTLS or authentication. But if the goal is to provide > >> different certificates to different names requested via SNI in > >> SMTPS and IMAPS connections, proxying via the stream module with > >> ssl_preread (http://nginx.org/r/ssl_preread) might work for you. > >> > >> Note though that in general there is no concept of name-based > >> virtual hosts in mail protocols, and using name-based virtual > >> hosts for SSL might not be a good idea either. Also, status of > >> SNI support by email clients varies, and "unknown" in most cases > >> (https://en.wikipedia.org/wiki/Comparison_of_email_clients). > >> > >> -- > >> Maxim Dounin > >> http://mdounin.ru/ > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > , > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://mdounin.ru/ From denok at yandex.com Mon Jul 6 19:08:50 2020 From: denok at yandex.com (Denis Sh.) Date: Mon, 06 Jul 2020 12:08:50 -0700 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <20200706185231.GE12747@mdounin.ru> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <276181594058093@mail.yandex.ru> <8308131594058815@mail.yandex.ru> <20200706185231.GE12747@mdounin.ru> Message-ID: <1824031594061734@mail.yandex.ru> Thanks Maxim, so > SNI server name as sent by the client can be passed to the > auth_http script if needed, along this other Auth-SSL* headers, > this should be simple enough. you mean with config or changing NGINX code? 
> But we are yet to see use cases > where this is needed The use case: having front-end nodes that would terminate TLS/SSL for SMTP/IMAP/POP including STARTTLS (offloading), using the Server Name presented by the clients via SNI, and pass traffic to the Postfix/Dovecot backends respectively. The idea is that the back-ends won't have to terminate SSL and keep all the keys/certs. >> So stream certainly won't work for you. The question is still the >> same though: what are you trying to achieve. Yeah, stream would only work for pure TLS, not for STARTTLS, same as with HAProxy by the way. I've described the use case above; also, there are blog posts on the internet where people use NGINX mail proxying for transitioning their user base from "old" to "new" platforms, allowing them to keep existing configs and names in their mail clients' settings. > SMTP is considered to be a protocol to be handled locally, with > authentication details passed via XCLIENT (if at all). You don't > need to proxy users to specific backend servers with their > mailboxes, an SMTP server running locally (or on dedicated > machine) would be enough. I'm not sure I understand the part where you talk about a local or dedicated SMTP server? SMTP is used to send mail, and mail servers also talk to each other over the public internet using it. I would not say it's a local-only protocol; it's not LMTP. Also, XCLIENT is an extension, not the standard way of doing AUTH, correct me if I'm wrong here. I'm still curious why NGINX chose not to support passthru AUTH; that seems logical and aligns with the use case. > There are pending patches to allow SMTP proxying to work more like > other protocols if requested, though these needs someone to review > them, and given low priority of mail proxying it is unknown when > this is going to happen. Oh, so other people have already proposed that; I think I saw some proprietary products that might be based on NGINX that actually implemented this.
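[For reference, the XCLIENT-based model Maxim describes corresponds to a mail-proxy configuration roughly like the following. This is a sketch only; all names, addresses and certificate paths are hypothetical, and the backend address actually comes from the auth_http response rather than the config.]

```nginx
mail {
    server_name  mx.example.com;              # hypothetical name
    auth_http    http://127.0.0.1:9000/auth;  # hypothetical endpoint; returns Auth-Server/Auth-Port

    # TLS offloading: nginx holds the keys/certs, the backend does not
    ssl_certificate      /etc/nginx/ssl/mx.crt;   # hypothetical paths
    ssl_certificate_key  /etc/nginx/ssl/mx.key;

    server {
        listen     25;
        protocol   smtp;
        starttls   on;             # nginx terminates STARTTLS
        smtp_auth  login plain;    # nginx handles AUTH via auth_http
        xclient    on;             # pass client IP (and login) to the backend via XCLIENT
    }
}
```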
Denis 06.07.2020, 11:52, "Maxim Dounin" : > Hello! > > On Mon, Jul 06, 2020 at 11:07:56AM -0700, Denis Sh. wrote: > >> ?Thank for your reply, Maxim. Sorry, I screwed with HTML formatting! >> >> ??What are the chances that you would look into adding these variable into mail module in upstream? >> ??Looks like it's not very hard to do. Or SNI for mail is not considered to be a real thing? > > There are no variables in the mail module, and it's unlikely we'll > add any in the short-term perspective. > > SNI server name as sent by the client can be passed to the > auth_http script if needed, along this other Auth-SSL* headers, > this should be simple enough. But we are yet to see use cases > where this is needed (and this is not going to help to provide > different certificates for different names). > >> ?>>> But if the goal is to provide >> ?>> different certificates to different names requested via SNI in >> ?>> SMTPS and IMAPS connections >> >> ??I'm afraid I need to support STARTTLS and either completely do AUTH on NGINX or backends. > > So stream certainly won't work for you. The question is still the > same though: what are you trying to achieve. > >> ??Also, I wasn't able to find a reason why NGINX intentionally >> ??doesn't support passing thru the AUTH to the backend for SMTP, >> ??same as with IMAP/POP? > > SMTP is considered to be a protocol to be handled locally, with > authentication details passed via XCLIENT (if at all). You don't > need to proxy users to specific backend servers with their > mailboxes, an SMTP server running locally (or on dedicated > machine) would be enough. > > There are pending patches to allow SMTP proxying to work more like > other protocols if requested, though these needs someone to review > them, and given low priority of mail proxying it is unknown when > this is going to happen. > >> ??Yeah, I know that SNI for mail protocols is a "grey" area, >> ??still want to start implementing it. 
>> >> ??Denis >> ?> >> ?> 06.07.2020, 10:32, "Maxim Dounin" : >> ?>> Hello! >> ?>> >> ?>> On Mon, Jul 06, 2020 at 10:17:31AM -0700, Denis Sh. wrote: >> ?>> >> ?>>> ?So, when proxying SMTP/IMAP, is it possible to get the Server >> ?>>> ?Name that mail clients send as a part of Client Hello? >> ?>> >> ?>> Currently no. >> ?>> >> ?>>> ?Similar to Embedded Variables for ngx_http_ssl_module: >> ?>>> ?$ssl_server_name >> ?>>> ?returns the server name requested through?SNI?(1.7.0); >> ?>>> >> ?>>> ?I don't see these vars defined here https://github.com/nginx/nginx/blob/829c9d5981da1abc81dd7e2fb563da592203e54a/src/mail/ngx_mail_ssl_module.c#L229 >> ?>> >> ?>> There is no variables in the mail module. >> ?>> >> ?>>> ?Or should I use `stream` to proxy mail? >> ?>>> >> ?>>> ?Any ideas? >> ?>> >> ?>> This depends on what you are trying to achieve. For obvious >> ?>> reasons stream won't work for complex protocol-dependent things, >> ?>> such as STARTTLS or authentication. But if the goal is to provide >> ?>> different certificates to different names requested via SNI in >> ?>> SMTPS and IMAPS connections, proxying via the stream module with >> ?>> ssl_preread (http://nginx.org/r/ssl_preread) might work for you. >> ?>> >> ?>> Note though that in general there is no concept of name-based >> ?>> virtual hosts in mail protocols, and using name-based virtual >> ?>> hosts for SSL might not be a good idea either. Also, status of >> ?>> SNI support by email clients varies, and "unknown" in most cases >> ?>> (https://en.wikipedia.org/wiki/Comparison_of_email_clients). 
>> ?>> >> ?>> -- >> ?>> Maxim Dounin >> ?>> http://mdounin.ru/ >> ?>> _______________________________________________ >> ?>> nginx mailing list >> ?>> nginx at nginx.org >> ?>> http://mailman.nginx.org/mailman/listinfo/nginx >> ?> , >> ?_______________________________________________ >> ?nginx mailing list >> ?nginx at nginx.org >> ?http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Jul 6 19:55:05 2020 From: nginx-forum at forum.nginx.org (everhardt) Date: Mon, 06 Jul 2020 15:55:05 -0400 Subject: $ssl_client_escaped_cert does not contain intermediate client certificates In-Reply-To: <20200706151023.GY12747@mdounin.ru> References: <20200706151023.GY12747@mdounin.ru> Message-ID: <15f8f6621c602215ced3b4f3146eab3f.NginxMailingListEnglish@forum.nginx.org> Thanks for your reply, Maxim! I'll work out an alternative then. Re. session resumption, I read in the OpenSSL docs (https://www.openssl.org/docs/man1.1.0/man3/SSL_get0_verified_chain.html) that OpenSSL is willing to store the chain longer than a single request, but only if the implementing application (nginx) is managing freeing it at the proper time (eg. when the session times out): > If applications wish to use any certificates in the returned chain indefinitely they must increase the reference counts using X509_up_ref() or obtain a copy of the whole chain with X509_chain_up_ref(). ps. 
I now see that HAProxy is also discussing it: https://www.mail-archive.com/haproxy at formilux.org/msg35607.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288553,288596#msg-288596 From mdounin at mdounin.ru Mon Jul 6 23:19:51 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Jul 2020 02:19:51 +0300 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <1824031594061734@mail.yandex.ru> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <276181594058093@mail.yandex.ru> <8308131594058815@mail.yandex.ru> <20200706185231.GE12747@mdounin.ru> Message-ID: <20200706231951.GG12747@mdounin.ru> Hello! On Mon, Jul 06, 2020 at 12:08:50PM -0700, Denis Sh. wrote: > Thanks Maxim, so > > > SNI server name as sent by the client can be passed to the > > auth_http script if needed, along with the other Auth-SSL-* headers, > > this should be simple enough. > > you mean with config or changing NGINX code? Changing the nginx code, of course. > > But we are yet to see use cases > > where this is needed > > use case - having front-end nodes that would terminate TLS/SSL SMTP/IMAP/POP including > STARTTLS (offloading) using the Server Name presented by the clients via SNI and pass traffic > to the Postfix/Dovecot backends respectively. The idea is that back-ends won't have to terminate SSL and > keep all the keys/certs. Define "using the server name presented by the clients". Using how? I see only two possible usage scenarios here: 1. Use the server name to provide the appropriate certificate. The Auth-SSL-Whatever header is not going to help here though. On the other hand, providing a certificate with multiple names easily solves this, and also covers clients not using SNI. 2. Use the server name to choose the backend and/or the default domain name for authentication.
This is not going to work for non-SSL connections though, since, as already mentioned, there is no concept of name-based virtual hosts in mail protocols. So, the Auth-SSL-Whatever header is only going to help if you are not going to use non-SSL connections at all. > >> So stream certainly won't work for you. The question is still the > >> same though: what are you trying to achieve. > > Yeah, stream would only work for pure TLS, not for STARTTLS, > same as with HAProxy by the way. > I've described the use case above; also, there are blog posts on > the internet where people use NGINX mail proxying > for transitioning their user base from "old" to "new" platforms, > allowing them to keep existing configs and names in mail > client's settings. It is not clear how SNI is going to help here. > > SMTP is considered to be a protocol to be handled locally, with > > authentication details passed via XCLIENT (if at all). You don't > > need to proxy users to specific backend servers with their > > mailboxes, an SMTP server running locally (or on a dedicated > > machine) would be enough. > > I'm not sure I understand the part where you talk about a local or > dedicated SMTP server? > > SMTP is used to send mail, and mail servers also talk to each > other over the public internet using it. > I would not say it's a local-only protocol; it's not LMTP. > > Also, XCLIENT is an extension, not the standard way of doing > AUTH, correct me if I'm wrong here. > > I'm still curious why NGINX chose not to support passthru AUTH; > that seems logical and aligns with the use case. Probably I wasn't clear enough. The usage model of nginx expects SMTP backends to be local. A locally running fully-functional SMTP server such as Postfix would be a good enough solution. The main goal of using nginx in front of a real SMTP server is to protect the SMTP server from high load, since most, if not all, SMTP servers out there cannot handle many connections.
An additional goal is to use uniform authentication with IMAP/POP3, so there is no need to configure / implement authentication in your SMTP server. If you aren't happy with using XCLIENT, which is indeed an extension, you can use no authentication at all - this is perfectly standard for SMTP - and allow only connections from nginx. The only downside is that you won't be able to add proper attribution in headers. On the other hand, without XCLIENT you won't be able to add the proper client IP address to headers, so you have to use XCLIENT anyway. Summing up the above: for SMTP you have to use XCLIENT anyway, and proxying authentication is only needed in some specific use cases when using nginx to proxy to external SMTP servers. This is believed to be a relatively rare use case and certainly not the use case nginx SMTP proxying was developed for. [...] -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jul 7 00:47:13 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Jul 2020 03:47:13 +0300 Subject: $ssl_client_escaped_cert does not contain intermediate client certificates In-Reply-To: <15f8f6621c602215ced3b4f3146eab3f.NginxMailingListEnglish@forum.nginx.org> References: <20200706151023.GY12747@mdounin.ru> <15f8f6621c602215ced3b4f3146eab3f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200707004713.GH12747@mdounin.ru> Hello! On Mon, Jul 06, 2020 at 03:55:05PM -0400, everhardt wrote: > Thanks for your reply, Maxim! I'll work out an alternative then. > > Re. session resumption, I read in the OpenSSL docs > (https://www.openssl.org/docs/man1.1.0/man3/SSL_get0_verified_chain.html) > that OpenSSL is willing to store the chain longer than a single request, but > only if the implementing application (nginx) is managing freeing it at the > proper time (eg.
when the session times out): > > If applications wish to use any certificates in the returned chain > indefinitely they must increase the reference counts using X509_up_ref() or > obtain a copy of the whole chain with X509_chain_up_ref(). This quote is about how to use the chain if it is returned. The problem is that the chain is _not_ returned for resumed sessions, and there is no way to obtain it for a resumed session as long as the chain uses intermediate certificates provided by the client. Saving the chain somewhere once session is established may work as a band-aid in some simple cases, but certainly not an option in general for multiple reasons, including the fact that this won't work with TLS session tickets when there is no server-side state. -- Maxim Dounin http://mdounin.ru/ From yichun at openresty.com Tue Jul 7 07:09:49 2020 From: yichun at openresty.com (Yichun Zhang) Date: Tue, 7 Jul 2020 00:09:49 -0700 Subject: [ANN] OpenResty 1.17.8.1 released Message-ID: Hi there, I am happy to announce the new formal release, 1.17.8.1, of our OpenResty web platform based on NGINX and LuaJIT. The full announcement, download links, and change logs can be found below: https://openresty.org/en/ann-1017008001.html OpenResty is a high performance and dynamic web platform based on our enhanced version of Nginx core, our enhanced version of LuaJIT, and many powerful Nginx modules and Lua libraries. See OpenResty's homepage for details: https://openresty.org/ Enjoy! Best, Yichun --- Yichun Zhang is the creator of OpenResty, the founder and CEO of OpenResty Inc. 
From nginx-forum at forum.nginx.org Tue Jul 7 07:18:36 2020 From: nginx-forum at forum.nginx.org (everhardt) Date: Tue, 07 Jul 2020 03:18:36 -0400 Subject: $ssl_client_escaped_cert does not contain intermediate client certificates In-Reply-To: <20200707004713.GH12747@mdounin.ru> References: <20200707004713.GH12747@mdounin.ru> Message-ID: <73814df25e1fca3e42e12f82bfe1aeed.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, I, naively maybe, thought the following would work. At an incoming request, nginx checks whether the session is new or resumed. * new: it retrieves the chain, calls X509_chain_up_ref and stores a mapping from the session ID to the chain pointer * resumed: it retrieves the session ID, looks up the pointer in the mapping and retrieves the chain from the pointer At session timeout, nginx should drop the session ID from the mapping and call X509_free on each certificate in the chain. Best, Rob Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288553,288600#msg-288600 From arut at nginx.com Tue Jul 7 10:08:15 2020 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 7 Jul 2020 13:08:15 +0300 Subject: range_filter_module get duplicated Accept-Ranges response headers In-Reply-To: References: Message-ID: <20200707100815.f5mlkcvofpd2sbmd@Romans-MacBook-Pro.local> Hello, On Mon, Jul 06, 2020 at 06:49:15AM -0400, webber wrote: > Hello, > > Recently, we found that if we use the nginx slice module, and the upstream server is > a static file server, nginx will respond with duplicated > `Accept-Ranges` headers if the client request does not include a Range header.
> > the minimal config example as follow: > > ``` > server { > listen 80; > server_name _; > default_type text/html; > > location /get_file { > slice 256k; > proxy_set_header Range $slice_range; > proxy_pass http://127.0.0.1:8899; > } > } > ``` > > use curl to get 1mb file: > ``` > curl -s http://localhost/get_file/1mb.test -D- -o /dev/null > > HTTP/1.1 200 OK > Date: Mon, 06 Jul 2020 10:32:58 GMT > Content-Type: application/octet-stream > Content-Length: 1048576 > Connection: keep-alive > Last-Modified: Mon, 06 Jul 2020 07:34:23 GMT > Cache-Control: public, max-age=43200 > Expires: Mon, 06 Jul 2020 22:32:58 GMT > ETag: "1594020863.76-1048576-4019326287" > Accept-Ranges: bytes > Accept-Ranges: bytes > ``` > > but if I add range header to curl request, will get expected response. Normally nginx does not send Accept-Ranges with partial responses. The fact that the response is partial is itself an indication that the server supports ranges. If the upstream server is nginx too, there should be no problem with slice. However if the upstream server sends Accept-Ranges with partial responses, the client ends up with the two Accept-Ranges. Attached is a patch that removes the original Accept-Ranges. This brings back the normal nginx behavior - one Accept-Ranges for full response and none for partial. Can you try it and report back if it works for you? > Then I review the ngx_http_range_filter_module, in `goto next_filter`( line > 253) , should we handle NGX_HTTP_OK response? 
like: > > ``` > next_filter: > > if (r->headers_out.status == NGX_HTTP_OK) { > r->headers_out.accept_ranges = NULL; > return ngx_http_next_header_filter(r); > } > > r->headers_out.accept_ranges = ngx_list_push(&r->headers_out.headers); > if (r->headers_out.accept_ranges == NULL) { > return NGX_ERROR; > } > > r->headers_out.accept_ranges->hash = 1; > ngx_str_set(&r->headers_out.accept_ranges->key, "Accept-Ranges"); > ngx_str_set(&r->headers_out.accept_ranges->value, "bytes"); > > return ngx_http_next_header_filter(r); > ``` > > I am confused if it is a bug? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288569,288569#msg-288569 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1594115767 -10800 # Tue Jul 07 12:56:07 2020 +0300 # Node ID 86cf2f78477d6ef8845608a9a36003899308f6e6 # Parent 1ece2ac2555a10f7a09ea2feca2db3149ccb470f Slice filter: clear original Accept-Ranges. The slice filter allows ranges for the response by setting the r->allow_ranges flag, which enables the range filter. If the range was not requested, the range filter adds an Accept-Ranges header to the response to signal the support for ranges. Previously, if an Accept-Ranges header was already present in the first slice response, client received two copies of this header. Now, the slice filters removes the Accept-Ranges header from the response prior to setting the r->allow_ranges flag. 
diff --git a/src/http/modules/ngx_http_slice_filter_module.c b/src/http/modules/ngx_http_slice_filter_module.c --- a/src/http/modules/ngx_http_slice_filter_module.c +++ b/src/http/modules/ngx_http_slice_filter_module.c @@ -180,6 +180,11 @@ ngx_http_slice_header_filter(ngx_http_re r->headers_out.content_range->hash = 0; r->headers_out.content_range = NULL; + if (r->headers_out.accept_ranges) { + r->headers_out.accept_ranges->hash = 0; + r->headers_out.accept_ranges = NULL; + } + r->allow_ranges = 1; r->subrequest_ranges = 1; r->single_range = 1; From raghuvenkat111 at gmail.com Tue Jul 7 13:02:55 2020 From: raghuvenkat111 at gmail.com (raghu venkat) Date: Tue, 7 Jul 2020 18:32:55 +0530 Subject: module to control TLS handshake algorithms Message-ID: HI Is there any module through which i can control algorithms used in cipher suites during TLS handshake. My requirement is like i want to configure my server in such a way that i can specify list of acceptable cipher suites and also the algorithms used in cipher suite. Specifying algorithms for individual aspects like key exchange, authentication, encryption, HKDF would also do. For example consider ECDHE-ECDSA-AES256-GCM-SHA384 cipher suite. 1) for ECDHE specify the curves like secp256r1, secp384r1. 2) for ECDSA also specify the curves like secp256r1, secp384r1 and also SHA digest used like SHA256, SHA384 similarly if RSA is used specify key length like 1024, 2048 and algorithms like RSASSA-PSS, RSASSA-PKCS-v1_5 With openssl configuration i can do some of the stuff but i don't want to use it as it effects other application. Regards Raghu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Tue Jul 7 15:53:15 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 7 Jul 2020 16:53:15 +0100 Subject: Found Nginx 1.19.0 stopped but no idea what happened In-Reply-To: <1eaff7d6a9a0ca051ffeae66b6617519.NginxMailingListEnglish@forum.nginx.org> References: <1eaff7d6a9a0ca051ffeae66b6617519.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200707155315.GK20939@daoine.org> On Mon, Jul 06, 2020 at 04:17:46AM -0400, Evald80 wrote: Hi there, > [root at web ~]# systemctl status nginx > ? nginx.service - nginx - high performance web server > Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor > preset: disabled) What is the history of your nginx install? Where did this /usr/lib/systemd/system/nginx.service come from? This shows: > ExecStop=/bin/sh -c /bin/kill -s TERM $(/bin/cat > /var/run/nginx.pid) (code=exited, status=0/SUCCESS) and TERM is probably not the best signal to say "please close down tidily". https://www.nginx.com/resources/wiki/start/topics/examples/systemd/ uses QUIT. Your (lack of) nginx -V output makes me think that you are running a random third party module that is not exiting cleanly. I don't know if you can see any pattern in your access.log for the request that was handled just before your nginx stopped responding? That might help make a reproduction case, which might in turn help isolate where the problem is. Good luck with it, f -- Francis Daly francis at daoine.org From linux at cmadams.net Tue Jul 7 16:05:41 2020 From: linux at cmadams.net (Chris Adams) Date: Tue, 7 Jul 2020 11:05:41 -0500 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <775671594060843@mail.yandex.ru> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <20200706182057.GA10523@cmadams.net> <775671594060843@mail.yandex.ru> Message-ID: <20200707160541.GB21398@cmadams.net> Once upon a time, Denis Sh. 
said: > So, Chris, you're saying that you successfully run Postfix and Dovecot that rely on SNI in production? No, not postfix - it doesn't support SNI on the server side (and postfix maintainers are not interested in adding support). I do have Dovecot using SNI; at peak it was around 20,000 active users across a couple of dozen independent service provider domains (lower now as we're migrating off that whole setup for other reasons). -- Chris Adams From mdounin at mdounin.ru Tue Jul 7 16:11:02 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Jul 2020 19:11:02 +0300 Subject: nginx-1.19.1 Message-ID: <20200707161102.GN12747@mdounin.ru> Changes with nginx 1.19.1 07 Jul 2020 *) Change: the "lingering_close", "lingering_time", and "lingering_timeout" directives now work when using HTTP/2. *) Change: now extra data sent by a backend are always discarded. *) Change: now after receiving a too short response from a FastCGI server nginx tries to send the available part of the response to the client, and then closes the client connection. *) Change: now after receiving a response with incorrect length from a gRPC backend nginx stops response processing with an error. *) Feature: the "min_free" parameter of the "proxy_cache_path", "fastcgi_cache_path", "scgi_cache_path", and "uwsgi_cache_path" directives. Thanks to Adam Bambuch. *) Bugfix: nginx did not delete unix domain listen sockets during graceful shutdown on the SIGQUIT signal. *) Bugfix: zero length UDP datagrams were not proxied. *) Bugfix: proxying to uwsgi backends using SSL might not work. Thanks to Guanzhong Chen. *) Bugfix: in error handling when using the "ssl_ocsp" directive. *) Bugfix: on XFS and NFS file systems disk cache size might be calculated incorrectly. *) Bugfix: "negative size buf in writer" alerts might appear in logs if a memcached server returned a malformed response. 
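To illustrate the new "min_free" parameter from the changelog above: with min_free set, the cache manager removes entries not only when the cache exceeds max_size, but also when free space on the filesystem holding the cache drops below the given floor. A hypothetical cache definition (paths and sizes made up):

```nginx
proxy_cache_path /var/cache/nginx/one levels=1:2 keys_zone=one:10m
                 max_size=10g min_free=2g inactive=60m use_temp_path=off;
```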
-- Maxim Dounin http://nginx.org/ From svyatoslav.mishyn at gmail.com Tue Jul 7 16:48:50 2020 From: svyatoslav.mishyn at gmail.com (Svyatoslav Mishyn) Date: Tue, 7 Jul 2020 19:48:50 +0300 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <20200707160541.GB21398@cmadams.net> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <20200706182057.GA10523@cmadams.net> <775671594060843@mail.yandex.ru> <20200707160541.GB21398@cmadams.net> Message-ID: <20200707164850.GA265523@asus-b450-plus> (Tue, 07 Jul 11:05) Chris Adams: > No, not postfix - it doesn't support SNI on the server side (and postfix > maintainers are not interested in adding support). FYI, it has SNI support but version should be >= 3.4, see: http://www.postfix.org/postconf.5.html#tls_server_sni_maps -- https://www.juef.space/ From linux at cmadams.net Tue Jul 7 17:39:05 2020 From: linux at cmadams.net (Chris Adams) Date: Tue, 7 Jul 2020 12:39:05 -0500 Subject: SNI support in `mail` context (fixed formatting) In-Reply-To: <20200707164850.GA265523@asus-b450-plus> References: <1919741594053770@mail.yandex.ru> <20200706173150.GC12747@mdounin.ru> <20200706182057.GA10523@cmadams.net> <775671594060843@mail.yandex.ru> <20200707160541.GB21398@cmadams.net> <20200707164850.GA265523@asus-b450-plus> Message-ID: <20200707173905.GC21398@cmadams.net> Once upon a time, Svyatoslav Mishyn said: > (Tue, 07 Jul 11:05) Chris Adams: > > No, not postfix - it doesn't support SNI on the server side (and postfix > > maintainers are not interested in adding support). > > FYI, it has SNI support but version should be >= 3.4, see: > http://www.postfix.org/postconf.5.html#tls_server_sni_maps Thanks for the correction - I wasn't aware they added it. Hmm, I usually run CentOS and the OS-provided postfix, which is only up to 3.3.1 on CentOS 8.2... still, something to keep in mind. 
-- Chris Adams From xeioex at nginx.com Tue Jul 7 19:26:31 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 7 Jul 2020 22:26:31 +0300 Subject: njs-0.4.2 Message-ID: <2bcb28c8-6f30-bf6b-9ffe-db6b73f0af27@nginx.com> Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript specification. You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs - Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: http://nginx.org/en/docs/njs/typescript.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.4.2 07 Jul 2020 Core: *) Feature: added RegExp.prototype[Symbol.replace]. *) Feature: introduced line level backtrace. *) Feature: added %TypedArray%.prototype.sort(). *) Feature: extended "fs" module. Added mkdir(), readdir(), rmdir() and friends. Thanks to Artem S. Povalyukhin. *) Improvement: parser refactoring. *) Bugfix: fixed TypedScript API description for HTTP headers. *) Bugfix: fixed TypedScript API description for NjsByteString type. *) Bugfix: fixed String.prototype.repeat() according to the specification. *) Bugfix: fixed parsing of flags for regexp literals. *) Bugfix: fixed index generation for global objects in generator. *) Bugfix: fixed String.prototype.replace() according to the specification. *) Bugfix: fixed %TypedArray%.prototype.copyWithin() with nonzero byte offset. *) Bugfix: fixed Array.prototype.splice() for sparse arrays. *) Bugfix: fixed Array.prototype.reverse() for sparse arrays. *) Bugfix: fixed Array.prototype.sort() for sparse arrays. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Jul 8 04:14:06 2020 From: nginx-forum at forum.nginx.org (webber) Date: Wed, 08 Jul 2020 00:14:06 -0400 Subject: range_filter_module get duplicated Accept-Ranges response headers In-Reply-To: <20200707100815.f5mlkcvofpd2sbmd@Romans-MacBook-Pro.local> References: <20200707100815.f5mlkcvofpd2sbmd@Romans-MacBook-Pro.local> Message-ID: <87f3fcfe34b309a3c9ffb903e065bf12.NginxMailingListEnglish@forum.nginx.org> Hi, Thanks for your reply. I have tried the patch that removes the original Accept-Ranges in the slice filter module, but it does not work as I expected, because I think the `Accept-Ranges` response header should be added if the client sends a range request. Actually, in my production environment, my upstream server is Apache TrafficServer, and according to RFC 7233, section 2.3: Accept-Ranges (https://tools.ietf.org/html/rfc7233#section-2.3), I think the original Accept-Ranges header should not be removed in all cases. I changed your patch as follows, so that the original Accept-Ranges header is removed only for non-range requests, which works for me. Could you please review that?
diff --git a/src/http/modules/ngx_http_slice_filter_module.c b/src/http/modules/ngx_http_slice_filter_module.c index c1edbca2..570deaa5 100644 --- a/src/http/modules/ngx_http_slice_filter_module.c +++ b/src/http/modules/ngx_http_slice_filter_module.c @@ -180,6 +180,11 @@ ngx_http_slice_header_filter(ngx_http_request_t *r) r->headers_out.content_range->hash = 0; r->headers_out.content_range = NULL; + if (!r->headers_in.range) { + r->headers_out.accept_ranges->hash = 0; + r->headers_out.accept_ranges = NULL; + } + r->allow_ranges = 1; r->subrequest_ranges = 1; r->single_range = 1; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288569,288627#msg-288627 From nginx-forum at forum.nginx.org Wed Jul 8 07:52:14 2020 From: nginx-forum at forum.nginx.org (latha) Date: Wed, 08 Jul 2020 03:52:14 -0400 Subject: X-Accel-Redirect not redirecting to named locaiton In-Reply-To: <20200701080519.GD20939@daoine.org> References: <20200701080519.GD20939@daoine.org> Message-ID: <2bc5ca09643cd9c1c53e16140f288955.NginxMailingListEnglish@forum.nginx.org> Thank you Francis Daly. Jumping to named location worked in my env also. I have another issue now: I want to pass the response body of http://test.svc.cluster.local:9080/v2/test/* apis to named location `acreate` which does proxy_pass to another micro-service (which takes the response of /v2/test* api as payload). I learnt that `body_filter_by_lua` does not work with internal redirects, are there any other options to do this? 
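For readers following the thread, a minimal sketch of the X-Accel-Redirect pattern that ended up working, with names taken from the original post. The upstream must answer with an "X-Accel-Redirect: @acreate" response header for nginx to perform the internal jump:

```nginx
location @acreate {
    # reached only via the internal redirect triggered by X-Accel-Redirect
    return 200 "testing";
}

location /v2/test/ {
    # nginx intercepts an X-Accel-Redirect header in the upstream response
    # and restarts request processing at the named location
    proxy_pass http://test.svc.cluster.local:9080/;
}
```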
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288509,288626#msg-288626 From francis at daoine.org Wed Jul 8 07:54:48 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 8 Jul 2020 08:54:48 +0100 Subject: Force SSL redirection to target service host for all protocols In-Reply-To: <9111fd05796c34847a99cec4213fe9a6.NginxMailingListEnglish@forum.nginx.org> References: <9111fd05796c34847a99cec4213fe9a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200708075448.GL20939@daoine.org> On Fri, Jul 03, 2020 at 09:12:56AM -0400, siva.pannier wrote: Hi there, > I want all my client applications make call to the service host via proxy. > And the hosted services are TLSv1.2 enabled. Clients are not in a position > to upgrade. Hence I want to enforce the SSL encryption when the call > routed/redirected to the target from proxy. I may be misunderstanding the terminology, but I think your scenario is that your clients speak their protocol over a "normal" (non-encrypted) network connection; and your (upstream) servers allow the protocol both directly over a "normal" connection, or over an SSL-wrapped connection. And you want your clients to talk to nginx without encryption, and for nginx to talk to upstream with encryption. If nginx does not already have a dedicated module for the protocol you care about, then possibly the "stream" module with "proxy_ssl" will work for you. http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html That *does* depend on the nature of the protocol, of course -- if the protocol does not easily allow proxying, then it is not going to easily work through the nginx stream proxy. > I have seen few blogs that talks about HTTP to HTTPS redirection. I want to > do that for all protocols like TCPS, UDPS(DTLS), SMTPS, IIOPS. > > Can you please share your suggestions on this?
If my protocol writes IP addresses or ports within the content payload, then a "blind" traffic-forwarder (as "stream" mostly is) will probably not be able to reliably proxy things that use my protocol. For the specific protocols you care about: can they be proxied? I suspect that the list will be interested in the results of your testing, if you are willing to share them. Thanks, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Jul 8 08:14:39 2020 From: nginx-forum at forum.nginx.org (webber) Date: Wed, 08 Jul 2020 04:14:39 -0400 Subject: range_filter_module get duplicated Accept-Ranges response headers In-Reply-To: <20200707100815.f5mlkcvofpd2sbmd@Romans-MacBook-Pro.local> References: <20200707100815.f5mlkcvofpd2sbmd@Romans-MacBook-Pro.local> Message-ID: Hi, Sorry to bother you, I have reworked the patch as follows: Previously, if an Accept-Ranges header was already present in the first slice response, the client received two copies of this header. Now, the slice filter removes an Accept-Ranges header that came from the original server, but only for non-range requests, prior to setting the r->allow_ranges flag.
diff --git a/src/http/modules/ngx_http_slice_filter_module.c b/src/http/modules/ngx_http_slice_filter_module.c index c1edbca2..9a215e43 100644 --- a/src/http/modules/ngx_http_slice_filter_module.c +++ b/src/http/modules/ngx_http_slice_filter_module.c @@ -180,6 +180,11 @@ ngx_http_slice_header_filter(ngx_http_request_t *r) r->headers_out.content_range->hash = 0; r->headers_out.content_range = NULL; + if (!r->headers_in.range && r->headers_out.accept_ranges) { + r->headers_out.accept_ranges->hash = 0; + r->headers_out.accept_ranges = NULL; + } + r->allow_ranges = 1; r->subrequest_ranges = 1; r->single_range = 1; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288569,288629#msg-288629 From arut at nginx.com Wed Jul 8 08:41:50 2020 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 8 Jul 2020 11:41:50 +0300 Subject: range_filter_module get duplicated Accept-Ranges response headers In-Reply-To: <87f3fcfe34b309a3c9ffb903e065bf12.NginxMailingListEnglish@forum.nginx.org> References: <20200707100815.f5mlkcvofpd2sbmd@Romans-MacBook-Pro.local> <87f3fcfe34b309a3c9ffb903e065bf12.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0C8C3602-6FD1-450D-9A03-9D93475196FC@nginx.com> Hi, > On 8 Jul 2020, at 07:14, webber wrote: > > Hi, > > Thanks for your reply. I have tried the patch that removes the original > Accept-Ranges in slice filter module, but it is not work as my expected. > Because I think response header `Accept-Ranges` should be added if client > send a range request. > > Actually, in my production environment, my upstream server is Apache > TrafficServer, and according to RFC 7233, section 2.3: > Accept-Ranges(https://tools.ietf.org/html/rfc7233#section-2.3) , I think > original Accept-Ranges header should not be removed in all case. I changed > your patch as follow, original Accept-Ranges header will be removed just > for no-range request , that will work for me, could you please review that? 
RFC 7233 does not require a server to send Accept-Ranges at all, and certainly does not require it to send the header with a 206 response. Indeed Apache does send Accept-Ranges both with 200 and 206, but nginx only sends it with 200. This is how nginx works and I don't think changing this only for the slice module is a good idea. By the way, the example of a 206 response in RFC 7233 does not have the Accept-Ranges too: https://tools.ietf.org/html/rfc7233#section-4.1 . The rule nginx follows is this. Accept-Ranges should be sent by a party that does ranges and is a way to signal the support for ranges. If ranges are done by the nginx range filter, it should add the header. The Accept-Ranges that came from the upstream server is unrelated to what the range filter is doing even if it's exactly the same header. That's why the existing Accept-Ranges header should be removed anyway. Another question is whether the range filter should add a new header for both 200 and 206 or only for 200. The way it currently works for any response (not only a slice response) is to add Accept-Ranges only for 200. Whether this should be changed nginx-wise is a totally different question and I personally don't think it makes sense.
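For context, the slice filter being discussed is typically enabled along these lines (a minimal sketch; the cache zone and upstream address are hypothetical):

```nginx
location / {
    slice              1m;                  # fetch the resource in 1m sub-ranges
    proxy_set_header   Range $slice_range;  # per-slice Range header sent upstream
    proxy_cache        one;                 # hypothetical cache zone
    proxy_cache_key    $uri$is_args$args$slice_range;
    proxy_cache_valid  200 206 1h;
    proxy_pass         http://127.0.0.1:8080;   # hypothetical upstream
}
```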
> diff --git a/src/http/modules/ngx_http_slice_filter_module.c > b/src/http/modules/ngx_http_slice_filter_module.c > index c1edbca2..570deaa5 100644 > --- a/src/http/modules/ngx_http_slice_filter_module.c > +++ b/src/http/modules/ngx_http_slice_filter_module.c > @@ -180,6 +180,11 @@ ngx_http_slice_header_filter(ngx_http_request_t *r) > r->headers_out.content_range->hash = 0; > r->headers_out.content_range = NULL; > > + if (!r->headers_in.range) { > + r->headers_out.accept_ranges->hash = 0; > + r->headers_out.accept_ranges = NULL; > + } > + > r->allow_ranges = 1; > r->subrequest_ranges = 1; > r->single_range = 1; > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288569,288627#msg-288627 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 8 10:10:14 2020 From: nginx-forum at forum.nginx.org (Evald80) Date: Wed, 08 Jul 2020 06:10:14 -0400 Subject: Found Nginx 1.19.0 stopped but no idea what happened In-Reply-To: <20200707155315.GK20939@daoine.org> References: <20200707155315.GK20939@daoine.org> Message-ID: <98cab1da648f0091e18428a845040ecd.NginxMailingListEnglish@forum.nginx.org> Hi, this is a vm created from the cloud provider. Than i just added the repo of nginx, the official one. Than did regular upgrades. Today also i run the yum update and got the 1.19.1 version of nginx. 
[root at web ~]# nginx -V nginx version: nginx/1.19.1 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288511,288631#msg-288631 From nginx-forum at forum.nginx.org Wed Jul 8 11:29:35 2020 From: nginx-forum at forum.nginx.org (webber) Date: Wed, 08 Jul 2020 07:29:35 -0400 Subject: range_filter_module get duplicated Accept-Ranges response headers In-Reply-To: <0C8C3602-6FD1-450D-9A03-9D93475196FC@nginx.com> References: <0C8C3602-6FD1-450D-9A03-9D93475196FC@nginx.com> Message-ID: Hi, OK, I 
see. Now I agree that the Accept-Ranges header should be removed anyway in the slice module; is there any plan to fix that? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288569,288632#msg-288632 From francis at daoine.org Thu Jul 9 17:03:35 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Jul 2020 18:03:35 +0100 Subject: Found Nginx 1.19.0 stopped but no idea what happened In-Reply-To: <98cab1da648f0091e18428a845040ecd.NginxMailingListEnglish@forum.nginx.org> References: <20200707155315.GK20939@daoine.org> <98cab1da648f0091e18428a845040ecd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200709170335.GM20939@daoine.org> On Wed, Jul 08, 2020 at 06:10:14AM -0400, Evald80 wrote: Hi there, > this is a vm created from the cloud provider. Than i just added the repo of > nginx, the official one. > Than did regular upgrades. Today also i run the yum update and got the > 1.19.1 version of nginx. Thanks for the information. Curious -- the "ExecStop" command in that systemd.service file does use TERM ("Quick shutdown") where the other suggested one uses QUIT ("Graceful shutdown"). Either way -- that presumably is not the cause of the failure you see. You showed that you load four modules. Can you see where those modules came from? http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/ does include a nginx-module-geoip-1.19.1-1.el7.ngx.x86_64.rpm but I don't know if that matches the one you are using. Might it be that the combination of modules you load has some incompatibility with the running nginx? I'm afraid I don't have a better suggestion than "the usual" of trying to find a reproduction recipe, and then turning off bits to see what causes the failure to go away. Did you fetch pre-built module.so files, or did you build them yourself? Good luck with the investigations! (Presuming that the update to 1.19.1 didn't make it all just start working.)
f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Jul 9 18:42:52 2020 From: nginx-forum at forum.nginx.org (harsh) Date: Thu, 09 Jul 2020 14:42:52 -0400 Subject: Nginx not retrying failed UDP messaged Message-ID: Hi, We are using NGINX as Load Balancer for load balancing RADIUS UDP traffic. It seems NGINX is not retrying to send messages to another upstream server if one of the upstream servers is down. We are using NGINX 1.16.1. But the same behaviour exists in all NGINX version upto NGINX 1.19. In older NGINX version (1.13.10/1.14.1) this retry is working fine without any issues. Following is our NGINX configuration - server { listen 1813 udp reuseport; proxy_pass udp_radius; proxy_connect_timeout 60s; proxy_timeout 5s; proxy_responses 1; proxy_requests 1; proxy_buffer_size 64k; #access_log /var/log/nginx/radius.log upstreamlog buffer=64k flush=1m; access_log off; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288649,288649#msg-288649 From nginx-forum at forum.nginx.org Fri Jul 10 13:26:09 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Fri, 10 Jul 2020 09:26:09 -0400 Subject: SSL over UDP - Nginx as reverse proxy In-Reply-To: <20200705220802.GJ20939@daoine.org> References: <20200705220802.GJ20939@daoine.org> Message-ID: <521f845128c32afe4280f883c5bca851.NginxMailingListEnglish@forum.nginx.org> Thanks Francis.. I tried that DTLS patch on the version 1.15. It worked. It supported both the SSL & UDP directive on the same stream. I could do the SSL termination on Nginx with the Bouncy Castle Java API.. They should add it in the latest versions of Nginx as well. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288543,288659#msg-288659 From vl at nginx.com Fri Jul 10 14:38:58 2020 From: vl at nginx.com (Vladimir Homutov) Date: Fri, 10 Jul 2020 17:38:58 +0300 Subject: Nginx not retrying failed UDP message In-Reply-To: References: Message-ID: <20200710143858.GB59881@vlcx> On Thu, Jul 09, 2020 at 02:42:52PM -0400, harsh wrote: > Hi, > > We are using NGINX as Load Balancer for load balancing RADIUS UDP traffic. > > It seems NGINX is not retrying to send messages to another upstream server > if one of the upstream servers is down. > > We are using NGINX 1.16.1. But the same behaviour exists in all NGINX > version upto NGINX 1.19. > > In older NGINX version (1.13.10/1.14.1) this retry is working fine without > any issues. > > Following is our NGINX configuration - > > server { > listen 1813 udp reuseport; > > proxy_pass udp_radius; > proxy_connect_timeout 60s; > proxy_timeout 5s; > proxy_responses 1; > proxy_requests 1; > > proxy_buffer_size 64k; > > #access_log /var/log/nginx/radius.log upstreamlog buffer=64k > flush=1m; > access_log off; > } Can you please show full configuration? Is 'udp_radius' a hostname or an upstream{} block ? Regarding 'retrying to send message': since UDP is non-reliable, nginx will retry with another server only if it will get error immediately during send() call (not very probable). nginx sends packet, no error immediately; later it can get either icmp-caused error on read or write, or timeout can fire, and the upstream will be marked as failed. So, no new packets will be sent to it. Until upstream is not marked dead, new packets can be sent to it. Probably, you see effects of this change: Changes with nginx 1.15.0 05 Jun 2018 *) Feature: now the stream module can handle multiple incoming UDP datagrams from a client within a single session. 
You may want to look at the debug log to see what exactly is happening - there is information about upstream servers marked alive or not, and clients' requests and responses. From nginx-forum at forum.nginx.org Fri Jul 10 14:49:18 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Fri, 10 Jul 2020 10:49:18 -0400 Subject: Force SSL redirection to target service host for all protocols In-Reply-To: <20200708075448.GL20939@daoine.org> References: <20200708075448.GL20939@daoine.org> Message-ID: <9fb1e8f33456d39ed4b848a20e2b040c.NginxMailingListEnglish@forum.nginx.org> Hi.. > And you want your clients to talk to nginx without encryption, and for > nginx to talk to upstream with encryption. Yup, this is what I am trying to achieve. Started testing on these scenarios. Will keep you all posted on the results. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288541,288663#msg-288663 From gfrankliu at gmail.com Fri Jul 10 16:40:52 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 10 Jul 2020 09:40:52 -0700 Subject: proxy module handling early responses In-Reply-To: <20191218193729.GY12894@mdounin.ru> References: <20191218152150.GW12894@mdounin.ru> <20191218193729.GY12894@mdounin.ru> Message-ID: Hi, If you read the same RFC, section 6.5, right before the section you mentioned, you can see: A client sending a message body SHOULD monitor the network connection for an error response while it is transmitting the request. If the client sees a response that indicates the server does not wish to receive the message body and is closing the connection, the client SHOULD immediately cease transmitting the body and close its side of the connection. In this case, the server sent HTTP/413 (along with Connection: close) to indicate it did not wish to receive the message body. Does nginx immediately cease transmitting the body and close its side of the connection? Thanks! Frank On Wed, Dec 18, 2019 at 11:37 AM Maxim Dounin wrote: > Hello!
> > On Wed, Dec 18, 2019 at 10:09:56AM -0800, Frank Liu wrote: > > > Our upstream returns HTTP/413 along with "Connection: close" in the > header, > > then closes the socket. It seems nginx catches the socket close in the > > middle of sending the large payload. This triggers additional 502 and > > client gets both 413 and 502 from nginx. > > Your upstream server's behaviour is incorrect: it have to continue > reading data in the socket buffers and in transit (usually this is > called "lingering close", see http://nginx.org/r/lingering_close), > or nginx simply won't get the response. The client will get > simple and quite reasonable 502 in such a situation (not "413 and > 502"). > > This problem is explicitly documented in RFC 7230, "6.6. > Tear-down" (https://tools.ietf.org/html/rfc7230#section-6.6): > > If a server performs an immediate close of a TCP connection, there is > a significant risk that the client will not be able to read the last > HTTP response. If the server receives additional data from the > client on a fully closed connection, such as another request that was > sent by the client before receiving the server's response, the > server's TCP stack will send a reset packet to the client; > unfortunately, the reset packet might erase the client's > unacknowledged input buffers before they can be read and interpreted > by the client's HTTP parser. > > To avoid the TCP reset problem, servers typically close a connection > in stages. First, the server performs a half-close by closing only > the write side of the read/write connection. The server then > continues to read from the connection until it receives a > corresponding close by the client, or until the server is reasonably > certain that its own TCP stack has received the client's > acknowledgement of the packet(s) containing the server's last > response. Finally, the server fully closes the connection. 
> If the upstream server fails to do connection teardown properly, > the only option is to fix the upstream server: it should either > implement proper connection teardown, or avoid returning > responses without reading the request body first. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Jul 12 13:43:10 2020 From: nginx-forum at forum.nginx.org (dorafmon) Date: Sun, 12 Jul 2020 09:43:10 -0400 Subject: Multiple SSL web sites with nginx Message-ID: <517afa2f183b01912101507bbf3e644d.NginxMailingListEnglish@forum.nginx.org> I am trying to host multiple web apps on the same machine and they are all SSL enabled. I am trying to put an Nginx server in front of them to redirect incoming requests to different ports. Here is the configuration I have for this purpose: ``` server { listen 443 ssl; server_name domain1.com; ssl_certificate /etc/nginx/sslcerts/domain1/certificate.crt; ssl_certificate_key /etc/nginx/sslcerts/domain1/private.key; # ssl_session_cache builtin:1000 shared:SSL:10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; # ssl_prefer_server_ciphers on; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Fix the "It appears that your reverse proxy set up is broken" error. proxy_pass https://localhost:4444; proxy_read_timeout 90; proxy_redirect https://localhost:4444 https://domain1.com; } } server { listen 443 ssl; server_name api.domain2.com; ssl_certificate /etc/nginx/sslcerts/domain2/certificate.crt; ssl_certificate_key /etc/nginx/sslcerts/domain2/private.key; # ssl_session_cache builtin:1000 shared:SSL:10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; # ssl_prefer_server_ciphers on; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Fix the "It appears that your reverse proxy set up is broken" error. proxy_pass https://localhost:9999; proxy_read_timeout 90; proxy_redirect https://localhost:9999 https://tomlapi.domain2.com; } } ``` However, with this configuration it seems when I try to hit `https://api.domain2.com` then I am still redirected to `https://domain1.com`. I am just wondering what is wrong with my config? Previously I had used similar configs for non-SSL web apps for the same purpose and it worked. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288666,288666#msg-288666 From bee.lists at gmail.com Sun Jul 12 16:43:42 2020 From: bee.lists at gmail.com (Bee.Lists) Date: Sun, 12 Jul 2020 12:43:42 -0400 Subject: Multiple SSL web sites with nginx In-Reply-To: <517afa2f183b01912101507bbf3e644d.NginxMailingListEnglish@forum.nginx.org> References: <517afa2f183b01912101507bbf3e644d.NginxMailingListEnglish@forum.nginx.org> Message-ID: > On Jul 12, 2020, at 9:43 AM, dorafmon wrote: > > I am trying to host multiple web apps on the same machine and they are all > SSL enabled. I am trying to put an Nginx server in front of them to redirect > incoming requests to different ports. The domain carried forward is what nginx uses to decipher what vhost to return.
Also, both of those domains are port 443, so it will go to the first/default domain. Cheers, Bee From hobson42 at gmail.com Sun Jul 12 19:08:08 2020 From: hobson42 at gmail.com (Ian Hobson) Date: Sun, 12 Jul 2020 20:08:08 +0100 Subject: Multiple SSL web sites with nginx In-Reply-To: References: <517afa2f183b01912101507bbf3e644d.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 12/07/2020 17:43, Bee.Lists wrote: > >> On Jul 12, 2020, at 9:43 AM, dorafmon wrote: >> >> I am trying to host multiple web apps on the same machine and they are all >> SSL enabled. I am trying to put an Nginx server in front of them to redirect >> incoming requests to different ports. > > The domain carried forward is what nginx uses to decipher what vhost to return. Also, both of those domains are port 443, so it will go to the first/default domain. > This is not correct, see https://nginx.org/en/docs/http/ngx_http_core_module.html#server where it says Syntax: server { ... } Default: — Context: http Sets configuration for a virtual server. There is no clear separation between IP-based (based on the IP address) and name-based (based on the "Host" request header field) virtual servers. Instead, the listen directives describe all addresses and ports that should accept connections for the server, and the server_name directive lists all server names. So the ports are defined in the listen directive, and the server names in the server_name directive. Your approach of multiple https servers works fine on my kit with the approach you have taken. Suggest there may be a typo in your configuration - try sudo nginx -t to prove both servers are loaded. Regards Ian -- Ian Hobson Tel (+351) 910 418 473 -- This email has been checked for viruses by AVG.
https://www.avg.com From nginx-forum at forum.nginx.org Sun Jul 12 19:33:11 2020 From: nginx-forum at forum.nginx.org (jeffgx619) Date: Sun, 12 Jul 2020 15:33:11 -0400 Subject: Nginx customized authorization Message-ID: <4a550333ca25d2d5ca2b43f52402f9bf.NginxMailingListEnglish@forum.nginx.org> Hello, Nginx experts: I wonder if the nginx can support customized authorization. I am deploying Nginx on K8S, and would like to use nginx as proxy server for authentication and authorization. For authorization part, I want to check whether the user belongs to a specific namespace or not. Could you give me some suggestions on how to do it, please? Thank you. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288669,288669#msg-288669 From bee.lists at gmail.com Mon Jul 13 00:41:22 2020 From: bee.lists at gmail.com (Bee.Lists) Date: Sun, 12 Jul 2020 20:41:22 -0400 Subject: Multiple SSL web sites with nginx In-Reply-To: References: <517afa2f183b01912101507bbf3e644d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4EDF0416-D687-4E3B-8FCA-B9AD0E2B3D34@gmail.com> > On Jul 12, 2020, at 3:08 PM, Ian Hobson wrote: > > This is not correct, see https://nginx.org/en/docs/http/ngx_http_core_module.html#server where it says > > Syntax: server { ... } > Default: ? > Context: http > Sets configuration for a virtual server. There is no clear separation between IP-based (based on the IP address) and name-based (based on the ?Host? request header field) virtual servers. Instead, the listen directives describe all addresses and ports that should accept connections for the server, and the server_name directive lists all server names. > > So the ports are defined in the listen directive, and the server names in the server_name directive. > > Your approach of multiple https servers works fine on my kit with the approach you have taken. > > Suggest there may be a typo in your configuration - try > > sudo nginx -t > > to prove both servers are loaded. They are both on the same server. 
Same IP. With the same port number, there's nothing deciphering between the two other than server_name. Hence using server_name as the forward. I don't even know how one could use port number as the request. If you look at the example he posted, there was no default. Cheers, Bee From nginx-forum at forum.nginx.org Mon Jul 13 07:19:08 2020 From: nginx-forum at forum.nginx.org (Evald80) Date: Mon, 13 Jul 2020 03:19:08 -0400 Subject: Found Nginx 1.19.0 stopped but no idea what happened In-Reply-To: <20200709170335.GM20939@daoine.org> References: <20200709170335.GM20939@daoine.org> Message-ID: <76d7ae8b3ae9ce05e22e3f4c7415b7ad.NginxMailingListEnglish@forum.nginx.org> Hi, The issue appeared again. I checked the memory and it is the same as previously. Something (within the nginx process) is eating the whole memory on the server, or nginx itself (who knows). In order to troubleshoot, the strategy will be to remove all the modules and leave only ModSec and wait for at least one week. I saw that in less than a week, usually it appears... ModSec module is built by me and it is the latest version on git. 
#load_module modules/ngx_http_brotli_filter_module.so; #load_module modules/ngx_http_brotli_static_module.so; load_module modules/ngx_http_modsecurity_module.so; #load_module modules/ngx_http_geoip2_module.so; [root at web ~]# nginx -V nginx version: nginx/1.19.1 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' [root at web ~]# Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288511,288672#msg-288672 From branimir.lyubenov at sap.com Mon Jul 13 13:59:18 2020 From: branimir.lyubenov at sap.com (Lyubenov, Branimir) Date: Mon, 13 Jul 2020 
13:59:18 +0000 Subject: $ssl_client_escaped_cert forward client cert in URL encoded PEM only Message-ID: Hello team, We use nginx as reverse proxy to a upstream endpoint which requires a client cert authentication. The proxy is configured to request a certificate from the browser and then to set a header in the proxy location block like: proxy_set_header SSL_CLIENT_CERT $ssl_client_escaped_cert; The upstream server supports PEM with some restrictions: 1. Newlines should be replaced by space (or any whitespace) 2. Or alternatively only the base64 content in one row (no blanks) without BEGIN.. END CERTIFICATE sections Manipulating the upstream server is not an option. It is not another nginx instance. Is it possible to rewrite the header before it is sent by replacing all %0d and %0a with space and URL decoding a few other characters like %2f? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jul 13 14:09:07 2020 From: nginx-forum at forum.nginx.org (redflag) Date: Mon, 13 Jul 2020 10:09:07 -0400 Subject: ngx_http_catch_body_filter doesn't appear to work In-Reply-To: <1105b9e2345a8fb91ee07a8677a23c96.NginxMailingListEnglish@forum.nginx.org> References: <1105b9e2345a8fb91ee07a8677a23c96.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5f56a71b5276603ed6f1f894076c9dda.NginxMailingListEnglish@forum.nginx.org> i still have the same issue, could you assist me please, Thank you in advance Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285923,288678#msg-288678 From mdounin at mdounin.ru Mon Jul 13 15:47:25 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Jul 2020 18:47:25 +0300 Subject: proxy module handling early responses In-Reply-To: References: <20191218152150.GW12894@mdounin.ru> <20191218193729.GY12894@mdounin.ru> Message-ID: <20200713154725.GX12747@mdounin.ru> Hello! 
On Fri, Jul 10, 2020 at 09:40:52AM -0700, Frank Liu wrote: > If you read the same RFC, section 6.5, right before the section you > mentioned, you can see: > > A client sending a message body SHOULD monitor the network connection > for an error response while it is transmitting the request. If the > client sees a response that indicates the server does not wish to > receive the message body and is closing the connection, the client > SHOULD immediately cease transmitting the body and close its side of > the connection. > > In this case, server sent HTTP/413 (along with Connection: close) to > indicate it did not wish to receive the message body. Does nginx > immediately cease transmitting the body and close its side of the > connection? It does. But "immediately" from nginx point of view can easily mean "way too late" from TCP stack point of view, resulting in nginx not being able to get the response at all. To re-iterate: if the upstream server fails to do connection teardown properly, the only option is to fix the upstream server. This is not something which can be solved on nginx side. Everything which can be done on nginx side is believed to be already implemented, including sending to the client partially obtained responses with all the bytes nginx was able to read from the socket (if nginx was able to read at least the response headers). -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Jul 13 18:57:34 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Mon, 13 Jul 2020 14:57:34 -0400 Subject: Force SSL redirection to target service host for all protocols In-Reply-To: <20200708075448.GL20939@daoine.org> References: <20200708075448.GL20939@daoine.org> Message-ID: <50d16cb4bbf3d72199808fc06bd292ed.NginxMailingListEnglish@forum.nginx.org> Hi there, I have tried doing TCP redirection to a backend TCP server with SSL enabled following the below URL. 
https://docs.nginx.com/nginx/admin-guide/security-controls/securing-tcp-traffic-upstream/ My TCP (non-ssl) client is able to hit the TCP Server (SSL enabled) via the Nginx (proxy_ssl) but buffered reader gets back only 'null' Client code: ########## Socket socket = new Socket(hostname, port); InputStream input = socket.getInputStream(); BufferedReader reader = new BufferedReader(new InputStreamReader(input)); String time = reader.readLine(); //returns only null System.out.println(time); Server code: ######### ServerSocketFactory ssf = SSLServerSocketFactory.getDefault(); int port = 8091; ServerSocket ss = ssf.createServerSocket(port); while (true) { Socket sock = ss.accept(); try { System.out.println("New client connected"); //BufferedReader br = new BufferedReader(new InputStreamReader(sock.getInputStream())); //String data = br.readLine(); PrintWriter pw = new PrintWriter(sock.getOutputStream()); pw.println(new Date().toString() + " from port: "+port); pw.flush(); pw.close(); sock.close(); .... .... Nginx Conf: ############ stream { upstream backend { server backend1.example.com:12345; } server { listen 8091; proxy_pass backend; proxy_ssl on; proxy_ssl_certificate /etc/ssl/certs/backend.crt; proxy_ssl_certificate_key /etc/ssl/certs/backend.key; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_ciphers HIGH:!aNULL:!MD5; proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; } } can somebody please suggest what is wrong with the above configuration? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288541,288680#msg-288680 From nginx-forum at forum.nginx.org Tue Jul 14 06:20:54 2020 From: nginx-forum at forum.nginx.org (redflag) Date: Tue, 14 Jul 2020 02:20:54 -0400 Subject: Help with filter module Message-ID: <86489412edde1b1f7896a0328fdf57a6.NginxMailingListEnglish@forum.nginx.org> Greetings all, i am writing a new Nginx filter module , the main issue is that when i try to read the request body it turn out to be null, is there a way to READ THE BODY?? i found an already existing module that could read the request body filter but it does not appear to be working, this is the module: [ https://www.nginx.com/resources/wiki/extending/examples/body_filter/ ] but it is not working I would really appreciate any type of help. Thank u Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288682,288682#msg-288682 From nginx-forum at forum.nginx.org Tue Jul 14 08:25:46 2020 From: nginx-forum at forum.nginx.org (salmanpskply) Date: Tue, 14 Jul 2020 04:25:46 -0400 Subject: How to get response body when using proxy_intercept_errors Message-ID: <7290c04592139199ff2f8e4792e26f56.NginxMailingListEnglish@forum.nginx.org> I have a requirement where when proxied response error code is 401 (condition #1) and response body has text "SAML Token Expired" (condition #2), I need to intercept it and redirect to refresh token API. I could do it partially where only error code 401 is considered. Not able to find a way to read response body text to give condition #2: "location /service { proxy_intercept_errors on; error_page 401 = @refresh; } location @refresh { (# Here check if the response body has text "SAML Token Expired". If yes return 401 directly.) set $original_uri $scheme://$http_host$request_uri; return 307 https://localhost:8083/service/auth/refresh?uri=$original_uri; }" Please help me on getting the response body text. Thanks in advance! 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288683,288683#msg-288683 From nginx-forum at forum.nginx.org Tue Jul 14 12:28:35 2020 From: nginx-forum at forum.nginx.org (CarstenK.) Date: Tue, 14 Jul 2020 08:28:35 -0400 Subject: nginx plus - purge multiple URLs in one request Message-ID: <25fa3f7a3e23427ff76f97b6437b43ca.NginxMailingListEnglish@forum.nginx.org> Hello, we use nginx plus for caching and purge URLs automatic by our Shopsystem. In some times we have a lot of URLs to purge and need for every (sometimes wildcard purge) an extra request. Is it possible to purge a list of urls in one request? regards Carsten Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288687,288687#msg-288687 From mdounin at mdounin.ru Tue Jul 14 13:13:37 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Jul 2020 16:13:37 +0300 Subject: How to get response body when using proxy_intercept_errors In-Reply-To: <7290c04592139199ff2f8e4792e26f56.NginxMailingListEnglish@forum.nginx.org> References: <7290c04592139199ff2f8e4792e26f56.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200714131337.GZ12747@mdounin.ru> Hello! On Tue, Jul 14, 2020 at 04:25:46AM -0400, salmanpskply wrote: > I have a requirement where when proxied response error code is 401 > (condition #1) and response body has text "SAML Token Expired" (condition > #2), I need to intercept it and redirect to refresh token API. > > I could do it partially where only error code 401 is considered. Not able to > find a way to read response body text to give condition #2: There is no way. Errors are intercepted based on the response headers, and the response body is not read from the upstream server if the error is intercepted. If you need to analyze the response body, consider using njs instead. 
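A very rough sketch of what that njs approach could look like (the location names and upstream are hypothetical, and the exact njs API should be verified against the documentation referenced below):

```nginx
# nginx.conf fragment (assumes ngx_http_js_module is loaded)
js_import refresh from conf/refresh.js;

location /service {
    js_content refresh.intercept;
}

location /upstream {
    internal;
    proxy_pass https://backend;   # hypothetical upstream
}
```

```javascript
// conf/refresh.js (sketch)
function intercept(r) {
    r.subrequest('/upstream', { method: r.method })
        .then(function (reply) {
            if (reply.status === 401
                && reply.responseBody.indexOf('SAML Token Expired') !== -1) {
                // For 3xx codes, the second argument of r.return()
                // is used as the redirect URL.
                var uri = r.variables.scheme + '://' + r.variables.host
                          + r.variables.request_uri;
                r.return(307,
                    'https://localhost:8083/service/auth/refresh?uri=' + uri);
            } else {
                // Pass the upstream response through unchanged.
                r.return(reply.status, reply.responseBody);
            }
        });
}

export default { intercept };
```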
Doing appropriate subrequest and looking into it might work for you, see here for details: http://nginx.org/en/docs/http/ngx_http_js_module.html http://nginx.org/en/docs/njs/reference.html#subrequest -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Tue Jul 14 13:16:53 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 14 Jul 2020 14:16:53 +0100 Subject: Force SSL redirection to target service host for all protocols In-Reply-To: <50d16cb4bbf3d72199808fc06bd292ed.NginxMailingListEnglish@forum.nginx.org> References: <20200708075448.GL20939@daoine.org> <50d16cb4bbf3d72199808fc06bd292ed.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200714131653.GN20939@daoine.org> On Mon, Jul 13, 2020 at 02:57:34PM -0400, siva.pannier wrote: Hi there, > https://docs.nginx.com/nginx/admin-guide/security-controls/securing-tcp-traffic-upstream/ > > My TCP (non-ssl) client is able to hit the TCP Server (SSL enabled) via the > Nginx (proxy_ssl) but buffered reader gets back only 'null' When my client is "nc", and my server is "openssl s_server -port 12345", things seem to work for me. Anything I write on one end is shown on the other, with nginx handling the ssl/no-ssl translation. > Server code: > ######### > ServerSocketFactory ssf = SSLServerSocketFactory.getDefault(); > int port = 8091; > ServerSocket ss = ssf.createServerSocket(port); This looks like your server wants to listen on port 8091. Your nginx configuration suggests that nginx listens on 8091, and talks to the server on 12345. > Nginx Conf: > ############ > stream { > upstream backend { > server backend1.example.com:12345; > } > > server { > listen 8091; > proxy_pass backend; > proxy_ssl on; Match the ports, and it should work. 
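For illustration, a sketch with the ports matched (assuming the Java SSL server listens on 12345 and plain-TCP clients connect to nginx on 8091):

```nginx
stream {
    upstream backend {
        server backend1.example.com:12345;   # the SSL-enabled Java server
    }

    server {
        listen 8091;            # plain TCP from the client
        proxy_pass backend;     # nginx adds TLS on the upstream side
        proxy_ssl on;
        proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
        proxy_ssl_verify on;
    }
}
```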
Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jul 14 13:28:45 2020 From: nginx-forum at forum.nginx.org (salmanpskply) Date: Tue, 14 Jul 2020 09:28:45 -0400 Subject: How to get response body when using proxy_intercept_errors In-Reply-To: <20200714131337.GZ12747@mdounin.ru> References: <20200714131337.GZ12747@mdounin.ru> Message-ID: <4e032d3c693f680b372c95cd5fd7308e.NginxMailingListEnglish@forum.nginx.org> Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288683,288693#msg-288693 From francis at daoine.org Tue Jul 14 13:37:36 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 14 Jul 2020 14:37:36 +0100 Subject: Help with filter module In-Reply-To: <86489412edde1b1f7896a0328fdf57a6.NginxMailingListEnglish@forum.nginx.org> References: <86489412edde1b1f7896a0328fdf57a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200714133736.GO20939@daoine.org> On Tue, Jul 14, 2020 at 02:20:54AM -0400, redflag wrote: Hi there, I don't have an answer for you, but... > i am writing a new Nginx filter module , the main issue is that when i try > to read the request body it turn out to be null, is there a way to READ THE > BODY?? > > i found an already existing module that could read the request body filter > but it does not appear to be working, this is the module: [ > https://www.nginx.com/resources/wiki/extending/examples/body_filter/ ] but > it is not working ...when you say "not working", do you mean "it fails to compile, giving *this* error message"; or "it compiles but fails to configure, giving *this* error message"; or "it compiles and configures, but when I make *this* request I get *that* response but I want to get *this* response instead"; or something else? > I would really appreciate any type of help. I suspect that people who are able to help, will find it easier to help if they have a full clear question, possibly with sample code. 
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jul 14 13:42:48 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 14 Jul 2020 14:42:48 +0100 Subject: nginx plus - purge multiple URLs in one request In-Reply-To: <25fa3f7a3e23427ff76f97b6437b43ca.NginxMailingListEnglish@forum.nginx.org> References: <25fa3f7a3e23427ff76f97b6437b43ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200714134248.GP20939@daoine.org> On Tue, Jul 14, 2020 at 08:28:35AM -0400, CarstenK. wrote: Hi there, > we use nginx plus for caching and purge URLs automatic by our Shopsystem. In > some times we have a lot of URLs to purge and need for every (sometimes > wildcard purge) an extra request. > > Is it possible to purge a list of urls in one request? http://nginx.org/r/proxy_cache_purge suggests "no". (At least: not using that directive.) It looks like it does single exact-match or single prefix-match in one request. Is it possible to set the expiry appropriately at read-time, so that things will auto-expire without needing to be purged? Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jul 14 13:55:04 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Tue, 14 Jul 2020 09:55:04 -0400 Subject: Force SSL redirection to target service host for all protocols In-Reply-To: <20200714131653.GN20939@daoine.org> References: <20200714131653.GN20939@daoine.org> Message-ID: <2cf4b4e07c2f23b00f4e61886c81a0bf.NginxMailingListEnglish@forum.nginx.org> Extremely sorry, I mentioned the wrong port in that post.. Actually I am using the correct port number.. 
Client (Windows + non SSL):8091 ==> Nginx host (ubuntu vm+ SSL redirection) ==> TCP server (Windows + SSL enabled) TCP server listening on 8091 Nginx Server listening on 8091 Client makes call to Nginx on 8091 I modified my server code for additional debugging as below ################# ServerSocketFactory ssf = SSLServerSocketFactory.getDefault(); int port = 8091; ServerSocket ss = ssf.createServerSocket(port); while (true) { try { Socket sock = ss.accept(); System.out.println("Timeout set is " + sock.getSoTimeout()); System.out.println("New client connected"); PrintWriter pw = new PrintWriter(sock.getOutputStream()); pw.println(new Date().toString() + " from port: "+port); System.out.println("Data ready to sent to client"); pw.flush(); //pw.close(); System.out.println("Data sent to client"); System.out.println("Ready to read client data"); BufferedReader br = new BufferedReader(new InputStreamReader(sock.getInputStream())); String data = br.readLine(); System.out.println("Data received from Client: "+ data); //br.close(); sock.close(); System.out.println("Socket closed"); ######################## Output from the server when client initiated the connection is.. 
##################### Timeout set is 0 New client connected Data ready to sent to client Data sent to client Ready to read client data I/O error: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: no cipher suites in common Exception in thread "main" javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: no cipher suites in common at sun.security.ssl.SSLSocketImpl.checkEOF(Unknown Source) at sun.security.ssl.AppInputStream.read(Unknown Source) at sun.nio.cs.StreamDecoder.readBytes(Unknown Source) at sun.nio.cs.StreamDecoder.implRead(Unknown Source) at sun.nio.cs.StreamDecoder.read(Unknown Source) at java.io.InputStreamReader.read(Unknown Source) at java.io.BufferedReader.fill(Unknown Source) at java.io.BufferedReader.readLine(Unknown Source) at java.io.BufferedReader.readLine(Unknown Source) at com.att.tcp.server.TCPSServer.main(TCPSServer.java:37) Caused by: javax.net.ssl.SSLHandshakeException: no cipher suites in common at sun.security.ssl.Alerts.getSSLException(Unknown Source) at sun.security.ssl.SSLSocketImpl.fatal(Unknown Source) at sun.security.ssl.Handshaker.fatalSE(Unknown Source) at sun.security.ssl.Handshaker.fatalSE(Unknown Source) at sun.security.ssl.ServerHandshaker.chooseCipherSuite(Unknown Source) at sun.security.ssl.ServerHandshaker.clientHello(Unknown Source) at sun.security.ssl.ServerHandshaker.processMessage(Unknown Source) at sun.security.ssl.Handshaker.processLoop(Unknown Source) at sun.security.ssl.Handshaker.process_record(Unknown Source) at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source) at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at sun.security.ssl.SSLSocketImpl.writeRecord(Unknown Source) at sun.security.ssl.AppOutputStream.write(Unknown Source) at sun.nio.cs.StreamEncoder.writeBytes(Unknown Source) at sun.nio.cs.StreamEncoder.implFlushBuffer(Unknown Source) at sun.nio.cs.StreamEncoder.implFlush(Unknown Source) at 
sun.nio.cs.StreamEncoder.flush(Unknown Source) at java.io.OutputStreamWriter.flush(Unknown Source) at java.io.BufferedWriter.flush(Unknown Source) at java.io.PrintWriter.flush(Unknown Source) at com.att.tcp.server.TCPSServer.main(TCPSServer.java:31) Error was thrown on the line "pw.flush();" in the above code #################################### Output from the client is ##################### I/O error: Connection reset Exception in thread "main" java.net.SocketException: Connection reset at java.net.SocketInputStream.read(Unknown Source) at java.net.SocketInputStream.read(Unknown Source) at sun.security.ssl.InputRecord.readFully(Unknown Source) at sun.security.ssl.InputRecord.read(Unknown Source) at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source) at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source) at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source) at com.att.client.tcp.TimeClient.main(TimeClient.java:34) Error is thrown on the client code " socket.startHandshake(); " ########################## > When my client is "nc", and my server is "openssl s_server -port 12345", > things seem to work for me. Anything I write on one end is shown on the > other, with nginx handling the ssl/no-ssl translation. Are you able to run a similar configuration? May be I would have done something wrong on SSL settings or on self-signed certificate. Let me start things from scratch again.. 
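One way to narrow down a "no cipher suites in common" failure is to see which cipher suites a given OpenSSL cipher string actually enables; a minimal sketch using Python's standard ssl module, with the cipher string taken from the nginx config above:

```python
import ssl

# Client-side context restricted to the same cipher string nginx was
# given for the upstream connection (proxy_ssl_ciphers HIGH:!aNULL:!MD5).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!aNULL:!MD5")

# Every cipher suite this string expands to; the handshake fails with
# "no cipher suites in common" when none of these overlap with what the
# server offers (e.g. a Java server with no usable key material loaded).
for c in ctx.get_ciphers():
    print(c["name"])
```

Comparing this list against what the Java side reports (for example via `SSLServerSocket.getEnabledCipherSuites()`) shows whether the two ends can agree at all.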
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288541,288696#msg-288696 From francis at daoine.org Tue Jul 14 14:01:24 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 14 Jul 2020 15:01:24 +0100 Subject: Force SSL redirection to target service host for all protocols In-Reply-To: <2cf4b4e07c2f23b00f4e61886c81a0bf.NginxMailingListEnglish@forum.nginx.org> References: <20200714131653.GN20939@daoine.org> <2cf4b4e07c2f23b00f4e61886c81a0bf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200714140124.GQ20939@daoine.org> On Tue, Jul 14, 2020 at 09:55:04AM -0400, siva.pannier wrote: Hi there, > Output from the server when client initiated the connection is.. > ##################### > javax.net.ssl.SSLHandshakeException: no cipher suites in common That suggests that the ssl client (nginx) and the ssl server (your code) are unable to agree on how to set up a suitable ssl session. Possibly the nginx side will have (debug) logs showing what it tried and what response it got. Your nginx config does say what ciphers nginx should try; does your server accept one of those? Good luck with it, f -- Francis Daly francis at daoine.org From des.davc12 at gmail.com Tue Jul 14 16:12:31 2020 From: des.davc12 at gmail.com (Desiree Valdez) Date: Tue, 14 Jul 2020 09:12:31 -0700 Subject: How to get response body when using proxy_intercept_errors In-Reply-To: <20200714131337.GZ12747@mdounin.ru> References: <7290c04592139199ff2f8e4792e26f56.NginxMailingListEnglish@forum.nginx.org> <20200714131337.GZ12747@mdounin.ru> Message-ID: Stop emails to me On Tue, Jul 14, 2020, 6:13 AM Maxim Dounin wrote: > Hello! > > On Tue, Jul 14, 2020 at 04:25:46AM -0400, salmanpskply wrote: > > > I have a requirement where when proxied response error code is 401 > > (condition #1) and response body has text "SAML Token Expired" (condition > > #2), I need to intercept it and redirect to refresh token API. > > > > I could do it partially where only error code 401 is considered. 
Not > able to > find a way to read response body text to give condition #2: > > There is no way. Errors are intercepted based on the response > headers, and the response body is not read from the upstream > server if the error is intercepted. > > If you need to analyze the response body, consider using njs > instead. Doing appropriate subrequest and looking into it might > work for you, see here for details: > > http://nginx.org/en/docs/http/ngx_http_js_module.html > http://nginx.org/en/docs/njs/reference.html#subrequest > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 15 10:31:02 2020 From: nginx-forum at forum.nginx.org (CarstenK.) Date: Wed, 15 Jul 2020 06:31:02 -0400 Subject: nginx plus - purge multiple URLs in one request In-Reply-To: <20200714134248.GP20939@daoine.org> References: <20200714134248.GP20939@daoine.org> Message-ID: Hi Francis, thank you for your fast reply. Prices or other product information are changing not very often, so i can cache product sites for a very long time and only need to refresh if there are changes. If there are changes i purge sites automatically, so they are "always" correct. 
I think it's a good way for me and it's working fine, only purging needs too much time :( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288687,288707#msg-288707 From nginx-forum at forum.nginx.org Wed Jul 15 11:56:14 2020 From: nginx-forum at forum.nginx.org (Evald80) Date: Wed, 15 Jul 2020 07:56:14 -0400 Subject: Found Nginx 1.19.0 stopped but no idea what happened In-Reply-To: <76d7ae8b3ae9ce05e22e3f4c7415b7ad.NginxMailingListEnglish@forum.nginx.org> References: <20200709170335.GM20939@daoine.org> <76d7ae8b3ae9ce05e22e3f4c7415b7ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0b2c0553f84abe4ccd21dec4d78d3a8a.NginxMailingListEnglish@forum.nginx.org> Problem appeared again. Well i have to disable the last module which is mod_sec and see again. If the issue appears again, can we say that we have a memory leak on nginx? I have noticed that Current config is the following. [root at web ~]# nginx -V nginx version: nginx/1.19.1 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module 
--with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' [root at web ~]# Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288511,288713#msg-288713 From nginx-forum at forum.nginx.org Wed Jul 15 14:04:08 2020 From: nginx-forum at forum.nginx.org (siva.pannier) Date: Wed, 15 Jul 2020 10:04:08 -0400 Subject: Force SSL redirection to target service host for all protocols In-Reply-To: <20200714140124.GQ20939@daoine.org> References: <20200714140124.GQ20939@daoine.org> Message-ID: <5f67a5c1d451152dde4aab9e5c5a296f.NginxMailingListEnglish@forum.nginx.org> Thanks Francis! I was able to resolve that after a creating a Keystore jks with my cert & key and pointing my java code to that store using the system property, after that added keystore manager to the SSL context. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288541,288716#msg-288716 From nginx-forum at forum.nginx.org Wed Jul 15 15:16:46 2020 From: nginx-forum at forum.nginx.org (oriano) Date: Wed, 15 Jul 2020 11:16:46 -0400 Subject: Ingress whitelist configuration problem in kubernetes Message-ID: <20621f07cdc446a993eed2e4f541f60d.NginxMailingListEnglish@forum.nginx.org> I am working with a nginx ingress controller, inside kubernetes as a Daemon set. To enable whitelisting add the following parameter inside the configmap: whitelist-source-range with the ips that I want to allow separated by (,) commas. When doing this I tried to access from one of the ips but the result was an error 403. 
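For reference, the configmap entry described above looks roughly like this (a sketch only; the name, namespace, and IP ranges are hypothetical, and the exact key should be checked against the ingress controller's documentation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config          # hypothetical name
  namespace: kube-system
data:
  # comma-separated list of source ranges allowed through
  whitelist-source-range: "10.0.0.0/24,203.0.113.5/32"
```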
Searching, I found the recommendation to change the parameter externalTrafficPolicy to local in the service; I made this change but it still doesn't work. I entered one of the pods to verify the modules: there is a modules folder, but the module ngx_http_access_module does not appear in it, so I was left wondering whether this may be the cause of the problem. The installation of the ingress is done through helm v2.16.1 stable version like this: helm install stable/nginx-ingress --name ingress --values ingress-config.yml --namespace kube-system Any recommendation? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288726,288726#msg-288726 From nginx-forum at forum.nginx.org Wed Jul 15 15:23:29 2020 From: nginx-forum at forum.nginx.org (harsh) Date: Wed, 15 Jul 2020 11:23:29 -0400 Subject: Nginx not retrying failed UDP message In-Reply-To: <20200710143858.GB59881@vlcx> References: <20200710143858.GB59881@vlcx> Message-ID: Thanks Vladimir for the explanation. 'udp_radius' is an upstream{} block with 3 servers. What you mentioned is exactly what we are seeing in our tests. Since we are dealing with Radius Accounting request we don't want to lose failed messages. Is there any way to force nginx to retry failed messages at all? Thanks, Harsh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288662,288727#msg-288727 From nginx-forum at forum.nginx.org Thu Jul 16 22:19:18 2020 From: nginx-forum at forum.nginx.org (beckyjmcd) Date: Thu, 16 Jul 2020 18:19:18 -0400 Subject: nginx ingress controller on non-standard ports Message-ID: I have deployed the nginx ingress controller to my kubernetes cluster as a daemonset. 
In the deployments/daemon-set/nginx-ingress.yaml file, I modified the container spec and set: hostPort: 81 (for http) hostPort: 444 (for https) Likewise in the deployments/service/nodeport.yaml, I set port: 81 targetPort: 80 port: 444 targetPort: 443 I could not get one ingress resource to route to both http and https, because http redirected to https on port 443 but I have something else running on that port. I can get my example service to work with https if I explicitly specify: https://service-helloworld-https.blue.gms.sandia.gov:444/ but if I try: http://service-helloworld-https.blue.gms.sandia.gov:81 it does not work. When I try to curl: curl -Lv http://service-helloworld-https.blue.gms.sandia.gov:81 I see that "Location" is set to: < Location: https://service-helloworld-https.blue.gms.sandia.gov:443/ so http requests are being redirected to https on port 443. But my ingress controller listens on 444. How can I update my ingress resource to redirect to port 444? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288738,288738#msg-288738 From ganesh_vk at rediffmail.com Fri Jul 17 09:20:57 2020 From: ganesh_vk at rediffmail.com (ganesh) Date: 17 Jul 2020 09:20:57 -0000 Subject: How to connect Remote shared folder with password Message-ID: <20200717092057.7935.qmail@f5mail-224-156.rediffmail.com> Hi, I'm evaluating the NGINX web server for our application; currently we use IIS and we want to shift to NGINX for better performance. During evaluation, we are stuck while connecting to a remotely located folder with a password; we need your assistance in resolving this. We use 3 servers (web, app & db) for our PHP application. The issue happens when we want to connect to a shared folder on the app server from our web server, where nginx is running.
Web server - NGINX is installed App server - our PHP application sources are available in a shared folder which can be accessed with a username & password Thank you, Ganesh Thanks & Regards Ganesh. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yichun at openresty.com Fri Jul 17 17:33:22 2020 From: yichun at openresty.com (Yichun Zhang) Date: Fri, 17 Jul 2020 10:33:22 -0700 Subject: OpenResty 1.17.8.2 is just released Message-ID: Hi folks, We just released OpenResty 1.17.8.2, a maintenance release fixing a regression in ngx_http_perl_module: https://openresty.org/en/ann-1017008002.html OpenResty is a high-performance and dynamic web platform based on our enhanced version of the Nginx core, our enhanced version of LuaJIT, and many powerful Nginx modules and Lua libraries. See OpenResty's homepage for details: https://openresty.org/ Have fun! Best, Yichun --- Yichun Zhang is the creator of OpenResty, the founder and CEO of OpenResty Inc. From dino.edwards at mydirectmail.net Sat Jul 18 13:18:22 2020 From: dino.edwards at mydirectmail.net (Dino Edwards) Date: Sat, 18 Jul 2020 13:18:22 +0000 Subject: Reverse proxy to Tomcat 8 Message-ID: <1772e9e38fe8458a97b5117a522d0f16@mydirectmail.net> I'm trying to reverse proxy with Nginx to a Tomcat 8 application listening on port 8443, but I'm running into an issue where I'm getting status 404 on all static content. Below is my config file. The /admin location works with no problems; the /ciphermail location is the problematic one. I had the exact same problem with static content in the /admin location until I added the following to my server block: location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ and then it started working. However, the /ciphermail location still does not work.
server { #LISTEN CONFIG listen 80 default_server; listen [::]:80 default_server; #REDIRECT TO HTTPS UNCOMMENT BELOW TO ENABLE #return 301 https://$host$request_uri; keepalive_timeout 70; #LOGS CONFIG access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log warn; proxy_max_temp_file_size 5120m; client_max_body_size 5120m; # Set the .well-known directory alias for initial Lets Encrypt Certificate location /.well-known { root /var/www/html/; } root /var/www/html; index index.cfm index.html index.htm; location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 1y; } location /admin { #Set Real IP Headers proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_redirect off; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_pass http://localhost:8888/admin/; } location /ciphermail { #Set Real IP Headers proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_redirect off; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_ssl_verify off; proxy_pass https://localhost:8443/ciphermail/; } } From nginx-forum at forum.nginx.org Sun Jul 19 06:45:46 2020 From: nginx-forum at forum.nginx.org (jalil1408) Date: Sun, 19 Jul 2020 02:45:46 -0400 Subject: nginx reverse proxy for ssh reverse tunnel? 
Message-ID: <62d189b15f79cdb0200f34b78016049a.NginxMailingListEnglish@forum.nginx.org> * 192.168.1.100 (reverse proxy and ssh tunnel server) : centos 8 + sshd 8 + nginx 1.14.1 + firewalld disabled * 192.168.1.101 (local web server) : windows 10 + web App (port 80) exposed to remote access with SSH reverse tunnel (port 6033) * 192.168.1.102 (remote machine) : Ubuntu + curl curl http://192.168.1.100:6033/api/Users ==> works well curl http://192.168.1.100/tunnel/api/Users ==> "the page you are looking for is temporarily unavailable" !!! tried with this [/etc/nginx/nginx.conf] ... http { ... server { ... location /tunnel/ { proxy_pass http://127.0.0.1:6033; } } } then with this [/etc/nginx/nginx.conf] ... http { ... upstream tunnel { server 127.0.0.1:6033; } server { ... location /tunnel/ { proxy_pass http://tunnel; } } } how to make a working nginx configuration that forward server port 80 to the ssh tunnel port 6033? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288756,288756#msg-288756 From francis at daoine.org Sun Jul 19 07:54:36 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Jul 2020 08:54:36 +0100 Subject: X-Accel-Redirect not redirecting to named locaiton In-Reply-To: <2bc5ca09643cd9c1c53e16140f288955.NginxMailingListEnglish@forum.nginx.org> References: <20200701080519.GD20939@daoine.org> <2bc5ca09643cd9c1c53e16140f288955.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200719075436.GR20939@daoine.org> On Wed, Jul 08, 2020 at 03:52:14AM -0400, latha wrote: Hi there, > I want to pass the response body of > http://test.svc.cluster.local:9080/v2/test/* apis to named location > `acreate` which does proxy_pass to another micro-service (which takes the > response of /v2/test* api as payload). I learnt that `body_filter_by_lua` > does not work with internal redirects, are there any other options to do > this? 
I don't know; and I have not tested it; but the "Subrequests Chaining" example using njs on http://nginx.org/en/docs/njs/examples.html does seem to use some of "reply.responseBody" from the first subrequest as part of the second subrequest. Maybe that can be adapted to what you want? Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Jul 19 08:11:46 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Jul 2020 09:11:46 +0100 Subject: How to connect Remote shared folder with password In-Reply-To: <20200717092057.7935.qmail@f5mail-224-156.rediffmail.com> References: <20200717092057.7935.qmail@f5mail-224-156.rediffmail.com> Message-ID: <20200719081146.GS20939@daoine.org> On Fri, Jul 17, 2020 at 09:20:57AM -0000, ganesh wrote: Hi there, I don't fully follow your design and expectations in this context, but... > We use 3 servers, web, app & db servers for our php application. > > The issue happens when we wanted to connect a shared folder in app server from our web server where the nginx is running. > > Web server - NGINX is installed > App server - our PHP application sources are available in a shared folder which can be connected thru username & password ...if you want nginx to serve files from a filesystem, you will probably have most luck if you make sure that the filesystem is available to nginx, outside of nginx -- mount it as a shared drive, or use a network file system that presents it as if it were local, so that nginx is just doing normal local-filesystem things. However -- nginx does not "do" php, so nginx does not need to care what or where your php source files are. Typically, you will run a fastcgi service which does need to access those files; and nginx will tell that service what filename it should run for this specific request If you do want nginx to access files provided by a remote service, then you'll need to make sure that your nginx can speak whatever protocol your remote service speaks. 
Often that is http, using nginx's proxy_pass directive. (Although in that case, nginx isn't accessing files; it is receiving the response of a http request.) Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Jul 19 08:19:54 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Jul 2020 09:19:54 +0100 Subject: Reverse proxy to Tomcat 8 In-Reply-To: <1772e9e38fe8458a97b5117a522d0f16@mydirectmail.net> References: <1772e9e38fe8458a97b5117a522d0f16@mydirectmail.net> Message-ID: <20200719081954.GT20939@daoine.org> On Sat, Jul 18, 2020 at 01:18:22PM +0000, Dino Edwards wrote: Hi there, > Trying to reverse proxy using Nginx to a tomcat 8 application listening on port 8443 but I'm running into an issue where I'm getting status 404 on all static content. What request do you make? (Probably something like /file.jpg, or /ciphermail/file.jpg? Give one specific example.) What response do you get? (You said 404. Do the nginx-or-upstream logs give any further information?) What response do you want instead? (This might be "the content of /usr/local/nginx/html/ciphermail/file.jpg", or "the response from proxy_pass to https://localhost:8443/ciphermail/file.jpg", or something else -- be specific,for the example that you gave in the first answer.) > Below is my config file. The /admin location works with no problems. The /ciphermail is the problematic one. I had the exact same problem with static content with the /admin location until I added the following in my server block: > > location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ > > and then it started working. However, the /ciphermail location seems to still not work. > location /ciphermail { Depending on what you want, it might be that you should have "^~" before and "/" after the word "/ciphermail" in the "location" definition. Or it might be something else. 
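[Editor's note: a sketch of the "^~" suggestion applied to the config Dino posted above. This is an untested illustration, not Dino's final config; the other proxy_* directives from his /ciphermail block would be kept as-is.]

```nginx
# "^~" makes this prefix location win over the regex location
# "~* \.(jpg|jpeg|png|gif|ico|css|js)$", so static assets under
# /ciphermail/ are proxied to Tomcat instead of being looked up
# under the local document root (which caused the 404s).
location ^~ /ciphermail/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_ssl_verify off;
    proxy_pass https://localhost:8443/ciphermail/;
}
```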
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Jul 19 08:29:32 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Jul 2020 09:29:32 +0100 Subject: nginx reverse proxy for ssh reverse tunnel? In-Reply-To: <62d189b15f79cdb0200f34b78016049a.NginxMailingListEnglish@forum.nginx.org> References: <62d189b15f79cdb0200f34b78016049a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200719082932.GU20939@daoine.org> On Sun, Jul 19, 2020 at 02:45:46AM -0400, jalil1408 wrote: Hi there, > * 192.168.1.102 (remote machine) : Ubuntu + curl > curl http://192.168.1.100:6033/api/Users ==> works well > curl http://192.168.1.100/tunnel/api/Users ==> "the page you are > looking for is temporarily unavailable" !!! That suggests that you want a request to port 80 for /tunnel/X to be proxy_pass'ed to port 6033 for /X. (As in: remove the "/tunnel" part.) Do your nginx or port-6033 logs show the requests made and responses sent? > location /tunnel/ { > proxy_pass http://127.0.0.1:6033; Add "/" after 6033. http://nginx.org/r/proxy_pass Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Jul 19 11:39:55 2020 From: nginx-forum at forum.nginx.org (bitquest25c) Date: Sun, 19 Jul 2020 07:39:55 -0400 Subject: TCP proxy with SNI support Message-ID: <60fcd2b23361397a16d7d3bb13a4491b.NginxMailingListEnglish@forum.nginx.org> Hello, I have a single server with one Public IP and 10 domains. For each domain I?d like to have a separate docker container as an email server (Postfix + Dovecot). I?d like to achieve this with transparent TCP proxy with SNI support. I'd like to route traffic from example.com on ports 587 & 143 to one container and traffic for acme.com on ports 587 & 143 to a different container, etc. Should ports 587 & 143 be changed to 465 & 993 instead to achieve this? Can I achieve this by using host:port? Does anyone know of an example? Thank you. 
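[Editor's note: the approach Francis suggests in his reply further down (the ssl_preread example from the stream module docs) could be sketched roughly as follows. Untested; the container addresses are hypothetical placeholders. Ports 465/993 carry implicit TLS, so the ClientHello, and the SNI inside it, is the first thing on the wire and can be preread; ports 587/143 start in plaintext and upgrade via STARTTLS, so there is nothing to preread there.]

```nginx
stream {
    # Route by SNI without terminating TLS; each mail container
    # terminates TLS itself with its own certificate.
    map $ssl_preread_server_name $imaps_backend {
        example.com 172.17.0.2:993;   # hypothetical container addresses
        acme.com    172.17.0.3:993;
    }
    map $ssl_preread_server_name $smtps_backend {
        example.com 172.17.0.2:465;
        acme.com    172.17.0.3:465;
    }
    server {
        listen 993;
        ssl_preread on;
        proxy_pass $imaps_backend;
    }
    server {
        listen 465;
        ssl_preread on;
        proxy_pass $smtps_backend;
    }
}
```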
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288761,288761#msg-288761 From dino.edwards at mydirectmail.net Sun Jul 19 11:45:14 2020 From: dino.edwards at mydirectmail.net (Dino Edwards) Date: Sun, 19 Jul 2020 11:45:14 +0000 Subject: Reverse proxy to Tomcat 8 In-Reply-To: <20200719081954.GT20939@daoine.org> References: <1772e9e38fe8458a97b5117a522d0f16@mydirectmail.net> <20200719081954.GT20939@daoine.org> Message-ID: > location /ciphermail { > Depending on what you want, it might be that you should have "^~" before and "/" after the word "/ciphermail" in the "location" definition. That seems to have done the trick. Thanks a lot From nginx-forum at forum.nginx.org Sun Jul 19 18:53:00 2020 From: nginx-forum at forum.nginx.org (jalil1408) Date: Sun, 19 Jul 2020 14:53:00 -0400 Subject: nginx reverse proxy for ssh reverse tunnel? In-Reply-To: <62d189b15f79cdb0200f34b78016049a.NginxMailingListEnglish@forum.nginx.org> References: <62d189b15f79cdb0200f34b78016049a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Fixed by issuing this command: sudo setsebool -P httpd_can_network_connect on and this [/etc/nginx/nginx.conf] ... http { ... server { ... location /tunnel/ { proxy_pass http://127.0.0.1:6033; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288756,288763#msg-288763 From xiaohanhoho at gmail.com Mon Jul 20 03:21:43 2020 From: xiaohanhoho at gmail.com (Xiao Han) Date: Mon, 20 Jul 2020 11:21:43 +0800 Subject: Is there any plan to support WebAssembly? Message-ID: Hi, WebAssembly provides a new way to write extension modules; Envoy, for example, already supports WebAssembly. We would like to ask about nginx's attitude towards this. In addition, we have added WebAssembly support to Nginx: we can load a wasm module as an extension, much like Lua. Will you consider accepting our code as part of open-source nginx in the future? Thanks, Han -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kunalspunjabi at gmail.com Wed Jul 22 06:41:31 2020 From: kunalspunjabi at gmail.com (Kunal Punjabi) Date: Tue, 21 Jul 2020 23:41:31 -0700 Subject: Wildcard subdomains in Nginx Message-ID: I've been struggling with setting up nginx subdomains on my linode instance and setting up CNAME redirects. *What I need is to be able to do:* 1. First set up wildcard subdomains on my server (tinyadults.com), so that users can go to abc.tinyadults.com, xyz.tinyadults.com, etc. My server is running nuxt.js on port 4001 (default port is 3000 but I chose to use 4001 as a non-standard port), so I guess I have to use reverse proxies: proxy_pass http://localhost:4001; 2. Then for my users I need to set up CNAME redirects from domain1.com to abc.tinyadults.com, and from domain2.com to xyz.tinyadults.com, so that if I visit domain1.com , it would serve the contents (without redirecting me) of abc.tinyadults.com. For testing purposes I have an additional domain ( passivefinance.com) that we could use. However, I've not been able to get step 1 working. Can someone who is experienced with nginx setup please guide me? Below is my nginx config from sites-available/tinyadults.com.conf: server { index index.html index.htm; server_name tinyadults.com www.tinyadults.com; location / { # WARNING: https in proxy_pass does NOT WORK!! I spent half a day debugging this. 
#proxy_pass https://localhost:4001; proxy_pass http://localhost:4001; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } # Kunal: create a custom 404 nginx page, from https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-to-use-custom-error-pages-on-ubuntu-14-04 error_page 404 /custom_404.html; location = /custom_404.html { root /etc/nginx/sites-available/custom_nginx_error_pages; internal; } listen [::]:4001 ssl http2; # managed by Certbot, modified by Kunal to add http2 listen 4001 ssl http2; # managed by Certbot, modified by Kunal to add http2 #Install SSL certificates and configure https:// on a per-domain-basis by running: #sudo certbot --nginx #(when prompted, be sure to select the option to set up redirects from http to https and effectively "disable" http) ssl_certificate /etc/letsencrypt/live/tinyadults.com-0001/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/tinyadults.com-0001/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { server_name tinyadults.com; if ($host = tinyadults.com) { return 301 https://$host$request_uri; } # managed by Certbot listen 80 default_server; listen [::]:80 default_server; return 404; # managed by Certbot } -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Jul 22 10:52:11 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Jul 2020 11:52:11 +0100 Subject: Wildcard subdomains in Nginx In-Reply-To: References: Message-ID: <20200722105211.GV20939@daoine.org> On Tue, Jul 21, 2020 at 11:41:31PM -0700, Kunal Punjabi wrote: Hi there, > I've been struggling with setting up nginx subdomains on my linode instance > and setting up CNAME redirects. 
I don't fully understand what you are trying to do here. For example -- what do you mean by a CNAME redirect? > *What I need is to be able to do:* > > 1. First set up wildcard subdomains on my server (tinyadults.com), so that > users can go to abc.tinyadults.com, xyz.tinyadults.com, etc. server_name *.tinyadults.com; (See http://nginx.org/r/server_name) Or, if the names should be handled differently: server { server_name abc.tinyadults.com; } server { server_name xyz.tinyadults.com; } > My server is running nuxt.js on port 4001 (default port is 3000 but I chose > to use 4001 as a non-standard port), so I guess I have to use reverse > proxies: > proxy_pass http://localhost:4001; Ok. Does your nuxt.js service care whether the original request was for the hostname abc or xyz? If so, you may want to indicate to it what the original hostname was. > 2. Then for my users I need to set up CNAME redirects from domain1.com to > abc.tinyadults.com, and from domain2.com to xyz.tinyadults.com, so that if > I visit domain1.com , it would serve the contents (without redirecting me) > of abc.tinyadults.com. I don't know what you mean by that. Might it be server { server_name abc.tinyadults.com domain1.com; } or perhaps server { server_name domain1.com; location / { proxy_pass https://abc.tinyadults.com; } } ? > However, I've not been able to get step 1 working. Can someone who is > experienced with nginx setup please guide me? Can you give some specific examples of "I make *this* request, and I want to get *this* response"? It looks like you have a nuxt.js http service listening on port 4001, and you want an nginx https service to listen on port 443 and reverse-proxy the 4001 service. But your suggested nginx config seems to try to do something different from that. 
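[Editor's note: pulling the fragments from this reply together, the two-server layout could be sketched as below. This is an untested illustration; certificate directives are omitted, and the mapping of domain1.com onto the abc subdomain is only an assumption about what "CNAME redirect" means here.]

```nginx
# One https server block handles every tenant subdomain; passing
# $host upstream lets the nuxt.js app see which subdomain
# (abc, xyz, ...) the client actually requested.
server {
    listen 443 ssl;
    server_name *.tinyadults.com;
    # ssl_certificate / ssl_certificate_key omitted in this sketch

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:4001;
    }
}

# A customer's own domain served by the same upstream, presented
# to the app as a fixed tenant subdomain (hypothetical mapping).
server {
    listen 443 ssl;
    server_name domain1.com;
    # ssl_certificate / ssl_certificate_key omitted in this sketch

    location / {
        proxy_set_header Host abc.tinyadults.com;
        proxy_pass http://localhost:4001;
    }
}
```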
Good luck with it, f -- Francis Daly francis at daoine.org From xeioex at nginx.com Wed Jul 22 18:26:42 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 22 Jul 2020 21:26:42 +0300 Subject: =?UTF-8?Q?Re=3A_Is_there_any_plan_to_support_WebAssembly=EF=BC=9F?= In-Reply-To: References: Message-ID: <76b9756f-f75a-f59c-0ca2-cbe6549a7d86@nginx.com> On 20.07.2020 06:21, ?? wrote: > > > Hi > > > WebAssembly provides a new way to extend the module, like envoy > already supports WebAssembly. we would like to consult nginx's > attitude towards this > > > In addition, We have supported? WebAssembly for Nignx. we can > load the wasm module as your extension like Lua. Will you > consider accepting our code as part of open source nginx in the > future? > > > Thanks > > Han Hi Han, We are in? the process of investigation of applicability of WebAssembly in nginx. Please share more details about the ways you use WebAssembly in nginx. 1) do you have publicly available code right now? 2) Do you support any interoperability interfaces like proxy-wasm? (https://github.com/proxy-wasm/spec/tree/master/abi-versions/vNEXT) Or please share the details about your WASM runtime, and the way you interact with nginx internals. >Will you consider accepting our code as part of open source nginx in the future? If it is just a module, feel free to release it. There are multitudes of community supported modules for nginx. We rarely accept external large patches, because its support requires considerable time and energy. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Thu Jul 23 12:38:30 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 23 Jul 2020 13:38:30 +0100 Subject: TCP proxy with SNI support In-Reply-To: <60fcd2b23361397a16d7d3bb13a4491b.NginxMailingListEnglish@forum.nginx.org> References: <60fcd2b23361397a16d7d3bb13a4491b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200723123830.GW20939@daoine.org> On Sun, Jul 19, 2020 at 07:39:55AM -0400, bitquest25c wrote: Hi there, > I have a single server with one Public IP and 10 domains. For each domain > I?d like to have a separate docker container as an email server (Postfix + > Dovecot). I?d like to achieve this with transparent TCP proxy with SNI > support. > > I'd like to route traffic from example.com on ports 587 & 143 to one > container and traffic for acme.com on ports 587 & 143 to a different > container, etc. Does the first example configuration at http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html show what you want? And if not -- how will your tcp proxy know that some traffic to your-ip:your-port is intended for one.example.com instead of for two.example.com? > Should ports 587 & 143 be changed to 465 & 993 instead to achieve this? It sounds like you want your clients to speak protocol-over-ssl, using SNI. So you will want probably want smtps-on-465 and imaps-on-993, yes. So long as you control the clients, and can require them to use your configuration (SNI and these ports), it should work, In this design, nginx is not doing SSL-termination; each individual upstream service will do that. Good luck with it, f -- Francis Daly francis at daoine.org From kaushalshriyan at gmail.com Thu Jul 23 17:20:29 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 23 Jul 2020 22:50:29 +0530 Subject: The page you are looking for is temporarily unavailable. Please try again later. 
(HTTP Status code 499 and 504) Message-ID: Hi, I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core). I have configured the below settings in /etc/nginx/nginx.conf. # For more information on configuration, see: > # * Official English Documentation: http://nginx.org/en/docs/ > # * Official Russian Documentation: http://nginx.org/ru/docs/ > user nginx; > worker_processes auto; > error_log /var/log/nginx/error.log; > pid /run/nginx.pid; > # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. > include /usr/share/nginx/modules/*.conf; > events { > worker_connections 1024; > } > http { > log_format main '$remote_addr - $remote_user [$time_local] > "$request" ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > access_log /var/log/nginx/access.log main; > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > include /etc/nginx/mime.types; > default_type application/octet-stream; > # Load modular configuration files from the /etc/nginx/conf.d > directory. > # See http://nginx.org/en/docs/ngx_core_module.html#include > # for more information. 
> include /etc/nginx/conf.d/*.conf; > > server { > listen 80; > location /test { > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host $http_host; > proxy_pass > https://vpc-test-search-9n18zc18u2ffd3n4ywapz7jkrde.us-west-1.es.amazonaws.com > ; > } > location /prod { > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host $http_host; > proxy_pass > https://vpc-prod-search-zvf8bfaabstcc6gi7sklqh7ll4.us-west-1.es.amazonaws.com > ; > } > > > error_page 404 /404.html; > location = /40x.html { > } > error_page 500 502 503 504 /50x.html; > location = /50x.html { > } > } > } When I hit http://14.217.10.21/prod I am seeing the below issue in nginx access log (/var/log/nginx/access.log) 114.18.113.6 - - [23/Jul/2020:17:14:35 +0000] "GET /prod HTTP/1.1" *499* 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36" "-" 114.18.113.6 - - [23/Jul/2020:17:15:35 +0000] "GET /prod HTTP/1.1" *504* 3693 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36" "-" [image: image.png] Please let me know if you need any additional information. Any help will be highly appreciated. Thanks in Advance. I look forward to hearing from you. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 171551 bytes Desc: not available URL: From adam.volek at cdn77.com Thu Jul 23 18:06:36 2020 From: adam.volek at cdn77.com (Adam Volek) Date: Thu, 23 Jul 2020 20:06:36 +0200 Subject: No revalidation when using stale-while-revalidate Message-ID: <414fa7be-e6a6-9dcd-82b3-e6588e52a595@cdn77.com> Hi, We're running into some strange behaviour with the stale-while-revalidate extension of the cache-control header when using nginx as a reverse proxy. 
When there is a stale response in the cache with a nonzero stale-while-revalidate time, nginx attempts revalidation but seems to ignore the upstream answer if it has a specific status code, such as 404 or 500, and serves a stale response to the client. Other response codes such as 200 or 410 don't trigger this behaviour. Is this intended, and if so, is there a way to configure nginx to treat 404 like any other response? I understand that this behaviour might be desirable in some situations (especially for responses with 5xx status codes), but in our case, if the upstream returns a 404 response, we would want the cache to start returning it as well as soon as the revalidation is finished. Below is a minimal nginx.conf we used to replicate the issue. Kind regards, Adam Volek events {} http { proxy_cache_path /tmp/ngx_cache keys_zone=one:10m; proxy_cache one; server { listen 8080 default_server; location / { proxy_pass http://example.com/; } } } From kunalspunjabi at gmail.com Thu Jul 23 21:30:42 2020 From: kunalspunjabi at gmail.com (Kunal Punjabi) Date: Thu, 23 Jul 2020 14:30:42 -0700 Subject: nginx Digest, Vol 129, Issue 24 In-Reply-To: References: Message-ID: Hi Francis, Thanks for the response, and sorry for the delay here - I'm new to the mailing list and trying to figure out how the threads and responses are supposed to work (if I am not following the nginx digest's guidelines, please let me know). *> Can you give some specific examples of "I make *this* request, and I want to get *this* response"? It looks like you have a nuxt.js http service listening on port 4001, and you want an nginx https service to listen on port 443 and reverse-proxy the 4001 service. But your suggested nginx config seems to try to do something different from that.* *So, in simple terms, this is what I have and this is what I want to happen:* - I have nuxt.js (frontend) running on a specific port, let's say 4001 (this can be changed and does not matter).
- Nuxt.js frontend communicates with my backend APIs running PHP / Laravel on port 8000 (this port can also be changed and does not matter). - I will host the frontend on domain: tinyadults.com, and the APIs on api.tinyadults.com (btw please don't go to these URLs, they're not configured yet). - Furthermore, users of my site (tinyadults.com) will be able to go to tinyadults.com, create an account, and in the settings specify what subdomain and custom domain they want to use. Similar to this https://downloads.intercomcdn.com/i/o/173567706/0ef0f78954834279ddad732e/image.png as you can see here https://help.podia.com/en/articles/101242-setting-up-your-custom-domain-name - So, a user (say user1) should be able to go to the app (tinyadults.com) and specify their own subdomain URL, like user1.tinyadults.com, user2.tinyadults.com etc. I probably need an nginx config with "wildcard subdomains" to do this, which I also haven't figured out how to do. - They can also specify a custom domain (that they own), like domain1.com, in the admin settings. What should happen is that if you go to domain1.com, it would show the contents of that user's subdomain, user1.tinyadults.com in this example. - All of this is exactly like what Podia does here: https://help.podia.com/en/articles/101242-setting-up-your-custom-domain-name, but configuring this has been a huge issue for us. It probably requires nginx and some DNS / CNAME changes, and we haven't been able to figure out how to get it to work. Our current nginx configs can be seen here: https://gist.github.com/connecteev/f16b01e2aadbdcfbeb2e53f1b29cea04 So far, it's been like shooting darts blindfolded. If you could help figure this out, that would be so appreciated.
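[Editor's note: one way to express the "custom domain shows a tenant's subdomain without redirecting" requirement is a map from the customer's domain to the tenant subdomain. This is an untested sketch under the assumption that the nuxt.js app selects the tenant from the Host header; the user1/domain1.com entries are hypothetical examples from this thread.]

```nginx
# Inside the http block: translate each customer domain to the
# tenant subdomain it should display; unmatched hosts pass through.
map $host $tenant_host {
    default     $host;
    domain1.com user1.tinyadults.com;   # hypothetical customer mapping
}

server {
    listen 80;
    server_name *.tinyadults.com domain1.com;

    location / {
        # The upstream app always sees the tenant subdomain, whether
        # the visitor typed user1.tinyadults.com or domain1.com, so
        # no HTTP redirect is ever issued.
        proxy_set_header Host $tenant_host;
        proxy_pass http://localhost:4001;
    }
}
```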
Kunal On Wed, Jul 22, 2020 at 5:00 AM wrote: > Send nginx mailing list submissions to > nginx at nginx.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mailman.nginx.org/mailman/listinfo/nginx > or, via email, send a message with subject or body 'help' to > nginx-request at nginx.org > > You can reach the person managing the list at > nginx-owner at nginx.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of nginx digest..." > > > Today's Topics: > > 1. Wildcard subdomains in Nginx (Kunal Punjabi) > 2. Re: Wildcard subdomains in Nginx (Francis Daly) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 21 Jul 2020 23:41:31 -0700 > From: Kunal Punjabi > To: nginx at nginx.org > Subject: Wildcard subdomains in Nginx > Message-ID: > vEik5f5_OuBWJJWTg at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > I've been struggling with setting up nginx subdomains on my linode instance > and setting up CNAME redirects. > > *What I need is to be able to do:* > > 1. First set up wildcard subdomains on my server (tinyadults.com), so that > users can go to abc.tinyadults.com, xyz.tinyadults.com, etc. > My server is running nuxt.js on port 4001 (default port is 3000 but I chose > to use 4001 as a non-standard port), so I guess I have to use reverse > proxies: > proxy_pass http://localhost:4001; > > 2. Then for my users I need to set up CNAME redirects from domain1.com to > abc.tinyadults.com, and from domain2.com to xyz.tinyadults.com, so that if > I visit domain1.com , it would serve the contents (without redirecting me) > of abc.tinyadults.com. For testing purposes I have an additional domain ( > passivefinance.com) that we could use. > > However, I've not been able to get step 1 working. Can someone who is > experienced with nginx setup please guide me? 
> Below is my nginx config from sites-available/tinyadults.com.conf:
>
> server {
>     index index.html index.htm;
>     server_name tinyadults.com www.tinyadults.com;
>
>     location / {
>         # WARNING: https in proxy_pass does NOT WORK!! I spent half a day debugging this.
>         #proxy_pass https://localhost:4001;
>         proxy_pass http://localhost:4001;
>         proxy_http_version 1.1;
>         proxy_set_header Upgrade $http_upgrade;
>         proxy_set_header Connection 'upgrade';
>         proxy_set_header Host $host;
>         proxy_cache_bypass $http_upgrade;
>     }
>
>     # Kunal: create a custom 404 nginx page, from
>     # https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-to-use-custom-error-pages-on-ubuntu-14-04
>     error_page 404 /custom_404.html;
>     location = /custom_404.html {
>         root /etc/nginx/sites-available/custom_nginx_error_pages;
>         internal;
>     }
>
>     listen [::]:4001 ssl http2; # managed by Certbot, modified by Kunal to add http2
>     listen 4001 ssl http2; # managed by Certbot, modified by Kunal to add http2
>
>     # Install SSL certificates and configure https:// on a per-domain-basis by running:
>     #   sudo certbot --nginx
>     # (when prompted, be sure to select the option to set up redirects from http to https and effectively "disable" http)
>     ssl_certificate /etc/letsencrypt/live/tinyadults.com-0001/fullchain.pem; # managed by Certbot
>     ssl_certificate_key /etc/letsencrypt/live/tinyadults.com-0001/privkey.pem; # managed by Certbot
>     include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
>     ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
> }
>
> server {
>     server_name tinyadults.com;
>     if ($host = tinyadults.com) {
>         return 301 https://$host$request_uri;
>     } # managed by Certbot
>
>     listen 80 default_server;
>     listen [::]:80 default_server;
>     return 404; # managed by Certbot
> }
> -------------- next part --------------
> An HTML attachment was scrubbed... 
> URL: < > http://mailman.nginx.org/pipermail/nginx/attachments/20200721/85055f7d/attachment-0001.htm > > > > ------------------------------ > > Message: 2 > Date: Wed, 22 Jul 2020 11:52:11 +0100 > From: Francis Daly > To: nginx at nginx.org > Subject: Re: Wildcard subdomains in Nginx > Message-ID: <20200722105211.GV20939 at daoine.org> > Content-Type: text/plain; charset=us-ascii > > On Tue, Jul 21, 2020 at 11:41:31PM -0700, Kunal Punjabi wrote: > > Hi there, > > > I've been struggling with setting up nginx subdomains on my linode > instance > > and setting up CNAME redirects. > > I don't fully understand what you are trying to do here. > > For example -- what do you mean by a CNAME redirect? > > > *What I need is to be able to do:* > > > > 1. First set up wildcard subdomains on my server (tinyadults.com), so > that > > users can go to abc.tinyadults.com, xyz.tinyadults.com, etc. > > server_name *.tinyadults.com; > > (See http://nginx.org/r/server_name) > > Or, if the names should be handled differently: > > server { server_name abc.tinyadults.com; } > server { server_name xyz.tinyadults.com; } > > > My server is running nuxt.js on port 4001 (default port is 3000 but I > chose > > to use 4001 as a non-standard port), so I guess I have to use reverse > > proxies: > > proxy_pass http://localhost:4001; > > Ok. Does your nuxt.js service care whether the original request was for > the hostname abc or xyz? If so, you may want to indicate to it what the > original hostname was. > > > 2. Then for my users I need to set up CNAME redirects from domain1.com > to > > abc.tinyadults.com, and from domain2.com to xyz.tinyadults.com, so that > if > > I visit domain1.com , it would serve the contents (without redirecting > me) > > of abc.tinyadults.com. > > I don't know what you mean by that. 
> > Might it be > > server { server_name abc.tinyadults.com domain1.com; } > > or perhaps > > server { server_name domain1.com; > location / { proxy_pass https://abc.tinyadults.com; } > } > > ? > > > However, I've not been able to get step 1 working. Can someone who is > > experienced with nginx setup please guide me? > > Can you give some specific examples of "I make *this* request, and I > want to get *this* response"? > > It looks like you have a nuxt.js http service listening on port 4001, and > you want an nginx https service to listen on port 443 and reverse-proxy > the 4001 service. > > But your suggested nginx config seems to try to do something different > from that. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > ------------------------------ > > End of nginx Digest, Vol 129, Issue 24 > ************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Jul 24 02:33:48 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 Jul 2020 05:33:48 +0300 Subject: No revalidation when using stale-while-revalidate In-Reply-To: <414fa7be-e6a6-9dcd-82b3-e6588e52a595@cdn77.com> References: <414fa7be-e6a6-9dcd-82b3-e6588e52a595@cdn77.com> Message-ID: <20200724023348.GY12747@mdounin.ru> Hello! On Thu, Jul 23, 2020 at 08:06:36PM +0200, Adam Volek wrote: > We're running into some strange behaviour with the > stale-while-revalidate extension of the cache-control header > when using nginx as a reverse proxy. 
> When there is a stale response in the cache with a nonzero
> stale-while-revalidate time, nginx attempts revalidation but
> seems to ignore the upstream answer if it has a specific status
> code, such as 404 or 500, and serve a stale response to the
> client. Other response codes such as 200 or 410 don't trigger
> this behaviour. Is this intended, and if so, is there a way to
> configure nginx to treat 404 as any other response?

As long as the response returned isn't cacheable (either as specified in the response Cache-Control / Expires headers, or per proxy_cache_valid), nginx won't put the response into the cache and will continue serving the previously cached response till the stale-while-revalidate timeout expires.

Most likely "specific status code" in your tests in fact means responses returned by your upstream server without Cache-Control headers, and hence not cached by nginx.

> I understand that this behaviour might be desirable in some
> situations (especially for the responses with 5xx status codes),
> but in our case, if the upstream returns a 404 response, we would
> want the cache to start returning it as well as soon as the
> revalidation is finished.

As long as the above analysis is correct, the solution is simple: make sure responses you want nginx to cache are cacheable, that is, make sure appropriate headers are present or configure proxy_cache_valid (http://nginx.org/r/proxy_cache_valid).

-- 
Maxim Dounin
http://mdounin.ru/

From william.zk at antgroup.com Fri Jul 24 03:03:45 2020
From: william.zk at antgroup.com (=?UTF-8?B?5pu+5p+vKOavheS4nSk=?=)
Date: Fri, 24 Jul 2020 11:03:45 +0800
Subject: Will nginx support quic-lb feature
Message-ID: <7a78d1f8-b776-475d-a746-fe76a5bedfba.william.zk@antgroup.com>

Dear NGINX maintainer:

Will nginx support the quic-lb feature described in https://tools.ietf.org/html/draft-ietf-quic-load-balancers-03 later? 
We firmly believe that nginx is the most appropriate LB product to do this (other LB products, such as LVS, would have to modify the kernel). If you agree, we are very happy to contribute our code to the nginx community (we will contribute it to the nginx-quic branch).
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From adam.volek at cdn77.com Fri Jul 24 13:21:31 2020
From: adam.volek at cdn77.com (Adam Volek)
Date: Fri, 24 Jul 2020 15:21:31 +0200
Subject: No revalidation when using stale-while-revalidate
In-Reply-To: <20200724023348.GY12747@mdounin.ru>
References: <414fa7be-e6a6-9dcd-82b3-e6588e52a595@cdn77.com> <20200724023348.GY12747@mdounin.ru>
Message-ID: <07e8cbcc-4291-517b-e91c-471b02e94baa@cdn77.com>

Hi,

On 24. 07. 20 4:33, Maxim Dounin wrote:
> As long as the response returned isn't cacheable (either
> as specified in the response Cache-Control / Expires
> headers, or per proxy_cache_valid), nginx won't put
> the response into cache and will continue serving previously
> cached response till stale-while-revalidate timeout expires.
>
> Most likely "specific status code" in your tests in fact means
> responses returned by your upstream server without Cache-Control
> headers, and hence not cached by nginx.

This is not the case as far as I can tell. In our tests, the upstream server was set up to send these two responses, the 204 first and then the 404:

HTTP/1.1 204 No Content
Date: Fri, 24 Jul 2020 11:32:33 GMT
Connection: keep-alive
cache-control: max-age=5, stale-while-revalidate=10

HTTP/1.1 404 Not Found
Date: Fri, 24 Jul 2020 11:32:35 GMT
Content-Type: text/plain
Connection: close
cache-control: max-age=5, stale-while-revalidate=10

In this scenario, nginx returns the fresh 204 for five seconds and then a stale 204 for ten seconds, even though it is attempting revalidation according to the access log at the upstream server. 
If we send the following 410 response instead of the 404, however, nginx behaves as we would expect: it returns the fresh 204 for five seconds, then it revalidates almost instantly and starts returning the fresh 410:

HTTP/1.1 410 Gone
Date: Fri, 24 Jul 2020 11:41:56 GMT
Content-Type: text/plain
Connection: close
cache-control: max-age=5, stale-while-revalidate=10

Adam Volek

From zhengyupann at 163.com Fri Jul 24 13:27:01 2020
From: zhengyupann at 163.com (Zhengyu Pan)
Date: Fri, 24 Jul 2020 21:27:01 +0800 (CST)
Subject: why nginx worker process listen in port 80, not master process?
Message-ID: <68347349.84ec.17381016e44.Coremail.zhengyupann@163.com>

On my node, every network namespace has its own nginx process. When I use the netstat command to see which nginx processes are listening on port 80, I find that some nginx worker processes are listening on port 80 and some nginx master processes are listening on port 80. In my understanding, the nginx master process should be the one listening on port 80. Why do worker processes listen on port 80? Is it related to reload?

[root at node2 ~]# for i in `ip netns |grep lbaas| awk '{print $1}'`; do ip netns exec $i netstat -atnp |grep nginx;done
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22253/nginx: worker
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 21968/nginx: worker
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22622/nginx: worker
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 17499/nginx: master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 15567/nginx: master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 17568/nginx: master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 18643/nginx: master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 18736/nginx: master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 17328/nginx: master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 17766/nginx: master
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kaushalshriyan at gmail.com Fri Jul 24 15:36:18 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Fri, 24 Jul 2020 21:06:18 +0530
Subject: The page you are looking for is temporarily unavailable. Please try again later. 
(HTTP Status code 499 and 504)
In-Reply-To: 
References: 
Message-ID: 

On Thu, 23 Jul 2020 at 22:50, Kaushal Shriyan wrote:
> Hi,
>
> I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003
> (Core). I have configured the below settings in /etc/nginx/nginx.conf.
>
>> # For more information on configuration, see:
>> #   * Official English Documentation: http://nginx.org/en/docs/
>> #   * Official Russian Documentation: http://nginx.org/ru/docs/
>> user nginx;
>> worker_processes auto;
>> error_log /var/log/nginx/error.log;
>> pid /run/nginx.pid;
>> # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
>> include /usr/share/nginx/modules/*.conf;
>> events {
>>     worker_connections 1024;
>> }
>> http {
>>     log_format main '$remote_addr - $remote_user [$time_local] "$request" '
>>                     '$status $body_bytes_sent "$http_referer" '
>>                     '"$http_user_agent" "$http_x_forwarded_for"';
>>     access_log /var/log/nginx/access.log main;
>>     sendfile on;
>>     tcp_nopush on;
>>     tcp_nodelay on;
>>     keepalive_timeout 65;
>>     types_hash_max_size 2048;
>>     include /etc/nginx/mime.types;
>>     default_type application/octet-stream;
>>     # Load modular configuration files from the /etc/nginx/conf.d directory.
>>     # See http://nginx.org/en/docs/ngx_core_module.html#include
>>     # for more information.
>>     include /etc/nginx/conf.d/*.conf;
>>
>>     server {
>>         listen 80;
>>         location /test {
>>             proxy_set_header X-Forwarded-For $remote_addr;
>>             proxy_set_header Host $http_host;
>>             proxy_pass https://vpc-test-search-9n18zc18u2ffd3n4ywapz7jkrde.us-west-1.es.amazonaws.com;
>>         }
>>         location /prod {
>>             proxy_set_header X-Forwarded-For $remote_addr;
>>             proxy_set_header Host $http_host;
>>             proxy_pass https://vpc-prod-search-zvf8bfaabstcc6gi7sklqh7ll4.us-west-1.es.amazonaws.com;
>>         }
>>
>>         error_page 404 /404.html;
>>         location = /40x.html {
>>         }
>>         error_page 500 502 503 504 /50x.html;
>>         location = /50x.html {
>>         }
>>     }
>> }
>
> When I hit http://14.217.10.21/prod I am
> seeing the below issue in nginx access log (/var/log/nginx/access.log)
>
> 114.18.113.6 - - [23/Jul/2020:17:14:35 +0000] "GET /prod HTTP/1.1" *499* 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36" "-"
>
> 114.18.113.6 - - [23/Jul/2020:17:15:35 +0000] "GET /prod HTTP/1.1" *504* 3693 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36" "-"
>
> [image: image.png]
>
> Please let me know if you need any additional information. Any help will be highly appreciated. Thanks in Advance. I look forward to hearing from you.
>
> Best Regards,
>
> Kaushal

Hi,

Checking in again if someone can pitch in for help regarding my earlier email to this mailing list. Thanks in advance.

Best Regards,

Kaushal

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 171551 bytes Desc: not available URL: From francis at daoine.org Sat Jul 25 09:54:49 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 25 Jul 2020 10:54:49 +0100 Subject: Wildcard subdomains in Nginx In-Reply-To: References: Message-ID: <20200725095449.GX20939@daoine.org> On Thu, Jul 23, 2020 at 02:30:42PM -0700, Kunal Punjabi wrote: Hi there, > Thanks for the response, and sorry for the delay here No worries -- this list is not built for immediate responses. > - I'm new to the > mailing list and trying to figure out how the threads and responses are > supposed to work (If I am not following the nginx digest's guidelines, > please let me know) Welcome to the list. > > When replying, please edit your Subject line so it is more specific > > than "Re: Contents of nginx digest..." The only change I made in this response, was to edit the Subject: line to match the original mail, instead of it being the generic "digest" Subject line. > *> Can you give some specific examples of "I make *this* request, and I > want to get *this* response"? It looks like you have a nuxt.js http service > listening on port 4001, and you want an nginx https service to listen on > port 443 and reverse-proxy the 4001 service. But your > suggested nginx config seems to try to do something different from that.* > *So, in simple terms, this is what I have and this is what I want to > happen:* > > - I have nuxt.js (frontend) running on a specific port, let's say 4001 > (this can be changed and does not matter). Ok. I guess that your nuxt system must have some way to know that "one.tinyadults.com" should get one set of content, while "two.tinyadults.com" and "example.com" should both get a different set of content. From what I understand, nginx does not need to care about that; that is for nuxt-or-something-behind-it to handle. 
> - So, a user (say user1) should be able to the app (tinyadults.com), > specify their own subdomain URL, like user1.tinyadults.com, > user2.tinyadults.com etc. I probably need a nginx config with "wildcard > subdomains" to do this, which I also haven't figured out how to do. If nginx should handle all incoming domain (server_name or Host:) requests in the same way, then nginx does not need to care about the domains. For just-http, your nginx config might be just server { listen 80 default; location / { proxy_pass http://localhost:4001; } } with probably something like "proxy_set_header Host $host;" and maybe some other proxy_* parts, depending on what your nuxt system does. If you want to include https, then you will need to have a certificate for every server_name that your clients might use to access nginx. That would probably mean that you would have a server{} block like the above, but with ssl enabled and a matching certificate, for each of those server_name values. > - They can also specify a custom domain (that they own), like domain1.com > in the admin settings. What should happen is if you go to domain1.com, > it would show the contents of that user's subdomain, user1.tinyadults.com > in this example. That sounds to me like it is not for your nginx to worry about. If "domain1.com" is your nginx, then your nginx just proxy_pass'es to nuxt as normal, and nuxt must know that domain1.com should get the same content as user1.tinyadults.com. If "domain1.com" is not your nginx, then domain1.com must know to proxy_pass to user1.tinyadults.com (which *is* your nginx); and your nginx just proxy_pass'es to nuxt as normal. > It probably requires > nginx and some DNS / CNAME changes, and we haven't been able to figure out > how to get it to work. All that DNS (including CNAME) does in this context, is get the client to talk to your IP address when they want to talk to whatever name they are looking for. The DNS side has nothing to do with nginx. 
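Putting the pieces above together, one https variant of the wildcard setup might look like the sketch below. The certificate path and the backend port 4001 are assumptions taken from the thread; a real deployment needs a certificate that covers every name clients will use (e.g. a wildcard certificate for *.tinyadults.com).

```nginx
server {
    listen 443 ssl http2;
    # One server block handles every tenant subdomain.
    server_name *.tinyadults.com;

    # Assumption: a wildcard certificate was issued for *.tinyadults.com.
    ssl_certificate     /etc/letsencrypt/live/tinyadults.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/tinyadults.com/privkey.pem;

    location / {
        proxy_pass http://localhost:4001;
        # Forward the hostname the client actually asked for, so the
        # backend can pick the right tenant's content.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Customer-owned domains (domain1.com) would need their own server blocks and certificates, since a *.tinyadults.com certificate cannot cover them.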
> Our current nginx configs can be seen here > https://gist.github.com/connecteev/f16b01e2aadbdcfbeb2e53f1b29cea04 Pretty much all nginx config starts with you deciding what you want to have happen. What is the full data flow that you care about? When the user makes a request for "one.tinyadults.com" or "two.tinyadults.com", DNS must make sure that the request gets to your nginx. After that: how do you want nginx to handle those two requests? If you want a http request to be proxy_pass'ed to nuxt, configure your nginx to do that (like is shown above). If you want a http request to be redirected to a https equivalent of the same request, configure your nginx to do that (like the "return 301" is your linked config). If you want a https request to be handled by nginx, you must (for the client's sake) first make sure that you present a certificate that the client will accept; and then: what do you want nginx to do with the request? If it is "proxy_pass to nuxt" -- do that. If it is something else -- decide what that something else is, and then see how to configure nginx to do it. > So far, it's been like shooting darts blindfolded. If you could help figure > this out, that would be so appreciated. The hard part is you deciding what you want nginx to do with each request. (Usually it will be "serve *this* file from the filesystem", or "redirect to *this* http(s) url", or "proxy_pass to *this* http(s) url", or "fastcgi_pass to the configured service, asking it to process *this* filename". You just have to be very clear on how you want each request to be handled.) After that is decided, then you can worry about how to configure nginx to do the thing that you want nginx to do. That's where the documentation, the mailing list, and other resources come in. 
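Schematically, the per-request choices listed above map onto nginx configuration like this (a sketch; all paths, names and upstreams are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    # "serve *this* file from the filesystem"
    location /static/ {
        root /var/www;
    }

    # "redirect to *this* http(s) url"
    location /old/ {
        return 301 https://example.com/new/;
    }

    # "proxy_pass to *this* http(s) url"
    location /app/ {
        proxy_pass http://localhost:4001;
    }

    # "fastcgi_pass to the configured service, asking it to process *this* filename"
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;
    }
}
```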
Good luck with it, f -- Francis Daly francis at daoine.org From jfs.world at gmail.com Sun Jul 26 03:26:18 2020 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Sun, 26 Jul 2020 11:26:18 +0800 Subject: Ok to have access_log and error_log writing to the same file? Message-ID: I've traditionally kept the 2 separate, but I'm wondering if it's ok or if there are going to be any problems having the 2 directives write to the same file. My sense is that it should be fine, but can anybody who's more familiar - especially with the internals - comment? My file path contains no variables, btw. thank you, -jf -- He who settles on the idea of the intelligent man as a static entity only shows himself to be a fool. From francis at daoine.org Sun Jul 26 21:22:02 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 26 Jul 2020 22:22:02 +0100 Subject: The page you are looking for is temporarily unavailable. Please try again later. (HTTP Status code 499 and 504) In-Reply-To: References: Message-ID: <20200726212202.GY20939@daoine.org> On Thu, Jul 23, 2020 at 10:50:29PM +0530, Kaushal Shriyan wrote: Hi there, > > location /prod { > > proxy_set_header X-Forwarded-For $remote_addr; > > proxy_set_header Host $http_host; > > proxy_pass > > https://vpc-prod-search-zvf8bfaabstcc6gi7sklqh7ll4.us-west-1.es.amazonaws.com > > ; > > } > > error_page 500 502 503 504 /50x.html; > > location = /50x.html { > > } > When I hit http://14.217.10.21/prod I am seeing > the below issue in nginx access log (/var/log/nginx/access.log) > > 114.18.113.6 - - [23/Jul/2020:17:14:35 +0000] "GET /prod HTTP/1.1" *499* 0 > "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36" "-" "499" there means that the client closed the connection with nginx before nginx successfully sent things to the client. 
There's nothing nginx can do about that, other than break the nginx connection to upstream if you want it to (http://nginx.org/r/proxy_ignore_client_abort). > 114.18.113.6 - - [23/Jul/2020:17:15:35 +0000] "GET /prod HTTP/1.1" *504* > 3693 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) > AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36" > "-" "504" is Gateway Timeout: nginx failed to get an answer from the upstream (https://vpc-prod-search-zvf8bfaabstcc6gi7sklqh7ll4.us-west-1.es.amazonaws.com) for the request "/prod" in the time that it was allowed. > [image: image.png] That image presumably shows your /50x.html file, which you have configured to be shown for any of the listed 500-series error codes. For the 504 -- can you connect to that upstream from the nginx server? Is there anything extra in the nginx error log? Does, for example, the hostname resolve now to the same IP address that it resolved to when nginx started? f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Jul 27 01:42:00 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Jul 2020 04:42:00 +0300 Subject: No revalidation when using stale-while-revalidate In-Reply-To: <07e8cbcc-4291-517b-e91c-471b02e94baa@cdn77.com> References: <414fa7be-e6a6-9dcd-82b3-e6588e52a595@cdn77.com> <20200724023348.GY12747@mdounin.ru> <07e8cbcc-4291-517b-e91c-471b02e94baa@cdn77.com> Message-ID: <20200727014200.GA12747@mdounin.ru> Hello! On Fri, Jul 24, 2020 at 03:21:31PM +0200, Adam Volek wrote: > On 24. 07. 20 4:33, Maxim Dounin wrote: > > As long as the response returned isn't cacheable (either > > as specified in the response Cache-Control / Expires > > headers, or per proxy_cache_valid), nginx won't put > > the response into cache and will continue serving previously > > cached response till stale-while-revalidate timeout expires. 
> > > > Most likely "specific status code" in your tests in fact means > > responses returned by your upstream server without Cache-Control > > headers, and hence not cached by nginx. > > This is not the case as far as I can tell. In our tests, the upstream server was set up to send these two responses, the 204 first, and then then the 404: > > HTTP/1.1 204 No Content > Date: Fri, 24 Jul 2020 11:32:33 GMT > Connection: keep-alive > cache-control: max-age=5, stale-while-revalidate=10 > > HTTP/1.1 404 Not Found > Date: Fri, 24 Jul 2020 11:32:35 GMT > Content-Type: text/plain > Connection: close > cache-control: max-age=5, stale-while-revalidate=10 > > In this scenario, nginx returns fresh 204 for five seconds and then it returns stale 204 for ten seconds even though it's attempting revalidation according > to access log at the upstream server. If we send the following 410 response instead of 404 however, nginx behaves as we would expect: it returns the fresh > 204 for five seconds, then it revalidates it almost instantly and starts returning the fresh 410: > > HTTP/1.1 410 Gone > Date: Fri, 24 Jul 2020 11:41:56 GMT > Content-Type: text/plain > Connection: close > cache-control: max-age=5, stale-while-revalidate=10 You are right, this seems to be an incorrect behaviour of stale-while-revalidate / stale-if-error handling. Internally, stale-if-error (and stale-while-revalidate) currently behave as if "proxy_cache_use_stale" was set with all possible flags (http://nginx.org/r/proxy_cache_use_stale) when handling upstream server responses. Notably this includes http_403, http_404, and http_429 flags, and this causes the effect you observe. This probably should be fixed. 
Just in case, the following configuration can be used to reproduce the issue within nginx itself:

proxy_cache_path cache keys_zone=one:1m;

server {
    listen 8080;

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_cache one;
        add_header X-Cache-Status $upstream_cache_status always;
    }
}

server {
    listen 8081;

    location / {
        add_header cache-control "max-age=5, stale-while-revalidate=10" always;

        if ($connection = "3") {
            return 204;
        }

        return 404;
    }
}

-- 
Maxim Dounin
http://mdounin.ru/

From jfs.world at gmail.com Tue Jul 28 05:08:32 2020
From: jfs.world at gmail.com (Jeffrey 'jf' Lim)
Date: Tue, 28 Jul 2020 13:08:32 +0800
Subject: any way to escape logged bytes?
Message-ID: 

I am discovering that nginx is logging the bytes sent by a client - in raw form - in my error.log for the following error:

client sent invalid method while reading client request line, client: NN.N.N.N, server: NAME, request: ",'?Cookie: mstshash=eltons"

Is there a way to get nginx to escape the bytes, as per what happens with access_log / log_format (https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format)?

thanks,
-jf

--
He who settles on the idea of the intelligent man as a static entity only shows himself to be a fool.

From mdounin at mdounin.ru Thu Jul 30 12:05:52 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 30 Jul 2020 15:05:52 +0300
Subject: any way to escape logged bytes?
In-Reply-To: 
References: 
Message-ID: <20200730120552.GG12747@mdounin.ru>

Hello!

On Tue, Jul 28, 2020 at 01:08:32PM +0800, Jeffrey 'jf' Lim wrote:

> I am discovering that nginx is logging the bytes sent by a client - in
> raw form - in my error.log for the following error:
>
> client sent invalid method while reading client request line, client:
> NN.N.N.N, server: NAME, request: ",'?Cookie: mstshash=eltons"
>
> Is there a way to get nginx to escape the bytes, as per what happens
> with access_log / log_format
> (https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format)? 
No, currently error logs are written as is and not escaped. Relevant ticket is here: https://trac.nginx.org/nginx/ticket/191 -- Maxim Dounin http://mdounin.ru/ From jfs.world at gmail.com Thu Jul 30 12:48:54 2020 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Thu, 30 Jul 2020 20:48:54 +0800 Subject: any way to escape logged bytes? In-Reply-To: <20200730120552.GG12747@mdounin.ru> References: <20200730120552.GG12747@mdounin.ru> Message-ID: On Thu, Jul 30, 2020 at 8:06 PM Maxim Dounin wrote: > > Hello! > > On Tue, Jul 28, 2020 at 01:08:32PM +0800, Jeffrey 'jf' Lim wrote: > > > I am discovering that nginx is logging the bytes sent by a client - in > > raw form - in my error.log for the following error: > > > > client sent invalid method while reading client request line, client: > > NN.N.N.N, server: NAME, request: ",'?Cookie: mstshash=eltons" > > > > Is there a way to get nginx to escape the bytes, as per what happens > > with access_log / log_format > > (https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format)? > > No, currently error logs are written as is and not escaped. > Relevant ticket is here: > > https://trac.nginx.org/nginx/ticket/191 > gotcha. Thank you! Will you accept a patch which would make error_log escape the same way as per log_format's default escaping? That's if I even manage to get it figured out. -jf From mdounin at mdounin.ru Thu Jul 30 14:21:41 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 Jul 2020 17:21:41 +0300 Subject: any way to escape logged bytes? In-Reply-To: References: <20200730120552.GG12747@mdounin.ru> Message-ID: <20200730142141.GH12747@mdounin.ru> Hello! On Thu, Jul 30, 2020 at 08:48:54PM +0800, Jeffrey 'jf' Lim wrote: > On Thu, Jul 30, 2020 at 8:06 PM Maxim Dounin wrote: > > > > Hello! 
> > On Tue, Jul 28, 2020 at 01:08:32PM +0800, Jeffrey 'jf' Lim wrote:
> >
> > > I am discovering that nginx is logging the bytes sent by a client - in
> > > raw form - in my error.log for the following error:
> > >
> > > client sent invalid method while reading client request line, client:
> > > NN.N.N.N, server: NAME, request: ",'?Cookie: mstshash=eltons"
> > >
> > > Is there a way to get nginx to escape the bytes, as per what happens
> > > with access_log / log_format
> > > (https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format)?
> >
> > No, currently error logs are written as is and not escaped.
> > Relevant ticket is here:
> >
> > https://trac.nginx.org/nginx/ticket/191
>
> gotcha. Thank you! Will you accept a patch which would make error_log
> escape the same way as per log_format's default escaping? That's if I
> even manage to get it figured out.

Sure. Note though that there is more than one thing to consider, including:

1. nginx uses multiline output in its own log messages, notably debug logging of HTTP requests and responses, and it might not be the best idea to escape all the error logs. Probably this needs some discretion on what to escape.

2. There can be a lot of debug log output, and it is a bad idea to slow things down. Again, this may need some discretion on what to escape.

3. Error logging cannot allocate additional memory since it is used, in particular, to log out-of-memory conditions.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Fri Jul 31 21:49:56 2020
From: nginx-forum at forum.nginx.org (Roydiggs)
Date: Fri, 31 Jul 2020 17:49:56 -0400
Subject: When does NGINX start logging
Message-ID: 

I'm going over some Web Server STIGs (referenced here: https://www.stigviewer.com/stig/web_server_security_requirements_guide/) to make sure my NGINX web server is configured to comply with those security requirements. One of the requirements is that "The web server must initiate session logging upon start up." 
So my question is: Is there any NGINX documentation or resource that shows NGINX starts logging as soon as it is started, before any requests are handled? I assume that it does by default, as each request comes in and is handled based on the configuration made in nginx.conf. To get approval for the use of NGINX at my workplace, it would be a big help to be able to provide some sort of proof or resource showing what modules are up and running by the time requests are being processed. Having a brief window where a module or logging is not yet ready while requests are already being handled would be a vulnerability concern if an attacker were to flood requests at start-up.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288932,288932#msg-288932
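For what it's worth, in a default build the nginx master process writes notice-level entries to the error log as it starts (for example, "start worker processes" as workers are spawned), before requests are processed. One hedged way to demonstrate this is to run the error log at notice level and inspect it after a restart but before sending any request. A sketch (paths illustrative):

```nginx
# Record start-up events; at "notice" severity the master process logs
# messages such as "start worker processes" during start-up, before
# any request appears in the access log.
error_log /var/log/nginx/error.log notice;
```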