From francis at daoine.org  Thu Jul  1 05:57:21 2021
From: francis at daoine.org (Francis Daly)
Date: Thu, 1 Jul 2021 06:57:21 +0100
Subject: Problem with aliases
In-Reply-To: <78655efb41ec7f4e881fa809f6e383d9.NginxMailingListEnglish@forum.nginx.org>
References: <78655efb41ec7f4e881fa809f6e383d9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210701055721.GZ11167@daoine.org>

On Wed, Jun 30, 2021 at 07:31:43PM -0400, yosef wrote:

Hi there,

> I'm trying to use several blocks in my server, using the server IP as the
> server name (no domain yet); each block points to a folder containing
> Wordpress. I don't know what I'm doing wrong, because instead of running
> index.php nginx is downloading the file.

In nginx, one request is handled in one location{}. Only the configuration
in (or inherited into) that location matters. (One http request can become
more than one (sub)request within nginx.)

> location ^~ /proj1 {
>     alias /var/www/proj1/public_html;
>     try_files $uri $uri/ /index.php?q=$uri&$args;
> }

"^~" means "prefix match, and do not look at parallel regex matches".

So a request for /proj1/index.php will be handled in this location{}. And
this location has no fastcgi_pass or similar directive, so it will use
"serve from the filesystem".

Probably the simplest fix is to copy the "location ~ \.php$ {" block to be
inside each of the "location ^~" blocks.

Note also -- if the request is for /proj1/missing, the try_files will use
the subrequest /index.php?q=/proj1/missing& -- is that what you want,
instead of (e.g.) /proj1/index.php?q=/proj1/missing& ?

Good luck with it,

	f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Thu Jul  1 20:37:56 2021
From: nginx-forum at forum.nginx.org (yosef)
Date: Thu, 01 Jul 2021 16:37:56 -0400
Subject: Problem with aliases
In-Reply-To: <20210701055721.GZ11167@daoine.org>
References: <20210701055721.GZ11167@daoine.org>
Message-ID: <10ca1519a04ae3370333bbf1d6661ab7.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

First of all, thank you very much for your help :)

I took your advice; unfortunately I still can't get this working. Now the
browser is not downloading the file -- instead it is showing me the message
"No input file specified". Let me show you what my code looks like now:

server {
    listen 80;
    listen [::]:80;

    index index.php;
    server_name 173.230.131.168;

    location ^~ /proj1 {
        alias /var/www/proj1/public_html;
        try_files $uri $uri/ /index.php?q=$uri&$args;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        }
    }

    location ^~ /proj2 {
        alias /var/www/proj2/public_html;
        try_files $uri $uri/ /index.php?q=$uri&$args;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        }
    }
}

What's wrong now?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291964,291970#msg-291970

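[For context on the reply that follows: on Debian/Ubuntu, the
snippets/fastcgi-php.conf file included above typically contains something
close to the sketch below. This is an approximation rather than the exact
shipped file -- contents vary by release.]

# snippets/fastcgi-php.conf (typical contents, approximate)
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
try_files $fastcgi_script_name =404;
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_index index.php;
include fastcgi.conf;    # this include is what sets SCRIPT_FILENAME
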
From francis at daoine.org  Fri Jul  2 07:50:43 2021
From: francis at daoine.org (Francis Daly)
Date: Fri, 2 Jul 2021 08:50:43 +0100
Subject: Problem with aliases
In-Reply-To: <10ca1519a04ae3370333bbf1d6661ab7.NginxMailingListEnglish@forum.nginx.org>
References: <20210701055721.GZ11167@daoine.org> <10ca1519a04ae3370333bbf1d6661ab7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210702075043.GA11167@daoine.org>

On Thu, Jul 01, 2021 at 04:37:56PM -0400, yosef wrote:

Hi there,

> unfortunately I still can't get this working. Now the browser is not
> downloading the file -- instead it is showing me the message "No input
> file specified". Let me show you what my code looks like now:

* what request do you make? (Probably something like "/proj1/index.php".)

* what file on your filesystem do you want your fastcgi server to read, to
process that request? (Probably something like
"/var/www/proj1/public_html/index.php".)

* what "fastcgi_param SCRIPT_FILENAME" value does nginx send to your
fastcgi server?

That last one is probably "what is inside snippets/fastcgi-php.conf?"

Note that every fastcgi server is (potentially) different, and will read
the "param" values sent in their own ways. But commonly, if exactly one
SCRIPT_FILENAME is sent, that is what is used.

You probably want something like

    fastcgi_param SCRIPT_FILENAME $request_filename;

but other options exist, depending on details.

> location ^~ /proj1 {
>     alias /var/www/proj1/public_html;

Somewhat unrelated -- that will work, but it is probably better to add a /
to both lines, so that they are

    location ^~ /proj1/ {
        alias /var/www/proj1/public_html/;

so that someone requesting /proj12/ will not accidentally be shown things
inside /var/www/proj1/public_html2/

Cheers,

	f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Fri Jul  2 15:18:29 2021
From: nginx-forum at forum.nginx.org (yucheng.tw)
Date: Fri, 02 Jul 2021 11:18:29 -0400
Subject: Why is the return value from epoll_wait always far lower than the number of connections?
Message-ID: <083661423be1d920217805b875d69fe3.NginxMailingListEnglish@forum.nginx.org>

Hello, I'm new to this forum and not familiar with the mailing list
system. If there is something I did wrong, please tell me.

In short, I have a general method (batching system calls) which I tried to
apply to Nginx, and then I found a weird behavior that I can't explain.

On Linux, connections and requests are handled in `ngx_epoll_module.c`;
the following is a snapshot of the main job of the workers. Workers use
epoll_wait to get the number of events that are ready, then handle them in
the following `for` loop. In each iteration, the worker does something
like sending a file (system call: sendfile) to the client.

```c
ngx_epoll_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags)
{
    ...
    events = epoll_wait(ep, event_list, (int) nevents, timer);
    ...
    for (i = 0; i < events; i++) {
        // handle events
        ....
    }
```

As I observed, the value of `events` is related to the number of
connections. For example, in wrk (a benchmarking tool), when I set the `c`
option to 50, the value of events will usually be close to 50. As service
time grows, the progression looks like:

```
1->1->1->50->50..... (close to 50)->50....
```

So far, it really makes sense. Then I apply my method, batching the system
calls in each iteration. The following is a snapshot:

```c
ngx_epoll_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags)
{
    ...
    events = epoll_wait(ep, event_list, (int) nevents, timer);
    ...
    my_batching_entry();
    for (i = 0; i < events; i++) {
        // handle events
    }
    my_batching_exit();
```

In the above model, instead of executing a system call (such as
sendfile64) in each iteration, all of them are deferred until
`my_batching_exit()` is called. It seems to work correctly (at least under
wrk).

OK, here is the part I found weird. With my method applied, the value
(events) returned from `epoll_wait` is always 1 or some really low value
(far from 50). Is there any mechanism that could cause this low number of
ready events? Did nginx decide the requests were being served too slowly,
and so lowered my event numbers?

With my method I only added two lines to Nginx (batching_entry and
batching_exit) and the Makefile; I mean, I didn't make a lot of changes to
Nginx. Is there anything that could cause the number of events to be far
lower than the number of connections?

Any suggestions will be appreciated! Thanks in advance!

- Steven

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291972,291972#msg-291972

From nginx-forum at forum.nginx.org  Sat Jul  3 21:15:56 2021
From: nginx-forum at forum.nginx.org (yosef)
Date: Sat, 03 Jul 2021 17:15:56 -0400
Subject: Problem with aliases
In-Reply-To: <20210702075043.GA11167@daoine.org>
References: <20210702075043.GA11167@daoine.org>
Message-ID: <03d7c955edd99caced5e93e10e3bb270.NginxMailingListEnglish@forum.nginx.org>

Hi again, now it is working -- thanks a lot!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291964,291977#msg-291977

From francis at daoine.org  Sun Jul  4 13:26:16 2021
From: francis at daoine.org (Francis Daly)
Date: Sun, 4 Jul 2021 14:26:16 +0100
Subject: Problem with aliases
In-Reply-To: <03d7c955edd99caced5e93e10e3bb270.NginxMailingListEnglish@forum.nginx.org>
References: <20210702075043.GA11167@daoine.org> <03d7c955edd99caced5e93e10e3bb270.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210704132616.GD11167@daoine.org>

On Sat, Jul 03, 2021 at 05:15:56PM -0400, yosef wrote:

Hi there,

> Hi again, now it is working -- thanks a lot!

Thanks for confirming, and it's good to hear that you have a working
system!

Cheers,

	f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Sun Jul  4 21:42:16 2021
From: nginx-forum at forum.nginx.org (vishwaskn)
Date: Sun, 04 Jul 2021 17:42:16 -0400
Subject: ssl_engine configuration
Message-ID: <1c6b051be54f64019ecc82ede63f2df0.NginxMailingListEnglish@forum.nginx.org>

Hello All,

Could you please point me to a configuration example that uses the
ssl_engine configuration directive? Is a patch to nginx needed to get it
to work (I saw a few discussions around this), or has that already been
merged to mainline?

I see here
(http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate_key)
the usage example of ssl_certificate_key specifying an engine:name:id;
however, it is not clear to me what the "id" should be here. Also, I
presume ssl_engine is a global directive and should be given the engine
name as identified by openssl.

Any pointers/references on getting ssl_engine working with nginx would be
helpful.

thank you,
regards,
-vishwas.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291983,291983#msg-291983

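[A minimal sketch of the directives in question, assuming an OpenSSL
engine that registers itself under the name "pkcs11"; the certificate
path, token and object names below are hypothetical placeholders, and the
exact key-id format depends entirely on the engine in use:]

# main (top-level) context: load the engine under the name OpenSSL knows
ssl_engine pkcs11;

server {
    listen 443 ssl;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    # engine:<engine-name>:<key-id> -- the "id" is whatever string the
    # engine expects; for PKCS#11 engines it is typically a PKCS#11 URI
    ssl_certificate_key "engine:pkcs11:pkcs11:token=mytoken;object=mykey";
}
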
From teward at thomas-ward.net  Sun Jul  4 22:52:42 2021
From: teward at thomas-ward.net (Thomas Ward)
Date: Sun, 04 Jul 2021 18:52:42 -0400
Subject: ssl_engine configuration
In-Reply-To: <1c6b051be54f64019ecc82ede63f2df0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <4GJ3xS4wZ5z3XV9@mail.syn-ack.link>

Unless you need to specify an alternative engine, I would suggest not
trying to manually configure things this way. Is there a reason you don't
want to just use ssl_certificate and ssl_certificate_key?

Sent from my T-Mobile 5G Device

From nginx-forum at forum.nginx.org  Mon Jul  5 03:22:41 2021
From: nginx-forum at forum.nginx.org (vishwaskn)
Date: Sun, 04 Jul 2021 23:22:41 -0400
Subject: ssl_engine configuration
In-Reply-To: <4GJ3xS4wZ5z3XV9@mail.syn-ack.link>
References: <4GJ3xS4wZ5z3XV9@mail.syn-ack.link>
Message-ID: <511c4cd74ed0c6f65f8ea7b39d9f910d.NginxMailingListEnglish@forum.nginx.org>

Hi Thomas,

Thanks for your reply. Yes, I do have an alternate engine to which I would
like to offload parts of the handshake. Hence the question.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291983,291985#msg-291985

From lamvinhhung1511 at gmail.com  Mon Jul  5 07:36:02 2021
From: lamvinhhung1511 at gmail.com (Hưng Vĩnh)
Date: Mon, 5 Jul 2021 14:36:02 +0700
Subject: I have a question from a website I visit often
Message-ID:

Hi NGINX!

Today, I subscribed to the mailing list to ask about a website. A few days
ago, I logged into a website at this link: https://flyxtrade.com, and that
website was still working fine. In early July, I opened that link and it
showed me "Welcome to nginx" instead of "Flyxtrade - Most secure
cryptocurrency Crypto Trading platform".

[image: image.png]

This is a picture from the last time I logged into that website, on
30/6/2021, when it was still working fine.

Do you know the new link for that website, and why NGINX appears instead
of Flyxtrade? I have a large amount of BTC in my account on that website
and I still haven't withdrawn it to my wallet. Please give back that
website, or give me a new link for it, or give back my BTC, please!

Thank you for reading my mail, and please reply!

From maliubiao at gmail.com  Mon Jul  5 10:32:17 2021
From: maliubiao at gmail.com (maliubiao)
Date: Mon, 5 Jul 2021 18:32:17 +0800
Subject: I have a question from a website I visit often
In-Reply-To:
References:
Message-ID:

If you lost your money, please ask the police for help; this is a
technical mailing list.

From teward at thomas-ward.net  Mon Jul  5 15:27:02 2021
From: teward at thomas-ward.net (Thomas Ward)
Date: Mon, 05 Jul 2021 11:27:02 -0400
Subject: I have a question from a website I visit often
In-Reply-To:
References:
Message-ID: <030d6a58-1572-44da-bf8b-0f2924d9b78f@thomas-ward.net>

This has nothing to do with NGINX. NGINX has nothing to do with Flyxtrade.
You will need your local law enforcement to help you with lost money.

From iwaraminda at googlemail.com  Mon Jul  5 19:19:03 2021
From: iwaraminda at googlemail.com (Raminda Subashana)
Date: Tue, 6 Jul 2021 00:49:03 +0530
Subject: Fwd: Nginx HTTP/3 Issue
In-Reply-To:
References:
Message-ID:

Dear All,

I have an issue with Nginx HTTP/3.
When I add "add_header alt-svc 'h3-29=":443"; ma=86400';" instead of
"add_header Alt-Svc 'h3=":443"';" to the server block, the browser shows
"err_too_many_redirects". Below is my simple config file to run a Magento
2.4.2 site:

upstream fastcgi_backend {
    server unix:/run/php/php7.4-fpm.sock;
}

server {
    listen 80;
    server_name demo3.eazykart.lk;
    set $MAGE_ROOT /var/www/html/demo3.eazykart.lk/public_html;
    include /var/www/html/demo3.eazykart.lk/public_html/nginx.conf.sample;

    listen 443 http3 reuseport;  # UDP listener for QUIC+HTTP/3
    listen 443 ssl http2;        # TCP listener for HTTP/1.1
    ssl_protocols TLSv1.3;       # QUIC requires TLS 1.3

    ssl_certificate /etc/letsencrypt/live/demo3.eazykart.lk/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/demo3.eazykart.lk/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    add_header alt-svc 'h3-29=":443"; ma=86400';
    add_header QUIC-Status $quic;  # Sent when QUIC was used
}

With "add_header Alt-Svc 'h3=":443"';", https://www.http3check.net/ says
QUIC & HTTP/3 are supported, but the browser shows h2. If I add
"add_header alt-svc 'h3-29=":443"; ma=86400';", the browser shows h3-29,
but after a few seconds it shows "err_too_many_redirects".

With "add_header Alt-Svc 'h3=":443"';", the https://gf.dev/http3-test test
fails. The test is successful if I check with "add_header alt-svc
'h3-29=":443"; ma=86400';", but then the web page cannot load.

SITE URL: https://demo3.eazykart.lk/

Please help me to resolve the issue. If anyone can share a correct config
for Magento, I'd much appreciate it.

[image: image.png]

--
Thanks & Regards,

Raminda Subhashana
Mobile : 0773482137

Please consider the environmental impact of needlessly printing this e-mail.

From mdounin at mdounin.ru  Tue Jul  6 15:16:11 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 6 Jul 2021 18:16:11 +0300
Subject: nginx-1.21.1
Message-ID:

Changes with nginx 1.21.1                                        06 Jul 2021

    *) Change: now nginx always returns an error for the CONNECT method.

    *) Change: now nginx always returns an error if both "Content-Length"
       and "Transfer-Encoding" header lines are present in the request.

    *) Change: now nginx always returns an error if spaces or control
       characters are used in the request line.

    *) Change: now nginx always returns an error if spaces or control
       characters are used in a header name.

    *) Change: now nginx always returns an error if spaces or control
       characters are used in the "Host" request header line.

    *) Change: optimization of configuration testing when using many
       listening sockets.

    *) Bugfix: nginx did not escape """, "<", ">", "\", "^", "`", "{", "|",
       and "}" characters when proxying with changed URI.

    *) Bugfix: SSL variables might be empty when used in logs; the bug had
       appeared in 1.19.5.

    *) Bugfix: keepalive connections with gRPC backends might not be closed
       after receiving a GOAWAY frame.

    *) Bugfix: reduced memory consumption for long-lived requests when
       proxying with more than 64 buffers.

--
Maxim Dounin
http://nginx.org/

From saber at planethoster.info  Tue Jul  6 19:56:53 2021
From: saber at planethoster.info (Saber at PlanetHoster.info)
Date: Tue, 6 Jul 2021 15:56:53 -0400
Subject: nginx-1.20.1-2 now requires openssl11-libs to run (centos7)?
Message-ID: <8C9B9C8E-01FC-4533-ACAE-E686DB9BAA58@planethoster.info>

Hi,

We are using nginx from the official nginx.org yum repos for centos7.

nginx-1.20.1-1.el7.ngx.x86_64 -> is running fine
nginx-1.20.1-2.el7.ngx.x86_64 -> complains about libssl.so.1.1

"nginx: /usr/sbin/nginx: error while loading shared libraries:
libssl.so.1.1: cannot open shared object file: No such file or directory"

Installed openssl11-libs from epel7 and it's now ok.

Since when is openssl11 required to run nginx on centos 7? Is it normal
behaviour or a bug?

Thanks,

Saber
CTO @PlanetHoster

From thresh at nginx.com  Wed Jul  7 08:44:11 2021
From: thresh at nginx.com (Konstantin Pavlov)
Date: Wed, 7 Jul 2021 11:44:11 +0300
Subject: nginx-1.20.1-2 now requires openssl11-libs to run (centos7)?
In-Reply-To: <8C9B9C8E-01FC-4533-ACAE-E686DB9BAA58@planethoster.info>
References: <8C9B9C8E-01FC-4533-ACAE-E686DB9BAA58@planethoster.info>
Message-ID:

Hi Saber,

On 06.07.2021 22:56, Saber at PlanetHoster.info wrote:

> nginx-1.20.1-1.el7.ngx.x86_64 -> is running fine
> nginx-1.20.1-2.el7.ngx.x86_64 -> complains about libssl.so.1.1

nginx-1.20.1-2.el7.ngx.x86_64 is not something we ship from nginx.org.

You probably mean nginx-1.20.1-2.el7.x86_64, which is available on EPEL,
and it indeed has such a dependency.

--
Konstantin Pavlov
https://www.nginx.com/

From saber at planethoster.info  Wed Jul  7 18:31:05 2021
From: saber at planethoster.info (Saber at PlanetHoster.info)
Date: Wed, 7 Jul 2021 14:31:05 -0400
Subject: nginx-1.20.1-2 now requires openssl11-libs to run (centos7)?
In-Reply-To:
References: <8C9B9C8E-01FC-4533-ACAE-E686DB9BAA58@planethoster.info>
Message-ID: <2A498B2C-1D7B-44E4-A089-4C550A3A6A4F@planethoster.info>

Thanks Konstantin. Looks like EPEL took precedence over the NGINX repo.
Will update nginx.repo to ensure its priority is higher. Usually EPEL
ships an older nginx version, so I never had this happen in the past.

I think it's a good idea to update nginx.repo in the docs so others won't
have this issue.

Best Regards,

Saber
CTO @PlanetHoster
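[For illustration of the fix being described: on CentOS 7 with the
yum-priorities plugin installed, a pinned /etc/yum.repos.d/nginx.repo
could look roughly like this. The "priority" line is the addition; the
rest follows the stock nginx.org repo file, and exact values should be
checked against the official install docs:]

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
# lower number wins; requires the yum-plugin-priorities package
priority=1
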
From nginx-forum at forum.nginx.org  Fri Jul  9 07:05:32 2021
From: nginx-forum at forum.nginx.org (lehuyprox)
Date: Fri, 09 Jul 2021 03:05:32 -0400
Subject: Make the subrequest start at the same time as the main request
Message-ID: <9b9b0b301509238d9200f462ad8f9f74.NginxMailingListEnglish@forum.nginx.org>

Hi All,

I have written an Nginx module that issues a subrequest with the main
request body, and authorizes based on the result of the subrequest (I
consulted https://github.com/openresty/srcache-nginx-module and
http://nginx.org/en/docs/http/ngx_http_auth_request_module.html).

But the subrequest only starts once the body has been read completely, so
the delay grows as the request body gets bigger.

How can I make the request body be sent to the sub server immediately, as
it is received? And can I replace the main request body with the response
of the subrequest?

Thanks,
Huy Le

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292011,292011#msg-292011

From iwaraminda at googlemail.com  Fri Jul  9 09:51:08 2021
From: iwaraminda at googlemail.com (Raminda Subashana)
Date: Fri, 9 Jul 2021 15:21:08 +0530
Subject: nginx/stable 1.21.1-1~focal amd64 http/3 build issue
Message-ID:

Hi,

I compiled the above focal release with HTTP/3, PageSpeed & Brotli.
Nothing went wrong in the build process, but when I try to
upgrade/install (upgrading from nginx 1.21.0-1-3) on Ubuntu 20.04, I get
the error below:

dpkg: error processing package nginx-dbg (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 nginx-dbg

--
Thanks & Regards,

Raminda Subhashana

From nginx-forum at forum.nginx.org  Fri Jul  9 13:13:37 2021
From: nginx-forum at forum.nginx.org (Oleg Pisklov)
Date: Fri, 09 Jul 2021 09:13:37 -0400
Subject: Upstream gets stuck in disabled state for websocket load balancing
Message-ID: <3d603dbe1e142be13768a9de0a3e39b9.NginxMailingListEnglish@forum.nginx.org>

My current nginx configuration looks like:

worker_processes 1;
error_log /dev/stdout debug;

events {
    worker_connections 1024;
}

http {
    upstream back {
        server backend1 max_fails=1 fail_timeout=10;
        server backend2 max_fails=1 fail_timeout=10;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://back;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_read_timeout 3600s;
            proxy_send_timeout 3600s;
        }
    }
}

nginx version: nginx/1.21.1

The backend is a simple websocket server, which accepts an incoming
connection and then does nothing with it.

---

# Test scenario:

1. Run nginx and the backend1 server; backend2 should stay down.
2. Run several websocket clients. Some of them try to connect to the
backend2 upstream, and nginx writes "connect() failed (111: Connection
refused) while connecting to upstream" and "upstream server temporarily
disabled while connecting to upstream" to the log, which is expected.
3. Run backend2 and wait some time (to outwait fail_timeout).
4. Close the websocket connections on the backend1 side and wait for the
clients to reconnect.

Then a strange thing happens. I expect new websocket connections to be
distributed evenly between the two backends. But most of them land on
backend1, as if backend2 were still disabled. Sometimes there is one
client that connects to backend2, but it's the only one.
Further attempts to close connections on the server side show the same
picture. I found that setting max_fails=0 solves the distribution problem.

Is this correct behavior? If so, how can I assure proper distribution of
websocket connections while using max_fails in such scenarios? Is there
any documentation for it?

---

The client/server code and the docker-compose file used to reproduce this
behavior are below. The websocket clients can be disconnected by a command
inside the backend container:

    ps aux | grep server.js | awk '{print $2}' | xargs kill -sHUP

# server.js

const WebSocket = require('ws');
const Url = require('url');
const Process = require('process');

console.log("Starting Node.js websocket server");
const wss = new WebSocket.Server({ port: 80 });

wss.on('connection', function connection(ws, request) {
    const uid = Url.parse(request.url, true).query.uid;
    ws.on('message', function incoming(message) {
        console.log('Received: %s', message);
    });
    ws.on('close', function close() {
        console.log('Disconnected: %s', uid)
    });
});

Process.on('SIGHUP', () => {
    console.log('Received SIGHUP');
    for (const client of wss.clients) client.terminate();
});

---

# client.js

const WebSocket = require('ws');
const UUID = require('uuid')

function client(uid) {
    const ws = new WebSocket('ws://balancer/?uid=' + uid);
    ws.on('open', function open() {
        ws.send(JSON.stringify({'id': uid}));
    });
    ws.on('close', function close() {
        setTimeout(client, 2, uid)
    });
}

for (let i = 0; i < 100; i++){
    uid = UUID.v4();
    client(uid)
}

---

# Dockerfile.node

FROM node
WORKDIR app
RUN npm install -y ws uuid
COPY *.js /app
CMD node server.js

---

# docker-compose.yaml

version: '3.4'
services:
  balancer:
    image: nginx:latest
    depends_on:
      - backend1
      - backend2
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
  backend1:
    build:
      dockerfile: Dockerfile.node
      context: .
  backend2:
    build:
      dockerfile: Dockerfile.node
      context: .
    command: bash -c "sleep 30 && node server.js"
  clients:
    build:
      dockerfile: Dockerfile.node
      context: .
    depends_on:
      - balancer
    command: node client.js

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292014,292014#msg-292014

From mdounin at mdounin.ru  Mon Jul 12 01:28:49 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 12 Jul 2021 04:28:49 +0300
Subject: Upstream gets stuck in disabled state for websocket load balancing
In-Reply-To: <3d603dbe1e142be13768a9de0a3e39b9.NginxMailingListEnglish@forum.nginx.org>
References: <3d603dbe1e142be13768a9de0a3e39b9.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hello!

On Fri, Jul 09, 2021 at 09:13:37AM -0400, Oleg Pisklov wrote:

[...]

> Then a strange thing happens. I expect new websocket connections to be
> distributed evenly between the two backends. But most of them land on
> backend1, as if backend2 were still disabled. Sometimes there is one
> client that connects to backend2, but it's the only one.
> Further attempts to close connections on the server side show the same
> picture. I found that setting max_fails=0 solves the distribution problem.
>
> Is this correct behavior? If so, how can I assure proper distribution of
> websocket connections while using max_fails in such scenarios? Is there
> any documentation for it?

Quoting nginx 1.1.6 changes:

    *) Change: if a server in an upstream failed, only one request will be
       sent to it after fail_timeout; the server will be considered alive if
       it will successfully respond to the request.

That is, only one request (a websocket connection in your case) after
fail_timeout to the failed backend is an expected behaviour.
Once this connection is successfully closed, nginx will mark the backend
as working properly and will start balancing new connections to it.

--
Maxim Dounin
http://mdounin.ru/

From happy.cerberus at gmail.com  Mon Jul 12 08:34:41 2021
From: happy.cerberus at gmail.com (Šimon Tóth)
Date: Mon, 12 Jul 2021 10:34:41 +0200
Subject: Using nginx as a HLS reverse proxy with Envoy?
Message-ID:

Hi,

I'm building a demonstration streaming system (for educational purposes -
teaching system design) and I'm trying to figure out whether Nginx can be
used as an HLS reverse proxy in conjunction with Envoy (for sticky load
balancing).

I have backend servers that convert RTMP into HLS. In front of that, I
want to have an autoscaling pool of reverse proxies that I load-balance to
using Envoy, based on the stream (so that the same stream ends up on the
same process if possible).

What I need from Nginx then is 2 things:

- Correctly proxy HLS streams.
- Report overload situations to Envoy, so that Envoy can bounce to a
  different replica if one gets overloaded.

Is this possible? Any links to documentation or examples would be greatly
appreciated as well.

Thanks in advance,
Simon Toth

From praveenssit at gmail.com  Mon Jul 12 12:15:23 2021
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Mon, 12 Jul 2021 17:45:23 +0530
Subject: Reg permanent redirect
Message-ID:

Hello,

Is there any way to permanently redirect abc.com/def to xyz.com? I have
tried return 301, but the context /def is being passed to xyz.com.

Please help.

Regards,

From francis at daoine.org  Mon Jul 12 12:33:51 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 12 Jul 2021 13:33:51 +0100
Subject: Reg permanent redirect
In-Reply-To:
References:
Message-ID: <20210712123351.GE11167@daoine.org>

On Mon, Jul 12, 2021 at 05:45:23PM +0530, Praveen Kumar K S wrote:

Hi there,

> Is there any way to permanently redirect abc.com/def to xyz.com? I have
> tried return 301, but the context /def is being passed to xyz.com.

    location = /def {
        return 301 http://xyz.com/;
    }

If that does not do what you want, can you show the output of

    curl -i http://abc.com/def

to see what the Location: header says.

	f
--
Francis Daly        francis at daoine.org
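[A small extension of the example above, in case requests below /def also
need to redirect; the exact-match location on its own only covers /def
itself. A sketch, with xyz.com standing in for the real target:]

# /def exactly
location = /def {
    return 301 http://xyz.com/;
}

# /def/anything -- redirect without passing the original path along
location ^~ /def/ {
    return 301 http://xyz.com/;
}
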
From nginx-forum at forum.nginx.org  Mon Jul 12 14:50:56 2021
From: nginx-forum at forum.nginx.org (rajatgoyal123)
Date: Mon, 12 Jul 2021 10:50:56 -0400
Subject: same connection usage in grpc-pass in nginx
Message-ID: <214af65f5af163baf88d46a502deca90.NginxMailingListEnglish@forum.nginx.org>

I am trying to set up an nginx reverse proxy server with dynamic upstream
configuration. The upstream servers are gRPC servers, so they run over
HTTP/2. Since HTTP/2 uses multiplexing and gRPC supports bi-directional
streaming, can nginx be configured in the following way?

a) Every upstream server, on start-up, adds itself to the upstream servers
by calling an nginx /api, and creates an HTTP/2 connection with nginx.

b) Every gRPC client that calls a gRPC API (via nginx) has nginx grpc_pass
to any of the upstream servers.

The ask is: for every request in step b, can nginx, while doing grpc_pass
to the upstream server, use the same HTTP/2 connection established in
step a? Is such a configuration possible in nginx?

When I tested with "keepalive 1;" in the upstream group, I can see that it
creates a new connection once for grpc_pass and reuses it (keepalive = 1),
but it does not use the connection created in step a.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292024,292024#msg-292024

From nginx-forum at forum.nginx.org  Tue Jul 13 04:57:32 2021
From: nginx-forum at forum.nginx.org (spaace)
Date: Tue, 13 Jul 2021 00:57:32 -0400
Subject: Limit the worker processes for a particular upstream server
Message-ID: <5c433a5758b54a5370369ec2b9bc25df.NginxMailingListEnglish@forum.nginx.org>

Hi,

Our deployment uses Lua scripts to add authentication and certain other
checks at the proxy. It does this by fetching the required configuration
from the upstream server and then refreshing it periodically. However,
this adds some memory to each worker process, and the refreshes also hit
the upstream server.

To avoid the overheads, I was wondering if there is a way to limit the
number of worker processes that can serve a single upstream server.

Thanks
Arun

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292026,292026#msg-292026

From maxim at nginx.com  Tue Jul 13 08:14:38 2021
From: maxim at nginx.com (Maxim Konovalov)
Date: Tue, 13 Jul 2021 11:14:38 +0300
Subject: QUIC and HTTP/3 roadmap blog post
Message-ID: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com>

Hi,

Liam Crilly just published a good summary of where we are with the QUIC
and HTTP/3 implementation for nginx, and of our short-term plans:

https://www.nginx.com/blog/our-roadmap-quic-http-3-support-nginx/

--
Maxim Konovalov
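[For readers following the thread below: the server-side setup under
discussion, along the lines shown in the nginx-quic documentation, looks
roughly like this. The port and ma values are illustrative; advertising
both the final "h3" token and the draft "h3-29" token in one Alt-Svc
header is standard RFC 7838 syntax:]

server {
    listen 443 http3 reuseport;  # UDP listener for QUIC+HTTP/3
    listen 443 ssl http2;        # TCP listener for HTTP/1.1 and HTTP/2

    ssl_protocols TLSv1.3;       # QUIC requires TLS 1.3

    # advertise both tokens, so draft-era clients can also upgrade;
    # ma= tells the client how long to cache the advertisement
    add_header Alt-Svc 'h3=":443"; ma=86400, h3-29=":443"; ma=86400';
}
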
With HTTP/3 (QUIC) > it's even more pronounced. > > From my point of view its the single biggest obstacle to the QUIC > upgrade. as a user. > > Regards, > Mathew > > On Tue, 13 Jul 2021 at 18:15, Maxim Konovalov wrote: > > > > Hi, > > > > Liam Crilly just published recently a good summary where we are with the > > QUIC and HTTP/3 implementation for nginx and our short term plans: > > > > https://www.nginx.com/blog/our-roadmap-quic-http-3-support-nginx/ > > > > -- > > Maxim Konovalov > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks & Regards, *Raminda Subhashana* Mobile : 0773482137 *Please consider the environmental impact of needlessly printing this e-mail.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwaraminda at googlemail.com Tue Jul 13 11:59:15 2021 From: iwaraminda at googlemail.com (Raminda Subashana) Date: Tue, 13 Jul 2021 17:29:15 +0530 Subject: QUIC and HTTP/3 roadmap blog post In-Reply-To: References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com> Message-ID: Hi Maxim, Just tested nginx-quic release and there is a performance issue. I compared it with Cloudflare quic experimental release which is based on nginx 1.16. It is almost 3 times slower than 1.16. Below config worked for me and it never advertised h3-29. if you have specific config file to test appreciate if you can share server { listen 443 ssl; # TCP listener for HTTP/1.1 listen 443 http3 reuseport; # UDP listener for QUIC+HTTP/3 ssl_protocols TLSv1.3; # QUIC requires TLS 1.3 ssl_certificate ssl/www.example.com.crt; ssl_certificate_key ssl/www.example.com.key; add_header Alt-Svc 'h3=":443"'; # Advertise that HTTP/3 is available add_header QUIC-Status $quic; # Sent when QUIC was used } On Tue, Jul 13, 2021 at 2:39 PM Raminda Subashana wrote: > Hi Maxim, > > Really interesting post, can I publish this on my personal blog, is it > legal? > > Thanks & Regards, > Raminda > > On Tue, Jul 13, 2021 at 2:25 PM Mathew Heard wrote: > >> Hi Maxim, >> >> Really interesting read. >> >> Do you have any plans for resolving the SIGHUP causes session closure >> issues that currently exist with nginx-quic? The closure of long lived >> connections has been a thorn in the side of people doing HTTP/1.1 web >> sockets (and probably HTTP/2 push) for many years. With HTTP/3 (QUIC) >> it's even more pronounced. >> >> From my point of view its the single biggest obstacle to the QUIC >> upgrade. as a user. 
>> >> Regards, >> Mathew >> >> On Tue, 13 Jul 2021 at 18:15, Maxim Konovalov wrote: >> > >> > Hi, >> > >> > Liam Crilly just published recently a good summary where we are with the >> > QUIC and HTTP/3 implementation for nginx and our short term plans: >> > >> > https://www.nginx.com/blog/our-roadmap-quic-http-3-support-nginx/ >> > >> > -- >> > Maxim Konovalov >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > -- > > Thanks & Regards, > > *Raminda Subhashana* > > Mobile : 0773482137 > > > > > > *Please consider the environmental impact of needlessly printing this > e-mail.* > -- Thanks & Regards, *Raminda Subhashana* Mobile : 0773482137 *Please consider the environmental impact of needlessly printing this e-mail.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.wanat at gmail.com Tue Jul 13 12:42:07 2021 From: marcin.wanat at gmail.com (Marcin Wanat) Date: Tue, 13 Jul 2021 14:42:07 +0200 Subject: QUIC and HTTP/3 roadmap blog post In-Reply-To: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com> References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com> Message-ID: Hi Maxim, does Nginx have plans to adopt BBR as congestion control when using QUIC ? Regards, Marcin Wanat On Tue, Jul 13, 2021 at 10:15 AM Maxim Konovalov wrote: > Hi, > > Liam Crilly just published recently a good summary where we are with the > QUIC and HTTP/3 implementation for nginx and our short term plans: > > https://www.nginx.com/blog/our-roadmap-quic-http-3-support-nginx/ > > -- > Maxim Konovalov > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Tue Jul 13 13:14:31 2021 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 13 Jul 2021 16:14:31 +0300 Subject: QUIC and HTTP/3 roadmap blog post In-Reply-To: References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com> Message-ID: On Tue, Jul 13, 2021 at 05:29:15PM +0530, Raminda Subashana wrote: > Hi Maxim, > > Just tested nginx-quic release and there is a performance issue. I compared > it with Cloudflare quic experimental release which is based on nginx 1.16. > > It is almost 3 times slower than 1.16. Below config worked for me and it > never advertised h3-29. if you have specific config file to test appreciate > if you can share > > server { > listen 443 ssl; # TCP listener for HTTP/1.1 > listen 443 http3 reuseport; # UDP listener for QUIC+HTTP/3 > > ssl_protocols TLSv1.3; # QUIC requires TLS 1.3 > ssl_certificate ssl/www.example.com.crt; > ssl_certificate_key ssl/www.example.com.key; > > add_header Alt-Svc 'h3=":443"'; # Advertise that HTTP/3 is available > add_header QUIC-Status $quic; # Sent when QUIC was used > } Hi Raminda, Can you please describe how do you measure performance? What clients do you use? Full nginx configuration would be nice to see also. Do you measure request time or maybe overall throughput or something else. Any details are appreciated. Please ensure that for perforamnce tests you use nginx built without debug, as it produces quite a lot messages if built in debug mode. 
From vl at nginx.com Tue Jul 13 13:18:04 2021 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 13 Jul 2021 16:18:04 +0300 Subject: QUIC and HTTP/3 roadmap blog post In-Reply-To: References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com> Message-ID: On Tue, Jul 13, 2021 at 06:55:14PM +1000, Mathew Heard wrote: > Hi Maxim, > > Really interesting read. > > Do you have any plans for resolving the SIGHUP causes session closure > issues that currently exist with nginx-quic? The closure of long lived > connections has been a thorn in the side of people doing HTTP/1.1 web > sockets (and probably HTTP/2 push) for many years. With HTTP/3 (QUIC) > it's even more pronounced. > > From my point of view its the single biggest obstacle to the QUIC > upgrade. as a user. > > Regards, > Mathew Hi Mathew, connections are handled in worker processes, and reload means running new worker processes, that don't have state for existing connections. QUIC doesn't change how nginx handles connections, so there are no specific plans to change it. Can you please elaborate how HTTP/3 makes things worse from your perspective? From me at mheard.com Tue Jul 13 13:37:49 2021 From: me at mheard.com (Mathew Heard) Date: Tue, 13 Jul 2021 23:37:49 +1000 Subject: QUIC and HTTP/3 roadmap blog post In-Reply-To: References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com> Message-ID: Hi Vladimir, Within the main users of HTTP/3 we are seeing an uptick in users doing long lived connections and users using previously niche features (such as server push) leading to an expectation of near indefinite session life (and websocket isnt yet available for http/3). Basically applications built to take full advantage of QUIC tend towards using larger, longer sessions as opposed to the traditional HTTP request & response. There are many drivers for this (modern app development libraries, PWA trend as well as more HTTP/3 related such as increase session establishment time). I'm no researcher in that area (just someone who sees the trend in his work) so I'll leave that there for now. I'm well aware of the rationale and the difficulties to implement such a thing in the current nginx architecture. I wouldn't say it's impossible, but difficult - yes certainly. I can think of a couple ways it could be architected especially now you have that ebpf packet router. Regards, Mathew On Tue, 13 Jul 2021 at 23:18, Vladimir Homutov wrote: > > On Tue, Jul 13, 2021 at 06:55:14PM +1000, Mathew Heard wrote: > > Hi Maxim, > > > > Really interesting read. > > > > Do you have any plans for resolving the SIGHUP causes session closure > > issues that currently exist with nginx-quic? The closure of long lived > > connections has been a thorn in the side of people doing HTTP/1.1 web > > sockets (and probably HTTP/2 push) for many years. With HTTP/3 (QUIC) > > it's even more pronounced. > > > > From my point of view its the single biggest obstacle to the QUIC > > upgrade. as a user. > > > > Regards, > > Mathew > > Hi Mathew, > > connections are handled in worker processes, and reload means running > new worker processes, that don't have state for existing connections. > QUIC doesn't change how nginx handles connections, so there are no > specific plans to change it. > > Can you please elaborate how HTTP/3 makes things worse from your > perspective? 
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From iwaraminda at googlemail.com Tue Jul 13 14:24:32 2021 From: iwaraminda at googlemail.com (Raminda Subashana) Date: Tue, 13 Jul 2021 19:54:32 +0530 Subject: QUIC and HTTP/3 roadmap blog post In-Reply-To: References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com> Message-ID: Hi Vladimir, Sure I'll share all the information. Thanks On Tue, Jul 13, 2021 at 6:44 PM Vladimir Homutov wrote: > On Tue, Jul 13, 2021 at 05:29:15PM +0530, Raminda Subashana wrote: > > Hi Maxim, > > > > Just tested nginx-quic release and there is a performance issue. I > compared > > it with Cloudflare quic experimental release which is based on nginx > 1.16. > > > > It is almost 3 times slower than 1.16. Below config worked for me and it > > never advertised h3-29. if you have specific config file to test > appreciate > > if you can share > > > > server { > > listen 443 ssl; # TCP listener for HTTP/1.1 > > listen 443 http3 reuseport; # UDP listener for QUIC+HTTP/3 > > > > ssl_protocols TLSv1.3; # QUIC requires TLS 1.3 > > ssl_certificate ssl/www.example.com.crt; > > ssl_certificate_key ssl/www.example.com.key; > > > > add_header Alt-Svc 'h3=":443"'; # Advertise that HTTP/3 is > available > > add_header QUIC-Status $quic; # Sent when QUIC was used > > } > > Hi Raminda, > > Can you please describe how do you measure performance? What clients do > you use? Full nginx configuration would be nice to see also. > Do you measure request time or maybe overall throughput or something > else. Any details are appreciated. > > Please ensure that for perforamnce tests you use nginx built without debug, > as it produces quite a lot messages if built in debug mode. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks & Regards, *Raminda Subhashana* Mobile : 0773482137 *Please consider the environmental impact of needlessly printing this e-mail.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From happy.cerberus at gmail.com Tue Jul 13 17:54:18 2021 From: happy.cerberus at gmail.com (=?UTF-8?B?xaBpbW9uIFTDs3Ro?=) Date: Tue, 13 Jul 2021 19:54:18 +0200 Subject: Using nginx as a HLS reverse proxy with Envoy? In-Reply-To: References: Message-ID: OK, after doing more research myself I'm pretty sure that nginx can't proxy HLS streams. It can proxy (pull) RTMP streams, however if I understand it correctly, these streams have to be statically configured in configs (which is not my use case). -- Simon Toth On Mon, Jul 12, 2021 at 10:34 AM ?imon T?th wrote: > Hi, > > I'm building a demonstration streaming system (for educational purposes - > teaching system design) and I'm trying to figure out whether Nginx can be > used as an HLS reverse proxy in conjunction with Envoy (for sticky load > balancing). > > I have the backend servers that convert RTMP into HLS. Now in front of > that, I want to have an autoscaling pool of reverse proxies that I have > load-balance to using Envoy based on the stream (so that the same stream > ends on the same process if possible). > > What I need from Nginx then is 2 things: > > - Correctly proxy HLS streams. > - Report overload situations to Envoy, so that envoy can bounce to a > different replica if one gets overloaded. > > Is this possible? Any links to documentation or examples would be greatly > appreciated as well. 
> > Thanks in advance, > Simon Toth > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Jul 13 19:16:25 2021 From: r at roze.lv (Reinis Rozitis) Date: Tue, 13 Jul 2021 22:16:25 +0300 Subject: Using nginx as a HLS reverse proxy with Envoy? In-Reply-To: References: Message-ID: <000c01d7781b$92dd3660$b897a320$@roze.lv> From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of ?imon T?th Sent: otrdiena, 2021. gada 13. j?lijs 20:54 To: nginx at nginx.org Subject: Re: Using nginx as a HLS reverse proxy with Envoy? OK, after doing more research myself I'm pretty sure that nginx can't proxy HLS streams. It can proxy (pull) RTMP streams, however if I understand it correctly, these streams have to be statically configured in configs (which is not my use case). -- Simon Toth On Mon, Jul 12, 2021 at 10:34 AM ?imon T?th > wrote: Hi, I'm building a demonstration streaming system (for educational purposes - teaching system design) and I'm trying to figure out whether Nginx can be used as an HLS reverse proxy in conjunction with Envoy (for sticky load balancing). I have the backend servers that convert RTMP into HLS. Now in front of that, I want to have an autoscaling pool of reverse proxies that I have load-balance to using Envoy based on the stream (so that the same stream ends on the same process if possible). What I need from Nginx then is 2 things: * Correctly proxy HLS streams. * Report overload situations to Envoy, so that envoy can bounce to a different replica if one gets overloaded. Is this possible? Any links to documentation or examples would be greatly appreciated as well. Thanks in advance, Simon Toth -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Jul 13 19:18:38 2021 From: r at roze.lv (Reinis Rozitis) Date: Tue, 13 Jul 2021 22:18:38 +0300 Subject: Using nginx as a HLS reverse proxy with Envoy? In-Reply-To: References: Message-ID: <001101d7781b$e21228d0$a6367a70$@roze.lv> > OK, after doing more research myself I'm pretty sure that nginx can't proxy HLS streams. There is nothing special about HLS it is just simple http requests which nginx can serve/proxy just fine. What were the issues you have encountered? rr From happy.cerberus at gmail.com Tue Jul 13 19:44:25 2021 From: happy.cerberus at gmail.com (=?UTF-8?B?xaBpbW9uIFTDs3Ro?=) Date: Tue, 13 Jul 2021 21:44:25 +0200 Subject: Using nginx as a HLS reverse proxy with Envoy? In-Reply-To: <001101d7781b$e21228d0$a6367a70$@roze.lv> References: <001101d7781b$e21228d0$a6367a70$@roze.lv> Message-ID: How would that work? >From the presentation on RTMP it looks like that nginx is just serving cached VODs instead of the actual stream. And even if it can actually proxy the requests (which I doubt), it can't be configured dynamically. On Tue, Jul 13, 2021 at 9:20 PM Reinis Rozitis wrote: > > OK, after doing more research myself I'm pretty sure that nginx can't > proxy HLS streams. > > There is nothing special about HLS it is just simple http requests which > nginx can serve/proxy just fine. > > What were the issues you have encountered? > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Jul 13 20:40:44 2021 From: r at roze.lv (Reinis Rozitis) Date: Tue, 13 Jul 2021 23:40:44 +0300 Subject: Using nginx as a HLS reverse proxy with Envoy? 
In-Reply-To: References: <001101d7781b$e21228d0$a6367a70$@roze.lv> Message-ID: <001c01d77827$5a479fa0$0ed6dee0$@roze.lv> > How would that work? > From the presentation on RTMP it looks like that nginx is just serving cached VODs instead of the actual stream. RTMP and HLS are two different streaming technologies. HLS just consists of a playlist (.m3u8) file (usually the player reads it by continuously making http requests) which points to video chunk files which again are served via simple http. This is why (contrary to rtmp) hls is easy to scale/cache and/or use via all kinds of CDNs. > And even if it can actually proxy the requests (which I doubt) It can. > , it can't be configured dynamically. As to "dynamically" depends on what are your needs. The commercial nginx plus version provides an API [1] for dynamic upstream configuration but you are not limited by it. For example you can use Lua code [2] to determine/set upstream in a very dynamic way. There are also some third party modules with such functionality. But again you didn't actually specify what is the issue? [1] https://docs.nginx.com/nginx/admin-guide/load-balancer/dynamic-configuration-api/ [2] https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md rr From happy.cerberus at gmail.com Tue Jul 13 21:18:57 2021 From: happy.cerberus at gmail.com (=?UTF-8?B?xaBpbW9uIFTDs3Ro?=) Date: Tue, 13 Jul 2021 23:18:57 +0200 Subject: Using nginx as a HLS reverse proxy with Envoy? In-Reply-To: <001c01d77827$5a479fa0$0ed6dee0$@roze.lv> References: <001101d7781b$e21228d0$a6367a70$@roze.lv> <001c01d77827$5a479fa0$0ed6dee0$@roze.lv> Message-ID: On Tue, Jul 13, 2021 at 10:41 PM Reinis Rozitis wrote: > > How would that work? > > From the presentation on RTMP it looks like that nginx is just serving > cached VODs instead of the actual stream. > > RTMP and HLS are two different streaming technologies. > HLS just consists of a playlist (.m3u8) file (usually the player reads it > by continuously making http requests) which points to video chunk files > which again are served via simple http. This is why (contrary to rtmp) hls > is easy to scale/cache and/or use via all kinds of CDNs. > Indeed, converting RTMP to HLS is something that I already have. This is the video, I was talking about: https://www.youtube.com/watch?v=Js1OlvRNsdI The streams read through different protocols are completely out of sync. I suspect it is because nginx is just serving stale cached chunks. > And even if it can actually proxy the requests (which I doubt) > > It can. How? The presentation does not cover that. And it can't be a simple HTTP, because as you said yourself, with HLS I would just end up with a cached playlist and bunch of useless VOD chunks that I don't want to ever serve. > > , it can't be configured dynamically. > > As to "dynamically" depends on what are your needs. > > The commercial nginx plus version provides an API [1] for dynamic upstream > configuration but you are not limited by it. > For example you can use Lua code [2] to determine/set upstream in a very > dynamic way. There are also some third party modules with such > functionality. > > But again you didn't actually specify what is the issue? > > > [1] > https://docs.nginx.com/nginx/admin-guide/load-balancer/dynamic-configuration-api/ > [2] > https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md Thanks for the links. -------------- next part -------------- An HTML attachment was scrubbed... 
From r at roze.lv  Wed Jul 14 06:15:27 2021
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 14 Jul 2021 09:15:27 +0300
Subject: Using nginx as a HLS reverse proxy with Envoy?
In-Reply-To: 
References: <001101d7781b$e21228d0$a6367a70$@roze.lv>
 <001c01d77827$5a479fa0$0ed6dee0$@roze.lv>
Message-ID: <000901d77877$a3ffa340$ebfee9c0$@roze.lv>

> How? The presentation does not cover that.

You shouldn't stick to a single youtube video.

> And it can't be a simple HTTP

It is. Even the name HLS stands for "HTTP Live Streaming".

> , because as you said yourself, with HLS I would just end up with a cached
> playlist and a bunch of useless VOD chunks that I don't want to ever serve.

As with any http objects, whether to cache or not is something the server
can specify (for example with no-cache headers: add_header 'Cache-Control'
'no-cache'; [1]), and it's on the player/client side to honour those.

No matter the stream setup (live stream, with or without history or
delay), the video chunks are just static files (written by the transcoder).

[1] http://nginx.org/en/docs/http/ngx_http_headers_module.html

rr

From happy.cerberus at gmail.com  Wed Jul 14 06:44:06 2021
From: happy.cerberus at gmail.com (Šimon Tóth)
Date: Wed, 14 Jul 2021 08:44:06 +0200
Subject: Using nginx as a HLS reverse proxy with Envoy?
In-Reply-To: <000901d77877$a3ffa340$ebfee9c0$@roze.lv>
References: <001101d7781b$e21228d0$a6367a70$@roze.lv>
 <001c01d77827$5a479fa0$0ed6dee0$@roze.lv>
 <000901d77877$a3ffa340$ebfee9c0$@roze.lv>
Message-ID: 

On Wed, Jul 14, 2021 at 8:15 AM Reinis Rozitis wrote:

> > How? The presentation does not cover that.
>
> You shouldn't stick to a single youtube video.

This is the only video that I found that covers HLS. I also read the
module documentation.

> > And it can't be a simple HTTP
>
> It is.
> Even the name HLS stands for "HTTP Live Streaming".
>
> > , because as you said yourself, with HLS I would just end up with a
> > cached playlist and a bunch of useless VOD chunks that I don't want to
> > ever serve.
>
> As with any http objects, whether to cache or not is something the server
> can specify (for example with no-cache headers: add_header
> 'Cache-Control' 'no-cache'; [1]), and it's on the player/client side to
> honour those.

OK, if I have an HLS stream with caching turned off, will nginx do the
correct thing? That is, serve the same stream (+/- a couple of RTT) to
each user and not cause interruptions of the stream when the chunks roll
over? Because I don't see how that is possible if it is treated as simple
HTTP.

From iwaraminda at googlemail.com  Wed Jul 14 10:39:45 2021
From: iwaraminda at googlemail.com (Raminda Subashana)
Date: Wed, 14 Jul 2021 16:09:45 +0530
Subject: QUIC and HTTP/3 roadmap blog post
In-Reply-To: 
References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com>
Message-ID: 

Hi Vladimir,

Please see the details below; I have also attached another detailed report
as a PDF. I tested with Magento 2.4.2, and the results below are based on
it.
PHP 7.4 on Ubuntu 20.04 LTS.

*Nginx 1.16.1 (Cloudflare Quic)*

Virtual Host Config:

upstream fastcgi_backend {
    server unix:/run/php/php7.4-fpm.sock;
}

server {
    listen 80;
    server_name xxxxx.com;
    set $MAGE_ROOT /var/www/html/app1/public_html;
    include /var/www/html/app1/public_html/nginx.conf.sample;

    listen 443 quic reuseport;
    listen 443 ssl http2;
    # listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/app1.wernos.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/app1.wernos.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    add_header alt-svc 'h3=":443"; ma=86400';
}

[result screenshots scrubbed]

*Nginx-quic*

Virtual Host config:

upstream fastcgi_backend {
    server unix:/run/php/php7.4-fpm.sock;
}

server {
    listen 80;
    server_name xxxxxx.com;
    set $MAGE_ROOT /var/www/html/app1/public_html;
    include /var/www/html/app1/public_html/nginx.conf.sample;

    listen 443 ssl;              # TCP listener for HTTP/1.1
    listen 443 http3 reuseport;  # UDP listener for QUIC+HTTP/3
    ssl_protocols TLSv1.3;       # QUIC requires TLS 1.3
    # listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/app2.wernos.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/app2.wernos.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    add_header Alt-Svc 'h3=":443"'; # Advertise that HTTP/3 is available
    add_header QUIC-Status $quic;   # Sent when QUIC was used
}

[result screenshots scrubbed]

The nginx.conf is attached as well.

On Tue, Jul 13, 2021 at 7:54 PM Raminda Subashana wrote:

> Hi Vladimir,
>
> Sure, I'll share all the information.
>
> Thanks
>
> On Tue, Jul 13, 2021 at 6:44 PM Vladimir Homutov wrote:
>
>> On Tue, Jul 13, 2021 at 05:29:15PM +0530, Raminda Subashana wrote:
>> > Hi Maxim,
>> >
>> > Just tested the nginx-quic release and there is a performance issue. I
>> > compared it with the Cloudflare quic experimental release, which is
>> > based on nginx 1.16.
>> >
>> > It is almost 3 times slower than 1.16. The config below worked for me
>> > and it never advertised h3-29. If you have a specific config file to
>> > test, I'd appreciate it if you can share it.
>> >
>> > server {
>> >     listen 443 ssl;              # TCP listener for HTTP/1.1
>> >     listen 443 http3 reuseport;  # UDP listener for QUIC+HTTP/3
>> >
>> >     ssl_protocols TLSv1.3;       # QUIC requires TLS 1.3
>> >     ssl_certificate ssl/www.example.com.crt;
>> >     ssl_certificate_key ssl/www.example.com.key;
>> >
>> >     add_header Alt-Svc 'h3=":443"'; # Advertise that HTTP/3 is available
>> >     add_header QUIC-Status $quic;   # Sent when QUIC was used
>> > }
>>
>> Hi Raminda,
>>
>> Can you please describe how you measure performance? What clients do you
>> use? The full nginx configuration would be nice to see also.
>> Do you measure request time, or maybe overall throughput, or something
>> else? Any details are appreciated.
>>
>> Please ensure that for performance tests you use nginx built without
>> debug,
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx

--
Thanks & Regards,

*Raminda Subhashana*

Mobile : 0773482137

*Please consider the environmental impact of needlessly printing this
e-mail.*

[attachments scrubbed: four PNG screenshots, nginx.conf,
nginx-quic_compressed.pdf, nginx-1.16.1_compressed.pdf]

From vl at nginx.com  Thu Jul 15 08:10:10 2021
From: vl at nginx.com (Vladimir Homutov)
Date: Thu, 15 Jul 2021 11:10:10 +0300
Subject: QUIC and HTTP/3 roadmap blog post
In-Reply-To: 
References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com>
Message-ID: <5c2ef619-8b40-0ca6-0662-8b30292f9ade@nginx.com>

13.07.2021 15:42, Marcin Wanat writes:
> Hi Maxim,
>
> does Nginx have plans to adopt BBR as congestion control when using QUIC?
>
> Regards,
> Marcin Wanat

Hi Marcin Wanat,

Short-term, there are no such plans. We still have plenty of things to do.
Currently for congestion we use what is described in RFC 9002. There are
no objections in general to the introduction of other algorithms. Any
feedback with real statistics on how we behave under different
circumstances will be useful.

Thank you for the question!

From vl at nginx.com  Thu Jul 15 08:24:33 2021
From: vl at nginx.com (Vladimir Homutov)
Date: Thu, 15 Jul 2021 11:24:33 +0300
Subject: QUIC and HTTP/3 roadmap blog post
In-Reply-To: 
References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com>
Message-ID: 

14.07.2021 13:39, Raminda Subashana writes:
> Hi Vladimir,
>
> Please see the details below; I have also attached another detailed
> report as a PDF. I tested with Magento 2.4.2, and the results below are
> based on it. PHP 7.4 on Ubuntu 20.04 LTS.

Hi Raminda,

thank you for the feedback!

Can you please send the full nginx config (to produce it, run 'nginx -T')
and the nginx configure options ('nginx -V')?

It would also be interesting to see results of vanilla nginx with https
(and TLS 1.3) as a baseline.

What was the request used for testing? Is it a request for some static
file? Of what size?

Thank you.
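For such a comparison, the vanilla https baseline could be a sketch as
simple as the following (server_name, certificate paths and root are
placeholders, not taken from this thread):

    server {
        listen 443 ssl http2;    # drop "http2" for an HTTP/1.1-only baseline
        server_name example.com;

        ssl_protocols TLSv1.3;   # match the QUIC setup, which requires TLS 1.3
        ssl_certificate     /etc/ssl/example.com/fullchain.pem;
        ssl_certificate_key /etc/ssl/example.com/privkey.pem;

        root /var/www/html;
    }

Testing the same request against this block, the http3 block, and the
Cloudflare build keeps everything except the transport identical.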
From nginx-forum at forum.nginx.org  Fri Jul 16 02:02:05 2021
From: nginx-forum at forum.nginx.org (drew571)
Date: Thu, 15 Jul 2021 22:02:05 -0400
Subject: help me understand NGINX
Message-ID: <500660f262479ad3ed4f6e8acfda3031.NginxMailingListEnglish@forum.nginx.org>

Sorry, but I'm new to NGINX and I have a few questions. Please bear with
me.

My understanding of NGINX is that it allows me to keep just port 443 open,
and NGINX will forward the traffic to the correct IP address and port
inside my network based on the domain used to reach my NGINX instance. Do
I have that right?

Right now I have NGINX set to forward a duckdns domain to my Home
Assistant instance. NGINX is actually installed on my Home Assistant as an
add-on.

I also have another domain, a synology.me domain that I set up on my
Synology server, which forwards to my Synology logon screen. NGINX is
getting me to the login screen, but I am getting the message that it is
not secure. How would I go about fixing this?

Likewise, I would like to set up another domain name that would get me to
my router login. Obviously I know what IP and port my router is on, and I
set up an additional domain with duckdns. I just keep getting internal
errors when I hit save on my new proxy host in NGINX.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292066,292066#msg-292066

From marcel.kocisek at gmail.com  Fri Jul 16 06:38:39 2021
From: marcel.kocisek at gmail.com (Marcel Kočíšek)
Date: Fri, 16 Jul 2021 08:38:39 +0200
Subject: Fwd: NGINX is sending for .json file text/html Content type
In-Reply-To: 
References: 
Message-ID: 

Stack overflow:
https://stackoverflow.com/questions/68362629/nginx-is-sending-for-json-file-text-html-conent-type

I have a problem with my nginx server in version nginx:1.19-alpine running
on Docker. Static files with the .json extension are served with mime type
text/html.

Request:

curl 'example.json' -H 'Accept: application/json' -H 'Accept-Language:
en-US,en;q=0.5' --compressed -H 'Referer: http://192.168.16.201/' -H
'X-Requested-With: XMLHttpRequest'

Response:

HTTP/1.1 200 OK
Server: nginx
Date: Tue, 13 Jul 2021 12:23:51 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 14746
Connection: keep-alive
Content-Encoding: gzip
Cache-Control: no-store
Accept-Ranges: bytes
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
Last-Modified: Thu, 01 Jan 1970 00:00:01 GMT

mime.types are configured correctly.

--
*Marcel Kocisek*

From vl at nginx.com  Fri Jul 16 10:49:58 2021
From: vl at nginx.com (Vladimir Homutov)
Date: Fri, 16 Jul 2021 13:49:58 +0300
Subject: QUIC and HTTP/3 roadmap blog post
In-Reply-To: 
References: <0a3d4771-ff1b-756a-3830-fd7b85e94548@nginx.com>
Message-ID: 

On Tue, Jul 13, 2021 at 05:29:15PM +0530, Raminda Subashana wrote:
> Hi Maxim,
>
> Just tested the nginx-quic release and there is a performance issue. I
> compared it with the Cloudflare quic experimental release, which is based
> on nginx 1.16.
>
> It is almost 3 times slower than 1.16. The config below worked for me and
> it never advertised h3-29.
> If you have a specific config file to test, I'd appreciate it if you can
> share it.

Hello Raminda,

I've just looked at your results (in your letter of 14/07 with the PDFs
attached), and here is a summary:

---------+--------------+-------------+
 metric  | nginx-1.16.1 |  nginx-quic |
---------+--------------+-------------+
         |              |             |
avg rps  |     25       |     25      |
max rps  |     80       |     61      |
         |              |             |
avg resp |    564       |    597     |
95% resp |    570       |    591     |
max resp |   1550       |   1342     |
         |              |             |
---------+--------------+-------------+
         |              |             |
 FCP*    |    0.4 s     |    0.6 s    |
 SI      |    0.8 s     |    3.7 s    |
 LCP     |    0.4 s     |    0.9 s    |
 TTI     |    0.5 s     |    1.9 s    |
 TBT     |    0 ms      |    0 ms     |
 CLS     |    0.016     |    0.015    |
         |              |             |
 Rx      |   240.973    |   240.489   |
 Tx      |   388.72     |   388.524   |
---------+--------------+-------------+

* First Contentful Paint, Speed Index, Largest Contentful Paint,
  Time To Interactive, Total Blocking Time, Cumulative Layout Shift

Looking at it, I don't see any real difference, except in metrics related
to rendering, like the synthetic 'Speed Index'. You may want to dive into
the details of how your application interacts with the server and find out
what happens, if such results are repeatable. Maybe some difference in the
HTTP/3 implementation affects it, but I have no idea how this index is
calculated.

Also, 25 rps is a really low load, unless your system is a very slow
machine. What are the parameters of your machine?

Finally, I'd like to know how you managed to get QUIC HTTP/3 support in
k6.io? I don't see it in the open source version. Is it some dev branch?

From mdounin at mdounin.ru  Fri Jul 16 16:01:36 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 16 Jul 2021 19:01:36 +0300
Subject: Fwd: NGINX is sending for .json file text/html Content type
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Fri, Jul 16, 2021 at 08:38:39AM +0200, Marcel Kočíšek wrote:

> Stack overflow:
>
> https://stackoverflow.com/questions/68362629/nginx-is-sending-for-json-file-text-html-conent-type
>
> I have a problem with my nginx server in version nginx:1.19-alpine
> running on Docker.
>
> Static files with the .json extension are served with mime type text/html.
>
> Request:
>
> curl 'example.json' -H 'Accept: application/json' -H 'Accept-Language:
> en-US,en;q=0.5' --compressed -H 'Referer: http://192.168.16.201/' -H
> 'X-Requested-With: XMLHttpRequest'
>
> Response:
>
> HTTP/1.1 200 OK
> Server: nginx
> Date: Tue, 13 Jul 2021 12:23:51 GMT
> Content-Type: text/html;charset=utf-8
> Content-Length: 14746
> Connection: keep-alive
> Content-Encoding: gzip
> Cache-Control: no-store
> Accept-Ranges: bytes
> Vary: Origin
> Vary: Access-Control-Request-Method
> Vary: Access-Control-Request-Headers
> Last-Modified: Thu, 01 Jan 1970 00:00:01 GMT
>
> mime.types are configured correctly.

Two possible answers I can think of:

1. The MIME types aren't configured correctly. Check your nginx
configuration, notably whether "mime.types" is actually included in the
configuration and not overwritten at a different level with something like
"types {}" (see http://nginx.org/r/types). Just in case, the "nginx -T"
output might be helpful to see the actual nginx configuration.

2. The response is returned by a backend server rather than nginx. The
various "Vary" headers, as well as the missing space before "charset=" in
the "Content-Type", suggest this might be the case.

Hope this helps.

--
Maxim Dounin
http://mdounin.ru/
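To illustrate the first of those two cases, a sketch of a correct setup
and of the kind of override to look for (the location path is made up for
illustration):

    http {
        include       mime.types;   # the stock file maps "json" to
                                    # application/json
        default_type  application/octet-stream;
        ...
    }

    # the kind of override to look for: a types block at a lower level
    # replaces the whole inherited extension map, so with the empty block
    # below every file under the location falls back to default_type
    location /broken-example/ {
        types { }
    }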
From nginx-forum at forum.nginx.org  Fri Jul 16 18:54:29 2021
From: nginx-forum at forum.nginx.org (mrgonza78)
Date: Fri, 16 Jul 2021 14:54:29 -0400
Subject: nginx taking too long to respond
Message-ID: <821df72409da30b05a92a5b627d987a8.NginxMailingListEnglish@forum.nginx.org>

I have an nginx used mainly as a reverse proxy for a couple of upstream
services. This nginx has a simple endpoint used for health checks:

location /ping { return 200 '{"ping":"successful"}'; }

The problem I'm having is that this ping takes too long to be answered:

$ cat /proc/loadavg; date ; httpstat localhost/ping?foo=bar
2.93 1.98 1.94 8/433 16725
Thu Jul 15 15:25:08 UTC 2021
Connected to 127.0.0.1:80 from 127.0.0.1:42946

HTTP/1.1 200 OK
Date: Thu, 15 Jul 2021 15:26:24 GMT
X-Request-ID: b8d276b0b3828113cfee3bf2daa01293

DNS Lookup   TCP Connection   Server Processing   Content Transfer
[    4ms    |      0ms       |      76032ms      |       0ms       ]
            namelookup:4ms
            connect:4ms
            starttransfer:76036ms
            total:76036ms

That ^ is telling me that the average load is low at the time of the
request (2.93 as the 1m average load for an 8-core server is OK).

Curl/httpstat initiated the request at 15:25:08 and the response was
obtained at 15:26:24. The connection was established fast and the request
sent; then it took 76s for the server to respond.

If I look at the access log for this ping, I see "req_time":"0.000" (this
is the $request_time variable).

{"t":"2021-07-15T15:26:24+00:00","id":"b8d276b0b3828113cfee3bf2daa01293","cid":"18581172","pid":"13631","host":"localhost","req":"GET /ping?foo=bar HTTP/1.1","scheme":"","status":"200","req_time":"0.000","body_sent":"21","bytes_sent":"373","content_length":"","request_length":"85","stats":"","upstream":{"status":"","sent":"","received":"","addr":"","conn_time":"","resp_time":""},"client":{"id":"#","agent":"curl/7.58.0","addr":",127.0.0.1:42946"},"limit_status":{"conn":"","req":""}}

This is the access log format in case anybody wonders what the rest of the
values are:

'{"t":"$time_iso8601","id":"$ring_request_id","cid":"$connection","pid":"$pid","host":"$http_host","req":"$request","scheme":"$http_x_forwarded_proto","status":"$status","req_time":"$request_time","body_sent":"$body_bytes_sent","bytes_sent":"$bytes_sent","content_length":"$content_length","request_length":"$request_length","stats":"$location_tag","upstream":{"status":"$upstream_status","sent":"$upstream_bytes_sent","received":"$upstream_bytes_received","addr":"$upstream_addr","conn_time":"$upstream_connect_time","resp_time":"$upstream_response_time"},"client":{"id":"$http_x_auth_appid$http_x_ringdevicetype#$remote_user$http_x_auth_userid","agent":"$http_user_agent","addr":"$http_x_forwarded_for,$remote_addr:$remote_port"},"limit_status":{"conn":"$limit_conn_status","req":"$limit_req_status"}}';

My question is: where could nginx have spent these 76s if the request just
took 0s to be processed and responded to?

Something special to mention is that the server is timing out a lot of
connections with the upstreams at that moment as well: we see a lot of
"upstream timed out (110: Connection timed out) while reading response
header from upstream" and "upstream server temporarily disabled while
reading response header from upstream".

So, these two are related; what I can't see is why upstream timeouts would
lead to a /ping taking 76s to be picked up and answered when both the cpu
and load are low/acceptable.

Any idea?
Thanks,
Alejandro

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292073,292073#msg-292073

From mdounin at mdounin.ru  Sat Jul 17 04:33:55 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 17 Jul 2021 07:33:55 +0300
Subject: nginx taking too long to respond
In-Reply-To: <821df72409da30b05a92a5b627d987a8.NginxMailingListEnglish@forum.nginx.org>
References: <821df72409da30b05a92a5b627d987a8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello!

On Fri, Jul 16, 2021 at 02:54:29PM -0400, mrgonza78 wrote:

> I have an nginx used mainly as a reverse proxy for a couple of upstream
> services. This nginx has a simple endpoint used for health checks:
>
> location /ping { return 200 '{"ping":"successful"}'; }
>
> The problem I'm having is that this ping takes too long to be answered:
>
> $ cat /proc/loadavg; date ; httpstat localhost/ping?foo=bar
> 2.93 1.98 1.94 8/433 16725
> Thu Jul 15 15:25:08 UTC 2021
> Connected to 127.0.0.1:80 from 127.0.0.1:42946
>
> HTTP/1.1 200 OK
> Date: Thu, 15 Jul 2021 15:26:24 GMT
> X-Request-ID: b8d276b0b3828113cfee3bf2daa01293
>
> DNS Lookup   TCP Connection   Server Processing   Content Transfer
> [    4ms    |      0ms       |      76032ms      |       0ms       ]
>             namelookup:4ms
>             connect:4ms
>             starttransfer:76036ms
>             total:76036ms
>
> That ^ is telling me that the average load is low at the time of the
> request (2.93 as the 1m average load for an 8-core server is OK).
>
> Curl/httpstat initiated the request at 15:25:08 and the response was
> obtained at 15:26:24. The connection was established fast and the request
> sent; then it took 76s for the server to respond.
>
> If I look at the access log for this ping, I see "req_time":"0.000" (this
> is the $request_time variable).
>
> {"t":"2021-07-15T15:26:24+00:00","id":"b8d276b0b3828113cfee3bf2daa01293","cid":"18581172","pid":"13631","host":"localhost","req":"GET /ping?foo=bar HTTP/1.1","scheme":"","status":"200","req_time":"0.000","body_sent":"21","bytes_sent":"373","content_length":"","request_length":"85","stats":"","upstream":{"status":"","sent":"","received":"","addr":"","conn_time":"","resp_time":""},"client":{"id":"#","agent":"curl/7.58.0","addr":",127.0.0.1:42946"},"limit_status":{"conn":"","req":""}}
>
> This is the access log format in case anybody wonders what the rest of
> the values are:
>
> '{"t":"$time_iso8601","id":"$ring_request_id","cid":"$connection","pid":"$pid","host":"$http_host","req":"$request","scheme":"$http_x_forwarded_proto","status":"$status","req_time":"$request_time","body_sent":"$body_bytes_sent","bytes_sent":"$bytes_sent","content_length":"$content_length","request_length":"$request_length","stats":"$location_tag","upstream":{"status":"$upstream_status","sent":"$upstream_bytes_sent","received":"$upstream_bytes_received","addr":"$upstream_addr","conn_time":"$upstream_connect_time","resp_time":"$upstream_response_time"},"client":{"id":"$http_x_auth_appid$http_x_ringdevicetype#$remote_user$http_x_auth_userid","agent":"$http_user_agent","addr":"$http_x_forwarded_for,$remote_addr:$remote_port"},"limit_status":{"conn":"$limit_conn_status","req":"$limit_req_status"}}';
>
> My question is: where could nginx have spent these 76s if the request
> just took 0s to be processed and responded to?
>
> Something special to mention is that the server is timing out a lot of
> connections with the upstreams at that moment as well: we see a lot of
> "upstream timed out (110: Connection timed out) while reading response
> header from upstream" and "upstream server temporarily disabled while
> reading response header from upstream".
>
> So, these two are related; what I can't see is why upstream timeouts
> would lead to a /ping taking 76s to be picked up and answered when both
> the cpu and load are low/acceptable.

Most likely the nginx worker process was blocked on something, and wasn't
able to handle requests due to this. It processed the request as soon as
it was unblocked.

Typical cases which can cause blocking include: serving huge files over
fast links with sendfile without sendfile_max_chunk, serving files from
NFS volumes, badly-written embedded Perl code which uses potentially
long-running operations such as connecting to other servers or databases,
and badly-written 3rd party modules which use potentially long-running
operations.

The 75s time might indicate that the culprit is a blocking connect()
somewhere, so I would recommend checking for 3rd party modules and/or
embedded Perl code first.

--
Maxim Dounin
http://mdounin.ru/
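As a footnote to the first of those typical cases, the corresponding
mitigation is a one-directive sketch (the 512k value is an arbitrary
example, not a recommendation from this thread):

    http {
        sendfile           on;
        # cap how much a single sendfile() call may transfer, so that one
        # fast client downloading a huge file cannot monopolise the worker
        sendfile_max_chunk 512k;
    }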
From nginx-forum at forum.nginx.org  Mon Jul 19 12:25:39 2021
From: nginx-forum at forum.nginx.org (mrgonza78)
Date: Mon, 19 Jul 2021 08:25:39 -0400
Subject: nginx taking too long to respond
In-Reply-To: 
References: 
Message-ID: <09c11052208ef36c4d26bbc726fcba3d.NginxMailingListEnglish@forum.nginx.org>

Thanks for the response Maxim, I appreciate it.

In our case we're not using perl modules and we're not serving files
either. We're using a bunch of known lua/resty libraries (we're using
openresty), but AFAIK all I/O in Lua is non-blocking.

On the other hand, that 75s delay in processing was just the example I
pasted; we saw cases where it took almost 3m to respond.

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292073,292076#msg-292076

From me at mheard.com  Mon Jul 19 12:31:18 2021
From: me at mheard.com (Mathew Heard)
Date: Mon, 19 Jul 2021 22:31:18 +1000
Subject: nginx taking too long to respond
In-Reply-To: <09c11052208ef36c4d26bbc726fcba3d.NginxMailingListEnglish@forum.nginx.org>
References: <09c11052208ef36c4d26bbc726fcba3d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Alejandro,

Not all I/O in Lua is non-blocking. In fact, many libraries for file I/O
are blocking; I would check that. Also, technically the engine's (Lua
script) processing itself is blocking, and that can also be an issue if
there is heavy processing.

Hope that helps,
Mathew

On Mon, 19 Jul 2021, 10:26 pm mrgonza78, wrote:

> Thanks for the response Maxim, I appreciate it.
>
> In our case we're not using perl modules and we're not serving files
> either. We're using a bunch of known lua/resty libraries (we're using
> openresty), but AFAIK all I/O in Lua is non-blocking.
>
> On the other hand, that 75s delay in processing was just the example I
> pasted; we saw cases where it took almost 3m to respond.
>
> Thanks
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,292073,292076#msg-292076
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From awani227 at zohomail.in  Tue Jul 20 08:35:41 2021
From: awani227 at zohomail.in (awani227)
Date: Tue, 20 Jul 2021 14:05:41 +0530
Subject: Need some guidance for setting upstream logs in NGINX TCP Load Balancer
Message-ID: <17ac30e72b9.31af7fd810786.6065747769807734477@zohomail.in>

Hi,

I'm using 'nginx/1.21.1' for load balancing TCP connections for DBs like
Teradata, Netezza, Oracle and MySQL. I'm using the stream directive for it
as per the guidance at
'https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/'.

Following is my stream configuration (/etc/nginx/nginx.conf):

stream {

    log_format timed_combined '$remote_addr - [$time_local] $session_time $msec $status $protocol ua="$upstream_addr" "$upstream_connect_time" "$upstream_response_time"';
    access_log /var/log/nginx/access.log timed_combined;

    upstream stream_backend_mysql {
        hash $remote_addr;
        server 192.168.122.251:3306;
        server 192.168.122.252:3306;
    }

    server {
        listen 3306;
        proxy_pass stream_backend_mysql;
        proxy_connect_timeout 1s;
    }
}

I'm able to establish a connection for mysql in this setup, but when I add
the log_format directive, nginx gives the following error:

'nginx: [emerg] unknown "upstream_response_time" variable'

My motive is to get metrics for this nginx tcp load balancer (like
requests per second, number of active connections). Please guide me on
this as I'm a newbie in this field. I'm attaching my nginx.conf for your
reference with this mail.

Looking forward to your response. Thank you in anticipation!

Regards,
Akki

From ruv.donita at gmail.com  Tue Jul 20 10:39:51 2021
From: ruv.donita at gmail.com (Dorin RuV)
Date: Tue, 20 Jul 2021 12:39:51 +0200
Subject: Embedded variables from ngx_http_core_module and websocket connections
Message-ID: 

Hi everybody,

I'm currently having an issue with nginx which I cannot get to the bottom
of.

I'm using nginx as a reverse proxy/load balancer in front of Kubernetes.
I'm trying to set up Websocket connections with an app running in
Kubernetes, but there are some problems. I have followed the "more
sophisticated" example from https://nginx.org/en/docs/http/websocket.html.

The configuration more or less looks like:

"http {

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }   # building the connection_upgrade variable based on $http_upgrade
    ......

    server {
        listen 443 ssl http2;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    ....
"

There is also a server { listen 80 } directive there which simply
redirects to https.

If I configure "proxy_set_header Upgrade Websocket" and "Connection
Upgrade", everything works as intended. It seems though that the
$http_upgrade variable is seen as empty, even though tcpdump confirms that
the Upgrade header is correctly sent to nginx by the client request. Can
somebody please point me towards what could reset this variable or why it
is unavailable? I'm thinking some scope issue, but I have no idea how to
debug.

Thank you!

From mdounin at mdounin.ru  Tue Jul 20 12:36:10 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 20 Jul 2021 15:36:10 +0300
Subject: Need some guidance for setting upstream logs in NGINX TCP Load Balancer
In-Reply-To: <17ac30e72b9.31af7fd810786.6065747769807734477@zohomail.in>
References: <17ac30e72b9.31af7fd810786.6065747769807734477@zohomail.in>
Message-ID: 

Hello!

On Tue, Jul 20, 2021 at 02:05:41PM +0530, awani227 wrote:

[...]
> stream {
>
>     log_format timed_combined '$remote_addr - [$time_local] $session_time $msec $status $protocol ua="$upstream_addr" "$upstream_connect_time" "$upstream_response_time"';

[...]

> I'm able to establish a connection for mysql in this setup, but when I
> add the log_format directive, nginx gives the following error:
>
> 'nginx: [emerg] unknown "upstream_response_time" variable'
>
> My motive is to get metrics for this nginx tcp load balancer (like
> requests per second, number of active connections).

In the stream load balancing there are no requests or responses nginx
knows about. Rather, there are connections (or sessions). As such, there
is no $upstream_response_time variable, as well as other request- and
response-related variables which are available in http load balancing.

Instead, consider using appropriate connection-related variables, such as
$upstream_connect_time, $upstream_first_byte_time, and
$upstream_session_time. The full list of the $upstream_* variables can be
found here:

https://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#variables

Hope this helps.

--
Maxim Dounin
http://mdounin.ru/
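Put together, a corrected log_format for the configuration above might
look like the following sketch. It keeps Akki's fields and only swaps the
unknown $upstream_response_time for the documented connection-level
variables:

    stream {
        # $upstream_response_time is an http-module variable and is
        # unknown in the stream context; log connection timings instead
        log_format timed_combined '$remote_addr - [$time_local] '
                                  '$session_time $msec $status $protocol '
                                  'ua="$upstream_addr" '
                                  'connect="$upstream_connect_time" '
                                  'first_byte="$upstream_first_byte_time" '
                                  'session="$upstream_session_time"';

        access_log /var/log/nginx/access.log timed_combined;
    }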
> I'm trying to set up Websocket connections with an app running in > Kubernetes, but there are some problems. I have followed the "more > sophisticated" example from https://nginx.org/en/docs/http/websocket.html. > > The configuration more or less looks like: > > "http { > > map $http_upgrade $connection_upgrade { > default upgrade; > '' close; > } # building the connection_upgrade variable based on $http_upgrade > ...... > > > server{ > listen 443 ssl http2; > location / { > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection $connection_upgrade; > } > .... > " > There is also a server { listen 80 } directive there which simply redirects > to https. > > If I configure "proxy_set_header Upgrade Websocket" and "Connection > Upgrade", everything works as intended. It seems though that the > $http_upgrade variable is seen as empty, even though tcpdump confirms the > fact that the Upgrade header is correctly sent to Nginx by the client > request. Can somebody please point me towards what could reset this > variable or why is it unavailable? I'm thinking some scope issues but I > have no idea how to debug. Is there a chance to use debug log feature, https://nginx.org/en/docs/debugging_log.html, and reproduce the issue with "working" and "non-working" configuration files. -- Sergey Osokin From nginx-forum at forum.nginx.org Wed Jul 21 11:12:19 2021 From: nginx-forum at forum.nginx.org (kay) Date: Wed, 21 Jul 2021 07:12:19 -0400 Subject: Passing the original client EHLO to the SMTP backend Message-ID: <555835cb71e90a2a74cfa93179383386.NginxMailingListEnglish@forum.nginx.org> Hi, Currently only XCLIENT protocol supports passing "EHLO or HELO, as passed by the client" - http://nginx.org/en/docs/mail/ngx_mail_proxy_module.html#xclient I'm using PROXY protocol and nginx sends the hostname, specified in "server_name". Is it possible to pass the real client hostname to SMTP backend? Regards, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292085,292085#msg-292085 From ruv.donita at gmail.com Wed Jul 21 13:47:26 2021 From: ruv.donita at gmail.com (Dorin RuV) Date: Wed, 21 Jul 2021 15:47:26 +0200 Subject: Embedded variables from ngx_http_core_module and websocket connections In-Reply-To: References: Message-ID: Hi Sergey, Thank you for lending a hand! I have enabled debug and here are two different calls to the backend: 1. 
It worked (The nginx config contains a hard-coded "Connection Upgrade" and "Upgrade Websocket" proxy_set_headers) "2021/07/21 13:36:43 [debug] 1#1: bind() 0.0.0.0:8060 #6 2021/07/21 13:36:43 [debug] 1#1: bind() 0.0.0.0:8063 #7 2021/07/21 13:36:43 [notice] 1#1: using the "epoll" event method 2021/07/21 13:36:43 [debug] 1#1: counter: 00007F9A0E809080, 1 2021/07/21 13:36:43 [notice] 1#1: nginx/1.18.0 2021/07/21 13:36:43 [notice] 1#1: OS: Linux 4.14.239-1626564138 2021/07/21 13:36:43 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576 2021/07/21 13:36:43 [debug] 1#1: write: 8, 00007FFC29515792, 2, 0 2021/07/21 13:36:43 [debug] 1#1: setproctitle: "nginx: master process nginx-debug -c /usr/local/nginx/conf/nginx.conf -g daemon off;" 2021/07/21 13:36:43 [notice] 1#1: start worker processes 2021/07/21 13:36:43 [debug] 1#1: channel 3:8 2021/07/21 13:36:43 [notice] 1#1: start worker process 8 2021/07/21 13:36:43 [debug] 1#1: channel 9:10 2021/07/21 13:36:43 [debug] 8#8: add cleanup: 00007F9A0E80F8A8 2021/07/21 13:36:43 [debug] 8#8: malloc: 00007F9A0E89B090:8 2021/07/21 13:36:43 [notice] 1#1: start worker process 9 2021/07/21 13:36:43 [debug] 1#1: pass channel s:1 pid:9 fd:9 to s:0 pid:8 fd:3 2021/07/21 13:36:43 [debug] 1#1: channel 11:12 2021/07/21 13:36:43 [debug] 9#9: add cleanup: 00007F9A0E80F8A8 2021/07/21 13:36:43 [debug] 9#9: malloc: 00007F9A0E89B090:8 2021/07/21 13:36:43 [notice] 1#1: start worker process 10 2021/07/21 13:36:43 [debug] 1#1: pass channel s:2 pid:10 fd:11 to s:0 pid:8 fd:3 2021/07/21 13:36:43 [debug] 8#8: notify eventfd: 10 2021/07/21 13:36:43 [debug] 1#1: pass channel s:2 pid:10 fd:11 to s:1 pid:9 fd:9 2021/07/21 13:36:43 [debug] 8#8: eventfd: 11 2021/07/21 13:36:43 [debug] 1#1: channel 13:14 2021/07/21 13:36:43 [debug] 10#10: add cleanup: 00007F9A0E80F8A8 2021/07/21 13:36:43 [debug] 10#10: malloc: 00007F9A0E89B090:8 2021/07/21 13:36:43 [debug] 9#9: notify eventfd: 12 2021/07/21 13:36:43 [debug] 8#8: testing the EPOLLRDHUP flag: success 2021/07/21 13:36:43 [debug] 9#9: eventfd: 13 2021/07/21 13:36:43 [debug] 8#8: malloc: 00007F9A0E88A450:6144 2021/07/21 13:36:43 [debug] 8#8: malloc: 00007F9A0DDCB460:237568 2021/07/21 13:36:43 [notice] 1#1: start worker process 11 2021/07/21 13:36:43 [debug] 8#8: malloc: 00007F9A0DDB2470:98304 2021/07/21 13:36:43 [debug] 9#9: testing the EPOLLRDHUP flag: success 2021/07/21 13:36:43 [debug] 1#1: pass channel s:3 pid:11 fd:13 to s:0 pid:8 fd:3 2021/07/21 13:36:43 [debug] 1#1: pass channel s:3 pid:11 fd:13 to s:1 pid:9 fd:9 2021/07/21 13:36:43 [debug] 1#1: pass channel s:3 pid:11 fd:13 to s:2 pid:10 fd:11 2021/07/21 13:36:43 [debug] 9#9: malloc: 00007F9A0E88A450:6144 2021/07/21 13:36:43 [debug] 1#1: sigsuspend 2021/07/21 13:36:43 [debug] 9#9: malloc: 00007F9A0DDCB460:237568 2021/07/21 13:36:43 [debug] 8#8: malloc: 00007F9A0DD99480:98304 2021/07/21 13:36:43 [debug] 10#10: notify eventfd: 14 2021/07/21 13:36:43 [debug] 9#9: malloc: 00007F9A0DDB2470:98304 2021/07/21 13:36:43 [debug] 10#10: eventfd: 15 2021/07/21 13:36:43 [debug] 10#10: testing the EPOLLRDHUP flag: success 2021/07/21 13:36:43 [debug] 9#9: malloc: 00007F9A0DD99480:98304 2021/07/21 13:36:43 [debug] 10#10: malloc: 00007F9A0E88A450:6144 2021/07/21 13:36:43 [debug] 10#10: malloc: 00007F9A0DDCB460:237568 2021/07/21 13:36:43 [debug] 10#10: malloc: 00007F9A0DDB2470:98304 2021/07/21 13:36:43 [debug] 8#8: epoll add event: fd:8 op:1 ev:00002001 2021/07/21 13:36:43 [debug] 8#8: setproctitle: "nginx: worker process" 2021/07/21 13:36:43 [debug] 8#8: worker cycle 2021/07/21 13:36:43 [debug] 8#8: 
epoll timer: -1 2021/07/21 13:36:43 [debug] 10#10: malloc: 00007F9A0DD99480:98304 2021/07/21 13:36:43 [debug] 8#8: epoll: fd:8 ev:0001 d:00007F9A0DDCB630 2021/07/21 13:36:43 [debug] 8#8: channel handler 2021/07/21 13:36:43 [debug] 8#8: channel: 32 2021/07/21 13:36:43 [debug] 9#9: epoll add event: fd:10 op:1 ev:00002001 2021/07/21 13:36:43 [debug] 8#8: channel command: 1 2021/07/21 13:36:43 [debug] 9#9: setproctitle: "nginx: worker process" 2021/07/21 13:36:43 [debug] 8#8: get channel s:1 pid:9 fd:3 2021/07/21 13:36:43 [debug] 9#9: worker cycle 2021/07/21 13:36:43 [debug] 9#9: epoll timer: -1 2021/07/21 13:36:43 [debug] 8#8: channel: 32 2021/07/21 13:36:43 [debug] 8#8: channel command: 1 2021/07/21 13:36:43 [debug] 8#8: get channel s:2 pid:10 fd:12 2021/07/21 13:36:43 [debug] 9#9: epoll: fd:10 ev:0001 d:00007F9A0DDCB630 2021/07/21 13:36:43 [debug] 9#9: channel handler 2021/07/21 13:36:43 [debug] 8#8: channel: 32 2021/07/21 13:36:43 [debug] 8#8: channel command: 1 2021/07/21 13:36:43 [debug] 9#9: channel: 32 2021/07/21 13:36:43 [debug] 8#8: get channel s:3 pid:11 fd:13 2021/07/21 13:36:43 [debug] 9#9: channel command: 1 2021/07/21 13:36:43 [debug] 9#9: get channel s:2 pid:10 fd:8 2021/07/21 13:36:43 [debug] 8#8: channel: -2 2021/07/21 13:36:43 [debug] 8#8: timer delta: 21 2021/07/21 13:36:43 [debug] 9#9: channel: 32 2021/07/21 13:36:43 [debug] 8#8: worker cycle 2021/07/21 13:36:43 [debug] 9#9: channel command: 1 2021/07/21 13:36:43 [debug] 8#8: epoll timer: -1 2021/07/21 13:36:43 [debug] 9#9: get channel s:3 pid:11 fd:9 2021/07/21 13:36:43 [debug] 10#10: epoll add event: fd:12 op:1 ev:00002001 2021/07/21 13:36:43 [debug] 9#9: channel: -2 2021/07/21 13:36:43 [debug] 10#10: setproctitle: "nginx: worker process" 2021/07/21 13:36:43 [debug] 9#9: timer delta: 21 2021/07/21 13:36:43 [debug] 10#10: worker cycle 2021/07/21 13:36:43 [debug] 9#9: worker cycle 2021/07/21 13:36:43 [debug] 10#10: epoll timer: -1 2021/07/21 13:36:43 [debug] 9#9: epoll timer: -1 2021/07/21 13:36:43 [debug] 10#10: epoll: fd:12 ev:0001 d:00007F9A0DDCB630 2021/07/21 13:36:43 [debug] 10#10: channel handler 2021/07/21 13:36:43 [debug] 10#10: channel: 32 2021/07/21 13:36:43 [debug] 10#10: channel command: 1 2021/07/21 13:36:43 [debug] 10#10: get channel s:3 pid:11 fd:8 2021/07/21 13:36:43 [debug] 10#10: channel: -2 2021/07/21 13:36:43 [debug] 11#11: add cleanup: 00007F9A0E80F8A8 2021/07/21 13:36:43 [debug] 10#10: timer delta: 21 2021/07/21 13:36:43 [debug] 10#10: worker cycle 2021/07/21 13:36:43 [debug] 10#10: epoll timer: -1 2021/07/21 13:36:43 [debug] 11#11: malloc: 00007F9A0E89B090:8 2021/07/21 13:36:43 [debug] 11#11: notify eventfd: 16 2021/07/21 13:36:43 [debug] 11#11: eventfd: 17 2021/07/21 13:36:43 [debug] 11#11: testing the EPOLLRDHUP flag: success 2021/07/21 13:36:43 [debug] 11#11: malloc: 00007F9A0E88A450:6144 2021/07/21 13:36:43 [debug] 11#11: malloc: 00007F9A0DDCB460:237568 2021/07/21 13:36:43 [debug] 11#11: malloc: 00007F9A0DDB2470:98304 2021/07/21 13:36:43 [debug] 11#11: malloc: 00007F9A0DD99480:98304 2021/07/21 13:36:43 [debug] 11#11: epoll add event: fd:14 op:1 ev:00002001 2021/07/21 13:36:43 [debug] 11#11: setproctitle: "nginx: worker process" 2021/07/21 13:36:43 [debug] 11#11: worker cycle 2021/07/21 13:36:43 [debug] 11#11: epoll timer: -1 2021/07/21 13:36:48 [debug] 8#8: epoll: fd:7 ev:0001 d:00007F9A0DDCB548 2021/07/21 13:36:48 [debug] 8#8: timer delta: 5334 2021/07/21 13:36:48 [debug] 8#8: worker cycle 2021/07/21 13:36:48 [debug] 8#8: epoll timer: 60000 2021/07/21 13:36:48 [debug] 8#8: epoll: fd:14 ev:0001 
d:00007F9A0DDCB718 2021/07/21 13:36:48 [debug] 8#8: timer delta: 9 2021/07/21 13:36:48 [debug] 8#8: worker cycle 2021/07/21 13:36:48 [debug] 8#8: epoll timer: 59991 2021/07/21 13:36:48 [debug] 8#8: epoll: fd:14 ev:0001 d:00007F9A0DDCB718 2021/07/21 13:36:48 [debug] 8#8: timer delta: 3 2021/07/21 13:36:48 [debug] 8#8: worker cycle 2021/07/21 13:36:48 [debug] 8#8: epoll timer: 59988 2021/07/21 13:36:48 [debug] 8#8: epoll: fd:14 ev:0001 d:00007F9A0DDCB718 2021/07/21 13:36:48 [debug] 8#8: shmtx lock 2021/07/21 13:36:48 [debug] 8#8: slab alloc: 191 slot: 5 2021/07/21 13:36:48 [debug] 8#8: slab alloc: 00007F9A0DE18000 2021/07/21 13:36:48 [debug] 8#8: slab alloc: 128 slot: 4 2021/07/21 13:36:48 [debug] 8#8: slab alloc: 00007F9A0DE16080 2021/07/21 13:36:48 [debug] 8#8: shmtx unlock 2021/07/21 13:36:48 [debug] 8#8: shmtx lock 2021/07/21 13:36:48 [debug] 8#8: slab alloc: 191 slot: 5 2021/07/21 13:36:48 [debug] 8#8: slab alloc: 00007F9A0DE18100 2021/07/21 13:36:48 [debug] 8#8: slab alloc: 128 slot: 4 2021/07/21 13:36:48 [debug] 8#8: slab alloc: 00007F9A0DE16100 2021/07/21 13:36:48 [debug] 8#8: shmtx unlock 2021/07/21 13:36:48 [debug] 8#8: malloc: 00007F9A0DD4A5D0:262144 2021/07/21 13:36:48 [debug] 8#8: timer delta: 6 2021/07/21 13:36:48 [debug] 8#8: worker cycle 2021/07/21 13:36:48 [debug] 8#8: epoll timer: 120000 2021/07/21 13:36:48 [debug] 8#8: epoll: fd:15 ev:0004 d:00007F9A0DDCB800 2021/07/21 13:36:48 [debug] 8#8: timer delta: 1 2021/07/21 13:36:48 [debug] 8#8: worker cycle 2021/07/21 13:36:48 [debug] 8#8: epoll timer: 120000 2021/07/21 13:36:48 [debug] 8#8: epoll: fd:14 ev:0001 d:00007F9A0DDCB718 2021/07/21 13:36:48 [debug] 8#8: timer delta: 0 2021/07/21 13:36:48 [debug] 8#8: worker cycle 2021/07/21 13:36:48 [debug] 8#8: epoll timer: 120000 2021/07/21 13:36:49 [debug] 8#8: epoll: fd:15 ev:0005 d:00007F9A0DDCB800 2021/07/21 13:36:49 [debug] 8#8: timer delta: 1 2021/07/21 13:36:49 [debug] 8#8: worker cycle 2021/07/21 13:36:49 [debug] 8#8: epoll timer: 119999 2021/07/21 13:37:04 [debug] 8#8: epoll: fd:14 ev:2001 d:00007F9A0DDCB718 10.197.96.163 - - [21/Jul/2021:13:37:04 +0000] "GET /1-ws HTTP/2.0" 101 backend:10.197.93.8:80 ssl:TLSv1.3/TLS_AES_256_GCM_SHA384 0 "-" "curl/7.61.1" "-" 2021/07/21 13:37:04 [debug] 8#8: timer delta: 15332 2021/07/21 13:37:04 [debug] 8#8: worker cycle 2021/07/21 13:37:04 [debug] 8#8: epoll timer: -1" 2. 
It didn't work (The config is using the $http_upgrade variable which seems to be empty) "2021/07/21 13:39:40 [debug] 1#1: bind() 0.0.0.0:8060 #6 2021/07/21 13:39:40 [debug] 1#1: bind() 0.0.0.0:8063 #7 2021/07/21 13:39:40 [notice] 1#1: using the "epoll" event method 2021/07/21 13:39:40 [debug] 1#1: counter: 00007F6F213C3080, 1 2021/07/21 13:39:40 [notice] 1#1: nginx/1.18.0 2021/07/21 13:39:40 [notice] 1#1: OS: Linux 4.14.239-1626564138 2021/07/21 13:39:40 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576 2021/07/21 13:39:40 [debug] 1#1: write: 8, 00007FFEBDADD2C2, 2, 0 2021/07/21 13:39:40 [debug] 1#1: setproctitle: "nginx: master process nginx-debug -c /usr/local/nginx/conf/nginx.conf -g daemon off;" 2021/07/21 13:39:40 [notice] 1#1: start worker processes 2021/07/21 13:39:40 [debug] 1#1: channel 3:8 2021/07/21 13:39:40 [notice] 1#1: start worker process 7 2021/07/21 13:39:40 [debug] 1#1: channel 9:10 2021/07/21 13:39:40 [debug] 7#7: add cleanup: 00007F6F213C8148 2021/07/21 13:39:40 [debug] 7#7: malloc: 00007F6F21455090:8 2021/07/21 13:39:40 [notice] 1#1: start worker process 8 2021/07/21 13:39:40 [debug] 1#1: pass channel s:1 pid:8 fd:9 to s:0 pid:7 fd:3 2021/07/21 13:39:40 [debug] 1#1: channel 11:12 2021/07/21 13:39:40 [debug] 8#8: add cleanup: 00007F6F213C8148 2021/07/21 13:39:40 [debug] 8#8: malloc: 00007F6F21455090:8 2021/07/21 13:39:40 [notice] 1#1: start worker process 9 2021/07/21 13:39:40 [debug] 1#1: pass channel s:2 pid:9 fd:11 to s:0 pid:7 fd:3 2021/07/21 13:39:40 [debug] 1#1: pass channel s:2 pid:9 fd:11 to s:1 pid:8 fd:9 2021/07/21 13:39:40 [debug] 1#1: channel 13:14 2021/07/21 13:39:40 [debug] 7#7: notify eventfd: 10 2021/07/21 13:39:40 [debug] 9#9: add cleanup: 00007F6F213C8148 2021/07/21 13:39:40 [debug] 7#7: eventfd: 11 2021/07/21 13:39:40 [debug] 9#9: malloc: 00007F6F21455090:8 2021/07/21 13:39:40 [notice] 1#1: start worker process 10 2021/07/21 13:39:40 [debug] 7#7: testing the EPOLLRDHUP flag: success 2021/07/21 13:39:40 [debug] 1#1: pass channel s:3 pid:10 fd:13 to s:0 pid:7 fd:3 2021/07/21 13:39:40 [debug] 8#8: notify eventfd: 12 2021/07/21 13:39:40 [debug] 1#1: pass channel s:3 pid:10 fd:13 to s:1 pid:8 fd:9 2021/07/21 13:39:40 [debug] 1#1: pass channel s:3 pid:10 fd:13 to s:2 pid:9 fd:11 2021/07/21 13:39:40 [debug] 7#7: malloc: 00007F6F21444450:6144 2021/07/21 13:39:40 [debug] 1#1: sigsuspend 2021/07/21 13:39:40 [debug] 8#8: eventfd: 13 2021/07/21 13:39:40 [debug] 7#7: malloc: 00007F6F20985460:237568 2021/07/21 13:39:40 [debug] 7#7: malloc: 00007F6F2096C470:98304 2021/07/21 13:39:40 [debug] 9#9: notify eventfd: 14 2021/07/21 13:39:40 [debug] 8#8: testing the EPOLLRDHUP flag: success 2021/07/21 13:39:40 [debug] 7#7: malloc: 00007F6F20953480:98304 2021/07/21 13:39:40 [debug] 9#9: eventfd: 15 2021/07/21 13:39:40 [debug] 8#8: malloc: 00007F6F21444450:6144 2021/07/21 13:39:40 [debug] 8#8: malloc: 00007F6F20985460:237568 2021/07/21 13:39:40 [debug] 8#8: malloc: 00007F6F2096C470:98304 2021/07/21 13:39:40 [debug] 9#9: testing the EPOLLRDHUP flag: success 2021/07/21 13:39:40 [debug] 9#9: malloc: 00007F6F21444450:6144 2021/07/21 13:39:40 [debug] 9#9: malloc: 00007F6F20985460:237568 2021/07/21 13:39:40 [debug] 9#9: malloc: 00007F6F2096C470:98304 2021/07/21 13:39:40 [debug] 8#8: malloc: 00007F6F20953480:98304 2021/07/21 13:39:40 [debug] 9#9: malloc: 00007F6F20953480:98304 2021/07/21 13:39:40 [debug] 7#7: epoll add event: fd:8 op:1 ev:00002001 2021/07/21 13:39:40 [debug] 7#7: setproctitle: "nginx: worker process" 2021/07/21 13:39:40 [debug] 7#7: worker cycle 2021/07/21 
13:39:40 [debug] 7#7: epoll timer: -1 2021/07/21 13:39:40 [debug] 7#7: epoll: fd:8 ev:0001 d:00007F6F20985630 2021/07/21 13:39:40 [debug] 7#7: channel handler 2021/07/21 13:39:40 [debug] 7#7: channel: 32 2021/07/21 13:39:40 [debug] 7#7: channel command: 1 2021/07/21 13:39:40 [debug] 10#10: add cleanup: 00007F6F213C8148 2021/07/21 13:39:40 [debug] 7#7: get channel s:1 pid:8 fd:3 2021/07/21 13:39:40 [debug] 7#7: channel: 32 2021/07/21 13:39:40 [debug] 7#7: channel command: 1 2021/07/21 13:39:40 [debug] 10#10: malloc: 00007F6F21455090:8 2021/07/21 13:39:40 [debug] 7#7: get channel s:2 pid:9 fd:12 2021/07/21 13:39:40 [debug] 7#7: channel: 32 2021/07/21 13:39:40 [debug] 7#7: channel command: 1 2021/07/21 13:39:40 [debug] 7#7: get channel s:3 pid:10 fd:13 2021/07/21 13:39:40 [debug] 7#7: channel: -2 2021/07/21 13:39:40 [debug] 7#7: timer delta: 21 2021/07/21 13:39:40 [debug] 7#7: worker cycle 2021/07/21 13:39:40 [debug] 7#7: epoll timer: -1 2021/07/21 13:39:40 [debug] 9#9: epoll add event: fd:12 op:1 ev:00002001 2021/07/21 13:39:40 [debug] 8#8: epoll add event: fd:10 op:1 ev:00002001 2021/07/21 13:39:40 [debug] 9#9: setproctitle: "nginx: worker process" 2021/07/21 13:39:40 [debug] 8#8: setproctitle: "nginx: worker process" 2021/07/21 13:39:40 [debug] 9#9: worker cycle 2021/07/21 13:39:40 [debug] 8#8: worker cycle 2021/07/21 13:39:40 [debug] 9#9: epoll timer: -1 2021/07/21 13:39:40 [debug] 8#8: epoll timer: -1 2021/07/21 13:39:40 [debug] 9#9: epoll: fd:12 ev:0001 d:00007F6F20985630 2021/07/21 13:39:40 [debug] 8#8: epoll: fd:10 ev:0001 d:00007F6F20985630 2021/07/21 13:39:40 [debug] 9#9: channel handler 2021/07/21 13:39:40 [debug] 8#8: channel handler 2021/07/21 13:39:40 [debug] 10#10: notify eventfd: 16 2021/07/21 13:39:40 [debug] 9#9: channel: 32 2021/07/21 13:39:40 [debug] 8#8: channel: 32 2021/07/21 13:39:40 [debug] 9#9: channel command: 1 2021/07/21 13:39:40 [debug] 8#8: channel command: 1 2021/07/21 13:39:40 [debug] 9#9: get channel s:3 pid:10 fd:8 2021/07/21 13:39:40 [debug] 10#10: eventfd: 17 2021/07/21 13:39:40 [debug] 8#8: get channel s:2 pid:9 fd:8 2021/07/21 13:39:40 [debug] 9#9: channel: -2 2021/07/21 13:39:40 [debug] 8#8: channel: 32 2021/07/21 13:39:40 [debug] 9#9: timer delta: 21 2021/07/21 13:39:40 [debug] 8#8: channel command: 1 2021/07/21 13:39:40 [debug] 9#9: worker cycle 2021/07/21 13:39:40 [debug] 8#8: get channel s:3 pid:10 fd:9 2021/07/21 13:39:40 [debug] 9#9: epoll timer: -1 2021/07/21 13:39:40 [debug] 8#8: channel: -2 2021/07/21 13:39:40 [debug] 8#8: timer delta: 21 2021/07/21 13:39:40 [debug] 10#10: testing the EPOLLRDHUP flag: success 2021/07/21 13:39:40 [debug] 8#8: worker cycle 2021/07/21 13:39:40 [debug] 8#8: epoll timer: -1 2021/07/21 13:39:40 [debug] 10#10: malloc: 00007F6F21444450:6144 2021/07/21 13:39:40 [debug] 10#10: malloc: 00007F6F20985460:237568 2021/07/21 13:39:40 [debug] 10#10: malloc: 00007F6F2096C470:98304 2021/07/21 13:39:40 [debug] 10#10: malloc: 00007F6F20953480:98304 2021/07/21 13:39:40 [debug] 10#10: epoll add event: fd:14 op:1 ev:00002001 2021/07/21 13:39:40 [debug] 10#10: setproctitle: "nginx: worker process" 2021/07/21 13:39:40 [debug] 10#10: worker cycle 2021/07/21 13:39:40 [debug] 10#10: epoll timer: -1 2021/07/21 13:39:46 [debug] 7#7: epoll: fd:7 ev:0001 d:00007F6F20985548 2021/07/21 13:39:46 [debug] 7#7: timer delta: 5569 2021/07/21 13:39:46 [debug] 7#7: worker cycle 2021/07/21 13:39:46 [debug] 7#7: epoll timer: 60000 2021/07/21 13:39:46 [debug] 7#7: epoll: fd:14 ev:0001 d:00007F6F20985718 2021/07/21 13:39:46 [debug] 7#7: timer delta: 11 
2021/07/21 13:39:46 [debug] 7#7: worker cycle 2021/07/21 13:39:46 [debug] 7#7: epoll timer: 59989 2021/07/21 13:39:46 [debug] 7#7: epoll: fd:14 ev:0001 d:00007F6F20985718 2021/07/21 13:39:46 [debug] 7#7: timer delta: 2 2021/07/21 13:39:46 [debug] 7#7: worker cycle 2021/07/21 13:39:46 [debug] 7#7: epoll timer: 59987 2021/07/21 13:39:46 [debug] 7#7: epoll: fd:14 ev:0001 d:00007F6F20985718 2021/07/21 13:39:46 [debug] 7#7: shmtx lock 2021/07/21 13:39:46 [debug] 7#7: slab alloc: 191 slot: 5 2021/07/21 13:39:46 [debug] 7#7: slab alloc: 00007F6F209D2000 2021/07/21 13:39:46 [debug] 7#7: slab alloc: 128 slot: 4 2021/07/21 13:39:46 [debug] 7#7: slab alloc: 00007F6F209D0080 2021/07/21 13:39:46 [debug] 7#7: shmtx unlock 2021/07/21 13:39:46 [debug] 7#7: shmtx lock 2021/07/21 13:39:46 [debug] 7#7: slab alloc: 191 slot: 5 2021/07/21 13:39:46 [debug] 7#7: slab alloc: 00007F6F209D2100 2021/07/21 13:39:46 [debug] 7#7: slab alloc: 128 slot: 4 2021/07/21 13:39:46 [debug] 7#7: slab alloc: 00007F6F209D0100 2021/07/21 13:39:46 [debug] 7#7: shmtx unlock 2021/07/21 13:39:46 [debug] 7#7: malloc: 00007F6F209045C0:262144 2021/07/21 13:39:46 [debug] 7#7: timer delta: 7 2021/07/21 13:39:46 [debug] 7#7: worker cycle 2021/07/21 13:39:46 [debug] 7#7: epoll timer: 120000 2021/07/21 13:39:46 [debug] 7#7: epoll: fd:15 ev:0004 d:00007F6F20985800 2021/07/21 13:39:46 [debug] 7#7: epoll: fd:14 ev:0001 d:00007F6F20985718 2021/07/21 13:39:46 [debug] 7#7: timer delta: 2 2021/07/21 13:39:46 [debug] 7#7: worker cycle 2021/07/21 13:39:46 [debug] 7#7: epoll timer: 120000 2021/07/21 13:39:46 [debug] 7#7: epoll: fd:15 ev:0005 d:00007F6F20985800 10.197.96.163 - - [21/Jul/2021:13:39:46 +0000] "GET /1-ws HTTP/2.0" 502 backend:10.197.93.8:80 ssl:TLSv1.3/TLS_AES_256_GCM_SHA384 150 "-" "curl/7.61.1" "-" 2021/07/21 13:39:46 [debug] 7#7: timer delta: 4 2021/07/21 13:39:46 [debug] 7#7: posted event 00007F6F2096C590 2021/07/21 13:39:46 [debug] 7#7: worker cycle 2021/07/21 13:39:46 [debug] 7#7: epoll timer: 180000 2021/07/21 13:39:46 [debug] 7#7: epoll: fd:14 ev:0001 d:00007F6F20985718 2021/07/21 13:39:46 [debug] 7#7: timer delta: 1 2021/07/21 13:39:46 [debug] 7#7: worker cycle 2021/07/21 13:39:46 [debug] 7#7: epoll timer: -1" I have also test case 3, where the $http_upgrade variable is still used in the configuration. Here though, I have used a server { listen 80 } context for the nginx which connects to the upstreams, without doing a redirect to https. 
This case also worked, $http_upgrade seems to be recognized in this context: "2021/07/21 13:41:42 [debug] 1#1: bind() 0.0.0.0:8060 #6 2021/07/21 13:41:42 [debug] 1#1: bind() 0.0.0.0:8063 #7 2021/07/21 13:41:42 [notice] 1#1: using the "epoll" event method 2021/07/21 13:41:42 [debug] 1#1: counter: 00007FE63A134080, 1 2021/07/21 13:41:42 [notice] 1#1: nginx/1.18.0 2021/07/21 13:41:42 [notice] 1#1: OS: Linux 4.14.239-1626564138 2021/07/21 13:41:42 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576 2021/07/21 13:41:42 [debug] 1#1: write: 8, 00007FFCC83BBFC2, 2, 0 2021/07/21 13:41:42 [debug] 1#1: setproctitle: "nginx: master process nginx-debug -c /usr/local/nginx/conf/nginx.conf -g daemon off;" 2021/07/21 13:41:42 [notice] 1#1: start worker processes 2021/07/21 13:41:42 [debug] 1#1: channel 3:8 2021/07/21 13:41:42 [notice] 1#1: start worker process 8 2021/07/21 13:41:42 [debug] 1#1: channel 9:10 2021/07/21 13:41:42 [debug] 8#8: add cleanup: 00007FE63A1440A8 2021/07/21 13:41:42 [debug] 8#8: malloc: 00007FE63A1C6090:8 2021/07/21 13:41:42 [notice] 1#1: start worker process 9 2021/07/21 13:41:42 [debug] 1#1: pass channel s:1 pid:9 fd:9 to s:0 pid:8 fd:3 2021/07/21 13:41:42 [debug] 1#1: channel 11:12 2021/07/21 13:41:42 [debug] 9#9: add cleanup: 00007FE63A1440A8 2021/07/21 13:41:42 [debug] 9#9: malloc: 00007FE63A1C6090:8 2021/07/21 13:41:42 [notice] 1#1: start worker process 10 2021/07/21 13:41:42 [debug] 1#1: pass channel s:2 pid:10 fd:11 to s:0 pid:8 fd:3 2021/07/21 13:41:42 [debug] 8#8: notify eventfd: 10 2021/07/21 13:41:42 [debug] 8#8: eventfd: 11 2021/07/21 13:41:42 [debug] 1#1: pass channel s:2 pid:10 fd:11 to s:1 pid:9 fd:9 2021/07/21 13:41:42 [debug] 1#1: channel 13:14 2021/07/21 13:41:42 [debug] 10#10: add cleanup: 00007FE63A1440A8 2021/07/21 13:41:42 [debug] 10#10: malloc: 00007FE63A1C6090:8 2021/07/21 13:41:42 [debug] 9#9: notify eventfd: 12 2021/07/21 13:41:42 [debug] 8#8: testing the EPOLLRDHUP flag: success 2021/07/21 13:41:42 [debug] 9#9: eventfd: 13 2021/07/21 13:41:42 [debug] 8#8: malloc: 00007FE63A1B5460:6144 2021/07/21 13:41:42 [notice] 1#1: start worker process 11 2021/07/21 13:41:42 [debug] 8#8: malloc: 00007FE6396F6470:237568 2021/07/21 13:41:42 [debug] 1#1: pass channel s:3 pid:11 fd:13 to s:0 pid:8 fd:3 2021/07/21 13:41:42 [debug] 9#9: testing the EPOLLRDHUP flag: success 2021/07/21 13:41:42 [debug] 8#8: malloc: 00007FE6396DD480:98304 2021/07/21 13:41:42 [debug] 1#1: pass channel s:3 pid:11 fd:13 to s:1 pid:9 fd:9 2021/07/21 13:41:42 [debug] 1#1: pass channel s:3 pid:11 fd:13 to s:2 pid:10 fd:11 2021/07/21 13:41:42 [debug] 9#9: malloc: 00007FE63A1B5460:6144 2021/07/21 13:41:42 [debug] 1#1: sigsuspend 2021/07/21 13:41:42 [debug] 9#9: malloc: 00007FE6396F6470:237568 2021/07/21 13:41:42 [debug] 9#9: malloc: 00007FE6396DD480:98304 2021/07/21 13:41:42 [debug] 8#8: malloc: 00007FE6396C4490:98304 2021/07/21 13:41:42 [debug] 10#10: notify eventfd: 14 2021/07/21 13:41:42 [debug] 10#10: eventfd: 15 2021/07/21 13:41:42 [debug] 11#11: add cleanup: 00007FE63A1440A8 2021/07/21 13:41:42 [debug] 9#9: malloc: 00007FE6396C4490:98304 2021/07/21 13:41:42 [debug] 11#11: malloc: 00007FE63A1C6090:8 2021/07/21 13:41:42 [debug] 10#10: testing the EPOLLRDHUP flag: success 2021/07/21 13:41:42 [debug] 10#10: malloc: 00007FE63A1B5460:6144 2021/07/21 13:41:42 [debug] 10#10: malloc: 00007FE6396F6470:237568 2021/07/21 13:41:42 [debug] 10#10: malloc: 00007FE6396DD480:98304 2021/07/21 13:41:42 [debug] 11#11: notify eventfd: 16 2021/07/21 13:41:42 [debug] 10#10: malloc: 00007FE6396C4490:98304 2021/07/21 
2021/07/21 13:41:42 [debug] 11#11: eventfd: 17
2021/07/21 13:41:42 [debug] 8#8: epoll add event: fd:8 op:1 ev:00002001
2021/07/21 13:41:42 [debug] 8#8: setproctitle: "nginx: worker process"
2021/07/21 13:41:42 [debug] 9#9: epoll add event: fd:10 op:1 ev:00002001
2021/07/21 13:41:42 [debug] 8#8: worker cycle
2021/07/21 13:41:42 [debug] 8#8: epoll timer: -1
2021/07/21 13:41:42 [debug] 11#11: testing the EPOLLRDHUP flag: success
2021/07/21 13:41:42 [debug] 9#9: setproctitle: "nginx: worker process"
2021/07/21 13:41:42 [debug] 9#9: worker cycle
2021/07/21 13:41:42 [debug] 9#9: epoll timer: -1
2021/07/21 13:41:42 [debug] 11#11: malloc: 00007FE63A1B5460:6144
2021/07/21 13:41:42 [debug] 8#8: epoll: fd:8 ev:0001 d:00007FE6396F6640
2021/07/21 13:41:42 [debug] 8#8: channel handler
2021/07/21 13:41:42 [debug] 9#9: epoll: fd:10 ev:0001 d:00007FE6396F6640
2021/07/21 13:41:42 [debug] 11#11: malloc: 00007FE6396F6470:237568
2021/07/21 13:41:42 [debug] 8#8: channel: 32
2021/07/21 13:41:42 [debug] 9#9: channel handler
2021/07/21 13:41:42 [debug] 8#8: channel command: 1
2021/07/21 13:41:42 [debug] 11#11: malloc: 00007FE6396DD480:98304
2021/07/21 13:41:42 [debug] 8#8: get channel s:1 pid:9 fd:3
2021/07/21 13:41:42 [debug] 9#9: channel: 32
2021/07/21 13:41:42 [debug] 9#9: channel command: 1
2021/07/21 13:41:42 [debug] 8#8: channel: 32
2021/07/21 13:41:42 [debug] 9#9: get channel s:2 pid:10 fd:8
2021/07/21 13:41:42 [debug] 8#8: channel command: 1
2021/07/21 13:41:42 [debug] 8#8: get channel s:2 pid:10 fd:12
2021/07/21 13:41:42 [debug] 9#9: channel: 32
2021/07/21 13:41:42 [debug] 9#9: channel command: 1
2021/07/21 13:41:42 [debug] 8#8: channel: 32
2021/07/21 13:41:42 [debug] 9#9: get channel s:3 pid:11 fd:9
2021/07/21 13:41:42 [debug] 8#8: channel command: 1
2021/07/21 13:41:42 [debug] 9#9: channel: -2
2021/07/21 13:41:42 [debug] 8#8: get channel s:3 pid:11 fd:13
2021/07/21 13:41:42 [debug] 9#9: timer delta: 21
2021/07/21 13:41:42 [debug] 9#9: worker cycle
2021/07/21 13:41:42 [debug] 8#8: channel: -2
2021/07/21 13:41:42 [debug] 11#11: malloc: 00007FE6396C4490:98304
2021/07/21 13:41:42 [debug] 9#9: epoll timer: -1
2021/07/21 13:41:42 [debug] 8#8: timer delta: 21
2021/07/21 13:41:42 [debug] 8#8: worker cycle
2021/07/21 13:41:42 [debug] 8#8: epoll timer: -1
2021/07/21 13:41:42 [debug] 10#10: epoll add event: fd:12 op:1 ev:00002001
2021/07/21 13:41:42 [debug] 10#10: setproctitle: "nginx: worker process"
2021/07/21 13:41:42 [debug] 10#10: worker cycle
2021/07/21 13:41:42 [debug] 10#10: epoll timer: -1
2021/07/21 13:41:42 [debug] 10#10: epoll: fd:12 ev:0001 d:00007FE6396F6640
2021/07/21 13:41:42 [debug] 10#10: channel handler
2021/07/21 13:41:42 [debug] 10#10: channel: 32
2021/07/21 13:41:42 [debug] 10#10: channel command: 1
2021/07/21 13:41:42 [debug] 10#10: get channel s:3 pid:11 fd:8
2021/07/21 13:41:42 [debug] 10#10: channel: -2
2021/07/21 13:41:42 [debug] 10#10: timer delta: 21
2021/07/21 13:41:42 [debug] 10#10: worker cycle
2021/07/21 13:41:42 [debug] 10#10: epoll timer: -1
2021/07/21 13:41:42 [debug] 11#11: epoll add event: fd:14 op:1 ev:00002001
2021/07/21 13:41:42 [debug] 11#11: setproctitle: "nginx: worker process"
2021/07/21 13:41:42 [debug] 11#11: worker cycle
2021/07/21 13:41:42 [debug] 11#11: epoll timer: -1
2021/07/21 13:41:48 [debug] 8#8: epoll: fd:6 ev:0001 d:00007FE6396F6470
2021/07/21 13:41:48 [debug] 8#8: timer delta: 6547
2021/07/21 13:41:48 [debug] 8#8: worker cycle
2021/07/21 13:41:48 [debug] 8#8: epoll timer: 60000
2021/07/21 13:41:48 [debug] 8#8: epoll: fd:14 ev:0001 d:00007FE6396F6728
2021/07/21 13:41:48 [debug] 8#8: timer delta: 0
2021/07/21 13:41:48 [debug] 8#8: worker cycle
2021/07/21 13:41:48 [debug] 8#8: epoll timer: 120000
2021/07/21 13:41:48 [debug] 8#8: epoll: fd:14 ev:0004 d:00007FE6396F6728
2021/07/21 13:41:48 [debug] 8#8: timer delta: 0
2021/07/21 13:41:48 [debug] 8#8: worker cycle
2021/07/21 13:41:48 [debug] 8#8: epoll timer: 120000
2021/07/21 13:41:48 [debug] 8#8: epoll: fd:15 ev:0004 d:00007FE6396F6810
2021/07/21 13:41:48 [debug] 8#8: timer delta: 1
2021/07/21 13:41:48 [debug] 8#8: worker cycle
2021/07/21 13:41:48 [debug] 8#8: epoll timer: 120000
2021/07/21 13:41:48 [debug] 8#8: epoll: fd:15 ev:0005 d:00007FE6396F6810
2021/07/21 13:41:48 [debug] 8#8: timer delta: 1
2021/07/21 13:41:48 [debug] 8#8: worker cycle
2021/07/21 13:41:48 [debug] 8#8: epoll timer: 119999
2021/07/21 13:41:48 [debug] 8#8: epoll: fd:15 ev:0005 d:00007FE6396F6810
2021/07/21 13:41:48 [debug] 8#8: timer delta: 0
2021/07/21 13:41:48 [debug] 8#8: worker cycle
2021/07/21 13:41:48 [debug] 8#8: epoll timer: 119999
2021/07/21 13:41:51 [debug] 8#8: epoll: fd:14 ev:2005 d:00007FE6396F6728
10.197.96.163 - - [21/Jul/2021:13:41:51 +0000] "GET /1-ws HTTP/1.1" 101 backend:10.197.93.8:80 ssl:-/- 12 "-" "curl/7.61.1" "-"
2021/07/21 13:41:51 [debug] 8#8: timer delta: 2870
2021/07/21 13:41:51 [debug] 8#8: worker cycle
2021/07/21 13:41:51 [debug] 8#8: epoll timer: -1"

Judging by the fact that this third test case worked, it seems that
$http_upgrade is populated when the SSL termination is not done on the
nginx reverse proxy. I don't know enough about nginx internals to say why
that is, though.

Best,
Dorin

On Wed, 21 Jul 2021 at 12:48, Sergey A. Osokin <osa at freebsd.org.ru> wrote:

> Hi Dorin,
>
> hope you're doing well.
>
> On Tue, Jul 20, 2021 at 12:39:51PM +0200, Dorin RuV wrote:
> > Hi everybody,
> >
> > I'm currently having an issue with nginx which I cannot get to the bottom
> > of.
> >
> > I'm using nginx as a reverse proxy/load balancer in front of Kubernetes.
> > I'm trying to set up Websocket connections with an app running in
> > Kubernetes, but there are some problems. I have followed the "more
> > sophisticated" example from https://nginx.org/en/docs/http/websocket.html.
> >
> > The configuration more or less looks like:
> >
> > "http {
> >
> >     map $http_upgrade $connection_upgrade {
> >         default upgrade;
> >         '' close;
> >     } # building the connection_upgrade variable based on $http_upgrade
> >     ......
> >
> >     server {
> >         listen 443 ssl http2;
> >         location / {
> >             proxy_http_version 1.1;
> >             proxy_set_header Upgrade $http_upgrade;
> >             proxy_set_header Connection $connection_upgrade;
> >         }
> >         ....
> > "
> >
> > There is also a server { listen 80 } directive there which simply redirects
> > to https.
> >
> > If I configure "proxy_set_header Upgrade Websocket" and "Connection
> > Upgrade", everything works as intended. It seems though that the
> > $http_upgrade variable is seen as empty, even though tcpdump confirms the
> > fact that the Upgrade header is correctly sent to Nginx by the client
> > request. Can somebody please point me towards what could reset this
> > variable or why is it unavailable? I'm thinking some scope issues but I
> > have no idea how to debug.
>
> Is there a chance to use the debug log feature,
> https://nginx.org/en/docs/debugging_log.html,
> and reproduce the issue with the "working" and "non-working" configuration
> files.
>
> --
> Sergey Osokin
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Wed Jul 21 19:57:14 2021
From: nginx-forum at forum.nginx.org (kay)
Date: Wed, 21 Jul 2021 15:57:14 -0400
Subject: Passing the original client EHLO to the SMTP backend
In-Reply-To: <555835cb71e90a2a74cfa93179383386.NginxMailingListEnglish@forum.nginx.org>
References: <555835cb71e90a2a74cfa93179383386.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

I've managed to get the desired result with a patch below:

--- a/src/mail/ngx_mail_proxy_module.c
+++ b/src/mail/ngx_mail_proxy_module.c
@@ -574,7 +574,7 @@
 
     cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module);
 
-    line.len = sizeof("HELO ") - 1 + cscf->server_name.len + 2;
+    line.len = sizeof("HELO ") - 1 + s->smtp_helo.len + 2;
     line.data = ngx_pnalloc(c->pool, line.len);
     if (line.data == NULL) {
         ngx_mail_proxy_internal_server_error(s);
@@ -587,7 +587,7 @@
            ((s->esmtp || pcf->xclient) ? "EHLO " : "HELO "),
            sizeof("HELO ") - 1);
 
-    p = ngx_cpymem(p, cscf->server_name.data, cscf->server_name.len);
+    p = ngx_cpymem(p, s->smtp_helo.data, s->smtp_helo.len);
     *p++ = CR; *p = LF;
 
     if (pcf->xclient) {
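For context, this is the sort of mail-proxy setup the patch is aimed at - a
minimal sketch only, with an invented hostname and auth endpoint:

    mail {
        server_name  mail.example.com;             # what nginx sends in HELO/EHLO without the patch
        auth_http    http://127.0.0.1:9000/auth;   # hypothetical auth_http backend
        xclient      on;

        server {
            listen     25;
            protocol   smtp;
            smtp_auth  none;
        }
    }

With the patch applied, the HELO/EHLO name sent to the upstream comes from
the client's own greeting (s->smtp_helo) instead of the server_name above.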
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292085,292088#msg-292088

From francis at daoine.org  Wed Jul 21 21:40:21 2021
From: francis at daoine.org (Francis Daly)
Date: Wed, 21 Jul 2021 22:40:21 +0100
Subject: Embedded variables from ngx_http_core_module and websocket connections
In-Reply-To: 
References: 
Message-ID: <20210721214021.GI11167@daoine.org>

On Tue, Jul 20, 2021 at 12:39:51PM +0200, Dorin RuV wrote:

Hi there,

> I'm currently having an issue with nginx which I cannot get to the bottom
> of.

> server{
>     listen 443 ssl http2;

Does anything change if you just remove that "http2"?

> location / {
>     proxy_http_version 1.1;
>     proxy_set_header Upgrade $http_upgrade;

"Upgrade" is a http/1.1 header, not a http/2.0 header. So for an incoming
http/2.0 request, the variable should be empty.
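As an illustration - a sketch only, with invented names, and assuming your
"map $http_upgrade $connection_upgrade" block stays in the http{} context -
a websocket vhost without "http2" would look like:

    server {
        listen 443 ssl;                 # no "http2": clients will speak http/1.1
        server_name ws.example.com;     # hypothetical vhost for the websocket traffic

        location / {
            proxy_pass http://backend;  # assumed upstream
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }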
> It seems though that the
> $http_upgrade variable is seen as empty, even though tcpdump confirms the
> fact that the Upgrade header is correctly sent to Nginx by the client
> request.

I'm guessing that tcpdump is showing that the http/1.1 request to port
80 includes the Upgrade header; but is not showing anything about what
is in the encrypted http/2.0 request to port 443?

Cheers,

f
--
Francis Daly        francis at daoine.org

From Tushar.Bankar at veritas.com  Thu Jul 22 05:44:54 2021
From: Tushar.Bankar at veritas.com (Tushar Bankar)
Date: Thu, 22 Jul 2021 05:44:54 +0000
Subject: SRPM for 1.20.1 for RHEL 7
Message-ID: <62408D7A-2905-47A6-AFE1-B4D67AE4937F@veritas.com>

Hi,

I am looking for an SRPM of nginx ver. 1.20.1 for RHEL 7.

Can anybody please share a link to it? I was looking at the following
location, but did not find it there:
https://nginx.org/packages/rhel/7/SRPMS/

Thanks,
Tushar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ruv.donita at gmail.com  Thu Jul 22 07:55:08 2021
From: ruv.donita at gmail.com (Dorin RuV)
Date: Thu, 22 Jul 2021 09:55:08 +0200
Subject: Embedded variables from ngx_http_core_module and websocket connections
In-Reply-To: <20210721214021.GI11167@daoine.org>
References: <20210721214021.GI11167@daoine.org>
Message-ID: 

Wow! That's it! Thank you, Francis! I need to catch up with some reading on
http/2.0.

I didn't know that "Connection: Upgrade" is http/1.1 specific. So, in order
to set up the websocket anyway and keep http/2.0 (I don't know how much
sense that makes), there would be 2 options as I see it (a sketch of option
1 follows after the list):

1. Statically define the connection upgrade headers - "proxy_set_header
Connection Upgrade" - but in this case there will never be a "Connection:
close" coming from the client.

2. Create a custom header ("X-Upgrade-Custom", e.g.) and use
$http_x_upgrade_custom as the connection starter, but this would be
extremely specific to this particular setup.
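For the record, option 1 would look something like this - an untested
sketch, with the upstream name invented:

    # Option 1: force the upgrade headers instead of deriving them from the client
    location /ws/ {
        proxy_pass http://websocket_backend;   # hypothetical upstream
        proxy_http_version 1.1;
        proxy_set_header Upgrade websocket;    # static value rather than $http_upgrade
        proxy_set_header Connection Upgrade;   # always upgrade, never "close"
    }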
What do you think?

Best,
Dorin

On Wed, 21 Jul 2021 at 23:40, Francis Daly wrote:

> On Tue, Jul 20, 2021 at 12:39:51PM +0200, Dorin RuV wrote:
>
> Hi there,
>
> > I'm currently having an issue with nginx which I cannot get to the bottom
> > of.
>
> > server{
> >     listen 443 ssl http2;
>
> Does anything change if you just remove that "http2"?
>
> > location / {
> >     proxy_http_version 1.1;
> >     proxy_set_header Upgrade $http_upgrade;
>
> "Upgrade" is a http/1.1 header, not a http/2.0 header. So for an incoming
> http/2.0 request, the variable should be empty.
>
> > It seems though that the
> > $http_upgrade variable is seen as empty, even though tcpdump confirms the
> > fact that the Upgrade header is correctly sent to Nginx by the client
> > request.
>
> I'm guessing that tcpdump is showing that the http/1.1 request to port
> 80 includes the Upgrade header; but is not showing anything about what
> is in the encrypted http/2.0 request to port 443?
>
> Cheers,
>
> f
> --
> Francis Daly        francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thresh at nginx.com  Thu Jul 22 09:45:43 2021
From: thresh at nginx.com (Konstantin Pavlov)
Date: Thu, 22 Jul 2021 12:45:43 +0300
Subject: SRPM for 1.20.1 for RHEL 7
In-Reply-To: <62408D7A-2905-47A6-AFE1-B4D67AE4937F@veritas.com>
References: <62408D7A-2905-47A6-AFE1-B4D67AE4937F@veritas.com>
Message-ID: <36a24624-a581-102f-2b03-04ba97b801bf@nginx.com>

Hi Tushar,

22.07.2021 08:44, Tushar Bankar wrote:
> Hi,
>
> I am looking for an SRPM of nginx ver. 1.20.1 for RHEL 7.
>
> Can anybody please share a link to it? I was looking at the following
> location, but did not find it there:
> https://nginx.org/packages/rhel/7/SRPMS/

Thanks for your mail. Please refresh the page - the source RPMs should be
there now.

Have a great day,

--
Konstantin Pavlov
https://www.nginx.com/

From ADD_SP at outlook.com  Fri Jul 23 03:54:10 2021
From: ADD_SP at outlook.com (ADD SP)
Date: Fri, 23 Jul 2021 03:54:10 +0000
Subject: Why does nginx -V output to stderr?
Message-ID: 

Hello!

I found that the command `nginx -V` writes its output to `stderr` instead
of `stdout`; what is the reason for doing this?

I'm just asking this question out of curiosity.

Best Regards,
ADD-SP
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Sat Jul 24 01:50:04 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 24 Jul 2021 04:50:04 +0300
Subject: Why does nginx -V output to stderr?
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Fri, Jul 23, 2021 at 03:54:10AM +0000, ADD SP wrote:

> I found that the command `nginx -V` writes its output to `stderr`
> instead of `stdout`; what is the reason for doing this?
>
> I'm just asking this question out of curiosity.

https://trac.nginx.org/nginx/ticket/592#comment:2

--
Maxim Dounin
http://mdounin.ru/

From ADD_SP at outlook.com  Sat Jul 24 02:22:11 2021
From: ADD_SP at outlook.com (ADD SP)
Date: Sat, 24 Jul 2021 02:22:11 +0000
Subject: Why does nginx -V output to stderr?
In-Reply-To: 
References: 
Message-ID: 

Thanks!

ADD-SP
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Sun Jul 25 11:33:20 2021
From: nginx-forum at forum.nginx.org (NCviQwilt)
Date: Sun, 25 Jul 2021 07:33:20 -0400
Subject: Resurrecting the Async Open Discussion
Message-ID: <013d30d9438514e306a56e666544e907.NginxMailingListEnglish@forum.nginx.org>

Hello,

This is a follow-up on the discussion started 3 years ago at:
https://forum.nginx.org/read.php?29,280794,280794

I can't respond in that forum topic, so I hope this is the right place.

I'm working at Qwilt, and we too have experienced the 999th percentile
problem as discussed in Cloudflare's blog post. We were happy to see an
official patch from Nginx Dev, but sad to see the old discussion has since
died with no official release.

I've started working on applying the patch from the old discussion, but the
integration isn't smooth since the source code has progressed since then
(especially with regards to the static module). We'll keep working on it,
but we're wondering if there are any plans for future official support for
this feature? Otherwise any Nginx upgrade would be difficult.

We'd be more than happy to help with the testing of the patch.

Best regards,
Noam

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292104,292104#msg-292104

From ram.programme at gmail.com  Mon Jul 26 17:19:30 2021
From: ram.programme at gmail.com (Ram B)
Date: Mon, 26 Jul 2021 10:19:30 -0700
Subject: Hook during connection termination
Message-ID: 

Hello Nginx team,

I am planning to use nginx as a gRPC reverse proxy. On each client
connection termination we need to handle some cleanup activity. Is there a
way to set up a callback, or have some sort of hook within nginx during the
connection teardown workflow, where I can kick off the cleanup activity?

Thank you,
-Ram.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mailme.s at yandex.com  Tue Jul 27 10:12:20 2021
From: mailme.s at yandex.com (s s)
Date: Tue, 27 Jul 2021 14:12:20 +0400
Subject: Nginx+Redis Dynamic Page Caching & Cache Purging (vs Varnish)
Message-ID: <302471627380681@mail.yandex.com>

An HTML attachment was scrubbed...
URL: 

From r at roze.lv  Thu Jul 29 08:32:38 2021
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 29 Jul 2021 11:32:38 +0300
Subject: Nginx+Redis Dynamic Page Caching & Cache Purging (vs Varnish)
In-Reply-To: <302471627380681@mail.yandex.com>
References: <302471627380681@mail.yandex.com>
Message-ID: <000901d78454$4a4898f0$ded9cad0$@roze.lv>

> And most importantly, how its capabilities compare to Varnish. From my
> searching and articles I have read so far, the general consensus seems to
> be that Varnish is more flexible and offers more abilities for dynamic
> page caching and cache purging. Is this indeed the case today?

It depends on what your requirements are. While Varnish with its VCL might
be more "programmable" from the get-go (I imagine you can achieve the same
with openresty (nginx and Lua)), there are advantages and drawbacks to each
product.

I haven't used recent versions of Varnish (so these points might be invalid
by now), but here are some from my own experience:

- It seems Varnish still doesn't have native SSL/TLS support [1], so you
need some SSL termination/offloading (hitch, haproxy, nginx or some CDN) in
front of it - this adds an extra layer and an additional piece of software
you have to manage.

- The community nginx version doesn't have a callable cache PURGE option
[2] - you have to either use the nginx plus version or compile a
third-party module for that. Varnish offers that by default. (A rough
sketch with the third-party module follows after this list.)

- Varnish has different kinds of storage backends (mmap, umem, file etc.)
which might be more optimal/performant for particular cases. For nginx,
besides the file-based cache, you have to use other services like
redis/memcached. On the other hand, the persistent storage for Varnish
seems to be deprecated, which is/could be problematic for large caches in
case of restarts.

- Based on your cache object count there might be some hardware requirement
differences - Varnish used to have a 1Kb per-object memory overhead (even
when using a file-based storage backend) - in my case, on a 32Gb RAM node I
could cache at most ~33mil objects, while nginx on the same hardware could
handle 800+ mil easily (obviously nowadays RAM amounts might not be an
issue).
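Roughly what the purge setup looks like with the third-party ngx_cache_purge
module compiled in - a sketch with invented paths and zone names, close to
the module's own example:

    proxy_cache_path /var/cache/nginx keys_zone=tmpcache:10m;

    server {
        location / {
            proxy_pass http://127.0.0.1:8000;    # assumed backend
            proxy_cache tmpcache;
            proxy_cache_key $uri$is_args$args;
        }

        # callable PURGE endpoint provided by the third-party module
        location ~ /purge(/.*) {
            allow 127.0.0.1;
            deny all;
            proxy_cache_purge tmpcache $1$is_args$args;
        }
    }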
My 2 cents ..

[1] https://docs.varnish-software.com/varnish-administration-console/installation/ssl/
[2] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_purge

wbr
rr

From pierre at couderc.eu  Fri Jul 30 08:40:00 2021
From: pierre at couderc.eu (Pierre Couderc)
Date: Fri, 30 Jul 2021 10:40:00 +0200
Subject: Abnormal delays in download
Message-ID: <7b912201-e833-507e-6194-1b2013af6c1c@couderc.eu>

I am trying to download a 12M pdf file and it takes more than 4 minutes
over http with nginx, while it takes a few seconds with scp.

I have posted a detailed log here: https://paste.debian.net/1206029/

I see timeouts in it, but I am not able to interpret them correctly.

nginx runs in an lxd container on a nearly unused server (htop load average
0.08 ...). I use a nearly default Debian configuration of nginx.

Thanks for any help.

From nginx-forum at forum.nginx.org  Fri Jul 30 10:22:45 2021
From: nginx-forum at forum.nginx.org (pfeiffer)
Date: Fri, 30 Jul 2021 06:22:45 -0400
Subject: TCP reset (RST) from upstream server being delayed by upstream socket receive buffer
Message-ID: 

Hi,

I'm running nginx/1.18.0 as a TLS reverse proxy to a local upstream http
server on Ubuntu 20.. In certain cases the upstream server resets the
connection, with the intention of closing the downstream connection as soon
as possible without delivering the remaining data in the socket and proxy
buffers. This seems to work as expected when running nginx on Windows with
similar socket buffer sizes.

On Ubuntu, though, it looks like nginx is trying to read and deliver the
remaining data in the upstream receive buffer before actually returning the
connection error from the recv call. Depending on the network limitation
and the fill of the receive buffer, this can take 20 seconds and more. On
Windows, nginx's recv returns the connection reset almost immediately.

Is this rather an application or OR related difference/issue?
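In case it helps to reproduce, a minimal way to shrink the buffering in
play would be something like this (illustrative values only, not a fix):

    location / {
        proxy_pass http://127.0.0.1:8080;   # assumed local upstream
        proxy_buffering off;                # stream the response instead of buffering it
        proxy_buffer_size 4k;               # still used for the response header
    }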
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292116,292116#msg-292116

From nginx-forum at forum.nginx.org  Fri Jul 30 10:23:54 2021
From: nginx-forum at forum.nginx.org (pfeiffer)
Date: Fri, 30 Jul 2021 06:23:54 -0400
Subject: TCP reset (RST) from upstream server being delayed by upstream socket receive buffer
In-Reply-To: 
References: 
Message-ID: 

Is this rather an application or OS related difference/issue?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,292116,292117#msg-292117