From mdounin at mdounin.ru Sat Oct 1 12:49:15 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 1 Oct 2022 15:49:15 +0300 Subject: Nginx as mail proxy: different domains with different certs In-Reply-To: <1051f742a8c2c534e029006f4ba0b01e.NginxMailingListEnglish@forum.nginx.org> References: <1051f742a8c2c534e029006f4ba0b01e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Fri, Sep 30, 2022 at 03:29:16PM -0400, achekalin wrote: > I set up nginx as mail proxy, and it works for one domain, but won't work > when I try to serve more that one domain each with different SSL > certificate. Are there any way I can archive that, since nginx as mail proxy > it quite good and seems to be good solution. > > My fail is that I expected from mail servers the same I used to see in http > server. Say, I tried to write this: > > mail { > ... > server { > listen 25; > protocol smtp; > server_name mail.domain1.com; > ssl_certificate mail.domain1.com.fullchain.pem; > ssl_certificate_key mail.domain1.com.key.pem; > starttls on; > proxy on; > xclient off; > } > > server { > listen 25; > protocol smtp; > server_name mail.domain2.com; > ssl_certificate mail.domain2.com.fullchain.pem; > ssl_certificate_key mail.domain2.com.key.pem; > starttls on; > proxy on; > xclient off; > } > ... > } > > I expected nginx will choose right 'server' block based on server_name > (which was wrong assumption) and then will use ssl certificate set in that > server block. > > I do understand I can set up LE certs with many hostnames included but say > story is that domain list is too big to be included in single cert so I have > to use more that one server block anyway. Name-based (including SNI-based) virtual servers are not supported in the mail proxy module. As such, the remaining options are: - Use multiple names in a certificate - Use IP-based (or port-based) virtual servers You can combine both options as appropriate. -- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Sun Oct 2 12:02:52 2022 From: pgnet.dev at gmail.com (PGNet Dev) Date: Sun, 2 Oct 2022 08:02:52 -0400 Subject: Nginx as mail proxy: different domains with different certs In-Reply-To: References: <1051f742a8c2c534e029006f4ba0b01e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8a24cb50-c5d8-0be1-d6cb-96c6eb0f42ba@gmail.com> > Name-based (including SNI-based) virtual servers are not supported > in the mail proxy module. As such, the remaining options are: > > - Use multiple names in a certificate > - Use IP-based (or port-based) virtual servers > > You can combine both options as appropriate. add'l useful option for mail proxy + sni https://www.linuxbabe.com/mail-server/smtp-imap-proxy-with-haproxy-debian-ubuntu-centos From nginx-forum at forum.nginx.org Tue Oct 4 09:58:13 2022 From: nginx-forum at forum.nginx.org (Davis_J) Date: Tue, 04 Oct 2022 05:58:13 -0400 Subject: Nginx KTLS hardware offloading not working In-Reply-To: <9d27a731a9aada993dcca2c3ce062bc0.NginxMailingListEnglish@forum.nginx.org> References: <9d27a731a9aada993dcca2c3ce062bc0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Making it work on FreeBSD, words with the same versions , no issue. Only on Linux couldn't make it work. 
KTLS SW works, but Hardware isn't eventhough on ETHtool, its showing active (meaning tls_device module is working) and kTLS SW is working (meaning TLS module is working) CONFIG_TLS=m CONFIG_TLS_DEVICE=y # CONFIG_TLS_TOE is not set CONFIG_CHELSIO_TLS_DEVICE=m CONFIG_MLX5_FPGA_TLS=y CONFIG_MLX5_TLS=y CONFIG_MLX5_EN_TLS=y CONFIG_FB_TFT_TLS8204=m so why wouldn't it work on ubuntu nor debian? (even when I compiled kernel 5.19 the newest) and making sure the flags or TLS are right. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294477,295382#msg-295382 From gilles.ganault at free.fr Tue Oct 4 22:08:34 2022 From: gilles.ganault at free.fr (Gilles Ganault) Date: Wed, 5 Oct 2022 00:08:34 +0200 Subject: Installing two versions of PHP-FMP? Message-ID: Hello, I only have shallow experience with nginx. To migrate an old php5-based application to the latest release which expects php7, I'd like to install both versions of PHP-FPM in one nginx server. Although I read elsewhere it's a mistake to install the php package instead of php-fpm because the former also installs Apache… this is what this document does: ============= apt-get install php5.6 php5.6-fpm php7.4 php7.4-fpm ============= So, what's the recommended way to set things up so that nginx can support both interpreters and manage two versions of a web app in their respective directory? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Wed Oct 5 03:19:32 2022 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 5 Oct 2022 08:49:32 +0530 Subject: Installing two versions of PHP-FMP? In-Reply-To: References: Message-ID: This should help: https://tinyurl.com/2mrps4a4 -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Wed Oct 5 17:28:16 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 5 Oct 2022 20:28:16 +0300 Subject: Installing two versions of PHP-FMP? In-Reply-To: References: Message-ID: Hi Gilles, On Wed, Oct 05, 2022 at 12:08:34AM +0200, Gilles Ganault wrote: > > To migrate an old php5-based application to the latest release which > expects php7, I'd like to install both versions of PHP-FPM in one nginx > server. It's possible to solve this issue with NGINX Unit, https://unit.nginx.org/. It supports multiple application languages, including but not limited to php, and multiple versions of the application languages. -- Sergey A. Osokin From martin at martin-wolfert.de Thu Oct 6 12:30:08 2022 From: martin at martin-wolfert.de (Martin Wolfert) Date: Thu, 6 Oct 2022 14:30:08 +0200 Subject: Nginx does not serve avif In-Reply-To: <20220928211822.GO9502@daoine.org> References: <4423e587-fc54-39c4-baaf-b4d5f0c18f21@martin-wolfert.de> <20220928211822.GO9502@daoine.org> Message-ID: <9e903e26-0611-79f1-62b1-20b553c379e8@martin-wolfert.de> Hi, sorry for being not precise ... 
In "/var/www/htdocs/blog.lichttraeumer.de/wp-content/uploads/2022/05" i have located .jpg, .webp and .avif files: -rw-r--r--  1 www-data www-data 244010 May  1 13:45 back-to-the-roots-7.jpg -rw-r--r--  1 www-data www-data  35273 Oct  6 09:49 back-to-the-roots-7.jpg.avif -rw-r--r--  1 www-data www-data 166618 May  1 13:45 back-to-the-roots-7.jpg.webp The virtual host part for serving .avif files looks actually like that:     set $img_suffix "";     if ($http_accept ~* "webp") {         set $img_suffix ".webp";     }     if ($http_accept ~* "avif") {         set $img_suffix ".avif";     }     location ~ \.(jpg|png)$ {        expires 1y;        add_header Cache-Control "public, no-transform";        add_header Vary "Accept-Encoding";        try_files $uri$img_suffix $uri $uri/ =404;     } What i also tried was to work mit map directive. In nginx.conf:         map $http_accept $webp_suffix {                     "~image/webp"  "$uri.webp";         }         map $http_accept $avif_suffix {                 "~image/avif"  "$uri.avif";          } and then in the vhost:         location ~* ^.+\.(jpg|jpeg)$ {                add_header Vary Accept;               try_files $avif_suffix $webp_suffix $uri =404;         } Both solutions does not deliver .afiv files via e.g. Firefox. - Martin Am 28.09.22 um 23:18 schrieb Francis Daly: > On Wed, Sep 28, 2022 at 10:49:15AM +0200, Martin Wolfert wrote: > > Hi there, > >> i want to use new image files. That means: first serve (if available) avif, >> than webp and lastly jpg images. >>         location ~* ^/wp-content/.*/.*/.*\.(png|jpg)$ { >>                 add_header Vary Accept; >>                 try_files   $uri$img_ext $uri =404; >>         } >> Unfortunately ... Nginx does not serve avif files, if available. Tested it >> with the newest Chrome Versions. >> >> Anyone any idea where my error is located? > When you make the request for /dir/thing.png, do you want to get the file > /var/www/dir/thing.avif, or the file /var/www/dir/thing.png.avif? > > The usual questions are: > > What request do you make? > What response do you get? > What response do you want to get instead? > > Cheers, > > f -- Pflichtinformationen gemäß Artikel 13 DSGVO Im Falle des Erstkontakts sind wir gemäß Art. 12, 13 DSGVO verpflichtet, Ihnen folgende datenschutzrechtliche Pflichtinformationen zur Verfügung zu stellen: Wenn Sie uns per E-Mail kontaktieren, verarbeiten wir Ihre personenbezogenen Daten nur, soweit an der Verarbeitung ein berechtigtes Interesse besteht (Art. 6 Abs. 1 lit. f DSGVO), Sie in die Datenverarbeitung eingewilligt haben (Art. 6 Abs. 1 lit. a DSGVO), die Verarbeitung für die Anbahnung, Begründung, inhaltliche Ausgestaltung oder Änderung eines Rechtsverhältnisses zwischen Ihnen und uns erforderlich sind (Art. 6 Abs. 1 lit. b DSGVO) oder eine sonstige Rechtsnorm die Verarbeitung gestattet. Ihre personenbezogenen Daten verbleiben bei uns, bis Sie uns zur Löschung auffordern, Ihre Einwilligung zur Speicherung widerrufen oder der Zweck für die Datenspeicherung entfällt (z. B. nach abgeschlossener Bearbeitung Ihres Anliegens). Zwingende gesetzliche Bestimmungen – insbesondere steuer- und handelsrechtliche Aufbewahrungsfristen – bleiben unberührt. Sie haben jederzeit das Recht, unentgeltlich Auskunft über Herkunft, Empfänger und Zweck Ihrer gespeicherten personenbezogenen Daten zu erhalten. Ihnen steht außerdem ein Recht auf Widerspruch, auf Datenübertragbarkeit und ein Beschwerderecht bei der zuständigen Aufsichtsbehörde zu. 
Ferner können Sie die Berichtigung, die Löschung und unter bestimmten Umständen die Einschränkung der Verarbeitung Ihrer personenbezogenen Daten verlangen. Details entnehmen Sie meiner Datenschutzerklärung unter https://lichttraeumer.de/datenschutz/ From francis at daoine.org Thu Oct 6 21:43:32 2022 From: francis at daoine.org (Francis Daly) Date: Thu, 6 Oct 2022 22:43:32 +0100 Subject: Nginx does not serve avif In-Reply-To: <9e903e26-0611-79f1-62b1-20b553c379e8@martin-wolfert.de> References: <4423e587-fc54-39c4-baaf-b4d5f0c18f21@martin-wolfert.de> <20220928211822.GO9502@daoine.org> <9e903e26-0611-79f1-62b1-20b553c379e8@martin-wolfert.de> Message-ID: <20221006214332.GA4185@daoine.org> On Thu, Oct 06, 2022 at 02:30:08PM +0200, Martin Wolfert wrote: Hi there, > In "/var/www/htdocs/blog.lichttraeumer.de/wp-content/uploads/2022/05" i have > located .jpg, .webp and .avif files: Thanks for the details. Both ideas seem to work for me, when testing with curl: === $ cat /etc/nginx/conf.d/test-avif.conf server { listen 127.0.0.5:80; root /tmp/t3; set $img_suffix ""; if ($http_accept ~* "webp") { set $img_suffix ".webp"; } if ($http_accept ~* "avif") { set $img_suffix ".avif"; } location ~ \.(jpg|png)$ { try_files $uri$img_suffix $uri $uri/ =404; } } $ mkdir /tmp/t3 $ echo one.png > /tmp/t3/one.png $ echo one.png.avif > /tmp/t3/one.png.avif $ curl http://127.0.0.5/one.png one.png $ curl -H Accept:webp http://127.0.0.5/one.png one.png $ curl -H Accept:avif http://127.0.0.5/one.png one.png.avif === $ cat /etc/nginx/conf.d/test-avif-map.conf map $http_accept $webp_suffix { "~image/webp" "$uri.webp"; } map $http_accept $avif_suffix { "~image/avif" "$uri.avif"; } server { listen 127.0.0.6:80; root /tmp/t4; location ~ \.(jpg|jpeg)$ { try_files $avif_suffix $webp_suffix $uri =404; } } $ mkdir /tmp/t4 $ echo one.jpg > /tmp/t4/one.jpg $ echo one.jpg.webp > /tmp/t4/one.jpg.webp $ echo one.jpg.avif > /tmp/t4/one.jpg.avif $ echo two.jpg.webp > /tmp/t4/two.jpg.webp $ curl http://127.0.0.6/one.jpg one.jpg $ curl -H Accept:image/avif http://127.0.0.6/one.jpg one.jpg.avif $ curl -H Accept:image/webp http://127.0.0.6/one.jpg one.jpg.webp $ curl -H Accept:image/other http://127.0.0.6/one.jpg one.jpg $ curl -H Accept:image/avif,image/webp http://127.0.0.6/one.jpg one.jpg.avif $ curl -H Accept:image/avif,image/webp http://127.0.0.6/two.jpg two.jpg.webp $ === Do they work for you, when testing with curl? If not -- why not / what is different between your test config and my test config? And if so -- what is different between the curl request and the Firefox request? Thanks, f -- Francis Daly francis at daoine.org From martin at martin-wolfert.de Fri Oct 7 10:04:27 2022 From: martin at martin-wolfert.de (Martin Wolfert) Date: Fri, 7 Oct 2022 12:04:27 +0200 Subject: Nginx does not serve avif In-Reply-To: <20221006214332.GA4185@daoine.org> References: <4423e587-fc54-39c4-baaf-b4d5f0c18f21@martin-wolfert.de> <20220928211822.GO9502@daoine.org> <9e903e26-0611-79f1-62b1-20b553c379e8@martin-wolfert.de> <20221006214332.GA4185@daoine.org> Message-ID: Hi, well, i would say thet using curl, delivering avif works: Downloading the jpg without headers gives back another filesize as with given webp and avif headers. So ... yes, also my configuration seams to work with curl. Why browsers who should support avif only serve webpand not avif (e.g. Chrome 106.0.5249.103)  ... I have no clue. I really hope the Chrome developers do not distinguish between Chrome on Arm (MacBook M1 & M2) and Intel. 
Best, Martin Am 06.10.22 um 23:43 schrieb Francis Daly: > On Thu, Oct 06, 2022 at 02:30:08PM +0200, Martin Wolfert wrote: > > Hi there, > >> In "/var/www/htdocs/blog.lichttraeumer.de/wp-content/uploads/2022/05" i have >> located .jpg, .webp and .avif files: > Thanks for the details. > > Both ideas seem to work for me, when testing with curl: > > === > $ cat /etc/nginx/conf.d/test-avif.conf > server { > listen 127.0.0.5:80; > root /tmp/t3; > > set $img_suffix ""; > if ($http_accept ~* "webp") { > set $img_suffix ".webp"; > } > if ($http_accept ~* "avif") { > set $img_suffix ".avif"; > } > location ~ \.(jpg|png)$ { > try_files $uri$img_suffix $uri $uri/ =404; > } > } > $ mkdir /tmp/t3 > $ echo one.png > /tmp/t3/one.png > $ echo one.png.avif > /tmp/t3/one.png.avif > > $ curlhttp://127.0.0.5/one.png > one.png > $ curl -H Accept:webphttp://127.0.0.5/one.png > one.png > $ curl -H Accept:avifhttp://127.0.0.5/one.png > one.png.avif > > === > $ cat /etc/nginx/conf.d/test-avif-map.conf > map $http_accept $webp_suffix { > "~image/webp" "$uri.webp"; > } > map $http_accept $avif_suffix { > "~image/avif" "$uri.avif"; > } > > server { > listen 127.0.0.6:80; > root /tmp/t4; > > location ~ \.(jpg|jpeg)$ { > try_files $avif_suffix $webp_suffix $uri =404; > } > } > $ mkdir /tmp/t4 > $ echo one.jpg > /tmp/t4/one.jpg > $ echo one.jpg.webp > /tmp/t4/one.jpg.webp > $ echo one.jpg.avif > /tmp/t4/one.jpg.avif > $ echo two.jpg.webp > /tmp/t4/two.jpg.webp > > $ curlhttp://127.0.0.6/one.jpg > one.jpg > $ curl -H Accept:image/avifhttp://127.0.0.6/one.jpg > one.jpg.avif > $ curl -H Accept:image/webphttp://127.0.0.6/one.jpg > one.jpg.webp > $ curl -H Accept:image/otherhttp://127.0.0.6/one.jpg > one.jpg > $ curl -H Accept:image/avif,image/webphttp://127.0.0.6/one.jpg > one.jpg.avif > $ curl -H Accept:image/avif,image/webphttp://127.0.0.6/two.jpg > two.jpg.webp > $ > > === > > Do they work for you, when testing with curl? > > If not -- why not / what is different between your test config and my test config? > > And if so -- what is different between the curl request and the Firefox request? > > Thanks, > > f -- Pflichtinformationen gemäß Artikel 13 DSGVO Im Falle des Erstkontakts sind wir gemäß Art. 12, 13 DSGVO verpflichtet, Ihnen folgende datenschutzrechtliche Pflichtinformationen zur Verfügung zu stellen: Wenn Sie uns per E-Mail kontaktieren, verarbeiten wir Ihre personenbezogenen Daten nur, soweit an der Verarbeitung ein berechtigtes Interesse besteht (Art. 6 Abs. 1 lit. f DSGVO), Sie in die Datenverarbeitung eingewilligt haben (Art. 6 Abs. 1 lit. a DSGVO), die Verarbeitung für die Anbahnung, Begründung, inhaltliche Ausgestaltung oder Änderung eines Rechtsverhältnisses zwischen Ihnen und uns erforderlich sind (Art. 6 Abs. 1 lit. b DSGVO) oder eine sonstige Rechtsnorm die Verarbeitung gestattet. Ihre personenbezogenen Daten verbleiben bei uns, bis Sie uns zur Löschung auffordern, Ihre Einwilligung zur Speicherung widerrufen oder der Zweck für die Datenspeicherung entfällt (z. B. nach abgeschlossener Bearbeitung Ihres Anliegens). Zwingende gesetzliche Bestimmungen – insbesondere steuer- und handelsrechtliche Aufbewahrungsfristen – bleiben unberührt. Sie haben jederzeit das Recht, unentgeltlich Auskunft über Herkunft, Empfänger und Zweck Ihrer gespeicherten personenbezogenen Daten zu erhalten. Ihnen steht außerdem ein Recht auf Widerspruch, auf Datenübertragbarkeit und ein Beschwerderecht bei der zuständigen Aufsichtsbehörde zu. 
Ferner können Sie die Berichtigung, die Löschung und unter bestimmten Umständen die Einschränkung der Verarbeitung Ihrer personenbezogenen Daten verlangen. Details entnehmen Sie meiner Datenschutzerklärung unterhttps://lichttraeumer.de/datenschutz/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Bildschirmfoto 2022-10-07 um 11.42.46.png Type: image/png Size: 42977 bytes Desc: not available URL: From martin at martin-wolfert.de Fri Oct 7 12:00:44 2022 From: martin at martin-wolfert.de (Martin Wolfert) Date: Fri, 7 Oct 2022 14:00:44 +0200 Subject: Nginx does not serve avif In-Reply-To: References: <4423e587-fc54-39c4-baaf-b4d5f0c18f21@martin-wolfert.de> <20220928211822.GO9502@daoine.org> <9e903e26-0611-79f1-62b1-20b553c379e8@martin-wolfert.de> <20221006214332.GA4185@daoine.org> Message-ID: Hi, i found the issue! Solution: When enabling the webp caching compatibility in WP Rocket (WordPress plugin), the nginx rules / config could not work. Because WP Rocket adds ".webp" as suffix to all .jpg images. So having the suffix set to bla.jpg.webp, the Nginx location ( /location ~ \.(jpg|png)$ {/ ) for sure could not match! So disabling the webp caching compatibilty in WP Rocket solves the problem. Thanks to Francis to have opened my thinking that Nginx would be the problem! Best, Martin Am 07.10.22 um 12:04 schrieb Martin Wolfert: > > Hi, > > well, i would say thet using curl, delivering avif works: > > Downloading the jpg without headers gives back another filesize as > with given webp and avif headers. > So ... yes, also my configuration seams to work with curl. > > Why browsers who should support avif only serve webpand not avif  > (e.g. Chrome 106.0.5249.103) ... I have no clue. > I really hope the Chrome developers do not distinguish between Chrome > on Arm (MacBook M1 & M2) and Intel. > > Best, > Martin > > > Am 06.10.22 um 23:43 schrieb Francis Daly: >> On Thu, Oct 06, 2022 at 02:30:08PM +0200, Martin Wolfert wrote: >> >> Hi there, >> >>> In "/var/www/htdocs/blog.lichttraeumer.de/wp-content/uploads/2022/05" i have >>> located .jpg, .webp and .avif files: >> Thanks for the details. 
>> >> Both ideas seem to work for me, when testing with curl: >> >> === >> $ cat /etc/nginx/conf.d/test-avif.conf >> server { >> listen 127.0.0.5:80; >> root /tmp/t3; >> >> set $img_suffix ""; >> if ($http_accept ~* "webp") { >> set $img_suffix ".webp"; >> } >> if ($http_accept ~* "avif") { >> set $img_suffix ".avif"; >> } >> location ~ \.(jpg|png)$ { >> try_files $uri$img_suffix $uri $uri/ =404; >> } >> } >> $ mkdir /tmp/t3 >> $ echo one.png > /tmp/t3/one.png >> $ echo one.png.avif > /tmp/t3/one.png.avif >> >> $ curlhttp://127.0.0.5/one.png >> one.png >> $ curl -H Accept:webphttp://127.0.0.5/one.png >> one.png >> $ curl -H Accept:avifhttp://127.0.0.5/one.png >> one.png.avif >> >> === >> $ cat /etc/nginx/conf.d/test-avif-map.conf >> map $http_accept $webp_suffix { >> "~image/webp" "$uri.webp"; >> } >> map $http_accept $avif_suffix { >> "~image/avif" "$uri.avif"; >> } >> >> server { >> listen 127.0.0.6:80; >> root /tmp/t4; >> >> location ~ \.(jpg|jpeg)$ { >> try_files $avif_suffix $webp_suffix $uri =404; >> } >> } >> $ mkdir /tmp/t4 >> $ echo one.jpg > /tmp/t4/one.jpg >> $ echo one.jpg.webp > /tmp/t4/one.jpg.webp >> $ echo one.jpg.avif > /tmp/t4/one.jpg.avif >> $ echo two.jpg.webp > /tmp/t4/two.jpg.webp >> >> $ curlhttp://127.0.0.6/one.jpg >> one.jpg >> $ curl -H Accept:image/avifhttp://127.0.0.6/one.jpg >> one.jpg.avif >> $ curl -H Accept:image/webphttp://127.0.0.6/one.jpg >> one.jpg.webp >> $ curl -H Accept:image/otherhttp://127.0.0.6/one.jpg >> one.jpg >> $ curl -H Accept:image/avif,image/webphttp://127.0.0.6/one.jpg >> one.jpg.avif >> $ curl -H Accept:image/avif,image/webphttp://127.0.0.6/two.jpg >> two.jpg.webp >> $ >> >> === >> >> Do they work for you, when testing with curl? >> >> If not -- why not / what is different between your test config and my test config? >> >> And if so -- what is different between the curl request and the Firefox request? >> >> Thanks, >> >> f > -- > Pflichtinformationen gemäß Artikel 13 DSGVO > Im Falle des Erstkontakts sind wir gemäß Art. 12, 13 DSGVO verpflichtet, Ihnen folgende datenschutzrechtliche Pflichtinformationen zur Verfügung zu stellen: > Wenn Sie uns per E-Mail kontaktieren, verarbeiten wir Ihre personenbezogenen Daten nur, soweit an der Verarbeitung ein berechtigtes Interesse besteht (Art. 6 Abs. 1 lit. f DSGVO), > Sie in die Datenverarbeitung eingewilligt haben (Art. 6 Abs. 1 lit. a DSGVO), die Verarbeitung für die Anbahnung, Begründung, inhaltliche Ausgestaltung oder Änderung eines > Rechtsverhältnisses zwischen Ihnen und uns erforderlich sind (Art. 6 Abs. 1 lit. b DSGVO) oder eine sonstige Rechtsnorm die Verarbeitung gestattet. > Ihre personenbezogenen Daten verbleiben bei uns, bis Sie uns zur Löschung auffordern, Ihre Einwilligung zur Speicherung widerrufen oder der Zweck für die Datenspeicherung entfällt > (z. B. nach abgeschlossener Bearbeitung Ihres Anliegens). Zwingende gesetzliche Bestimmungen – insbesondere steuer- und handelsrechtliche Aufbewahrungsfristen – bleiben unberührt. > Sie haben jederzeit das Recht, unentgeltlich Auskunft über Herkunft, Empfänger und Zweck Ihrer gespeicherten personenbezogenen Daten zu erhalten. Ihnen steht außerdem ein Recht auf > Widerspruch, auf Datenübertragbarkeit und ein Beschwerderecht bei der zuständigen Aufsichtsbehörde zu. Ferner können Sie die Berichtigung, die Löschung und unter bestimmten > Umständen die Einschränkung der Verarbeitung Ihrer personenbezogenen Daten verlangen. 
Details entnehmen Sie meiner > Datenschutzerklärung unterhttps://lichttraeumer.de/datenschutz/ > > _______________________________________________ > nginx mailing list --nginx at nginx.org > To unsubscribe send an email tonginx-leave at nginx.org -- Pflichtinformationen gemäß Artikel 13 DSGVO Im Falle des Erstkontakts sind wir gemäß Art. 12, 13 DSGVO verpflichtet, Ihnen folgende datenschutzrechtliche Pflichtinformationen zur Verfügung zu stellen: Wenn Sie uns per E-Mail kontaktieren, verarbeiten wir Ihre personenbezogenen Daten nur, soweit an der Verarbeitung ein berechtigtes Interesse besteht (Art. 6 Abs. 1 lit. f DSGVO), Sie in die Datenverarbeitung eingewilligt haben (Art. 6 Abs. 1 lit. a DSGVO), die Verarbeitung für die Anbahnung, Begründung, inhaltliche Ausgestaltung oder Änderung eines Rechtsverhältnisses zwischen Ihnen und uns erforderlich sind (Art. 6 Abs. 1 lit. b DSGVO) oder eine sonstige Rechtsnorm die Verarbeitung gestattet. Ihre personenbezogenen Daten verbleiben bei uns, bis Sie uns zur Löschung auffordern, Ihre Einwilligung zur Speicherung widerrufen oder der Zweck für die Datenspeicherung entfällt (z. B. nach abgeschlossener Bearbeitung Ihres Anliegens). Zwingende gesetzliche Bestimmungen – insbesondere steuer- und handelsrechtliche Aufbewahrungsfristen – bleiben unberührt. Sie haben jederzeit das Recht, unentgeltlich Auskunft über Herkunft, Empfänger und Zweck Ihrer gespeicherten personenbezogenen Daten zu erhalten. Ihnen steht außerdem ein Recht auf Widerspruch, auf Datenübertragbarkeit und ein Beschwerderecht bei der zuständigen Aufsichtsbehörde zu. Ferner können Sie die Berichtigung, die Löschung und unter bestimmten Umständen die Einschränkung der Verarbeitung Ihrer personenbezogenen Daten verlangen. Details entnehmen Sie meiner Datenschutzerklärung unterhttps://lichttraeumer.de/datenschutz/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Bildschirmfoto 2022-10-07 um 11.42.46.png Type: image/png Size: 42977 bytes Desc: not available URL: From francis at daoine.org Fri Oct 7 12:14:28 2022 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Oct 2022 13:14:28 +0100 Subject: Nginx does not serve avif In-Reply-To: References: <4423e587-fc54-39c4-baaf-b4d5f0c18f21@martin-wolfert.de> <20220928211822.GO9502@daoine.org> <9e903e26-0611-79f1-62b1-20b553c379e8@martin-wolfert.de> <20221006214332.GA4185@daoine.org> Message-ID: <20221007121428.GB4185@daoine.org> On Fri, Oct 07, 2022 at 02:00:44PM +0200, Martin Wolfert wrote: Hi there, > i found the issue! Good stuff! > Solution: When enabling the webp caching compatibility in WP Rocket > (WordPress plugin), the nginx rules / config could not work. Because WP > Rocket adds ".webp" as suffix to all .jpg images. So having the suffix set > to bla.jpg.webp, the Nginx location ( /location ~ \.(jpg|png)$ {/ ) for sure > could not match! So disabling the webp caching compatibilty in WP Rocket > solves the problem. Nice. My next guess would have been that the browser was requesting thing.jpg, and getting back content that was not a jpeg image, and was getting confused by that mismatch. 
My guess would have been wrong :-) Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Oct 10 13:31:28 2022 From: nginx-forum at forum.nginx.org (ankitpatodiya) Date: Mon, 10 Oct 2022 09:31:28 -0400 Subject: Running nginx tests fails with error nginx: invalid option: "e" Message-ID: <1046822059e86dc70fbebabda7bdce43.NginxMailingListEnglish@forum.nginx.org> I am trying to run nginx test using below command (nginx version is 1.16.1) TEST_NGINX_BINARY=/usr/sbin/nginx TEST_NGINX_UNSAFE=true prove ${PROJECT_SOURCE_DIR}/nginx-tests but get below error - nginx: invalid option: "e" Can't start nginx at lib/Test/Nginx.pm line 350. Can someone please clarify the reason for the error? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295433,295433#msg-295433 From nginx-forum at forum.nginx.org Mon Oct 10 13:38:01 2022 From: nginx-forum at forum.nginx.org (ankitpatodiya) Date: Mon, 10 Oct 2022 09:38:01 -0400 Subject: Running nginx tests fails with error nginx: invalid option: "e" In-Reply-To: <1046822059e86dc70fbebabda7bdce43.NginxMailingListEnglish@forum.nginx.org> References: <1046822059e86dc70fbebabda7bdce43.NginxMailingListEnglish@forum.nginx.org> Message-ID: Tests run are from https://github.com/nginx/nginx-tests Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295433,295434#msg-295434 From mdounin at mdounin.ru Mon Oct 10 15:24:36 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Oct 2022 18:24:36 +0300 Subject: Running nginx tests fails with error nginx: invalid option: "e" In-Reply-To: <1046822059e86dc70fbebabda7bdce43.NginxMailingListEnglish@forum.nginx.org> References: <1046822059e86dc70fbebabda7bdce43.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Mon, Oct 10, 2022 at 09:31:28AM -0400, ankitpatodiya wrote: > I am trying to run nginx test using below command (nginx version is 1.16.1) > > TEST_NGINX_BINARY=/usr/sbin/nginx TEST_NGINX_UNSAFE=true prove > ${PROJECT_SOURCE_DIR}/nginx-tests > > but get below error - > > nginx: invalid option: "e" > Can't start nginx at lib/Test/Nginx.pm line 350. > > Can someone please clarify the reason for the error? You are trying to run tests with an nginx version which is obsolete and no longer supported by the test suite, see this commit: http://hg.nginx.org/nginx-tests/rev/ba625d5a02e4 Supported versions are mainline 1.23.x and stable 1.22.x. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Tue Oct 11 22:34:43 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Oct 2022 23:34:43 +0100 Subject: Installing two versions of PHP-FMP? In-Reply-To: References: Message-ID: <20221011223443.GC4185@daoine.org> On Wed, Oct 05, 2022 at 12:08:34AM +0200, Gilles Ganault wrote: Hi there, > I only have shallow experience with nginx. > > To migrate an old php5-based application to the latest release which expects > php7, I'd like to install both versions of PHP-FPM in one nginx server. php (with php-fpm) is independent of nginx; so the way to install one or more versions of php is "whatever your operating system wants". So do that, to end up with (probably) one tcp port listener (or unix domain socket) for the php5 fastcgi server, plus one for the php7 fastcgi server. ("fpm" = "fastcgi", in this context.) 
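For example, the two pools' "listen" settings (set in each php-fpm
version's own pool configuration file; the socket paths here are purely
illustrative) might end up looking like:

    ; pool file for the php5 fpm service, e.g. php5.6-fpm
    listen = /tmp/php5.sock

    ; pool file for the php7 fpm service, e.g. php7.4-fpm
    listen = /tmp/php7.sock

Those two sockets (or tcp ports) are then what the nginx side points
"fastcgi_pass" at.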
> Although I read elsewhere it's a mistake to install the php package instead > of php-fpm because the former also installs Apache… this is what this > document This, and the link in the parallel reply, show how to run one nginx process, configured to run two server{} blocks (which means "two host names"); and one server{} only uses php5 and the other only uses php7. Depending on the applications involved, that might be the simplest way to deploy them. However, there is no reason not to use one server{} block, provided that you have a way of knowing which requests should go to each fastcgi server. > So, what's the recommended way to set things up so that nginx can support > both interpreters and manage two versions of a web app in their respective > directory? Within the server{}, each incoming request is handled in one location{}. Make sure that requests that should be handled by php5 are handled in a location that does "fastcgi_pass" to the php5 server; and the other requests are handled in a location that does "fastcgi_pass" to the php7 server. That could be something like location ~ ^/app5/.*\.php { fastcgi_pass unix:/tmp/php5.sock; } location ~ \.php { fastcgi_pass unix:/tmp/php7.sock; } but the extra details for how each application is installed and what it expects, will matter. (And that config fragment would need extra supporting config, in order to be useful.) Good luck with it, f -- Francis Daly francis at daoine.org From edflecko at gmail.com Thu Oct 13 15:40:35 2022 From: edflecko at gmail.com (edflecko) Date: Thu, 13 Oct 2022 15:40:35 +0000 Subject: How to patch and/or upgrade Nginx from source in production environment? Message-ID: I'm curious how many people run Nginx in a production environment that was installed from source and not a package. For those people who are running Nginx in this manner, how do you keep Nginx patched when patches are released? How do you upgrade your existing Nginx in your production environment while minimizing downtime? Thank you, Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgnet.dev at gmail.com Thu Oct 13 15:49:23 2022 From: pgnet.dev at gmail.com (PGNet Dev) Date: Thu, 13 Oct 2022 11:49:23 -0400 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: References: Message-ID: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> Nginx is an easy build from source, thankfully. Deploying tarbal'd local source-builds to other machines is not terrible at all if you isolate your install DIR (e.g, 'everything' under /opt/nginx); ansible is your friend. But, it's a bit of a slog to deploy into usual distro env, avoid collisions, and if needed, cleanly uninstall. Certainly doable, but can be messy. To solve for that inconvenience, build your own packages from own sources on an open build system (e.g., SUSE's OBS, Fedora's COPR, etc), and install those packages via rpms. Or for that matter, even local rpmbuilds should be portable, as long as you correctly account for differences in target deployment ENVs. yes, rpm .spec files can be annoying. it's a trade-off. > I'm curious how many people run Nginx in a production environment that was installed from source and not a package. > > For those people who are running Nginx in this manner, how do you keep Nginx patched when patches are released? > > How do you upgrade your existing Nginx in your production environment while minimizing downtime? 
> > Thank you, > Ed > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From edflecko at gmail.com Thu Oct 13 16:02:31 2022 From: edflecko at gmail.com (edflecko) Date: Thu, 13 Oct 2022 16:02:31 +0000 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> References: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> Message-ID: Thank you for your reply! I should have mentioned that I'm running in an Ubuntu environment so I'm not sure if that makes much difference? I like the idea of installing from source because I can control all of the options, but I'm wondering if it's worth going that route in a production environment? Thoughts? Opinions? Ed On Thu, Oct 13, 2022 at 3:49 PM PGNet Dev wrote: > Nginx is an easy build from source, thankfully. > > Deploying tarbal'd local source-builds to other machines is not terrible > at all if you isolate your install DIR (e.g, 'everything' under > /opt/nginx); ansible is your friend. > > But, it's a bit of a slog to deploy into usual distro env, avoid > collisions, and if needed, cleanly uninstall. Certainly doable, but can be > messy. > > To solve for that inconvenience, build your own packages from own sources > on an open build system (e.g., SUSE's OBS, Fedora's COPR, etc), and install > those packages via rpms. > Or for that matter, even local rpmbuilds should be portable, as long as > you correctly account for differences in target deployment ENVs. > > yes, rpm .spec files can be annoying. it's a trade-off. > > > > I'm curious how many people run Nginx in a production environment that > was installed from source and not a package. > > > > For those people who are running Nginx in this manner, how do you keep > Nginx patched when patches are released? > > > > How do you upgrade your existing Nginx in your production environment > while minimizing downtime? > > > > Thank you, > > Ed > > > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From teward at thomas-ward.net Thu Oct 13 16:10:09 2022 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 13 Oct 2022 12:10:09 -0400 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: References: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> Message-ID: If you're on Ubuntu you have some tradeoffs by doing this yourself. You can surely uninstall the packages of nginx from Ubuntu and then compile and install it yourself on each system.  However, you will then need to redo this compiling and patch software yourself.  This is why the packaging exists in Ubuntu - to allow easy installation and patching of security vulns from the Security team (yes, Ubuntu Security Team and Ubuntu Server Team both work to patch nginx in the various releases of Ubuntu!). You will lose that automated security patching, etc. and will have to recompile your software on every machine every time there's a security update if you do this yourself. You can do the packaging yourself in a private repository (which will be basically a 'from source' compile with the configure options, etc. YOU want there to be), and then that package installs the compiled binaries, etc. 
to whatever system you install that package on, but again you then have to patch it yourself. There's pros and cons to every approach, especially security related concerns for software patching.  The question is, how big is this 'production environment' and do you want to have to recompile and reinstall every time there's a patch for a security problem. Thomas On 10/13/22 12:02, edflecko wrote: > Thank you for your reply! > > I should have mentioned that I'm running in an Ubuntu environment so > I'm not sure if that makes much difference? I like the idea of > installing from source because I can control all of the options, but > I'm wondering if it's worth going that route in a production environment? > > Thoughts? Opinions? > > Ed > > On Thu, Oct 13, 2022 at 3:49 PM PGNet Dev wrote: > > Nginx is an easy build from source, thankfully. > > Deploying tarbal'd local source-builds to other machines is not > terrible at all if you isolate your install DIR (e.g, 'everything' > under /opt/nginx); ansible is your friend. > > But, it's a bit of a slog to deploy into usual distro env, avoid > collisions, and if needed, cleanly uninstall.  Certainly doable, > but can be messy. > > To solve for that inconvenience, build your own packages from own > sources on an open build system (e.g., SUSE's OBS, Fedora's COPR, > etc), and install those packages via rpms. > Or for that matter, even local rpmbuilds should be portable, as > long as you correctly account for differences in target deployment > ENVs. > > yes, rpm .spec files can be annoying. it's a trade-off. > > > > I'm curious how many people run Nginx in a production > environment that was installed from source and not a package. > > > > For those people who are running Nginx in this manner, how do > you keep Nginx patched when patches are released? > > > > How do you upgrade your existing Nginx in your production > environment while minimizing downtime? > > > > Thank you, > > Ed > > > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org > > > _______________________________________________ > nginx mailing list --nginx at nginx.org > To unsubscribe send an email tonginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgnet.dev at gmail.com Thu Oct 13 16:18:57 2022 From: pgnet.dev at gmail.com (PGNet Dev) Date: Thu, 13 Oct 2022 12:18:57 -0400 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: References: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> Message-ID: > I should have mentioned that I'm running in an Ubuntu environment so I'm not sure if that makes much difference? Ubuntu/Debian have all the tools for source builds. They also have the apt packaging solution. I assume there are available build services. I'm not an Ubuntu/Debian user. Simply a matter of preference. Beyond that, no opinion worth its salt :-/ > worth Per whose definition? Stuff breaks. You either live with it, patch it yourself, or ask someone else to patch it for you. What's the 'worth' to you of not having any particular breakage unsolved? i.e., it depends. > Thoughts? Opinions? Only, don't blindly do what others suggest. Do what works best for you. For me, 'my' distro chooses not to build/package nginx mainline. Or, build/config the way I want. So I do it myself, using the distro's build service & tools. Is it a PITA? sure. just less than not having what I need. 
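Regarding the earlier question about patching a source build with minimal
downtime: a rough sketch, assuming a standard source tree, your usual
configure options, and the default pid file location (adjust paths to your
own install), is to rebuild and then use nginx's documented signal-based
binary upgrade instead of a full stop/start:

    # apply the published patch (if any) and rebuild
    patch -p1 < fix.patch
    ./configure <your usual options>
    make && sudo make install      # installs the new binary over the old one

    # on-the-fly binary upgrade (http://nginx.org/en/docs/control.html)
    sudo kill -USR2  $(cat /run/nginx.pid)          # old master execs new binary
    sudo kill -WINCH $(cat /run/nginx.pid.oldbin)   # gracefully stop old workers
    sudo kill -QUIT  $(cat /run/nginx.pid.oldbin)   # shut down the old master

The generated Makefile in the source tree also provides an "upgrade" target
that sends the same signals for you.
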
From edflecko at gmail.com Thu Oct 13 16:21:44 2022 From: edflecko at gmail.com (edflecko) Date: Thu, 13 Oct 2022 16:21:44 +0000 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: References: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> Message-ID: Thank you both for your replies. While my server would be a production environment, it would only consist of the single Ubuntu server and the single instance of Nginx that would be running no more than 10-12 websites. The server is virtual, so any needed changes would be scheduled and the server would be "snapshotted" so if any changes break Nginx, the server would be easily rolled back. The primary concern would be patching the running version of Nginx. I subscribe to Nginx announcements, but I don't know the process to install patches. Can anyone tell me the process and/or point me toward any tutorials? Ed On Thu, Oct 13, 2022 at 4:18 PM PGNet Dev wrote: > > I should have mentioned that I'm running in an Ubuntu environment so I'm > not sure if that makes much difference? > > Ubuntu/Debian have all the tools for source builds. > They also have the apt packaging solution. > I assume there are available build services. > > I'm not an Ubuntu/Debian user. Simply a matter of preference. > > Beyond that, no opinion worth its salt :-/ > > > worth > > Per whose definition? > > Stuff breaks. You either live with it, patch it yourself, or ask someone > else to patch it for you. > > What's the 'worth' to you of not having any particular breakage unsolved? > > i.e., it depends. > > > Thoughts? Opinions? > > Only, don't blindly do what others suggest. Do what works best for you. > > For me, 'my' distro chooses not to build/package nginx mainline. Or, > build/config the way I want. So I do it myself, using the distro's build > service & tools. > Is it a PITA? sure. just less than not having what I need. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Thu Oct 13 16:28:54 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 13 Oct 2022 19:28:54 +0300 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: References: Message-ID: Hi, hope you're doing well. On Thu, Oct 13, 2022 at 03:40:35PM +0000, edflecko wrote: > I'm curious how many people run Nginx in a production environment that was > installed from source and not a package. > > For those people who are running Nginx in this manner, how do you keep > Nginx patched when patches are released? > > How do you upgrade your existing Nginx in your production environment while > minimizing downtime? pkgsrc [1] is the one of the good choices to automate builds and manage dependences in a non-root environment on your favorite operating system. References 1. https://pkgsrc.org/ -- Sergey A. Osokin From pgnet.dev at gmail.com Thu Oct 13 17:09:44 2022 From: pgnet.dev at gmail.com (PGNet Dev) Date: Thu, 13 Oct 2022 13:09:44 -0400 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: References: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> Message-ID: > I don't know the process to install patches. That's a big ol' red flag. Personally, I'd strongly recommend against building/installing into a *production* env, until you're up to snuff with managing the sources, including patches. That said, are you solving for a real/existing production problem you have? Or more a want-to-learn-how-to-build exercise? 
Looking here https://packages.ubuntu.com/search?keywords=nginx https://changelogs.ubuntu.com/changelogs/pool/main/n/nginx/nginx_1.18.0-6ubuntu14.2/changelog https://changelogs.ubuntu.com/changelogs/pool/main/n/nginx/nginx_1.22.0-1ubuntu1/changelog at first glance it sure looks like sources/packages are actively patched & maintained Is there a specific example of an nginx patch your production environment needed that isn't/wasn't acted upon? If so, had your raised it first with the maintainers, and they refused or failed to act? Or is there a version that you need for valid reasons that isn't available to you? > pkgsrc [1] is the one of the good choices to automate builds and manage dependences in a non-root environment on your favorite operating system. +1 there are many. each is its own rabbit-hole, with its own infrastructure & process gotchas. i.e., another layer of stuff/complexity. once mastered, sure -- great to have. From edflecko at gmail.com Thu Oct 13 17:38:55 2022 From: edflecko at gmail.com (edflecko) Date: Thu, 13 Oct 2022 17:38:55 +0000 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: References: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> Message-ID: My primary driving reason for considering the deployment of Nginx from source is to use ModSecurity WAF with Nginx. I'm under the impression that it's much easier to use ModSecurity with Nginx when compiled from source. My only goal of installing patches would simply be to keep the install up to date from a security and/or stability perspective. Finally, in part this install would be a goal of mine to learn to patch and maintain a source installation. Ed On Thu, Oct 13, 2022 at 5:09 PM PGNet Dev wrote: > > I don't know the process to install patches. > > That's a big ol' red flag. Personally, I'd strongly recommend against > building/installing into a *production* env, until you're up to snuff with > managing the sources, including patches. > > That said, are you solving for a real/existing production problem you > have? Or more a want-to-learn-how-to-build exercise? > > Looking here > > https://packages.ubuntu.com/search?keywords=nginx > > https://changelogs.ubuntu.com/changelogs/pool/main/n/nginx/nginx_1.18.0-6ubuntu14.2/changelog > > https://changelogs.ubuntu.com/changelogs/pool/main/n/nginx/nginx_1.22.0-1ubuntu1/changelog > > at first glance it sure looks like sources/packages are actively patched & > maintained > > Is there a specific example of an nginx patch your production environment > needed that isn't/wasn't acted upon? > If so, had your raised it first with the maintainers, and they refused or > failed to act? > Or is there a version that you need for valid reasons that isn't available > to you? > > > > pkgsrc [1] is the one of the good choices to automate builds and manage > dependences in a non-root environment on your favorite operating system. > > +1 > > there are many. > > each is its own rabbit-hole, with its own infrastructure & process > gotchas. i.e., another layer of stuff/complexity. once mastered, sure -- > great to have. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgnet.dev at gmail.com Thu Oct 13 17:54:30 2022 From: pgnet.dev at gmail.com (PGNet Dev) Date: Thu, 13 Oct 2022 13:54:30 -0400 Subject: How to patch and/or upgrade Nginx from source in production environment? 
In-Reply-To: References: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> Message-ID: <50d76762-6b71-e27f-a214-ffaf62c13e84@gmail.com> > My primary driving reason for considering the deployment of Nginx from source is to use ModSecurity WAF with Nginx. I'm under the impression that it's much easier to use ModSecurity with Nginx when compiled from source. If ModSecurity is the issue ... There are old instructions easily found ON the nginx.com site, https://www.nginx.com/blog/compiling-and-installing-modsecurity-for-open-source-nginx/ for building it as a dynamic module, which can be separately built and added to a packaged nginx build. not required to rebuild/repackage/reinstall nginx itself. of course, you need to match source version to your pkg'd version. but note, NGINX is dumping ... er ... Transitioning to End-of-Life ... ModSecurity support, F5 NGINX ModSecurity WAF Is Transitioning to End-of-Life https://www.nginx.com/blog/f5-nginx-modsecurity-waf-transitioning-to-eol/ and that ModSecurity itself is on its way out, Talking about ModSecurity and the new Coraza WAF https://coreruleset.org/20211222/talking-about-modsecurity-and-the-new-coraza-waf/ but not quite dead yet. in the interim, there's ModSecurity v3/master https://github.com/SpiderLabs/ModSecurity , with a new architecture, and a specific Nginx connector https://github.com/SpiderLabs/ModSecurity-nginx which can, similarly to the above, be built/added as a dynamic module, and still works well enough. and here's a useful tutorial for setting up Nginx + LibModsecurity Configure LibModsecurity with Nginx on CentOS 8 https://kifarunix.com/configure-libmodsecurity-with-nginx-on-centos-8/ From edflecko at gmail.com Thu Oct 13 20:19:14 2022 From: edflecko at gmail.com (edflecko) Date: Thu, 13 Oct 2022 20:19:14 +0000 Subject: How to patch and/or upgrade Nginx from source in production environment? In-Reply-To: <50d76762-6b71-e27f-a214-ffaf62c13e84@gmail.com> References: <5b0f78de-7cbe-c2a0-87bb-92adcb2bcb3a@gmail.com> <50d76762-6b71-e27f-a214-ffaf62c13e84@gmail.com> Message-ID: Thank you for all of your input! Ed On Thu, Oct 13, 2022 at 5:54 PM PGNet Dev wrote: > > My primary driving reason for considering the deployment of Nginx from > source is to use ModSecurity WAF with Nginx. I'm under the impression that > it's much easier to use ModSecurity with Nginx when compiled from source. > > If ModSecurity is the issue ... > > There are old instructions easily found ON the nginx.com site, > > > https://www.nginx.com/blog/compiling-and-installing-modsecurity-for-open-source-nginx/ > > for building it as a dynamic module, which can be separately built and > added to a packaged nginx build. not required to > rebuild/repackage/reinstall nginx itself. of course, you need to match > source version to your pkg'd version. > > but note, NGINX is dumping ... er ... Transitioning to End-of-Life ... > ModSecurity support, > > F5 NGINX ModSecurity WAF Is Transitioning to End-of-Life > > https://www.nginx.com/blog/f5-nginx-modsecurity-waf-transitioning-to-eol/ > > and that ModSecurity itself is on its way out, > > Talking about ModSecurity and the new Coraza WAF > > https://coreruleset.org/20211222/talking-about-modsecurity-and-the-new-coraza-waf/ > > but not quite dead yet. 
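a minimal sketch of that dynamic-module build (module path and names are
placeholders; the nginx source tree must match the packaged version
reported by `nginx -v`):

    ./configure --with-compat --add-dynamic-module=../<module-source-dir>
    make modules
    # copy the resulting objs/*.so into the package's modules directory,
    # then load it in nginx.conf:
    #   load_module modules/<module>.so;
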
in the interim, there's ModSecurity v3/master > > https://github.com/SpiderLabs/ModSecurity > > , with a new architecture, and a specific Nginx connector > > https://github.com/SpiderLabs/ModSecurity-nginx > > which can, similarly to the above, be built/added as a dynamic module, and > still works well enough. > > and here's a useful tutorial for setting up Nginx + LibModsecurity > > Configure LibModsecurity with Nginx on CentOS 8 > > https://kifarunix.com/configure-libmodsecurity-with-nginx-on-centos-8/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreriopreto at gmail.com Fri Oct 14 01:03:21 2022 From: andreriopreto at gmail.com (Andre Pedro) Date: Thu, 13 Oct 2022 21:03:21 -0400 Subject: IPV6 UDP port 6343 Message-ID: Hello folks, I am trying to get the below load balancer config to work in our environment. Basically, this server should receive v6 udp packets sflow sample packets on port 6343 and forward out to these 3 servers part of stream_backend. However, it doesn't seem to work. If I switch to v4, it does work. Can someone please let me know if I am missing something? # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 100000000; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; # server { # listen 80 default_server; # listen [::]:80 default_server; # server_name _; # root /usr/share/nginx/html; # Load configuration files for the default server block. # include /etc/nginx/default.d/*.conf; # location / { # } # # error_page 404 /404.html; # location = /40x.html { # } # # error_page 500 502 503 504 /50x.html; # location = /50x.html { # } # } # Settings for a TLS enabled server. # # server { # listen 443 ssl http2 default_server; # listen [::]:443 ssl http2 default_server; # server_name _; # root /usr/share/nginx/html; # # ssl_certificate "/etc/pki/nginx/server.crt"; # ssl_certificate_key "/etc/pki/nginx/private/server.key"; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 10m; # ssl_ciphers PROFILE=SYSTEM; # ssl_prefer_server_ciphers on; # # # Load configuration files for the default server block. # include /etc/nginx/default.d/*.conf; # # location / { # } # # error_page 404 /404.html; # location = /40x.html { # } # # error_page 500 502 503 504 /50x.html; # location = /50x.html { # } # } } stream { upstream stream_backend { # least_conn; server [2001:10:80:70::171]:6343; server [2001:10:80:70::172]:6343; server [2001:10:80:70::173]:6343; } server { listen [::]:6343 udp; # listen 6343 udp; proxy_pass stream_backend; proxy_timeout 3s; proxy_connect_timeout 60s; proxy_responses 0; error_log /home/lab/sflow_lb.log; # access_log /home/lab/nginx-access.log; } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From osa at freebsd.org.ru Fri Oct 14 02:52:29 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Fri, 14 Oct 2022 05:52:29 +0300 Subject: IPV6 UDP port 6343 In-Reply-To: References: Message-ID: Hi Andre, thanks for the report. On Thu, Oct 13, 2022 at 09:03:21PM -0400, Andre Pedro wrote: > Hello folks, > > I am trying to get the below load balancer config to work in our > environment. Basically, this server should receive v6 udp packets sflow > sample packets on port 6343 and forward out to these 3 servers part of > stream_backend. However, it doesn't seem to work. If I switch to v4, it > does work. [...] > events { > worker_connections 100000000; > } Could you please provide an explaination about that one. I'm too curious how did you get that number. Thank you. -- Sergey A. Osokin From andreriopreto at gmail.com Fri Oct 14 11:33:17 2022 From: andreriopreto at gmail.com (Andre Pedro) Date: Fri, 14 Oct 2022 07:33:17 -0400 Subject: IPV6 UDP port 6343 In-Reply-To: References: Message-ID: Hello Sergey, I was actually seeing access errors due to the number of connections I guess and then when I changed that, it worked for v4. do you think this is related to my problem? This is just a random(big number) that I've decided on. Thanks, Andre Pedro On Thu, Oct 13, 2022 at 10:53 PM Sergey A. Osokin wrote: > Hi Andre, > > thanks for the report. > > On Thu, Oct 13, 2022 at 09:03:21PM -0400, Andre Pedro wrote: > > Hello folks, > > > > I am trying to get the below load balancer config to work in our > > environment. Basically, this server should receive v6 udp packets sflow > > sample packets on port 6343 and forward out to these 3 servers part of > > stream_backend. However, it doesn't seem to work. If I switch to v4, it > > does work. > > [...] > > > events { > > worker_connections 100000000; > > } > > Could you please provide an explaination about that one. > I'm too curious how did you get that number. > > Thank you. > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Fri Oct 14 14:55:39 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Fri, 14 Oct 2022 17:55:39 +0300 Subject: IPV6 UDP port 6343 In-Reply-To: References: Message-ID: Hi Andre, On Fri, Oct 14, 2022 at 07:33:17AM -0400, Andre Pedro wrote: > On Thu, Oct 13, 2022 at 10:53 PM Sergey A. Osokin > wrote: > > On Thu, Oct 13, 2022 at 09:03:21PM -0400, Andre Pedro wrote: > > > > > > I am trying to get the below load balancer config to work in our > > > environment. Basically, this server should receive v6 udp packets sflow > > > sample packets on port 6343 and forward out to these 3 servers part of > > > stream_backend. However, it doesn't seem to work. If I switch to v4, it > > > does work. > > > > [...] > > > > > events { > > > worker_connections 100000000; > > > } > > > > Could you please provide an explaination about that one. > > I'm too curious how did you get that number. > > I was actually seeing access errors due to the number of connections I > guess and then when I changed that, it worked for v4. do you think this is > related to my problem? That's quite possible. > This is just a random(big number) that I've decided on. Have you tuned any other system resources or parameters to work with such a huge number? How many worker processes in that instance? -- Sergey A. 
Osokin From andreriopreto at gmail.com Mon Oct 17 23:20:11 2022 From: andreriopreto at gmail.com (Andre Pedro) Date: Mon, 17 Oct 2022 19:20:11 -0400 Subject: IPV6 UDP port 6343 In-Reply-To: References: Message-ID: hello Sergey, No, I have not.. the reason I've set this number quite high is because I am expecting a huge amount of packets destined to UDP port 6343.. so I was just playing with numbers really. What number would you recommend? Also, why does it work with ipv4? thanks, Andre Pedro Cisco Systems On Fri, Oct 14, 2022 at 10:56 AM Sergey A. Osokin wrote: > Hi Andre, > > On Fri, Oct 14, 2022 at 07:33:17AM -0400, Andre Pedro wrote: > > On Thu, Oct 13, 2022 at 10:53 PM Sergey A. Osokin > > wrote: > > > On Thu, Oct 13, 2022 at 09:03:21PM -0400, Andre Pedro wrote: > > > > > > > > I am trying to get the below load balancer config to work in our > > > > environment. Basically, this server should receive v6 udp packets > sflow > > > > sample packets on port 6343 and forward out to these 3 servers part > of > > > > stream_backend. However, it doesn't seem to work. If I switch to v4, > it > > > > does work. > > > > > > [...] > > > > > > > events { > > > > worker_connections 100000000; > > > > } > > > > > > Could you please provide an explaination about that one. > > > I'm too curious how did you get that number. > > > > I was actually seeing access errors due to the number of connections I > > guess and then when I changed that, it worked for v4. do you think this > is > > related to my problem? > > That's quite possible. > > > This is just a random(big number) that I've decided on. > > Have you tuned any other system resources or parameters to work with such > a huge number? How many worker processes in that instance? > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Tue Oct 18 00:37:26 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 18 Oct 2022 03:37:26 +0300 Subject: IPV6 UDP port 6343 In-Reply-To: References: Message-ID: On Mon, Oct 17, 2022 at 07:20:11PM -0400, Andre Pedro wrote: > On Fri, Oct 14, 2022 at 10:56 AM Sergey A. Osokin wrote: > > On Fri, Oct 14, 2022 at 07:33:17AM -0400, Andre Pedro wrote: > > > > [...] > > > > > > > > > events { > > > > > worker_connections 100000000; > > > > > } > > > > > > > > Could you please provide an explaination about that one. > > > > > This is just a random(big number) that I've decided on. > > > > Have you tuned any other system resources or parameters to work with such > > a huge number? How many worker processes in that instance? > > No, I have not.. the reason I've set this number quite high is because I am > expecting a huge amount of packets destined to UDP port 6343.. so I was > just playing with numbers really. > > What number would you recommend? Also, why does it work with ipv4? I'd recommend to use the default value until you have an evidence that the default value needs to be tweaked. All other numbers need to be tested pretty well with the same traffic volume and pattern. Please note that the worker_connections is the number of sumiltaneous connections per worker. The configuration file you've shared contains the following line: worker_process auto; So, the number of worker process depends on a number of CPUs in the system. Hope that helps. -- Sergey A. 
Osokin From Devendra.S.Daiya at wellsfargo.com Tue Oct 18 14:12:59 2022 From: Devendra.S.Daiya at wellsfargo.com (Devendra.S.Daiya at wellsfargo.com) Date: Tue, 18 Oct 2022 14:12:59 +0000 Subject: NGINX 1.21.x EOL? Message-ID: <419df71af6d04406aa0a7341701fcdf6@wellsfargo.com> Hi, I don't see 1.21.6 available for Download. Is it already End Of Life? nginx: download I don't see any update on NGINX webpage. Could anyone please share the announcement link from NGINX that says 1.21.x no more supported. Or any other reference. Thanks. Regards, Dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Tue Oct 18 14:18:26 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 18 Oct 2022 17:18:26 +0300 Subject: IPV6 UDP port 6343 In-Reply-To: References: Message-ID: Two more comments. On Thu, Oct 13, 2022 at 09:03:21PM -0400, Andre Pedro wrote: [...] > include /usr/share/nginx/modules/*.conf; This line can be safely removed in case you're using plain nginx' stream functionality. [...] > http { > ... > } The http {} block can be safely removed as well. Thank you. -- Sergey A. Osokin From osa at freebsd.org.ru Tue Oct 18 14:26:56 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 18 Oct 2022 17:26:56 +0300 Subject: NGINX 1.21.x EOL? In-Reply-To: <419df71af6d04406aa0a7341701fcdf6@wellsfargo.com> References: <419df71af6d04406aa0a7341701fcdf6@wellsfargo.com> Message-ID: Hi Devendra, hope you're doing well. On Tue, Oct 18, 2022 at 02:12:59PM +0000, Devendra.S.Daiya--- via nginx wrote: > > I don't see 1.21.6 available for Download. nginx 1.21.6 is available for download, [1], here's the direct link, [2]. > Is it already End Of Life? Yes, it is, please welcome to 1.23.x series. > I don't see any update on NGINX webpage. Could anyone please share the > announcement link from NGINX that says 1.21.x no more supported. > Or any other reference. Every year the NGINX Development team creates a new branch. The site, [3], in the "nginx news" section has the following message about that: 2022-06-21 nginx-1.23.0 mainline version has been released. That means that previous branch is EoS. References 1. https://nginx.org/download/ 2. https://nginx.org/download/nginx-1.21.6.tar.gz 3. https://nginx.org/ Thank you. -- Sergey A. Osokin From francis at daoine.org Tue Oct 18 14:48:44 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 18 Oct 2022 15:48:44 +0100 Subject: NGINX 1.21.x EOL? In-Reply-To: <419df71af6d04406aa0a7341701fcdf6@wellsfargo.com> References: <419df71af6d04406aa0a7341701fcdf6@wellsfargo.com> Message-ID: <20221018144844.GD4185@daoine.org> On Tue, Oct 18, 2022 at 02:12:59PM +0000, Devendra.S.Daiya--- via nginx wrote: Hi there, I don't speak for nginx-the-company, or for nginx-the-software. But from knowing some of the history... > I don't see 1.21.6 available for Download. Is it already End Of Life? nginx: download I don't see any 1.odd-number versions on that page, other than the most recent. 1.odd-number is the "mainline/development" version. Generally, if you are using it, you should be tracking updates yourself. You can find all of the tagged versions by following the "Source Code" links further down the page. Simplest is probably to go the read-only code repository and click "tags", to get to http://hg.nginx.org/nginx/tags > I don't see any update on NGINX webpage. Could anyone please share the announcement link from NGINX that says 1.21.x no more supported. Or any other reference. > What would you like "supported" to mean? 
What it actually means is described at https://nginx.org/en/support.html The licence is at https://nginx.org/LICENSE If you've got a problem with using the code, this list is as good a place as any to ask questions and generally help out; and someone will probably respond at some point. But realistically, problems with an older development version are likely to be most quickly addressed by using a current development version. Of course, if the same problem can be shown in whatever version someone is using, there is a better chance that they'll be able to see if a config change can address the problem. Good luck with it, f -- Francis Daly francis at daoine.org From paul at stormy.ca Tue Oct 18 14:58:27 2022 From: paul at stormy.ca (Paul) Date: Tue, 18 Oct 2022 10:58:27 -0400 Subject: NGINX 1.21.x EOL? In-Reply-To: References: <419df71af6d04406aa0a7341701fcdf6@wellsfargo.com> Message-ID: <68dbcac7-fe6c-224e-6fbc-82b38ebfd7b8@stormy.ca> On 2022-10-18 10:26, Sergey A. Osokin wrote: [snip] > nginx 1.21.6 is available for download, [1], here's the direct link, [2]. > >> Is it already End Of Life? > > Yes, it is, please welcome to 1.23.x series. Interesting. All the servers that I run in production are either Ubuntu 22.04LTS or Debian bullseye (both are latest stable releases) and I find that they are still at 1.18.0-6ubuntu14.1 or 1.18.0-6.1+deb11u2 Now I fully recognize that package managers are a tad conservative, and that both Debian and Ubuntu try and stay on top of security patches, but "end of life" sounds a bit scary ;=} Best -- Paul From osa at freebsd.org.ru Tue Oct 18 15:16:24 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 18 Oct 2022 18:16:24 +0300 Subject: NGINX 1.21.x EOL? In-Reply-To: <68dbcac7-fe6c-224e-6fbc-82b38ebfd7b8@stormy.ca> References: <419df71af6d04406aa0a7341701fcdf6@wellsfargo.com> <68dbcac7-fe6c-224e-6fbc-82b38ebfd7b8@stormy.ca> Message-ID: On Tue, Oct 18, 2022 at 10:58:27AM -0400, Paul wrote: > On 2022-10-18 10:26, Sergey A. Osokin wrote: > [snip] > > > nginx 1.21.6 is available for download, [1], here's the direct link, [2]. > > > >> Is it already End Of Life? > > > > Yes, it is, please welcome to 1.23.x series. > > Interesting. All the servers that I run in production are either Ubuntu > 22.04LTS or Debian bullseye (both are latest stable releases) and I find > that they are still at 1.18.0-6ubuntu14.1 or 1.18.0-6.1+deb11u2 > > Now I fully recognize that package managers are a tad conservative, and > that both Debian and Ubuntu try and stay on top of security patches, but > "end of life" sounds a bit scary ;=} I can add that official packages for the recent mainline and stable releases are available for a list of platforms on the following page, [1]. In case of an additional interest how those pacakges have been built, please visit [2]. Referencens 1. https://nginx.org/en/linux_packages.html 2. https://hg.nginx.org/pkg-oss/file/tip Thank you. -- Sergey A. Osokin From Devendra.S.Daiya at wellsfargo.com Tue Oct 18 18:28:38 2022 From: Devendra.S.Daiya at wellsfargo.com (Devendra.S.Daiya at wellsfargo.com) Date: Tue, 18 Oct 2022 18:28:38 +0000 Subject: NGINX 1.21.x EOL? In-Reply-To: References: <419df71af6d04406aa0a7341701fcdf6@wellsfargo.com> <68dbcac7-fe6c-224e-6fbc-82b38ebfd7b8@stormy.ca> Message-ID: Thanks Everyone for response. This really helps 😊 Regards, Dev -----Original Message----- From: Sergey A. Osokin Sent: Tuesday, October 18, 2022 8:46 PM To: nginx at nginx.org Subject: Re: NGINX 1.21.x EOL? 
On Tue, Oct 18, 2022 at 10:58:27AM -0400, Paul wrote: > On 2022-10-18 10:26, Sergey A. Osokin wrote: > [snip] > > > nginx 1.21.6 is available for download, [1], here's the direct link, [2]. > > > >> Is it already End Of Life? > > > > Yes, it is, please welcome to 1.23.x series. > > Interesting. All the servers that I run in production are either > Ubuntu 22.04LTS or Debian bullseye (both are latest stable releases) > and I find that they are still at 1.18.0-6ubuntu14.1 or > 1.18.0-6.1+deb11u2 > > Now I fully recognize that package managers are a tad conservative, > and that both Debian and Ubuntu try and stay on top of security > patches, but "end of life" sounds a bit scary ;=} I can add that official packages for the recent mainline and stable releases are available for a list of platforms on the following page, [1]. In case of an additional interest how those pacakges have been built, please visit [2]. Referencens 1. https://urldefense.com/v3/__https://nginx.org/en/linux_packages.html__;!!F9svGWnIaVPGSwU!pS0OotMd-Jh-z-SVj2S5lPtPdwEFnW91q8dCjWfH3dBT_Fgs2b2ZshlxTDq6w3yVKuxxfB0BWo4NrQ6bUeTN5Zc$ 2. https://urldefense.com/v3/__https://hg.nginx.org/pkg-oss/file/tip__;!!F9svGWnIaVPGSwU!pS0OotMd-Jh-z-SVj2S5lPtPdwEFnW91q8dCjWfH3dBT_Fgs2b2ZshlxTDq6w3yVKuxxfB0BWo4NrQ6bwEbXfxA$ Thank you. -- Sergey A. Osokin _______________________________________________ nginx mailing list -- nginx at nginx.org To unsubscribe send an email to nginx-leave at nginx.org From mdounin at mdounin.ru Wed Oct 19 12:10:14 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Oct 2022 15:10:14 +0300 Subject: nginx-1.23.2 Message-ID: Changes with nginx 1.23.2 19 Oct 2022 *) Security: processing of a specially crafted mp4 file by the ngx_http_mp4_module might cause a worker process crash, worker process memory disclosure, or might have potential other impact (CVE-2022-41741, CVE-2022-41742). *) Feature: the "$proxy_protocol_tlv_..." variables. *) Feature: TLS session tickets encryption keys are now automatically rotated when using shared memory in the "ssl_session_cache" directive. *) Change: the logging level of the "bad record type" SSL errors has been lowered from "crit" to "info". Thanks to Murilo Andrade. *) Change: now when using shared memory in the "ssl_session_cache" directive the "could not allocate new session" errors are logged at the "warn" level instead of "alert" and not more often than once per second. *) Bugfix: nginx/Windows could not be built with OpenSSL 3.0.x. *) Bugfix: in logging of the PROXY protocol errors. Thanks to Sergey Brester. *) Workaround: shared memory from the "ssl_session_cache" directive was spent on sessions using TLS session tickets when using TLSv1.3 with OpenSSL. *) Workaround: timeout specified with the "ssl_session_timeout" directive did not work when using TLSv1.3 with OpenSSL or BoringSSL. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Oct 19 12:10:45 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Oct 2022 15:10:45 +0300 Subject: nginx-1.22.1 Message-ID: Changes with nginx 1.22.1 19 Oct 2022 *) Security: processing of a specially crafted mp4 file by the ngx_http_mp4_module might cause a worker process crash, worker process memory disclosure, or might have potential other impact (CVE-2022-41741, CVE-2022-41742). 
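(Illustration, not part of the announcement: a quick way to check whether a given installation is exposed to these mp4 issues at all — the module has to be compiled in and the "mp4" directive has to be used somewhere. The configuration path below is an assumption.)

    # is nginx built with the optional mp4 module?
    nginx -V 2>&1 | grep -o with-http_mp4_module

    # is the "mp4" directive actually used in the configuration?
    grep -Rn 'mp4;' /etc/nginx/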
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Oct 19 12:11:22 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Oct 2022 15:11:22 +0300 Subject: nginx security advisory (CVE-2022-41741, CVE-2022-41742) Message-ID: Hello! Two security issues were identified in the ngx_http_mp4_module, which might allow an attacker to cause a worker process crash or worker process memory disclosure by using a specially crafted mp4 file, or might have potential other impact (CVE-2022-41741, CVE-2022-41742). The issues only affect nginx if it is built with the ngx_http_mp4_module (the module is not built by default) and the "mp4" directive is used in the configuration file. Further, the attack is only possible if an attacker is able to trigger processing of a specially crafted mp4 file with the ngx_http_mp4_module. The issues affect nginx 1.1.3+, 1.0.7+. The issues are fixed in 1.23.2, 1.22.1. Patch for the issues can be found here: http://nginx.org/download/patch.2022.mp4.txt -- Maxim Dounin http://nginx.org/ From dan.switzer at givainc.com Thu Oct 20 13:04:31 2022 From: dan.switzer at givainc.com (Dan G. Switzer, II) Date: Thu, 20 Oct 2022 09:04:31 -0400 Subject: Custom 413 error pages when client_max_body_size exceeded Message-ID: <6227bb5e-0494-e3b4-8e5d-97627ddb5006@givainc.com> I'm using nginx/1.20.1 under CentOS Linux release 7.9.2009 (Core) and I cannot get a custom error page to show when the client_max_body_size limit has been exceeded. The browser will only return the default nginx error page. I see a number of posts mentioning how there was a bug in earlier versions of nginx, but this appears to be have been fixed a long time ago. In my code, I have the following: > error_page 404 =404 /404_status_code.htm; > error_page 403 =404 /404_status_code.htm; > > location /404_status_code.htm { >     internal; >     root /path/to/my/custom/errors/; >     add_header X-Original-URL "$scheme://$http_host$request_uri" always; > } This works perfectly fine. When either a 403 or 404 error is generated, nginx returns my custom error page. However, if I change the code to: > error_page 404 =404 /404_status_code.htm; > error_page 403 =404 /404_status_code.htm; > error_page 413 =413 /413_request_too_large.htm; > > location /404_status_code.htm { >     internal; >     root /path/to/my/custom/errors/; >     add_header X-Original-URL "$scheme://$http_host$request_uri" always; > } > > location /413_request_too_large.htm { >     internal; >     root /path/to/my/custom/errors/; >     add_header X-Original-URL "$scheme://$http_host$request_uri" always; > } When I try to upload a file larger than my client_max_body_size setting, I still get the default error page. I've tried a lot of different variations of the code, but nothing seems to work. I've tried: > error_page 413 /413_request_too_large.htm; > location /413_request_too_large.htm { >     internal; >     root /path/to/my/custom/errors/; >     add_header X-Original-URL "$scheme://$http_host$request_uri" always; > } Using an handler instead: > error_page 413 @413_request_too_large > location @413_request_too_large { >     internal; >     root /path/to/my/custom/errors/; >     add_header X-Original-URL "$scheme://$http_host$request_uri" always; > } And every variation I can think of, but nothing seems to work. Is there something special that needs to be done to implement a custom error page for a 413 status code? Or is there perhaps a regression that broke this from working? -Dan -- Dan G. Switzer, II Giva, Inc. 
Email:dan.switzer at givainc.com Web Site:http://www.givainc.com See Our Customer Successes http://www.givainc.com/customers-casestudies.htm -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Thu Oct 20 15:05:27 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 20 Oct 2022 18:05:27 +0300 Subject: Custom 413 error pages when client_max_body_size exceeded In-Reply-To: <6227bb5e-0494-e3b4-8e5d-97627ddb5006@givainc.com> References: <6227bb5e-0494-e3b4-8e5d-97627ddb5006@givainc.com> Message-ID: Hi Dan, thanks for the report. On Thu, Oct 20, 2022 at 09:04:31AM -0400, Dan G. Switzer, II wrote: > I'm using nginx/1.20.1 under CentOS Linux release 7.9.2009 (Core) and I > cannot get a custom error page to show when the client_max_body_size > limit has been exceeded. The browser will only return the default nginx > error page. [...] > However, if I change the code to: > > > error_page 404 =404 /404_status_code.htm; > > error_page 403 =404 /404_status_code.htm; > > error_page 413 =413 /413_request_too_large.htm; > > > > location /404_status_code.htm { > >     internal; > >     root /path/to/my/custom/errors/; > >     add_header X-Original-URL "$scheme://$http_host$request_uri" always; > > } > > > > location /413_request_too_large.htm { > >     internal; > >     root /path/to/my/custom/errors/; > >     add_header X-Original-URL "$scheme://$http_host$request_uri" always; > > } > > When I try to upload a file larger than my client_max_body_size setting, > I still get the default error page. I've tried a lot of different > variations of the code, but nothing seems to work. > > > Is there something special that needs to be done to implement a custom > error page for a 413 status code? Or is there perhaps a regression that > broke this from working? Here's the configuration that works here: server { listen 80; client_max_body_size 10k; error_page 403 =404 /404_status_code.html; error_page 404 =404 /404_status_code.html; error_page 413 =413 /413_status_code.html; location /upload { dav_methods PUT; create_full_put_path on; dav_access group:rw all:r; } location = /413_status_code.html { internal; root /usr/local/www/nginx; } } % dd if=/dev/zero of=11k bs=1k count=11 11+0 records in 11+0 records out 11264 bytes transferred in 0.000075 secs (150232738 bytes/sec) % cat /usr/local/www/nginx/413_status_code.html here's the 413 error % curl -T 11k http://127.0.0.1/upload/11k here's the 413 error Thank you. -- Sergey A. Osokin From dan.switzer at givainc.com Thu Oct 20 18:01:44 2022 From: dan.switzer at givainc.com (Dan G. Switzer, II) Date: Thu, 20 Oct 2022 14:01:44 -0400 Subject: Custom 413 error pages when client_max_body_size exceeded In-Reply-To: References: <6227bb5e-0494-e3b4-8e5d-97627ddb5006@givainc.com> Message-ID: Sergey, Thanks for taking the time to respond. That's not working for me. 
I tried the following: > > server { >     listen 80; > >     client_max_body_size 10k; > >     error_page 403 =404 /404_status_code.htm; >     error_page 404 =404 /404_status_code.htm; >     error_page 413 =413 /413_status_code.htm; > >     location /upload { >         dav_methods  PUT; >         create_full_put_path   on; >         dav_access             group:rw  all:r; >     } > >     location = /404_status_code.html { >         internal; >         root /path/to/my/custom/errors; >     } > >     location = /413_status_code.html { >         internal; >         root /path/to/my/custom/errors; >     } > } The 404 works fine, but sending more than 10k to the request still returns the default nginx page. If I curl to a non-existent URL, I get the custom 404. The 413 doesn't. If I remove the "internal" command, I can view the /413_status_code.html file just fine. Is there a good way I can debug/troubleshoot why it might not be working? It really seems like it might be a bug with the version of nginx that CentOS 7 is installing. -Dan On 10/20/2022 11:05 AM, Sergey A. Osokin wrote: > Hi Dan, > > thanks for the report. > > On Thu, Oct 20, 2022 at 09:04:31AM -0400, Dan G. Switzer, II wrote: >> I'm using nginx/1.20.1 under CentOS Linux release 7.9.2009 (Core) and I >> cannot get a custom error page to show when the client_max_body_size >> limit has been exceeded. The browser will only return the default nginx >> error page. > [...] > >> However, if I change the code to: >> >>> error_page 404 =404 /404_status_code.htm; >>> error_page 403 =404 /404_status_code.htm; >>> error_page 413 =413 /413_request_too_large.htm; >>> >>> location /404_status_code.htm { >>>     internal; >>>     root /path/to/my/custom/errors/; >>>     add_header X-Original-URL "$scheme://$http_host$request_uri" always; >>> } >>> >>> location /413_request_too_large.htm { >>>     internal; >>>     root /path/to/my/custom/errors/; >>>     add_header X-Original-URL "$scheme://$http_host$request_uri" always; >>> } >> When I try to upload a file larger than my client_max_body_size setting, >> I still get the default error page. I've tried a lot of different >> variations of the code, but nothing seems to work. >> >> >> Is there something special that needs to be done to implement a custom >> error page for a 413 status code? Or is there perhaps a regression that >> broke this from working? > Here's the configuration that works here: > > server { > listen 80; > > client_max_body_size 10k; > > error_page 403 =404 /404_status_code.html; > error_page 404 =404 /404_status_code.html; > error_page 413 =413 /413_status_code.html; > > location /upload { > dav_methods PUT; > create_full_put_path on; > dav_access group:rw all:r; > } > > location = /413_status_code.html { > internal; > root /usr/local/www/nginx; > } > } > > % dd if=/dev/zero of=11k bs=1k count=11 > 11+0 records in > 11+0 records out > 11264 bytes transferred in 0.000075 secs (150232738 bytes/sec) > > % cat /usr/local/www/nginx/413_status_code.html > > > here's the 413 error > > > > % curl -T 11khttps://url.emailprotection.link/?bBgKrp4MmqsBU6w4TjxZ9_JqJd9V0NDmTOHlOJxvE4o6VBzwgW7OP1tEufUK7BpJqJXzp1a-EKqVvPqu_3UYV0A~~ > > > here's the 413 error > > > > Thank you. > -- Dan G. Switzer, II Giva, Inc. Email:dan.switzer at givainc.com Web Site:http://www.givainc.com See Our Customer Successes http://www.givainc.com/customers-casestudies.htm -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From osa at freebsd.org.ru Thu Oct 20 18:43:43 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 20 Oct 2022 21:43:43 +0300 Subject: Custom 413 error pages when client_max_body_size exceeded In-Reply-To: References: <6227bb5e-0494-e3b4-8e5d-97627ddb5006@givainc.com> Message-ID: On Thu, Oct 20, 2022 at 02:01:44PM -0400, Dan G. Switzer, II wrote: > Thanks for taking the time to respond. That's not working for me. I > tried the following: [...] > The 404 works fine, but sending more than 10k to the request still > returns the default nginx page. [...] > Is there a good way I can debug/troubleshoot why it might not be working? Sure, it's possible to enable a debugging log, [1]. > It really seems like it might be a bug with the version of nginx that > CentOS 7 is installing. How's the nginx has been built and deployed there? Is there a chance to try to use recent packages from [2] to reproduce the issue? Thank you. References 1. https://nginx.org/en/docs/debugging_log.html 2. https://nginx.org/en/linux_packages.html -- Sergey A. Osokin From sca at andreasschulze.de Thu Oct 20 19:45:17 2022 From: sca at andreasschulze.de (A. Schulze) Date: Thu, 20 Oct 2022 21:45:17 +0200 Subject: nginx-1.23.2 In-Reply-To: References: Message-ID: <405af862-22af-e670-c52f-654ea51ee30d@andreasschulze.de> Am 19.10.22 um 14:10 schrieb Maxim Dounin: > Changes with nginx 1.23.2 19 Oct 2022 > *) Feature: TLS session tickets encryption keys are now automatically > rotated when using shared memory in the "ssl_session_cache" > directive. Hello, this announcement let me hope, I could throw away my srcipt-foo that rotate - ssl_session_ticket_key current.key; - ssl_session_ticket_key previous.key; Are there some more hints on how to configure nginx now? Andreas From frank.swasey at gmail.com Thu Oct 20 20:22:52 2022 From: frank.swasey at gmail.com (Frank Swasey) Date: Thu, 20 Oct 2022 16:22:52 -0400 Subject: Custom 413 error pages when client_max_body_size exceeded In-Reply-To: References: <6227bb5e-0494-e3b4-8e5d-97627ddb5006@givainc.com> Message-ID: You do realize you redirected to .htm and specified .html in the location, right? On Thu, Oct 20, 2022 at 14:02 Dan G. Switzer, II wrote: > Sergey, > > Thanks for taking the time to respond. That's not working for me. I tried > the following: > > > server { > listen 80; > > client_max_body_size 10k; > > error_page 403 =404 /404_status_code.htm; > error_page 404 =404 /404_status_code.htm; > error_page 413 =413 /413_status_code.htm; > > location /upload { > dav_methods PUT; > create_full_put_path on; > dav_access group:rw all:r; > } > > location = /404_status_code.html { > internal; > root /path/to/my/custom/errors; > } > > location = /413_status_code.html { > internal; > root /path/to/my/custom/errors; > } > } > > > The 404 works fine, but sending more than 10k to the request still returns > the default nginx page. > > If I curl to a non-existent URL, I get the custom 404. The 413 doesn't. If > I remove the "internal" command, I can view the /413_status_code.html file > just fine. > > Is there a good way I can debug/troubleshoot why it might not be working? > > It really seems like it might be a bug with the version of nginx that > CentOS 7 is installing. > > -Dan > > On 10/20/2022 11:05 AM, Sergey A. Osokin wrote: > > Hi Dan, > > thanks for the report. > > On Thu, Oct 20, 2022 at 09:04:31AM -0400, Dan G. 
Switzer, II wrote: > > I'm using nginx/1.20.1 under CentOS Linux release 7.9.2009 (Core) and I > cannot get a custom error page to show when the client_max_body_size > limit has been exceeded. The browser will only return the default nginx > error page. > > [...] > > > However, if I change the code to: > > > error_page 404 =404 /404_status_code.htm; > error_page 403 =404 /404_status_code.htm; > error_page 413 =413 /413_request_too_large.htm; > > location /404_status_code.htm { > internal; > root /path/to/my/custom/errors/; > add_header X-Original-URL "$scheme://$http_host$request_uri" always; > } > > location /413_request_too_large.htm { > internal; > root /path/to/my/custom/errors/; > add_header X-Original-URL "$scheme://$http_host$request_uri" always; > } > > When I try to upload a file larger than my client_max_body_size setting, > I still get the default error page. I've tried a lot of different > variations of the code, but nothing seems to work. > > > Is there something special that needs to be done to implement a custom > error page for a 413 status code? Or is there perhaps a regression that > broke this from working? > > Here's the configuration that works here: > > server { > listen 80; > > client_max_body_size 10k; > > error_page 403 =404 /404_status_code.html; > error_page 404 =404 /404_status_code.html; > error_page 413 =413 /413_status_code.html; > > location /upload { > dav_methods PUT; > create_full_put_path on; > dav_access group:rw all:r; > } > > location = /413_status_code.html { > internal; > root /usr/local/www/nginx; > } > } > > % dd if=/dev/zero of=11k bs=1k count=11 > 11+0 records in > 11+0 records out > 11264 bytes transferred in 0.000075 secs (150232738 bytes/sec) > > % cat /usr/local/www/nginx/413_status_code.html > > > here's the 413 error > > > > % curl -T 11k https://url.emailprotection.link/?bBgKrp4MmqsBU6w4TjxZ9_JqJd9V0NDmTOHlOJxvE4o6VBzwgW7OP1tEufUK7BpJqJXzp1a-EKqVvPqu_3UYV0A~~ > > > here's the 413 error > > > > Thank you. > > > > -- > Dan G. Switzer, II > Giva, Inc. > Email: dan.switzer at givainc.com > Web Site: http://www.givainc.com > > See Our Customer Successes http://www.givainc.com/customers-casestudies.htm > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -- I am not young enough to know everything. - Oscar Wilde (1854-1900) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.switzer at givainc.com Thu Oct 20 20:25:33 2022 From: dan.switzer at givainc.com (Dan G. Switzer, II) Date: Thu, 20 Oct 2022 16:25:33 -0400 Subject: Custom 413 error pages when client_max_body_size exceeded In-Reply-To: References: <6227bb5e-0494-e3b4-8e5d-97627ddb5006@givainc.com> Message-ID: Sorry, that was a typo from the tweak to the real file names. The file extensions matched in my test and I got the custom 404 error page. On 10/20/2022 4:22 PM, Frank Swasey wrote: > You do realize you redirected to .htm and specified .html > in the location, right? > > > > On Thu, Oct 20, 2022 at 14:02 Dan G. Switzer, II > wrote: > > Sergey, > > Thanks for taking the time to respond. That's not working for me. 
> I tried the following: >> >> server { >>     listen 80; >> >>     client_max_body_size 10k; >> >>     error_page 403 =404 /404_status_code.htm; >>     error_page 404 =404 /404_status_code.htm; >>     error_page 413 =413 /413_status_code.htm; >> >>     location /upload { >>         dav_methods  PUT; >>         create_full_put_path   on; >>         dav_access             group:rw  all:r; >>     } >> >>     location = /404_status_code.html { >>         internal; >>         root /path/to/my/custom/errors; >>     } >> >>     location = /413_status_code.html { >>         internal; >>         root /path/to/my/custom/errors; >>     } >> } > > The 404 works fine, but sending more than 10k to the request still > returns the default nginx page. > > If I curl to a non-existent URL, I get the custom 404. The 413 > doesn't. If I remove the "internal" command, I can view the > /413_status_code.html file just fine. > > Is there a good way I can debug/troubleshoot why it might not be > working? > > It really seems like it might be a bug with the version of nginx > that CentOS 7 is installing. > > -Dan > > On 10/20/2022 11:05 AM, Sergey A. Osokin wrote: >> Hi Dan, >> >> thanks for the report. >> >> On Thu, Oct 20, 2022 at 09:04:31AM -0400, Dan G. Switzer, II wrote: >>> I'm using nginx/1.20.1 under CentOS Linux release 7.9.2009 (Core) and I >>> cannot get a custom error page to show when the client_max_body_size >>> limit has been exceeded. The browser will only return the default nginx >>> error page. >> [...] >> >>> However, if I change the code to: >>> >>>> error_page 404 =404 /404_status_code.htm; >>>> error_page 403 =404 /404_status_code.htm; >>>> error_page 413 =413 /413_request_too_large.htm; >>>> >>>> location /404_status_code.htm { >>>>     internal; >>>>     root /path/to/my/custom/errors/; >>>>     add_header X-Original-URL "$scheme://$http_host$request_uri" always; >>>> } >>>> >>>> location /413_request_too_large.htm { >>>>     internal; >>>>     root /path/to/my/custom/errors/; >>>>     add_header X-Original-URL "$scheme://$http_host$request_uri" always; >>>> } >>> When I try to upload a file larger than my client_max_body_size setting, >>> I still get the default error page. I've tried a lot of different >>> variations of the code, but nothing seems to work. >>> >>> >>> Is there something special that needs to be done to implement a custom >>> error page for a 413 status code? Or is there perhaps a regression that >>> broke this from working? >> Here's the configuration that works here: >> >> server { >> listen 80; >> >> client_max_body_size 10k; >> >> error_page 403 =404 /404_status_code.html; >> error_page 404 =404 /404_status_code.html; >> error_page 413 =413 /413_status_code.html; >> >> location /upload { >> dav_methods PUT; >> create_full_put_path on; >> dav_access group:rw all:r; >> } >> >> location = /413_status_code.html { >> internal; >> root /usr/local/www/nginx; >> } >> } >> >> % dd if=/dev/zero of=11k bs=1k count=11 >> 11+0 records in >> 11+0 records out >> 11264 bytes transferred in 0.000075 secs (150232738 bytes/sec) >> >> % cat /usr/local/www/nginx/413_status_code.html >> >> >> here's the 413 error >> >> >> >> % curl -T 11khttps://url.emailprotection.link/?bBgKrp4MmqsBU6w4TjxZ9_JqJd9V0NDmTOHlOJxvE4o6VBzwgW7OP1tEufUK7BpJqJXzp1a-EKqVvPqu_3UYV0A~~ >> >> >> here's the 413 error >> >> >> >> Thank you. >> > > -- > Dan G. Switzer, II > Giva, Inc. 
> Email:dan.switzer at givainc.com > Web Site:http://www.givainc.com > > See Our Customer Successes > http://www.givainc.com/customers-casestudies.htm > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > > -- > I am not young enough to know everything. - Oscar Wilde (1854-1900) > > _______________________________________________ > nginx mailing list --nginx at nginx.org > To unsubscribe send an email tonginx-leave at nginx.org -- Dan G. Switzer, II Giva, Inc. Email:dan.switzer at givainc.com Web Site:http://www.givainc.com See Our Customer Successes http://www.givainc.com/customers-casestudies.htm -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Oct 20 20:30:07 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Oct 2022 23:30:07 +0300 Subject: nginx-1.23.2 In-Reply-To: <405af862-22af-e670-c52f-654ea51ee30d@andreasschulze.de> References: <405af862-22af-e670-c52f-654ea51ee30d@andreasschulze.de> Message-ID: Hello! On Thu, Oct 20, 2022 at 09:45:17PM +0200, A. Schulze via nginx wrote: > > > Am 19.10.22 um 14:10 schrieb Maxim Dounin: > > Changes with nginx 1.23.2 19 Oct 2022 > > *) Feature: TLS session tickets encryption keys are now automatically > > rotated when using shared memory in the "ssl_session_cache" > > directive. > > Hello, > > this announcement let me hope, I could throw away my srcipt-foo that rotate > > - ssl_session_ticket_key current.key; > - ssl_session_ticket_key previous.key; > > Are there some more hints on how to configure nginx now? Now for automatic ticket keys rotation it is enough to configure "ssl_session_cache shared:...", something you likely already have configured anyway. Everything else will be done by nginx: it will rotate keys every ssl_session_timeout. If you are interested in details, see these commits: http://hg.nginx.org/nginx/rev/0f3d98e4bcc5 http://hg.nginx.org/nginx/rev/043006e5a0b1 Hope this helps. -- Maxim Dounin http://mdounin.ru/ From sca at andreasschulze.de Thu Oct 20 21:23:09 2022 From: sca at andreasschulze.de (A. Schulze) Date: Thu, 20 Oct 2022 23:23:09 +0200 Subject: nginx-1.23.2 In-Reply-To: References: <405af862-22af-e670-c52f-654ea51ee30d@andreasschulze.de> Message-ID: Am 20.10.22 um 22:30 schrieb Maxim Dounin: > Now for automatic ticket keys rotation it is enough to configure > "ssl_session_cache shared:...", something you likely already have > configured anyway. Everything else will be done by nginx: it will > rotate keys every ssl_session_timeout. so it's enougth to not set ssl_session_ticket_key anymore and activate the default again... nice! From osa at freebsd.org.ru Thu Oct 20 23:49:35 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Fri, 21 Oct 2022 02:49:35 +0300 Subject: Custom 413 error pages when client_max_body_size exceeded In-Reply-To: References: <6227bb5e-0494-e3b4-8e5d-97627ddb5006@givainc.com> Message-ID: On Thu, Oct 20, 2022 at 04:25:33PM -0400, Dan G. Switzer, II wrote: > Sorry, that was a typo from the tweak to the real file names. The file > extensions matched in my test and I got the custom 404 error page. Dan, follow that everything works fine, could you confirm. Thank you. -- Sergey A. Osokin From dan.switzer at givainc.com Fri Oct 21 11:13:22 2022 From: dan.switzer at givainc.com (Dan G. 
Switzer, II) Date: Fri, 21 Oct 2022 07:13:22 -0400 Subject: Custom 413 error pages when client_max_body_size exceeded In-Reply-To: References: Message-ID: <5A3FDF8E-3CE3-477F-902B-D8ED49D87F9C@givainc.com> No, the 404 works as expected, the 413 does not. I’ll try building from source some time early next week to see if that fixes things. -Dan Sent from my iPhone > On Oct 20, 2022, at 7:49 PM, Sergey A. Osokin wrote: > > On Thu, Oct 20, 2022 at 04:25:33PM -0400, Dan G. Switzer, II wrote: >> Sorry, that was a typo from the tweak to the real file names. The file >> extensions matched in my test and I got the custom 404 error page. > > Dan, > > follow that everything works fine, could you confirm. Thank you. > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From nginx-forum at forum.nginx.org Fri Oct 21 15:02:20 2022 From: nginx-forum at forum.nginx.org (libresco_27) Date: Fri, 21 Oct 2022 11:02:20 -0400 Subject: Disabling keepalive Message-ID: Hello, Is there a way to totally disable keepalive form upstream? Right now, I have the following configuration in upstream to keep the figures to a minimum- keepalive: 1; keepalive_requests: 1; keepalive_timeout: 1s; keepalive_time: 1s Since, I can't change the keepalive directive's value to 0, is there a way I can remove this setting totally? Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295571,295571#msg-295571 From mdounin at mdounin.ru Fri Oct 21 15:16:11 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 21 Oct 2022 18:16:11 +0300 Subject: Disabling keepalive In-Reply-To: References: Message-ID: Hello! On Fri, Oct 21, 2022 at 11:02:20AM -0400, libresco_27 wrote: > Is there a way to totally disable keepalive form upstream? Right now, I have > the following configuration in upstream to keep the figures to a minimum- > > keepalive: 1; > keepalive_requests: 1; > keepalive_timeout: 1s; > keepalive_time: 1s > > Since, I can't change the keepalive directive's value to 0, is there a way I > can remove this setting totally? Keepalive connections with upstream servers are disabled by default. That is, it is enough to remove the "keepalive" directive from the upstream block to disable connection cache completely. Note that you may also want to adjust proxying configuration to ensure connections are closed by the upstream server when possible. In particular, make sure you are not using "proxy_set_header Connection "";" with HTTP proxying and/or "fastcgi_keep_conn on;" with FastCGI. See http://nginx.org/r/keepalive for details. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Fri Oct 21 22:04:00 2022 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Fri, 21 Oct 2022 18:04:00 -0400 Subject: least_conn method session issue Message-ID: I use least_conn in Nginx (reverse proxy), but when I open the application's dashboard, it logs out. It doesn't do this in ip_hash method. What should I do for this? I m stuck in this situation, I can use least_conn on AWS and it 's working properly. Thanks in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295575,295575#msg-295575 From mdounin at mdounin.ru Fri Oct 21 22:36:46 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 22 Oct 2022 01:36:46 +0300 Subject: least_conn method session issue In-Reply-To: References: Message-ID: Hello! 
On Fri, Oct 21, 2022 at 06:04:00PM -0400, Dr_tux wrote: > I use least_conn in Nginx (reverse proxy), but when I open the application's > dashboard, it logs out. It doesn't do this in ip_hash method. What should I > do for this? I m stuck in this situation, I can use least_conn on AWS and it > 's working properly. Is your application's dashboard able to maintain user session across different backend servers? Symptoms suggests it probably doesn't, so you have to either configure your backend to ensure session sharing between servers[1], or use some forms user-aware balancing such as ip_hash or hash. [1] For example, in PHP there are standard ways to store sessions across multiple servers, such as memcached, see https://www.php.net/manual/en/memcached.sessions.php for details. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sat Oct 22 05:09:50 2022 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Sat, 22 Oct 2022 01:09:50 -0400 Subject: least_conn method session issue In-Reply-To: References: Message-ID: Thanks for your answer. I use the same application on AWS (With ELB support least_conn) and my java app is working properly. Is there any way to solve this issue on Nginx side ? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295575,295580#msg-295580 From nginx-forum at forum.nginx.org Sat Oct 22 05:17:45 2022 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Sat, 22 Oct 2022 01:17:45 -0400 Subject: least_conn method session issue In-Reply-To: References: Message-ID: For example: my dashboard url : https://my_dashboard:8000/#/index/login If such a URL is entered, can I say use ip_hash in reverse proxy? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295575,295581#msg-295581 From nginx-forum at forum.nginx.org Tue Oct 25 13:55:14 2022 From: nginx-forum at forum.nginx.org (gubtiny) Date: Tue, 25 Oct 2022 09:55:14 -0400 Subject: nginx proxy for jdbc connection with modifying property Message-ID: <1296da3158bf02d021fd3b08863d800d.NginxMailingListEnglish@forum.nginx.org> I am using nginx server for jdbc connection for snowflake db and it is working fine, currently authentication is based on password. But due to recent changes I need to enable keypair based authenticate using private key. Front end application does not support the required[1] property for keypair based, so I am thinking of sending the private key in the password field and at nginx-proxy add the property "privatekey" based on the password field. Is this possible at nginx side? I could not get much info from nginx config so checking here, I really appreciate if any suggestions [1] https://docs.snowflake.com/en/user-guide/jdbc-configure.html#privatekey-property-in-connection-properties Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295600,295600#msg-295600 From nginx-forum at forum.nginx.org Tue Oct 25 15:25:39 2022 From: nginx-forum at forum.nginx.org (wordlesswind) Date: Tue, 25 Oct 2022 11:25:39 -0400 Subject: About ssl_ecdh_curve auto Message-ID: <29b28abc1c49739baa2a3cdbb8c9a6fc.NginxMailingListEnglish@forum.nginx.org> Hello guys, I deployed ECDSA P-256 certificate issued by Let's Encrypt E1 on nginx, and I noticed something about "ssl_ecdh_curve auto;". When I set ssl_protocols to "TLSv1.2 TLSv1.3", ssl_ecdh_curve has only prime256v1. When set to TLSv1.3, x448 is missing and is the preferred order for the server. As far as I know, the full list of nginx support should be x25519, x448, secp256r1, secp384r1, secp521r1. 
So what caused the difference in "ssl_ecdh_curve auto;"? Best regards, wordlesswind Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295602,295602#msg-295602 From xeioex at nginx.com Tue Oct 25 17:27:13 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 25 Oct 2022 10:27:13 -0700 Subject: njs-0.7.8 Message-ID: <45b74109-56df-2e0b-4eee-cab435c95c93@nginx.com> Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). Notable new features: - js_preload_object directive: The directive preloads an immutable object at configure time. This significantly reduces the runtime cost of referencing large objects in JS code. Learn more about njs: - Overview and introduction: https://nginx.org/en/docs/njs/ - NGINX JavaScript in Your Web Server Configuration: https://youtu.be/Jc_L6UffFOs - Extending NGINX with Custom Code: https://youtu.be/0CVhq4AUU7M - Using node modules with njs: https://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: https://nginx.org/en/docs/njs/typescript.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: https://mailman.nginx.org/mailman/listinfo/nginx-devel Additional examples and howtos can be found here: - Github: https://github.com/nginx/njs-examples Changes with njs 0.7.8 25 Oct 2022 nginx modules: *) Feature: added js_preload_object directive. *) Feature: added ngx.conf_prefix property. *) Feature: added s.sendUpstream() and s.sendDownstream() in stream module. *) Feature: added support for HEAD method in Fetch API. *) Improvement: improved async callback support for s.send() in stream module. Core: *) Feature: added "name" instance property for a function object. *) Feature: added njs.memoryStats object. *) Bugfix: fixed String.prototype.trimEnd() with unicode string. *) Bugfix: fixed Object.freeze() with fast arrays. *) Bugfix: fixed Object.defineProperty() with fast arrays. *) Bugfix: fixed async token as a property name of an object. *) Bugfix: fixed property set instruction when key modifies base binding. *) Bugfix: fixed complex assignments. *) Bugfix: fixed handling of unhandled promise rejection. *) Bugfix: fixed process.env when duplicate environ variables are present. *) Bugfix: fixed double declaration detection in modules. *) Bugfix: fixed bound function calls according to the spec. *) Bugfix: fixed break label for if statement. *) Bugfix: fixed labeled empty statements. From tdtemccnp at gmail.com Wed Oct 26 01:44:56 2022 From: tdtemccnp at gmail.com (Turritopsis Dohrnii Teo En Ming) Date: Wed, 26 Oct 2022 12:44:56 +1100 Subject: I notice that UniFi Cloud Key Gen 2 Plus is missing the nginx package Message-ID: Subject: I notice that UniFi Cloud Key Gen 2 Plus is missing the nginx package Good day from Singapore, UniFi Cloud Key Gen 2 Plus is powered by Debian GNU/Linux 9. I notice that it is missing the nginx package. Can I solve the problem by simply installing the nginx package? apt install nginx ?? Will it work? May I know why the UCK G2 Plus is missing the nginx package while all the reference guides on the internet mention that UCK G2 Plus is supposed to have nginx web server? I saw instructions to stop and start nginx in the UCK G2 Plus. # service nginx stop # service nginx start Thank you. Regards, Mr. Turritopsis Dohrnii Teo En Ming Targeted Individual in Singapore Blogs: https://tdtemcerts.blogspot.com https://tdtemcerts.wordpress.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From osa at freebsd.org.ru Wed Oct 26 03:17:37 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 26 Oct 2022 06:17:37 +0300 Subject: I notice that UniFi Cloud Key Gen 2 Plus is missing the nginx package In-Reply-To: References: Message-ID: Hi there, hope you're doing well. On Wed, Oct 26, 2022 at 12:44:56PM +1100, Turritopsis Dohrnii Teo En Ming wrote: > > UniFi Cloud Key Gen 2 Plus is powered by Debian GNU/Linux 9. > I notice that it is missing the nginx package. > Can I solve the problem by simply installing the nginx package? I'd recommend to contact to the vendor of the device. Thank you. -- Sergey A. Osokin From osa at freebsd.org.ru Wed Oct 26 03:22:54 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 26 Oct 2022 06:22:54 +0300 Subject: About ssl_ecdh_curve auto In-Reply-To: <29b28abc1c49739baa2a3cdbb8c9a6fc.NginxMailingListEnglish@forum.nginx.org> References: <29b28abc1c49739baa2a3cdbb8c9a6fc.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, hope you're doing well. On Tue, Oct 25, 2022 at 11:25:39AM -0400, wordlesswind wrote: > > I deployed ECDSA P-256 certificate issued by Let's Encrypt E1 on nginx, and > I noticed something about "ssl_ecdh_curve auto;". Well, the `auto' is the default value of the ssl_ecdh_curve directive, [1]. > When I set ssl_protocols to "TLSv1.2 TLSv1.3", ssl_ecdh_curve has only > prime256v1. When set to TLSv1.3, x448 is missing and is the preferred order > for the server. Is there a official package from [2]? What's the SSL implementation and its version are there? For OpenSSL please run % openssl version -a It's also possible to see the list of the elliptic curve parameters with the following command: % openssl ecparam -list_curves Refereces 1. https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve 2. https://nginx.org/en/linux_packages.html Thank you. -- Sergey A. Osokin From mdounin at mdounin.ru Wed Oct 26 04:07:43 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Oct 2022 07:07:43 +0300 Subject: About ssl_ecdh_curve auto In-Reply-To: <29b28abc1c49739baa2a3cdbb8c9a6fc.NginxMailingListEnglish@forum.nginx.org> References: <29b28abc1c49739baa2a3cdbb8c9a6fc.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Tue, Oct 25, 2022 at 11:25:39AM -0400, wordlesswind wrote: > I deployed ECDSA P-256 certificate issued by Let's Encrypt E1 on nginx, and > I noticed something about "ssl_ecdh_curve auto;". > > When I set ssl_protocols to "TLSv1.2 TLSv1.3", ssl_ecdh_curve has only > prime256v1. When set to TLSv1.3, x448 is missing and is the preferred order > for the server. > > As far as I know, the full list of nginx support should be x25519, x448, > secp256r1, secp384r1, secp521r1. > > So what caused the difference in "ssl_ecdh_curve auto;"? The list of curves supported with "ssl_ecdh_curve auto;" depends on the SSL library being used. In recent OpenSSL versions the list is as follows: X25519, secp256r1, X448, secp521r1, secp384r1. In BoringSSL, the list is: X25519, secp256r1, secp384r1. In LibreSSL the list is: X25519, secp256r1, secp384r1. In all cases preferred order is as set by the ssl_prefer_server_ciphers directive. In no cases I see any difference based on the SSL protocols being used (though in theory there might be some, and certainly there is a difference in testing, see below). If you see different behaviour, first of all you may want to check the SSL library you are using (shown by "nginx -V"). It might also make sense to check how do you test things. 
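(Purely as an illustration of the directives discussed above, not part of the original reply: pinning an explicit list instead of relying on "auto" looks roughly like this; the particular curves are an arbitrary example, and a colon-separated list needs OpenSSL 1.0.2 or later.)

    ssl_protocols TLSv1.2 TLSv1.3;
    # curve names as OpenSSL knows them; order matters when the server preference wins
    ssl_ecdh_curve X25519:prime256v1:secp384r1;
    # let the server-side order take precedence during negotiation
    ssl_prefer_server_ciphers on;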
In particular, when testing with a P-256 certificate over TLSv1.2 and below it is important to include P-256 (aka prime256v1, aka secp256r1) in the client list of supported elliptic curves, or the handshake will fail even if another curve is expected to be used for ephemeral key exchange. This is, however, not needed with TLSv1.3, since signature algorithms in TLSv1.3 explicitly include elliptic curves being used. For example, the following command will be able to establish connection with TLSv1.3, but will fail with TLSv1.2 due to no P-256 in the supported curves: openssl s_client -connect 127.0.0.1:8443 -curves X448 But the following one will use X448 with both TLSv1.2 and TLSv1.3: openssl s_client -connect 127.0.0.1:8443 -curves X448:prime256v1 Hope this helps. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Oct 26 04:24:30 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Oct 2022 07:24:30 +0300 Subject: About ssl_ecdh_curve auto In-Reply-To: References: <29b28abc1c49739baa2a3cdbb8c9a6fc.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Oct 26, 2022 at 06:22:54AM +0300, Sergey A. Osokin wrote: [...] > It's also possible to see the list of the elliptic curve parameters with > the following command: > > % openssl ecparam -list_curves Fun fact: this list only includes standard curves, but not custom curves such as X25519 or X448, so it is more or less useless. Not to mention this list has nothing to do with the default list of supported curves as used by default (and with "ssl_ecdh_curve auto;" in nginx). As far as I understand, there are no user-friendly ways to extract this default list from OpenSSL. The best ways I'm aware of include looking into the code or SSL handshakes on the wire. -- Maxim Dounin http://mdounin.ru/ From tdtemccnp at gmail.com Wed Oct 26 14:08:44 2022 From: tdtemccnp at gmail.com (Turritopsis Dohrnii Teo En Ming) Date: Wed, 26 Oct 2022 22:08:44 +0800 Subject: I notice that UniFi Cloud Key Gen 2 Plus is missing the nginx package In-Reply-To: References: Message-ID: On Wed, 26 Oct 2022 at 11:22, Sergey A. Osokin wrote: > Hi there, > > hope you're doing well. > > On Wed, Oct 26, 2022 at 12:44:56PM +1100, Turritopsis Dohrnii Teo En Ming > wrote: > > > > UniFi Cloud Key Gen 2 Plus is powered by Debian GNU/Linux 9. > > I notice that it is missing the nginx package. > > Can I solve the problem by simply installing the nginx package? > > I'd recommend to contact to the vendor of the device. > > Thank you. > > -- > Sergey A. Osokin > > Noted with thanks. Mr. Turritopsis Dohrnii Teo En Ming Targeted Individual in Singapore -------------- next part -------------- An HTML attachment was scrubbed... URL: From bmvishwas at gmail.com Thu Oct 27 10:18:39 2022 From: bmvishwas at gmail.com (Vishwas Bm) Date: Thu, 27 Oct 2022 15:48:39 +0530 Subject: performance guide for nginx L4 stream Message-ID: Hi, We are using nginx as L4 with a stream block configured in nginx.conf. I have attached the nginx.conf being used. We are using the hey (https://github.com/rakyll/hey) tool to pump 50k requests per second and are seeing only 40k requests being received on the backend application side. We are seeing a drop of 10k requests. We have tuned the normal tcp configuration like rmem, wmem, local_port_range etc and we are not seeing any errors in nginx logs. Can you please suggest what other configuration in the nginx.conf needs to be looked at. Any other tcp configuration that needs to be tuned ? 
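(The attached nginx.conf is not reproduced in the archive, so purely as a hedged sketch: these are the stream-level settings usually reviewed first for this kind of request drop. Every address, port and value below is an assumption, not taken from the attachment.)

    worker_processes auto;
    worker_rlimit_nofile 131072;        # raise the per-process file descriptor limit

    events {
        worker_connections 65535;       # per worker; each proxied connection uses a client and an upstream socket
    }

    stream {
        upstream backend {
            server 192.0.2.10:8080;     # assumed backend addresses
            server 192.0.2.11:8080;
        }
        server {
            listen 9000 reuseport;      # assumed listen port; "reuseport" spreads accepts across workers
            proxy_pass backend;
        }
    }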
*Thanks & Regards,*
*Vishwas *
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx.conf
Type: application/octet-stream
Size: 1087 bytes
Desc: not available
URL: 

From r at roze.lv  Thu Oct 27 11:44:04 2022
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 27 Oct 2022 14:44:04 +0300
Subject: performance guide for nginx L4 stream
In-Reply-To: 
References: 
Message-ID: <001001d8e9f9$6a7d1360$3f773a20$@roze.lv>

> We are using the hey (https://github.com/rakyll/hey) tool to pump 50k requests
> per second and are seeing only 40k requests being received on the backend
> application side.
> Any other tcp configuration that needs to be tuned ?

I am not familiar with the tool, but per its documentation it should produce some sort of error/status report for failed requests. What does it report for the 10k "missing" requests? Are they "missing" (already) on nginx or just on the proxied backend(s)?
(In the provided nginx configuration I don't see any access/error log configuration - you could enable both to see whether you actually get those 50k requests to nginx.)

Are you testing from a single client (the same server) or from multiple? Do you use keepalive or a new connection per request? In the case of the latter you might come close to the ephemeral port limit (~65k), depending on whether tcp_tw_reuse is or isn't configured.

Have you tried other tools like 'ab', 'httperf' or 'siege' to see if you get the same results/problems?

rr

From andrejvanderzee at gmail.com  Thu Oct 27 11:56:25 2022
From: andrejvanderzee at gmail.com (Andrej van der Zee)
Date: Thu, 27 Oct 2022 13:56:25 +0200
Subject: reverse proxy with mTLS does not send client certificate to upstream
Message-ID: 

Dear,

I am trying to set up a TLS auth reverse proxy with proxy_ssl_certificate
and proxy_ssl_certificate_key like below:

http {
  server {
    listen 8080;
    resolver 8.8.8.8;

    location ~ /mimir/(.*)$ {
      proxy_pass https:///$1;
      proxy_ssl_certificate_key /etc/nginx/tls-auth/mimir/tls.key;
      proxy_ssl_certificate /etc/nginx/tls-auth/mimir/tls.crt;
    }
  }
}

Somehow the nginx reverse proxy does not send the configured client
certificate, resulting in the error below from my upstream server:

400 No required SSL certificate was sent
400 Bad Request
No required SSL certificate was sent
nginx
What am I missing? Best regards, Andrej -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonmcalexander at wellsfargo.com Thu Oct 27 17:05:32 2022 From: jonmcalexander at wellsfargo.com (jonmcalexander at wellsfargo.com) Date: Thu, 27 Oct 2022 17:05:32 +0000 Subject: Question on Instance structure. Message-ID: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> I have a question regarding structure. What is the proper configuration of NGINX instances when you want to have multiple configurations of NGINX, but only 1 set of binaries? Thanks, Dream * Excel * Explore * Inspire Jon McAlexander Senior Infrastructure Engineer Asst. Vice President He/His Middleware Product Engineering Enterprise CIO | EAS | Middleware | Infrastructure Solutions 8080 Cobblestone Rd | Urbandale, IA 50322 MAC: F4469-010 Tel 515-988-2508 | Cell 515-988-2508 jonmcalexander at wellsfargo.com This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Oct 27 20:14:12 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Oct 2022 23:14:12 +0300 Subject: reverse proxy with mTLS does not send client certificate to upstream In-Reply-To: References: Message-ID: Hello! On Thu, Oct 27, 2022 at 01:56:25PM +0200, Andrej van der Zee wrote: > I am trying to setup an TLS auth reverse proxy with proxy_ssl_certificate > and proxy_ssl_certificate_key like below: > > http { > server { > listen 8080; > resolver 8.8.8.8; > > location ~ /mimir/(.*)$ { > proxy_pass https:///$1; > proxy_ssl_certificate_key /etc/nginx/tls-auth/mimir/tls.key; > proxy_ssl_certificate /etc/nginx/tls-auth/mimir/tls.crt; > } > } > } > > Somehow the nginx reverse proxy does not send the configured client > certificate, resulting in the error below from my upstream server: > > > 400 No required SSL certificate was sent > >

> 400 Bad Request
> No required SSL certificate was sent
> nginx
> > > > What am I missing? Any other https proxying to the same upstream but without certificates configured? If there are any, it might be a good idea to disable SSL session reuse (http://nginx.org/r/proxy_ssl_session_reuse) or configure distinct upstream blocks/names for proxying with and without certs. -- Maxim Dounin http://mdounin.ru/ From osa at freebsd.org.ru Thu Oct 27 20:37:08 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 27 Oct 2022 23:37:08 +0300 Subject: Question on Instance structure. In-Reply-To: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> References: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> Message-ID: Hi Jon, hope you're doing well. On Thu, Oct 27, 2022 at 05:05:32PM +0000, jonmcalexander--- via nginx wrote: > I have a question regarding structure. > > What is the proper configuration of NGINX instances when you want to > have multiple configurations of NGINX, but only 1 set of binaries? I'd recommend to take a look on how it's organized in www/nginx-devel [1] in FreeBSD ports tree [2]. The management script [3] of that port supports profiles, so it's possible to run several instances of nginx on the same box. 1. Enable nginx in /etc/rc.conf: nginx_enable="YES" 2. Specify needful profiles in /etc/rc.conf, i.e. nginx_profiles="1 2 3 4 5" 3. Add lines with configuration files: nginx_1_configfile="/usr/local/etc/nginx/nginx.conf.1" nginx_2_configfile="/usr/local/etc/nginx/nginx.conf.2" nginx_3_configfile="/usr/local/etc/nginx/nginx.conf.3" nginx_4_configfile="/usr/local/etc/nginx/nginx.conf.4" nginx_5_configfile="/usr/local/etc/nginx/nginx.conf.5" 4. Here's an example of the /usr/local/etc/nginx/nginx.conf.1 file: worker_processes 1; error_log /var/log/nginx-error.log debug; events { worker_connections 10240; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; keepalive_timeout 65; include /usr/local/etc/nginx/conf.d.1/http_*.conf; } Refereces --------- 1. https://cgit.freebsd.org/ports/tree/www/nginx-devel/ 2. https://www.freebsd.org/ports/ 3. https://cgit.freebsd.org/ports/tree/www/nginx-devel/files/nginx.in Hope that helps. Thank you. -- Sergey A. Osokin From al-nginx at none.at Thu Oct 27 21:59:32 2022 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 27 Oct 2022 23:59:32 +0200 Subject: reverse proxy with mTLS does not send client certificate to upstream In-Reply-To: References: Message-ID: Hi. On 27.10.22 13:56, Andrej van der Zee wrote: > Dear, > > I am trying to setup an TLS auth reverse proxy with proxy_ssl_certificate > and proxy_ssl_certificate_key like below: > > http { > server { > listen 8080; > resolver 8.8.8.8; > > location ~ /mimir/(.*)$ { > proxy_pass https:///$1; > proxy_ssl_certificate_key /etc/nginx/tls-auth/mimir/tls.key; > proxy_ssl_certificate /etc/nginx/tls-auth/mimir/tls.crt; > } > } > } > > Somehow the nginx reverse proxy does not send the configured client > certificate, resulting in the error below from my upstream server: > > > 400 No required SSL certificate was sent > >

> 400 Bad Request
> No required SSL certificate was sent
> nginx
> > > > What am I missing? What's in the error log? You can also try to run nginx in debug mode then will you see more Information why the connection attempt does not work. http://nginx.org/en/docs/debugging_log.html > Best regards, > Andrej > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From al-nginx at none.at Thu Oct 27 22:01:53 2022 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 28 Oct 2022 00:01:53 +0200 Subject: Question on Instance structure. In-Reply-To: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> References: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> Message-ID: <6bc8b40a-d19c-1d82-2ac0-47ec09178ab5@none.at> Hi. On 27.10.22 19:05, jonmcalexander--- via nginx wrote: > I have a question regarding structure. > > What is the proper configuration of NGINX instances when you want to have > multiple configurations of NGINX, but only 1 set of binaries? You can use the '-c' flag to point to another nginx configuation. http://nginx.org/en/docs/switches.html Regards Alex > Thanks, > > Dream * Excel * Explore * Inspire > Jon McAlexander > Senior Infrastructure Engineer > Asst. Vice President > He/His > > Middleware Product Engineering > Enterprise CIO | EAS | Middleware | Infrastructure Solutions > > 8080 Cobblestone Rd | Urbandale, IA 50322 > MAC: F4469-010 > Tel 515-988-2508 | Cell 515-988-2508 > > jonmcalexander at wellsfargo.com > This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. > > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From jonmcalexander at wellsfargo.com Thu Oct 27 22:20:21 2022 From: jonmcalexander at wellsfargo.com (jonmcalexander at wellsfargo.com) Date: Thu, 27 Oct 2022 22:20:21 +0000 Subject: Question on Instance structure. In-Reply-To: <6bc8b40a-d19c-1d82-2ac0-47ec09178ab5@none.at> References: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> <6bc8b40a-d19c-1d82-2ac0-47ec09178ab5@none.at> Message-ID: <22c0b8f38b3e4902a4ecd776fdf492d2@wellsfargo.com> Thank you! Can anything be setup in the nginx.conf file? Thanks, Dream * Excel * Explore * Inspire Jon McAlexander Senior Infrastructure Engineer Asst. Vice President He/His Middleware Product Engineering Enterprise CIO | EAS | Middleware | Infrastructure Solutions 8080 Cobblestone Rd | Urbandale, IA 50322 MAC: F4469-010 Tel 515-988-2508 | Cell 515-988-2508 jonmcalexander at wellsfargo.com This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. > -----Original Message----- > From: Aleksandar Lazic > Sent: Thursday, October 27, 2022 5:02 PM > To: Mcalexander, Jon J. > Cc: nginx at nginx.org > Subject: Re: Question on Instance structure. > > Hi. 
> > On 27.10.22 19:05, jonmcalexander--- via nginx wrote: > > I have a question regarding structure. > > > > What is the proper configuration of NGINX instances when you want to > > have multiple configurations of NGINX, but only 1 set of binaries? > > You can use the '-c' flag to point to another nginx configuation. > https://urldefense.com/v3/__http://nginx.org/en/docs/switches.html__;!!F > 9svGWnIaVPGSwU!pknKiPNjWvixRyGoR7S8K6q7mMX6b52iQjfQRqt9s31DtW > 5LnUAaDsw9XZJRbE6zhQH5cVSbseKpo2afeSEcCnkmzw$ > > Regards > Alex > > > Thanks, > > > > Dream * Excel * Explore * Inspire > > Jon McAlexander > > Senior Infrastructure Engineer > > Asst. Vice President > > He/His > > > > Middleware Product Engineering > > Enterprise CIO | EAS | Middleware | Infrastructure Solutions > > > > 8080 Cobblestone Rd | Urbandale, IA 50322 > > MAC: F4469-010 > > Tel 515-988-2508 | Cell 515-988-2508 > > > > > jonmcalexander at wellsfargo.com > > This message may contain confidential and/or privileged information. If you > are not the addressee or authorized to receive this for the addressee, you > must not use, copy, disclose, or take any action based on this message or any > information herein. If you have received this message in error, please advise > the sender immediately by reply e-mail and delete this message. Thank you > for your cooperation. > > > > > > > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org From al-nginx at none.at Fri Oct 28 09:12:10 2022 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 28 Oct 2022 11:12:10 +0200 Subject: Question on Instance structure. In-Reply-To: <22c0b8f38b3e4902a4ecd776fdf492d2@wellsfargo.com> References: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> <6bc8b40a-d19c-1d82-2ac0-47ec09178ab5@none.at> <22c0b8f38b3e4902a4ecd776fdf492d2@wellsfargo.com> Message-ID: <3a4df27a-c06b-68bc-1ece-93946ec97bdf@none.at> Hi. On 28.10.22 00:20, jonmcalexander at wellsfargo.com wrote: > Thank you! > > Can anything be setup in the nginx.conf file? I don't get your question as the nginx.conf is the main configuration file. Are you already familiar with nginx and the concept behind it? Maybe you can start with this page. https://nginx.org/en/docs/beginners_guide.html as mentioned below can you use `-c` to tell nginx the configuration which should be used. nginx -c nginx-instance-001.conf nginx -c nginx-instance-002.conf ... nginx -c nginx-instance-nnn.conf The files an share some common settings or not, that's up to your concept how you plan to run nginx. Regards Alex > Thanks, > > Dream * Excel * Explore * Inspire > Jon McAlexander > Senior Infrastructure Engineer > Asst. Vice President > He/His > > Middleware Product Engineering > Enterprise CIO | EAS | Middleware | Infrastructure Solutions > > 8080 Cobblestone Rd | Urbandale, IA 50322 > MAC: F4469-010 > Tel 515-988-2508 | Cell 515-988-2508 > > jonmcalexander at wellsfargo.com > This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. > > >> -----Original Message----- >> From: Aleksandar Lazic >> Sent: Thursday, October 27, 2022 5:02 PM >> To: Mcalexander, Jon J. 
>> Cc: nginx at nginx.org >> Subject: Re: Question on Instance structure. >> >> Hi. >> >> On 27.10.22 19:05, jonmcalexander--- via nginx wrote: >>> I have a question regarding structure. >>> >>> What is the proper configuration of NGINX instances when you want to >>> have multiple configurations of NGINX, but only 1 set of binaries? >> >> You can use the '-c' flag to point to another nginx configuation. >> https://urldefense.com/v3/__http://nginx.org/en/docs/switches.html__;!!F >> 9svGWnIaVPGSwU!pknKiPNjWvixRyGoR7S8K6q7mMX6b52iQjfQRqt9s31DtW >> 5LnUAaDsw9XZJRbE6zhQH5cVSbseKpo2afeSEcCnkmzw$ >> >> Regards >> Alex >> >>> Thanks, >>> >>> Dream * Excel * Explore * Inspire >>> Jon McAlexander >>> Senior Infrastructure Engineer >>> Asst. Vice President >>> He/His >>> >>> Middleware Product Engineering >>> Enterprise CIO | EAS | Middleware | Infrastructure Solutions >>> >>> 8080 Cobblestone Rd | Urbandale, IA 50322 >>> MAC: F4469-010 >>> Tel 515-988-2508 | Cell 515-988-2508 >>> >>> >> jonmcalexander at wellsfargo.com >>> This message may contain confidential and/or privileged information. If you >> are not the addressee or authorized to receive this for the addressee, you >> must not use, copy, disclose, or take any action based on this message or any >> information herein. If you have received this message in error, please advise >> the sender immediately by reply e-mail and delete this message. Thank you >> for your cooperation. >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list -- nginx at nginx.org >>> To unsubscribe send an email to nginx-leave at nginx.org From teward at thomas-ward.net Fri Oct 28 12:13:07 2022 From: teward at thomas-ward.net (Thomas Ward) Date: Fri, 28 Oct 2022 08:13:07 -0400 Subject: Question on Instance structure. In-Reply-To: <22c0b8f38b3e4902a4ecd776fdf492d2@wellsfargo.com> References: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> <6bc8b40a-d19c-1d82-2ac0-47ec09178ab5@none.at> <22c0b8f38b3e4902a4ecd776fdf492d2@wellsfargo.com> Message-ID: Jon, I'm not 100% sure if you understand NGINX properly, but I think you're confusing "multi-instance" and "multi-site" when it comes to NGINX. Multi-instance NGINX requires multiple individual NGINX instances each running completely different configuration stacks, and not the default nginx.conf. A multi-site configuration is one NGINX binary set with multiple `server {}` definitions inside the relevant blocks - i.e multiple server blocks within the `http{}` block for multiple webserver configurations to serve more than one site from a single NGINX binary/config set. Is your goal to have multiple NGINX instances with different configurations running individually alongside each other, or is your goal to have NGINX simply serve / handle multiple sites with a single binary and configuration set? Thomas On 10/27/22 18:20, jonmcalexander--- via nginx wrote: > Thank you! > > Can anything be setup in the nginx.conf file? > > Thanks, > > Dream * Excel * Explore * Inspire > Jon McAlexander > Senior Infrastructure Engineer > Asst. Vice President > He/His > > Middleware Product Engineering > Enterprise CIO | EAS | Middleware | Infrastructure Solutions > > 8080 Cobblestone Rd | Urbandale, IA 50322 > MAC: F4469-010 > Tel 515-988-2508 | Cell 515-988-2508 > > jonmcalexander at wellsfargo.com > This message may contain confidential and/or privileged information. 
If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. > > >> -----Original Message----- >> From: Aleksandar Lazic >> Sent: Thursday, October 27, 2022 5:02 PM >> To: Mcalexander, Jon J. >> Cc: nginx at nginx.org >> Subject: Re: Question on Instance structure. >> >> Hi. >> >> On 27.10.22 19:05, jonmcalexander--- via nginx wrote: >>> I have a question regarding structure. >>> >>> What is the proper configuration of NGINX instances when you want to >>> have multiple configurations of NGINX, but only 1 set of binaries? >> You can use the '-c' flag to point to another nginx configuation. >> https://urldefense.com/v3/__http://nginx.org/en/docs/switches.html__;!!F >> 9svGWnIaVPGSwU!pknKiPNjWvixRyGoR7S8K6q7mMX6b52iQjfQRqt9s31DtW >> 5LnUAaDsw9XZJRbE6zhQH5cVSbseKpo2afeSEcCnkmzw$ >> >> Regards >> Alex >> >>> Thanks, >>> >>> Dream * Excel * Explore * Inspire >>> Jon McAlexander >>> Senior Infrastructure Engineer >>> Asst. Vice President >>> He/His >>> >>> Middleware Product Engineering >>> Enterprise CIO | EAS | Middleware | Infrastructure Solutions >>> >>> 8080 Cobblestone Rd | Urbandale, IA 50322 >>> MAC: F4469-010 >>> Tel 515-988-2508 | Cell 515-988-2508 >>> >>> >> jonmcalexander at wellsfargo.com >>> This message may contain confidential and/or privileged information. If you >> are not the addressee or authorized to receive this for the addressee, you >> must not use, copy, disclose, or take any action based on this message or any >> information herein. If you have received this message in error, please advise >> the sender immediately by reply e-mail and delete this message. Thank you >> for your cooperation. >>> >>> >>> _______________________________________________ >>> nginx mailing list -- nginx at nginx.org >>> To unsubscribe send an email to nginx-leave at nginx.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From nginx-forum at forum.nginx.org Fri Oct 28 13:01:27 2022 From: nginx-forum at forum.nginx.org (libresco_27) Date: Fri, 28 Oct 2022 09:01:27 -0400 Subject: Disabling keepalive In-Reply-To: References: Message-ID: Thanks for your answer! I have another query if we can actually see that keepalive is being disabled in nginx logs. Is it possible to confirm that if we run nginx in debug mode and if so, what kind of logs should I look for? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295571,295628#msg-295628 From jonmcalexander at wellsfargo.com Fri Oct 28 14:25:15 2022 From: jonmcalexander at wellsfargo.com (jonmcalexander at wellsfargo.com) Date: Fri, 28 Oct 2022 14:25:15 +0000 Subject: Question on Instance structure. In-Reply-To: References: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> <6bc8b40a-d19c-1d82-2ac0-47ec09178ab5@none.at> <22c0b8f38b3e4902a4ecd776fdf492d2@wellsfargo.com> Message-ID: <5aac08d007b24cbd898214bb3bf113aa@wellsfargo.com> I would be interested in either setup. My thoughts are around having 1 set of binaries with sites/instances in different folder structures that USE the central binary. I'm aware of the -c at the command line level, but items like setting the prefix, etc. is where most of my question lies. Please know that I'm coming from a Tomcat background at this. 
I did recently find you can also override the prefix on the command line, but apparently NOT in the config file.?. Thanks, Dream * Excel * Explore * Inspire Jon McAlexander Senior Infrastructure Engineer Asst. Vice President He/His Middleware Product Engineering Enterprise CIO | EAS | Middleware | Infrastructure Solutions 8080 Cobblestone Rd | Urbandale, IA 50322 MAC: F4469-010 Tel 515-988-2508 | Cell 515-988-2508 jonmcalexander at wellsfargo.com This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. > -----Original Message----- > From: Thomas Ward > Sent: Friday, October 28, 2022 7:13 AM > To: nginx at nginx.org; al-nginx at none.at > Cc: Mcalexander, Jon J. > Subject: Re: Question on Instance structure. > > Jon, > > I'm not 100% sure if you understand NGINX properly, but I think you're > confusing "multi-instance" and "multi-site" when it comes to NGINX. > > Multi-instance NGINX requires multiple individual NGINX instances each > running completely different configuration stacks, and not the default > nginx.conf. > > A multi-site configuration is one NGINX binary set with multiple `server {}` > definitions inside the relevant blocks - i.e multiple server blocks within the > `http{}` block for multiple webserver configurations to serve more than one > site from a single NGINX binary/config set. > > Is your goal to have multiple NGINX instances with different configurations > running individually alongside each other, or is your goal to have NGINX > simply serve / handle multiple sites with a single binary and configuration > set? > > > Thomas > > On 10/27/22 18:20, jonmcalexander--- via nginx wrote: > > Thank you! > > > > Can anything be setup in the nginx.conf file? > > > > Thanks, > > > > Dream * Excel * Explore * Inspire > > Jon McAlexander > > Senior Infrastructure Engineer > > Asst. Vice President > > He/His > > > > Middleware Product Engineering > > Enterprise CIO | EAS | Middleware | Infrastructure Solutions > > > > 8080 Cobblestone Rd | Urbandale, IA 50322 > > MAC: F4469-010 > > Tel 515-988-2508 | Cell 515-988-2508 > > > > jonmcalexander at wellsfargo.com > > This message may contain confidential and/or privileged information. If you > are not the addressee or authorized to receive this for the addressee, you > must not use, copy, disclose, or take any action based on this message or any > information herein. If you have received this message in error, please advise > the sender immediately by reply e-mail and delete this message. Thank you > for your cooperation. > > > > > >> -----Original Message----- > >> From: Aleksandar Lazic > >> Sent: Thursday, October 27, 2022 5:02 PM > >> To: Mcalexander, Jon J. > >> Cc: nginx at nginx.org > >> Subject: Re: Question on Instance structure. > >> > >> Hi. > >> > >> On 27.10.22 19:05, jonmcalexander--- via nginx wrote: > >>> I have a question regarding structure. > >>> > >>> What is the proper configuration of NGINX instances when you want to > >>> have multiple configurations of NGINX, but only 1 set of binaries? > >> You can use the '-c' flag to point to another nginx configuation. 
> >> > https://urldefense.com/v3/__http://nginx.org/en/docs/switches.html__; > >> !!F > 9svGWnIaVPGSwU!pknKiPNjWvixRyGoR7S8K6q7mMX6b52iQjfQRqt9s31DtW > >> 5LnUAaDsw9XZJRbE6zhQH5cVSbseKpo2afeSEcCnkmzw$ > >> > >> Regards > >> Alex > >> > >>> Thanks, > >>> > >>> Dream * Excel * Explore * Inspire > >>> Jon McAlexander > >>> Senior Infrastructure Engineer > >>> Asst. Vice President > >>> He/His > >>> > >>> Middleware Product Engineering > >>> Enterprise CIO | EAS | Middleware | Infrastructure Solutions > >>> > >>> 8080 Cobblestone Rd | Urbandale, IA 50322 > >>> MAC: F4469-010 > >>> Tel 515-988-2508 | Cell 515-988-2508 > >>> > >>> > >> > jonmcalexander at wellsfargo.com > >>> This message may contain confidential and/or privileged information. > >>> If you > >> are not the addressee or authorized to receive this for the > >> addressee, you must not use, copy, disclose, or take any action based > >> on this message or any information herein. If you have received this > >> message in error, please advise the sender immediately by reply > >> e-mail and delete this message. Thank you for your cooperation. > >>> > >>> > >>> _______________________________________________ > >>> nginx mailing list -- nginx at nginx.org To unsubscribe send an email > >>> to nginx-leave at nginx.org > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org From al-nginx at none.at Sat Oct 29 21:43:22 2022 From: al-nginx at none.at (Aleksandar Lazic) Date: Sat, 29 Oct 2022 23:43:22 +0200 Subject: Question on Instance structure. In-Reply-To: <5aac08d007b24cbd898214bb3bf113aa@wellsfargo.com> References: <440c2ad730a44491b1b4924ac73cc668@wellsfargo.com> <6bc8b40a-d19c-1d82-2ac0-47ec09178ab5@none.at> <22c0b8f38b3e4902a4ecd776fdf492d2@wellsfargo.com> <5aac08d007b24cbd898214bb3bf113aa@wellsfargo.com> Message-ID: <8b83762f-0a16-425c-8642-4a57a622f44e@none.at> Hi. On 28.10.22 16:25, jonmcalexander at wellsfargo.com wrote: > I would be interested in either setup. My thoughts are around having 1 set of > binaries with sites/instances in different folder structures that USE the central > binary. I'm aware of the -c at the command line level, but items like > setting the prefix, etc. is where most of my question lies. Please know that I'm > coming from a Tomcat background at this. I did recently find you can also override > the prefix on the command line, but apparently NOT in the config file.?. What do you mean with "prefix"? Please can you elaborate in more detail what do you plan and what Ideas is in your mind. Regards Alex > Thanks, > > Dream * Excel * Explore * Inspire > Jon McAlexander > Senior Infrastructure Engineer > Asst. Vice President > He/His > > Middleware Product Engineering > Enterprise CIO | EAS | Middleware | Infrastructure Solutions > > 8080 Cobblestone Rd | Urbandale, IA 50322 > MAC: F4469-010 > Tel 515-988-2508 | Cell 515-988-2508 > > jonmcalexander at wellsfargo.com > This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. 
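A sketch for the '-c'/prefix discussion above, with hypothetical paths: the same nginx binary can run several independent instances by giving each one its own prefix directory (-p) and configuration file (-c). Both switches are documented nginx command-line options; the directory layout below is only illustrative.

    # one shared binary, one prefix directory per instance (hypothetical layout)
    nginx -p /opt/nginx/instance1/ -c conf/nginx.conf
    nginx -p /opt/nginx/instance2/ -c conf/nginx.conf

    # signal a single instance; its pid file location is taken from that instance's config
    nginx -p /opt/nginx/instance1/ -c conf/nginx.conf -s reload

Relative paths in each configuration are resolved against that instance's prefix, so logs, pid file and temporary directories stay separated per instance while the binary remains shared.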
> > >> -----Original Message----- >> From: Thomas Ward >> Sent: Friday, October 28, 2022 7:13 AM >> To: nginx at nginx.org; al-nginx at none.at >> Cc: Mcalexander, Jon J. >> Subject: Re: Question on Instance structure. >> >> Jon, >> >> I'm not 100% sure if you understand NGINX properly, but I think you're >> confusing "multi-instance" and "multi-site" when it comes to NGINX. >> >> Multi-instance NGINX requires multiple individual NGINX instances each >> running completely different configuration stacks, and not the default >> nginx.conf. >> >> A multi-site configuration is one NGINX binary set with multiple `server {}` >> definitions inside the relevant blocks - i.e multiple server blocks within the >> `http{}` block for multiple webserver configurations to serve more than one >> site from a single NGINX binary/config set. >> >> Is your goal to have multiple NGINX instances with different configurations >> running individually alongside each other, or is your goal to have NGINX >> simply serve / handle multiple sites with a single binary and configuration >> set? >> >> >> Thomas >> >> On 10/27/22 18:20, jonmcalexander--- via nginx wrote: >>> Thank you! >>> >>> Can anything be setup in the nginx.conf file? >>> >>> Thanks, >>> >>> Dream * Excel * Explore * Inspire >>> Jon McAlexander >>> Senior Infrastructure Engineer >>> Asst. Vice President >>> He/His >>> >>> Middleware Product Engineering >>> Enterprise CIO | EAS | Middleware | Infrastructure Solutions >>> >>> 8080 Cobblestone Rd | Urbandale, IA 50322 >>> MAC: F4469-010 >>> Tel 515-988-2508 | Cell 515-988-2508 >>> >>> jonmcalexander at wellsfargo.com >>> This message may contain confidential and/or privileged information. If you >> are not the addressee or authorized to receive this for the addressee, you >> must not use, copy, disclose, or take any action based on this message or any >> information herein. If you have received this message in error, please advise >> the sender immediately by reply e-mail and delete this message. Thank you >> for your cooperation. >>> >>> >>>> -----Original Message----- >>>> From: Aleksandar Lazic >>>> Sent: Thursday, October 27, 2022 5:02 PM >>>> To: Mcalexander, Jon J. >>>> Cc: nginx at nginx.org >>>> Subject: Re: Question on Instance structure. >>>> >>>> Hi. >>>> >>>> On 27.10.22 19:05, jonmcalexander--- via nginx wrote: >>>>> I have a question regarding structure. >>>>> >>>>> What is the proper configuration of NGINX instances when you want to >>>>> have multiple configurations of NGINX, but only 1 set of binaries? >>>> You can use the '-c' flag to point to another nginx configuation. >>>> >> https://urldefense.com/v3/__http://nginx.org/en/docs/switches.html__; >>>> !!F >> 9svGWnIaVPGSwU!pknKiPNjWvixRyGoR7S8K6q7mMX6b52iQjfQRqt9s31DtW >>>> 5LnUAaDsw9XZJRbE6zhQH5cVSbseKpo2afeSEcCnkmzw$ >>>> >>>> Regards >>>> Alex >>>> >>>>> Thanks, >>>>> >>>>> Dream * Excel * Explore * Inspire >>>>> Jon McAlexander >>>>> Senior Infrastructure Engineer >>>>> Asst. Vice President >>>>> He/His >>>>> >>>>> Middleware Product Engineering >>>>> Enterprise CIO | EAS | Middleware | Infrastructure Solutions >>>>> >>>>> 8080 Cobblestone Rd | Urbandale, IA 50322 >>>>> MAC: F4469-010 >>>>> Tel 515-988-2508 | Cell 515-988-2508 >>>>> >>>>> >>>> >> jonmcalexander at wellsfargo.com >>>>> This message may contain confidential and/or privileged information. 
>>>>> If you >>>> are not the addressee or authorized to receive this for the >>>> addressee, you must not use, copy, disclose, or take any action based >>>> on this message or any information herein. If you have received this >>>> message in error, please advise the sender immediately by reply >>>> e-mail and delete this message. Thank you for your cooperation. >>>>> >>>>> >>>>> _______________________________________________ >>>>> nginx mailing list -- nginx at nginx.org To unsubscribe send an email >>>>> to nginx-leave at nginx.org >>> _______________________________________________ >>> nginx mailing list -- nginx at nginx.org >>> To unsubscribe send an email to nginx-leave at nginx.org From mdounin at mdounin.ru Sun Oct 30 04:24:43 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 30 Oct 2022 07:24:43 +0300 Subject: Disabling keepalive In-Reply-To: References: Message-ID: Hello! On Fri, Oct 28, 2022 at 09:01:27AM -0400, libresco_27 wrote: > Thanks for your answer! > I have another query if we can actually see that keepalive is being disabled > in nginx logs. > Is it possible to confirm that if we run nginx in debug mode and if so, what > kind of logs should I look for? When keepalive with upstream servers is enabled, in the debug logs there will be "get keepalive peer", "get keepalive peer: using connection ...", "free keepalive peer", and "free keepalive peer: saving connection ..." messages when selecting a peer and finalizing upstream request. For example, a connection attempt without a cached connection but with keepalive enabled will look like: 2022/10/30 07:18:31 [debug] 13406#100156: *1 http init upstream, client timer: 0 2022/10/30 07:18:31 [debug] 13406#100156: *1 kevent set event: 11: ft:-2 fl:0025 2022/10/30 07:18:31 [debug] 13406#100156: *1 http script copy: "Host" 2022/10/30 07:18:31 [debug] 13406#100156: *1 http script var: "u" 2022/10/30 07:18:31 [debug] 13406#100156: *1 http script copy: "Connection" 2022/10/30 07:18:31 [debug] 13406#100156: *1 http script copy: "close" 2022/10/30 07:18:31 [debug] 13406#100156: *1 http script copy: "" 2022/10/30 07:18:31 [debug] 13406#100156: *1 http script copy: "" 2022/10/30 07:18:31 [debug] 13406#100156: *1 http proxy header: "GET / HTTP/1.0 Host: u Connection: close " 2022/10/30 07:18:31 [debug] 13406#100156: *1 http cleanup add: 21982D88 2022/10/30 07:18:31 [debug] 13406#100156: *1 init keepalive peer 2022/10/30 07:18:31 [debug] 13406#100156: *1 get keepalive peer 2022/10/30 07:18:31 [debug] 13406#100156: *1 get rr peer, try: 1 2022/10/30 07:18:31 [debug] 13406#100156: *1 stream socket 12 2022/10/30 07:18:31 [debug] 13406#100156: *1 connect to 127.0.0.1:8081, fd:12 #2 2022/10/30 07:18:31 [debug] 13406#100156: *1 kevent set event: 12: ft:-1 fl:0025 2022/10/30 07:18:31 [debug] 13406#100156: *1 connected Without keepalive enabled there will be no "init keepalive peer" and "get keepalive peer" messages: 2022/10/30 07:21:36 [debug] 13416#100132: *1 http init upstream, client timer: 0 2022/10/30 07:21:36 [debug] 13416#100132: *1 kevent set event: 11: ft:-2 fl:0025 2022/10/30 07:21:36 [debug] 13416#100132: *1 http script copy: "Host" 2022/10/30 07:21:36 [debug] 13416#100132: *1 http script var: "u" 2022/10/30 07:21:36 [debug] 13416#100132: *1 http script copy: "Connection" 2022/10/30 07:21:36 [debug] 13416#100132: *1 http script copy: "close" 2022/10/30 07:21:36 [debug] 13416#100132: *1 http script copy: "" 2022/10/30 07:21:36 [debug] 13416#100132: *1 http script copy: "" 2022/10/30 07:21:36 [debug] 13416#100132: *1 http proxy 
header: "GET / HTTP/1.0 Host: u Connection: close " 2022/10/30 07:21:36 [debug] 13416#100132: *1 http cleanup add: 21982D88 2022/10/30 07:21:36 [debug] 13416#100132: *1 get rr peer, try: 1 2022/10/30 07:21:36 [debug] 13416#100132: *1 stream socket 12 2022/10/30 07:21:36 [debug] 13416#100132: *1 connect to 127.0.0.1:8081, fd:12 #2 2022/10/30 07:21:36 [debug] 13416#100132: *1 kevent set event: 12: ft:-1 fl:0025 2022/10/30 07:21:36 [debug] 13416#100132: *1 connected Hope this helps. -- Maxim Dounin http://mdounin.ru/ From biscotty666 at gmail.com Sun Oct 30 17:20:35 2022 From: biscotty666 at gmail.com (Brian Carey) Date: Sun, 30 Oct 2022 11:20:35 -0600 Subject: proxy_pass works on main page but not other pages Message-ID: <78a8bd73-d6e3-e890-d9b1-6df427991b9b@gmail.com> Hi, I have an app running at port 8239 on biscotty.me. If I access the app directly everything works as expected. I am able to use proxy_pass to forward https:/biscotty.me/striker to the main page of my app. The problem is that all of the links in the app result in a page not found error from the apache server handling requests to /. So it seems like the port number information is somehow being lost in translation? This is my conf: ``` location /striker {                rewrite /striker/(.*) /$1 break;                proxy_pass http://192.168.0.238:8239; proxy_set_header X-Real-IP $remote_addr;                proxy_set_header X-Forwarded-For $remote_addr;                proxy_set_header Host $host:8239;        }        location / {                proxy_pass http://192.168.0.238:8080/;                proxy_buffering on;                proxy_buffers 12 12k;                proxy_redirect off;                proxy_set_header X-Real-IP $remote_addr;                proxy_set_header X-Forwarded-For $remote_addr;                proxy_set_header Host $host:8080;        } ``` ``` -------------- next part -------------- An HTML attachment was scrubbed... URL: From biscotty666 at gmail.com Sun Oct 30 17:41:31 2022 From: biscotty666 at gmail.com (Brian Carey) Date: Sun, 30 Oct 2022 11:41:31 -0600 Subject: proxy_pass works on main page but not other pages In-Reply-To: <78a8bd73-d6e3-e890-d9b1-6df427991b9b@gmail.com> References: <78a8bd73-d6e3-e890-d9b1-6df427991b9b@gmail.com> Message-ID: Okay I seem to have solved this. I re-wrote the app urls to mount all directories under /striker, something unnecessary for the app itself but necessary for nginx to properly forward. I also removed the rewrite rule below. Thanks by the way for the help you give here. Brian On 10/30/22 11:20, Brian Carey wrote: > > Hi, > > I have an app running at port 8239 on biscotty.me. If I access the app > directly everything works as expected. > > I am able to use proxy_pass to forward https:/biscotty.me/striker to > the main page of my app. The problem is that all of the links in the > app result in a page not found error from the apache server handling > requests to /. So it seems like the port number information is somehow > being lost in translation? 
> > This is my conf: > > ``` > > location /striker { >                rewrite /striker/(.*) /$1 break; >                proxy_pass http://192.168.0.238:8239; > > proxy_set_header X-Real-IP $remote_addr; >                proxy_set_header X-Forwarded-For $remote_addr; >                proxy_set_header Host $host:8239; > >        } > >        location / { >                proxy_pass http://192.168.0.238:8080/; >                proxy_buffering on; >                proxy_buffers 12 12k; >                proxy_redirect off; > >                proxy_set_header X-Real-IP $remote_addr; >                proxy_set_header X-Forwarded-For $remote_addr; >                proxy_set_header Host $host:8080; > >        } > > ``` > > > ``` > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From biscotty666 at gmail.com Sun Oct 30 17:56:39 2022 From: biscotty666 at gmail.com (Brian Carey) Date: Sun, 30 Oct 2022 11:56:39 -0600 Subject: proxy_pass works on main page but not other pages In-Reply-To: References: <78a8bd73-d6e3-e890-d9b1-6df427991b9b@gmail.com> Message-ID: <6821f650-3a2d-b492-765b-953570c77edc@gmail.com> Thinking it through though I think my solution is bad since it implies a dependency between the urls defined in the program and the location used in nginx, ie. they must match and the program cannot be proxied at an arbitrary location. So hopefully there is a better solution than the one I found. I hope I'm not asking too many questions. On 10/30/22 11:41, Brian Carey wrote: > > Okay I seem to have solved this. I re-wrote the app urls to mount all > directories under /striker, something unnecessary for the app itself > but necessary for nginx to properly forward. I also removed the > rewrite rule below. > > Thanks by the way for the help you give here. > > Brian > > On 10/30/22 11:20, Brian Carey wrote: >> >> Hi, >> >> I have an app running at port 8239 on biscotty.me. If I access the >> app directly everything works as expected. >> >> I am able to use proxy_pass to forward https:/biscotty.me/striker to >> the main page of my app. The problem is that all of the links in the >> app result in a page not found error from the apache server handling >> requests to /. So it seems like the port number information is >> somehow being lost in translation? >> >> This is my conf: >> >> ``` >> >> location /striker { >>                rewrite /striker/(.*) /$1 break; >>                proxy_pass http://192.168.0.238:8239; >> >> proxy_set_header X-Real-IP $remote_addr; >>                proxy_set_header X-Forwarded-For $remote_addr; >>                proxy_set_header Host $host:8239; >> >>        } >> >>        location / { >>                proxy_pass http://192.168.0.238:8080/; >>                proxy_buffering on; >>                proxy_buffers 12 12k; >>                proxy_redirect off; >> >>                proxy_set_header X-Real-IP $remote_addr; >>                proxy_set_header X-Forwarded-For $remote_addr; >>                proxy_set_header Host $host:8080; >> >>        } >> >> ``` >> >> >> ``` >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL:
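A follow-up sketch for the question left open above: if the application cannot, or should not, be made aware of the /striker prefix, the usual nginx-side approach is a location with a trailing slash plus a proxy_pass URI, with proxy_redirect mapping the backend's redirects back under the prefix. The backend address is the one from the thread; everything else is illustrative and untested.

    location /striker/ {
        # /striker/foo is sent to the backend as /foo
        proxy_pass http://192.168.0.238:8239/;

        # map Location: headers from the backend back under /striker/
        proxy_redirect http://192.168.0.238:8239/ /striker/;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

This keeps the app's own URL space at / while nginx owns the /striker prefix. The trade-off described in the thread still applies to absolute links hard-coded in page bodies: those need either a prefix-aware application, a body rewrite (for example with the sub_filter module, which has its own caveats), or simply a dedicated hostname for the app in its own server block.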