From mouseless at free.fr Sun Sep 4 11:22:20 2022
From: mouseless at free.fr (Vincent M.)
Date: Sun, 4 Sep 2022 13:22:20 +0200
Subject: Nginx stop logging
Message-ID: <860a355f-dae2-3c5a-f128-31be3adde2b1@free.fr>

Hello,

The logs are working fine, but once a day nginx stops logging accesses and the log file is empty.

So every day I have to restart my nginx server in order to get the logs.

It's on Rocky Linux 9 with Nginx 1.20.1.

Never seen that before; what should I check?

Thanks,
Vincent.

From frank.swasey at gmail.com Sun Sep 4 12:17:46 2022
From: frank.swasey at gmail.com (Frank Swasey)
Date: Sun, 4 Sep 2022 08:17:46 -0400
Subject: Nginx stop logging
In-Reply-To: <860a355f-dae2-3c5a-f128-31be3adde2b1@free.fr>
References: <860a355f-dae2-3c5a-f128-31be3adde2b1@free.fr>
Message-ID: 

This sounds like your log rotation process is not signalling nginx to
write a new log. I don't know Rocky Linux, so I can't be more specific
with further suggestions.

 ~ Frank

On Sun, Sep 4, 2022 at 7:24 AM Vincent M. wrote:

> Hello,
>
> The logs are working fine, but once a day nginx stops logging accesses
> and the log file is empty.
>
> So every day I have to restart my nginx server in order to get the logs.
>
> It's on Rocky Linux 9 with Nginx 1.20.1.
>
> Never seen that before; what should I check?
>
> Thanks,
> Vincent.
> _______________________________________________
> nginx mailing list -- nginx at nginx.org
> To unsubscribe send an email to nginx-leave at nginx.org

-- 
I am not young enough to know everything.
 - Oscar Wilde (1854-1900)

From mouseless at free.fr Sun Sep 4 13:45:10 2022
From: mouseless at free.fr (Vincent M.)
Date: Sun, 4 Sep 2022 15:45:10 +0200
Subject: Nginx stop logging
In-Reply-To: 
References: <860a355f-dae2-3c5a-f128-31be3adde2b1@free.fr>
Message-ID: <89bb9d25-9216-e1f8-921c-d594a60deb41@free.fr>

On my Rocky Linux 9, I have found this file, /etc/logrotate.d/nginx:

/var/log/nginx/*log {
    daily
    rotate 10
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
    endscript
}

Whereas on my Fedora 36 (where the log rotation process works fine) I have:

/var/log/nginx/*.log {
    create 0640 nginx root
    daily
    rotate 10
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
    endscript
}

Can I add "create 0640 nginx root" on the server without risking
breaking everything on the next log rotation?

Thank you,
Vincent.

On 9/4/22 14:17, Frank Swasey wrote:
> This sounds like your log rotation process is not signalling nginx to
> write a new log.  I don't know Rocky Linux, so I can't be more specific
> with further suggestions.
>
>  ~ Frank
>
> --
> I am not young enough to know everything.
 - Oscar Wilde (1854-1900)

From osa at freebsd.org.ru Sun Sep 4 13:51:06 2022
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Sun, 4 Sep 2022 16:51:06 +0300
Subject: Nginx stop logging
In-Reply-To: <860a355f-dae2-3c5a-f128-31be3adde2b1@free.fr>
References: <860a355f-dae2-3c5a-f128-31be3adde2b1@free.fr>
Message-ID: 

Hi Vincent,

hope you're doing well.

On Sun, Sep 04, 2022 at 01:22:20PM +0200, Vincent M. wrote:
> Hello,
>
> The logs are working fine, but once a day nginx stops logging accesses
> and the log file is empty.
> So every day I have to restart my nginx server in order to get the logs.
> It's on Rocky Linux 9 with Nginx 1.20.1.
> Never seen that before; what should I check?

The official Linux packages [1] contain the logrotate(8) configuration
file, so that part of the job can be done automagically.

Also, while I'm here, I'd recommend upgrading the installed nginx to the
more recent stable version, 1.22.0.

Please let me know if you have any questions.

Thank you.

References:

1. https://nginx.org/en/linux_packages.html#RHEL-CentOS

-- 
Sergey A. Osokin

From frank.swasey at gmail.com Sun Sep 4 17:30:03 2022
From: frank.swasey at gmail.com (Frank Swasey)
Date: Sun, 4 Sep 2022 13:30:03 -0400
Subject: Nginx stop logging
In-Reply-To: <89bb9d25-9216-e1f8-921c-d594a60deb41@free.fr>
References: <860a355f-dae2-3c5a-f128-31be3adde2b1@free.fr> <89bb9d25-9216-e1f8-921c-d594a60deb41@free.fr>
Message-ID: 

You should verify that /run/nginx.pid actually exists and contains the
PID of the master nginx process on your Rocky Linux system. I think that
adding the "create 0640 nginx root" line to the logrotate config file
would not help.
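For background, the reason the log goes empty after an unsignalled rotation is that nginx keeps writing through its already-open file descriptor, and that descriptor follows the renamed (rotated) file rather than the path. A small, self-contained illustration of that behaviour (hypothetical /tmp paths, not an actual nginx setup):

```shell
# Simulate what happens when logrotate renames a log file out from under
# a process that keeps its file descriptor open (as nginx does until it
# receives USR1 and reopens its logs).
printf 'line1\n' > /tmp/demo.log
exec 3>>/tmp/demo.log              # hold the file open, like a running daemon
mv /tmp/demo.log /tmp/demo.log.1   # "rotate" by renaming
printf 'line2\n' >&3               # the write follows the fd into the renamed file
exec 3>&-                          # close the descriptor
cat /tmp/demo.log.1                # prints: line1, then line2
ls /tmp/demo.log 2>/dev/null || echo "no new /tmp/demo.log was created"
```

This is why the postrotate script must deliver USR1 (reopen log files) to the master process: if /run/nginx.pid is missing or stale, the signal is never sent and nothing ever writes to the new log file.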
You can issue "kill -USR1 <pid>", where you replace "<pid>" with the PID
of the running nginx process, and it SHOULD create a new log file.

 ~ Frank

On Sun, Sep 4, 2022 at 9:46 AM Vincent M. wrote:

> On my Rocky Linux 9, I have found this file, /etc/logrotate.d/nginx:
>
> /var/log/nginx/*log {
>     daily
>     rotate 10
>     missingok
>     notifempty
>     compress
>     delaycompress
>     sharedscripts
>     postrotate
>         /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
>     endscript
> }
>
> Whereas on my Fedora 36 (where the log rotation process works fine) I have:
>
> /var/log/nginx/*.log {
>     create 0640 nginx root
>     daily
>     rotate 10
>     missingok
>     notifempty
>     compress
>     delaycompress
>     sharedscripts
>     postrotate
>         /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
>     endscript
> }
>
> Can I add "create 0640 nginx root" on the server without risking
> breaking everything on the next log rotation?
>
> Thank you,
> Vincent.
>
> On 9/4/22 14:17, Frank Swasey wrote:
>
> This sounds like your log rotation process is not signalling nginx to
> write a new log.  I don't know Rocky Linux, so I can't be more specific
> with further suggestions.
>
>  ~ Frank
>
> --
> I am not young enough to know everything.
 - Oscar Wilde (1854-1900)

From biscotty666 at gmail.com Sun Sep 4 23:26:05 2022
From: biscotty666 at gmail.com (Brian Carey)
Date: Sun, 4 Sep 2022 17:26:05 -0600
Subject: Trouble setting up SSL
Message-ID: <1802ea72-2ee0-a2aa-2b66-164ada169180@gmail.com>

Hi,

I'm pretty new to nginx but do have a server up and running. I've been
pulling my hair out over SSL setup, though. I have read the docs on your
site and some others, like the Alpine site. For the most recent attempt,
I followed the video tutorial on your website. Whenever I try to connect
via SSL, it hangs. I hope someone here has some ideas, because I don't
know where else to turn.

No errors show in the nginx logs.

I'm running Ubuntu 20.04. Nginx was installed following the instructions
on your website.

When I try to access http://www.biscotty.dev with curl, I get a response.
If I explicitly request https, it hangs indefinitely. The
commands/responses are posted below.

Not sure if this matters, but I have learned that .dev domains try to
enforce https, so explicitly using http in a browser GUI craps out no
matter what; curl ignores this and serves you via http anyway. I don't
know if this matters, but I thought I would mention it.

Here is my .conf file. I have not modified anything else from the
initial install.
'''
server {
    listen 80 default_server;
    server_name www.biscotty.dev;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name www.biscotty.dev;

    ssl_certificate /etc/nginx/ssl/biscotty.dev.crt;
    ssl_certificate_key /etc/nginx/ssl/biscotty.dev.key;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
'''
'''
root at biscotty-lt:/etc/nginx/conf.d# curl -I http://biscotty.dev
HTTP/1.1 301 Moved Permanently
Server: nginx/1.23.1
Date: Sun, 04 Sep 2022 21:05:01 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: https://www.biscotty.dev/

root at biscotty-lt:/etc/nginx/conf.d# curl -I https://biscotty.dev
^C
'''

From moshe at ymkatz.net Sun Sep 4 23:48:45 2022
From: moshe at ymkatz.net (Moshe Katz)
Date: Sun, 4 Sep 2022 19:48:45 -0400
Subject: Trouble setting up SSL
In-Reply-To: <1802ea72-2ee0-a2aa-2b66-164ada169180@gmail.com>
References: <1802ea72-2ee0-a2aa-2b66-164ada169180@gmail.com>
Message-ID: 

Here are a few things you can check (all of these need to be run as root
or using `sudo`):

Is nginx actually listening on port 443? There are lots of different
commands you can use to check this, but I like to use
`netstat -lptn | grep nginx`.

Is there an error in your config? Check this with `nginx -t`.

Have you allowed port 443 through your firewall? Check `/var/log/syslog`
for firewall messages.

On Sun, Sep 4, 2022, 7:27 PM Brian Carey wrote:

> Hi,
>
> I'm pretty new to nginx but do have a server up and running. I've been
> pulling my hair out over SSL setup, though. I have read the docs on your
> site and some others, like the Alpine site. For the most recent attempt,
> I followed the video tutorial on your website. Whenever I try to connect
> via SSL, it hangs. I hope someone here has some ideas, because I don't
> know where else to turn.
>
> No errors show in the nginx logs.
>
> I'm running Ubuntu 20.04. Nginx was installed following the instructions
> on your website.
> When I try to access http://www.biscotty.dev with curl, I get a response.
> If I explicitly request https, it hangs indefinitely. The
> commands/responses are posted below.
>
> Not sure if this matters, but I have learned that .dev domains try to
> enforce https, so explicitly using http in a browser GUI craps out no
> matter what; curl ignores this and serves you via http anyway. I don't
> know if this matters, but I thought I would mention it.
>
> Here is my .conf file. I have not modified anything else from the
> initial install.
>
> '''
> server {
>     listen 80 default_server;
>     server_name www.biscotty.dev;
>     return 301 https://$server_name$request_uri;
> }
>
> server {
>     listen 443 ssl;
>     server_name www.biscotty.dev;
>
>     ssl_certificate /etc/nginx/ssl/biscotty.dev.crt;
>     ssl_certificate_key /etc/nginx/ssl/biscotty.dev.key;
>
>     location / {
>         root /usr/share/nginx/html;
>         index index.html index.htm;
>     }
> }
> '''
> '''
> root at biscotty-lt:/etc/nginx/conf.d# curl -I http://biscotty.dev
> HTTP/1.1 301 Moved Permanently
> Server: nginx/1.23.1
> Date: Sun, 04 Sep 2022 21:05:01 GMT
> Content-Type: text/html
> Content-Length: 169
> Connection: keep-alive
> Location: https://www.biscotty.dev/
>
> root at biscotty-lt:/etc/nginx/conf.d# curl -I https://biscotty.dev
> ^C
> '''

From biscotty666 at gmail.com Sun Sep 4 23:54:19 2022
From: biscotty666 at gmail.com (Brian Carey)
Date: Sun, 4 Sep 2022 17:54:19 -0600
Subject: Trouble setting up SSL
In-Reply-To: 
References: <1802ea72-2ee0-a2aa-2b66-164ada169180@gmail.com>
Message-ID: 

Thanks so much for your reply. See answers below.
On 9/4/22 17:48, Moshe Katz wrote:
> Here are a few things you can check (all of these need to be run as
> root or using `sudo`):
>
> Is nginx actually listening on port 443? There are lots of different
> commands you can use to check this, but I like to use
> `netstat -lptn | grep nginx`.

root at biscotty-lt:/etc/nginx/conf.d# netstat -lptn | grep nginx
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      58325/nginx: master
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      58325/nginx: master

> Is there an error in your config? Check this with `nginx -t`.

root at biscotty-lt:/etc/nginx/conf.d# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

> Have you allowed port 443 through your firewall? Check
> `/var/log/syslog` for firewall messages.

I've disabled the firewall until I get this resolved.

From biscotty666 at gmail.com Mon Sep 5 00:02:21 2022
From: biscotty666 at gmail.com (Brian Carey)
Date: Sun, 4 Sep 2022 18:02:21 -0600
Subject: Trouble setting up SSL
In-Reply-To: 
References: <1802ea72-2ee0-a2aa-2b66-164ada169180@gmail.com>
Message-ID: <70dcfe47-eb5e-2685-25e2-c06eedcdcf7f@gmail.com>

Also, just to give as much info as possible, this is how I created the keys:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/biscotty.dev.key -out /etc/nginx/ssl/biscotty.dev.crt

On 9/4/22 17:48, Moshe Katz wrote:
> Here are a few things you can check (all of these need to be run as
> root or using `sudo`):
>
> Is nginx actually listening on port 443? There are lots of different
> commands you can use to check this, but I like to use
> `netstat -lptn | grep nginx`.
>
> Is there an error in your config? Check this with `nginx -t`.
>
> Have you allowed port 443 through your firewall? Check
> `/var/log/syslog` for firewall messages.
>
> On Sun, Sep 4, 2022, 7:27 PM Brian Carey wrote:
>
> Hi,
>
> I'm pretty new to nginx but do have a server up and running. I've been
> pulling my hair out over SSL setup, though. I have read the docs on your
> site and some others, like the Alpine site. For the most recent attempt,
> I followed the video tutorial on your website. Whenever I try to connect
> via SSL, it hangs. I hope someone here has some ideas, because I don't
> know where else to turn.
>
> No errors show in the nginx logs.
>
> I'm running Ubuntu 20.04. Nginx was installed following the instructions
> on your website.
>
> When I try to access http://www.biscotty.dev with curl, I get a response.
> If I explicitly request https, it hangs indefinitely. The
> commands/responses are posted below.
>
> Not sure if this matters, but I have learned that .dev domains try to
> enforce https, so explicitly using http in a browser GUI craps out no
> matter what; curl ignores this and serves you via http anyway. I don't
> know if this matters, but I thought I would mention it.
>
> Here is my .conf file. I have not modified anything else from the
> initial install.
>
> '''
> server {
>     listen 80 default_server;
>     server_name www.biscotty.dev;
>     return 301 https://$server_name$request_uri;
> }
>
> server {
>     listen 443 ssl;
>     server_name www.biscotty.dev;
>
>     ssl_certificate /etc/nginx/ssl/biscotty.dev.crt;
>     ssl_certificate_key /etc/nginx/ssl/biscotty.dev.key;
>
>     location / {
>         root /usr/share/nginx/html;
>         index index.html index.htm;
>     }
> }
> '''
> '''
> root at biscotty-lt:/etc/nginx/conf.d# curl -I http://biscotty.dev
> HTTP/1.1 301 Moved Permanently
> Server: nginx/1.23.1
> Date: Sun, 04 Sep 2022 21:05:01 GMT
> Content-Type: text/html
> Content-Length: 169
> Connection: keep-alive
> Location: https://www.biscotty.dev/
>
> root at biscotty-lt:/etc/nginx/conf.d# curl -I https://biscotty.dev
> ^C
> '''

From teward at thomas-ward.net Mon Sep 5 00:10:40 2022
From: teward at thomas-ward.net (Thomas Ward)
Date: Mon, 5 Sep 2022 00:10:40 +0000
Subject: Trouble setting up SSL
In-Reply-To: 
References: <1802ea72-2ee0-a2aa-2b66-164ada169180@gmail.com>
Message-ID: 

Is this on a VPS? They might have an additional firewall on the hosting
side you need to adjust.

If this is behind a router and you are outside the network, make sure to
port-forward port 443.

Sent from my Galaxy

-------- Original message --------
From: Brian Carey
Date: 9/4/22 19:55 (GMT-05:00)
To: nginx at nginx.org
Subject: Re: Trouble setting up SSL

Thanks so much for your reply. See answers below.

On 9/4/22 17:48, Moshe Katz wrote:

Here are a few things you can check (all of these need to be run as root
or using `sudo`):

Is nginx actually listening on port 443? There are lots of different
commands you can use to check this, but I like to use
`netstat -lptn | grep nginx`.

root at biscotty-lt:/etc/nginx/conf.d# netstat -lptn | grep nginx
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      58325/nginx: master
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      58325/nginx: master

Is there an error in your config? Check this with `nginx -t`.

root at biscotty-lt:/etc/nginx/conf.d# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Have you allowed port 443 through your firewall? Check `/var/log/syslog`
for firewall messages.

I've disabled the firewall until I get this resolved.

On Sun, Sep 4, 2022, 7:27 PM Brian Carey wrote:

Hi,

I'm pretty new to nginx but do have a server up and running. I've been
pulling my hair out over SSL setup, though. I have read the docs on your
site and some others, like the Alpine site. For the most recent attempt,
I followed the video tutorial on your website. Whenever I try to connect
via SSL, it hangs. I hope someone here has some ideas, because I don't
know where else to turn.

No errors show in the nginx logs.

I'm running Ubuntu 20.04. Nginx was installed following the instructions
on your website.

When I try to access http://www.biscotty.dev with curl, I get a response.
If I explicitly request https, it hangs indefinitely. The
commands/responses are posted below.

Not sure if this matters, but I have learned that .dev domains try to
enforce https, so explicitly using http in a browser GUI craps out no
matter what; curl ignores this and serves you via http anyway. I don't
know if this matters, but I thought I would mention it.

Here is my .conf file. I have not modified anything else from the
initial install.
'''
server {
    listen 80 default_server;
    server_name www.biscotty.dev;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name www.biscotty.dev;

    ssl_certificate /etc/nginx/ssl/biscotty.dev.crt;
    ssl_certificate_key /etc/nginx/ssl/biscotty.dev.key;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
'''
'''
root at biscotty-lt:/etc/nginx/conf.d# curl -I http://biscotty.dev
HTTP/1.1 301 Moved Permanently
Server: nginx/1.23.1
Date: Sun, 04 Sep 2022 21:05:01 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: https://www.biscotty.dev/

root at biscotty-lt:/etc/nginx/conf.d# curl -I https://biscotty.dev
^C
'''

From biscotty666 at gmail.com Mon Sep 5 00:24:16 2022
From: biscotty666 at gmail.com (Brian Carey)
Date: Sun, 4 Sep 2022 18:24:16 -0600
Subject: Trouble setting up SSL
In-Reply-To: 
References: <1802ea72-2ee0-a2aa-2b66-164ada169180@gmail.com>
Message-ID: <29a03530-6689-53a6-d0ed-0200e075c851@gmail.com>

OMG. I went around so many times trying to figure it out that I forgot
to re-check my router's port forwarding. I must have changed it at some
point. A million thanks!

On 9/4/22 18:10, Thomas Ward wrote:
> Is this on a VPS?  They might have an additional firewall on the
> hosting side you need to adjust.
>
> If this is behind a router and you are outside the network, make sure
> to port-forward port 443.
>
> Sent from my Galaxy
>
> -------- Original message --------
> From: Brian Carey
> Date: 9/4/22 19:55 (GMT-05:00)
> To: nginx at nginx.org
> Subject: Re: Trouble setting up SSL
>
> Thanks so much for your reply.
> See answers below.
>
> On 9/4/22 17:48, Moshe Katz wrote:
>> Here are a few things you can check (all of these need to be run as
>> root or using `sudo`):
>>
>> Is nginx actually listening on port 443? There are lots of different
>> commands you can use to check this, but I like to use
>> `netstat -lptn | grep nginx`.
>
> root at biscotty-lt:/etc/nginx/conf.d# netstat -lptn | grep nginx
> tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      58325/nginx: master
> tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      58325/nginx: master
>
>> Is there an error in your config? Check this with `nginx -t`.
>
> root at biscotty-lt:/etc/nginx/conf.d# nginx -t
> nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
> nginx: configuration file /etc/nginx/nginx.conf test is successful
>
>> Have you allowed port 443 through your firewall? Check
>> `/var/log/syslog` for firewall messages.
>
> I've disabled the firewall until I get this resolved.
>
>> On Sun, Sep 4, 2022, 7:27 PM Brian Carey wrote:
>>
>> Hi,
>>
>> I'm pretty new to nginx but do have a server up and running. I've been
>> pulling my hair out over SSL setup, though. I have read the docs on your
>> site and some others, like the Alpine site. For the most recent attempt,
>> I followed the video tutorial on your website. Whenever I try to connect
>> via SSL, it hangs. I hope someone here has some ideas, because I don't
>> know where else to turn.
>>
>> No errors show in the nginx logs.
>>
>> I'm running Ubuntu 20.04. Nginx was installed following the instructions
>> on your website.
>>
>> When I try to access http://www.biscotty.dev with curl, I get a response.
>> If I explicitly request https, it hangs indefinitely. The
>> commands/responses are posted below.
>>
>> Not sure if this matters, but I have learned that .dev domains try to
>> enforce https, so explicitly using http in a browser GUI craps out no
>> matter what; curl ignores this and serves you via http anyway. I don't
>> know if this matters, but I thought I would mention it.
>>
>> Here is my .conf file. I have not modified anything else from the
>> initial install.
>>
>> '''
>> server {
>>     listen 80 default_server;
>>     server_name www.biscotty.dev;
>>     return 301 https://$server_name$request_uri;
>> }
>>
>> server {
>>     listen 443 ssl;
>>     server_name www.biscotty.dev;
>>
>>     ssl_certificate /etc/nginx/ssl/biscotty.dev.crt;
>>     ssl_certificate_key /etc/nginx/ssl/biscotty.dev.key;
>>
>>     location / {
>>         root /usr/share/nginx/html;
>>         index index.html index.htm;
>>     }
>> }
>> '''
>> '''
>> root at biscotty-lt:/etc/nginx/conf.d# curl -I http://biscotty.dev
>> HTTP/1.1 301 Moved Permanently
>> Server: nginx/1.23.1
>> Date: Sun, 04 Sep 2022 21:05:01 GMT
>> Content-Type: text/html
>> Content-Length: 169
>> Connection: keep-alive
>> Location: https://www.biscotty.dev/
>>
>> root at biscotty-lt:/etc/nginx/conf.d# curl -I https://biscotty.dev
>> ^C
>> '''

From biscotty666 at gmail.com Mon Sep 5 05:45:13 2022
From: biscotty666 at gmail.com (Brian Carey)
Date: Sun, 4 Sep 2022 23:45:13 -0600
Subject: best practice for file locations
Message-ID: 

Hi,

I'm a little confused about the relationship between conf files in
conf.d vs sites-enabled. Should server configurations be set up in one
directory or both? If either, is there a "best practice" location? If
there are conf files in both locations, which takes precedence? Some
sources I've read/seen seem to use one, and others seem to use the other.

Thanks,
biscotty

From dmiller at amfes.com Mon Sep 5 19:25:41 2022
From: dmiller at amfes.com (Daniel L. Miller)
Date: Mon, 05 Sep 2022 19:25:41 +0000
Subject: Multiple wildcard server_name
Message-ID: 

While I'm sure this is documented somewhere, I haven't found exactly
what I'm looking for. Or I'm just not understanding what I've read.

My understanding is that simply prefixing a server name with ".", such
as ".example.com", is a special wildcard that basically becomes
"example.com *.example.com". My current nginx version is 1.20.2.

I have a number of domains that I want to re-direct to a master name.
And I want http re-directed to https. So I have:

server {
    listen 80 default_server;
    server_name .maindomain.com .example1.com .example2.com .example3.com;

    location / {
        return 301 https://maindomain.com$request_uri;
    }
}

server {
    listen 443 ssl http2 default_server;
    server_name_in_redirect on;
    server_name maindomain.com www.maindomain.com *.maindomain.com;
}

Based on the docs, I recently changed my second server block from just
".maindomain.com" to the explicit matching for faster default
processing.

This works for "https://maindomain.com" and "http://maindomain.com".
Also for both protocols for "www.maindomain.com". And it works for
"www.example1.com" as well as the other alternate domains with a "www"
prefix. But it does not work for just "example1.com" or the other
domains.
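For reference, the ".example.com" shorthand described above expands exactly as the question assumes; a minimal sketch with hypothetical domain names, where the two blocks match the same set of hosts:

```nginx
# ".example1.com" in a server_name is shorthand for the exact name
# plus the wildcard, i.e. these two server blocks are equivalent:
server {
    listen 80;
    server_name .example1.com;
    return 301 https://maindomain.com$request_uri;
}

server {
    listen 80;
    server_name example1.com *.example1.com;
    return 301 https://maindomain.com$request_uri;
}
```

One thing worth checking in a setup like the one above: a request whose Host header matches none of the server_name entries on a given port is handled by that port's default_server, so a bare name that 404s over https may simply not be listed in the 443 block.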
It doesn't appear to be DNS; both the base domain and the "www" A
records point to the same IP. What I'm receiving is a 404 Not Found for
either "http://example1.com" (which does not re-direct to https) or
"https://example1.com". And I don't understand why.

--
Daniel

From 470101904 at qq.com Mon Sep 5 19:26:41 2022
From: 470101904 at qq.com (470101904)
Date: Tue, 6 Sep 2022 03:26:41 +0800
Subject: Auto-reply: Multiple wildcard server_name
In-Reply-To: 
Message-ID: 

From biscotty666 at gmail.com Mon Sep 5 23:14:01 2022
From: biscotty666 at gmail.com (Brian Carey)
Date: Mon, 5 Sep 2022 17:14:01 -0600
Subject: nginx exits error code 0 with docker compose
Message-ID: <0d57bee2-dc9b-cb62-82ee-77e1f9dfbe75@gmail.com>

I'm trying to run nginx/mysql/php in docker. Everything seems to run fine.

Here is my Dockerfile, docker-compose.yaml and the nginx-related output.
I did try adding tty: true, but it made no difference.

Any ideas? Thanks in advance.
biscotty

'''
FROM nginx:alpine

CMD ["nginx", "-g", "daemon off;"]

EXPOSE 80 443
'''
'''
version: '3'
services:
  php-fpm:
    build:
      context: ./php-fpm
  nginx:
    build:
      context: ./nginx
    volumes:
      - ../src:/var/www
    ports:
      - target: 80
        host_ip: 127.0.0.1
        published: 8080
        protocol: tcp
        mode: host
      - target: 80
        host_ip: 127.0.0.1
        published: 8000-9000
        protocol: tcp
        mode: host
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=containerphp
      - MYSQL_USER=container
      - MYSQL_PASSWORD=container
      - MYSQL_ROOT_PASSWORD=*******
'''
'''
docker-nginx-1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
docker-nginx-1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
docker-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
docker-nginx-1 | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
docker-nginx-1 | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
docker-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
docker-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
docker-nginx-1 | /docker-entrypoint.sh: Configuration complete; ready for start up
docker-nginx-1 | 2022/09/05 22:55:51 [notice] 1#1: using the "epoll" event method
docker-nginx-1 | 2022/09/05 22:55:51 [notice] 1#1: nginx/1.23.1
docker-nginx-1 | 2022/09/05 22:55:51 [notice] 1#1: built by gcc 11.2.1 20220219 (Alpine 11.2.1_git20220219)
docker-nginx-1 | 2022/09/05 22:55:51 [notice] 1#1: OS: Linux 5.15.0-46-generic
docker-nginx-1 | 2022/09/05 22:55:51 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
docker-nginx-1 | 2022/09/05 22:55:51 [notice] 32#32: start worker processes
docker-nginx-1 | 2022/09/05 22:55:51 [notice] 32#32: start worker process 33
docker-nginx-1 | 2022/09/05 22:55:51 [notice] 32#32: start worker process 34
docker-nginx-1 exited with code 0
'''

From thresh at nginx.com Tue Sep 6 08:30:25 2022
From: thresh at nginx.com (Konstantin Pavlov)
Date: Tue, 6 Sep 2022 12:30:25 +0400
Subject: nginx exits error code 0 with docker compose
In-Reply-To: <0d57bee2-dc9b-cb62-82ee-77e1f9dfbe75@gmail.com>
References: <0d57bee2-dc9b-cb62-82ee-77e1f9dfbe75@gmail.com>
Message-ID: <1ee43b78-f51d-c0ca-05c0-6d577baafd53@nginx.com>

Hi Brian,

On 06/09/2022 3:14 AM, Brian Carey wrote:
>
> I'm trying to run nginx/mysql/php in docker. Everything seems to run
> fine.
>
> Here is my Dockerfile, docker-compose.yaml and the nginx-related
> output. I did try adding tty: true, but it made no difference.
>
> Any ideas? Thanks in advance.
>
> '''
>
> FROM nginx:alpine
>
> CMD ["nginx", "-g", "daemon off;"]
>
> EXPOSE 80 443
>

Make sure to rebuild the cached image used by docker-compose.  This
should work fine.

From nginx-forum at forum.nginx.org Wed Sep 7 13:01:05 2022
From: nginx-forum at forum.nginx.org (mirokub)
Date: Wed, 07 Sep 2022 09:01:05 -0400
Subject: waiting for full request
Message-ID: <648bffd44128a250724c3071d4b167c0.NginxMailingListEnglish@forum.nginx.org>

Hello,

Is there a way to configure NGINX to send the response (200 OK) only
after the full body of the request has been delivered? Currently the
response is sent early, before the request body is fully sent.

More details of my test case: the request is about 600kB long, and it
uses the POST method.
Thanks, Miro Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295132,295132#msg-295132 From joao.andrade at protocol.ai Thu Sep 8 10:27:47 2022 From: joao.andrade at protocol.ai (=?utf-8?Q?Jo=C3=A3o_Sousa_Andrade?=) Date: Thu, 8 Sep 2022 11:27:47 +0100 Subject: Running nginx-quic in a real environment Message-ID: <831BD439-0012-42CF-B360-DB4FB7045D9A@protocol.ai> Hello everyone, My team is currently considering using nginx-quic in a production-like environment after running it and benchmarking it in our test env. Reading "The code is currently at a beta level of quality and should not be used in production." under https://github.com/VKCOM/nginx-quic was a bit discouraging. However, I understand that repo isn't the source of truth and one should also note https://hg.nginx.org/nginx-quic is around 10mo ahead. Consequently, I decided to double-check here. Hope it's the right place to do so :) I'm currently wondering what is currently missing in terms of QUIC implementation in the latest version of nginx-quic. Are there any particular bugs I should be aware of? I also understand there are performance improvements currently in the works. This part should be mostly alright given we'll benchmark. It's the functional side of things I'm wondering about. Can anyone help me with that? If this goes forward, we'll be happy to share anything useful we find on our side as well. Thank you, João From mdounin at mdounin.ru Thu Sep 8 14:36:42 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 8 Sep 2022 17:36:42 +0300 Subject: waiting for full request In-Reply-To: <648bffd44128a250724c3071d4b167c0.NginxMailingListEnglish@forum.nginx.org> References: <648bffd44128a250724c3071d4b167c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Sep 07, 2022 at 09:01:05AM -0400, mirokub wrote: > Hello, > Is there a way to configure NGINX to send response (200 OK) only after full > body of request is delivered? 
> Currently the response is sent early before the request body is fully sent. > More details of my test-case: The request is about 600kB long and it's POST > method. The response is sent by nginx when it's available. For static files and rewrite constructs like "return 200 ok;" this happens right after the request headers are received, so nginx doesn't try to wait for the request body before sending the response. In some cases this might save the client from sending unneeded body to nginx. If for some reason you want nginx to only respond when the whole request body is received, the readily available solution would be to configure nginx in a way which needs the whole request body before sending the response. For example, you can configure additional proxying with proxy_request_buffering switched on (the default), so nginx will read the whole request body before proxying the request (and returning the response from the upstream server): server { listen 80; location / { proxy_pass http://127.0.0.1:8081; } } server { listen 127.0.0.1:8081; return 200 ok; } Hope this helps. -- Maxim Dounin http://mdounin.ru/ From teodorescu.serban at gmail.com Thu Sep 8 14:58:07 2022 From: teodorescu.serban at gmail.com (Serban Teodorescu) Date: Thu, 8 Sep 2022 17:58:07 +0300 Subject: Change proxy_cache options depending on status code received Message-ID: <9A208E89-24D0-47FE-A830-2D3F5E38DE7F@gmail.com> Hello, I’d like to configure nginx to put up a standard maintenance page depending on the status code received from the upstream (currently for 502, 503 and 504, cached for 3 minutes). This works pretty simply and well, but I would like to ensure the maintenance page is being served directly on all urls. Currently the cache_key contains $request_uri at the end, of course, to make sure the caching is working properly when everything is fine. Would you be able to please point me in the right direction? Thank you!
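[Editor's note: the setup Serban describes — a short proxy_cache_valid for the error codes plus an error_page that sends every failing URL to one static maintenance page — can be sketched as below. The cache zone name, backend address and paths here are illustrative assumptions, not values from the thread.]

```nginx
# Sketch only: zone name, backend address and paths are assumed.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;        # assumed backend address
        proxy_cache app_cache;
        proxy_cache_key $scheme$host$request_uri;
        # Cache the upstream failure responses briefly:
        proxy_cache_valid 502 503 504 3m;
        # Also intercept error status codes returned by the upstream itself:
        proxy_intercept_errors on;
        error_page 502 503 504 /maintenance.html;
    }

    location = /maintenance.html {
        internal;
        root /etc/nginx/static;
    }
}
```

Because error_page performs an internal redirect, every failing URL ends up at the same static file, independently of the per-URI cache key used for normal responses.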
From teodorescu.serban at gmail.com Thu Sep 8 15:01:43 2022 From: teodorescu.serban at gmail.com (Serban Teodorescu) Date: Thu, 8 Sep 2022 18:01:43 +0300 Subject: Change proxy_cache options depending on status code received In-Reply-To: <9A208E89-24D0-47FE-A830-2D3F5E38DE7F@gmail.com> References: <9A208E89-24D0-47FE-A830-2D3F5E38DE7F@gmail.com> Message-ID: <91144893-189F-4245-B2ED-5ACF324E1276@gmail.com> I should also add that currently this works really well for every distinct $request_uri by: location / { … error_page 500 502 503 504 /maintenance.html; … proxy_cache_valid 500 502 503 504 3m; } location = /maintenance.html { root /etc/nginx/static; } From nginx-forum at forum.nginx.org Thu Sep 8 17:25:20 2022 From: nginx-forum at forum.nginx.org (libresco_27) Date: Thu, 08 Sep 2022 13:25:20 -0400 Subject: negation in the map directive of nginx Message-ID: <093e8f07992a65f5ace5768ce0457324.NginxMailingListEnglish@forum.nginx.org> Hi, I'm working on rate limiting for a specific group of client ids where, if the client id is equal to XYZ, don't map it, thus, the zone doesn't get incremented. For ex - limit_req_zone $default_rate_client_id zone=globalClientRateLimit_zone:50k rate=10r/m sync; map $client_id $default_rate_client_id { "^(?!ZZZZZZ)$" "$1" } But this doesn't seem to work. Is this the correct way to negate a particular string (ZZZZZ in this example)? Please let me know. Thanks!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295157,295157#msg-295157 From francis at daoine.org Thu Sep 8 22:52:13 2022 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Sep 2022 23:52:13 +0100 Subject: negation in the map directive of nginx In-Reply-To: <093e8f07992a65f5ace5768ce0457324.NginxMailingListEnglish@forum.nginx.org> References: <093e8f07992a65f5ace5768ce0457324.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220908225213.GA9502@daoine.org> On Thu, Sep 08, 2022 at 01:25:20PM -0400, libresco_27 wrote: Hi there, > I'm working on rate limiting for specific group of client ids where if the > client id is equal to XYZ don't map it, thus, the zone doesn't get > incremented. http://nginx.org/r/limit_req_zone: Requests with an empty key value are not accounted. It's probably easier to set the value to empty for those ones, and not-empty for the rest. > For ex - > limit_req_zone $default_rate_client_id zone=globalClientRateLimit_zone:50k > rate=10r/m sync; > map $client_id $default_rate_client_id { > "^(?!ZZZZZZ)$" "$1" > } map $client_id $default_rate_client_id { ZZZZZ ""; default $client_id; } (or whatever value is wanted). > But this doesn't seem to work. Is this the correct way to negate a > particular string(ZZZZZ in this example)? Please let me know. Negative regexes can be hard; it's simpler to avoid them entirely. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Sep 9 06:53:56 2022 From: nginx-forum at forum.nginx.org (teodorescu.serban) Date: Fri, 09 Sep 2022 02:53:56 -0400 Subject: negation in the map directive of nginx In-Reply-To: <093e8f07992a65f5ace5768ce0457324.NginxMailingListEnglish@forum.nginx.org> References: <093e8f07992a65f5ace5768ce0457324.NginxMailingListEnglish@forum.nginx.org> Message-ID: If you want to use regexes (to negate) you should use it properly, e.g. start with a "~". 
See more: http://nginx.org/en/docs/http/ngx_http_map_module.html That said, Francis' idea is a good one. Trying to negate things in regex is quite counterproductive. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295157,295161#msg-295161 From michael.glenn.williams at totalvu.tv Sun Sep 11 00:47:29 2022 From: michael.glenn.williams at totalvu.tv (Michael Williams) Date: Sat, 10 Sep 2022 17:47:29 -0700 Subject: help with https to http and WSS to WS reverse proxy conf Message-ID: Hi All, Can someone with fresh eyes please review this config and tell me why requests are infinite redirection to https? I'm trying to forward inbound requests on port 443 either to the localhost port 80 or the localhost port 25565, depending on whether it is a request for a WSS or for HTTP (files). Many thanks! map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream to-websocket { server localhost:25565; } server_tokens off; # SSL requirements. We use Certbot and LetsEncrypt #ssl_certificate /etc/letsencrypt/live/-myFQDN-/fullchain.pem; # managed by Certbot #ssl_certificate_key /etc/letsencrypt/live/-myFQDN-/privkey.pem; # managed by Certbot #include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot #ssl_session_cache shared:SSL:1m; #ssl_session_timeout 5m; #ssl_ciphers HIGH:!aNULL:!MD5; #ssl_prefer_server_ciphers on; server { # first redirect to https if ($scheme = "http") { return 301 https://$host$request_uri; } # Now webserver # Port 80 shouldn't be accessed from outside listen 80 default_server; listen [::]:80 default_server; server_name -myFQDN- www.-myFQDN-; return 404; # managed by Certbot root /var/www/html; } server { root /var/www/html; index index.html index.htm; server_name -myFQDN-; # Proxy our outside https to local http listen [::]:443 ssl ipv6only=on; # managed by Certbot listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/-myFQDN-/fullchain.pem; #
managed by Certbot ssl_certificate_key /etc/letsencrypt/live/-myFQDN-/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot location / { try_files /nonexistent @$http_upgrade; } location @websocket { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host -myFQDN-; proxy_set_header Referer https://-myFQDN-; proxy_set_header Referrer https://-myFQDN-; # proxy_pass http://localhost:25565; proxy_pass http://to-websocket; } location @ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host -myFQDN-; proxy_set_header Referer https://-myFQDN-; proxy_set_header Referrer https://-myFQDN-; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://localhost:80; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Sep 11 08:43:23 2022 From: francis at daoine.org (Francis Daly) Date: Sun, 11 Sep 2022 09:43:23 +0100 Subject: help with https to http and WSS to WS reverse proxy conf In-Reply-To: References: Message-ID: <20220911084323.GB9502@daoine.org> On Sat, Sep 10, 2022 at 05:47:29PM -0700, Michael Williams wrote: Hi there, > Can someone with fresh eyes please review this config and tell me why > requests are infinite redirection to https? I suspect that whatever you are proxy_pass'ing to is seeing that it is getting a http connection, and it has been configured to insist on having a https connection. In this particular case, your "listen 80 default_server" server block presumably includes "localhost"; and so your "proxy_pass http://localhost:80;" directive is talking back to that.
Which is where the loop is. So - proxy_pass to something that will return content. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Sep 11 15:22:39 2022 From: nginx-forum at forum.nginx.org (libresco_27) Date: Sun, 11 Sep 2022 11:22:39 -0400 Subject: negation in the map directive of nginx In-Reply-To: <20220908225213.GA9502@daoine.org> References: <20220908225213.GA9502@daoine.org> Message-ID: Thanks Francis for your response. I tried the approach you suggested and it still doesn't seem to work. This is what I am doing right now :- limit_req_zone $default_client_id zone=sample_zone:50k rate=3r/m sync; map $client_id $default_client_id { ZZZZZ ""; $client_id $client_id; } When I try to hit the gateway with ZZZZ client_id, it still limits the requests according to 3rpm configuration. Am I doing this wrong? (PS - I'm pretty new to nginx, sorry if I'm asking dumb questions:) ) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295157,295170#msg-295170 From michael.glenn.williams at totalvu.tv Sun Sep 11 18:53:43 2022 From: michael.glenn.williams at totalvu.tv (Michael Williams) Date: Sun, 11 Sep 2022 11:53:43 -0700 Subject: help with https to http and WSS to WS reverse proxy conf In-Reply-To: <20220911084323.GB9502@daoine.org> References: <20220911084323.GB9502@daoine.org> Message-ID: Francis thanks very much for taking the time to look at this. Based on your suggestion, I commented out these 3 lines and it got rid of the looping. I thought the same process that wants the WS feed also looked for inbound on port 80, but that is not the case after all. 
location @ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host esports1.totalvu.live; proxy_set_header Referer https://esports1.totalvu.live; proxy_set_header Referrer https://esports1.totalvu.live; # proxy_set_header X-Forwarded-Proto $scheme; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # proxy_pass http://localhost:80; } I thought that localhost was a different route to the Debian kernel, than the network interface... so listening to localhost:80 wouldn't hear traffic on the network interface port 80 and vice versa. Is that wrong? Anyway, many thanks again if you can help with the next part, since that is the real goal: Unfortunately, WSS inbound proxied to WS on localhost isn't working. The process that is listening is running inside a docker. When the webpage tries to connect to NGINX to start a WSS from a testing site like https://websocketking.com/ going to the host without the port, just to test conf.d : wss://myFQDN the access log shows: myIPAddr - - [11/Sep/2022:18:42:41 +0000] "GET / HTTP/1.1" 502 552 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-" Should it say HTTPS here ? When I try with the port: wss://myFQDN:25565 *the request hangs in Pending state forever.* FYI here is some supporting info to help provide the context. The up to date conf.d: map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream to-websocket { server localhost:25565; } server_tokens off; # SSL requirements. 
We use Certbot and LetsEncrypt #ssl_certificate /etc/letsencrypt/live/myFQDN/fullchain.pem; # managed by Certbot #ssl_certificate_key /etc/letsencrypt/live/myFQDN/privkey.pem; # managed by Certbot #include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot #ssl_session_cache shared:SSL:1m; #ssl_session_timeout 5m; #ssl_ciphers HIGH:!aNULL:!MD5; #ssl_prefer_server_ciphers on; server { # first redirect to https if ($scheme = "http") { return 301 https://$host$request_uri; } # Now webserver # Port 80 shouldn't be accesed from outside # listen 80 default_server; # listen [::]:80 default_server; # server_name myFQDN www.myFQDN; # return 404; # managed by Certbot # root /var/www/html; } server { root /var/www/html; index index.html index.htm; server_name myFQDN; # Proxy our outside https to local http listen [::]:443 ssl ipv6only=on; # managed by Certbot listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/myFQDN/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/myFQDN/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot location / { try_files /nonexistent @$http_upgrade; } location @websocket { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host myFQDN; proxy_set_header Referer https://myFQDN; proxy_set_header Referrer https://myFQDN; # proxy_pass http://localhost:25565; proxy_pass http://to-websocket; } location @ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host myFQDN; proxy_set_header Referer https://myFQDN; proxy_set_header Referrer https://myFQDN; # proxy_set_header 
X-Forwarded-Proto $scheme; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # proxy_pass http://localhost:80; } } Here is the listener process on netstat: netstat -a -o | grep 255 tcp 0 0 ip-172-31-24-191.:25565 0.0.0.0:* LISTEN off (0.00/0/0) udp 0 0 ip-172-31-24-191.:25565 0.0.0.0:* off (0.00/0/0) Here is the interface being used: ifconfig pterodactyl0: flags=4163 mtu 1500 inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255 inet6 fdba:17c8:6c94::1011 prefixlen 64 scopeid 0x0 inet6 fe80::42:34ff:fecd:a2ca prefixlen 64 scopeid 0x20 inet6 fe80::1 prefixlen 64 scopeid 0x20 ether 02:42:34:cd:a2:ca txqueuelen 0 (Ethernet) RX packets 531199 bytes 44240022 (42.1 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 599094 bytes 2239954356 (2.0 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Here are the iptables stats: iptables -L -n -v Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 26591 3605K DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0 26591 3605K DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED 0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0 555K 2230M ACCEPT all -- * pterodactyl0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED 754 43364 DOCKER all -- * pterodactyl0 0.0.0.0/0 0.0.0.0/0 487K 43M ACCEPT all -- pterodactyl0 !pterodactyl0 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- pterodactyl0 pterodactyl0 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain DOCKER (2 references) pkts bytes target prot opt in out source destination 285 17856 ACCEPT tcp -- !pterodactyl0 pterodactyl0 0.0.0.0/0 172.18.0.2 tcp dpt:25565 0 0 ACCEPT 
udp -- !pterodactyl0 pterodactyl0 0.0.0.0/0 172.18.0.2 udp dpt:25565 Chain DOCKER-ISOLATION-STAGE-1 (1 references) pkts bytes target prot opt in out source destination 0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0 13358 1529K DOCKER-ISOLATION-STAGE-2 all -- pterodactyl0 !pterodactyl0 0.0.0.0/0 0.0.0.0/0 26591 3605K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 Chain DOCKER-ISOLATION-STAGE-2 (2 references) pkts bytes target prot opt in out source destination 0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0 0 0 DROP all -- * pterodactyl0 0.0.0.0/0 0.0.0.0/0 13358 1529K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 Chain DOCKER-USER (1 references) pkts bytes target prot opt in out source destination 1535K 4381M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 On Sun, Sep 11, 2022 at 1:45 AM Francis Daly wrote: > On Sat, Sep 10, 2022 at 05:47:29PM -0700, Michael Williams wrote: > > Hi there, > > > Can someone with fresh eye please review this config and tell me why > > requests are infinite redirection to https? > > I suspect that whatever you are proxy_pass'ing to is seeing that it > is getting a http connection, and it has been configured to insist on > having a https connection. > > In this particular case, your "listen 80 default_server" server > block presumably includes "localhost"; and so your "proxy_pass > http://localhost:80;" directive is talking back to that. Which is where > the loop is. > > So - proxy_pass to something that will return content. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sun Sep 11 22:56:14 2022 From: francis at daoine.org (Francis Daly) Date: Sun, 11 Sep 2022 23:56:14 +0100 Subject: negation in the map directive of nginx In-Reply-To: References: <20220908225213.GA9502@daoine.org> Message-ID: <20220911225614.GC9502@daoine.org> On Sun, Sep 11, 2022 at 11:22:39AM -0400, libresco_27 wrote: Hi there, > I tried the approach you suggested and it still doesn't seem to work. > This is what I am doing right now :- > > limit_req_zone $default_client_id zone=sample_zone:50k rate=3r/m sync; > map $client_id $default_client_id { > ZZZZZ ""; > $client_id $client_id; Probably you want "default" there as the first word on the last line. > When I try to hit the gateway with ZZZZ client_id, it still limits the > requests according to 3rpm configuration. Am I doing this wrong? The map has 5 Zs. Your example has 4 Zs. But more interestingly: $client_id is not a standard nginx variable. How is it being set; and what test are you running? Presumably somewhere else in your config you have a "limit_req" directive, so that you can see the delay between responses. Cheers, f -- Francis Daly francis at daoine.org From michael.glenn.williams at totalvu.tv Sun Sep 11 23:18:07 2022 From: michael.glenn.williams at totalvu.tv (Michael Williams) Date: Sun, 11 Sep 2022 16:18:07 -0700 Subject: is WSS or WS a $scheme? Message-ID: Is it possible to recognize $scheme = WSS within the NGINX conf file? Many thanks -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Mon Sep 12 20:35:07 2022 From: francis at daoine.org (Francis Daly) Date: Mon, 12 Sep 2022 21:35:07 +0100 Subject: help with https to http and WSS to WS reverse proxy conf In-Reply-To: References: <20220911084323.GB9502@daoine.org> Message-ID: <20220912203507.GD9502@daoine.org> On Sun, Sep 11, 2022 at 11:53:43AM -0700, Michael Williams wrote: Hi there, > Francis thanks very much for taking the time to look at this. > Based on your suggestion, I commented out these 3 lines and it got rid of > the looping. I thought the same process that wants the WS feed also looked > for inbound on port 80, but that is not the case after all. > > location @ { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header Host esports1.totalvu.live; > proxy_set_header Referer https://esports1.totalvu.live; > proxy_set_header Referrer https://esports1.totalvu.live; > # proxy_set_header X-Forwarded-Proto $scheme; > # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > # proxy_pass http://localhost:80; > } That location{} now has no "*_pass" directive, so if it is used, it will end up trying to serve a file from the filesystem. If that is what you want it to do, it's fine. If not, you will probably want to decide what you want your nginx to do with a request, and then configure nginx to match. > I thought that localhost was a different route to the Debian kernel, than > the network interface... so listening to localhost:80 wouldn't hear traffic > on the network interface port 80 and vice versa. Is that wrong? "It depends". In this context, where you have nginx listening on port 80 on the "everything" address, localhost counts as part of everything. > Unfortunately, WSS inbound proxied to WS on localhost isn't working. The > process that is listening is running inside a docker. Once you introduce docker, you are introducing the docker networking system. In docker networking, it is simplest to imagine that there is no localhost. 
(More strictly: there is not exactly one localhost; so you are better off keeping a very clear idea of what IP address is being used, from the perspective of what system.) > When the webpage tries to connect to NGINX to start a WSS from a testing > site like https://websocketking.com/ going to the host without the port, > just to test conf.d : > > wss://myFQDN > > the access log shows: > > myIPAddr - - [11/Sep/2022:18:42:41 +0000] "GET / HTTP/1.1" 502 552 "-" > "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, > like Gecko) Chrome/105.0.0.0 Safari/537.36" "-" > > > Should it say HTTPS here ? When I try with the port: > > wss://myFQDN:25565 > > *the request hangs in Pending state forever.* I am confused as to what exactly you are doing. The overview is: somewhere, you are running the eventual "upstream" websocket service. That is listening on one specific IP:port. You will want to configure your nginx to proxy_pass to that IP:port when nginx receives the websocket connection-upgrade thing. If that upstream service is running inside docker, then the IP:port that you will connect to from outside that docker container, is whatever port is exposed by docker -- by the EXPOSE Dockerfile line, or by the -p or -P arguments to "docker run". > map $http_upgrade $connection_upgrade { > default upgrade; > '' close; > } So "$connection_upgrade" is either the word "upgrade" or the word "close". But you don't use "$connection_upgrade" anywhere in the config that you show. > upstream to-websocket { > server localhost:25565; > } That is referring to nginx's idea of localhost, which may or may not correspond to your in-docker service. Can you access that IP:port from the machine that nginx is running on? If not, change it to be whatever IP:port you can use the talk to your upstream websocket service. 
> server { > > # first redirect to https > if ($scheme = "http") { > return 301 https://$host$request_uri; > } This entire server{} block is equivalent to server { return 301 https://$host$request_uri; } because of the directive default values. If you don't want to listen for http, just don't have a server with (effectively) "listen 80;", (which is what this one has). > server { > > root /var/www/html; > index index.html index.htm; > server_name myFQDN; > > # Proxy our outside https to local http > listen [::]:443 ssl ipv6only=on; # managed by Certbot > listen 443 ssl; # managed by Certbot > location / { > try_files /nonexistent @$http_upgrade; > } That will do an internal redirect to a location that can be chosen by the client. You hope the http "Upgrade" header will either be empty, or have the value "websocket". If it is, then one of the following location{}s will be used; otherwise there will probably be an error returned to the client. > location @websocket { > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection $connection_upgrade; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host myFQDN; > proxy_set_header Referer https://myFQDN; > proxy_set_header Referrer https://myFQDN; > # proxy_pass http://localhost:25565; > proxy_pass http://to-websocket; From below, your websocket service appears to be listening on ip-172-31-24-191.:25565. You'll want to invite nginx to talk to that IP:port, not localhost. > location @ { And this is what should be used if the incoming request has no "Upgrade" header.
This entire block is equivalent to "location @ { }" > Here is the listener process on netstat: > > netstat -a -o | grep 255 > > tcp 0 0 ip-172-31-24-191.:25565 0.0.0.0:* LISTEN > off (0.00/0/0) If you can access that IP:port from the nginx server to talk to the websocket service, that's what you should configure nginx to try to talk to. > Here is the interface being used: In this case: nginx is talking to an IP. It does not care what the physical interface is. (iptables and the like do care; but that part all looks good from here.) > Here are the iptables stats: If these rules block nginx from talking to the IP:port and getting the response, that will want fixing. Otherwise, it's good. > iptables -L -n -v These appear to say "accept almost everything; nothing has been dropped", so these rules are presumably not blocking nginx. Good luck with it, f -- Francis Daly francis at daoine.org From michael.glenn.williams at totalvu.tv Tue Sep 13 00:46:21 2022 From: michael.glenn.williams at totalvu.tv (Michael Williams) Date: Mon, 12 Sep 2022 17:46:21 -0700 Subject: help with https to http and WSS to WS reverse proxy conf In-Reply-To: <20220912203507.GD9502@daoine.org> References: <20220911084323.GB9502@daoine.org> <20220912203507.GD9502@daoine.org> Message-ID: Francis! Wow thank you. This really helps all the guidance and instruction. I really appreciate your time. One thing to clarify, is that if I turn off NGINX, the client page works fine and connects to the app server inside the docker OK. I've changed the conf.d to the following, but still fail to get my app's server to work. 
map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream to-websocket { server 172.31.24.191:25565; } server_tokens off; server { # first redirect to https if ($scheme = "http") { return 301 https://$host$request_uri; } } server { server_name esports1.totalvu.live; root /var/www/html; index index.html index.htm; # Proxy our outside https to local http listen [::]:443 ssl ipv6only=on; # managed by Certbot listen 443 ssl; # managed by Certbot listen 25566 ssl; ssl_certificate /etc/letsencrypt/live/esports1.totalvu.live/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/esports1.totalvu.live/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot location / { try_files $uri /static/ @wss; } location @wss { error_log /var/log/nginx/wsserror.log; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Real-IP $remote_addr; # proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host esports1.totalvu.live; proxy_set_header Referer https://esports1.totalvu.live; proxy_set_header Referrer https://esports1.totalvu.live; proxy_pass http://172.31.24.191:25565; # proxy_pass http://to-websocket; } location /static/ { try_files $uri =404; } } My idea was to try changing our client webpage to access a different port # than the one our app server in the docker is listening to. 
With that change I see from Wireshark on my local machine that the WSS connection seems to go through OK with NGINX: [image: Screen Shot 2022-09-12 at 5.29.50 PM.png] Our app server shows that the connection to the server also starts but then disconnects it: (22:36:59) Disconnected (unknown opcode 22) I confirmed that using localhost or 127.0.0.1 was not where our app was listening, as you said, so I changed to the local IP. My question here, does NGINX negotiate the entire handshake for HTTPS to WSS upgrade itself, without forwarding the same pages to our app server ? Is there a way to forward those pages to the app server also ? I think our app server may insist on negotiating a ws:// connection itself, but not a wss:// connection. Again, Francis, many thanks! On Mon, Sep 12, 2022 at 1:37 PM Francis Daly wrote: > On Sun, Sep 11, 2022 at 11:53:43AM -0700, Michael Williams wrote: > > Hi there, > > > Francis thanks very much for taking the time to look at this. > > Based on your suggestion, I commented out these 3 lines and it got rid > of > > the looping. I thought the same process that wants the WS feed also > looked > > for inbound on port 80, but that is not the case after all. > > > > location @ { > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header Host esports1.totalvu.live; > > proxy_set_header Referer https://esports1.totalvu.live; > > proxy_set_header Referrer https://esports1.totalvu.live; > > # proxy_set_header X-Forwarded-Proto $scheme; > > # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > # proxy_pass http://localhost:80; > > } > > That location{} now has no "*_pass" directive, so if it is used, it will > end up trying to serve a file from the filesystem. > > If that is what you want it to do, it's fine. If not, you will probably > want to decide what you want your nginx to do with a request, and then > configure nginx to match.
> > > I thought that localhost was a different route to the Debian kernel, than > > the network interface... so listening to localhost:80 wouldn't hear > traffic > > on the network interface port 80 and vice versa. Is that wrong? > > "It depends". > > In this context, where you have nginx listening on port 80 on the > "everything" address, localhost counts as part of everything. > > > Unfortunately, WSS inbound proxied to WS on localhost isn't working. The > > process that is listening is running inside a docker. > > Once you introduce docker, you are introducing the docker networking > system. > > In docker networking, it is simplest to imagine that there is no > localhost. (More strictly: there is not exactly one localhost; so you are > better off keeping a very clear idea of what IP address is being used, > from the perspective of what system.) > > > When the webpage tries to connect to NGINX to start a WSS from a testing > > site like https://websocketking.com/ going to the host without the port, > > just to test conf.d : > > > > wss://myFQDN > > > > the access log shows: > > > > myIPAddr - - [11/Sep/2022:18:42:41 +0000] "GET / HTTP/1.1" 502 552 "-" > > "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 > (KHTML, > > like Gecko) Chrome/105.0.0.0 Safari/537.36" "-" > > > > > > Should it say HTTPS here ? When I try with the port: > > > > wss://myFQDN:25565 > > > > *the request hangs in Pending state forever.* > > I am confused as to what exactly you are doing. > > The overview is: somewhere, you are running the eventual "upstream" > websocket service. That is listening on one specific IP:port. You will > want to configure your nginx to proxy_pass to that IP:port when nginx > receives the websocket connection-upgrade thing. 
> > If that upstream service is running inside docker, then the IP:port that > you will connect to from outside that docker container, is whatever port > is exposed by docker -- by the EXPOSE Dockerfile line, or by the -p or > -P arguments to "docker run". > > > map $http_upgrade $connection_upgrade { > > default upgrade; > > '' close; > > } > > So "$connection_upgrade" is either the word "upgrade" or the word "close". > > But you don't use "$connection_upgrade" anywhere in the config that you > show. > > > upstream to-websocket { > > server localhost:25565; > > } > > That is referring to nginx's idea of localhost, which may or may not > correspond to your in-docker service. > > Can you access that IP:port from the machine that nginx is running on? If > not, change it to be whatever IP:port you can use to talk to your > upstream websocket service. > > > server { > > > > # first redirect to https > > if ($scheme = "http") { > > return 301 https://$host$request_uri; > > } > > This entire server{} block is equivalent to > > server { return 301 https://$host$request_uri; } > > because of the directive default values. If you don't want to listen > for http, just don't have a server with (effectively) "listen 80;", > (which is what this one has). > > > server { > > > > root /var/www/html; > > index index.html index.htm; > > server_name myFQDN; > > > > # Proxy our outside https to local http > > listen [::]:443 ssl ipv6only=on; # managed by Certbot > > listen 443 ssl; # managed by Certbot > > > > > location / { > > try_files /nonexistent @$http_upgrade; > > } > > That will do an internal redirect to a location that can be chosen by > the client. You hope the http "Upgrade" header will either be empty, > or have the value "websocket". If it is, then one of the following > location{}s will be used; otherwise there will probably be an error > returned to the client.
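For readers following the thread: the standard way to wire that map result into the proxied connection, per the usual nginx WebSocket proxying pattern, is roughly the fragment below. It is illustrative only; the /ws/ path and upstream name are made up for the example:

```nginx
# $connection_upgrade comes from the map block discussed above:
# "upgrade" when the client sent an Upgrade header, "close" otherwise.
location /ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_pass http://to-websocket;
}
```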
> > > location @websocket { > > proxy_http_version 1.1; > > proxy_set_header Upgrade $http_upgrade; > > proxy_set_header Connection $connection_upgrade; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-Proto $scheme; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header Host myFQDN; > > proxy_set_header Referer https://myFQDN; > > proxy_set_header Referrer https://myFQDN; > > # proxy_pass http://localhost:25565; > > proxy_pass http://to-websocket; > > From below, your websocket service appears to be listening on > ip-172-31-24-191.:25565. You'll want to invite nginx to talk to that > IP:port, not localhost. > > > location @ { > > And this is what should be used if the incoming request has no "Upgrade" > header. This entire block is equivalent to "location @ { }" > > > Here is the listener process on netstat: > > > > netstat -a -o | grep 255 > > > > tcp 0 0 ip-172-31-24-191.:25565 0.0.0.0:* > LISTEN > > off (0.00/0/0) > > If you can access that IP:port from the nginx server to talk to the > websocket service, that's what you should configure nginx to try to > talk to. > > > Here is the interface being used: > > In this case: nginx is talking to an IP. It does not care what the > physical interface is. (iptables and the like do care; but that part > all looks good from here.) > > > Here are the iptables stats: > > If these rules block nginx from talking to the IP:port and getting the > response, that will want fixing. Otherwise, it's good. > > > iptables -L -n -v > > These appear to say "accept almost everything; nothing has been dropped", > so these rules are presumably not blocking nginx. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2022-09-12 at 5.29.50 PM.png Type: image/png Size: 199223 bytes Desc: not available URL: From michael.glenn.williams at totalvu.tv Tue Sep 13 18:24:32 2022 From: michael.glenn.williams at totalvu.tv (Michael Williams) Date: Tue, 13 Sep 2022 11:24:32 -0700 Subject: Port numbers in the access or error logs ? Message-ID: Is there a way to include the request port number in each line of the access logs? I'm on Debian 11, using free NGINX downloaded. Many thanks, Michael [image: linkedin] -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Tue Sep 13 18:46:33 2022 From: lucas at lucasrolff.com (Lucas Rolff) Date: Tue, 13 Sep 2022 18:46:33 +0000 Subject: Port numbers in the access or error logs ? In-Reply-To: References: Message-ID: <62E829E7-F546-46AA-906D-E7E2E8CAF79B@lucasrolff.com> Yes, it’s documented in http://nginx.org/en/docs/http/ngx_http_core_module.html#variables $remote_port is probably what you’re after. On 13 Sep 2022, at 20:24, Michael Williams > wrote: Is there a way to include the request port number in each line of the access logs? I'm on Debian 11, using free NGINX downloaded. Many thanks, Michael [linkedin] _______________________________________________ nginx mailing list -- nginx at nginx.org To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.glenn.williams at totalvu.tv Tue Sep 13 23:01:55 2022 From: michael.glenn.williams at totalvu.tv (Michael Williams) Date: Tue, 13 Sep 2022 16:01:55 -0700 Subject: Port numbers in the access or error logs ? In-Reply-To: <62E829E7-F546-46AA-906D-E7E2E8CAF79B@lucasrolff.com> References: <62E829E7-F546-46AA-906D-E7E2E8CAF79B@lucasrolff.com> Message-ID: Thank you this worked perfectly. I used both server and remote ports in the logs. Many thanks! 
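For anyone searching the archive for the same thing later, a log_format along those lines might look like the sketch below. The format name and field layout are illustrative, not the exact ones used; $server_port is the local port the request arrived on, and $remote_port is the client's source port:

```nginx
# Define a format that records both ports, then use it.
log_format with_ports '$remote_addr:$remote_port - $remote_user [$time_local] '
                      '"$request" $status $body_bytes_sent '
                      'server_port=$server_port';

access_log /var/log/nginx/access.log with_ports;
```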
On Tue, Sep 13, 2022 at 11:48 AM Lucas Rolff wrote: > Yes, it’s documented in > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables > > $remote_port is probably what you’re after. > > On 13 Sep 2022, at 20:24, Michael Williams < > michael.glenn.williams at totalvu.tv> wrote: > > Is there a way to include the request port number in each line of the > access logs? > I'm on Debian 11, using free NGINX downloaded. > > Many thanks, > Michael > > [image: linkedin] > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 15 00:15:02 2022 From: nginx-forum at forum.nginx.org (Davis_J) Date: Wed, 14 Sep 2022 20:15:02 -0400 Subject: Nginx KTLS hardware offloading not working In-Reply-To: References: Message-ID: <9d27a731a9aada993dcca2c3ce062bc0.NginxMailingListEnglish@forum.nginx.org> I'm running into the exact same issue, and I've done exactly the same troubleshooting, yet I don't have any more ideas of what to try ....
I'm with Ubuntu 22.04.1 LTS , Linux HOST 5.15.0-47-generic #51-Ubuntu SMP Thu Aug 11 07:51:15 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux Product Name: ConnectX-6 Dx EN adapter card, 100GbE, Dual-port QSFP56, PCIe 4.0 x16, Crypto and Secure Boot [PN] Part number: MCX623106AC-CDAT Running the latest firmware and drivers ethtool -i enp193s0f1np1 driver: mlx5_core version: 5.7-1.0.2 firmware-version: 22.34.4000 (MT_0000000436) expansion-rom-version: bus-info: 0000:c1:00.1 supports-statistics: yes supports-test: yes supports-eeprom-access: no supports-register-dump: no supports-priv-flags: yes ethtool -k enp193s0f1np1 | grep tls tls-hw-tx-offload: on tls-hw-rx-offload: on tls-hw-record: off [fixed] I have almost everything the same as @liwuliu, yet I'm unable to use NIC kTLS. I tried nginx 1.23.1, and 1.22.0 I tried openSSL 3.0.1 and 3.0.5 I tried static content only, I tried reverse proxy, yet unable to make HW kTLS work (based on TLS stats, and based on Ethtool -S stats) SW kTLS works: cat /proc/net/tls_stat TlsCurrTxSw 0 TlsCurrRxSw 0 TlsCurrTxDevice 0 TlsCurrRxDevice 0 TlsTxSw 11 TlsRxSw 0 TlsTxDevice 0 TlsRxDevice 0 TlsDecryptError 0 TlsRxDeviceResync 0 inline/nic kTLS doesn't seem to work tx_tls_encrypted_packets: 0 tx_tls_encrypted_bytes: 0 tx_tls_ooo: 0 tx_tls_dump_packets: 0 tx_tls_dump_bytes: 0 tx_tls_resync_bytes: 0 tx_tls_skip_no_sync_data: 0 tx_tls_drop_no_sync_data: 0 tx_tls_drop_bypass_req: 0 rx_tls_decrypted_packets: 0 rx_tls_decrypted_bytes: 0 rx_tls_resync_req_pkt: 0 rx_tls_resync_req_start: 0 rx_tls_resync_req_end: 0 rx_tls_resync_req_skip: 0 rx_tls_resync_res_ok: 0 rx_tls_resync_res_retry: 0 rx_tls_resync_res_skip: 0 rx_tls_err: 0 tx_tls_ctx: 0 tx_tls_del: 0 rx_tls_ctx: 0 rx_tls_del: 0 rx0_tls_decrypted_packets: 0 rx0_tls_decrypted_bytes: 0 rx0_tls_resync_req_pkt: 0 rx0_tls_resync_req_start: 0 rx0_tls_resync_req_end: 0 rx0_tls_resync_req_skip: 0 rx0_tls_resync_res_ok: 0 rx0_tls_resync_res_retry: 0 rx0_tls_resync_res_skip: 0 rx0_tls_err: 0 All the 
settings @liwuliu wrote about, I have the same. The only thing I'm not sure about: when @liwuliu wrote that he made it work, was the OpenSSL "3.1.0" he mentioned a typo? I can't find that version, so maybe he tried 3.0.1; not quite sure. The latest I was able to find is 3.0.5. I checked his cipher list, and it's 100% exactly the same as mine (/nginx/openssl-3.0.5/.openssl/bin] ./openssl ciphers). My Nginx is built the same as well. nginx version: nginx/1.22.0 built by gcc 11.2.0 (Ubuntu 11.2.0-19ubuntu1) built with OpenSSL 3.0.5 5 Jul 2022 TLS SNI support enabled configure arguments: --with-debug --with-http_slice_module --with-http_ssl_module --with-http_realip_module --with-http_mp4_module --with-http_flv_module --with-threads --with-http_stub_status_module --with-http_secure_link_module --with-http_gzip_static_module --with-http_v2_module --with-http_gunzip_module --with-http_geoip_module --with-pcre-jit --with-compat --with-file-aio --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_random_index_module --with-http_sub_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-openssl=../openssl-3.0.5 --with-openssl-opt=enable-ktls --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' This is the output of strace while downloading a 400MB file via TLS strace -e trace=network -p `pidof nginx | sed -e 's/ /,/g'` 2>&1 : https://dpaste.com/HSU5QY2PY This is "curl -v https://domain.com/data/1" output: https://dpaste.com/29DSYBQU2 My nginx config is the following: pcre_jit on; error_log /home/logs/error.log debug; user www-data; worker_processes auto; worker_rlimit_nofile 50000; worker_cpu_affinity auto; events { worker_connections 50000; multi_accept on; } http { include mime.types; # tcp_nodelay on; # tcp_nopush on; sendfile on; # sendfile_max_chunk 1m; keepalive_timeout 60; server { listen 443 ssl reuseport; server_name *.domain;
ssl_conf_command Options KTLS; ssl_certificate /usr/local/nginx/cert/certificate.cer; ssl_certificate_key /usr/local/nginx/cert/certificate.key; ssl_protocols TLSv1.3; #ssl_session_cache shared:SSL:10m; #ssl_session_timeout 5m; #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; #ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; #ssl_prefer_server_ciphers on; #access_log /home/logs/access.log; #error_log /home/logs/error.log debug; location / { root html; } } } I would appreciate any help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294477,295200#msg-295200 From pluknet at nginx.com Thu Sep 15 12:04:20 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 15 Sep 2022 16:04:20 +0400 Subject: Running nginx-quic in a real environment In-Reply-To: <831BD439-0012-42CF-B360-DB4FB7045D9A@protocol.ai> References: <831BD439-0012-42CF-B360-DB4FB7045D9A@protocol.ai> Message-ID: <761C055B-32A9-46A1-901E-B74784C78E19@nginx.com> > On 8 Sep 2022, at 14:27, João Sousa Andrade via nginx wrote: > > Hello everyone, > > My team is currently considering using nginx-quic in a production-like environment after running it and benchmarking it in our test env. > > Reading "The code is currently at a beta level of quality and should not be used in production." under https://github.com/VKCOM/nginx-quic was a bit discouraging. However, I understand that repo isn't the source of truth and one should also note https://hg.nginx.org/nginx-quic is around 10mo ahead. > Consequently, I decided to double-check here. Hope it's the right place to do so :) > > I'm currently wondering what is currently missing in terms of QUIC implementation in the latest version of nginx-quic. Are there any particular bugs I should be aware of? > I also understand there are performance improvements currently in the works. This part should be mostly alright given we'll benchmark. It's the functional side of things I'm wondering about. 
> The beta status means the code base isn't stabilized yet, which means further updates of this code, including features, potential changes in behaviour, bug fixes, and refactoring. There are still rough edges, but basically it works. > Can anyone help me with that? If this goes forward, we'll be happy to share anything useful we find on our side as well. > We'd appreciate receiving feedback. -- Sergey Kandaurov From francis at daoine.org Thu Sep 15 14:29:15 2022 From: francis at daoine.org (Francis Daly) Date: Thu, 15 Sep 2022 15:29:15 +0100 Subject: help with https to http and WSS to WS reverse proxy conf In-Reply-To: References: <20220911084323.GB9502@daoine.org> <20220912203507.GD9502@daoine.org> Message-ID: <20220915142915.GF9502@daoine.org> On Mon, Sep 12, 2022 at 05:46:21PM -0700, Michael Williams wrote: Hi there, > Wow thank you. This really helps all the guidance and instruction. I really > appreciate your time. No worries. > One thing to clarify, is that if I turn off NGINX, the client page works > fine and connects to the app server inside the docker OK. I confess that I am confused as to what your current architecture is. Can you describe it? Along the lines of: Without nginx involved, we have (http service) running on (docker ip:port) and when we tell the client to access (http:// docker ip:port) everything works, including the websocket thing. With nginx involved to test reverse-proxying http, the docker side is identical, but we tell the client to access (http:// nginx ip:port) and everything works? Or not everything works? With nginx involved to test reverse-proxying https, the docker side is identical, and we tell the client to access (https:// nginx ip:port), and some things work? With that information, it might be clear to someone where the first problem appears.
In this configuration: > server { > index index.html index.htm; > listen [::]:443 ssl ipv6only=on; # managed by Certbot > listen 443 ssl; # managed by Certbot > listen 25566 ssl; nginx is listening for https on two ports. What test are you running? Which port are you using? > location @wss { > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection $connection_upgrade; > proxy_pass http://172.31.24.191:25565; nginx is talking to this port without https. What works here / what fails here? What do the logs say? > My idea was to try changing our client webpage to access a different port # > than the one our app server in the docker is listening to. I'm afraid I am not sure what that means. I thought the client webpage was accessing nginx on port 443 and the backend / upstream http server was listening on the high port? Maybe I am getting confused among multiple tests that you are running. > With that change > I see from WIreshark on my local that the WSS connection seems to go > through OK with NGINX: > > [image: Screen Shot 2022-09-12 at 5.29.50 PM.png] I'm seeing a picture; but I'm not seeing anything that obviously says that a WSS connection is working anywhere. I'm seeing a TLS connection between the client and nginx that is cleanly closed after a fraction of a second. I see nothing that suggests that nginx is doing a proxy_pass to the upstream server. (But maybe that was excluded from the tcpdump?) > Our app server shows that the connection to the server also starts but then > disconnect it: > (22:36:59) Disconnected (unknown opcode 22) With nginx involved, the app server should never see the client IP address directly; it should only see connections from nginx. (It might see the client IP listed in the http headers.) > My question here, does NGINX negotiate the entire handshake for HTTPS to > WSS upgrade itself, without forwarding the same pages to our app server ? 
> Is there a way to forward those pages to the app server also ? I think our > app server may insist on negotiating a ws:// connection itself, but not a > wss:// connection. As I understand it: the client makes a TLS connection to nginx, and sends a http request inside that TLS connection (== a https request). Separately, nginx makes a http connection to the upstream server, and (through config) passes along the Upgrade-and-friends headers that the client sent to nginx, requesting that this connection switch to a websocket connection. And after that works, nginx effectively becomes a "blind tunnel" for the connection contents, passing unencrypted things on the nginx-to-upstream side and encrypted things on the nginx-to-client side, and generally not caring about what is inside. If things are still not working as wanted, I suggest simplifying things as much as possible. Make the nginx config be not much more than what step 6 on https://www.nginx.com/blog/websocket-nginx/ shows, and include enough information in any report of a test, so that someone else will be able to repeat the test on their system to see what happens there. Good luck with it, f -- Francis Daly francis at daoine.org From evertisland at hotmail.com Thu Sep 15 14:44:55 2022 From: evertisland at hotmail.com (Evert Saar) Date: Thu, 15 Sep 2022 14:44:55 +0000 Subject: Njs 0.7.7 Message-ID: Njs 0.7.7 looks great. How to use it? Latest public nginx 1.23.1 supports 0.7.6 and have no luck upgrading njs to latest. From iippolitov at nginx.com Thu Sep 15 16:22:21 2022 From: iippolitov at nginx.com (Igor Ippolitov) Date: Thu, 15 Sep 2022 17:22:21 +0100 Subject: Njs 0.7.7 In-Reply-To: References: Message-ID: <9fdc70ce-3936-30df-4243-7565eeefe012@nginx.com> Evert, Which repository are you using? Try setting up a repo using this doc: http://nginx.org/en/linux_packages.html Let me know if you face issues with it. Regards, Igor On 15/09/2022 15:44, Evert Saar wrote: > Njs 0.7.7 looks great. How to use it? 
Latest public nginx 1.23.1 supports 0.7.6 and have no luck upgrading njs to latest. > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From michael.glenn.williams at totalvu.tv Thu Sep 15 17:00:31 2022 From: michael.glenn.williams at totalvu.tv (Michael Williams) Date: Thu, 15 Sep 2022 10:00:31 -0700 Subject: help with https to http and WSS to WS reverse proxy conf In-Reply-To: <20220915142915.GF9502@daoine.org> References: <20220911084323.GB9502@daoine.org> <20220912203507.GD9502@daoine.org> <20220915142915.GF9502@daoine.org> Message-ID: Francis, With your help and suggestions of starting with the simplest and slowly adding steps, it is working. Here is the working config below. I started by getting just the web server part working well. You suggested not to use scheme=http for redirect, so I'm listening on 80 and redirecting. I put each function into a distinct server which seems to have helped. The HTTPS server is serving the requests, and serving our app as the index page. Once that was working, I tried using port 25565 within NGINX and also the app inside docker. NGINX refused, saying there was already a listener on that port. So I had to go back to putting the client on a different port than the server. I took out the map block and the upstream block, as it seems fine to put the commands inline in the server. The two things that seemed to clinch it were adding ssl to the listener line in the 25566 server, and adding the timeout. It turns out, our app server negotiates everything on 25565, and does not use 80 or 443 to start the WSS negotiation. Once the timeout was added, it worked. There seemed to be a lot of discussion around WSS timeouts on the web, so I tried it. Frankly I'm not sure why it is required. I don't know what I would have done if the app server needed to start the WS negotiation on 443 and then switch to 25565.
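A likely explanation for why the timeout is needed: nginx closes a proxied connection when the upstream sends nothing for proxy_read_timeout, which defaults to 60 seconds, and an idle-but-alive websocket tunnel is exactly that kind of silence. A sketch of the relevant directives, with illustrative values:

```nginx
# Inside the websocket-proxying location{}: an idle websocket
# transfers no bytes, so raise the 60s defaults to keep nginx
# from closing a quiet tunnel.
proxy_read_timeout 1h;
proxy_send_timeout 1h;
```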
I suspect there may be some additional ssl settings I will need. Many thanks again for your time and wisdom and sharing it. server_tokens off; ssl_certificate /etc/letsencrypt/live/esports1.totalvu.live/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/esports1.totalvu.live/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot ssl_protocols TLSv1.1 TLSv1.2; log_format withport '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" ' "srv p " $server_port " rem p " $remote_port; access_log /var/log/nginx/access.log withport; # Port 80 web server server { # first redirect to https listen 80; return 301 https://$host$request_uri; } # Port 443 web and wss server server { root /var/www/html; index cc.html; # Web server for https listen [::]:443 ssl ipv6only=on; # managed by Certbot listen 443 ssl; # managed by Certbot } server { listen 25566 ssl; location / { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_pass http://172.31.24.191:25565; proxy_read_timeout 86400; } } On Thu, Sep 15, 2022 at 7:30 AM Francis Daly wrote: > On Mon, Sep 12, 2022 at 05:46:21PM -0700, Michael Williams wrote: > > Hi there, > > > Wow thank you. This really helps all the guidance and instruction. I > really > > appreciate your time. > > No worries. > > > One thing to clarify, is that if I turn off NGINX, the client page works > > fine and connects to the app server inside the docker OK. > > I confess that I am confused as to what your current architecture is. > > Can you describe it? Along the lines of: > > Without nginx involved, we have (http service) running on (docker > ip:port) and when we tell the client to access (http:// docker ip:port) > everything works, including the websocket thing. 
> > With nginx involved to test reverse-proxying http, the docker side is > identical, but we tell the client to access (http:// nginx ip:port) > and everything works? Or not everything works? > > With nginx involved to test reverse-proxying https, the docker side is > identical, and we tell the client to access (https:// nginx ip:port), > and some things work? > > With that information, it might be clear to someone where the first > problem appears. > > In this configuration: > > > server { > > index index.html index.htm; > > listen [::]:443 ssl ipv6only=on; # managed by Certbot > > listen 443 ssl; # managed by Certbot > > listen 25566 ssl; > > nginx is listening for https on two ports. > > What test are you running? Which port are you using? > > > location @wss { > > > proxy_http_version 1.1; > > proxy_set_header Upgrade $http_upgrade; > > proxy_set_header Connection $connection_upgrade; > > > proxy_pass http://172.31.24.191:25565; > > nginx is talking to this port without https. > > What works here / what fails here? What do the logs say? > > > My idea was to try changing our client webpage to access a different > port # > > than the one our app server in the docker is listening to. > > I'm afraid I am not sure what that means. I thought the client webpage > was accessing nginx on port 443 and the backend / upstream http server > was listening on the high port? > > Maybe I am getting confused among multiple tests that you are running. > > > With that change > > I see from WIreshark on my local that the WSS connection seems to go > > through OK with NGINX: > > > > [image: Screen Shot 2022-09-12 at 5.29.50 PM.png] > > I'm seeing a picture; but I'm not seeing anything that obviously says > that a WSS connection is working anywhere. > > I'm seeing a TLS connection between the client and nginx that is cleanly > closed after a fraction of a second. I see nothing that suggests that > nginx is doing a proxy_pass to the upstream server. 
(But maybe that was > excluded from the tcpdump?) > > > Our app server shows that the connection to the server also starts but > then > > disconnect it: > > (22:36:59) Disconnected (unknown opcode 22) > > With nginx involved, the app server should never see the client IP > address directly; it should only see connections from nginx. (It might > see the client IP listed in the http headers.) > > > My question here, does NGINX negotiate the entire handshake for HTTPS to > > WSS upgrade itself, without forwarding the same pages to our app server ? > > Is there a way to forward those pages to the app server also ? I think > our > > app server may insist on negotiating a ws:// connection itself, but not a > > wss:// connection. > > As I understand it: the client makes a TLS connection to nginx, > and sends a http request inside that TLS connection (== a https > request). Separately, nginx makes a http connection to the upstream > server, and (through config) passes along the Upgrade-and-friends headers > that the client sent to nginx, requesting that this connection switch to > a websocket connection. And after that works, nginx effectively becomes a > "blind tunnel" for the connection contents, passing unencrypted things > on the nginx-to-upstream side and encrypted things on the nginx-to-client > side, and generally not caring about what is inside. > > > If things are still not working as wanted, I suggest simplifying things > as much as possible. > > Make the nginx config be not much more than what step 6 on > https://www.nginx.com/blog/websocket-nginx/ shows, and include enough > information in any report of a test, so that someone else will be able > to repeat the test on their system to see what happens there. 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 15 17:30:24 2022 From: nginx-forum at forum.nginx.org (libresco_27) Date: Thu, 15 Sep 2022 13:30:24 -0400 Subject: help with regex in nginx map Message-ID: <7dcd58d1769ffe00d82dcf9f5c408286.NginxMailingListEnglish@forum.nginx.org> Hello there, I'm trying to write a simple regex for a map where only the first part of a string should match. I went through the documentation, which unfortunately didn't have many examples. Here's what I'm trying to achieve - map $string $redirct_string{ "~^abc*$" 1; } I also tried to change the regex to a simple "abc*", but it didn't work. Please let me know where I'm going wrong. Thank you for your answer. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295213,295213#msg-295213 From francis at daoine.org Thu Sep 15 22:46:37 2022 From: francis at daoine.org (Francis Daly) Date: Thu, 15 Sep 2022 23:46:37 +0100 Subject: help with regex in nginx map In-Reply-To: <7dcd58d1769ffe00d82dcf9f5c408286.NginxMailingListEnglish@forum.nginx.org> References: <7dcd58d1769ffe00d82dcf9f5c408286.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220915224637.GG9502@daoine.org> On Thu, Sep 15, 2022 at 01:30:24PM -0400, libresco_27 wrote: Hi there, > I'm trying to write a simple regex for a map where only the first part of a > string should match. I went through the documentation, which unfortunately > didn't have many examples. I'm not sure if you are asking "how to use map", "how to set a variable", "how to write a regex in nginx", or something else. Does the following config fragment and example requests help at all?
Within a http{} block: === map $arg_input $my_output_variable { "" "it was empty or not set"; default "did not match anything else"; ~^abc*$ "matches start abc star end"; ~^abc "starts with abc"; abc "is abc"; ~abc "contains abc"; } server { listen 127.0.0.3:80; location / { return 200 "input is :$arg_input:, output is :$my_output_variable:\n"; } } === $ curl http://127.0.0.3/ input is ::, output is :it was empty or not set: $ curl http://127.0.0.3/?input=abc input is :abc:, output is :is abc: $ curl http://127.0.0.3/?input=abcc input is :abcc:, output is :matches start abc star end: $ curl http://127.0.0.3/?input=abcd input is :abcd:, output is :starts with abc: $ curl http://127.0.0.3/?input=dabcd input is :dabcd:, output is :contains abc: $ curl http://127.0.0.3/?input=d input is :d:, output is :did not match anything else: > map $string $redirct_string{ > "~^abc*$" 1; > } That regex will only match the strings "ab", "abc", "abcc", "abccc", etc, with any number of c:s. > I also tried to change the regex to a simple "abc*", but it didn't work. That regex will match any string that includes "ab" anywhere in it. Cheers, f -- Francis Daly francis at daoine.org From biscotty666 at gmail.com Fri Sep 16 07:22:46 2022 From: biscotty666 at gmail.com (Brian Carey) Date: Fri, 16 Sep 2022 01:22:46 -0600 Subject: soooo close: nginx gunicorn flask directory stripped from links Message-ID: <0937e9b4-af9e-7972-ecce-f2287c5da27f@gmail.com> Hi, I'm very close to getting my flask app properly reverse-proxied by nginx. What works: I can access and use my app successfully at http://127.0.0.1:8000. I can get to my main page at http://my.domain/app. If I specifically enter the url of a sub-directory/page I can get there, for example http://my.domain/app works and http://my.domain/app/home works. What doesn't: But if I click on the "home" or any other link I get a file not found error. Hovering over the link it points to http://my.domain/home instead of http://my.domain/app/home. 
I have tried playing around with the @app.routes but I don't seem to be getting anywhere. What am I missing? (fwiw I'm doing all this locally without a hosting service) Thanks in advance for any ideas From devashi.tandon at appsentinels.ai Fri Sep 16 07:31:08 2022 From: devashi.tandon at appsentinels.ai (Devashi Tandon) Date: Fri, 16 Sep 2022 07:31:08 +0000 Subject: Regarding HTTP chunked Body being stored in temp_file Message-ID: Hi, In our module code, we are processing the HTTP request body when it is not stored in r->request_body->temp_file. When I send a 9381 bytes body, NGINX doesn't store the body in temp_file but in the internal buffers. Hence we are able to process the body. However, when I enable chunked encoding, the same 9381 bytes body, gets stored in the r->request_body->temp_file. To avoid getting stored in temp_file, I have to increase the client_body_buffer_size to a larger value than the default. In that case, chunked encoded http body is NOT stored in temp_file and we are able to process it. Is there any reason why the behaviour of client_body_buffer_size is different in case of regular HTTP traffic v/s chunked encoded HTTP traffic? Why do we need a larger buffer size to ensure chunked encoded traffic doesn't get stored in temp_file? Thanks, Devashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Sep 16 07:48:28 2022 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Sep 2022 08:48:28 +0100 Subject: soooo close: nginx gunicorn flask directory stripped from links In-Reply-To: <0937e9b4-af9e-7972-ecce-f2287c5da27f@gmail.com> References: <0937e9b4-af9e-7972-ecce-f2287c5da27f@gmail.com> Message-ID: <20220916074828.GH9502@daoine.org> On Fri, Sep 16, 2022 at 01:22:46AM -0600, Brian Carey wrote: Hi there, > I'm very close to getting my flask app properly reverse-proxied by nginx. 
If your nginx config is correct, then it might be that the upstream / backend service (the flask app, in this case) does not like being reverse-proxied (to a different "sub-directory"), > I can access and use my app successfully at http://127.0.0.1:8000. > > I can get to my main page at http://my.domain/app. If I specifically enter > the url of a sub-directory/page I can get there, for example > http://my.domain/app works and http://my.domain/app/home works. > Hovering over the link it points to http://my.domain/home instead of > http://my.domain/app/home. Does https://pypi.org/project/flask-reverse-proxy-fix/ apply in your case? That page links to a 404 page, where the original content appears to be at https://web.archive.org/web/20131129080707/http://flask.pocoo.org/snippets/35/ You can possibly / potentially avoid all of that, if you are happy to deploy your app at http://127.0.0.1:8000/app/ instead of at http://127.0.0.1:8000/ -- in that case, all of the "local" links will be the same in both the direct and reverse-proxied cases, so only the hostname/port would need adjusting. (Which is usually more straightforward.) (I'm presuming that it is possible to deploy a flask app somewhere other than the root of the web service.) Good luck with it, f -- Francis Daly francis at daoine.org From biscotty666 at gmail.com Fri Sep 16 08:03:21 2022 From: biscotty666 at gmail.com (Brian Scott) Date: Fri, 16 Sep 2022 02:03:21 -0600 Subject: soooo close: nginx gunicorn flask directory stripped from links In-Reply-To: <20220916074828.GH9502@daoine.org> References: <0937e9b4-af9e-7972-ecce-f2287c5da27f@gmail.com> <20220916074828.GH9502@daoine.org> Message-ID: <0a2a2e12-2642-42f9-a5b2-f724b49b4856@gmail.com> Wow that looks promising. Can't try until tomorrow because it's 2am but I'll try first thing tomorrow. From a best practices point of view would one solution be better than the other assuming both work? 
The second suggestion seems more straight-forward and avoids patches/fixes which is a good thing in general. ⁣Get BlueMail for Android ​ On Sep 16, 2022, 1:49 AM, at 1:49 AM, Francis Daly wrote: >On Fri, Sep 16, 2022 at 01:22:46AM -0600, Brian Carey wrote: > >Hi there, > >> I'm very close to getting my flask app properly reverse-proxied by >nginx. > >If your nginx config is correct, then it might be that the upstream >/ backend service (the flask app, in this case) does not like being >reverse-proxied (to a different "sub-directory"), > >> I can access and use my app successfully at http://127.0.0.1:8000. >> >> I can get to my main page at http://my.domain/app. If I specifically >enter >> the url of a sub-directory/page I can get there, for example >> http://my.domain/app works and http://my.domain/app/home works. > >> Hovering over the link it points to http://my.domain/home instead of >> http://my.domain/app/home. > >Does https://pypi.org/project/flask-reverse-proxy-fix/ apply in your >case? > >That page links to a 404 page, where the original content appears to be >at >https://web.archive.org/web/20131129080707/http://flask.pocoo.org/snippets/35/ > >You can possibly / potentially avoid all of that, if you are happy >to deploy your app at http://127.0.0.1:8000/app/ instead of at >http://127.0.0.1:8000/ -- in that case, all of the "local" links will >be the same in both the direct and reverse-proxied cases, so only the >hostname/port would need adjusting. (Which is usually more >straightforward.) > >(I'm presuming that it is possible to deploy a flask app somewhere >other >than the root of the web service.) > >Good luck with it, > > f >-- >Francis Daly francis at daoine.org >_______________________________________________ >nginx mailing list -- nginx at nginx.org >To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Fri Sep 16 08:21:12 2022 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Sep 2022 09:21:12 +0100 Subject: soooo close: nginx gunicorn flask directory stripped from links In-Reply-To: <0a2a2e12-2642-42f9-a5b2-f724b49b4856@gmail.com> References: <0937e9b4-af9e-7972-ecce-f2287c5da27f@gmail.com> <20220916074828.GH9502@daoine.org> <0a2a2e12-2642-42f9-a5b2-f724b49b4856@gmail.com> Message-ID: <20220916082112.GI9502@daoine.org> On Fri, Sep 16, 2022 at 02:03:21AM -0600, Brian Scott wrote: Hi there, > Wow that looks promising. Can't try until tomorrow because it's 2am but I'll try first thing tomorrow. From a best practices point of view would one solution be better than the other assuming both work? The second suggestion seems more straight-forward and avoids patches/fixes which is a good thing in general. > I'm not aware of official "best practices" in this matter. I like "simple", so I tend to try to set up the internal "thing" so that I can reverse-proxy https://external/thing/ to http://internal/thing/, with the hope that internally I can access both forms (while externally only the external form is accessible). (I also try to make http://internal/ redirect to http://internal/thing/, so that I *can* access it easily internally.) Fundamentally, both options should work, provided that the application does not use any internal links that start with "/". Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Sep 16 14:08:44 2022 From: nginx-forum at forum.nginx.org (petecooper) Date: Fri, 16 Sep 2022 10:08:44 -0400 Subject: QUIC preview `make` fails: `[objs/Makefile:1055: objs/src/event/quic/ngx_event_quic.o] Error 1` Message-ID: <6ba8df5dd3810dbcbb5e22c594b58655.NginxMailingListEnglish@forum.nginx.org> Hello. I am adapting my stable Nginx compile script to road test the QUIC preview. Per the readme, I am using `quictls`, specifically v3.0.3. I have not yet tried BoringSSL. 
My `configure` command completes successfully, but my `make` command fails. I have included output below, and there is a (safe for work) GitHub gist for the `make` output to retain formatting. I would be very grateful for any advice or feedback as to what I am (or might be) doing wrong. Thank you for your consideration. Best wishes. ==8<== Here is my `make` error`: cc1: all warnings being treated as errors make[1]: *** [objs/Makefile:1055: objs/src/event/quic/ngx_event_quic.o] Error 1 make[1]: Leaving directory '/home/pete/nginx-quic' make: *** [Makefile:10: build] Error 2 Here is my `configure` script: cd ~/nginx-quic/ \ && yes Y | ./auto/configure \ --add-dynamic-module=../brotli-source/ \ --add-dynamic-module=../cache-purge-source/ngx_cache_purge-$cache_purge_source_version \ --add-dynamic-module=../devel-kit-source/ngx_devel_kit-$devel_kit_source_version \ --add-dynamic-module=../echo-source/echo-nginx-module-$echo_source_version \ --add-dynamic-module=../headers-more-source/headers-more-nginx-module-$headers_more_source_version \ --add-dynamic-module=../ipscrub-source/ipscrub-$ipscrub_source_version/ipscrub \ --add-dynamic-module=../length-hiding-source/nginx-length-hiding-filter-module-$length_hiding_source_version \ --add-dynamic-module=../memcached-source/memc-nginx-module-$memcached_source_version \ --add-dynamic-module=../nchan-source/nchan-$nchan_source_version/ \ --add-dynamic-module=../redis2-source/redis2-nginx-module-$redis2_source_version \ --add-dynamic-module=../set-misc-source/set-misc-nginx-module-$set_misc_source_version \ --add-dynamic-module=../vts-source/nginx-module-vts-$vts_source_version \ --build=$(date --iso-8601=seconds) \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=$nginx_log_dir_base/log/nginx/live/nginx/nginx.error.log \ --http-client-body-temp-path=/var/cache/nginx/client_temp \ --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \ --http-log-path=$nginx_log_dir_base/log/nginx/live/nginx/nginx.access.log \ 
--http-proxy-temp-path=/var/cache/nginx/proxy_temp \ --lock-path=/var/run/nginx.lock \ --modules-path=/usr/lib/nginx/modules \ --pid-path=/var/run/nginx.pid \ --prefix=/etc/nginx \ --sbin-path=/usr/sbin/nginx \ --with-cc-opt="-m64 -march=native -mtune=native -DTCP_FASTOPEN=23 -g -O3 -fstack-protector-strong -flto -ffat-lto-objects -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wimplicit-fallthrough=0 -Wno-deprecated-declarations -fcode-hoisting -Wp,-D_FORTIFY_SOURCE=2 -I../quictls-source/openssl-openssl-$quictls_source_version/apps/include" \ --with-compat \ --with-debug \ --with-file-aio \ --with-http_addition_module \ --with-http_auth_request_module \ --with-http_dav_module \ --with-http_geoip_module=dynamic \ --with-http_gunzip_module \ --with-http_gzip_static_module \ --with-http_image_filter_module=dynamic \ --with-http_realip_module \ --with-http_secure_link_module \ --with-http_slice_module \ --with-http_ssl_module \ --with-http_stub_status_module \ --with-http_sub_module \ --with-http_v2_module \ --with-http_v3_module \ --with-http_xslt_module=dynamic \ --with-ld-opt='-Wl,-E -lrt -lpcre -Wl,-z,relro -L../quictls-source/openssl-openssl-$quictls_source_version/apps/lib' \ --with-libatomic \ --with-openssl-opt="\ shared \ no-ssl3 \ no-weak-ssl-ciphers \ -fstack-protector-strong \ " \ --with-openssl=../quictls-source/openssl-openssl-$quictls_source_version \ --with-openssl-opt=enable-ktls \ --with-pcre=../pcre2-source/pcre2-$pcre2_source_version \ --with-pcre-jit \ --with-stream \ --with-stream_ssl_module \ --with-stream_ssl_preread_module \ --with-stream=dynamic \ --with-threads \ --with-zlib=../cf-zlib-source \ --without-http_empty_gif_module \ --without-http_scgi_module \ --without-http_ssi_module \ --without-http_uwsgi_module \ --without-mail_imap_module \ --without-mail_pop3_module \ --without-mail_smtp_module Here is the final part of the `make` output (formatted here 
https://gist.github.com/petecooper/26e6a47e44f4ad1e49a031e26dde2de4): cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -m64 -march=native -mtune=native -DTCP_FASTOPEN=23 -g -O3 -fstack-protector-strong -flto -ffat-lto-objects -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wimplicit-fallthrough=0 -Wno-deprecated-declarations -fcode-hoisting -Wp,-D_FORTIFY_SOURCE=2 -I../quictls-source/openssl-openssl-3.0.3/apps/include -Wno-deprecated-declarations -DNDK_SET_VAR -DNDK_UPSTREAM_LIST -I src/core -I src/event -I src/event/modules -I src/event/quic -I src/os/unix -I ../brotli-source//deps/brotli/c/include -I ../devel-kit-source/ngx_devel_kit-0.3.1/objs -I objs/addon/ndk -I ../devel-kit-source/ngx_devel_kit-0.3.1/src -I ../devel-kit-source/ngx_devel_kit-0.3.1/objs -I objs/addon/ndk -I ../nchan-source/nchan-1.3.3//src -I ../pcre2-source/pcre2-10.40/src/ -I ../quictls-source/openssl-openssl-3.0.3/.openssl/include -I ../cf-zlib-source -I /usr/include/libxml2 -I objs \ -o objs/src/event/ngx_event_openssl.o \ src/event/ngx_event_openssl.c cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -m64 -march=native -mtune=native -DTCP_FASTOPEN=23 -g -O3 -fstack-protector-strong -flto -ffat-lto-objects -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wimplicit-fallthrough=0 -Wno-deprecated-declarations -fcode-hoisting -Wp,-D_FORTIFY_SOURCE=2 -I../quictls-source/openssl-openssl-3.0.3/apps/include -Wno-deprecated-declarations -DNDK_SET_VAR -DNDK_UPSTREAM_LIST -I src/core -I src/event -I src/event/modules -I src/event/quic -I src/os/unix -I ../brotli-source//deps/brotli/c/include -I ../devel-kit-source/ngx_devel_kit-0.3.1/objs -I objs/addon/ndk -I ../devel-kit-source/ngx_devel_kit-0.3.1/src -I ../devel-kit-source/ngx_devel_kit-0.3.1/objs -I objs/addon/ndk -I ../nchan-source/nchan-1.3.3//src -I ../pcre2-source/pcre2-10.40/src/ -I ../quictls-source/openssl-openssl-3.0.3/.openssl/include -I 
../cf-zlib-source -I /usr/include/libxml2 -I objs \ -o objs/src/event/ngx_event_openssl_stapling.o \ src/event/ngx_event_openssl_stapling.c cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -m64 -march=native -mtune=native -DTCP_FASTOPEN=23 -g -O3 -fstack-protector-strong -flto -ffat-lto-objects -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wimplicit-fallthrough=0 -Wno-deprecated-declarations -fcode-hoisting -Wp,-D_FORTIFY_SOURCE=2 -I../quictls-source/openssl-openssl-3.0.3/apps/include -Wno-deprecated-declarations -DNDK_SET_VAR -DNDK_UPSTREAM_LIST -I src/core -I src/event -I src/event/modules -I src/event/quic -I src/os/unix -I ../brotli-source//deps/brotli/c/include -I ../devel-kit-source/ngx_devel_kit-0.3.1/objs -I objs/addon/ndk -I ../devel-kit-source/ngx_devel_kit-0.3.1/src -I ../devel-kit-source/ngx_devel_kit-0.3.1/objs -I objs/addon/ndk -I ../nchan-source/nchan-1.3.3//src -I ../pcre2-source/pcre2-10.40/src/ -I ../quictls-source/openssl-openssl-3.0.3/.openssl/include -I ../cf-zlib-source -I /usr/include/libxml2 -I objs \ -o objs/src/event/quic/ngx_event_quic.o \ src/event/quic/ngx_event_quic.c In file included from src/event/quic/ngx_event_quic_connection.h:28, from src/event/quic/ngx_event_quic.c:10: src/event/quic/ngx_event_quic_transport.h:266:49: error: field ‘level’ has incomplete type 266 | enum ssl_encryption_level_t level; | ^~~~~ src/event/quic/ngx_event_quic_transport.h:314:49: error: field ‘level’ has incomplete type 314 | enum ssl_encryption_level_t level; | ^~~~~ In file included from src/event/quic/ngx_event_quic_connection.h:29, from src/event/quic/ngx_event_quic.c:10: src/event/quic/ngx_event_quic_protection.h:17:37: error: ‘ssl_encryption_application’ undeclared here (not in a function) 17 | #define NGX_QUIC_ENCRYPTION_LAST ((ssl_encryption_application) + 1) | ^~~~~~~~~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic_protection.h:53:39: note: in expansion of macro ‘NGX_QUIC_ENCRYPTION_LAST’ 53 
| ngx_quic_secrets_t secrets[NGX_QUIC_ENCRYPTION_LAST]; | ^~~~~~~~~~~~~~~~~~~~~~~~ In file included from src/event/quic/ngx_event_quic.c:10: src/event/quic/ngx_event_quic_connection.h:163:39: error: field ‘level’ has incomplete type 163 | enum ssl_encryption_level_t level; | ^~~~~ src/event/quic/ngx_event_quic_connection.h:245:39: error: field ‘error_level’ has incomplete type 245 | enum ssl_encryption_level_t error_level; | ^~~~~~~~~~~ src/event/quic/ngx_event_quic.c: In function ‘ngx_quic_new_connection’: src/event/quic/ngx_event_quic.c:261:29: error: ‘ssl_encryption_initial’ undeclared (first use in this function); did you mean ‘ssl_encryption_level_t’? 261 | qc->send_ctx[0].level = ssl_encryption_initial; | ^~~~~~~~~~~~~~~~~~~~~~ | ssl_encryption_level_t src/event/quic/ngx_event_quic.c:261:29: note: each undeclared identifier is reported only once for each function it appears in src/event/quic/ngx_event_quic.c:262:29: error: ‘ssl_encryption_handshake’ undeclared (first use in this function) 262 | qc->send_ctx[1].level = ssl_encryption_handshake; | ^~~~~~~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c: In function ‘ngx_quic_close_connection’: src/event/quic/ngx_event_quic.c:506:40: error: implicit declaration of function ‘SSL_quic_read_level’ [-Werror=implicit-function-declaration] 506 | qc->error_level = c->ssl ? SSL_quic_read_level(c->ssl->connection) | ^~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c:507:40: error: ‘ssl_encryption_initial’ undeclared (first use in this function); did you mean ‘ssl_encryption_level_t’? 507 | : ssl_encryption_initial; | ^~~~~~~~~~~~~~~~~~~~~~ | ssl_encryption_level_t In file included from src/event/quic/ngx_event_quic.c:10: src/event/quic/ngx_event_quic_connection.h:57:24: error: ‘ssl_encryption_handshake’ undeclared (first use in this function) 57 | : (((level) == ssl_encryption_handshake) ? 
&((qc)->send_ctx[1]) \ | ^~~~~~~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c:519:23: note: in expansion of macro ‘ngx_quic_get_send_ctx’ 519 | ctx = ngx_quic_get_send_ctx(qc, qc->error_level); | ^~~~~~~~~~~~~~~~~~~~~ In file included from src/core/ngx_core.h:61, from src/event/quic/ngx_event_quic.c:8: src/event/quic/ngx_event_quic.c: In function ‘ngx_quic_handle_datagram’: src/event/quic/ngx_event_quic_transport.h:51:19: error: ‘ssl_encryption_initial’ undeclared (first use in this function); did you mean ‘ssl_encryption_level_t’? 51 | : (lvl == ssl_encryption_initial) ? "init" \ | ^~~~~~~~~~~~~~~~~~~~~~ src/core/ngx_log.h:93:48: note: in definition of macro ‘ngx_log_debug’ 93 | ngx_log_error_core(NGX_LOG_DEBUG, log, __VA_ARGS__) | ^~~~~~~~~~~ src/event/quic/ngx_event_quic.c:686:13: note: in expansion of macro ‘ngx_log_debug5’ 686 | ngx_log_debug5(NGX_LOG_DEBUG_EVENT, c->log, 0, | ^~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c:689:32: note: in expansion of macro ‘ngx_quic_level_name’ 689 | rc, ngx_quic_level_name(pkt.level), | ^~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic_transport.h:52:23: error: ‘ssl_encryption_handshake’ undeclared (first use in this function) 52 | : (lvl == ssl_encryption_handshake) ? "hs" : "early" | ^~~~~~~~~~~~~~~~~~~~~~~~ src/core/ngx_log.h:93:48: note: in definition of macro ‘ngx_log_debug’ 93 | ngx_log_error_core(NGX_LOG_DEBUG, log, __VA_ARGS__) | ^~~~~~~~~~~ src/event/quic/ngx_event_quic.c:686:13: note: in expansion of macro ‘ngx_log_debug5’ 686 | ngx_log_debug5(NGX_LOG_DEBUG_EVENT, c->log, 0, | ^~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c:689:32: note: in expansion of macro ‘ngx_quic_level_name’ 689 | rc, ngx_quic_level_name(pkt.level), | ^~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c: In function ‘ngx_quic_handle_packet’: src/event/quic/ngx_event_quic.c:787:23: error: ‘ssl_encryption_initial’ undeclared (first use in this function); did you mean ‘ssl_encryption_level_t’? 
787 | if (pkt->level == ssl_encryption_initial) { | ^~~~~~~~~~~~~~~~~~~~~~ | ssl_encryption_level_t src/event/quic/ngx_event_quic.c: In function ‘ngx_quic_handle_payload’: src/event/quic/ngx_event_quic.c:947:44: error: type of formal parameter 2 is incomplete 947 | if (!ngx_quic_keys_available(qc->keys, pkt->level)) { | ^~~~~~~~~~ In file included from src/core/ngx_core.h:61, from src/event/quic/ngx_event_quic.c:8: src/event/quic/ngx_event_quic_transport.h:51:19: error: ‘ssl_encryption_initial’ undeclared (first use in this function); did you mean ‘ssl_encryption_level_t’? 51 | : (lvl == ssl_encryption_initial) ? "init" \ | ^~~~~~~~~~~~~~~~~~~~~~ src/core/ngx_log.h:86:67: note: in definition of macro ‘ngx_log_error’ 86 | if ((log)->log_level >= level) ngx_log_error_core(level, log, __VA_ARGS__) | ^~~~~~~~~~~ src/event/quic/ngx_event_quic.c:950:23: note: in expansion of macro ‘ngx_quic_level_name’ 950 | ngx_quic_level_name(pkt->level)); | ^~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic_transport.h:52:23: error: ‘ssl_encryption_handshake’ undeclared (first use in this function) 52 | : (lvl == ssl_encryption_handshake) ? 
"hs" : "early" | ^~~~~~~~~~~~~~~~~~~~~~~~ src/core/ngx_log.h:86:67: note: in definition of macro ‘ngx_log_error’ 86 | if ((log)->log_level >= level) ngx_log_error_core(level, log, __VA_ARGS__) | ^~~~~~~~~~~ src/event/quic/ngx_event_quic.c:950:23: note: in expansion of macro ‘ngx_quic_level_name’ 950 | ngx_quic_level_name(pkt->level)); | ^~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c:1005:33: error: type of formal parameter 2 is incomplete 1005 | ngx_quic_discard_ctx(c, ssl_encryption_initial); | ^~~~~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c: At top level: src/event/quic/ngx_event_quic.c:1062:71: error: parameter 2 (‘level’) has incomplete type 1062 | ngx_quic_discard_ctx(ngx_connection_t *c, enum ssl_encryption_level_t level) | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~ src/event/quic/ngx_event_quic.c: In function ‘ngx_quic_discard_ctx’: src/event/quic/ngx_event_quic.c:1072:44: error: type of formal parameter 2 is incomplete 1072 | if (!ngx_quic_keys_available(qc->keys, level)) { | ^~~~~ src/event/quic/ngx_event_quic.c:1076:37: error: type of formal parameter 2 is incomplete 1076 | ngx_quic_keys_discard(qc->keys, level); | ^~~~~ In file included from src/event/quic/ngx_event_quic.c:10: src/event/quic/ngx_event_quic_connection.h:56:17: error: ‘ssl_encryption_initial’ undeclared (first use in this function); did you mean ‘ssl_encryption_level_t’? 56 | ((level) == ssl_encryption_initial) ? &((qc)->send_ctx[0]) \ | ^~~~~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c:1080:11: note: in expansion of macro ‘ngx_quic_get_send_ctx’ 1080 | ctx = ngx_quic_get_send_ctx(qc, level); | ^~~~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic_connection.h:57:24: error: ‘ssl_encryption_handshake’ undeclared (first use in this function) 57 | : (((level) == ssl_encryption_handshake) ? 
&((qc)->send_ctx[1]) \ | ^~~~~~~~~~~~~~~~~~~~~~~~ src/event/quic/ngx_event_quic.c:1080:11: note: in expansion of macro ‘ngx_quic_get_send_ctx’ 1080 | ctx = ngx_quic_get_send_ctx(qc, level); | ^~~~~~~~~~~~~~~~~~~~~ cc1: all warnings being treated as errors make[1]: *** [objs/Makefile:1055: objs/src/event/quic/ngx_event_quic.o] Error 1 make[1]: Leaving directory '/home/pete/nginx-quic' make: *** [Makefile:10: build] Error 2 Here is the code around line #1055 in `objs/Makefile`: objs/src/event/quic/ngx_event_quic.o: $(CORE_DEPS) \ src/event/quic/ngx_event_quic.c $(CC) -c $(CFLAGS) $(CORE_INCS) \ -o objs/src/event/quic/ngx_event_quic.o \ src/event/quic/ngx_event_quic.c Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295227,295227#msg-295227 From pluknet at nginx.com Fri Sep 16 14:38:40 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 16 Sep 2022 18:38:40 +0400 Subject: QUIC preview `make` fails: `[objs/Makefile:1055: objs/src/event/quic/ngx_event_quic.o] Error 1` In-Reply-To: <6ba8df5dd3810dbcbb5e22c594b58655.NginxMailingListEnglish@forum.nginx.org> References: <6ba8df5dd3810dbcbb5e22c594b58655.NginxMailingListEnglish@forum.nginx.org> Message-ID: > On 16 Sep 2022, at 18:08, petecooper wrote: > > Hello. > I am adapting my stable Nginx compile script to road test the QUIC preview. > Per the readme, I am using `quictls`, specifically v3.0.3. I have not yet > tried BoringSSL. > > My `configure` command completes successfully, but my `make` command fails. > I have included output below, and there is a (safe for work) GitHub gist for > the `make` output to retain formatting. > > I would be very grateful for any advice or feedback as to what I am (or > might be) doing wrong. > > Thank you for your consideration. > > Best wishes. > > ==8<== > > [..] > Here is my `configure` script: > > [..] 
> --with-cc-opt="-m64 -march=native -mtune=native -DTCP_FASTOPEN=23 -g -O3 > -fstack-protector-strong -flto -ffat-lto-objects -fuse-ld=gold > --param=ssp-buffer-size=4 -Wformat -Werror=format-security > -Wimplicit-fallthrough=0 -Wno-deprecated-declarations -fcode-hoisting > -Wp,-D_FORTIFY_SOURCE=2 > -I../quictls-source/openssl-openssl-$quictls_source_version/apps/include" \ > [..] > --with-openssl=../quictls-source/openssl-openssl-$quictls_source_version \ > You're supposed to provide compiler paths with --with-cc-opt/--with-ld-opt or point to OpenSSL source distribution with --with-openssl=, not both. > > In file included from src/event/quic/ngx_event_quic_connection.h:28, > from src/event/quic/ngx_event_quic.c:10: > src/event/quic/ngx_event_quic_transport.h:266:49: error: field ‘level’ has > incomplete type > 266 | enum ssl_encryption_level_t level; > | ^~~~~ > src/event/quic/ngx_event_quic_transport.h:314:49: error: field ‘level’ has > incomplete type > 314 | enum ssl_encryption_level_t level; > | ^~~~~ Make sure to provide correct OpenSSL path(s). -- Sergey Kandaurov From biscotty666 at gmail.com Fri Sep 16 18:11:05 2022 From: biscotty666 at gmail.com (Brian Carey) Date: Fri, 16 Sep 2022 12:11:05 -0600 Subject: Fwd: soooo close: nginx gunicorn flask directory stripped from links In-Reply-To: <1772b222-563e-10e9-b74a-89a0c5eb475d@gmail.com> References: <1772b222-563e-10e9-b74a-89a0c5eb475d@gmail.com> Message-ID: <6c6796f5-a230-3b25-5eaf-4bf78dffebed@gmail.com> OK, sadly I was pre-mature in my explanation and claim of success, although the trailing slash was clearly an issue. Now I can use the application fine in the open browser  which I can see did implement my changes because I can move around in the app normally. But I get too many redirects with other browsers or other instances of the same browser, which suggests to me that something was cached at some point in my testing that is allowing it to work. 
curl returns a 301 and firefox returns a too many redirects. So not solved yet unfortunately but progress has been made. -------- Forwarded Message -------- Subject: Re: soooo close: nginx gunicorn flask directory stripped from links - SOLVED Date: Fri, 16 Sep 2022 11:57:55 -0600 From: Brian Carey To: Francis Daly Hi, I believe that if something isn't working that should be the usual answer is very simple. In my case I forgot the trailing slash at the end of the proxy_pass directive. otoh I did learn a lot partly thanks to my "conversation" with you. To clarify one question below, in my case redirects were working correctly, which are the ones using url_for(). It was the render_template calls which were failing. Anyway I really appreciate the thoroughness of your responses. Cheers, Brian On 9/16/22 10:17, Francis Daly wrote: > On Fri, Sep 16, 2022 at 09:38:54AM -0600, Brian Carey wrote: > > Hi there, > >> I'm trying to implement this suggestion: >> >> You can possibly / potentially avoid all of that, if you are happy >> to deploy your app athttp://127.0.0.1:8000/app/ instead of at >> http://127.0.0.1:8000/ > I think I did follow that with: I'm presuming that it is possible to > deploy a flask app somewhere other than the root of the web service. > > ;-) > > It will be entirely a flask / gunicorn / something-other-than-nginx thing. > > After that is working, your nginx config will be of the form > > location ^~ /this-prefix/ { proxy_pass http://ip:port; } > > with no / or anything else after the port. > >> 1. adding a directory level, ie. moving /var/www/application to >> /var/www/application/application. This seemed to have no effect at all. > If nginx is doing proxy_pass, it does not care about the filesystem. I > don't know what gunicorn does with that. > >> 2. modifying gunicorn unit file to: --bind 0.0.0.0:8000/app. No effect > I think I'd expect that to fail; unless "bind" is clever enough to stop > reading when it knows the IP and port. > >> 3. 
changing nginx conf proxy_pass declaration to localhost:8000/app. This >> broke everything. > That can work, in different circumstances -- it would need the gunicorn > setup to know what to do with requests that start /app. And once you do > the SCRIPT_NAME thing for gunicorn (described below), then it does know > that -- but the suggested nginx config does not duplicate the "location" > in the "proxy_pass". > >> 4. I did try 2 & 3 together but that broke it. > Yes, "3" with the eventual "two-step" (below) will break. > >> 4. In the app I removed initial slash in @app.routes, no joy > That sounds like a flask/gunicorn thing; I'm lost there ;-) > >> Can you tell me where/how I can effect this change? > Some web searching points to > https://github.com/benoitc/gunicorn/issues/1513 and > https://dlukes.github.io/flask-wsgi-url-prefix.html, which seem to > suggest a two-step thing, the first of which you might be doing already: > > * in your code, use url_for() for internal links: > > """ > instead of writing href="/login" in your templates or redirect("/login") > in your view functions, write href="{{ url_for('login_func') }}" > and redirect(url_for("login_func")). This will make sure the URLs are > correctly prefixed with SCRIPT_NAME, if applicable. > """ > > * when you start gunicorn, make the environment variable SCRIPT_NAME > have the value "/this-prefix" > > The second url has a "minimum working example" of an "app.py" shown at > https://dlukes.github.io/flask-wsgi-url-prefix.html#mwe > > Stick that behind an nginx, and you should be able to access > http://nginx/app/ or http://localhost:8000/app/. (But probably not > http://localhost:8000/.) > > And if you want to run a completely separate "app2", you can > have http://nginx/app2/ giving the same response as (for example) > http://localhost:8001/app2/. > > Good luck with it! 
> > If you do find a working answer, one of us should follow-up to the list > with the details, so that the next person with the same problem will > have a better chance of a search engine pointing them to the answer. > > Cheers, > > f -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Sep 17 17:07:32 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Sep 2022 20:07:32 +0300 Subject: Regarding HTTP chunked Body being stored in temp_file In-Reply-To: References: Message-ID: Hello! On Fri, Sep 16, 2022 at 07:31:08AM +0000, Devashi Tandon wrote: > Hi, > > In our module code, we are processing the HTTP request body when > it is not stored in r->request_body->temp_file. > > When I send a 9381 bytes body, NGINX doesn't store the body in > temp_file but in the internal buffers. Hence we are able to > process the body. > > However, when I enable chunked encoding, the same 9381 bytes > body, gets stored in the r->request_body->temp_file. > > To avoid getting stored in temp_file, I have to increase the > client_body_buffer_size to a larger value than the default. In > that case, chunked encoded http body is NOT stored in temp_file > and we are able to process it. > > Is there any reason why the behaviour of client_body_buffer_size > is different in case of regular HTTP traffic v/s chunked encoded > HTTP traffic? Why do we need a larger buffer size to ensure > chunked encoded traffic doesn't get stored in temp_file? The client_body_buffer_size defaults to 8192 bytes, and this means that nginx can store up to 8192 bytes of raw data without using disk buffering. If you are lucky enough, some request body bytes can also happen to be in the last client header buffer (and 9381 you are seeing suggests it's the case). With chunked encoding you'll likely get less space in the header buffer due to "Transfer-Encoding: chunked" additional header. 
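The framing overhead in question is easy to quantify by chunk-encoding a body by hand — a rough illustration of the wire format only, not nginx's buffering code (the 4096-byte chunk size is an arbitrary choice):

```python
def chunked(body: bytes, chunk_size: int = 4096) -> bytes:
    """Apply HTTP/1.1 chunked transfer framing to a body."""
    out = bytearray()
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        # each chunk: hex size line, CRLF, data, CRLF
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    out += b"0\r\n\r\n"  # terminating zero-length chunk
    return bytes(out)

body = b"x" * 9381
overhead = len(chunked(body)) - len(body)
print(overhead)  # 28 bytes of size lines and CRLF delimiters
```

So the same 9381-byte body occupies a few dozen extra bytes once framed, on top of the space already lost in the header buffer.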
Further, some space in the client body buffer will be spent on the chunked encoding framing. Hence when using chunked encoding you'll need a larger buffer to keep the body of the same size in memory. -- Maxim Dounin http://mdounin.ru/ From biscotty666 at gmail.com Mon Sep 19 18:25:04 2022 From: biscotty666 at gmail.com (Brian Carey) Date: Mon, 19 Sep 2022 12:25:04 -0600 Subject: forward 443 requests to correct (?) port Message-ID: Hi, Maybe I'm misunderstanding how this should work. Can I use non-ssl connections for upstream servers when the originating request is https? I'm forwarding nginx requests to an apache server listening on 8080. Everything works fine if I explicitly use http but not https. My nginx site itself has no problem with https and all http traffic is forwarded to https. However when I try to go to wordpress (on apache) I get an error in my browser that I am forwarding plain http to https, and indeed the port I see in the browser is 443 not 8080. Again if I explicitly request http I'm good but it fails with https. Why is nginx forwarding this traffic to 443 instead of 8080? Or probably better how do I change this behavior? So I'm trying to find out how nginx makes that decision. This is the stanza nginx conf file. server {         listen 80 default_server;         listen [::]:80;         server_name biscotty.me;         return 301 https://$hostname$request_uri; } server{  listen 443 ssl http2;         listen [::]:443 ssl;         server_name biscotty.me;         ssl_certificate         /etc/nginx/ssl/certificates.crt;         ssl_certificate_key     /etc/nginx/ssl/private.key;         root /var/www/html;         index index.html index.htm index.nginx-debian.html;         server_name _;         location / {                 # First attempt to serve request as file, then                 # as directory, then fall back to displaying a 404.                 
try_files $uri $uri/ =404;         } location /wordpress {                 proxy_pass http://0.0.0.0:8080;                 proxy_buffering on;                 proxy_buffers 12 12k;                 proxy_redirect off;                 proxy_set_header X-Real-IP $remote_addr;                 proxy_set_header X-Forwarded-For $remote_addr;                 proxy_set_header Host $host:8080;         } } From ekgermann at semperen.com Mon Sep 19 18:46:55 2022 From: ekgermann at semperen.com (Eric Germann) Date: Mon, 19 Sep 2022 14:46:55 -0400 Subject: forward 443 requests to correct (?) port In-Reply-To: References: Message-ID: Do you have https: at the front of the URL instead of http:? --- Eric Germann ekgermann {at} semperen {dot} com || ekgermann {at} gmail {dot} com LinkedIn: https://www.linkedin.com/in/ericgermann Medium: https://ekgermann.medium.com Twitter: @ekgermann Telegram || Signal || Skype || Phone +1 {dash} 419 {dash} 513 {dash} 0712 GPG Fingerprint: 89ED 36B3 515A 211B 6390 60A9 E30D 9B9B 3EBF F1A1 > On Sep 19, 2022, at 14:25, Brian Carey wrote: > > Hi, > > Maybe I'm misunderstanding how this should work. Can I use non-ssl connections for upstream servers when the originating request is https? > > I'm forwarding nginx requests to an apache server listening on 8080. Everything works fine if I explicitly use http but not https. My nginx site itself has no problem with https and all http traffic is forwarded to https. However when I try to go to wordpress (on apache) I get an error in my browser that I am forwarding plain http to https, and indeed the port I see in the browser is 443 not 8080. Again if I explicitly request http I'm good but it fails with https. Why is nginx forwarding this traffic to 443 instead of 8080? Or probably better how do I change this behavior? > > So I'm trying to find out how nginx makes that decision. This is the stanza nginx conf file. 
> > server { > listen 80 default_server; > listen [::]:80; > server_name biscotty.me; > return 301 https://$hostname$request_uri; > } > > server{ > > listen 443 ssl http2; > listen [::]:443 ssl; > server_name biscotty.me; > > ssl_certificate /etc/nginx/ssl/certificates.crt; > ssl_certificate_key /etc/nginx/ssl/private.key; > > root /var/www/html; > > index index.html index.htm index.nginx-debian.html; > > server_name _; > > location / { > # First attempt to serve request as file, then > # as directory, then fall back to displaying a 404. > try_files $uri $uri/ =404; > } > > location /wordpress { > proxy_pass http://0.0.0.0:8080; > proxy_buffering on; > proxy_buffers 12 12k; > proxy_redirect off; > > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host $host:8080; > } > > } > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From biscotty666 at gmail.com Mon Sep 19 20:56:24 2022 From: biscotty666 at gmail.com (Brian Carey) Date: Mon, 19 Sep 2022 14:56:24 -0600 Subject: forward 443 requests to correct (?) port In-Reply-To: References: Message-ID: <4a89908e-4e6c-43a3-51a6-5325a16a02f6@gmail.com> It doesn't work if the originating url is https:// but does if it is http:// On 9/19/22 12:46, Eric Germann via nginx wrote: > Do you have https: at the front of the URL instead of http:? 
> > --- > Eric Germann > ekgermann {at} semperen {dot} com || ekgermann {at} gmail {dot} com > LinkedIn: https://www.linkedin.com/in/ericgermann > Medium:https://ekgermann.medium.com > Twitter: @ekgermann > Telegram || Signal || Skype || Phone +1 {dash} 419 {dash} 513 {dash} 0712 > > GPG Fingerprint:89ED 36B3 515A 211B 6390  60A9 E30D 9B9B 3EBF F1A1 > > > > > > > >> On Sep 19, 2022, at 14:25, Brian Carey wrote: >> >> Hi, >> >> Maybe I'm misunderstanding how this should work. Can I use non-ssl >> connections for upstream servers when the originating request is https? >> >> I'm forwarding nginx requests to an apache server listening on 8080. >> Everything works fine if I explicitly use http but not https. My >> nginx site itself has no problem with https and all http traffic is >> forwarded to https. However when I try to go to wordpress (on apache) >> I get an error in my browser that I am forwarding plain http to >> https, and indeed the port I see in the browser is 443 not 8080. >> Again if I explicitly request http I'm good but it fails with https. >> Why is nginx forwarding this traffic to 443 instead of 8080? Or >> probably better how do I change this behavior? >> >> So I'm trying to find out how nginx makes that decision. This is the >> stanza nginx conf file. >> >> server { >>         listen 80 default_server; >>         listen [::]:80; >>         server_name biscotty.me ; >>         return 301 https://$hostname$request_uri; >> } >> >> server{ >> >>  listen 443 ssl http2; >>         listen [::]:443 ssl; >>         server_name biscotty.me ; >> >>         ssl_certificate /etc/nginx/ssl/certificates.crt; >>         ssl_certificate_key /etc/nginx/ssl/private.key; >> >>         root /var/www/html; >> >>         index index.html index.htm index.nginx-debian.html; >> >>         server_name _; >> >>         location / { >>                 # First attempt to serve request as file, then >>                 # as directory, then fall back to displaying a 404. 
>>                 try_files $uri $uri/ =404; >>         } >> >> location /wordpress { >>                 proxy_pass http://0.0.0.0:8080; >>                 proxy_buffering on; >>                 proxy_buffers 12 12k; >>                 proxy_redirect off; >> >>                 proxy_set_header X-Real-IP $remote_addr; >>                 proxy_set_header X-Forwarded-For $remote_addr; >>                 proxy_set_header Host $host:8080; >>         } >> >> } >> >> _______________________________________________ >> nginx mailing list -- nginx at nginx.org >> To unsubscribe send an email to nginx-leave at nginx.org > > > _______________________________________________ > nginx mailing list --nginx at nginx.org > To unsubscribe send an email tonginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Sep 19 23:29:04 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Sep 2022 00:29:04 +0100 Subject: Fwd: soooo close: nginx gunicorn flask directory stripped from links In-Reply-To: <6c6796f5-a230-3b25-5eaf-4bf78dffebed@gmail.com> References: <1772b222-563e-10e9-b74a-89a0c5eb475d@gmail.com> <6c6796f5-a230-3b25-5eaf-4bf78dffebed@gmail.com> Message-ID: <20220919232904.GK9502@daoine.org> On Fri, Sep 16, 2022 at 12:11:05PM -0600, Brian Carey wrote: Hi there, > OK, sadly I was pre-mature in my explanation and claim of success, although > the trailing slash was clearly an issue. Now I can use the application fine > in the open browser  which I can see did implement my changes because I can > move around in the app normally. > > But I get too many redirects with other browsers or other instances of the > same browser, which suggests to me that something was cached at some point > in my testing that is allowing it to work. curl returns a 301 and firefox > returns a too many redirects. A 301 to the same Location: url will be a redirect loop; a 301 to a different url may not be. 
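That distinction can be checked mechanically by following each Location target until a URL repeats. A small sketch (the URLs and the location_of mapping are invented for illustration; a real check would issue the requests with curl or urllib):

```python
def find_redirect_loop(start, location_of, limit=10):
    """Follow a chain of redirect targets and report a loop.

    location_of maps a URL to the Location header its server returns
    (None for a final 200).  Returns the first repeated URL, or None
    if the chain terminates normally within the limit.
    """
    seen = set()
    url = start
    for _ in range(limit):
        if url in seen:
            return url          # same URL came back: redirect loop
        seen.add(url)
        nxt = location_of.get(url)
        if nxt is None:
            return None         # chain ended in a non-redirect response
        url = nxt
    return url

# The broken case: /app/ keeps redirecting to itself.
chain = {"http://nginx/": "http://nginx/app/",
         "http://nginx/app/": "http://nginx/app/"}
print(find_redirect_loop("http://nginx/", chain))  # http://nginx/app/
```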
So right now, without involving nginx at all, can you fully use your application if you point your browser at http://gunicorn:8080/, or if you point your browser at http://gunicorn:8080/prefix/ ? Once the upstream / backend is in a known state, adding nginx in front should be more straightforward. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Sep 19 23:58:28 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Sep 2022 00:58:28 +0100 Subject: forward 443 requests to correct (?) port In-Reply-To: References: Message-ID: <20220919235828.GL9502@daoine.org> On Mon, Sep 19, 2022 at 12:25:04PM -0600, Brian Carey wrote: Hi there, > Maybe I'm misunderstanding how this should work. Can I use non-ssl > connections for upstream servers when the originating request is https? From nginx's point of view: yes, not a problem. From the upstream application's point of view: it will often want to know what scheme://host:port/prefix/ it should use when creating links in what it produces; and it might be configured to only "work" when the connection from the client is https. So you might need to configure that side of things in some way to believe that "anything from nginx" is trustworthy; or that "anything with specific http headers" is trustworthy, or something else that depends on this particular application. > I'm forwarding nginx requests to an apache server listening on 8080. > Everything works fine if I explicitly use http but not https. My nginx site > itself has no problem with https and all http traffic is forwarded to https. > However when I try to go to wordpress (on apache) I get an error in my > browser that I am forwarding plain http to https, and indeed the port I see > in the browser is 443 not 8080. Again if I explicitly request http I'm good > but it fails with https. Why is nginx forwarding this traffic to 443 instead > of 8080? Or probably better how do I change this behavior?
I'm a bit unclear on what exactly you are reporting, sorry. In general, the browser talks to nginx only; and nginx talks to upstream only; and the browser should not necessarily be aware that it is not talking to upstream. So if you have https://nginx reverse proxying to http://apache:8080, the browser should never know or care about port 8080. It's probably good to be very clear about what should be talking to what; and about what is talking to what. And I suspect that that will need some specific copy-paste details from you, if you are unsure. > So I'm trying to find out how nginx makes that decision. This is the stanza > nginx conf file. > > server { >         listen 80 default_server; >         listen [::]:80; >         server_name biscotty.me; >         return 301 https://$hostname$request_uri; > } Ok. Any http request to nginx (on port 80), gets nginx inviting the browser to make a https request. (You may want $server_name or $host, instead of $hostname; but anything that works is good.) > server{ > >  listen 443 ssl http2; ... > location /wordpress { >                 proxy_pass http://0.0.0.0:8080; >                 proxy_buffering on; >                 proxy_buffers 12 12k; >                 proxy_redirect off; > >                 proxy_set_header X-Real-IP $remote_addr; >                 proxy_set_header X-Forwarded-For $remote_addr; >                 proxy_set_header Host $host:8080; >         } > > } A https request from the browser to /wordpress/x will lead to a http request from nginx to /wordpress/x. I'm not sure that 0.0.0.0:8080 always works as an IP:port to connect to (I'd probably use a specific IP there); but if it works for you, it is good. What happens after that, is entirely up to wordpress on apache. 
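Putting those points together, a cleaned-up version of the proxy block might look like the following. This is a sketch, not a verified fix: the loopback address and the X-Forwarded-Proto header are assumptions, and wordpress itself must still be configured with the public https URL.

```nginx
location /wordpress {
    # plain-http hop to apache on the loopback interface
    proxy_pass http://127.0.0.1:8080;
    proxy_buffering on;
    proxy_buffers 12 12k;

    # pass the original host WITHOUT ":8080", so links generated
    # upstream do not leak the backend port into the browser
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # tell the application the client-facing scheme is https
    proxy_set_header X-Forwarded-Proto $scheme;
}
```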
If you can show the specific request that you make and the response that you get, perhaps using "curl -i" in order to avoid browser caching or "friendly" response interception, then it may become clear what problem exists and what solution to it can be found. I suspect that you will want to omit ":8080" from the proxy_set_header. When you show one request and its response, it may become clear whether your current proxy_redirect setting is appropriate here. (And I do think that, in the past, wordpress was not happy being installed anywhere other than the root of the web service -- it did not work well in a subdirectory. It may well be that that is no longer the case, and things will all Just Work now.) Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 20 11:03:12 2022 From: nginx-forum at forum.nginx.org (s.schumacher) Date: Tue, 20 Sep 2022 07:03:12 -0400 Subject: Random TCP Timeouts Message-ID: <2fe419807ad88e506640ff8bfce95a20.NginxMailingListEnglish@forum.nginx.org> Hello I am running about sixty nginx webservers on a Proxmox Cluster (which uses KVM). The VMs are running the most recent version of Debian 11. They use nginx, different versions of PHP-FPM and MariaDB. I follow an infrastructure-as-code approach: all the servers, with some exceptions in our infrastructure such as Jitsi, are provisioned by Ansible and are therefore nearly identical. I host standard Typo3 installations as well as more complex applications, usually developed in Laravel. Some time ago I started configuring active checks in CheckMK (the checks use a plugin from Nagios called check_http) for our infrastructure and for projects of our customers which had gone live, meaning accessible from the outside. After this I started getting timeout errors at a frequency of about one or two a day, spread seemingly at random across the twenty servers which I monitor with active checks.
At first I considered this to be simply false positives, but last Friday it happened during a Jitsi Conference and was reported to me by a colleague. I checked the logs of nginx and found the following entries for the exact time period in which Checkmk couldn't reach the server, which is less than one minute (time between checks, the next check is always negative, meaning no errors) and probably only a few seconds. What is the cause of this problem and how can I fix it? Do you have a suggestion how I could reproduce and then further analyze the problem? Checkmk-Error-Message: Summary connect to address 195.34.XXX.XXX and port 443: Connection refused Details HTTP CRITICAL - Unable to open TCP socket Checkmk-Recovery-Message: Summary HTTP OK: HTTP/1.1 200 OK - 59404 bytes in 0.008 second response time Nginx error log: 2022/09/16 11:18:42 [alert] 3212994#3212994: *2590 open socket #18 left in connection 5 2022/09/16 11:18:42 [alert] 3212994#3212994: *2494 open socket #15 left in connection 8 2022/09/16 11:18:42 [alert] 3212994#3212994: *2533 open socket #16 left in connection 9 2022/09/16 11:18:42 [alert] 3212994#3212994: *2534 open socket #17 left in connection 10 2022/09/16 11:18:42 [alert] 3212994#3212994: *2591 open socket #20 left in connection 11 2022/09/16 11:18:42 [alert] 3212994#3212994: *2573 open socket #24 left in connection 12 2022/09/16 11:18:42 [alert] 3212994#3212994: *2532 open socket #10 left in connection 13 2022/09/16 11:18:42 [alert] 3212994#3212994: *3230 open socket #28 left in connection 14 2022/09/16 11:18:42 [alert] 3212994#3212994: *2467 open socket #19 left in connection 15 2022/09/16 11:18:42 [alert] 3212994#3212994: *2535 open socket #21 left in connection 16 2022/09/16 11:18:42 [alert] 3212994#3212994: *3233 open socket #27 left in connection 17 2022/09/16 11:18:42 [alert] 3212994#3212994: *2771 open socket #30 left in connection 22 2022/09/16 11:18:42 [alert] 3212994#3212994: *2770 open socket #29 left in connection 23 2022/09/16 
11:18:42 [alert] 3212994#3212994: *3234 open socket #22 left in connection 24 2022/09/16 11:18:42 [alert] 3212994#3212994: *3229 open socket #11 left in connection 26 2022/09/16 11:18:42 [alert] 3212994#3212994: *3231 open socket #32 left in connection 28 2022/09/16 11:18:42 [alert] 3212994#3212994: aborting 2022/09/16 11:20:19 [error] 3295994#3295994: *153 upstream timed out (110: Connection timed out) while reading response> Yours sincerely Stefan Malte Schumacher Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295249,295249#msg-295249 From nginx-forum at forum.nginx.org Wed Sep 21 10:15:25 2022 From: nginx-forum at forum.nginx.org (suryamohan05) Date: Wed, 21 Sep 2022 06:15:25 -0400 Subject: Nginx returning 401 In-Reply-To: References: Message-ID: <8f37a1c82f0057d0a4ca1a2f0669f3f7.NginxMailingListEnglish@forum.nginx.org> We are using nginx as reverse proxy in our project. When connecting to cloud server, Nginx returns 401 and then 200. But when connecting to on premise server it is returning 401. In debug logs we are seeing SSL_get_error: 5 , SSL_get_error: 2 SSL_get_error: -1. what does this SSL_get_error return type mean... Is there anyway to find the cause of this issue. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295132,295257#msg-295257 From nginx-forum at forum.nginx.org Wed Sep 21 10:39:25 2022 From: nginx-forum at forum.nginx.org (suryamohan05) Date: Wed, 21 Sep 2022 06:39:25 -0400 Subject: Nginx returning 401 Message-ID: We are using nginx as reverse proxy in our project. When connecting to cloud server, Nginx returns 401 and then 200. But when connecting to on premise server it is returning 401. In debug logs we are seeing SSL_get_error: 5 , SSL_get_error: 2 SSL_get_error: -1. what does this SSL_get_error return type mean... Is there anyway to find the cause of this issue. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295259,295259#msg-295259 From mdounin at mdounin.ru Wed Sep 21 10:51:03 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Sep 2022 13:51:03 +0300 Subject: Nginx returning 401 In-Reply-To: References: Message-ID: Hello! On Wed, Sep 21, 2022 at 06:39:25AM -0400, suryamohan05 wrote: > We are using nginx as reverse proxy in our project. When connecting to cloud > server, Nginx returns 401 and then 200. But when connecting to on premise > server it is returning 401. > > In debug logs we are seeing SSL_get_error: 5 , SSL_get_error: 2 > SSL_get_error: -1. what does this SSL_get_error return type mean... Is there > anyway to find the cause of this issue. SSL_get_error() is an OpenSSL function used to retrieve status of various SSL-related operations. It is completely unrelated to the HTTP return codes you are seeing. When nginx is configured as a reverse proxy, HTTP status codes like 200 and 401 are usually returned by the upstream server, and only proxied by nginx. First of all, you may want to make sure that this is what happens in your case. Relevant information for sure can be found in the debug log, or you can use the $upstream_status variable to get the details without debugging (http://nginx.org/r/$upstream_status). And, as long as responses are indeed returned by the upstream server, you may want to start looking at the upstream server instead. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From osa at freebsd.org.ru Wed Sep 21 13:35:01 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 21 Sep 2022 16:35:01 +0300 Subject: Nginx returning 401 In-Reply-To: <8f37a1c82f0057d0a4ca1a2f0669f3f7.NginxMailingListEnglish@forum.nginx.org> References: <8f37a1c82f0057d0a4ca1a2f0669f3f7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi there, hope you're doing well. On Wed, Sep 21, 2022 at 06:15:25AM -0400, suryamohan05 wrote: > We are using nginx as reverse proxy in our project. 
When connecting to cloud > server, Nginx returns 401 and then 200. > But when connecting to on premise server it is returning 401. > > In debug logs we are seeing SSL_get_error: 5 , SSL_get_error: 2 > SSL_get_error: -1. what does this SSL_get_error return type mean... > Is there anyway to find the cause of this issue. Please keep patience and do not send multiple emails or open multiple threads on the forum with the same topic. Thank you. This question has been answered by Maxim Dounin in a separate thread, https://mailman.nginx.org/archives/list/nginx at nginx.org/thread/EEGJGZGQEGB7MQIW6NM37GKR4SYC7LNH/ -- Sergey A. Osokin From nginx-forum at forum.nginx.org Thu Sep 22 00:12:07 2022 From: nginx-forum at forum.nginx.org (unmesh2892) Date: Wed, 21 Sep 2022 20:12:07 -0400 Subject: Find active websocket connection Message-ID: <936c14ca2155f6d82f55ac57aaffe0d7.NginxMailingListEnglish@forum.nginx.org> How can I find the active concurrent websocket connections? I am using nginx in C code to process the requests. Is there any module/library which can provide such details? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295268,295268#msg-295268 From devashi.tandon at appsentinels.ai Thu Sep 22 12:28:04 2022 From: devashi.tandon at appsentinels.ai (Devashi Tandon) Date: Thu, 22 Sep 2022 12:28:04 +0000 Subject: Load balancing with custom configuration fields Message-ID: Hi, We have a custom configuration for a private server defined as: ext_private_server http://private-server:8050; under the server block. This configuration is parsed by our custom nginx module, and then we create a socket to send packets to the server port. We were wondering if we can somehow use the upstream load balancing of NGINX with this custom configuration, or will we have to define our own load balancing logic in our nginx module? 
Could we do something similar to: ext_private_server http://load-balancer; and then: upstream load-balancer {       server http://private-server1:8050;       server http://private-server2:8050;       ....       .... } Is there some API hook that we could call in our module to use NGINX's load balancing and make the above configuration work? Thanks, Devashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 22 15:07:36 2022 From: nginx-forum at forum.nginx.org (suryamohan05) Date: Thu, 22 Sep 2022 11:07:36 -0400 Subject: Impersonation in Nginx Message-ID: <313bc71bb1fd9e09833257de1bf9d9fb.NginxMailingListEnglish@forum.nginx.org> Hi Team, Does Nginx have the ability to configure impersonation in its config file? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295281,295281#msg-295281 From zjd5536 at 163.com Tue Sep 27 07:25:18 2022 From: zjd5536 at 163.com (zjd) Date: Tue, 27 Sep 2022 15:25:18 +0800 (CST) Subject: fix accidental coredump Message-ID: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> # HG changeset patch # User Zhang Jinde # Date 1664261587 -28800 # Tue Sep 27 14:53:07 2022 +0800 # Node ID 226a75a3703db612ed13d4357ac1b71faef6974a # Parent ba5cf8f73a2d0a3615565bf9545f3d65216a0530 Core: fix ngx_reset_pool wild pointer's coredump When ngx_reset_pool is used frequently and memory from the same pool is used again in a worker, an accidental coredump sometimes happens. diff -r ba5cf8f73a2d -r 226a75a3703d src/core/ngx_palloc.c --- a/src/core/ngx_palloc.c Thu Sep 08 13:53:49 2022 +0400 +++ b/src/core/ngx_palloc.c Tue Sep 27 14:53:07 2022 +0800 @@ -105,6 +105,7 @@ for (l = pool->large; l; l = l->next) { if (l->alloc) { ngx_free(l->alloc); + l->alloc = NULL; } } -------------- next part -------------- An HTML attachment was scrubbed...
URL: From joao.andrade at protocol.ai Tue Sep 27 10:11:36 2022 From: joao.andrade at protocol.ai (=?utf-8?Q?Jo=C3=A3o_Sousa_Andrade?=) Date: Tue, 27 Sep 2022 11:11:36 +0100 Subject: Running nginx-quic in a real environment In-Reply-To: <761C055B-32A9-46A1-901E-B74784C78E19@nginx.com> References: <831BD439-0012-42CF-B360-DB4FB7045D9A@protocol.ai> <761C055B-32A9-46A1-901E-B74784C78E19@nginx.com> Message-ID: <76C38646-4187-4AE3-99A0-EDC84BB5268A@protocol.ai> Thank you for the clarification Sergey. We have been running http3 in production for the past couple of weeks. There's something we have noticed which I'm not entirely sure as to what is causing it. We have been getting lots of errors of the form: "[error] 34#34: *338736 quic no available client ids for new path while handling decrypted packet, client: $IP, server: 0.0.0.0:443". I tried looking through the code to little avail. I'm wondering what's causing these errors. Is it something which could be tweaked through configuration? Thank you again, João > On 15 Sep 2022, at 13:04, Sergey Kandaurov wrote: > >> >> On 8 Sep 2022, at 14:27, João Sousa Andrade via nginx wrote: >> >> Hello everyone, >> >> My team is currently considering using nginx-quic in a production-like environment after running it and benchmarking it in our test env. >> >> Reading "The code is currently at a beta level of quality and should not be used in production." under https://github.com/VKCOM/nginx-quic was a bit discouraging. However, I understand that repo isn't the source of truth and one should also note https://hg.nginx.org/nginx-quic is around 10mo ahead. >> Consequently, I decided to double-check here. Hope it's the right place to do so :) >> >> I'm currently wondering what is currently missing in terms of QUIC implementation in the latest version of nginx-quic. Are there any particular bugs I should be aware of? >> I also understand there are performance improvements currently in the works. 
This part should be mostly alright given we'll benchmark. It's the functional side of things I'm wondering about. >> > > The beta status means the code base isn't stabilized yet, > which means further updates of this code, including features, > potential changes in behaviour, bug fixes, and refactoring. > There are still rough edges but basically it works. > >> Can anyone help me with that? If this goes forward, we'll be happy to share anything useful we find on our side as well. >> > > We appreciate receiving feedback. > > -- > Sergey Kandaurov -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Tue Sep 27 12:05:54 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 27 Sep 2022 16:05:54 +0400 Subject: Running nginx-quic in a real environment In-Reply-To: <76C38646-4187-4AE3-99A0-EDC84BB5268A@protocol.ai> References: <831BD439-0012-42CF-B360-DB4FB7045D9A@protocol.ai> <761C055B-32A9-46A1-901E-B74784C78E19@nginx.com> <76C38646-4187-4AE3-99A0-EDC84BB5268A@protocol.ai> Message-ID: <523BA2CA-72BA-4D36-8568-2ABAF2FD20C0@nginx.com> > On 27 Sep 2022, at 14:11, João Sousa Andrade wrote: > > Thank you for the clarification Sergey. > > We have been running http3 in production for the past couple of weeks. There's something we have noticed which I'm not entirely sure as to what is causing it. > > We have been getting lots of errors of the form: "[error] 34#34: *338736 quic no available client ids for new path while handling decrypted packet, client: $IP, server: 0.0.0.0:443". I tried looking through the code to little avail. I'm wondering what's causing these errors. Is it something which could be tweaked through configuration? This happens when a QUIC connection continues over a new network path, known as QUIC connection migration, which should be done by switching to a new connection ID, but the client didn't previously supply enough Connection IDs to choose from.
Normally both peers maintain a set of unused Connection IDs, which may be needed not only for migration, but also due to NAT rebinding. Assuming that peers that initiate connection migration maintain enough connection IDs, a likely reason for the network path change failure seen in the above error is NAT rebinding with a client implementation that doesn't have a notion of connection migration, so it didn't send NEW_CONNECTION_ID frames. In the case of NAT rebinding the server should send probing frames over the new network path using the next available client Connection ID, but there were none, as seen in the above error. Hope that helps. -- Sergey Kandaurov From mdounin at mdounin.ru Tue Sep 27 20:51:55 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Sep 2022 23:51:55 +0300 Subject: fix accidental coredump In-Reply-To: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> Message-ID: Hello! On Tue, Sep 27, 2022 at 03:25:18PM +0800, zjd wrote: > # HG changeset patch > # User Zhang Jinde > # Date 1664261587 -28800 > # Tue Sep 27 14:53:07 2022 +0800 > # Node ID 226a75a3703db612ed13d4357ac1b71faef6974a > # Parent ba5cf8f73a2d0a3615565bf9545f3d65216a0530 > Core: fix ngx_reset_pool wild pointer's coredump > > When ngx_reset_pool is used frequently and memory from the same pool is used again in a worker, an accidental coredump sometimes happens. > > diff -r ba5cf8f73a2d -r 226a75a3703d src/core/ngx_palloc.c > --- a/src/core/ngx_palloc.c Thu Sep 08 13:53:49 2022 +0400 > +++ b/src/core/ngx_palloc.c Tue Sep 27 14:53:07 2022 +0800 > @@ -105,6 +105,7 @@ > for (l = pool->large; l; l = l->next) { > if (l->alloc) { > ngx_free(l->alloc); > + l->alloc = NULL; > } > } Could you please clarify what you are trying to fix here? From the description it looks like your module tries to use memory already freed by ngx_reset_pool().
If that's the case, the coredumps you are observing aren't accidental, but rather an expected result of the use-after-free bug in your module. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Sep 27 21:04:31 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Sep 2022 00:04:31 +0300 Subject: Running nginx-quic in a real environment In-Reply-To: <523BA2CA-72BA-4D36-8568-2ABAF2FD20C0@nginx.com> References: <831BD439-0012-42CF-B360-DB4FB7045D9A@protocol.ai> <761C055B-32A9-46A1-901E-B74784C78E19@nginx.com> <76C38646-4187-4AE3-99A0-EDC84BB5268A@protocol.ai> <523BA2CA-72BA-4D36-8568-2ABAF2FD20C0@nginx.com> Message-ID: Hello! On Tue, Sep 27, 2022 at 04:05:54PM +0400, Sergey Kandaurov wrote: > > On 27 Sep 2022, at 14:11, João Sousa Andrade wrote: > > > > Thank you for the clarification, Sergey. > > > > We have been running HTTP/3 in production for the past couple of weeks. There's something we have noticed, and I'm not entirely sure what is causing it. > > > > We have been getting lots of errors of the form: "[error] 34#34: *338736 quic no available client ids for new path while handling decrypted packet, client: $IP, server: 0.0.0.0:443". I tried looking through the code to little avail. I'm wondering what's causing these errors. Is it something which could be tweaked through configuration? > > This happens when a QUIC connection continues over a new network path, > known as QUIC connection migration, which should be done by switching > to a new connection ID, but the client didn't previously supply enough > connection IDs to choose from. Normally both peers maintain a set of > unused connection IDs, which may be needed not only for migration, > but also due to NAT rebinding.
Assuming that peers that initiate > connection migration maintain enough connection IDs, a likely reason > for the network path change failure seen in the above error is > NAT rebinding with a client implementation that doesn't have a notion > of connection migration and so didn't send NEW_CONNECTION_ID frames. > In the case of NAT rebinding, the server should send probing frames > over the new network path using the next available client connection ID, > but there were none available, as seen in the above error. > Hope that helps. Shouldn't it be at the "info" level, much like other client-related errors? -- Maxim Dounin http://mdounin.ru/ From zjd5536 at 163.com Wed Sep 28 02:56:15 2022 From: zjd5536 at 163.com (zjd) Date: Wed, 28 Sep 2022 10:56:15 +0800 (CST) Subject: fix accidental corrdump In-Reply-To: References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> Message-ID: <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> My module works like this: the pool is created with ngx_create_pool in module_init_process; I allocate memory from the pool at the start of each request, and then call ngx_reset_pool at the end of each request. ngx_reset_pool returns each pool->large alloc pointer to the pool, and each such alloc pointer becomes a wild pointer. And when I allocate memory from the pool again in the next request (or the one after that...), I may get a wild pointer address, access an inaccessible address, and then coredump. Maybe the description in my last mail was too brief. At 2022-09-28 04:51:55, "Maxim Dounin" wrote: >Hello! > >On Tue, Sep 27, 2022 at 03:25:18PM +0800, zjd wrote: > >> # HG changeset patch >> # User Zhang Jinde >> # Date 1664261587 -28800 >> # Tue Sep 27 14:53:07 2022 +0800 >> # Node ID 226a75a3703db612ed13d4357ac1b71faef6974a >> # Parent ba5cf8f73a2d0a3615565bf9545f3d65216a0530 >> Core: fix ngx_reset_pool wild pointer's coredump >> >> When ngx_reset_pool() is used frequently and memory is allocated from the same pool in a worker, an occasional coredump sometimes happens.
>> >> diff -r ba5cf8f73a2d -r 226a75a3703d src/core/ngx_palloc.c >> --- a/src/core/ngx_palloc.c Thu Sep 08 13:53:49 2022 +0400 >> +++ b/src/core/ngx_palloc.c Tue Sep 27 14:53:07 2022 +0800 >> @@ -105,6 +105,7 @@ >> for (l = pool->large; l; l = l->next) { >> if (l->alloc) { >> ngx_free(l->alloc); >> + l->alloc = NULL; >> } >> } > >Could you please clarify what you are trying to fix here? > >From the description it looks like your module tries to use memory >already freed by ngx_reset_pool(). If that's the case, the >coredumps you observing aren't accidental, but rather an expected >result of the use-after-free bug in your module. > >-- >Maxim Dounin >http://mdounin.ru/ >_______________________________________________ >nginx mailing list -- nginx at nginx.org >To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Sep 28 08:19:27 2022 From: francis at daoine.org (Francis Daly) Date: Wed, 28 Sep 2022 09:19:27 +0100 Subject: Impersonation in Nginx In-Reply-To: <313bc71bb1fd9e09833257de1bf9d9fb.NginxMailingListEnglish@forum.nginx.org> References: <313bc71bb1fd9e09833257de1bf9d9fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220928081927.GN9502@daoine.org> On Thu, Sep 22, 2022 at 11:07:36AM -0400, suryamohan05 wrote: Hi there, > Does Nginx has a feasibility to pass impersonation in of the config file. I'm afraid I don't understand what you are asking. I suspect that a translation to english chose the wrong possible meaning of the word you are using? Could you use more, or other, words to describe what you want? If you mean: when a user talks to nginx, can nginx send a different http Host: header to different upstream services, then "yes". If you mean: when a user talks to nginx, can nginx send that user's http Basic Authentication credentials, or "login" Cookie, to the upstream, then "yes". 
If you mean: the same, but nginx sends *another* user's credential, then "maybe" (you can hard-code the http headers to send on every request). Maybe your question is clear to someone else who can answer; but in case not, if you re-ask with an example, you might get a better answer. Thanks, f -- Francis Daly francis at daoine.org From martin at martin-wolfert.de Wed Sep 28 08:49:15 2022 From: martin at martin-wolfert.de (Martin Wolfert) Date: Wed, 28 Sep 2022 10:49:15 +0200 Subject: Nginx does not serve avif Message-ID: <4423e587-fc54-39c4-baaf-b4d5f0c18f21@martin-wolfert.de> Hi, I want to use newer image formats. That means: first serve (if available) avif, then webp, and lastly jpg images. I configured this in ... nginx.conf:          map $http_accept $img_ext {                 "~*avif"   ".avif";                 "~*webp"   ".webp";                 "~*jpg"    ".jpg";                 "default"   "";         } server.conf:         location ~* ^/wp-content/.*/.*/.*\.(png|jpg)$ {                 add_header Vary Accept;                 try_files   $uri$img_ext $uri =404;         } mime.types: ...     image/webp                                    webp;     image/avif                                       avif;     image/avif-sequence                       avifs; .... Unfortunately ... nginx does not serve avif files, if available. Tested it with the newest Chrome versions. Anyone any idea where my error is located? Best, Martin From mdounin at mdounin.ru Wed Sep 28 15:57:23 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Sep 2022 18:57:23 +0300 Subject: fix accidental corrdump In-Reply-To: <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> Message-ID: Hello! On Wed, Sep 28, 2022 at 10:56:15AM +0800, zjd wrote: > My module works like this: the pool is created with > ngx_create_pool in module_init_process; I allocate memory from the pool at the start of > each request, and then call ngx_reset_pool at the end of each > request. > ngx_reset_pool returns each pool->large alloc pointer > to the pool, and each such alloc pointer becomes a wild pointer.
> And when I allocate memory from the pool again in the next request (or > the one after that...), I may get a wild pointer address, > access an inaccessible address, and then coredump. > > Maybe the description in my last mail was too brief. Ok, so from your description you are getting segfaults, and you don't know why. Note that the ngx_reset_pool() function clears pool->large, and also frees all the ngx_pool_large_t structures (since it resets all pool blocks). That is, the l->alloc you are clearing in your patch is not expected to be used anywhere. If clearing it helps, this suggests that there is a bug in your module which results in this freed memory being used somehow. While clearing l->alloc might appear to help, most likely it is just hiding a bug in your module. The correct solution would be to find the bug in your module and fix it. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Wed Sep 28 21:18:22 2022 From: francis at daoine.org (Francis Daly) Date: Wed, 28 Sep 2022 22:18:22 +0100 Subject: Nginx does not serve avif In-Reply-To: <4423e587-fc54-39c4-baaf-b4d5f0c18f21@martin-wolfert.de> References: <4423e587-fc54-39c4-baaf-b4d5f0c18f21@martin-wolfert.de> Message-ID: <20220928211822.GO9502@daoine.org> On Wed, Sep 28, 2022 at 10:49:15AM +0200, Martin Wolfert wrote: Hi there, > I want to use newer image formats. That means: first serve (if available) avif, > then webp, and lastly jpg images. >         location ~* ^/wp-content/.*/.*/.*\.(png|jpg)$ { >                 add_header Vary Accept; >                 try_files   $uri$img_ext $uri =404; >         } > Unfortunately ... nginx does not serve avif files, if available. Tested it > with the newest Chrome versions. > > Anyone any idea where my error is located? When you make the request for /dir/thing.png, do you want to get the file /var/www/dir/thing.avif, or the file /var/www/dir/thing.png.avif? The usual questions are: What request do you make? What response do you get? What response do you want to get instead?
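If the intent is the first interpretation — files named /dir/thing.avif on disk rather than /dir/thing.png.avif — one possible approach (an untested sketch, adapting the configuration quoted above) is to capture the path without its extension in the location regex and try the mapped extension first:

```nginx
map $http_accept $img_ext {
    "~*avif"    ".avif";
    "~*webp"    ".webp";
    default     "";
}

server {
    location ~* ^(?<base>/wp-content/.+)\.(?:png|jpg)$ {
        add_header Vary Accept;

        # /dir/thing.png -> try /dir/thing.avif (or .webp),
        # falling back to the file that was actually requested.
        try_files $base$img_ext $uri =404;
    }
}
```

With the original try_files $uri$img_ext, nginx looks for thing.png.avif instead, which only works under the other on-disk naming scheme.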
Cheers, f -- Francis Daly francis at daoine.org From zjd5536 at 163.com Thu Sep 29 08:30:45 2022 From: zjd5536 at 163.com (zjd) Date: Thu, 29 Sep 2022 16:30:45 +0800 (CST) Subject: fix accidental corrdump In-Reply-To: References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> Message-ID: <315ed368.5e37.183885e9cb0.Coremail.zjd5536@163.com> Actually, I'm not sure where the coredump is; so I think setting l->alloc = NULL after free(l->alloc) is reasonable, because l->alloc's address can be reused in the pool. Of course, a memzero after getting memory from the pool could also solve this problem. But, for example, ngx_array_push may reuse l->alloc's address, and ngx_array_push does not memzero the memory it returns. So I think l->alloc = NULL after free(l->alloc) is necessary. Thanks for discussing this with me. At 2022-09-29 00:03:24, "Maxim Dounin" wrote: >Hello! > >On Wed, Sep 28, 2022 at 10:56:15AM +0800, zjd wrote: > >> My module works like this: the pool is created with >> ngx_create_pool in module_init_process; I allocate memory from the pool at the start of >> each request, and then call ngx_reset_pool at the end of each >> request. >> ngx_reset_pool returns each pool->large alloc pointer >> to the pool, and each such alloc pointer becomes a wild pointer. >> And when I allocate memory from the pool again in the next request (or >> the one after that...), I may get a wild pointer address, >> access an inaccessible address, and then coredump. >> >> Maybe the description in my last mail was too brief. > >Ok, so from your description you are getting segfaults, and you >don't know why. > >Note that the ngx_reset_pool() function clears pool->large, and >also frees all the ngx_pool_large_t structures (since it resets >all pool blocks). That is, the l->alloc you are clearing in your >patch is not expected to be used anywhere. If clearing it helps, >this suggests that there is a bug in your module which results in >this freed memory being used somehow.
> >While clearing l->alloc might appear to help, most likely it is >just hiding a bug in your module. Correct solution would be to >find the bug in your module and fix it. > >-- >Maxim Dounin >http://mdounin.ru/ >_______________________________________________ >nginx mailing list -- nginx at nginx.org >To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Thu Sep 29 13:53:52 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 29 Sep 2022 17:53:52 +0400 Subject: Running nginx-quic in a real environment In-Reply-To: References: <831BD439-0012-42CF-B360-DB4FB7045D9A@protocol.ai> <761C055B-32A9-46A1-901E-B74784C78E19@nginx.com> <76C38646-4187-4AE3-99A0-EDC84BB5268A@protocol.ai> <523BA2CA-72BA-4D36-8568-2ABAF2FD20C0@nginx.com> Message-ID: > On 28 Sep 2022, at 01:04, Maxim Dounin wrote: > > Hello! > > On Tue, Sep 27, 2022 at 04:05:54PM +0400, Sergey Kandaurov wrote: > >>> On 27 Sep 2022, at 14:11, João Sousa Andrade wrote: >>> >>> Thank you for the clarification Sergey. >>> >>> We have been running http3 in production for the past couple of weeks. There's something we have noticed which I'm not entirely sure as to what is causing it. >>> >>> We have been getting lots of errors of the form: "[error] 34#34: *338736 quic no available client ids for new path while handling decrypted packet, client: $IP, server: 0.0.0.0:443". I tried looking through the code to little avail. I'm wondering what's causing these errors. Is it something which could be tweaked through configuration? >> >> This happens when QUIC connection continues over a new network path, >> known as QUIC connection migration, which should by done by switching >> to a new connection ID, but the client didn't previously supply enough >> Connection ID to chose from. 
Normally both peers maintain a set of >> unused connection IDs, which may be needed not only for migration, >> but also due to NAT rebinding. Assuming that peers that initiate >> connection migration maintain enough connection IDs, a likely reason >> for the network path change failure seen in the above error is >> NAT rebinding with a client implementation that doesn't have a notion >> of connection migration and so didn't send NEW_CONNECTION_ID frames. >> In the case of NAT rebinding, the server should send probing frames >> over the new network path using the next available client connection ID, >> but there were none available, as seen in the above error. >> Hope that helps. > At least Firefox doesn't send NEW_CONNECTION_ID, which normally happens after handshake completion, and, according to another report received privately, this leads to such errors behind NAT. > Shouldn't it be at the "info" level, much like other > client-related errors? > So indeed it may make sense to change the logging level. # HG changeset patch # User Sergey Kandaurov # Date 1664459599 -14400 # Thu Sep 29 17:53:19 2022 +0400 # Branch quic # Node ID 7d78208f141b382a55bea3f7c1e66471b0c53937 # Parent a931e690475ee59387af517de60845b4b4307d28 QUIC: "info" logging level on insufficient client connection ids. Apparently, this error is reported on NAT rebinding if the client didn't previously send NEW_CONNECTION_ID frames to supply additional connection ids.
diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c +++ b/src/event/quic/ngx_event_quic_migration.c @@ -309,7 +309,7 @@ ngx_quic_set_path(ngx_connection_t *c, n /* new path requires new client id */ cid = ngx_quic_next_client_id(c); if (cid == NULL) { - ngx_log_error(NGX_LOG_ERR, c->log, 0, + ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic no available client ids for new path"); /* stop processing of this datagram */ return NGX_DONE; -- Sergey Kandaurov From mdounin at mdounin.ru Thu Sep 29 19:33:55 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Sep 2022 22:33:55 +0300 Subject: Running nginx-quic in a real environment In-Reply-To: References: <831BD439-0012-42CF-B360-DB4FB7045D9A@protocol.ai> <761C055B-32A9-46A1-901E-B74784C78E19@nginx.com> <76C38646-4187-4AE3-99A0-EDC84BB5268A@protocol.ai> <523BA2CA-72BA-4D36-8568-2ABAF2FD20C0@nginx.com> Message-ID: Hello! On Thu, Sep 29, 2022 at 05:53:52PM +0400, Sergey Kandaurov wrote: > > > On 28 Sep 2022, at 01:04, Maxim Dounin wrote: > > > > Hello! > > > > On Tue, Sep 27, 2022 at 04:05:54PM +0400, Sergey Kandaurov wrote: > > > >>> On 27 Sep 2022, at 14:11, João Sousa Andrade wrote: > >>> > >>> Thank you for the clarification Sergey. > >>> > >>> We have been running http3 in production for the past couple of weeks. There's something we have noticed which I'm not entirely sure as to what is causing it. > >>> > >>> We have been getting lots of errors of the form: "[error] 34#34: *338736 quic no available client ids for new path while handling decrypted packet, client: $IP, server: 0.0.0.0:443". I tried looking through the code to little avail. I'm wondering what's causing these errors. Is it something which could be tweaked through configuration? 
> >> > >> This happens when QUIC connection continues over a new network path, > >> known as QUIC connection migration, which should by done by switching > >> to a new connection ID, but the client didn't previously supply enough > >> Connection ID to chose from. Normally both peers maintain a set of > >> unused Connection IDs, which may be needed not only for migration, > >> but also due to NAT rebinding. Assuming that peers that initiate > >> connection migration maintain enough connection IDs, a likely reason > >> of the network path change failure as seen in the above error can be > >> due to NAT rebinding with the client implementation that doesn't a notion > >> of connection migration so it didn't sent NEW_CONNECTION_ID frames. > >> In the case of NAT rebinding the server should send probing frames > >> over a new network path using next available client Connection ID, > >> but there were no any, as seen in the above error. > >> Hope that helps. > > > > At least Firefox doesn't send NEW_CONNECTION_ID, which normally > happens after handshake completion, and, as additionally received > in another, private report, this leads to such errors behind NAT. > > > Shouldn't it be at the "info" level, much like other > > client-related errors? > > > > So indeed it may have sense to change logging level. > > # HG changeset patch > # User Sergey Kandaurov > # Date 1664459599 -14400 > # Thu Sep 29 17:53:19 2022 +0400 > # Branch quic > # Node ID 7d78208f141b382a55bea3f7c1e66471b0c53937 > # Parent a931e690475ee59387af517de60845b4b4307d28 > QUIC: "info" logging level on insufficient client connection ids. > > Apparently, this error is reported on NAT rebinding if client didn't > previously send NEW_CONNECTION_ID to supply additional connection ids. 
> > diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c > --- a/src/event/quic/ngx_event_quic_migration.c > +++ b/src/event/quic/ngx_event_quic_migration.c > @@ -309,7 +309,7 @@ ngx_quic_set_path(ngx_connection_t *c, n > /* new path requires new client id */ > cid = ngx_quic_next_client_id(c); > if (cid == NULL) { > - ngx_log_error(NGX_LOG_ERR, c->log, 0, > + ngx_log_error(NGX_LOG_INFO, c->log, 0, > "quic no available client ids for new path"); > /* stop processing of this datagram */ > return NGX_DONE; Looks good. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Sep 29 19:52:19 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Sep 2022 22:52:19 +0300 Subject: fix accidental corrdump In-Reply-To: <315ed368.5e37.183885e9cb0.Coremail.zjd5536@163.com> References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> <315ed368.5e37.183885e9cb0.Coremail.zjd5536@163.com> Message-ID: Hello! On Thu, Sep 29, 2022 at 04:30:45PM +0800, zjd wrote: > Actually, I'm not sure where is coredump; So I think > l->alloc=NULL after free(l-alloc) is reasonable, because > l->alloc's address can be reused in the pool. Of course, > memzero after get mem from the pool can be solved about this > question.But for example, ngx_array_push maybe reuse l->alloc's > address, and ngx_array_push is not memzero from the start of > get memory. > So I think l->alloc=NULL after free(l-alloc) is necessary. > Thanks for discussing this with me As previously explained, l->alloc is not used after free(). Clearing unused memory without reasons is certainly not necessary, much like clearing allocated memory. While it might be helpful to mitigate various bugs, a better approach would be to find and fix bugs. To find and fix bugs a better approach is usually to set the unused memory to a pattern which is more likely to cause segfault if used, such as memset(0x5A). 
In nginx, various mechanisms to facilitate memory debugging are available with NGX_DEBUG_MALLOC and NGX_DEBUG_PALLOC defines, see code for details. Using system allocator options, Address Sanitizer, and tools like Valgrind also might be helpful. -- Maxim Dounin http://mdounin.ru/ From frank.swasey at gmail.com Thu Sep 29 20:37:24 2022 From: frank.swasey at gmail.com (Frank Swasey) Date: Thu, 29 Sep 2022 16:37:24 -0400 Subject: fix accidental corrdump In-Reply-To: References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> <315ed368.5e37.183885e9cb0.Coremail.zjd5536@163.com> Message-ID: This is getting quite tiresome. You are both stuck in your point of view and refusing to hear what the other one is saying. Maxim - you keep repeating " l->alloc is not used after free(). " Clearly, that is not true if setting it to NULL prevents the segfault. What is true is that NGINX core code does not use it. As a defensive coding technique, I agree with zjd that setting the pointer you just freed to NULL to indicate to any other code that is checking it is the proper action. The only other thing that zjd can do is to set the pointer to NULL in their own code after calling the reset function if you are adamant that such defensive measures cannot be put into the NGINX core code. Any future programmers that write modules like zjd has done that test a pointer for being NULL and use it if it has a non-NULL value, will trip over the same problem, and you can have this argument all over again. ~ Frank Swasey, lurker On Thu, Sep 29, 2022 at 3:53 PM Maxim Dounin wrote: > Hello! > > On Thu, Sep 29, 2022 at 04:30:45PM +0800, zjd wrote: > > > Actually, I'm not sure where is coredump; So I think > > l->alloc=NULL after free(l-alloc) is reasonable, because > > l->alloc's address can be reused in the pool. 
Of course, > > memzero after get mem from the pool can be solved about this > > question.But for example, ngx_array_push maybe reuse l->alloc's > > address, and ngx_array_push is not memzero from the start of > > get memory. > > So I think l->alloc=NULL after free(l-alloc) is necessary. > > Thanks for discussing this with me > > As previously explained, l->alloc is not used after free(). > Clearing unused memory without reasons is certainly not necessary, > much like clearing allocated memory. While it might be helpful to > mitigate various bugs, a better approach would be to find and fix > bugs. > > To find and fix bugs a better approach is usually to set the > unused memory to a pattern which is more likely to cause segfault > if used, such as memset(0x5A). In nginx, various mechanisms to > facilitate memory debugging are available with NGX_DEBUG_MALLOC > and NGX_DEBUG_PALLOC defines, see code for details. Using system > allocator options, Address Sanitizer, and tools like Valgrind also > might be helpful. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -- I am not young enough to know everything. - Oscar Wilde (1854-1900) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Sep 29 23:21:29 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 30 Sep 2022 02:21:29 +0300 Subject: fix accidental corrdump In-Reply-To: References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> <315ed368.5e37.183885e9cb0.Coremail.zjd5536@163.com> Message-ID: Hello! On Thu, Sep 29, 2022 at 04:37:24PM -0400, Frank Swasey wrote: > This is getting quite tiresome. You are both stuck in your point of view > and refusing to hear what the other one is saying. > > Maxim - you keep repeating " l->alloc is not used after free(). 
" Clearly, > that is not true if setting it to NULL prevents the segfault. What is true > is that NGINX core code does not use it. As a defensive coding technique, > I agree with zjd that setting the pointer you just freed to NULL to > indicate to any other code that is checking it is the proper action. The > only other thing that zjd can do is to set the pointer to NULL in their own > code after calling the reset function if you are adamant that such > defensive measures cannot be put into the NGINX core code. Any future > programmers that write modules like zjd has done that test a pointer for > being NULL and use it if it has a non-NULL value, will trip over the same > problem, and you can have this argument all over again. The particular code is internal to nginx core, and the l->alloc is never used after free: this is something which can be easily seen within the ngx_reset_pool() function, which is about 20 lines by itself. That is, setting l->alloc to NULL is not a defensive coding technique by any means, and that's what I've tried to explain. If setting l->alloc to NULL prevents the segfault, it is accidental, and likely indicate that zjd's module is using uninitialized and/or freed memory somewhere. Trying to mitigate such bugs by setting arbitrary pointers to NULL is not going to fix these bugs. Rather, this will make them harder to find and fix. Instead, actions should be taken to make segfaults due to these bugs more likely, so it would be easier to find and fix them. I've provided a few pointers on how to get segfaults due to such bugs more likely, hopefully it'll help zjd to find and fix bugs in his code. In this particular case, I suspect that Address Sanitizer combined with NGX_DEBUG_PALLOC would be enough to immediately identify the bug. What can be an improvement here is to introduce junk filling in the pool allocator code, for both new allocations and all blocks freed by ngx_reset_pool(), similarly to ngx_slab_junk() as used in the slab allocator. 
I think that NGX_DEBUG_PALLOC would be more than enough in this particular case though, as this changes all pool allocations to use the system allocator, and therefore makes it possible to configure junk filling and malloc checking on the OS level. Sorry if this discussion bothers you. It would be more appropriate in the nginx-devel@ mailing list, but the patch was posted here and it's probably too late to move the discussion anyway. -- Maxim Dounin http://mdounin.ru/ From zjd5536 at 163.com Fri Sep 30 04:07:47 2022 From: zjd5536 at 163.com (zjd) Date: Fri, 30 Sep 2022 12:07:47 +0800 (CST) Subject: fix accidental corrdump In-Reply-To: References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> <315ed368.5e37.183885e9cb0.Coremail.zjd5536@163.com> Message-ID: <62364e1c.35aa.1838c94366e.Coremail.zjd5536@163.com> If I have disturbed everyone, I'm sorry. It is the address of l->alloc itself (&l->alloc) in the pool that can be reused, rather than the wild address l->alloc points to; &l->alloc is returned to the pool. I tried using only large memory in the way Maxim suggested, but it didn't coredump. Since the coredump is accidental, not getting a coredump may also be expected. If l->alloc is not set to NULL after free, then every place that uses ngx_palloc or ngx_array_push etc. needs a memzero to avoid a wild pointer after ngx_reset_pool is used. At the end, sorry again if I have disturbed everyone. At 2022-09-30 07:21:29, "Maxim Dounin" wrote: >Hello! > >On Thu, Sep 29, 2022 at 04:37:24PM -0400, Frank Swasey wrote: > >> This is getting quite tiresome. You are both stuck in your point of view >> and refusing to hear what the other one is saying. >> >> Maxim - you keep repeating " l->alloc is not used after free().
As a defensive coding technique, >> I agree with zjd that setting the pointer you just freed to NULL to >> indicate to any other code that is checking it is the proper action. The >> only other thing that zjd can do is to set the pointer to NULL in their own >> code after calling the reset function if you are adamant that such >> defensive measures cannot be put into the NGINX core code. Any future >> programmers that write modules like zjd has done that test a pointer for >> being NULL and use it if it has a non-NULL value, will trip over the same >> problem, and you can have this argument all over again. > >The particular code is internal to nginx core, and the l->alloc is >never used after free: this is something which can be easily >seen within the ngx_reset_pool() function, which is about 20 >lines by itself. That is, setting l->alloc to NULL is not a >defensive coding technique by any means, and that's what I've >tried to explain. > >If setting l->alloc to NULL prevents the segfault, it is >accidental, and likely indicate that zjd's module is using >uninitialized and/or freed memory somewhere. Trying to mitigate >such bugs by setting arbitrary pointers to NULL is not going to >fix these bugs. Rather, this will make them harder to find and >fix. Instead, actions should be taken to make segfaults due to >these bugs more likely, so it would be easier to find and fix >them. > >I've provided a few pointers on how to get segfaults due to such >bugs more likely, hopefully it'll help zjd to find and fix bugs in >his code. In this particular case, I suspect that Address >Sanitizer combined with NGX_DEBUG_PALLOC would be enough to >immediately identify the bug. > >What can be an improvement here is to introduce junk filling in >the pool allocator code, for both new allocations and all blocks >freed by ngx_reset_pool(), similarly to ngx_slab_junk() as used in >the slab allocator. 
I think that NGX_DEBUG_PALLOC would be more >than enough in this particular case though, as this changes all >pool allocations to use the system allocator, and therefore makes it >possible to configure junk filling and malloc checking on the OS >level. > >Sorry if this discussion bothers you. It would be more >appropriate in the nginx-devel@ mailing list, but the patch was >posted here and it's probably too late to move the discussion >anyway. > >-- >Maxim Dounin >http://mdounin.ru/ >_______________________________________________ >nginx mailing list -- nginx at nginx.org >To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From avinashghadshi93 at gmail.com Fri Sep 30 10:49:39 2022 From: avinashghadshi93 at gmail.com (Avinash Ghadshi) Date: Fri, 30 Sep 2022 16:19:39 +0530 Subject: need ngx_http_perl_module equivalent GO module Message-ID: Hi Team, I searched throughout the nginx site but did not find a Go module which is equivalent to ngx_http_perl_module. Where can I find this module, e.g. ngx_http_go_module? Please advise. -- *Thanks & Regards,* *Avinash Ghadshi* *Navi Mumbai-410206* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Sep 30 12:06:36 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 30 Sep 2022 15:06:36 +0300 Subject: fix accidental corrdump In-Reply-To: <62364e1c.35aa.1838c94366e.Coremail.zjd5536@163.com> References: <7c1a50ac.4f78.1837dd5f99b.Coremail.zjd5536@163.com> <5836e9bb.28fc.18382060443.Coremail.zjd5536@163.com> <315ed368.5e37.183885e9cb0.Coremail.zjd5536@163.com> <62364e1c.35aa.1838c94366e.Coremail.zjd5536@163.com> Message-ID: Hello! On Fri, Sep 30, 2022 at 12:07:47PM +0800, zjd wrote: > If I have disturbed everyone, I'm sorry. > > It is the address of l->alloc itself (&l->alloc) in the pool that can be > reused, rather than the wild address l->alloc points to; &l->alloc is > returned to the pool.
> And I tried using only large memory with Maxim's approach, but it did
> not coredump. Because the coredump is accidental, not getting a
> coredump may still be expected. If l->alloc is not set to NULL after
> free, the places that use ngx_palloc or ngx_array_push etc. need to
> memzero the memory to avoid wild pointers after ngx_reset_pool is
> used.

ngx_palloc() and ngx_array_push() are expected to return
allocated, but uninitialized memory, much like normal malloc().
The returned memory needs to be initialized before use.

If you need zeroed memory, you can either use ngx_calloc(), which
explicitly initializes all allocated bytes to zero, much like
calloc(), or clear the memory yourself with ngx_memzero().

Compiling nginx with NGX_DEBUG_PALLOC and using your OS malloc
options to debug memory should help to catch memory access bugs,
and the use of uninitialized memory in particular. When using
Linux, see [1], notably the MALLOC_CHECK_ and MALLOC_PERTURB_
environment variables (note that you may need to use env [2] to
pass these to worker processes).

Alternatively, you may consider using various tools, such as
Address Sanitizer, Memory Sanitizer, and Valgrind. These may need
some effort to make them work correctly, though they should catch
most of the possible bugs, including out-of-bounds accesses and
uninitialized memory accesses (see, for example, [3]).

Hope this helps.

[1] https://man7.org/linux/man-pages/man3/mallopt.3.html
[2] http://nginx.org/r/env
[3] https://developers.redhat.com/blog/2021/05/05/memory-error-checking-in-c-and-c-comparing-sanitizers-and-valgrind

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Fri Sep 30 12:10:19 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 30 Sep 2022 15:10:19 +0300
Subject: need ngx_http_perl_module equivalent GO module
In-Reply-To:
References:
Message-ID:

Hello!
On Fri, Sep 30, 2022 at 04:19:39PM +0530, Avinash Ghadshi wrote:

> Hi Team,
>
> I searched throughout the nginx site but did not find a Go module
> equivalent to ngx_http_perl_module.
>
> Where can I find this module, e.g. ngx_http_go_module?

There is no such module.

--
Maxim Dounin
http://mdounin.ru/

From bmvishwas at gmail.com Fri Sep 30 12:28:43 2022
From: bmvishwas at gmail.com (Vishwas Bm)
Date: Fri, 30 Sep 2022 17:58:43 +0530
Subject: Nginx to tcp/tls enabled syslog server
Message-ID:

Hi,

The current syslog directive sends logs to a UDP syslog server.
Is there a plan to enhance it to support sending logs to TCP and
TLS+TCP based syslog servers?

Regards,
Vishwas
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Fri Sep 30 13:25:26 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 30 Sep 2022 16:25:26 +0300
Subject: Nginx to tcp/tls enabled syslog server
In-Reply-To:
References:
Message-ID:

Hello!

On Fri, Sep 30, 2022 at 05:58:43PM +0530, Vishwas Bm wrote:

> The current syslog directive sends logs to a UDP syslog server.
> Is there a plan to enhance it to support sending logs to TCP and
> TLS+TCP based syslog servers?

There are no such plans.

If you want to send logs to a remote tcp syslog server, consider
configuring nginx to log to a local syslog server, and forwarding
the relevant messages from the local syslog server to a remote one
(that is, something you likely already do for local syslog messages
from other local services).

--
Maxim Dounin
http://mdounin.ru/

From osa at freebsd.org.ru Fri Sep 30 13:59:58 2022
From: osa at freebsd.org.ru (Sergey A.
Osokin)
Date: Fri, 30 Sep 2022 16:59:58 +0300
Subject: need ngx_http_perl_module equivalent GO module
In-Reply-To:
References:
Message-ID:

Hi Avinash,

On Fri, Sep 30, 2022 at 04:19:39PM +0530, Avinash Ghadshi wrote:

> I searched throughout the nginx site but did not find a Go module
> equivalent to ngx_http_perl_module.
>
> Where can I find this module, e.g. ngx_http_go_module?
>
> Please advise.

There's no Go module for nginx, but it's possible to run a Go
application with NGINX Unit [1], a polyglot application server
created by the same team as the original NGINX.

References

1. https://unit.nginx.org/

--
Sergey A. Osokin

From nginx-forum at forum.nginx.org Fri Sep 30 19:29:16 2022
From: nginx-forum at forum.nginx.org (achekalin)
Date: Fri, 30 Sep 2022 15:29:16 -0400
Subject: Nginx as mail proxy: different domains with different certs
Message-ID: <1051f742a8c2c534e029006f4ba0b01e.NginxMailingListEnglish@forum.nginx.org>

I set up nginx as a mail proxy, and it works for one domain, but won't
work when I try to serve more than one domain, each with a different
SSL certificate. Is there any way I can achieve that, since nginx as a
mail proxy is quite good and seems to be a good solution?

My mistake was that I expected the same behavior from mail servers
that I was used to seeing in the http server. Say, I tried to write this:

mail {
    ...
    server {
        listen 25;
        protocol smtp;
        server_name mail.domain1.com;
        ssl_certificate mail.domain1.com.fullchain.pem;
        ssl_certificate_key mail.domain1.com.key.pem;
        starttls on;
        proxy on;
        xclient off;
    }
    server {
        listen 25;
        protocol smtp;
        server_name mail.domain2.com;
        ssl_certificate mail.domain2.com.fullchain.pem;
        ssl_certificate_key mail.domain2.com.key.pem;
        starttls on;
        proxy on;
        xclient off;
    }
    ...
}

I expected nginx to choose the right 'server' block based on
server_name (which was a wrong assumption) and then use the SSL
certificate set in that server block.
I do understand I can set up LE certs with many hostnames included,
but the story is that the domain list is too big to be included in a
single cert, so I have to use more than one server block anyway.

Please advise!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295369,295369#msg-295369
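For illustration, the single-certificate alternative the poster mentions (one server block whose certificate covers every hosted mail domain via its SAN list) might look like the following sketch. The hostnames and file paths here are placeholders, not values from the thread:

```nginx
mail {
    server {
        listen               25;
        protocol             smtp;
        server_name          mail.example.com;
        # one certificate whose SAN list covers all hosted mail domains
        ssl_certificate      /etc/ssl/mail-multi-san.fullchain.pem;
        ssl_certificate_key  /etc/ssl/mail-multi-san.key.pem;
        starttls             on;
        proxy                on;
        xclient              off;
    }
}
```

As the poster notes, this only works while the full domain list fits in a single certificate.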