From kaushalshriyan at gmail.com Wed Jan 4 17:23:23 2023 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Wed, 4 Jan 2023 22:53:23 +0530 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: Message-ID: Hi Maxim, I have tested using the attached nginx.conf file for your reference. I tested using both scenarios. When MySQL DB is down it works as expected. {"errors": "MySQL DB Server is down"} MySQL DB is up and running It reports {"errors": "MySQL DB Server is down"} in spite of MySQL DB server being fine. Please suggest. Thanks in advance. Best Regards, Kaushal On Thu, Dec 22, 2022 at 7:04 AM Maxim Dounin wrote: > Hello! > > On Tue, Dec 20, 2022 at 11:44:05PM +0530, Kaushal Shriyan wrote: > > > On Sat, Dec 17, 2022 at 3:48 AM Maxim Dounin wrote: > > > > > On Fri, Dec 16, 2022 at 11:53:40PM +0530, Kaushal Shriyan wrote: > > > > > > > I have a follow up question regarding the settings below in > nginx.conf > > > > where the php-fpm upstream server is processing all php files for > Drupal > > > > CMS. > > > > > > > > fastcgi_intercept_errors off > > > > proxy_intercept_errors off > > > > > > > > User -> Nginx -> php-fpm -> MySQL DB. > > > > > > > > For example if the php-fpm upstream server is down then nginx should > > > render > > > > 502 bad gateway > > > > if MySQL DB service is down then nginx should > render > > > > 500 ISE. > > > > > > > > Is there a way to render any of the messages or any custom messages > to > > > the > > > > User from the php-fpm upstream server that should be passed to a > client > > > > without being intercepted by the Nginx web server. Any examples? I > have > > > > attached the file for your reference. Please guide me. Thanks in > advance. > > > > > > Not sure I understand what are you asking about. > > > > > > With fastcgi_intercept_errors turned off (the default) nginx does > > > not intercept any of the errors returned by php-fpm. > > > > > > That is, when MySQL is down and php-fpm returns 500 (Internal > > > Server Error), it is returned directory to the client. When > > > php-fpm is down, nginx generates 502 (Bad Gateway) itself and > > > returns it to the client. > > > > > > > > Hi Maxim, > > > > Apologies for the delay in responding. I am still not able to get it. The > > below settings will be hardcoded in nginx.conf. Is there a way to > > dynamically render the different errors to the client when the client > hits > > http://mydomain.com/apis > > > > error_page 502 /502.json; > > > > location = /502.json { > > return 200 '{"errors": {"status_code": 502, "status": "php-fpm > > server is down"}}'; > > } > > > > Please guide me. Thanks in advance. > > You can pass these error pages to a backend server by using > proxy_pass or fastcgi_pass in the location, much like any other > resource in nginx. > > Note though that in most cases it's a bad idea, at least unless > you have a dedicated backend to generate error pages: if a request > to an upstream server failed, there is a good chance that another > request to generate an error page will fail as well. > > As such, it is usually recommended to keep error pages served by > nginx itself, either as static files, or directly returned with > "return". 
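A minimal, self-contained sketch of that last suggestion, reusing the /502.json idea from earlier in this thread (the JSON body below is only the example already quoted, not a recommendation), could look like:

    error_page 502 503 504 = /502.json;

    location = /502.json {
        internal;
        default_type application/json;
        return 502 '{"errors": {"status_code": 502, "status": "php-fpm server is down"}}';
    }

With "=", the client receives the status code produced by the redirected location (502 here), and "internal" prevents /502.json from being requested directly.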
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginxtest.conf Type: application/octet-stream Size: 7893 bytes Desc: not available URL: From ekortright at ewtn.com Tue Jan 3 20:54:48 2023 From: ekortright at ewtn.com (Eduardo Kortright) Date: Tue, 3 Jan 2023 20:54:48 +0000 Subject: nginx serving wrong proxy content, static assets not affected References: <0be15171-00c3-4c60-8a74-fea6be3a063f@Spark> Message-ID: I have several servers hosting multiple Rails sites, using nginx as a reverse proxy. All sites have unique host names and, at least at first, nginx returns the content for each site correctly (dynamic content from Rails as well as static assets such as images, javascript and CSS). Both Rails and nginx are running in Docker containers. I am using nginx 1.23.1 running in a Docker image built from the official Debian Docker image (I only added certbot for TLS certificate processing). nginx connects to each proxy via HTTP using the internal service name defined in the Docker network by each docker-compose.yml file. For some reason, some time after nginx starts nginx gets confused and begins serving content from the wrong proxy. For instance, requesting a page from aaa.com returns the Rails content for bbb.com; requesting bbb.com returns content from ccc.com; and requesting ccc.com returns content from aaa.com. This problem only affects the proxy content; the static assets for aaa.com are returned as expected (and so on for all the other sites). This is not a problem of nginx simply returning content from the wrong site. If you request a page from aaa.com, since the dynamic content (HTML markup) comes from bbb.com, the markup will contain references to assets from bbb.com, but since the URLs are relative they will be requested from aaa.com; those assets return 404 Not found because they do not exist in aaa.com. If you change the URL manually and request them from bbb.com, they are returned with no problem, so it’s not that nginx can’t resolve the host name, just that the proxy content is being routed incorrectly. Also, I don’t believe this is a problem with Docker. When nginx gets confused, I can run a shell in any Docker container and connect directly to Rails (by running a cURL command pointing to the proxy URL configured for each site in nginx), and I get the correct content every time. It is only when I request it through nginx that the content comes from a different site than the one requested. I have no idea what triggers this behavior. Once it happens, the only thing that can be done to correct it is to restart nginx. After that (could be minutes, hours, or days), the server will function as expected once again. Since I am using this setup in several production servers, at first I created a cron job to restart nginx every day, then every hour, and finally I decided to poll the sites on each server every five minutes, so that if the responses don’t look right I can restart nginx without having users experience a lengthy interruption. I have confirmed that this problem occurs on more than one server (although on one server I have only observed it once). 
I have also set up a staging server that is as close to one of the production servers as possible, but so far the problem has not occurred there (since this staging server does not get any traffic, the problem may never surface if it is triggered by a particular kind of incoming request). I have asked a question at Server Fault (https://serverfault.com/questions/1117412/nginx-serving-content-from-wrong-proxy), where I have posted sanitized versions of the nginx configuration for two sample sites. I can post here the same or any other configuration that might help diagnose this. I have never seen nginx behave like this before (I have used it to host multiple Rails sites for years without any problems—only not in combination with Docker). Any suggestions as to what to look for or what to try would be most appreciated. — Eduardo Kortright EWTN Online Services (205) 271-2900 (205) 332-4835 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pchychi at gmail.com Wed Jan 4 21:37:31 2023 From: pchychi at gmail.com (Payam Chychi) Date: Wed, 4 Jan 2023 13:37:31 -0800 Subject: nginx serving wrong proxy content, static assets not affected In-Reply-To: References: <0be15171-00c3-4c60-8a74-fea6be3a063f@Spark> Message-ID: On Wed, Jan 4, 2023 at 12:33 PM Eduardo Kortright wrote: > I have several servers hosting multiple Rails sites, using nginx as a > reverse proxy. All sites have unique host names and, at least at first, > nginx returns the content for each site correctly (dynamic content from > Rails as well as static assets such as images, javascript and CSS). > > Both Rails and nginx are running in Docker containers. I am using nginx > 1.23.1 running in a Docker image built from the official Debian Docker > image (I only added certbot for TLS certificate processing). nginx connects > to each proxy via HTTP using the internal service name defined in the > Docker network by each docker-compose.yml file. > > For some reason, some time after nginx starts nginx gets confused and > begins serving content from the wrong proxy. For instance, requesting a > page from aaa.com returns the Rails content for bbb.com; requesting > bbb.com returns content from ccc.com; and requesting ccc.com returns > content from aaa.com. This problem only affects the proxy content; the > static assets for aaa.com are returned as expected (and so on for all the > other sites). > > This is not a problem of nginx simply returning content from the wrong > site. If you request a page from aaa.com, since the dynamic content (HTML > markup) comes from bbb.com, the markup will contain references to assets > from bbb.com, but since the URLs are relative they will be requested from > aaa.com; those assets return 404 Not found because they do not exist in > aaa.com. If you change the URL manually and request them from bbb.com, > they are returned with no problem, so it’s not that nginx can’t resolve the > host name, just that the proxy content is being routed incorrectly. > > Also, I don’t believe this is a problem with Docker. When nginx gets > confused, I can run a shell in any Docker container and connect directly to > Rails (by running a cURL command pointing to the proxy URL configured for > each site in nginx), and I get the correct content every time. It is only > when I request it through nginx that the content comes from a different > site than the one requested. > > I have no idea what triggers this behavior. 
Once it happens, the only > thing that can be done to correct it is to restart nginx. After that (could > be minutes, hours, or days), the server will function as expected once > again. Since I am using this setup in several production servers, at first > I created a cron job to restart nginx every day, then every hour, and > finally I decided to poll the sites on each server every five minutes, so > that if the responses don’t look right I can restart nginx without having > users experience a lengthy interruption. > > I have confirmed that this problem occurs on more than one server > (although on one server I have only observed it once). I have also set up a > staging server that is as close to one of the production servers as > possible, but so far the problem has not occurred there (since this staging > server does not get any traffic, the problem may never surface if it is > triggered by a particular kind of incoming request). > > I have asked a question at Server Fault ( > https://serverfault.com/questions/1117412/nginx-serving-content-from-wrong-proxy), > where I have posted sanitized versions of the nginx configuration for two > sample sites. I can post here the same or any other configuration that > might help diagnose this. I have never seen nginx behave like this before > (I have used it to host multiple Rails sites for years without any > problems—only not in combination with Docker). > > Any suggestions as to what to look for or what to try would be most > appreciated. > > — > Eduardo Kortright > EWTN Online Services > (205) 271-2900 > (205) 332-4835 (cell) > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > Sounds like you are either having hash collisions or incorrect dns resolution. - Are you caching? Set a larger key size - Have you looked at dns and host resolution for the impacted requests? — Payam -- Payam Tarverdyan Chychi -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Thu Jan 5 16:45:34 2023 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 5 Jan 2023 22:15:34 +0530 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: Message-ID: Hi, I will appreciate if someone can pitch in for my earlier post to this mailing list. I have the below location block. location /apis { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. fastcgi_intercept_errors off; add_header "X-Debug-JSON-APIS" $upstream_http_content_type"abc" always; default_type application/json; return 500 '{"errors": "MySQL DB Server is down"}'; # if ($isdatatypejson = "no") { # add_header 'Content-Type' 'application/json' always; # add_header 'Content-Type-2' $upstream_http_content_type$isdatatypejson"OK" always; # return 502 '{"errors": {"status_code": 502,"status": "php-fpm server is down"}}'; # } When I hit http://mydomain.com/apis for conditions when MySQL DB is down. I get the below output and it works as expected. {"errors": "MySQL DB Server is down"} When I hit http://mydomain.com/apis for conditions when MySQL DB is up and running fine, I get the below output in spite of MySQL DB server being fine. {"errors": "MySQL DB Server is down"} Ideally the application will be working as normal. Am I missing anything in the nginx config? 
I have tested using the attached nginx.conf file for your reference. Please suggest. Thanks in advance. Best Regards, Kaushal On Wed, Jan 4, 2023 at 10:53 PM Kaushal Shriyan wrote: > Hi Maxim, > > I have tested using the attached nginx.conf file for your reference. I > tested using both scenarios. > > When MySQL DB is down it works as expected. > > {"errors": "MySQL DB Server is down"} > > MySQL DB is up and running > > It reports {"errors": "MySQL DB Server is down"} in spite of MySQL DB > server being fine. > > Please suggest. Thanks in advance. > > Best Regards, > > Kaushal > > > On Thu, Dec 22, 2022 at 7:04 AM Maxim Dounin wrote: > >> Hello! >> >> On Tue, Dec 20, 2022 at 11:44:05PM +0530, Kaushal Shriyan wrote: >> >> > On Sat, Dec 17, 2022 at 3:48 AM Maxim Dounin >> wrote: >> > >> > > On Fri, Dec 16, 2022 at 11:53:40PM +0530, Kaushal Shriyan wrote: >> > > >> > > > I have a follow up question regarding the settings below in >> nginx.conf >> > > > where the php-fpm upstream server is processing all php files for >> Drupal >> > > > CMS. >> > > > >> > > > fastcgi_intercept_errors off >> > > > proxy_intercept_errors off >> > > > >> > > > User -> Nginx -> php-fpm -> MySQL DB. >> > > > >> > > > For example if the php-fpm upstream server is down then nginx should >> > > render >> > > > 502 bad gateway >> > > > if MySQL DB service is down then nginx should >> render >> > > > 500 ISE. >> > > > >> > > > Is there a way to render any of the messages or any custom messages >> to >> > > the >> > > > User from the php-fpm upstream server that should be passed to a >> client >> > > > without being intercepted by the Nginx web server. Any examples? I >> have >> > > > attached the file for your reference. Please guide me. Thanks in >> advance. >> > > >> > > Not sure I understand what are you asking about. >> > > >> > > With fastcgi_intercept_errors turned off (the default) nginx does >> > > not intercept any of the errors returned by php-fpm. >> > > >> > > That is, when MySQL is down and php-fpm returns 500 (Internal >> > > Server Error), it is returned directory to the client. When >> > > php-fpm is down, nginx generates 502 (Bad Gateway) itself and >> > > returns it to the client. >> > > >> > > >> > Hi Maxim, >> > >> > Apologies for the delay in responding. I am still not able to get it. >> The >> > below settings will be hardcoded in nginx.conf. Is there a way to >> > dynamically render the different errors to the client when the client >> hits >> > http://mydomain.com/apis >> > >> > error_page 502 /502.json; >> > >> > location = /502.json { >> > return 200 '{"errors": {"status_code": 502, "status": "php-fpm >> > server is down"}}'; >> > } >> > >> > Please guide me. Thanks in advance. >> >> You can pass these error pages to a backend server by using >> proxy_pass or fastcgi_pass in the location, much like any other >> resource in nginx. >> >> Note though that in most cases it's a bad idea, at least unless >> you have a dedicated backend to generate error pages: if a request >> to an upstream server failed, there is a good chance that another >> request to generate an error page will fail as well. >> >> As such, it is usually recommended to keep error pages served by >> nginx itself, either as static files, or directly returned with >> "return". 
>> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> https://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From justdan23 at gmail.com Thu Jan 5 18:46:29 2023 From: justdan23 at gmail.com (Dan Swaney) Date: Thu, 5 Jan 2023 13:46:29 -0500 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: Message-ID: Have you tried enabling "debug" mode logging and checked the log file for more details? For example, modify the settings in your nginx.conf and restart the nginx service: error_log /dev/stdout debug; http { access_log /dev/stdout; ... } On Thu, Dec 15, 2022, 11:23 AM Kaushal Shriyan wrote: > Hi, > > I am running the nginx version: nginx/1.22 as a reverse proxy server on > CentOS Linux release 7.9.2009 (Core). When I hit http://mydomain.com/apis I > see the below message on the browser even if the upstream server php-fpm > server is up and running. > > *{"errors": {"status_code": 502,"status": "php-fpm server is down"}}* > > I have set the below in the nginx.conf file and attached the file for your > reference. > > if ($upstream_http_content_type = "") { > add_header 'Content-Type' 'application/json' always; > add_header 'Content-Type-3' > $upstream_http_content_type$isdatatypejson"OK" always; > return 502 '{"errors": {"status_code": 502,"status": > "php-fpm server is down"}}'; > } > > # systemctl status php-fpm > ● php-fpm.service - The PHP FastCGI Process Manager > Loaded: loaded (/usr/lib/systemd/system/php-fpm.service; enabled; > vendor preset: disabled) > Drop-In: /etc/systemd/system/php-fpm.service.d > └─override.conf > Active: active (running) since Thu 2022-12-15 15:53:31 UTC; 10s ago > Main PID: 9185 (php-fpm) > Status: "Processes active: 0, idle: 5, Requests: 0, slow: 0, Traffic: > 0req/sec" > CGroup: /system.slice/php-fpm.service > ├─9185 php-fpm: master process (/etc/php-fpm.conf) > ├─9187 php-fpm: pool www > ├─9188 php-fpm: pool www > ├─9189 php-fpm: pool www > ├─9190 php-fpm: pool www > └─9191 php-fpm: pool www > > Dec 15 15:53:31 systemd[1]: Starting The PHP FastCGI Process Manager... > Dec 15 15:53:31 systemd[1]: Started The PHP FastCGI Process Manager. > # > > Please guide me. > > Best Regards, > > Kaushal > > {"errors": {"status_code": 502,"status": "php-fpm server is down"}} > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at 16bits.net Fri Jan 6 00:34:26 2023 From: nginx at 16bits.net (=?ISO-8859-1?Q?=C1ngel?=) Date: Fri, 06 Jan 2023 01:34:26 +0100 Subject: nginx serving wrong proxy content, static assets not affected In-Reply-To: References: <0be15171-00c3-4c60-8a74-fea6be3a063f@Spark> Message-ID: <5423482fda51eec7b4f8871408ad8657423476af.camel@16bits.net> On 2023-01-03 at 20:54 +0000, Eduardo Kortright wrote: > I have no idea what triggers this behavior. Once it happens, the only > thing that can be done to correct it is to restart nginx. After that > (could be minutes, hours, or days), the server will function as > expected once again. 
Since I am using this setup in several > production servers, at first I created a cron job to restart nginx > every day, then every hour, and finally I decided to poll the sites > on each server every five minutes, so that if the responses don’t > look right I can restart nginx without having users experience a > lengthy interruption. Did you try reload instead of a restart? That's usually enough for getting nginx update the sources, and is transparent to your users. As for the actual problem, as I understand you have 4 docker containers: - aaa.com (Rails app) - bbb.com (Rails app) - ccc.com (Rails app) - proxy (nginx, with the static assets for the 3 sites) Do the ip addresses for the rails sites change over time? Mind that nginx will query the hostname only once (at startup/reload), *and use that same ip forever* If the other containers switched ips, that would produce the exact behavior that you are seeing. You can force nginx to requery dns by using a variable see https://forum.nginx.org/read.php?2,215830,215832#msg-215832 From ekortright at ewtn.com Fri Jan 6 01:54:39 2023 From: ekortright at ewtn.com (Eduardo Kortright) Date: Fri, 6 Jan 2023 01:54:39 +0000 Subject: nginx serving wrong proxy content, static assets not affected In-Reply-To: <5423482fda51eec7b4f8871408ad8657423476af.camel@16bits.net> References: <0be15171-00c3-4c60-8a74-fea6be3a063f@Spark> <5423482fda51eec7b4f8871408ad8657423476af.camel@16bits.net> Message-ID: I'll bet that's it! There is nothing in my configuration that makes the IP addresses of the containers in the Docker network stay fixed. I would not be surprised if, when two or more containers are restarted (as they probably are every once in a while when logrotate runs), some or all of them may exchange IP addresses. I will try to duplicate this so I can post the results here, but in any case I will find out how to assign specific IP addresses to the containers in the Docker configuration and do that from now on. Your observation that nginx looks up the IP once and assumes it will not change would explain what is going on. I can't thank you enough, as this was driving me crazy. Thank you also for your other very helpful suggestions (reloading nginx instead of restarting, forcing DNS lookups). ________________________________ > I have no idea what triggers this behavior. Once it happens, the only > thing that can be done to correct it is to restart nginx. After that > (could be minutes, hours, or days), the server will function as > expected once again. Since I am using this setup in several > production servers, at first I created a cron job to restart nginx > every day, then every hour, and finally I decided to poll the sites > on each server every five minutes, so that if the responses don’t > look right I can restart nginx without having users experience a > lengthy interruption. Did you try reload instead of a restart? That's usually enough for getting nginx update the sources, and is transparent to your users. As for the actual problem, as I understand you have 4 docker containers: - aaa.com (Rails app) - bbb.com (Rails app) - ccc.com (Rails app) - proxy (nginx, with the static assets for the 3 sites) Do the ip addresses for the rails sites change over time? Mind that nginx will query the hostname only once (at startup/reload), *and use that same ip forever* If the other containers switched ips, that would produce the exact behavior that you are seeing. 
You can force nginx to requery dns by using a variable see https://forum.nginx.org/read.php?2,215830,215832#msg-215832 -------------- next part -------------- An HTML attachment was scrubbed... URL: From unilynx at gmail.com Fri Jan 6 17:32:37 2023 From: unilynx at gmail.com (Arnold Hendriks) Date: Fri, 6 Jan 2023 18:32:37 +0100 Subject: Setting response headers per file Message-ID: Is there a way to have nginx serve up static content with custom headers - but being able to set these headers per file (and not in the nginx configuration) Eg. if I request http://127.0.0.1/attachment-12345 I would like nginx to use try_files to find the file 'attachment-12345' and serve it... but to also look at 'attachment-12345.headers', take Content-Type: application/mswordContent-Disposition: attachment; name="abc.doc"Cache-Control: immutable from that file and serve those headers. So nothing based on extension, and each file can have its own headers. (This would allow me to somewhat simulate a S3/Cloudfront based cache using local disk... as S3 allows me to set a subset of HTTP headers for every file) It would be even cooler if you could also put X-Accel-Redirect or Status: 301 headers in such a file and manage those in a 'static' way, but that's beyond what I need now With regards,Arnold Hendriks -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekortright at ewtn.com Fri Jan 6 20:46:23 2023 From: ekortright at ewtn.com (Eduardo Kortright) Date: Fri, 6 Jan 2023 20:46:23 +0000 Subject: nginx serving wrong proxy content, static assets not affected In-Reply-To: References: <0be15171-00c3-4c60-8a74-fea6be3a063f@Spark> <5423482fda51eec7b4f8871408ad8657423476af.camel@16bits.net> Message-ID: On my staging server, I stopped container aaa (with IP address x.x.x.5); I then restarted container bbb (with IP address x.x.x.6); finally, I started aaa again. As expected, when bbb restarted it claimed aaa’s old IP address, since it was the lowest available address. When aaa started back up, it took bbb’s old IP, so they ended up swapping IP addresses, but nginx thinks they’re at their original locations. Once again, thank you for your help with this. As I mentioned, I’m probably just going to make Docker assign fixed addresses to each container so that nginx can look up the names once. If you are interested in that sort of thing, please leave an answer at https://serverfault.com/questions/1117412/nginx-serving-content-from-wrong-proxy and I’ll be happy to mark it correct. I'll bet that's it! There is nothing in my configuration that makes the IP addresses of the containers in the Docker network stay fixed. I would not be surprised if, when two or more containers are restarted (as they probably are every once in a while when logrotate runs), some or all of them may exchange IP addresses. I will try to duplicate this so I can post the results here, but in any case I will find out how to assign specific IP addresses to the containers in the Docker configuration and do that from now on. Your observation that nginx looks up the IP once and assumes it will not change would explain what is going on. I can't thank you enough, as this was driving me crazy. Thank you also for your other very helpful suggestions (reloading nginx instead of restarting, forcing DNS lookups). 
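The variable approach from the forum post linked above looks roughly like this in a Docker setup (a sketch only: 127.0.0.11 is assumed to be Docker's embedded DNS on a user-defined network, and the aaa-app:3000 service name and port are placeholders):

    # re-resolve the upstream name at request time instead of pinning the IP at startup
    resolver 127.0.0.11 valid=30s;

    location / {
        # using a variable in proxy_pass makes nginx look the name up via the resolver
        set $rails_backend http://aaa-app:3000;
        proxy_pass $rails_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }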
Do the ip addresses for the rails sites change over time? Mind that nginx will query the hostname only once (at startup/reload), *and use that same ip forever* If the other containers switched ips, that would produce the exact behavior that you are seeing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekortright at ewtn.com Fri Jan 6 20:50:17 2023 From: ekortright at ewtn.com (Eduardo Kortright) Date: Fri, 6 Jan 2023 20:50:17 +0000 Subject: nginx serving wrong proxy content, static assets not affected In-Reply-To: References: <0be15171-00c3-4c60-8a74-fea6be3a063f@Spark> Message-ID: <8418071c-eaf6-4b9e-887c-af40567979bd@Spark> Hi Payam, I’m not doing any caching. It looks like it is indeed a DNS problem, as the Docker containers are occasionally changing their IP address as the containers are restarted, but as Ángel pointed out, nginx does not resolve the name at each request, but only when it loads the configuration initially. Thank you for your help. Sounds like you are either having hash collisions or incorrect dns resolution. - Are you caching? Set a larger key size - Have you looked at dns and host resolution for the impacted requests? — Payam -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Jan 6 21:25:54 2023 From: francis at daoine.org (Francis Daly) Date: Fri, 6 Jan 2023 21:25:54 +0000 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: Message-ID: <20230106212554.GE875@daoine.org> On Thu, Jan 05, 2023 at 10:15:34PM +0530, Kaushal Shriyan wrote: Hi there, > When I hit http://mydomain.com/apis for conditions when MySQL DB is down. I > get the below output and it works as expected. > > {"errors": "MySQL DB Server is down"} > > When I hit http://mydomain.com/apis for conditions when MySQL DB is up and > running fine, I get the below output in spite of MySQL DB server being > fine.
> > > > {"errors": "MySQL DB Server is down"} > > Your config is > > location /apis { > return 500 '{"errors": "MySQL DB Server is down"}'; > } > > Whenever you make a request that is handled in that location{}, your > nginx will return that response. > > It looks like your nginx is doing what it was told to do. > > No part of your config indicates that nginx knows (or cares) whether > MySQL DB is up or down. Does something outside of nginx know that? > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx Thanks Francis for the detailed explanation. Is there a way to configure Nginx for the below conditions? When I hit http://mydomain.com/apis for conditions when MySQL DB is down. I get the below output and it works as expected. {"errors": "MySQL DB Server is down"} When I hit http://mydomain.com/apis for conditions when MySQL DB is up and running fine, I get the below output in spite of MySQL DB server being fine. {"errors": "MySQL DB Server is down"} Please suggest. Thanks in advance. I appreciate your help as always. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From justdan23 at gmail.com Sat Jan 7 17:09:51 2023 From: justdan23 at gmail.com (Dan Swaney) Date: Sat, 7 Jan 2023 12:09:51 -0500 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: <20230106212554.GE875@daoine.org> Message-ID: To query your MySql database, I propose you use a REST interface to decouple it from your Front End. The REST interface should be a HTTP Web Service either self hosted or hosted by another Web Server locally on the NGINX Server. (This allows NGINX to handle authentication and HTTPS encryption centrally.) As a result, use your REST Web Servive to also determine if tthe MySql DB s up or down when servicing the request and return a 200 message to NGINX indicating if there is an error in a JSON payload. I propose avoiding use of HTTP error codes yourself. Let NGINX or the Web Server or the Client to deal with underlying network issues use status codes beyond 2000. This also will guarantee your message will not be stripped and you can provide whatever level of detail needed for your client code or Front End to understand and handle. Let NGINX proxy all traffic between the REST Web Service and your Client. Adjust cookie details as needed for the domain security sabdboxing. On Fri, Jan 6, 2023, 9:58 PM Kaushal Shriyan wrote: > > > On Sat, Jan 7, 2023 at 2:56 AM Francis Daly wrote: > >> On Thu, Jan 05, 2023 at 10:15:34PM +0530, Kaushal Shriyan wrote: >> >> Hi there, >> >> > When I hit http://mydomain.com/apis for conditions when MySQL DB is >> down. I >> > get the below output and it works as expected. >> > >> > {"errors": "MySQL DB Server is down"} >> > >> > When I hit http://mydomain.com/apis for conditions when MySQL DB is up >> and >> > running fine, I get the below output in spite of MySQL DB server being >> > fine. >> > >> > {"errors": "MySQL DB Server is down"} >> >> Your config is >> >> location /apis { >> return 500 '{"errors": "MySQL DB Server is down"}'; >> } >> >> Whenever you make a request that is handled in that location{}, your >> nginx will return that response. >> >> It looks like your nginx is doing what it was told to do. 
>> >> No part of your config indicates that nginx knows (or cares) whether >> MySQL DB is up or down. Does something outside of nginx know that? >> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> https://mailman.nginx.org/mailman/listinfo/nginx > > > > Thanks Francis for the detailed explanation. Is there a way to configure > Nginx for the below conditions? > > When I hit http://mydomain.com/apis for conditions when MySQL DB is down. > I get the below output and it works as expected. > > {"errors": "MySQL DB Server is down"} > > When I hit http://mydomain.com/apis for conditions when MySQL DB is up > and running fine, I get the below output in spite of MySQL DB server being > fine. > > {"errors": "MySQL DB Server is down"} > > Please suggest. Thanks in advance. I appreciate your help as always. > > Best Regards, > > Kaushal > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bmvishwas at gmail.com Sat Jan 7 17:18:31 2023 From: bmvishwas at gmail.com (Vishwas Bm) Date: Sat, 7 Jan 2023 22:48:31 +0530 Subject: nginx ssl stream termination for MySQL backends Message-ID: Hi, Below is the use case which I am trying: client--->nginx stream(ssl termination) ---> MySQL Db Connection between nginx and MySQL db is unencrypted. When I send ssl request using MySQL client, I am getting ssl handshake timeout error. I do not see client hello from client in tcpdump capture. Is the above usecase valid with nginx? Has someone tried this configuration ? Regards, Vishwas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Jan 7 17:56:34 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 7 Jan 2023 20:56:34 +0300 Subject: nginx ssl stream termination for MySQL backends In-Reply-To: References: Message-ID: Hello! On Sat, Jan 07, 2023 at 10:48:31PM +0530, Vishwas Bm wrote: > Below is the use case which I am trying: > > client--->nginx stream(ssl termination) ---> MySQL Db > > Connection between nginx and MySQL db is unencrypted. > > When I send ssl request using MySQL client, I am getting ssl handshake > timeout error. I do not see client hello from client in tcpdump capture. > > Is the above usecase valid with nginx? > Has someone tried this configuration ? The MySQL protocol uses an internal SSL handshake establishment, which is only happens if both client and server agree to use it. That is, it works similarly to STARTTLS in SMTP. See here for details: https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_basic_tls.html As such, it is not possible to do simple SSL offloading, something that nginx stream module can do for you, but rather a protocol-specific implementation is needed. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Mon Jan 9 09:22:29 2023 From: francis at daoine.org (Francis Daly) Date: Mon, 9 Jan 2023 09:22:29 +0000 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: <20230106212554.GE875@daoine.org> Message-ID: <20230109092229.GF875@daoine.org> On Sat, Jan 07, 2023 at 08:28:17AM +0530, Kaushal Shriyan wrote: Hi there, > Thanks Francis for the detailed explanation. Is there a way to configure > Nginx for the below conditions? 
I think, indirectly, yes. If you want your nginx to react to some external status (e.g., the up-or-down state of a MySQL DB), then you need to indicate how your nginx will become aware of that status. (For what it's worth: if there is nothing new in your system since November / December, the suggestion remains "change your php to do this".) f -- Francis Daly francis at daoine.org From paul at stormy.ca Tue Jan 10 17:03:06 2023 From: paul at stormy.ca (Paul) Date: Tue, 10 Jan 2023 12:03:06 -0500 Subject: Redirect www to not-www Message-ID: Happy 2023 to all on this list. Using nginx (1.18.0 on Ubuntu 20.04.5) as proxy to back-end, I have three sites (a|b|c.example.com) in a fast, reliable production environment. I have DNS records set up for www.a|b|c.example.com. I have CertBot set up for only a|b|c.example.com. To avoid "doubling" the number of sites-available and security scripts, and to avoid the unnecessary "www." I would like to add something like: server { server_name www.a.example.com; return 301 $scheme://a.example.com$request_uri; } but I have tried this in several places, www.a.example.com works, but does not remove the www prefix, and fails any browser's security checks (nginx -t is "ok"). Where, in the following config, is the most elegant place to put such a "return" line? Maybe I'm missing something fundamental? server { listen 443 ssl; [ ... # 4 lines managed by Certbot ... ] server_name a.example.com; # Note: or b.example.com, or c.example.com [ ... logging ... ] proxy_buffering off; if ($request_method !~ ^(GET|HEAD|POST)$) { return 444; } location / { proxy_pass http://192.168.x.y:port; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } server { if ($host = a.example.com) { # Note: or b.example.com, or c.example.com return 301 https://$host$request_uri; } listen 80; server_name a.example.com; # Note: or b.example.com, or c.example.com rewrite ^ https://$host$request_uri? permanent; } Many thanks -- Paul \\\||// (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| From francis at daoine.org Tue Jan 10 18:43:43 2023 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Jan 2023 18:43:43 +0000 Subject: Redirect www to not-www In-Reply-To: References: Message-ID: <20230110184343.GH875@daoine.org> On Tue, Jan 10, 2023 at 12:03:06PM -0500, Paul wrote: Hi there, > Using nginx (1.18.0 on Ubuntu 20.04.5) as proxy to back-end, I have three > sites (a|b|c.example.com) in a fast, reliable production environment. I have > DNS records set up for www.a|b|c.example.com. I have CertBot set up for > only a|b|c.example.com. > > To avoid "doubling" the number of sites-available and security scripts, and > to avoid the unnecessary "www." I would like to add something like: > > server { > server_name www.a.example.com; > return 301 $scheme://a.example.com$request_uri; > } > Maybe I'm missing something fundamental? Yes, you are missing something fundamental :-( There are 4 families of requests that the client can make: * http://www.a.example.com * http://a.example.com * https://www.a.example.com * https://a.example.com It looks like you want each of the first three to be redirected to the fourth? 
It is straightforward to redirect the first two to the fourth -- something like server { server_name a.example.com www.a.example.com; return 301 https://a.example.com$request_uri; } should cover both. (Optionally with "listen 80;", it replaces your similar no-ssl server{} block.) But for the third family, the client will first try to validate the certificate that it is given when it connects to www.a.example.com, before it will make the http(s) request that you can reply to with a redirect. And since you do not (appear to) have a certificate for www.a.example.com, that validation will fail and there is nothing you can do about it. (Other that get a certificate.) Cheers, f -- Francis Daly francis at daoine.org From paul at stormy.ca Tue Jan 10 23:45:15 2023 From: paul at stormy.ca (Paul) Date: Tue, 10 Jan 2023 18:45:15 -0500 Subject: Redirect www to not-www In-Reply-To: <20230110184343.GH875@daoine.org> References: <20230110184343.GH875@daoine.org> Message-ID: <8441c28a-3508-85f5-5440-73cd660eadc9@stormy.ca> On 2023-01-10 13:43, Francis Daly wrote: >> Using nginx (1.18.0 on Ubuntu 20.04.5) as proxy to back-end, I have three >> sites (a|b|c.example.com) in a fast, reliable production environment. I have >> DNS records set up for www.a|b|c.example.com. I have CertBot set up for >> only a|b|c.example.com. >> >> To avoid "doubling" the number of sites-available and security scripts, and >> to avoid the unnecessary "www." I would like to add something like: >> /.../ > There are 4 families of requests that the client can make: > > * http://www.a.example.com > * http://a.example.com > * https://www.a.example.com > * https://a.example.com > > It looks like you want each of the first three to be redirected to > the fourth? Many thanks. That is totally correct. Given your comment re "lack of certificate" and "validation will fail" I have now expanded CertBot to include the three "www." names. All works fine (as far as I can see using Firefox, Opera, Vivaldi clients -- and Edge, had to boot up an old laptop!) BUT... for that one step further and have all server (nginx) responses go back to the end-client as: https://a.example.com and NOT as: https://www.a.example.com ^^^ I have written an /etc/nginx/conf.d/redirect.conf as: server { server_name www.a.example.com; return 301 $scheme://a.example.com$request_uri; } which seems to work, but I would appreciate your opinion - is this the best, most elegant, secure way? Does it need "permanent" somewhere? I've never used "scheme" before today, but we've got an external advisory audit going on, and I'm trying to keep them happy. Many thanks and best regards, Paul > > It is straightforward to redirect the first two to the fourth -- > something like > > server { > server_name a.example.com www.a.example.com; > return 301 https://a.example.com$request_uri; > } > > should cover both. > > (Optionally with "listen 80;", it replaces your similar no-ssl server{} > block.) > > But for the third family, the client will first try to validate the > certificate that it is given when it connects to www.a.example.com, > before it will make the http(s) request that you can reply to with > a redirect. And since you do not (appear to) have a certificate for > www.a.example.com, that validation will fail and there is nothing you > can do about it. (Other that get a certificate.) 
> > Cheers, > > f \\\||// (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| From francis at daoine.org Wed Jan 11 00:37:49 2023 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Jan 2023 00:37:49 +0000 Subject: Redirect www to not-www In-Reply-To: <8441c28a-3508-85f5-5440-73cd660eadc9@stormy.ca> References: <20230110184343.GH875@daoine.org> <8441c28a-3508-85f5-5440-73cd660eadc9@stormy.ca> Message-ID: <20230111003749.GI875@daoine.org> On Tue, Jan 10, 2023 at 06:45:15PM -0500, Paul wrote: Hi there, > BUT... for that one step further and have all server (nginx) responses go > back to the end-client as: > https://a.example.com > and NOT as: > https://www.a.example.com > ^^^ > I have written an /etc/nginx/conf.d/redirect.conf as: > server { > server_name www.a.example.com; > return 301 $scheme://a.example.com$request_uri; > } > > which seems to work, but I would appreciate your opinion - is this the best, > most elegant, secure way? Does it need "permanent" somewhere? It does not need "permanent" -- that it a signal to "rewrite" to use a http 301 not http 302 response; and you are using a http 301 response directly. (See, for example, http://http.cat/301 or http://http.cat/302 for the meaning of the numbers. Warning: contains cats.) > I've never used "scheme" before today, but we've got an external advisory > audit going on, and I'm trying to keep them happy. $scheme is http or https depending on the incoming ssl status. That 4-line server{} block does not do ssl, so $scheme is always http there. http://nginx.org/r/$scheme Either way, this would redirect from http://www.a. to http://a., and then the next request would redirect from http://a. to https://a.. I suggest that you are better off just redirecting to https the first time. You will want a server{} with something like "listen 443 ssl;" and "server_name www.a.example.com;" and the appropriate certificate and key; and then also redirect to https://a. in that block. So for the four families http,https of www.a,a you will probably want three or four server{} blocks -- you could either put http www.a and http a in one block; or you could put https www.a and http www.a in one block; and then one block for the other, plus one for the https a that is the "real" config -- the other ones will be small enough configs that "just" return 301 to https://a. Which should be simple enough to audit for correctness. Good luck with it, f -- Francis Daly francis at daoine.org From xserverlinux at gmail.com Fri Jan 13 03:30:12 2023 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Thu, 12 Jan 2023 21:30:12 -0600 Subject: load balancer the best way to solve Message-ID: Hi list, I have a situation where I am looking for the best way to solve it, I have a load balancer with nginx 1.22.1 and behind it I have three backend servers: / -> App1 / load balancer. ----/ --> App2 / /---> App3 if shutdown app1, the load balancer keeps sending traffic to app1 , and the clients are in a lag waiting for app1 to respond, I think the load balancer should send all those clients to app2 and app3, but it doesn't. 
it put me in research mode :) and the nginx version can't do that, it's only in the plus version, correct me if I'm wrong, but "Voilà" https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/#hc_active , It gave me hope when reading that this option could help me, that by adding it to the nginx configuration it would not be able to continue sending requests, but it is not like that. logs: 2023/01/12 18:07:38 [error] 26895#26895: *834 no live upstreams while connecting to upstream, client: 44.210.106.130, server: demo.app.com, request: "GET /aaaaaaaaaaaaaaaaaaaaaaaaaqr HTTP/1.1", upstream: "http://paginaweb/aaaaaaaaaaaaaaaaaaaaaaaaaqr", host: "demo.app.com", referrer: "http://173.255.X.X:80/aaaaaaaaaaaaaaaaaaaaaaaaaqr" 2023/01/12 18:07:38 [error] 26895#26895: *832 no live upstreams while connecting to upstream, client: 44.210.106.130, server: demo.app.com, request: "GET /99vt HTTP/1.1", upstream: "http://paginaweb/99vt", host: "demo.app.com", referrer: "http://173.255.X.X:80/99vt" 2023/01/12 18:07:38 [error] 26895#26895: *838 no live upstreams while connecting to upstream, client: 44.210.106.130, server: demo.app.com, request: "GET /99vu HTTP/1.1", upstream: "http://paginaweb/99vu", host: "173.255.X.X:443" 2023/01/12 18:07:39 [error] 26895#26895: *841 no live upstreams while connecting to upstream, client: 44.210.106.130, server: demo.app.com, request: "GET /99vu HTTP/1.1", upstream: "http://paginaweb/99vu", host: "demo.app.com", referrer: "http://173.255.X.X:80/99vu" 2023/01/12 20:24:16 [error] 26895#26895: *6206 upstream prematurely closed connection while reading response header from upstream, client: 167.94.138.62, server: demo.app.com, request: "GET / HTTP/1.1", upstream: "http://104.237.138.27:80/", host: "demo.app.com" 2023/01/12 20:24:16 [error] 26895#26895: *6206 no live upstreams while connecting to upstream, client: 167.94.138.62, server: demo.app.com, request: "GET / HTTP/1.1", upstream: "http://paginaweb/", host: "demo.app.com" 2023/01/12 20:24:26 [error] 26895#26895: *6220 no live upstreams while connecting to upstream, client: 167.248.133.46, server: demo.app.com, request: "GET / HTTP/1.1", upstream: "http://paginaweb/", host: "demo.app.com" 2023/01/12 21:14:27 [error] 26895#26895: *7559 no live upstreams while connecting to upstream, client: 162.221.192.26, server: demo.app.com, request: "GET / HTTP/1.1", upstream: "http://paginaweb/", host: "173.255.X.X" #### CONFIG LB #### upstream paginaweb { ip_hash; server 96.X.X.X:80 weight=1 fail_timeout=30s max_fails=3; server 104.X.X.X:80 weight=1 fail_timeout=30s max_fails=3; server 190.X.X.X:80 weight=1 fail_timeout=30s max_fails=3; keepalive 5; } server { server_tokens off; listen 80; server_name demo.app.com; pagespeed unplugged; return 301 https://$server_name$request_uri; } server { listen 443 ssl http2; server_name demo.app.com; access_log /var/www/vhost/demo.app.com/logs/access.log specialLog; error_log /var/www/vhost/demo.app.com/logs/error.log; add_header Cache-Control "max-age=86400, public"; ssl_certificate /etc/letsencrypt/live/demo.app.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/demo.app.com/privkey.pem; # managed by Certbot ssl_trusted_certificate /etc/letsencrypt/live/demo.app.com/chain.pem; # ssl_certificate /etc/nginx/ssl/nginx-selfsigned.crt; # ssl_certificate_key /etc/nginx/ssl/nginx-selfsigned.key; ssl_protocols TLSv1.3 TLSv1.2; ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-A 
ES256-SHA384; ssl_prefer_server_ciphers on; ssl_stapling on; ssl_stapling_verify on; add_header Strict-Transport-Security "max-age=31557600; includeSubDomains"; add_header X-Xss-Protection "1; mode=block" always; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Content-Type-Options "nosniff" always; keepalive_requests 1000; keepalive_timeout 75 75; ssl_session_cache shared:SSL:10m; ssl_session_timeout 30m; gzip on; gzip_types text/plain text/css text/xml text/javascript; gzip_proxied any; gzip_vary on; client_max_body_size 5120m; location / { proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Real_IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-WAf-Proxy true; real_ip_header X-Real-IP; proxy_http_version 1.1; proxy_pass http://paginaweb; proxy_redirect off; # health_check; } } any idea? -- rickygm http://gnuforever.homelinux.com From mdounin at mdounin.ru Fri Jan 13 23:50:30 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 14 Jan 2023 02:50:30 +0300 Subject: load balancer the best way to solve In-Reply-To: References: Message-ID: Hello! On Thu, Jan 12, 2023 at 09:30:12PM -0600, Rick Gutierrez wrote: > Hi list, I have a situation where I am looking for the best way to > solve it, I have a load balancer with nginx 1.22.1 and behind it I > have three backend servers: > > / -> App1 > / > load balancer. ----/ --> App2 > / > /---> App3 > > if shutdown app1, the load balancer keeps sending traffic to app1 , > and the clients are in a lag waiting for app1 to respond, I think the > load balancer should send all those clients to app2 and app3, but it > doesn't. > > it put me in research mode :) and the nginx version can't do that, > it's only in the plus version, correct me if I'm wrong, but "Voilà" > https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/#hc_active > , It gave me hope when reading that this option could help me, that by > adding it to the nginx configuration it would not be able to continue > sending requests, but it is not like that. Certainly nginx can do that. By default, all user requests are used by nginx to detect any upstream server failures, and re-route requests to other available servers. Active health checks, which are indeed only available in the commercial version, are only different that they also use requests generated periodically by nginx-plus itself. This might improve service to some real clients in some specific cases, but not generally required. > logs: > > 2023/01/12 18:07:38 [error] 26895#26895: *834 no live upstreams while > connecting to upstream, client: 44.210.106.130, server: demo.app.com, > request: "GET /aaaaaaaaaaaaaaaaaaaaaaaaaqr HTTP/1.1", upstream: > "http://paginaweb/aaaaaaaaaaaaaaaaaaaaaaaaaqr", host: "demo.app.com", > referrer: "http://173.255.X.X:80/aaaaaaaaaaaaaaaaaaaaaaaaaqr" > 2023/01/12 18:07:38 [error] 26895#26895: *832 no live upstreams while > connecting to upstream, client: 44.210.106.130, server: demo.app.com, > request: "GET /99vt HTTP/1.1", upstream: "http://paginaweb/99vt", > host: "demo.app.com", referrer: "http://173.255.X.X:80/99vt" The errors indicate that all your upstream servers were not responding properly, and were either all tried for the particular request, or were disabled based on "fail_timeout=30s max_fails=3" in your configuration. Usually looking into other errors in the logs makes it immediately obvious what actually happened. 
Alternatively, you may want to further dig into what happened with the requests by logging the $upstream_addr and $upstream_status variables (see https://nginx.org/r/$upstream_addr and https://nginx.org/r/$upstream_status for details). [...] -- Maxim Dounin http://mdounin.ru/ From mqudsi at neosmart.net Tue Jan 17 18:04:31 2023 From: mqudsi at neosmart.net (Mahmoud Al-Qudsi) Date: Tue, 17 Jan 2023 12:04:31 -0600 Subject: Unsafe AIO under FreeBSD? Message-ID: Hello all, By default, FreeBSD restricts potentially unsafe AIO operations (as determined by the target fd type) and operations like aio_read(2) will “falsely” return EOPNOTSUPP to avoid a potentially dangerous operation that can result in blocking the aio threadpool hanging the system or the process, per aio(4). I’ve observed in production with an nginx/1.23.3 instance (compiled with --with-file-aio) running on FreeBSD 13.1-RELEASE-p5, configured with `aio on;` (and `use kqueue;` though I suspect that is not relevant), the following syslog entry: pid 1125 (nginx) is attempting to use unsafe AIO requests - not logging anymore My curiosity got the best of me and I decided to allow unsafe aio requests to see what would happen (`sysctl vfs.aio.enable_unsafe=1`). It’s been about 24 hours and I haven’t noticed any ill or adverse effects, at least judging by my scrutiny of the logs, though I intend to continue to closely monitor this server and see what happens. My question is whether or not nginx does anything “advanced” with aio under FreeBSD, beyond using aio for operations on “sockets, raw disk devices, and regular files on local filesystems,” which is the “safe” list, again per aio(4), while other types of fds are blocked unless unsafe aio is enabled. On this server, nginx is serving static files from various zfs datasets and is functioning as a reverse proxy to http and fastcgi upstreams. I do have a few 3rd party modules statically compiled into nginx, so I'm naturally limiting my question to core/stock nginx behavior to the best of its developers’ knowledge :) I don't have all the system logs but in a sample of the logs preserved going back to November 2022 the "unsafe AIO" is not repeated anywhere, leading me to _suspect_ that this isn't "normal" nginx behavior and that I probably should *not* be enabling unsafe AIO - but curiosity is a hell of a drug! Thanks, Mahmoud Al-Qudsi NeoSmart Technologies From mdounin at mdounin.ru Wed Jan 18 00:00:32 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Jan 2023 03:00:32 +0300 Subject: Unsafe AIO under FreeBSD? In-Reply-To: References: Message-ID: Hello! On Tue, Jan 17, 2023 at 12:04:31PM -0600, Mahmoud Al-Qudsi wrote: > Hello all, > > By default, FreeBSD restricts potentially unsafe AIO operations (as > determined by the target fd type) and operations like aio_read(2) will > “falsely” return EOPNOTSUPP to avoid a potentially dangerous operation > that can result in blocking the aio threadpool hanging the system or the > process, per aio(4). > > I’ve observed in production with an nginx/1.23.3 instance (compiled with > --with-file-aio) running on FreeBSD 13.1-RELEASE-p5, configured with > `aio on;` (and `use kqueue;` though I suspect that is not relevant), > the following syslog entry: > > pid 1125 (nginx) is attempting to use unsafe AIO requests - not > logging anymore > > My curiosity got the best of me and I decided to allow unsafe aio > requests to see what would happen (`sysctl vfs.aio.enable_unsafe=1`). 
> It’s been about 24 hours and I haven’t noticed any ill or adverse > effects, at least judging by my scrutiny of the logs, though I intend to > continue to closely monitor this server and see what happens. > > My question is whether or not nginx does anything “advanced” with aio > under FreeBSD, beyond using aio for operations on “sockets, raw disk > devices, and regular files on local filesystems,” which is the “safe” > list, again per aio(4), while other types of fds are blocked unless > unsafe aio is enabled. > > On this server, nginx is serving static files from various zfs datasets > and is functioning as a reverse proxy to http and fastcgi upstreams. I > do have a few 3rd party modules statically compiled into nginx, so I'm > naturally limiting my question to core/stock nginx behavior to the best > of its developers’ knowledge :) > > I don't have all the system logs but in a sample of the logs preserved > going back to November 2022 the "unsafe AIO" is not repeated anywhere, > leading me to _suspect_ that this isn't "normal" nginx behavior and that > I probably should *not* be enabling unsafe AIO - but curiosity is a hell > of a drug! The only aio operation nginx uses is aio_read(), and it does nothing "advanced" - just reads normal files which are being served by nginx. Further, nginx explicitly checks files being served, and rejects non-regular files. As such, the "unsafe AIO" checks shouldn't be triggered unless you are trying to serve something from non-local file systems (well, you indeed shouldn't). In general, if an aio_read() error happens, you should be able to see corresponding error in nginx error log at the "crit" level. The error will look like "[crit] ... aio_read("/path/to/file") failed (45: Operation not supported)". It should make it possible to find out what actually causes the error. -- Maxim Dounin http://mdounin.ru/ From xserverlinux at gmail.com Wed Jan 18 00:49:06 2023 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Tue, 17 Jan 2023 18:49:06 -0600 Subject: load balancer the best way to solve In-Reply-To: References: Message-ID: El vie, 13 ene 2023 a las 17:51, Maxim Dounin () escribió: > > > > Certainly nginx can do that. By default, all user requests are > used by nginx to detect any upstream server failures, and re-route > requests to other available servers. > > Active health checks, which are indeed only available in the > commercial version, are only different that they also use requests > generated periodically by nginx-plus itself. This might improve > service to some real clients in some specific cases, but not > generally required. ok > > > logs: > > > > 2023/01/12 18:07:38 [error] 26895#26895: *834 no live upstreams while > > connecting to upstream, client: 44.210.106.130, server: demo.app.com, > > request: "GET /aaaaaaaaaaaaaaaaaaaaaaaaaqr HTTP/1.1", upstream: > > "http://paginaweb/aaaaaaaaaaaaaaaaaaaaaaaaaqr", host: "demo.app.com", > > referrer: "http://173.255.X.X:80/aaaaaaaaaaaaaaaaaaaaaaaaaqr" > > 2023/01/12 18:07:38 [error] 26895#26895: *832 no live upstreams while > > connecting to upstream, client: 44.210.106.130, server: demo.app.com, > > request: "GET /99vt HTTP/1.1", upstream: "http://paginaweb/99vt", > > host: "demo.app.com", referrer: "http://173.255.X.X:80/99vt" > > The errors indicate that all your upstream servers were not > responding properly, and were either all tried for the particular > request, or were disabled based on "fail_timeout=30s max_fails=3" > in your configuration. 
> > Usually looking into other errors in the logs makes it immediately > obvious what actually happened. Alternatively, you may want to > further dig into what happened with the requests by logging the > $upstream_addr and $upstream_status variables (see > https://nginx.org/r/$upstream_addr and > https://nginx.org/r/$upstream_status for details). > Thanks. The problem here is that crowdsec was blocking my connection. I also found an interesting module that does almost the same thing as the commercial version. https://github.com/yaoweibin/nginx_upstream_check_module , I'm already evaluating it -- rickygm http://gnuforever.homelinux.com From Michael.Kappes at bfw-berlin-brandenburg.de Wed Jan 18 08:51:18 2023 From: Michael.Kappes at bfw-berlin-brandenburg.de (Kappes, Michael) Date: Wed, 18 Jan 2023 08:51:18 +0000 Subject: nginx-1.23.3 on Win Server wth HTTPS Message-ID: Hello, i will use nginx-1.23.3 on a Win Server, inclusiv HTTPS After unpacking the ZIP file I have a "nginx.conf" file which I edit, from line 98, the HTTPS server block starts there. I enter the absolute path to the CA and KEY file (Windows like path) but NGINX refuses to accept HTTPS. I restart NGINX from CMD => C:\nginx\nginx-1.23.3>nginx -s reload nginx: [emerg] unknown directive "HTTPS" in C:\nginx\nginx-1.23.3/conf/nginx.conf:98 I test with c:\CERT\my_cert I test with CERT/my_cert Both does not work What do I have to do so that NGINX also accepts HTTPS connections? Thanks Michael Berufsförderungswerk Berlin-Brandenburg e. V. Epiphanienweg 1, 14059 Berlin Telefon: +49 30 30399 0 info at bfw-berlin-brandenburg.de www.bfw-berlin-brandenburg.de Amtsgericht Charlottenburg VR 3642 Nz, Steuer-Nr. 27/661/55590 Vorstandsvorsitzender: Stefan Moschko Geschäftsführung: Thomas Kastner, Andreas Braatz -------------- next part -------------- A non-text attachment was scrubbed... Name: bak.nginx.conf Type: application/octet-stream Size: 2773 bytes Desc: bak.nginx.conf URL: From francis at daoine.org Wed Jan 18 11:01:55 2023 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Jan 2023 11:01:55 +0000 Subject: nginx-1.23.3 on Win Server wth HTTPS In-Reply-To: References: Message-ID: <20230118110155.GL875@daoine.org> On Wed, Jan 18, 2023 at 08:51:18AM +0000, Kappes, Michael wrote: Hi there, > After unpacking the ZIP file I have a "nginx.conf" file which I edit, from line 98, the HTTPS server block starts there. > C:\nginx\nginx-1.23.3>nginx -s reload > nginx: [emerg] unknown directive "HTTPS" in C:\nginx\nginx-1.23.3/conf/nginx.conf:98 In that file, # HTTPS server is a comment that should stay a comment. You should uncomment and adjust the relevant lines that go from # server { to the matching # } > What do I have to do so that NGINX also accepts HTTPS connections? Have a "listen" with "ssl", and the correct certificate information. http://nginx.org/en/docs/http/configuring_https_servers.html Good luck with it, f -- Francis Daly francis at daoine.org From Michael.Kappes at bfw-berlin-brandenburg.de Wed Jan 18 12:41:32 2023 From: Michael.Kappes at bfw-berlin-brandenburg.de (Kappes, Michael) Date: Wed, 18 Jan 2023 12:41:32 +0000 Subject: AW: nginx-1.23.3 on Win Server wth HTTPS In-Reply-To: <20230118110155.GL875@daoine.org> References: <20230118110155.GL875@daoine.org> Message-ID: <27ef4356881e4078a715e3b26bb1e989@bfw-berlin-brandenburg.de> Hello Francis Dear readers, Thanks for Help! OK, the # stay on HTTPS ;-) My "correct certificate information" is the Problem. 
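For reference, once the relevant lines are uncommented the section has this general shape (the certificate paths below are only placeholders):

    server {
        listen       443 ssl;
        server_name  localhost;

        ssl_certificate      cert.pem;
        ssl_certificate_key  cert.key;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }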
Nginx tells me: invalid number of arguments in "ssl_certificate" directive in C:\nginx\nginx-1.23.3/conf/nginx.conf:102 (please note: "\" and "/" in the same path?!) C:\nginx\nginx-1.23.3\cert\ => here a my cert and key files At my nginx.conf file (the syntax) => ssl_certificate C:\nginx\nginx-1.23.3\cert\1-Servername.cert.pem; ssl_certificate_key C:\nginx\nginx-1.23.3\cert\1-Servername.cert.key; I have tested both C:\nginx\nginx-1.23.3\cert\ And also C:/nginx/nginx-1.23.3/cert/ Both won't work... Or, i make a temp folder ( for testing) At my nginx.conf file (the syntax) => ssl_certificate C:\temp\1-Servername.cert.pem; ssl_certificate_key C:\temp\1-Servername.cert.key; Also here, i change "\" to "/" without that it makes a difference Strange Michael -----Ursprüngliche Nachricht----- Von: nginx Im Auftrag von Francis Daly Gesendet: Mittwoch, 18. Januar 2023 12:02 An: nginx at nginx.org Betreff: Re: nginx-1.23.3 on Win Server wth HTTPS On Wed, Jan 18, 2023 at 08:51:18AM +0000, Kappes, Michael wrote: Hi there, > After unpacking the ZIP file I have a "nginx.conf" file which I edit, from line 98, the HTTPS server block starts there. > C:\nginx\nginx-1.23.3>nginx -s reload > nginx: [emerg] unknown directive "HTTPS" in C:\nginx\nginx-1.23.3/conf/nginx.conf:98 In that file, # HTTPS server is a comment that should stay a comment. You should uncomment and adjust the relevant lines that go from # server { to the matching # } > What do I have to do so that NGINX also accepts HTTPS connections? Have a "listen" with "ssl", and the correct certificate information. http://nginx.org/en/docs/http/configuring_https_servers.html Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org https://mailman.nginx.org/mailman/listinfo/nginx Berufsförderungswerk Berlin-Brandenburg e. V. Epiphanienweg 1, 14059 Berlin Telefon: +49 30 30399 0 info at bfw-berlin-brandenburg.de www.bfw-berlin-brandenburg.de Amtsgericht Charlottenburg VR 3642 Nz, Steuer-Nr. 27/661/55590 Vorstandsvorsitzender: Stefan Moschko Geschäftsführung: Thomas Kastner, Andreas Braatz From mqudsi at neosmart.net Wed Jan 18 18:07:56 2023 From: mqudsi at neosmart.net (Mahmoud Al-Qudsi) Date: Wed, 18 Jan 2023 12:07:56 -0600 Subject: Unsafe AIO under FreeBSD? In-Reply-To: References: Message-ID: On Tue, Jan 17, 2023 at 6:00 PM Maxim Dounin wrote: > > The only aio operation nginx uses is aio_read(), and it does > nothing "advanced" - just reads normal files which are being > served by nginx. > > Further, nginx explicitly checks files being served, and rejects > non-regular files. As such, the "unsafe AIO" checks shouldn't be > triggered unless you are trying to serve something from non-local > file systems (well, you indeed shouldn't). > > In general, if an aio_read() error happens, you should be able to > see corresponding error in nginx error log at the "crit" level. > The error will look like "[crit] ... aio_read("/path/to/file") > failed (45: Operation not supported)". It should make it possible > to find out what actually causes the error. Hello and thanks for the reply! This is exactly what I was hoping to hear; thanks for clarifying. Due to a most unfortunate series of events (truncating the nginx error log before setting up newsyslog(8) to manage it) I do not have the error log for the three-hour window when this incident took place. I do have an archive of error.log from shortly before then going all the way back to 2016 and the only instances of the `[crit] .. 
aio_read(..) failed (45: Operation not supported)` were six such errors from 2018 in close succession, each for a different path, while nginx was trying to read statically compressed .br versions of static assets as enabled by ngx_brotli [0] (which doesn't make any `aio_read()` calls of its own). I think it's improbable I'll see the error again anytime soon, but I now know where to look if I do. [0]: Specifically my mtime-enabled fork of ngx_brotli, open-sourced at https://github.com/mqudsi/ngx_brotli Thanks, Mahmoud From mdounin at mdounin.ru Wed Jan 18 19:31:29 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Jan 2023 22:31:29 +0300 Subject: AW: nginx-1.23.3 on Win Server wth HTTPS In-Reply-To: <27ef4356881e4078a715e3b26bb1e989@bfw-berlin-brandenburg.de> References: <20230118110155.GL875@daoine.org> <27ef4356881e4078a715e3b26bb1e989@bfw-berlin-brandenburg.de> Message-ID: Hello! On Wed, Jan 18, 2023 at 12:41:32PM +0000, Kappes, Michael wrote: > My "correct certificate information" is the Problem. Nginx tells me: > > invalid number of arguments in "ssl_certificate" directive in C:\nginx\nginx-1.23.3/conf/nginx.conf:102 > (please note: "\" and "/" in the same path?!) This works fine, since Windows interprets both "\" and "/" in places (including the functions nginx uses). > C:\nginx\nginx-1.23.3\cert\ => here a my cert and key files > At my nginx.conf file (the syntax) => > > ssl_certificate C:\nginx\nginx-1.23.3\cert\1-Servername.cert.pem; > ssl_certificate_key C:\nginx\nginx-1.23.3\cert\1-Servername.cert.key; When using "\" in nginx configuration, you have to be careful to properly escape it, since "\" is also an escape character, and, for example, "\n" will be interpreted as a newline character. As such, using "/" is usually easier. On the other hand, this particular issue does not explain why you are seeing the "invalid number of arguments" error, it should be "cannot load certificate" with a garbled certificate path instead. The "invalid number of arguments" error suggests you've typed multiple arguments in the directive (instead of just one it accepts). This usually happens if a space character is accidentally used where it shouldn't, but in the directives as shown certainly there are no extra space characters. Most likely, there is an issue with the text editor you use, and it's somehow inserts some additional non-printable characters, such as byte order mark or something like that, and this confuses nginx. What editor do you use? Is using another one and re-typing the directives) makes a difference? E.g., Notepad is usually available on Windows and does not seem to corrupt text files. [...] -- Maxim Dounin http://mdounin.ru/ From Michael.Kappes at bfw-berlin-brandenburg.de Thu Jan 19 10:32:09 2023 From: Michael.Kappes at bfw-berlin-brandenburg.de (Kappes, Michael) Date: Thu, 19 Jan 2023 10:32:09 +0000 Subject: AW: AW: nginx-1.23.3 on Win Server wth HTTPS In-Reply-To: References: <20230118110155.GL875@daoine.org> <27ef4356881e4078a715e3b26bb1e989@bfw-berlin-brandenburg.de> Message-ID: Hello Maxim, OK, i use notepad++ for editing And yes, it was a problem with "some additional non-printable characters" Now, all up and running Ergo: Solved! Big Thanks @all for Help Michael -----Ursprüngliche Nachricht----- Von: nginx Im Auftrag von Maxim Dounin Gesendet: Mittwoch, 18. Januar 2023 20:31 An: nginx at nginx.org Betreff: Re: AW: nginx-1.23.3 on Win Server wth HTTPS Hello! 
On Wed, Jan 18, 2023 at 12:41:32PM +0000, Kappes, Michael wrote: > My "correct certificate information" is the Problem. Nginx tells me: > > invalid number of arguments in "ssl_certificate" directive in > C:\nginx\nginx-1.23.3/conf/nginx.conf:102 > (please note: "\" and "/" in the same path?!) This works fine, since Windows interprets both "\" and "/" in places (including the functions nginx uses). > C:\nginx\nginx-1.23.3\cert\ => here a my cert and key files At my > nginx.conf file (the syntax) => > > ssl_certificate C:\nginx\nginx-1.23.3\cert\1-Servername.cert.pem; > ssl_certificate_key > C:\nginx\nginx-1.23.3\cert\1-Servername.cert.key; When using "\" in nginx configuration, you have to be careful to properly escape it, since "\" is also an escape character, and, for example, "\n" will be interpreted as a newline character. As such, using "/" is usually easier. On the other hand, this particular issue does not explain why you are seeing the "invalid number of arguments" error, it should be "cannot load certificate" with a garbled certificate path instead. The "invalid number of arguments" error suggests you've typed multiple arguments in the directive (instead of just one it accepts). This usually happens if a space character is accidentally used where it shouldn't, but in the directives as shown certainly there are no extra space characters. Most likely, there is an issue with the text editor you use, and it's somehow inserts some additional non-printable characters, such as byte order mark or something like that, and this confuses nginx. What editor do you use? Is using another one and re-typing the directives) makes a difference? E.g., Notepad is usually available on Windows and does not seem to corrupt text files. [...] -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org https://mailman.nginx.org/mailman/listinfo/nginx Berufsförderungswerk Berlin-Brandenburg e. V. Epiphanienweg 1, 14059 Berlin Telefon: +49 30 30399 0 info at bfw-berlin-brandenburg.de www.bfw-berlin-brandenburg.de Amtsgericht Charlottenburg VR 3642 Nz, Steuer-Nr. 27/661/55590 Vorstandsvorsitzender: Stefan Moschko Geschäftsführung: Thomas Kastner, Andreas Braatz From bmvishwas at gmail.com Fri Jan 20 06:07:15 2023 From: bmvishwas at gmail.com (Vishwas Bm) Date: Fri, 20 Jan 2023 11:37:15 +0530 Subject: Use of upstream keepalive_time Message-ID: Hi, I see that from 1.19.10 keepalive_time has been added. http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time Also keepalive_timeout is present for idle connection http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout On checking the behaviour between these two, I see that keepalive_time is having higher precedence over keepalive_timeout. Even if connection is not idle based on keepqlive_timeout, connection is still getting closed because of keepalive_time. Is this expected behaviour? Also can I set keepalive_time to higher value say 24hours ? Any drawbacks with this ? Can this keepalive_time be disabled and priority given only to keepalive_timeout ? Regards, Vishwas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Jan 21 00:02:10 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Jan 2023 03:02:10 +0300 Subject: Use of upstream keepalive_time In-Reply-To: References: Message-ID: Hello! 
On Fri, Jan 20, 2023 at 11:37:15AM +0530, Vishwas Bm wrote: > I see that from 1.19.10 keepalive_time has been added. > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time > > Also keepalive_timeout is present for idle connection > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout > > On checking the behaviour between these two, I see that keepalive_time is > having higher precedence over keepalive_timeout. > Even if connection is not idle based on keepqlive_timeout, connection is > still getting closed because of keepalive_time. > > Is this expected behaviour? > Also can I set keepalive_time to higher value say 24hours ? Any drawbacks > with this ? > Can this keepalive_time be disabled and priority given only to > keepalive_timeout ? The "keepalive_time" is a directive to limit total lifetime of the connection, making it possible to free any resources associated with the connection, notably allocated memory. Further, in some setups it might be important to periodically redo connection authentication, notably re-validate peer certificates. The "keepalive_time" directive is mostly equivalent to keepalive_requests, which is documented as follows: : Closing connections periodically is necessary to free : per-connection memory allocations. Therefore, using too high : maximum number of requests could result in excessive memory usage : and not recommended. Note though that keepalive_time is 1 hour by default, and reopening connections once per hour is not expected to have any performance impact. Rather, it is expected to be a hard limit on the total connection lifetime on connections which are mostly idle and therefore do not reach the "keepalive_requests" limit in a reasonable time. -- Maxim Dounin http://mdounin.ru/ From xserverlinux at gmail.com Sat Jan 21 22:34:26 2023 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Sat, 21 Jan 2023 16:34:26 -0600 Subject: module geoip2 with map directive Message-ID: Hi list, I'm trying to block some countries to prevent access to three applications from my app server, the problem is that the list is large and I want to separate them into a separate file. I'm using the geoip2 module and when I add the maps directive and make an include to specify the file it doesn't work. part of my nginx.conf map $geoip2_data_country_code $allowed_country { default yes; include /etc/nginx/conf.d/geo_country.conf; } conf.d/geo_country.conf NI no; #Nicaragua nginx -t nginx: [emerg] unknown directive "NI" in /etc/nginx/conf.d/geo_country.conf:1 nginx: configuration file /etc/nginx/nginx.conf test failed Any ideas, how to do this? -- rickygm http://gnuforever.homelinux.com From francis at daoine.org Sun Jan 22 01:16:20 2023 From: francis at daoine.org (Francis Daly) Date: Sun, 22 Jan 2023 01:16:20 +0000 Subject: module geoip2 with map directive In-Reply-To: References: Message-ID: <20230122011620.GN875@daoine.org> On Sat, Jan 21, 2023 at 04:34:26PM -0600, Rick Gutierrez wrote: Hi there, > I'm using the geoip2 module and when I add the maps directive and make > an include to specify the file it doesn't work. I'm pretty sure that this "include" line works, but... > part of my nginx.conf > > map $geoip2_data_country_code $allowed_country { > default yes; > include /etc/nginx/conf.d/geo_country.conf; > } ...I suspect that you have another line somewhere else like "include /etc/nginx/conf.d/*.conf;", and this file is also included at that point in the config, and its content is not valid there. 
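In other words, something like this is probably what happens (the http-level wildcard include is an assumption; the map block is the one from your mail):

    # http level -- a wildcard include such as this also pulls in the
    # data file, where "NI no;" is then parsed as a directive and rejected
    include /etc/nginx/conf.d/*.conf;

    # inside the map block the very same file is fine, because its
    # lines are read as key/value pairs rather than directives
    map $geoip2_data_country_code $allowed_country {
        default yes;
        include /etc/nginx/conf.d/geo_country.conf;
    }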
> nginx: [emerg] unknown directive "NI" in /etc/nginx/conf.d/geo_country.conf:1 > nginx: configuration file /etc/nginx/nginx.conf test failed > > Any ideas, how to do this? Rename this file (and change the include line) to something like geo_country.map, so that the name does not match the other include directive pattern in your config. f -- Francis Daly francis at daoine.org From sandeep.sanash at gmail.com Mon Jan 23 09:34:41 2023 From: sandeep.sanash at gmail.com (sandeep dubey) Date: Mon, 23 Jan 2023 15:04:41 +0530 Subject: Allow/Deny rules in Location block Message-ID: Hello, I am trying to restrict some Location block in my Nginx configuration to specific IPs. Below are the changes I made - Version: nginx:1.21.0 location / { > proxy_pass http://127.0.0.1:8080; > } > location = /auth { > proxy_pass http://127.0.0.1:8080; > allow 1.2.3.4/8; > allow 5.6.7.8/16; > allow my.vpn.ip.here; > allow my.public.ip.here; > deny all; > error_page 403 /usr/share/nginx/html/403.html; > auth_basic "Administrator’s area"; > auth_basic_user_file /etc/nginx/.htpasswd; > } > Here, the deny rule is not working. Users are still able to access the page publicly. Am I missing something? -- Regards, Sandeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From hobson42 at gmail.com Mon Jan 23 11:57:11 2023 From: hobson42 at gmail.com (Ian Hobson) Date: Mon, 23 Jan 2023 18:57:11 +0700 Subject: Allow/Deny rules in Location block In-Reply-To: References: Message-ID: Hi Sandeep, I rather suspect that your top two CIDR allow lines are allowing too many people in. Remove them, and check that only the last two lines are allowed in. Then create the two top addresses very carefully, and test. 1.2.3.4/8 allows all C level addresses of the format 1.*.*.* in. I think you need 1.2.3.4/24 which allows all of the format 1.2.3.* Hope this helps. Ian On 23/01/2023 16:34, sandeep dubey wrote: > Hello, > > I am trying to restrict some Location block in my Nginx configuration to > specific IPs. Below are the changes I made - > > Version: nginx:1.21.0 > > location / { >             proxy_pass http://127.0.0.1:8080 ; >         } > >   location = /auth { >             proxy_pass http://127.0.0.1:8080 ; >             allow 1.2.3.4/8 ; >             allow 5.6.7.8/16 ; >             allow my.vpn.ip.here; >             allow my.public.ip.here; >             deny all; >             error_page 403 /usr/share/nginx/html/403.html; >             auth_basic "Administrator’s area"; >             auth_basic_user_file /etc/nginx/.htpasswd; >         } > > Here, the deny rule is not working. Users are still able to access the > page publicly. Am I missing something? > > -- > Regards, > Sandeep > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx -- Ian Hobson Tel (+66) 626 544 695 From r at roze.lv Mon Jan 23 12:38:30 2023 From: r at roze.lv (Reinis Rozitis) Date: Mon, 23 Jan 2023 14:38:30 +0200 Subject: Allow/Deny rules in Location block In-Reply-To: References: Message-ID: <000001d92f27$99e741c0$cdb5c540$@roze.lv> > I am trying to restrict some Location block in my Nginx configuration to > specific IPs. Below are the changes I made - > > location = /auth { > } > > Here, the deny rule is not working. Users are still able to access the > page publicly. Am I missing something? 
Are you sure that the request is exactly /auth since anything else like /auth/ or /auth/something will land in the first location block without any restrictions defined. Try to remove the '=' and see if it works then. rr From xserverlinux at gmail.com Mon Jan 23 17:32:07 2023 From: xserverlinux at gmail.com (Rick Gutierrez) Date: Mon, 23 Jan 2023 11:32:07 -0600 Subject: module geoip2 with map directive In-Reply-To: <20230122011620.GN875@daoine.org> References: <20230122011620.GN875@daoine.org> Message-ID: El sáb, 21 ene 2023 a las 19:16, Francis Daly () escribió: > > On Sat, Jan 21, 2023 at 04:34:26PM -0600, Rick Gutierrez wrote: > > I'm pretty sure that this "include" line works, but... > Hi francis > > part of my nginx.conf > > > > map $geoip2_data_country_code $allowed_country { > > default yes; > > include /etc/nginx/conf.d/geo_country.conf; > > } > > ...I suspect that you have another line somewhere else like "include > /etc/nginx/conf.d/*.conf;", and this file is also included at that point > in the config, and its content is not valid there. > > > nginx: [emerg] unknown directive "NI" in /etc/nginx/conf.d/geo_country.conf:1 > > nginx: configuration file /etc/nginx/nginx.conf test failed > > > > Any ideas, how to do this? > > Rename this file (and change the include line) to something like > geo_country.map, so that the name does not match the other include > directive pattern in your config. francis Right now I'm singing the song "We are the champions" , It worked perfectly, thanks for the help. -- rickygm http://gnuforever.homelinux.com From sandeep.sanash at gmail.com Tue Jan 24 05:37:42 2023 From: sandeep.sanash at gmail.com (sandeep dubey) Date: Tue, 24 Jan 2023 11:07:42 +0530 Subject: Allow/Deny rules in Location block In-Reply-To: References: Message-ID: Thanks Ian for the reply. I did it because the container was failing to start with the error below, will restrict that too. - > [error] 7#7: *1 connect() failed (111: Connection refused) while > connecting to upstream, client: 10.10.0.38, server: _, request: "GET > /api/saml-links HTTP/1.1", upstream: "http://127.0.0.1:8000/api/saml-links", > host: "10.18.9.132:80" > On Mon, Jan 23, 2023 at 5:27 PM Ian Hobson wrote: > Hi Sandeep, > > I rather suspect that your top two CIDR allow lines are allowing too > many people in. > > Remove them, and check that only the last two lines are > allowed in. > > Then create the two top addresses very carefully, and test. > > 1.2.3.4/8 allows all C level addresses of the format 1.*.*.* in. I think > you need 1.2.3.4/24 which allows all of the format > 1.2.3.* > > Hope this helps. > > Ian > > On 23/01/2023 16:34, sandeep dubey wrote: > > Hello, > > > > I am trying to restrict some Location block in my Nginx configuration to > > specific IPs. Below are the changes I made - > > > > Version: nginx:1.21.0 > > > > location / { > > proxy_pass http://127.0.0.1:8080 >; > > } > > > > location = /auth { > > proxy_pass http://127.0.0.1:8080 >; > > allow 1.2.3.4/8 ; > > allow 5.6.7.8/16 ; > > allow my.vpn.ip.here; > > allow my.public.ip.here; > > deny all; > > error_page 403 /usr/share/nginx/html/403.html; > > auth_basic "Administrator’s area"; > > auth_basic_user_file /etc/nginx/.htpasswd; > > } > > > > Here, the deny rule is not working. Users are still able to access the > > page publicly. Am I missing something? 
> > > > -- > > Regards, > > Sandeep > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > > -- > Ian Hobson > Tel (+66) 626 544 695 > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Sandeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeep.sanash at gmail.com Tue Jan 24 05:40:57 2023 From: sandeep.sanash at gmail.com (sandeep dubey) Date: Tue, 24 Jan 2023 11:10:57 +0530 Subject: Allow/Deny rules in Location block In-Reply-To: <000001d92f27$99e741c0$cdb5c540$@roze.lv> References: <000001d92f27$99e741c0$cdb5c540$@roze.lv> Message-ID: Thanks Reinis for the reply, There are other locations like /auth, /auth/, /auth/admin, /auth/admin/ and few more which have the same rules. I am trying to restrict access to /auth and /auth/admin which are sensitive for public access. Do you think removing "=" can help in this case? On Mon, Jan 23, 2023 at 6:08 PM Reinis Rozitis wrote: > > I am trying to restrict some Location block in my Nginx configuration to > > specific IPs. Below are the changes I made - > > > > location = /auth { > > } > > > > Here, the deny rule is not working. Users are still able to access the > > page publicly. Am I missing something? > > Are you sure that the request is exactly /auth since anything else like > /auth/ or /auth/something will land in the first location block without any > restrictions defined. > Try to remove the '=' and see if it works then. > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Sandeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Jan 24 16:56:05 2023 From: r at roze.lv (Reinis Rozitis) Date: Tue, 24 Jan 2023 18:56:05 +0200 Subject: Allow/Deny rules in Location block In-Reply-To: References: <000001d92f27$99e741c0$cdb5c540$@roze.lv> Message-ID: <000001d93014$bff31b80$3fd95280$@roze.lv> > There are other locations like /auth, /auth/, /auth/admin, /auth/admin/ and few more which have the same rules. I am trying to restrict access to /auth and /auth/admin which are sensitive for public access. Do you think removing "=" can help in this case? '=' in location definition means that nginx will use it only on exact uri match. if you have location = /auth {} but client requests /auth/admin (unless you have also location = /auth/admin) then that particular location configuration won't be used and will match the 'location / {}' which in your configuration sample was proxied without any deny rules. By removing the '=' it means all the /auth, /auth/* requests will be processed in that location. Good to also check the documentation on it http://nginx.org/en/docs/http/ngx_http_core_module.html#location rr From me at nanaya.net Wed Jan 25 05:27:06 2023 From: me at nanaya.net (nanaya) Date: Wed, 25 Jan 2023 14:27:06 +0900 Subject: Allow/Deny rules in Location block In-Reply-To: <000001d93014$bff31b80$3fd95280$@roze.lv> References: <000001d92f27$99e741c0$cdb5c540$@roze.lv> <000001d93014$bff31b80$3fd95280$@roze.lv> Message-ID: <19afb3af-28e8-4a7c-abcd-c3e8b8cc4962@app.fastmail.com> Just adding, if it's `location /auth {}`, it'll also match /autha, /authb, /authsomething/something, not just limited to /auth/*. 
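To put the matching rules side by side (the URIs in the comments are just hypothetical examples):

    location = /auth  { ... }   # exact match only: /auth and nothing else
    location /auth/   { ... }   # /auth/, /auth/admin, /auth/admin/x -- a request for plain /auth does not match
    location /auth    { ... }   # all of the above, plus /autha, /auth-old, /authsomething/x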
On Wed, Jan 25, 2023, at 01:56, Reinis Rozitis wrote: >> There are other locations like /auth, /auth/, /auth/admin, /auth/admin/ and few more which have the same rules. I am trying to restrict access to /auth and /auth/admin which are sensitive for public access. Do you think removing "=" can help in this case? > > > '=' in location definition means that nginx will use it only on exact uri match. > > if you have location = /auth {} but client requests /auth/admin (unless > you have also location = /auth/admin) then that particular location > configuration won't be used and will match the 'location / {}' which in > your configuration sample was proxied without any deny rules. > > By removing the '=' it means all the /auth, /auth/* requests will be > processed in that location. > > Good to also check the documentation on it > http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From sandeep.sanash at gmail.com Wed Jan 25 05:54:42 2023 From: sandeep.sanash at gmail.com (sandeep dubey) Date: Wed, 25 Jan 2023 11:24:42 +0530 Subject: Allow/Deny rules in Location block In-Reply-To: <000001d93014$bff31b80$3fd95280$@roze.lv> References: <000001d92f27$99e741c0$cdb5c540$@roze.lv> <000001d93014$bff31b80$3fd95280$@roze.lv> Message-ID: I have attached my config file which may help to understand it better. With this change, I am getting "404 - Not Found" error and in log it says [error] 11#11: *49 access forbidden by rule, client: 10.48.11.9, server: _, request: "GET /auth/ HTTP/1.1", host: "my.domain.info", referrer: " https://my.domain.info" It seems that the rule is working but at some wrong place, I am not sure how to organise or set the right sequence here. On Tue, Jan 24, 2023 at 10:26 PM Reinis Rozitis wrote: > > There are other locations like /auth, /auth/, /auth/admin, /auth/admin/ > and few more which have the same rules. I am trying to restrict access to > /auth and /auth/admin which are sensitive for public access. Do you think > removing "=" can help in this case? > > > '=' in location definition means that nginx will use it only on exact uri > match. > > if you have location = /auth {} but client requests /auth/admin (unless > you have also location = /auth/admin) then that particular location > configuration won't be used and will match the 'location / {}' which in > your configuration sample was proxied without any deny rules. > > By removing the '=' it means all the /auth, /auth/* requests will be > processed in that location. > > Good to also check the documentation on it > http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Sandeep -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ngxinx.conf Type: application/octet-stream Size: 4979 bytes Desc: not available URL: From sandeep.sanash at gmail.com Wed Jan 25 05:55:45 2023 From: sandeep.sanash at gmail.com (sandeep dubey) Date: Wed, 25 Jan 2023 11:25:45 +0530 Subject: Allow/Deny rules in Location block In-Reply-To: <19afb3af-28e8-4a7c-abcd-c3e8b8cc4962@app.fastmail.com> References: <000001d92f27$99e741c0$cdb5c540$@roze.lv> <000001d93014$bff31b80$3fd95280$@roze.lv> <19afb3af-28e8-4a7c-abcd-c3e8b8cc4962@app.fastmail.com> Message-ID: Thanks Daniel for the reply. I have attached my config file for reference in a previous reply. On Wed, Jan 25, 2023 at 10:58 AM nanaya wrote: > Just adding, if it's `location /auth {}`, it'll also match /autha, /authb, > /authsomething/something, not just limited to /auth/*. > > On Wed, Jan 25, 2023, at 01:56, Reinis Rozitis wrote: > >> There are other locations like /auth, /auth/, /auth/admin, /auth/admin/ > and few more which have the same rules. I am trying to restrict access to > /auth and /auth/admin which are sensitive for public access. Do you think > removing "=" can help in this case? > > > > > > '=' in location definition means that nginx will use it only on exact > uri match. > > > > if you have location = /auth {} but client requests /auth/admin (unless > > you have also location = /auth/admin) then that particular location > > configuration won't be used and will match the 'location / {}' which in > > your configuration sample was proxied without any deny rules. > > > > By removing the '=' it means all the /auth, /auth/* requests will be > > processed in that location. > > > > Good to also check the documentation on it > > http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > > > rr > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Sandeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Jan 25 13:29:07 2023 From: r at roze.lv (Reinis Rozitis) Date: Wed, 25 Jan 2023 15:29:07 +0200 Subject: Allow/Deny rules in Location block In-Reply-To: References: <000001d92f27$99e741c0$cdb5c540$@roze.lv> <000001d93014$bff31b80$3fd95280$@roze.lv> Message-ID: <000001d930c1$00bf03c0$023d0b40$@roze.lv> > [error] 11#11: *49 access forbidden by rule, client: 10.48.11.9, server: _, request: "GET /auth/ HTTP/1.1", host: "http://my.domain.info", referrer: "https://my.domain.info" It seems that the rule is working but at some wrong place, I am not sure how to organise or set the right sequence here. Just from the log it seems correct - you have a rule to allow 10.48.0.0/24; but the ip 10.48.11.9 doesn't go within that subnet (/24 subnet mask is just a single C subnet 10.48.0.1-254). 
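For example, if 10.48.11.9 is a client that really should be allowed in, the rule needs a wider mask (the /16 below is only a guess at the actual network layout):

    allow 10.48.0.0/16;     # 10.48.0.0 - 10.48.255.255, which includes 10.48.11.9
    # or, more narrowly, only that client's /24:
    allow 10.48.11.0/24;    # 10.48.11.0 - 10.48.11.255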
Then again, your whole configuration would be simpler with just a single location block (since it doesn't seem you have an application which uses /auth without a trailing slash): location /auth/ { allow 172.20.0.0/24; allow 10.48.0.0/24; #allow vpn1.ip.here; allow vpn2.ip.here; deny all; proxy_pass http://127.0.0.1:8080; auth_basic "Restricted area"; auth_basic_user_file /etc/nginx/.htpasswd; } If you wanted to get the basic http auth for those who are not within allowed ip ranges you need to add 'satisfy any;' directive [1] Also: error_page 403 /usr/share/nginx/html/403.html; <- error_page needs a relative uri not a full path in filesystem this is why nginx also returns 404 (as it can't find the error page) instead of 403 forbidden. If /usr/share/nginx/html is your default nginx webroot you can just specify: error_page 403 /403.html; If you store your error pages in different webroot add something like this: location /403.html { root /usr/share/nginx/html; } Also your attached configuration has duplicate 'location /' directives. Nginx should complain about invalid configuration. Are you sure you are testing correctly? [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#satisfy rr From relectgustfs at gmail.com Wed Jan 25 13:30:26 2023 From: relectgustfs at gmail.com (Gus Flowers Starkiller) Date: Wed, 25 Jan 2023 10:30:26 -0300 Subject: Updates in Linux without affect NGINX Message-ID: Good morning, Sorry for bothering you, please could you help me with some things about Nginx with Linux? Mi case is the next, I have some Nginx servers with different versions of Linux, Debian, Ubuntu, etc. I should execute *security updates* in these Linux servers but I don't know if these updates will affect the Nginx, Is there any way to execute ONLY SECURITY UPDATES in Linux without affect the environment of Nginx and all its publications? Thanks a lot. -- *Gus Flowers* -------------- next part -------------- An HTML attachment was scrubbed... URL: From relectgustfs at gmail.com Fri Jan 27 01:26:04 2023 From: relectgustfs at gmail.com (Gus Flowers Starkiller) Date: Thu, 26 Jan 2023 22:26:04 -0300 Subject: Nginx using HTPPS but without SSL ??? Message-ID: Hi people, please could you help me? I need test some pages in my office but with a different domain and I don't have SSL certificates. Is there any way to publicate a web site with an alias? For example mysite1.com (using nginx) to siteok.domain.com ??? Thanks for your time. Greetings. -- *Gus Flowers* -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeep.sanash at gmail.com Fri Jan 27 14:00:32 2023 From: sandeep.sanash at gmail.com (sandeep dubey) Date: Fri, 27 Jan 2023 19:30:32 +0530 Subject: Allow/Deny rules in Location block In-Reply-To: <000001d930c1$00bf03c0$023d0b40$@roze.lv> References: <000001d92f27$99e741c0$cdb5c540$@roze.lv> <000001d93014$bff31b80$3fd95280$@roze.lv> <000001d930c1$00bf03c0$023d0b40$@roze.lv> Message-ID: Thanks Reinis for the response and suggestions. I made the changes and unfortunately couldn't make it work. Later realised that we are running a Nginx Controller in GKE env., So assuming that the restriction changes should be done at controller level and not in the Nginx (not very sure). 
On Wed, Jan 25, 2023 at 6:59 PM Reinis Rozitis wrote: > > [error] 11#11: *49 access forbidden by rule, client: 10.48.11.9, server: > _, request: "GET /auth/ HTTP/1.1", host: "http://my.domain.info", > referrer: "https://my.domain.info" > It seems that the rule is working but at some wrong place, I am not sure > how to organise or set the right sequence here. > > > Just from the log it seems correct - you have a rule to allow 10.48.0.0/24; > but the ip 10.48.11.9 doesn't go within that subnet (/24 subnet mask is > just a single C subnet 10.48.0.1-254). > > Then again, your whole configuration would be simpler with just a single > location block (since it doesn't seem you have an application which uses > /auth without a trailing slash): > > location /auth/ { > allow 172.20.0.0/24; > allow 10.48.0.0/24; > #allow vpn1.ip.here; > allow vpn2.ip.here; > deny all; > proxy_pass http://127.0.0.1:8080; > auth_basic "Restricted area"; > auth_basic_user_file /etc/nginx/.htpasswd; > } > > If you wanted to get the basic http auth for those who are not within > allowed ip ranges you need to add 'satisfy any;' directive [1] > > Also: > error_page 403 /usr/share/nginx/html/403.html; <- error_page needs a > relative uri not a full path in filesystem this is why nginx also returns > 404 (as it can't find the error page) instead of 403 forbidden. > > If /usr/share/nginx/html is your default nginx webroot you can just > specify: > > error_page 403 /403.html; > > If you store your error pages in different webroot add something like this: > > location /403.html { > root /usr/share/nginx/html; > } > > Also your attached configuration has duplicate 'location /' directives. > Nginx should complain about invalid configuration. Are you sure you are > testing correctly? > > [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#satisfy > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Sandeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From i at qingly.me Sat Jan 28 00:34:34 2023 From: i at qingly.me (wordlesswind) Date: Sat, 28 Jan 2023 08:34:34 +0800 Subject: Nginx using HTPPS but without SSL ??? In-Reply-To: References: Message-ID: <633e16076add6e11a66abfa62e40dfdf@qingly.me> I'm not sure if it's possible to use TLS without a certificate in nginx, but you can use OpenSSL to generate a CA certificate and server certificate and deploy it to nginx, as well as trust the CA certificate in the client: https://mariadb.com/docs/xpand/security/data-in-transit-encryption/create-self -signed-certificates-keys-openssl/ How To Configure Nginx as a Reverse Proxy: https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04 However, as the server program may be bound to a domain, unless your test domain is added, reverse proxying may not work or may cause problems. 在 2023-01-27 09:26,Gus Flowers Starkiller 写道: > Hi people, please could you help me? > I need test some pages in my office but with a different domain and I > don't have SSL certificates. > > Is there any way to publicate a web site with an alias? > For example mysite1.com [1] (using nginx) to siteok.domain.com [2] > ??? > > Thanks for your time. Greetings. 
> -- > > Gus Flowers > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx Links: ------ [1] http://mysite1.com [2] http://siteok.domain.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From noloader at gmail.com Sat Jan 28 02:11:30 2023 From: noloader at gmail.com (Jeffrey Walton) Date: Fri, 27 Jan 2023 21:11:30 -0500 Subject: Nginx using HTPPS but without SSL ??? In-Reply-To: References: Message-ID: On Thu, Jan 26, 2023 at 8:26 PM Gus Flowers Starkiller wrote: > > I need test some pages in my office but with a different domain and I don't have SSL certificates. > > Is there any way to publicate a web site with an alias? > For example mysite1.com (using nginx) to siteok.domain.com ??? You can add an alias or cname record in DNS that says mysite1.com -> siteok.domain.com. However, siteok.domain.com must have a TLS certificate issued to siteok.domain.com. One certificate can have multiple domain names by adding the names in the Subject Alternate Names (SAN). So one certificate can have mysite1.com and siteok.domain.com. If you don't want to buy the certificates or go through the validation process, then you can run your own CA. You install your Root CA certificate into the browser store. Then you issue web server certificates to hosts for testing. Running your own CA is safe and effective. It's no different from what the public CAs do. I run my own CA at the house. Jeff From venefax at gmail.com Sun Jan 29 20:17:15 2023 From: venefax at gmail.com (Saint Michael) Date: Sun, 29 Jan 2023 15:17:15 -0500 Subject: Question about proxy Message-ID: In my website, I proxied https://perplexity.ai trough a domain of mine but when I get redirected, I see on top, on the domain line, not my own line. In other cases, I see my own domain line. What causes each case, i.e., what do I need to do so always the https://domain.com is NOT the original domain being proxied, but my own domain (https://disney.ibm.com). 
in this case, this is the example: server { default_type application/octet-stream; set $template_root /usr/local/openresty/nginx/html/templates; listen 0.0.0:443 ssl; # reuseport; error_log logs/error.log warn; access_log logs/access.log; server_name disney.ibm.com; ssl_certificate /etc/letsencrypt/live/disney.ibm.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/disney.ibm.com/privkey.pem; location / { proxy_cookie_domain https://perplexity.ai https://disney.ibm.com; proxy_buffering on; resolver 127.0.0.1 ipv6=off; proxy_http_version 1.1; proxy_buffer_size 128k; proxy_busy_buffers_size 256k; proxy_buffers 4 256k; proxy_set_header User-Agent $http_user_agent; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $http_connection; proxy_set_header Accept-Encoding ""; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_ssl_server_name on; proxy_ssl_name $proxy_host; proxy_set_header Host perplexity.ai; proxy_pass https://perplexity.ai; proxy_redirect https://perplexity.ai https://disney.ibm.com; subs_filter_types text/css text/javascript application/javascript; subs_filter "https://cdn*.perplexity.ai/(.*)" "https://disney.ibm.com/cdn*/$1" gi subs_filter "https://perplexity.ai/(.*)" "https://disney.ibm.com/$1" gi; subs_filter "https://(.*).perplexity.ai/(.*)" "https://disney.ibm.com/$1/$2" gi; subs_filter "https://www.perplexity.ai" "https://disney.ibm.com" gi; subs_filter "https://perplexity.ai" "https://disney.ibm.com" gi; subs_filter "perplexity.ai" "disney.ibm.com" gi; } } From francis at daoine.org Tue Jan 31 01:20:29 2023 From: francis at daoine.org (Francis Daly) Date: Tue, 31 Jan 2023 01:20:29 +0000 Subject: Question about proxy In-Reply-To: References: Message-ID: <20230131012029.GB21799@daoine.org> On Sun, Jan 29, 2023 at 03:17:15PM -0500, Saint Michael wrote: Hi there, > What causes each case, i.e., what do I need to do so always the > https://domain.com is NOT the original domain being proxied, but my > own domain (https://disney.ibm.com). You seem to be using the module at https://github.com/yaoweibin/ngx_http_substitutions_filter_module. You probably want subs_filter_types to include text/html, and you probably want "r" on the subs_filter patterns that are regular expressions rather than fixed strings. Generally, you proxy_pass to a server you control, so it may be easier to adjust the upstream so that subs_filter is not needed. But basically: you want any string in the response that the browser will interpret as a url, to be on your server not on the upstream one. So in this case, you can test the output of things like "curl -i https://disney.ibm.com/something", and see that it does not contain any unexpected mention of perplexity.ai. 
> subs_filter_types text/css text/javascript application/javascript; > subs_filter "https://cdn*.perplexity.ai/(.*)" > "https://disney.ibm.com/cdn*/$1" gi > subs_filter "https://perplexity.ai/(.*)" "https://disney.ibm.com/$1" gi; > subs_filter "https://(.*).perplexity.ai/(.*)" "https://disney.ibm.com/$1/$2" gi; > subs_filter "https://www.perplexity.ai" "https://disney.ibm.com" gi; > subs_filter "https://perplexity.ai" "https://disney.ibm.com" gi; > subs_filter "perplexity.ai" "disney.ibm.com" gi; If you do see an unexpected mention, you can try to see why it is there -- especially the first subs_filter above, I'm not certain what it is trying to do; and the second one probably does not need the regex parts at all -- the fifth and sixth ones probably both do the same thing as it. The third and fourth seem to have different ideas of how "https://www.perplexity.ai/something" should be substituted; maybe you have a test case which shows why both are needed. Good luck with it, f -- Francis Daly francis at daoine.org From venefax at gmail.com Tue Jan 31 03:39:52 2023 From: venefax at gmail.com (Saint Michael) Date: Mon, 30 Jan 2023 22:39:52 -0500 Subject: Question about proxy In-Reply-To: <20230131012029.GB21799@daoine.org> References: <20230131012029.GB21799@daoine.org> Message-ID: Can you please elaborate on this: "You probably want subs_filter_types to include text/html, and you probably want "r" on the subs_filter patterns that are regular expressions rather than fixed strings" one example will suffice. On Mon, Jan 30, 2023 at 8:20 PM Francis Daly wrote: > > On Sun, Jan 29, 2023 at 03:17:15PM -0500, Saint Michael wrote: > > Hi there, > > > What causes each case, i.e., what do I need to do so always the > > https://domain.com is NOT the original domain being proxied, but my > > own domain (https://disney.ibm.com). > > You seem to be using the module at > https://github.com/yaoweibin/ngx_http_substitutions_filter_module. > > You probably want subs_filter_types to include text/html, and you probably > want "r" on the subs_filter patterns that are regular expressions rather > than fixed strings. > > Generally, you proxy_pass to a server you control, so it may be easier > to adjust the upstream so that subs_filter is not needed. But basically: > you want any string in the response that the browser will interpret as > a url, to be on your server not on the upstream one. > > So in this case, you can test the output of things like "curl -i > https://disney.ibm.com/something", and see that it does not contain any > unexpected mention of perplexity.ai. > > > subs_filter_types text/css text/javascript application/javascript; > > subs_filter "https://cdn*.perplexity.ai/(.*)" > > "https://disney.ibm.com/cdn*/$1" gi > > subs_filter "https://perplexity.ai/(.*)" "https://disney.ibm.com/$1" gi; > > subs_filter "https://(.*).perplexity.ai/(.*)" "https://disney.ibm.com/$1/$2" gi; > > subs_filter "https://www.perplexity.ai" "https://disney.ibm.com" gi; > > subs_filter "https://perplexity.ai" "https://disney.ibm.com" gi; > > subs_filter "perplexity.ai" "disney.ibm.com" gi; > > If you do see an unexpected mention, you can try to see why it is there > -- especially the first subs_filter above, I'm not certain what it > is trying to do; and the second one probably does not need the regex > parts at all -- the fifth and sixth ones probably both do the same > thing as it. 
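As a rough sketch (not tested against the real pages), the second, fifth and sixth could collapse into a single fixed-string rule:

    # without the "r" flag the pattern is a fixed string, so this one
    # line already rewrites what rules 2, 5 and 6 above were aiming at
    subs_filter perplexity.ai disney.ibm.com gi;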
The third and fourth seem to have different ideas of how > "https://www.perplexity.ai/something" should be substituted; maybe you > have a test case which shows why both are needed. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Jan 31 07:45:52 2023 From: francis at daoine.org (Francis Daly) Date: Tue, 31 Jan 2023 07:45:52 +0000 Subject: Question about proxy In-Reply-To: References: <20230131012029.GB21799@daoine.org> Message-ID: <20230131074552.GC21799@daoine.org> On Mon, Jan 30, 2023 at 10:39:52PM -0500, Saint Michael wrote: Hi there, > Can you please elaborate on this: > "You probably want subs_filter_types to include text/html, and you probably > want "r" on the subs_filter patterns that are regular expressions rather > than fixed strings" > one example will suffice. https://github.com/yaoweibin/ngx_http_substitutions_filter_module includes: """ Example location / { subs_filter_types text/html text/css text/xml; subs_filter st(\d*).example.com $1.example.com ir; subs_filter a.example.com s.example.com; subs_filter http://$host https://$host; } """ along with explanations of each directive. (If that's *not* the module that you are using, then the documentation for your module should show something similar.) Although I do see that some later text suggests that text/html content is always searched, so maybe being explicit about that in subs_filter_types is not necessary. Cheers, f -- Francis Daly francis at daoine.org