From relectgustfs at gmail.com Thu Dec 1 03:40:08 2022
From: relectgustfs at gmail.com (Gus Flowers Starkiller)
Date: Thu, 1 Dec 2022 00:40:08 -0300
Subject: Restarting service takes too much time
Message-ID:

Hi, please could anyone tell me why the nginx service takes too much time to restart? As we know, after some change with nginx the service must be restarted, I do this with "nginx -s reload" or "systemctl restart nginx", and this takes some three minutes or more. This process is happening in servers with many websites (eg 200 sites). So, with a new nginx this restart of nginx is immediate but with many sites this restart process of nginx is very slow, I am using Debian with nginx and OWASP.

Thanks for your help.

--
*Gus Flowers*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Thu Dec 1 05:55:17 2022
From: nginx-forum at forum.nginx.org (blason)
Date: Thu, 01 Dec 2022 00:55:17 -0500
Subject: Restarting service takes too much time
In-Reply-To: References: Message-ID: <0f520b180d742cf95565e1d4b135f3fe.NginxMailingListEnglish@forum.nginx.org>

Hi,

Did you check the error log or syslog? Are they reporting any errors? Do you have SSL OCSP settings configured with an OCSP responder that might not be reachable? I had 45 portals and was facing the same issue. Later, when I debugged it, I found that ocsp.godaddy.com was not reachable, and it is contacted for verification every time we reload the service. Just a heads up though.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295945,295946#msg-295946

From mdounin at mdounin.ru Thu Dec 1 18:03:24 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 1 Dec 2022 21:03:24 +0300
Subject: Restarting service takes too much time
In-Reply-To: References: Message-ID:

Hello!

On Thu, Dec 01, 2022 at 12:40:08AM -0300, Gus Flowers Starkiller wrote:

> Hi, please could anyone tell me why the nginx service takes too much time
> to restart?
> As we know, after some change with nginx the service must be
> restarted, I do this with "nginx -s reload" or "systemctl restart nginx",
> and this takes some three minutes or more. This process is happening in
> servers with many websites (eg 200 sites). So, with a new nginx this
> restart of nginx is immediate but with many sites this restart process of
> nginx is very slow, I am using Debian with nginx and OWASP.

The most obvious thing I would recommend checking is whether the system resolver is functioning properly. If the nginx configuration uses domain names rather than IP addresses, and there are issues with the system resolver (for example, one of the configured DNS servers does not respond), loading the configuration might take a lot of time.

--
Maxim Dounin
http://mdounin.ru/

From francis at daoine.org Thu Dec 1 19:42:32 2022
From: francis at daoine.org (Francis Daly)
Date: Thu, 1 Dec 2022 19:42:32 +0000
Subject: Use nginx to return a scraped copy of another site
In-Reply-To: References: Message-ID: <20221201194232.GB4185@daoine.org>

On Thu, Nov 10, 2022 at 08:50:14PM +0800, Tony Mobily wrote:

Hi there,

> I want to run b.com as a duplicate of a.com, with nginx acting as a proxy.
> The contents would be identical; however, I would apply some minor
> modifications to the HTML.

> Is this possible with nginx? Is there an example configuration I can use as
> a starting point?

It sounds mostly straightforward. http://nginx.org/r/proxy_pass is probably the basic directive to use. If there are specific things that do not respond the way you want, perhaps something can be changed somewhere to deal with those.
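For the minor-HTML-modifications part, the stock sub_filter module may be all that is needed. A minimal sketch, assuming the hostnames from the question and that only simple string replacement is required:

```nginx
server {
    listen 80;
    server_name b.com;

    location / {
        proxy_pass https://a.com;
        proxy_set_header Host a.com;

        # sub_filter cannot rewrite compressed responses,
        # so ask the upstream for an uncompressed one
        proxy_set_header Accept-Encoding "";

        # rewrite every occurrence of the old hostname in the HTML
        sub_filter_once off;
        sub_filter "a.com" "b.com";
    }
}
```

Anything more involved than string replacement would need something else (njs, or an external rewriting step).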
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Dec 1 19:45:15 2022 From: francis at daoine.org (Francis Daly) Date: Thu, 1 Dec 2022 19:45:15 +0000 Subject: How can I redirect a request via Another External proxy In-Reply-To: References: Message-ID: <20221201194515.GC4185@daoine.org> On Fri, Nov 25, 2022 at 01:41:37PM +0530, Aakarshit Agarwal wrote: Hi there, > Inside a private network, I want to redirect a request via another External > proxy. "Stock" nginx does not talk to a proxy server. If you want to talk to/through a proxy server, you will want something other than nginx. (I'm not aware of any third-party modules that add the facility.) Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Dec 1 19:47:53 2022 From: francis at daoine.org (Francis Daly) Date: Thu, 1 Dec 2022 19:47:53 +0000 Subject: nginx returns html instead of json response In-Reply-To: References: <20221118160406.GS4185@daoine.org> <20221118183018.GT4185@daoine.org> <20221121192105.GW4185@daoine.org> <20221123174812.GZ4185@daoine.org> <20221123180924.GA4185@daoine.org> Message-ID: <20221201194753.GD4185@daoine.org> On Tue, Nov 29, 2022 at 09:58:20PM +0530, Kaushal Shriyan wrote: Hi there, > I have a follow up question related to the below error which appears in > html instead of JSON format when I hit rest api calls > http://mydomain.com/apis in case of when the MySQL Database service is down > as part of testing the end to end flow. The flow is as follows: > > User -> Nginx webserver -> PHP-FPM upstream server -> MySQL Database. > > *The Website Encountered an Unexpected Error. Please try again Later
*
>
> Is there a way to display the above string in JSON format?

The easiest is probably to see which part of the chain creates that error message (and I guess that it is probably some php); and to change it to return the error content that you want it to return. And then let everything else continue to pass the error content through without change.

Cheers,

f

--
Francis Daly        francis at daoine.org

From saurav.sarkar1 at gmail.com Sat Dec 3 14:19:37 2022
From: saurav.sarkar1 at gmail.com (Saurav Sarkar)
Date: Sat, 3 Dec 2022 19:49:37 +0530
Subject: Rate limiting in a distributed setup
Message-ID:

Hi All,

I am a newbie to NGINX and want to use NGINX as a reverse proxy for my Cloud Foundry apps. I am using the NGINX Cloud Foundry buildpack: https://docs.cloudfoundry.org/buildpacks/nginx/index.html

I was exploring the rate limiting options and was able to achieve basic rate limiting using the NGINX limit_req_zone directive. Now I want to run NGINX in distributed mode/multiple instances, and I want to save the rate limiting counters in a shared cache like Redis.

I was exploring njs (nginx JavaScript) for an extension. Is it possible to do so using the NGINX rate limiting module "limit_req_zone" in njs? Or do I have to write a complete implementation of my own in JavaScript? I have seen that some OpenResty Lua modules with rate limiting in Redis are available, but I was looking for some examples in JavaScript.

Any other hints for this topic will also be highly appreciated.

Best Regards,
Saurav

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From davidmichaelkarr at gmail.com Sat Dec 3 18:07:51 2022
From: davidmichaelkarr at gmail.com (David Karr)
Date: Sat, 3 Dec 2022 10:07:51 -0800
Subject: Trying to confirm syntax of CLIENT_MAX_BODY_SIZE environment variable
Message-ID:

(Sorry if this is a dup. I sent this originally before my subscription was confirmed.)
I was told that v1.22 of nginx will look for a CLIENT_MAX_BODY_SIZE environment variable to configure the "client_max_body_size" configuration property. I'm having trouble finding a clear statement of what the required syntax is for that.

The documentation for the configuration property, at http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size , simply uses "m" as an example, but it doesn't actually say anything about what the required syntax is. Does it only allow "m", or does it check for "M" or other variations? Similarly, is the required syntax for the environment variable the same? Is it really v1.22 that will check for that environment variable?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Sun Dec 4 07:04:20 2022
From: nginx-forum at forum.nginx.org (blason)
Date: Sun, 04 Dec 2022 02:04:20 -0500
Subject: Restarting service takes too much time
In-Reply-To: References: Message-ID:

Yes - he is right; everything revolves around DNS, and even my error was with DNS resolution, as it was not able to resolve ocsp.godaddy.com. Hence, please troubleshoot from a DNS perspective.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295945,295963#msg-295963

From sca at andreasschulze.de Sun Dec 4 12:29:16 2022
From: sca at andreasschulze.de (A. Schulze)
Date: Sun, 4 Dec 2022 13:29:16 +0100
Subject: Restarting service takes too much time
In-Reply-To: References: Message-ID: <68080cb4-5b39-25d6-a67f-addfecdbfd8c@andreasschulze.de>

Am 04.12.22 um 08:04 schrieb blason:
> Yes - he is right; everything revolves around DNS, and even my error was
> with DNS resolution, as it was not able to resolve ocsp.godaddy.com.
> Hence, please troubleshoot from a DNS perspective.

Hello List,

To avoid these problems I prefer https://nginx.org/r/ssl_stapling_file

Some years ago I ran an nginx instance handling thousands of vhosts. The reload time (in practice not even noticeable) was amazing!
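On the nginx side, using such a pre-fetched OCSP response is one extra directive per vhost. A sketch (paths and hostname are placeholders; file names follow the attached script's conventions):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com/cert+intermediate.pem;
    ssl_certificate_key /etc/nginx/ssl/example.com/key.pem;

    # serve the pre-fetched OCSP response from disk instead of
    # querying the OCSP responder at runtime
    ssl_stapling on;
    ssl_stapling_file /etc/nginx/ssl/example.com/ssl_stapling_file.der;
}
```

With ssl_stapling_file set, nginx takes the stapled response from the file rather than contacting the responder, so reloads do not depend on the responder being reachable.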
Attached is a simplified 'update_ssl_stapling_file'. It should be run once a day. The operator should monitor that every 'ssl_stapling_file.der' isn't older than 3-4 days.

Andreas

-------------- next part --------------
#!/bin/sh
set -u

# used files:
#
# cert.pem
# - contains only the server certificate itself
#
# intermediate.pem
# - contains one or more intermediate certificates excluding the root itself
# - may be empty
# - this script assumes exactly one intermediate
#
# root.pem
# - the root, unused in this example
#
# cert+intermediate.pem
# - created by 'cat cert.pem intermediate.pem > ssl_certificate.pem'
# - used as https://nginx.org/r/ssl_certificate
#
# key.pem
# - used as https://nginx.org/r/ssl_certificate_key
#
# ssl_stapling_file.der
# - created by this script
# - used as https://nginx.org/r/ssl_stapling_file

_ocsp_uri="$( openssl x509 -in cert.pem -noout -ocsp_uri )"

failed() {
    echo >&2 "$0 failed: $1"
    rm -f ssl_stapling_file.tmp
    exit 1
}

if ! _r="$( openssl ocsp \
        -no_nonce \
        -respout ssl_stapling_file.tmp \
        -CAfile intermediate.pem \
        -issuer intermediate.pem \
        -cert cert.pem \
        -url "${_ocsp_uri}" \
        2>&1 )"; then
    failed "${_r}"
fi

# require both a verified response and a 'good' certificate status
if ! echo "${_r}" | grep --text --silent 'Response verify OK'; then
    failed "${_r}"
fi
if ! echo "${_r}" | grep --text --silent 'cert.pem: good'; then
    failed "${_r}"
fi

mv ssl_stapling_file.tmp ssl_stapling_file.der
echo 'ssl_stapling_file.der updated, "nginx -s reload" is recommended'

From osa at freebsd.org.ru Sun Dec 4 16:33:04 2022
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Sun, 4 Dec 2022 19:33:04 +0300
Subject: Trying to confirm syntax of CLIENT_MAX_BODY_SIZE environment variable
In-Reply-To: References: Message-ID:

Hi David,

On Sat, Dec 03, 2022 at 10:07:51AM -0800, David Karr wrote:
>
> I was told that v1.22 of nginx will look for a CLIENT_MAX_BODY_SIZE
> environment variable to configure the "client_max_body_size" configuration
> property. I'm having trouble finding a clear statement of what the
> required syntax is for that.

That's not correct; to see how nginx handles environment variables, please go ahead and take a look at the env directive [1].

> The documentation for the configuration property, at
> http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
> , simply uses "m" as an example, but it doesn't actually say anything about
> what the required syntax is. Does it only allow "m", or does it check for
> "M" or other variations?

Please take a look at the following document [2].

References
1. https://nginx.org/en/docs/ngx_core_module.html#env
2. https://nginx.org/en/docs/syntax.html

Hope that helps.

--
Sergey A. Osokin

From jeremy at jeremy.cx Sun Dec 4 16:57:17 2022
From: jeremy at jeremy.cx (Jeremy Cocks)
Date: Sun, 4 Dec 2022 16:57:17 +0000
Subject: My module is overriding the rate limiting module status for POST requests.
Message-ID:

I am developing an AuthZ module and have been testing it alongside the rate limiting module.
I can see rate limiting kick in for GET requests fine (it's tuned extra low to demonstrate this case): curl -s -I http://localhost/login?{1..3} HTTP/1.1 200 OK Server: nginx/1.21.6 Date: Sun, 04 Dec 2022 16:43:17 GMT Content-Type: text/html; charset=utf-8 Content-Length: 1651 Connection: keep-alive HTTP/1.1 429 Too Many Requests Server: nginx/1.21.6 Date: Sun, 04 Dec 2022 16:43:17 GMT Content-Type: text/html Content-Length: 169 Connection: keep-alive HTTP/1.1 429 Too Many Requests Server: nginx/1.21.6 Date: Sun, 04 Dec 2022 16:43:17 GMT Content-Type: text/html Content-Length: 169 Connection: keep-alive However, doing the same for POST requests, this does not work: curl -s -w "\nStatus: %{http_code}\n\n" http://localhost/login?{1..3} --data-raw 'username=user&password=user' login success: user Status: 200 login success: user Status: 200 login success: user Status: 200 Setting my module to run in the `precontent` phase allows this to work, so it's all happening in rewrite (where the rate limiting module would be kicking in). I obviously don't want to run in precontent and my module gets its advice from an external "agent" as to what to set the status. So I'm assuming it is overwriting the nginx rate limiting module's status and setting it back to a 200, when I'd rather respect the rate limiting modules 429. What would be the best approach here to avoid this from happening? I have read about module ordering, but that would require a recompile of my end, however, I am more intrigued about how to handle this in code. Thanks Jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy at jeremy.cx Sun Dec 4 17:29:29 2022 From: jeremy at jeremy.cx (Jeremy Cocks) Date: Sun, 4 Dec 2022 17:29:29 +0000 Subject: My module is overriding the rate limiting module status for POST requests. 
In-Reply-To: References: Message-ID: Actually analysing the log files of this, it seems the rate limiting module never kicks in for POST requests, my module just sets the status and bails. Assuming this is because POST actually needs to write content? On Sun, 4 Dec 2022 at 16:57, Jeremy Cocks wrote: > I am developing an AuthZ module. > > While testing using the rate limiting module. I can see rate limiting kick > in for GET requests fine (it's tuned extra low to demonstrate this case): > > curl -s -I http://localhost/login?{1..3} > HTTP/1.1 200 OK > Server: nginx/1.21.6 > Date: Sun, 04 Dec 2022 16:43:17 GMT > Content-Type: text/html; charset=utf-8 > Content-Length: 1651 > Connection: keep-alive > > HTTP/1.1 429 Too Many Requests > Server: nginx/1.21.6 > Date: Sun, 04 Dec 2022 16:43:17 GMT > Content-Type: text/html > Content-Length: 169 > Connection: keep-alive > > HTTP/1.1 429 Too Many Requests > Server: nginx/1.21.6 > Date: Sun, 04 Dec 2022 16:43:17 GMT > Content-Type: text/html > Content-Length: 169 > Connection: keep-alive > > > However, doing the same for POST requests, this does not work: > > curl -s -w "\nStatus: %{http_code}\n\n" http://localhost/login?{1..3} > --data-raw 'username=user&password=user' > login success: user > Status: 200 > > login success: user > Status: 200 > > login success: user > Status: 200 > > Setting my module to run in the `precontent` phase allows this to work, so > it's all happening in rewrite (where the rate limiting module would be > kicking in). > > I obviously don't want to run in precontent and my module gets its > advice from an external "agent" as to what to set the status. So I'm > assuming it is overwriting the nginx rate limiting module's status and > setting it back to a 200, when I'd rather respect the rate limiting modules > 429. > > What would be the best approach here to avoid this from happening? 
I have > read about module ordering, but that would require a recompile of my end, > however, I am more intrigued about how to handle this in code. > > Thanks > Jeremy > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Dec 4 19:49:05 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 4 Dec 2022 22:49:05 +0300 Subject: My module is overriding the rate limiting module status for POST requests. In-Reply-To: References: Message-ID: Hello! On Sun, Dec 04, 2022 at 04:57:17PM +0000, Jeremy Cocks via nginx wrote: > I am developing an AuthZ module. [...] > Setting my module to run in the `precontent` phase allows this to work, so > it's all happening in rewrite (where the rate limiting module would be > kicking in). > > I obviously don't want to run in precontent and my module gets its > advice from an external "agent" as to what to set the status. So I'm > assuming it is overwriting the nginx rate limiting module's status and > setting it back to a 200, when I'd rather respect the rate limiting modules > 429. > > What would be the best approach here to avoid this from happening? I have > read about module ordering, but that would require a recompile of my end, > however, I am more intrigued about how to handle this in code. Auth modules are expected to work at the access phase (NGX_HTTP_ACCESS_PHASE), so these can be combined by using the "satisfy" directive (http://nginx.org/r/satisfy), and won't interfere with request limiting, which happens just before the access phase, at the preaccess phase (NGX_HTTP_PREACCESS_PHASE). In particular, such order ensures that rate limiting is able to protect auth modules from bruteforce attacks. It also ensures that you don't need to think about any overwriting and or anything like this - requests which do not satisfy rate limits configured will be rejected before the control reaches the access phase. 
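That ordering can be illustrated with stock directives alone; in this sketch, auth_request stands in for a custom access-phase module, and the zone name, rate, and upstream addresses are made up:

```nginx
http {
    # preaccess phase: requests are counted against this zone first
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

    server {
        listen 80;

        location /login {
            limit_req zone=login;   # checked at the preaccess phase

            # access phase: authorization runs only for requests that
            # were not already rejected with 429 by limit_req
            auth_request /_authz;

            proxy_pass http://127.0.0.1:8080;
        }

        location = /_authz {
            internal;
            proxy_pass http://127.0.0.1:9000;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
        }
    }
}
```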
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sun Dec 4 19:51:40 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 4 Dec 2022 22:51:40 +0300 Subject: My module is overriding the rate limiting module status for POST requests. In-Reply-To: References: Message-ID: Hello! On Sun, Dec 04, 2022 at 05:29:29PM +0000, Jeremy Cocks via nginx wrote: > Actually analysing the log files of this, it seems the rate limiting module > never kicks in for POST requests, my module just sets the status and bails. > Assuming this is because POST actually needs to write content? Your observation is wrong. Rate limiting as implemented in the limit_req module does not distinguish between different request methods and always works after reading the request headers. -- Maxim Dounin http://mdounin.ru/ From jeremy at jeremy.cx Sun Dec 4 20:00:04 2022 From: jeremy at jeremy.cx (Jeremy Cocks) Date: Sun, 4 Dec 2022 20:00:04 +0000 Subject: My module is overriding the rate limiting module status for POST requests. In-Reply-To: References: Message-ID: > Your observation is wrong. Rate limiting as implemented in the > limit_req module does not distinguish between different request > methods and always works after reading the request header Sorry, I wasn't clear there. It kicks in for POST requests without my module ;) > does not distinguish between different request methods and always works after reading the request headers. I am assuming, given the request I am testing, is on a proxy_pass which is a content handler, that has something to do with why rate limiting is not working on POST and not GET here? If I remove the location block and just have my module and rate limiting going without a proxy_pass, it seems to be working fine for all requests. Thanks! J On Sun, 4 Dec 2022 at 19:52, Maxim Dounin wrote: > Hello! 
> > On Sun, Dec 04, 2022 at 05:29:29PM +0000, Jeremy Cocks via nginx wrote: > > > Actually analysing the log files of this, it seems the rate limiting > module > > never kicks in for POST requests, my module just sets the status and > bails. > > Assuming this is because POST actually needs to write content? > > Your observation is wrong. Rate limiting as implemented in the > limit_req module does not distinguish between different request > methods and always works after reading the request headers. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Dec 4 20:19:20 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 4 Dec 2022 23:19:20 +0300 Subject: My module is overriding the rate limiting module status for POST requests. In-Reply-To: References: Message-ID: Hello! On Sun, Dec 04, 2022 at 08:00:04PM +0000, Jeremy Cocks via nginx wrote: > > Your observation is wrong. Rate limiting as implemented in the > > limit_req module does not distinguish between different request > > methods and always works after reading the request header > > Sorry, I wasn't clear there. It kicks in for POST requests without my > module ;) So, it looks like your module breaks something. > > does not distinguish between different request methods and always works > after reading the request headers. > > I am assuming, given the request I am testing, is on a proxy_pass which is > a content handler, that has something to do with why rate limiting is not > working on POST and not GET here? > If I remove the location block and just have my module and rate limiting > going without a proxy_pass, it seems to be working fine for all requests. 
It's hard to say anything beyond that your module breaks something without seeing your module's code and the configuration you are trying to use. A properly implemented auth module, as already suggested, should work in the access phase, and wouldn't interfere with any rate limits, since rate limiting happens before the auth module ever sees a request. -- Maxim Dounin http://mdounin.ru/ From jeremy at jeremy.cx Sun Dec 4 20:32:13 2022 From: jeremy at jeremy.cx (Jeremy Cocks) Date: Sun, 4 Dec 2022 20:32:13 +0000 Subject: My module is overriding the rate limiting module status for POST requests. In-Reply-To: References: Message-ID: At the moment, no authz is really happening. During this testing I was statically setting the advice from the policy server to be 200. Setting this in the access phase works fine for me with no random hitches, so that's fixed. Thanks. For future reference, is there anything to go by which dictates what phase a module should be in and its impact? Obviously, access is quite self-explanatory and not sure how i missed it ;') On Sun, 4 Dec 2022 at 20:19, Maxim Dounin wrote: > Hello! > > On Sun, Dec 04, 2022 at 08:00:04PM +0000, Jeremy Cocks via nginx wrote: > > > > Your observation is wrong. Rate limiting as implemented in the > > > limit_req module does not distinguish between different request > > > methods and always works after reading the request header > > > > Sorry, I wasn't clear there. It kicks in for POST requests without my > > module ;) > > So, it looks like your module breaks something. > > > > does not distinguish between different request methods and always works > > after reading the request headers. > > > > I am assuming, given the request I am testing, is on a proxy_pass which > is > > a content handler, that has something to do with why rate limiting is > not > > working on POST and not GET here? 
> > If I remove the location block and just have my module and rate limiting > > going without a proxy_pass, it seems to be working fine for all requests. > > It's hard to say anything beyond that your module breaks something > without seeing your module's code and the configuration you are > trying to use. > > A properly implemented auth module, as already suggested, should > work in the access phase, and wouldn't interfere with any rate > limits, since rate limiting happens before the auth module ever > sees a request. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Dec 4 22:07:55 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Dec 2022 01:07:55 +0300 Subject: My module is overriding the rate limiting module status for POST requests. In-Reply-To: References: Message-ID: Hello! On Sun, Dec 04, 2022 at 08:32:13PM +0000, Jeremy Cocks via nginx wrote: > At the moment, no authz is really happening. During this testing I was > statically setting the advice from the policy server to be 200. > Setting this in the access phase works fine for me with no random hitches, > so that's fixed. Thanks. Note that in an auth module you are not expected to return or set 200 anywhere. Instead, you should return NGX_DECLINED, NGX_OK, NGX_HTTP_FORBIDDEN, or NGX_HTTP_UNAUTHORIZED. > For future reference, is there anything to go by which dictates what phase > a module should be in and its impact? 
> Obviously, access is quite self-explanatory and not sure how i missed it ;') First of all, you may want to read the relevant chapter of the development guide: http://nginx.org/en/docs/dev/development_guide.html#http_phases It explains what different phases are expected to do, and lists various standard modules which work at the relevant phases. Reading the relevant core code and corresponding standard modules might also help. -- Maxim Dounin http://mdounin.ru/ From krikkiteer at gmail.com Mon Dec 5 20:43:18 2022 From: krikkiteer at gmail.com (Charlie Kilo) Date: Mon, 5 Dec 2022 21:43:18 +0100 Subject: Restarting service takes too much time In-Reply-To: References: Message-ID: I know the problem also from an environment with many sites and thousands of ips to bind to. for us the problem is that nginx binds every worker to every ip sequentially - leading to a restart time of 10-15 minutes. the problem can easily be observed using strace on the master process during startup.. we couldn't find an easy solution so far. Gus Flowers Starkiller schrieb am Do., 1. Dez. 2022, 04:42: > Hi, please could anyone tell me why the nginx service takes too much time > to restart? As we know, after some change with nginx the service must be > restarted, I do this with "nginx -s reload" or "systemctl restart nginx", > and this takes some three minutes or more. This process is happening in > servers with many websites (eg 200 sites). So, with a new nginx this > restart of nginx is immediate but with many sites this restart process of > nginx is very slow, I am using Debian with nginx and OWASP. > > Thanks for your help. > > -- > *Gus Flowers* > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Tue Dec 6 00:34:36 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Dec 2022 03:34:36 +0300 Subject: Restarting service takes too much time In-Reply-To: References: Message-ID: Hello! On Mon, Dec 05, 2022 at 09:43:18PM +0100, Charlie Kilo wrote: > I know the problem also from an environment with many sites and thousands > of ips to bind to. for us the problem is that nginx binds every worker to > every ip sequentially - leading to a restart time of 10-15 minutes. the > problem can easily be observed using strace on the master process during > startup.. we couldn't find an easy solution so far. Could you please share some numbers and details of the configuration? Some strace output with timestamps might be also helpful (something like "strace -ttT" would be great). While binding listening sockets indeed happens sequentially, it is expected to take at most seconds even with thousands of listening sockets, and even under load, not minutes. It would be interesting to dig into what causes 10-15 minutes restart time. In particular, in ticket #2188 (https://trac.nginx.org/nginx/ticket/2188), which was about speeding up "nginx -t" with lots of listening sockets under load, opening 20k listening sockets (expanded from about 1k sockets in the configuration with "listen ... reuseport" and multiple worker processes) was observed to take about 1 second without load (and up to 15 seconds under load, though this shouldn't affect restart). Also note that nginx provides a lot of ways to actually do not open that many sockets (including using a single socket on a wildcard address for a given port instead of a socket for each IP address, and not using reuseport, which is really needed only if you are balancing UDP). If the issue you are observing is indeed due to slow bind() calls, one of the possible solutions might be to reduce the number of listening sockets being used. 
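As a sketch of that last point, per-address listen sockets can often be collapsed into a single wildcard socket, with server_name (via SNI/Host) selecting the virtual host; addresses and names here are examples, and certificate directives are omitted for brevity:

```nginx
# Before: one bind() per configured IP address:
#   server { listen 192.0.2.10:443 ssl; server_name site1.example; }
#   server { listen 192.0.2.11:443 ssl; server_name site2.example; }

# After: a single listening socket on the wildcard address:
server {
    listen 443 ssl;
    server_name site1.example;
}

server {
    listen 443 ssl;
    server_name site2.example;
}
```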
-- Maxim Dounin http://mdounin.ru/ From arjun.singri at uber.com Tue Dec 6 17:43:53 2022 From: arjun.singri at uber.com (Arjun Singri) Date: Tue, 6 Dec 2022 09:43:53 -0800 Subject: Info log: client terminated stream Message-ID: Hello there, What does the below info log mean? *client terminated stream 3 with status 0 while sending request to upstream* Arjun -------------- next part -------------- An HTML attachment was scrubbed... URL: From justdan23 at gmail.com Wed Dec 7 22:36:48 2022 From: justdan23 at gmail.com (Dan Swaney) Date: Wed, 7 Dec 2022 17:36:48 -0500 Subject: Build Issue with NGINX for Windows Message-ID: I'm trying to build the open source for windows using the instructions at: http://nginx.org/en/docs/howto_build_on_win32.html I've set up the environment using Visual Studio 2022 Community Edition and Windows 11 SDK and VC RT v1.43 by: - Running vcvars64.bat using: - pushd C:\"Program Files"\"Microsoft Visual Studio"\2022\Community\VC\Auxiliary\Build - vcvars64.bat - popd back to the nginx base folder - Run configure: - auto/configure \ --with-cc=cl \ --builddir=objs \ --with-debug \ --prefix= \ --conf-path=conf/nginx.conf \ --pid-path=logs/nginx.pid \ --http-log-path=logs/access.log \ --error-log-path=logs/error.log \ --sbin-path=nginx.exe \ --http-client-body-temp-path=temp/client_body_temp \ --http-proxy-temp-path=temp/proxy_temp \ --http-fastcgi-temp-path=temp/fastcgi_temp \ --http-scgi-temp-path=temp/scgi_temp \ --http-uwsgi-temp-path=temp/uwsgi_temp \ --with-cc-opt=-DFD_SETSIZE=1024 \ --with-pcre=objs/lib/pcre2-10.40 \ --with-zlib=objs/lib/zlib-1.2.13 \ --with-select_module \ --with-http_v2_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_sub_module \ --with-http_dav_module \ --with-http_stub_status_module \ --with-http_flv_module \ --with-http_mp4_module \ --with-http_gunzip_module \ --with-http_gzip_static_module \ --with-http_auth_request_module \ --with-http_random_index_module \ --with-http_secure_link_module \ 
--with-http_slice_module \ --with-mail \ --with-stream \ --with-openssl=objs/lib/openssl \ --with-http_ssl_module \ --with-mail_ssl_module \ --with-stream_ssl_module - Here's the response from configure: - checking for OS + MINGW64_NT-10.0-22621 3.3.6-341.x86_64 x86_64 + using Microsoft Visual C++ compiler + cl version: 19.34.31935 for x64 checking for MINGW64_NT-10.0-22621 specific features creating objs/Makefile Configuration summary + using PCRE2 library: objs/lib/pcre2-10.40 + using OpenSSL library: objs/lib/openssl + using zlib library: objs/lib/zlib-1.2.13 nginx path prefix: "" nginx binary file: "/nginx.exe" nginx modules path: "/modules" nginx configuration prefix: "/conf" nginx configuration file: "/conf/nginx.conf" nginx pid file: "/logs/nginx.pid" nginx error log file: "/logs/error.log" nginx http access log file: "/logs/access.log" nginx http client request body temporary files: "temp/client_body_temp" nginx http proxy temporary files: "temp/proxy_temp" nginx http fastcgi temporary files: "temp/fastcgi_temp" nginx http uwsgi temporary files: "temp/uwsgi_temp" nginx http scgi temporary files: "temp/scgi_temp" - I'm trying to build the 64-bit version and OpenSSL keeps including 32-bit code when using NGINX's makefile - "cl" /Zi /Fdossl_static.pdb /MT /Zl /Gs0 /GF /Gy /W3 /wd4090 /nologo /O2 -I"." 
-I"include" -I"providers\common\include" -I"providers\implementations\include" -D"L_ENDIAN" -D"OPENSSL_PIC" -D"OPENSSLDIR=\"C:\\msys64\\home\\justd\\nginx\\objs\\lib\\openssl\\openssl\\ssl\"" -D"ENGINESDIR=\"C:\\msys64\\home\\justd\\nginx\\objs\\lib\\openssl\\openssl\\lib\\engines-3\"" -D"MODULESDIR=\"C:\\msys64\\home\\justd\\nginx\\objs\\lib\\openssl\\openssl\\lib\\ossl-modules\"" -D"OPENSSL_BUILDING_OPENSSL" -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" -DAES_ASM -DBSAES_ASM -DCMLL_ASM -DECP_NISTZ256_ASM -DGHASH_ASM -DKECCAK1600_ASM -DMD5_ASM -DOPENSSL_BN_ASM_GF2m -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DPADLOCK_ASM -DPOLY1305_ASM -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DX25519_ASM -c /Focrypto\asn1\libcrypto-lib-asn1_parse.obj "crypto\asn1\asn1_parse.c" asn1_parse.c crypto\asn1\asn1_parse.c(123): *warning C4244: '=': conversion from '__int64' to 'int', possible loss of data* crypto\asn1\asn1_parse.c(144): warning C4244: 'function': conversion from '__int64' to 'int', possible loss of data crypto\asn1\asn1_parse.c(149): warning C4244: '=': conversion from '__int64' to 'long', possible loss of data crypto\asn1\asn1_parse.c(159): warning C4244: 'function': conversion from '__int64' to 'int', possible loss of data crypto\asn1\asn1_parse.c(163): warning C4244: '-=': conversion from '__int64' to 'long', possible loss of data - *ENV is:* - !;=;\ !C:=C:\msys64\home\justd\nginx !ExitCode=00000002 ACLOCAL_PATH=C:\msys64\ucrt64\share\aclocal;C:\msys64\usr\share\aclocal ALLUSERSPROFILE=C:\ProgramData APPDATA=C:\Users\justd\AppData\Roaming CommandPromptType=Native COMMONPROGRAMFILES=C:\Program Files\Common Files COMPUTERNAME=DREAM-STONE-RTI COMSPEC=C:\WINDOWS\system32\cmd.exe CONFIG_SITE=C:/msys64/etc/config.site CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files CommonProgramW6432=C:\Program 
Files\Common Files DevEnvDir=C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\ DriverData=C:\Windows\System32\Drivers\DriverData ExtensionSdkDir=C:\Program Files (x86)\Microsoft SDKs\Windows Kits\10\ExtensionSDKs EXTERNAL_INCLUDE=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ATLMFC\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ATLMFC\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um Framework40Version=v4.0 FrameworkDir=C:\Windows\Microsoft.NET\Framework64\ FrameworkDir64=C:\Windows\Microsoft.NET\Framework64\ FrameworkVersion=v4.0.30319 FrameworkVersion64=v4.0.30319 GOPATH=C:\Users\justd\go HG=C:/msys64/usr/bin/hg HOME=/home/justd HOMEDRIVE=C: HOMEPATH=\Users\justd HOSTNAME=DREAM-STONE-RTI HTMLHelpDir=C:\Program Files (x86)\HTML Help Workshop IFCPATH=C:\Program Files\Microsoft Visual 
Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ifc\x64 *INCLUDE="C:/Program Files (x86)/Windows Kits/10/Include/10.0.22621.0/ucrt";"C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/include";*C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ATLMFC\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ATLMFC\include;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt;C:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um;C:\msys64\ucrt64\include;C:\msys64\ucrt64\lib\gcc\x86_64-w64-mingw32\12.2.0\include INFOPATH=C:\msys64\usr\local\info;C:\msys64\usr\share\info;C:\msys64\usr\info;C:\msys64\share\info LC_CTYPE=en_US.UTF-8 *LIB=*C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ATLMFC\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\lib\x64;C:\Program Files (x86)\Windows 
Kits\NETFXSDK\4.8\lib\um\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.22621.0\ucrt\x64;C:\Program Files (x86)\Windows Kits\10\\lib\10.0.22621.0\\um\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ATLMFC\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\lib\x64;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\lib\um\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.22621.0\ucrt\x64;C:\Program Files (x86)\Windows Kits\10\\lib\10.0.22621.0\\um\x64 *LIBPATH=*C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ATLMFC\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\lib\x86\store\references;C:\Program Files (x86)\Windows Kits\10\UnionMetadata\10.0.22621.0;C:\Program Files (x86)\Windows Kits\10\References\10.0.22621.0;C:\Windows\Microsoft.NET\Framework64\v4.0.30319;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ATLMFC\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\lib\x86\store\references;C:\Program Files (x86)\Windows Kits\10\UnionMetadata\10.0.22621.0;C:\Program Files (x86)\Windows Kits\10\References\10.0.22621.0;C:\Windows\Microsoft.NET\Framework64\v4.0.30319 LOCALAPPDATA=C:\Users\justd\AppData\Local LOGONSERVER=\\DREAM-STONE-RTI MANPATH=C:\msys64\ucrt64\local\man;C:\msys64\ucrt64\share\man;C:\msys64\usr\local\man;C:\msys64\usr\share\man;C:\msys64\usr\man;C:\msys64\share\man MINGW_CHOST=x86_64-w64-mingw32 MINGW_PACKAGE_PREFIX=mingw-w64-ucrt-x86_64 MINGW_PREFIX=C:/msys64/ucrt64 MSYSCON=mintty.exe MSYSTEM=UCRT64 MSYSTEM_CARCH=x86_64 MSYSTEM_CHOST=x86_64-w64-mingw32 MSYSTEM_PREFIX=C:/msys64/ucrt64 NETFXSDKDir=C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\ 
NUMBER_OF_PROCESSORS=8 OLDPWD=C:/msys64/home/justd ORIGINAL_PATH=/c/Windows/System32:/c/Windows:/c/Windows/System32/Wbem:/c/Windows/System32/WindowsPowerShell/v1.0 ORIGINAL_TEMP=C:/Users/justd/AppData/Local/Temp ORIGINAL_TMP=C:/Users/justd/AppData/Local/Temp OS=Windows_NT OneDrive=C:\Users\justd\OneDrive OneDriveConsumer=C:\Users\justd\OneDrive *PATH=/c/Strawberry/perl/bin:/c/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64:/c/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/HostX64/x64*:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/VC/VCPackages:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/TestWindow:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/TeamFoundation/Team Explorer:/c/Program Files/Microsoft Visual Studio/2022/Community/MSBuild/Current/bin/Roslyn:/c/Program Files/Microsoft Visual Studio/2022/Community/Team Tools/Performance Tools/x64:/c/Program Files/Microsoft Visual Studio/2022/Community/Team Tools/Performance Tools:/c/Program Files (x86)/Microsoft SDKs/Windows/v10.0A/bin/NETFX 4.8 Tools/x64:/c/Program Files (x86)/HTML Help Workshop:/c/Program Files (x86)/Windows Kits/10/bin/10.0.22621.0/x64:/c/Program Files (x86)/Windows Kits/10/bin/x64:/c/Program Files/Microsoft Visual Studio/2022/Community/MSBuild/Current/Bin/amd64:/c/Windows/Microsoft.NET/Framework64/v4.0.30319:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/Tools:/c/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/HostX64/x64:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/VC/VCPackages:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/TestWindow:/c/Program Files/Microsoft Visual 
Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/TeamFoundation/Team Explorer:/c/Program Files/Microsoft Visual Studio/2022/Community/MSBuild/Current/bin/Roslyn:/c/Program Files/Microsoft Visual Studio/2022/Community/Team Tools/Performance Tools/x64:/c/Program Files/Microsoft Visual Studio/2022/Community/Team Tools/Performance Tools:/c/Program Files (x86)/Microsoft SDKs/Windows/v10.0A/bin/NETFX 4.8 Tools/x64:/c/Program Files (x86)/HTML Help Workshop:/c/Program Files (x86)/Windows Kits/10/bin/10.0.22621.0/x64:/c/Program Files (x86)/Windows Kits/10/bin/x64:/c/Program Files/Microsoft Visual Studio/2022/Community/MSBuild/Current/Bin/amd64:/c/Windows/Microsoft.NET/Framework64/v4.0.30319:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/Tools:/c/Strawberry/perl/bin:/ucrt64/bin:/usr/local/bin:/usr/bin:/usr/bin:/c/Windows/System32:/c/Windows:/c/Windows/System32/Wbem:/c/Windows/System32/WindowsPowerShell/v1.0:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/c/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64:/c/Users/justd/Apps/sysinternals:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/bin:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/CMake/Ninja:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/VC/Linux/bin/ConnectionManagerExe:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/bin:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/CMake/Ninja:/c/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/VC/Linux/bin/ConnectionManagerExe PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC PKG_CONFIG_PATH=C:\msys64\ucrt64\lib\pkgconfig;C:\msys64\ucrt64\share\pkgconfig 
PKG_CONFIG_SYSTEM_INCLUDE_PATH=C:/msys64/ucrt64/include PKG_CONFIG_SYSTEM_LIBRARY_PATH=C:/msys64/ucrt64/lib *Platform=x64* PRINTER=HP858925 (HP Officejet 6700) PROCESSOR_ARCHITECTURE=AMD64 PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 140 Stepping 1, GenuineIntel PROCESSOR_LEVEL=6 PROCESSOR_REVISION=8c01 PROGRAMFILES=C:\Program Files PROMPT=$P$G PSModulePath=C:\Program Files\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules PUBLIC=C:\Users\Public PWD=C:/msys64/home/justd/nginx ProgramData=C:\ProgramData ProgramFiles(x86)=C:\Program Files (x86) ProgramW6432=C:\Program Files SESSIONNAME=Console SHELL=/usr/bin/bash.exe SHLVL=2 SYSTEMDRIVE=C: SYSTEMROOT=C:\WINDOWS TEMP=/tmp TERM=xterm TERM_PROGRAM=mintty TERM_PROGRAM_VERSION=3.6.2 TMP=/tmp UCRTVersion=10.0.22621.0 UniversalCRTSdkDir=C:\Program Files (x86)\Windows Kits\10\ USER=justd USERDOMAIN=DREAM-STONE-RTI USERDOMAIN_ROAMINGPROFILE=DREAM-STONE-RTI USERNAME=justd USERPROFILE=C:\Users\justd VCIDEInstallDir=C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\VC\ VCINSTALLDIR=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\ VCToolsInstallDir=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\ VCToolsRedistDir=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Redist\MSVC\14.34.31931\ VCToolsVersion=14.34.31933 VisualStudioVersion=17.0 VS170COMNTOOLS=C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\ VSCMD_ARG_app_plat=Desktop VSCMD_ARG_HOST_ARCH=x64 VSCMD_ARG_TGT_ARCH=x64 VSCMD_VER=17.4.2 VSINSTALLDIR=C:\Program Files\Microsoft Visual Studio\2022\Community\ WINDIR=C:\WINDOWS WindowsLibPath=C:\Program Files (x86)\Windows Kits\10\UnionMetadata\10.0.22621.0;C:\Program Files (x86)\Windows Kits\10\References\10.0.22621.0 WindowsSdkBinPath=C:\Program Files (x86)\Windows Kits\10\bin\ WindowsSdkDir=C:\Program Files (x86)\Windows Kits\10\ WindowsSDKLibVersion=10.0.22621.0\ WindowsSdkVerBinPath=C:\Program Files 
(x86)\Windows Kits\10\bin\10.0.22621.0\ WindowsSDKVersion=10.0.22621.0\ WindowsSDK_ExecutablePath_x64=C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\x64\ WindowsSDK_ExecutablePath_x86=C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\ XDG_DATA_DIRS=C:\msys64\ucrt64\share\;C:\msys64\usr\local\share\;C:\msys64\usr\share\ ZES_ENABLE_SYSMAN=1 _=C:\WINDOWS\system32\cmd.exe TEMP=/c/Users/justd/AppData/Local/Temp TMP=/c/Users/justd/AppData/Local/Temp __DOTNET_ADD_64BIT=1 __DOTNET_PREFERRED_BITNESS=64 __VSCMD_PREINIT_INCLUDE=C:\msys64\ucrt64\include;C:\msys64\ucrt64\lib\gcc\x86_64-w64-mingw32\12.2.0\include __VSCMD_PREINIT_PATH=C:\Strawberry\perl\bin;C:\msys64\ucrt64\bin;C:\msys64\usr\local\bin;C:\msys64\usr\bin;C:\msys64\usr\bin;C:\Windows\System32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\msys64\usr\bin\site_perl;C:\msys64\usr\bin\vendor_perl;C:\msys64\usr\bin\core_perl;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.34.31933\bin\Hostx64\x64;C:\Users\justd\Apps\sysinternals __VSCMD_PREINIT_VCToolsVersion=14.34.31933 __VSCMD_PREINIT_VS170COMNTOOLS=C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\ Do you know what I may be doing wrong? Do you have a copy of your ENV and output to see what a successful build environment has? Thoughts? Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Dec 8 09:11:11 2022 From: nginx-forum at forum.nginx.org (anfassl) Date: Thu, 08 Dec 2022 04:11:11 -0500 Subject: nginx Restart Issue after Cert-Update (Letsencrypt) Message-ID: <6a3e35a2aa65c2b27958f6b76b615b73.NginxMailingListEnglish@forum.nginx.org> Hi, we've got a strange issue with nginx and letsencrypt. 
- A daily job is configured to run "certbot renew", which updates all the certs on a webserver (round about 30 certs)
- After the certbot run we issue an nginx reload

Issue: The certs aren't updated in nginx.

We've then added a hard nginx stop/start in the script, but this doesn't cure the problem. When issuing the stop/start on the command line, all is fine.

Any idea what the cause of this is? I've done lots of googling, and searching here in the forum as well, but without any hint.

Thanks for any hint, Andreas

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,296005,296005#msg-296005

From nginx-forum at forum.nginx.org Thu Dec 8 09:15:23 2022 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 08 Dec 2022 04:15:23 -0500 Subject: Build Issue with NGINX for Windows In-Reply-To: References: Message-ID: https://stackoverflow.com/questions/54695248/building-openssl-for-windows-x64 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,295998,296006#msg-296006

From mdounin at mdounin.ru Thu Dec 8 19:59:29 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 8 Dec 2022 22:59:29 +0300 Subject: nginx Restart Issue after Cert-Update (Letsencrypt) In-Reply-To: <6a3e35a2aa65c2b27958f6b76b615b73.NginxMailingListEnglish@forum.nginx.org> References: <6a3e35a2aa65c2b27958f6b76b615b73.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello!

On Thu, Dec 08, 2022 at 04:11:11AM -0500, anfassl wrote:

> we've got a strange issue with nginx and letsencrypt.
> - A daily job is configured to run "certbot renew", which updates all the
> certs on a webserver (round about 30 certs)
> - After the certbot run we issue an nginx reload
>
> Issue: The certs aren't updated in nginx
> We've then added a hard nginx stop/start in the script, but this doesn't
> cure the problem.
> When issuing the stop/start on the command line, all is fine.
>
> Any idea what the cause of this is?
> I've did lots of googling, and searching here in the forum as well, but > without any hint. Try looking into nginx error log, the one specified at the global level. If there is an issue with reloading configuration, nginx will complain there. It should also help to make sure that nginx was actually asked by your script to reload. You'll have to set the logging level to "notice" though, see http://nginx.org/r/error_log for details. -- Maxim Dounin http://mdounin.ru/ From bmvishwas at gmail.com Sat Dec 10 06:33:04 2022 From: bmvishwas at gmail.com (Vishwas Bm) Date: Sat, 10 Dec 2022 12:03:04 +0530 Subject: nginx 400 bad request Message-ID: Hi. I am using nginx 1.22.1 and for some requests from client, I am seeing below 400 bad request errors. Below are some error messages: "log":{"message":"225#225: *8413018 client sent invalid method while reading client pipelined request line, client: 10.178.111.18, server: _, request: 'ocality': 'ny','mode': 'proactive','type': 'HSS','service': {'type': 'str','kspaces': [],'prtl': 'nds','traffic_type': 'rst'}}<92>¹ÏNfGå<8d>¹Âø^FÍ^XÌb<9d>jó<91>¸<81>Ô|]\P0Q^P'^\BÝ<93>ÛT±R'"}} {"[ocality\x22: \x22NY\x22,\x22mode\x22: \x22proactive\x22,\x22type\x22: \x22TSS\x22,\x22service\x22: {\x22type\x22: \x22storage\x22,\x22keyspaces\x22: [],\x22protocol\x22: \x22nds\x22,\x22traffic_type\x22: \x22rts\x22}}\x92\xB9\xCFNfG\xE5\x8D\xB9\xC2\xF8\x06\xCD\x18\xCCb\x9Dj\xF3\x91\xB8\x81\xD4|]\x5CP0Q\x10\x22\x1CB\xDD\x93\xDBT\xB1R] 400 150 [-] [-] 0 10.577 [] - - - - b1d1932b3a43ced62c2ebfa80d435092"}} Is there any way to decode this or Is the garbled data itself causing nginx to say 400 errors ? Can someone help me with this ? *Thanks & Regards,* *Vishwas * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From krikkiteer at gmail.com Sat Dec 10 08:52:37 2022 From: krikkiteer at gmail.com (Charlie Kilo) Date: Sat, 10 Dec 2022 09:52:37 +0100 Subject: Restarting service takes too much time In-Reply-To: References: Message-ID: Hi Maxim,

We have roundabout 7k IPs in use (3k IPv6, 4k IPv4) and 52 workers. That results in ~364000 socket bindings which need to be made - twice that if I count both ports 80 and 443.

We do indeed have reuseport active - we had already thought about using a wildcard address on a socket, but didn't have time to investigate and test thoroughly. If it's really only useful for balancing UDP, we might be able to get rid of it.

We are aware of the need to reduce the number of listening sockets and config size per server; however, this will be challenging and involve changes on a lot of levels. I'll have to look into that again. Thank you for your suggestions in any case!

On Tue, Dec 6, 2022 at 1:34 AM Maxim Dounin wrote:
> Hello!
>
> On Mon, Dec 05, 2022 at 09:43:18PM +0100, Charlie Kilo wrote:
>
> > I know the problem also from an environment with many sites and thousands
> > of ips to bind to. for us the problem is that nginx binds every worker to
> > every ip sequentially - leading to a restart time of 10-15 minutes. the
> > problem can easily be observed using strace on the master process during
> > startup.. we couldn't find an easy solution so far.
>
> Could you please share some numbers and details of the
> configuration? Some strace output with timestamps might be also
> helpful (something like "strace -ttT" would be great).
>
> While binding listening sockets indeed happens sequentially, it is
> expected to take at most seconds even with thousands of listening
> sockets, and even under load, not minutes. It would be
> interesting to dig into what causes 10-15 minutes restart time.
> > In particular, in ticket #2188 > (https://trac.nginx.org/nginx/ticket/2188), which was about > speeding up "nginx -t" with lots of listening sockets under load, > opening 20k listening sockets (expanded from about 1k sockets in > the configuration with "listen ... reuseport" and multiple worker > processes) was observed to take about 1 second without load (and > up to 15 seconds under load, though this shouldn't affect restart). > > Also note that nginx provides a lot of ways to actually do not > open that many sockets (including using a single socket on a > wildcard address for a given port instead of a socket for each IP > address, and not using reuseport, which is really needed only if > you are balancing UDP). If the issue you are observing is indeed > due to slow bind() calls, one of the possible solutions might be > to reduce the number of listening sockets being used. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Dec 10 20:21:39 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 10 Dec 2022 23:21:39 +0300 Subject: nginx 400 bad request In-Reply-To: References: Message-ID: Hello! On Sat, Dec 10, 2022 at 12:03:04PM +0530, Vishwas Bm wrote: > I am using nginx 1.22.1 and for some requests from client, I am seeing > below 400 bad request errors. 
> > Below are some error messages: > > "log":{"message":"225#225: *8413018 client sent invalid method while > reading client pipelined request line, client: 10.178.111.18, server: _, > request: 'ocality': 'ny','mode': 'proactive','type': 'HSS','service': > {'type': 'str','kspaces': [],'prtl': 'nds','traffic_type': > 'rst'}}<92>¹ÏNfGå<8d>¹Âø^FÍ^XÌb<9d>jó<91>¸<81>Ô|]\P0Q^P'^\BÝ<93>ÛT±R'"}} > > {"[ocality\x22: \x22NY\x22,\x22mode\x22: \x22proactive\x22,\x22type\x22: > \x22TSS\x22,\x22service\x22: {\x22type\x22: > \x22storage\x22,\x22keyspaces\x22: [],\x22protocol\x22: > \x22nds\x22,\x22traffic_type\x22: > \x22rts\x22}}\x92\xB9\xCFNfG\xE5\x8D\xB9\xC2\xF8\x06\xCD\x18\xCCb\x9Dj\xF3\x91\xB8\x81\xD4|]\x5CP0Q\x10\x22\x1CB\xDD\x93\xDBT\xB1R] > 400 150 [-] [-] 0 10.577 [] - - - - b1d1932b3a43ced62c2ebfa80d435092"}} > > > Is there any way to decode this or Is the garbled data itself causing nginx > to say 400 errors ? > Can someone help me with this ? >From the error message it looks like the client incorrectly specified Content-Length in the request, so the rest of the request body not covered by Content-Length is interpreted as another request (and rejected by nginx, because obviously enough it is not a valid HTTP request). -- Maxim Dounin http://mdounin.ru/ From mikydevel at yahoo.fr Sun Dec 11 00:02:00 2022 From: mikydevel at yahoo.fr (Mik J) Date: Sun, 11 Dec 2022 00:02:00 +0000 (UTC) Subject: Nginx sends syslog messages with the name of the server - I would like the ip References: <274595468.6024876.1670716920809.ref@mail.yahoo.com> Message-ID: <274595468.6024876.1670716920809@mail.yahoo.com> Hello, My Nginx server sends syslogs to my remote syslog server with a host = myserver.mydomain.org However I would like that the host to be the IP a specific IP of the server (which exists) On my Nginx server server { ... 
access_log syslog:server=1.2.3.4; error_log syslog:server=1.2.3.4; Is it possible that the syslog hostname in the message is set to 4.5.6.7 (the IP address of the Nginx server) ? Regards From jfs.world at gmail.com Sun Dec 11 08:30:58 2022 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Sun, 11 Dec 2022 16:30:58 +0800 Subject: Nginx sends syslog messages with the name of the server - I would like the ip In-Reply-To: <274595468.6024876.1670716920809@mail.yahoo.com> References: <274595468.6024876.1670716920809.ref@mail.yahoo.com> <274595468.6024876.1670716920809@mail.yahoo.com> Message-ID: On Sun, Dec 11, 2022 at 8:03 AM Mik J via nginx wrote: > > Hello, > > My Nginx server sends syslogs to my remote syslog server with a host = myserver.mydomain.org > However I would like that the host to be the IP a specific IP of the server (which exists) > > On my Nginx server > server { > ... > access_log syslog:server=1.2.3.4; > error_log syslog:server=1.2.3.4; > > Is it possible that the syslog hostname in the message is set to 4.5.6.7 (the IP address of the Nginx server) ? > you can define a custom log_format (http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) and then log using that format -jf -- He who settles on the idea of the intelligent man as a static entity only shows himself to be a fool. 
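For instance, a minimal sketch of what such a configuration could look like (the syslog address 1.2.3.4 is taken from the original post; the format name and the exact fields are illustrative, and $server_addr is the local address that accepted the connection):

```nginx
# Sketch only. The syslog HOSTNAME field itself is taken from the
# system host name (it can be suppressed with the "nohostname"
# parameter), so here the server's own IP is embedded in the message
# body instead, via $server_addr.
log_format withip '$server_addr $remote_addr - $remote_user [$time_local] '
                  '"$request" $status $body_bytes_sent';

access_log syslog:server=1.2.3.4,tag=nginx,severity=info withip;
```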
From mikydevel at yahoo.fr Sun Dec 11 09:16:54 2022 From: mikydevel at yahoo.fr (Mik J) Date: Sun, 11 Dec 2022 09:16:54 +0000 (UTC) Subject: Nginx sends syslog messages with the name of the server - I would like the ip In-Reply-To: References: <274595468.6024876.1670716920809.ref@mail.yahoo.com> <274595468.6024876.1670716920809@mail.yahoo.com> Message-ID: <855440115.6086955.1670750214259@mail.yahoo.com> Thank you Jeffrey for your help

On Sunday, 11 December 2022 at 09:31:10 UTC+1, Jeffrey 'jf' Lim wrote:

On Sun, Dec 11, 2022 at 8:03 AM Mik J via nginx wrote:
>
> Hello,
>
> My Nginx server sends syslogs to my remote syslog server with a host = myserver.mydomain.org
> However I would like the host to be a specific IP of the server (which exists)
>
> On my Nginx server
> server {
> ...
> access_log syslog:server=1.2.3.4;
> error_log syslog:server=1.2.3.4;
>
> Is it possible that the syslog hostname in the message is set to 4.5.6.7 (the IP address of the Nginx server) ?
>

you can define a custom log_format (http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format) and then log using that format

-jf

-- He who settles on the idea of the intelligent man as a static entity only shows himself to be a fool.

From nginx-forum at forum.nginx.org Sun Dec 11 13:11:42 2022 From: nginx-forum at forum.nginx.org (anfassl) Date: Sun, 11 Dec 2022 08:11:42 -0500 Subject: nginx Restart Issue after Cert-Update (Letsencrypt) In-Reply-To: References: Message-ID: Hi Maxim,

the script consists of:
- Letsencrypt job (certbot renew) - works fine, certs are being updated
- nginx restart
- nginx stop
- nginx start

but even with those three commands the new certs aren't visible. The only cure so far is to log in to the server and issue a restart manually. The nginx logs don't show any messages - I'm increasing the log level to notice for now.
This is the script:

#!/bin/sh
#
# Daily check for new certs
#

# Get certs
certbot renew

# Restart NGINX Instances
service nginx restart
service nginx stop
service nginx start

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,296005,296022#msg-296022

From mdounin at mdounin.ru Sun Dec 11 23:30:03 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Dec 2022 02:30:03 +0300 Subject: Restarting service takes too much time In-Reply-To: References: Message-ID: Hello!

On Sat, Dec 10, 2022 at 09:52:37AM +0100, Charlie Kilo wrote:

> we have roundabout 7k ips in use, 3k ipv6, 4k ipv4 and 52 workers.
> that results in ~364000 ips which need to be bound - twice that in sockets
> if i count port 80 and 443.
>
> we have indeed reuseport active - we already thought about using a
> wildcard-address on a socket, but didnt have time to investigate and test
> thoroughly..
> if its really only useful for balancing udp we might be able to get rid of
> it.

Thanks for the details. Running with 700k listening sockets indeed might be a challenge.

Further, it looks like Linux isn't very effective when handling lots of listening sockets on the same port. In my limited testing, binding 10k listening sockets on the same port takes about 10 seconds, binding 20k listening sockets takes 50 seconds, and binding 30k listening sockets takes 140 seconds.

The most simple and effective solution should be to use listen on the wildcard address on the relevant port somewhere in the configuration, such as "listen 80;" (with "reuseport" if needed, see below), so nginx will open just one listening socket and will distribute connections based on the local address as obtained by getsockname(), see the description of the "bind" parameter of the "listen" directive (http://nginx.org/r/listen). The only additional change to the configuration this requires is removing all socket options from the per-IP listen directives, so nginx won't try to bind them separately.
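A minimal sketch of that collapsed configuration (the addresses and server names here are placeholders, not taken from the original setup):

```nginx
# One wildcard socket is bound for port 80; the per-IP "listen"
# directives keep their addresses for request routing, but carry no
# socket options, so nginx does not bind() them separately and instead
# picks the server block using the local address from getsockname().
server {
    listen 80 default_server;    # the only socket actually opened
    server_name _;
    return 444;                  # catch-all for unknown addresses/names
}

server {
    listen 192.0.2.10:80;        # used for matching only, not bound
    server_name site-a.example.com;
}

server {
    listen 192.0.2.11:80;
    server_name site-b.example.com;
}
```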
Not using "reuseport" should be an option too, but keep in mind that in nginx versions before 1.21.6 it might also be useful as a workaround for uneven distribution of connections between worker processes on modern Linux versions. As an alternative solution, "accept_mutex on;" can be used (see https://trac.nginx.org/nginx/ticket/2285 for details).

-- Maxim Dounin http://mdounin.ru/

From mdounin at mdounin.ru Tue Dec 13 17:14:42 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Dec 2022 20:14:42 +0300 Subject: nginx-1.23.3 Message-ID: Changes with nginx 1.23.3    13 Dec 2022

*) Bugfix: an error might occur when reading PROXY protocol version 2 header with large number of TLVs.

*) Bugfix: a segmentation fault might occur in a worker process if SSI was used to process subrequests created by other modules. Thanks to Ciel Zhao.

*) Workaround: when a hostname used in the "listen" directive resolves to multiple addresses, nginx now ignores duplicates within these addresses.

*) Bugfix: nginx might hog CPU during unbuffered proxying if SSL connections to backends were used.

-- Maxim Dounin http://nginx.org/

From bmvishwas at gmail.com Wed Dec 14 10:02:00 2022 From: bmvishwas at gmail.com (Vishwas Bm) Date: Wed, 14 Dec 2022 15:32:00 +0530 Subject: nginx 400 bad request In-Reply-To: References: Message-ID: Thanks for the response.

What happens when the Content-Length provided in a request is greater than the POST body size? How does nginx handle this case? Does it fail with 400?

Also, how is the truncation/padding done in the case of a lesser or higher Content-Length?

Regards,
Vishwas

On Sun, Dec 11, 2022, 01:52 Maxim Dounin wrote:
> Hello!
>
> On Sat, Dec 10, 2022 at 12:03:04PM +0530, Vishwas Bm wrote:
>
> > I am using nginx 1.22.1 and for some requests from client, I am seeing
> > below 400 bad request errors.
> > > > Below are some error messages: > > > > "log":{"message":"225#225: *8413018 client sent invalid method while > > reading client pipelined request line, client: 10.178.111.18, server: _, > > request: 'ocality': 'ny','mode': 'proactive','type': 'HSS','service': > > {'type': 'str','kspaces': [],'prtl': 'nds','traffic_type': > > 'rst'}}<92>¹ÏNfGå<8d>¹Âø^FÍ^XÌb<9d>jó<91>¸<81>Ô|]\P0Q^P'^\BÝ<93>ÛT±R'"}} > > > > {"[ocality\x22: \x22NY\x22,\x22mode\x22: \x22proactive\x22,\x22type\x22: > > \x22TSS\x22,\x22service\x22: {\x22type\x22: > > \x22storage\x22,\x22keyspaces\x22: [],\x22protocol\x22: > > \x22nds\x22,\x22traffic_type\x22: > > > \x22rts\x22}}\x92\xB9\xCFNfG\xE5\x8D\xB9\xC2\xF8\x06\xCD\x18\xCCb\x9Dj\xF3\x91\xB8\x81\xD4|]\x5CP0Q\x10\x22\x1CB\xDD\x93\xDBT\xB1R] > > 400 150 [-] [-] 0 10.577 [] - - - - b1d1932b3a43ced62c2ebfa80d435092"}} > > > > > > Is there any way to decode this or Is the garbled data itself causing > nginx > > to say 400 errors ? > > Can someone help me with this ? > > >From the error message it looks like the client incorrectly > specified Content-Length in the request, so the rest of the > request body not covered by Content-Length is interpreted as > another request (and rejected by nginx, because obviously enough > it is not a valid HTTP request). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Dec 14 23:50:42 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2022 02:50:42 +0300 Subject: nginx 400 bad request In-Reply-To: References: Message-ID: Hello! On Wed, Dec 14, 2022 at 03:32:00PM +0530, Vishwas Bm wrote: > Thanks for the response. > What happens when content-length provided in request is greater than the > post body size? > How does nginx handle this case ? 
> Does it fail with 400 ? > > Also how is the truncation/padding done in case of lesser or higher content > length ? In HTTP/1.x, Content-Length specifies the exact size of the request body. Anything after it sent on the same persistent connection is expected to be the next request, and it is interpreted as an HTTP request. As long as it is not a valid HTTP request, like in the case you are seeing in your logs, the 400 (Bad request) error is generated by nginx. For further information about the HTTP protocol consider reading relevant RFCs, notably RFC 9112 (https://www.rfc-editor.org/rfc/rfc9112). -- Maxim Dounin http://mdounin.ru/ From softwareinfojam at gmail.com Thu Dec 15 03:02:04 2022 From: softwareinfojam at gmail.com (Software Info) Date: Wed, 14 Dec 2022 22:02:04 -0500 Subject: Certificate Error Message-ID: Hi All, I would really appreciate some help with this sticky problem. I am using nginx as a reverse proxy. I have version 1.20.1 running on FreeBSD 13.1. Today I set up for a new domain. I got a wildcard certificate for mydomain.com from GoDaddy. I put the paths in nginx.conf but when I run nginx -t I get the following error: nginx: [emerg] SSL_CTX_use_PrivateKey("/usr/local/etc/nginx/ssl/domain.com.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) nginx: configuration file /usr/local/etc/nginx /nginx.conf test failed When I ran the test below to check the Public and Private keys, I get back the same checksum so I guess the Certs must be ok. 
# openssl rsa -modulus -in domain.com.key -noout | md5sum # openssl x509 -modulus -in domain.com.crt -noout | md5sum This is the relevant section in my nginx.conf server { if ($country_access = no) { return 403; } listen 443 ssl http2; server_tokens off; more_clear_headers Server; server_name this.domain.com; ssl_certificate ssl/gd_bundle-g2-g1.crt; ssl_certificate_key ssl/domain.com.key; ssl_dhparam ssl/dhparams.pem; ssl_ecdh_curve secp384r1; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate ssl/domain.com.crt; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 10s; ssl_protocols TLSv1.3 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA38 ssl_session_cache shared:SSL:1m; ssl_session_timeout 1h; ssl_session_tickets off; add_header Strict-Transport-Security "max-age=31536000;includeSubDomains" always; access_log /var/log/nginx/access.log main; log_not_found on; } From lists at lazygranch.com Thu Dec 15 03:55:07 2022 From: lists at lazygranch.com (lists) Date: Wed, 14 Dec 2022 19:55:07 -0800 Subject: Certificate Error In-Reply-To: Message-ID: You can inspect the certificate at https://www.ssllabs.com/ssltest/ Maybe you will get lucky and it will help you find out what is wrong.   Original Message   From: softwareinfojam at gmail.com Sent: December 14, 2022 7:02 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Certificate Error Hi All, I would really appreciate some help with this sticky problem. I am using nginx as a reverse proxy. I have version 1.20.1 running on FreeBSD 13.1. Today I set up for a new domain. I got a wildcard certificate for mydomain.com from GoDaddy. 
I put the paths in nginx.conf but when I run nginx -t I get the following error: nginx: [emerg] SSL_CTX_use_PrivateKey("/usr/local/etc/nginx/ssl/domain.com.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) nginx: configuration file /usr/local/etc/nginx /nginx.conf test failed When I ran the test below to check the Public and Private keys, I get back the same checksum so I guess the Certs must be ok. # openssl rsa -modulus -in domain.com.key -noout | md5sum # openssl x509 -modulus -in domain.com.crt -noout | md5sum This is the relevant section in my nginx.conf    server {          if ($country_access = no) {          return 403;          }          listen 443 ssl http2;          server_tokens off;          more_clear_headers Server;          server_name this.domain.com;          ssl_certificate ssl/gd_bundle-g2-g1.crt;          ssl_certificate_key ssl/domain.com.key;          ssl_dhparam ssl/dhparams.pem;          ssl_ecdh_curve secp384r1;          ssl_stapling on;          ssl_stapling_verify on;          ssl_trusted_certificate ssl/domain.com.crt;          resolver 8.8.8.8 8.8.4.4 valid=300s;          resolver_timeout 10s;          ssl_protocols TLSv1.3 TLSv1.2;          ssl_prefer_server_ciphers on;          ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA38          ssl_session_cache shared:SSL:1m;          ssl_session_timeout 1h;          ssl_session_tickets off;          add_header Strict-Transport-Security "max-age=31536000;includeSubDomains" always;          access_log /var/log/nginx/access.log main;          log_not_found on;         } _______________________________________________ nginx mailing list nginx at nginx.org https://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Dec 15 04:32:00 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2022 07:32:00 +0300 
Subject: Certificate Error
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Wed, Dec 14, 2022 at 10:02:04PM -0500, Software Info wrote:

> Hi All,
> I would really appreciate some help with this sticky problem. I am
> using nginx as a reverse proxy. I have version 1.20.1 running on
> FreeBSD 13.1. Today I set up for a new domain. I got a wildcard
> certificate for mydomain.com from GoDaddy. I put the paths in
> nginx.conf but when I run nginx -t
> I get the following error: nginx:
> [emerg] SSL_CTX_use_PrivateKey("/usr/local/etc/nginx/ssl/domain.com.key")
> failed (SSL: error:0B080074:x509 certificate
> routines:X509_check_private_key:key values mismatch)
> nginx: configuration file /usr/local/etc/nginx /nginx.conf test failed
>
> When I ran the test below to check the Public and Private keys, I get
> back the same checksum so I guess the Certs must be ok.
> # openssl rsa -modulus -in domain.com.key -noout | md5sum
> # openssl x509 -modulus -in domain.com.crt -noout | md5sum
>
> This is the relevant section in my nginx.conf

[...]

> ssl_certificate ssl/gd_bundle-g2-g1.crt;
> ssl_certificate_key ssl/domain.com.key;

You are using "gd_bundle-g2-g1.crt" instead of "domain.com.crt"; this
looks like the culprit.

See http://nginx.org/en/docs/http/configuring_https_servers.html
for some basic tips about configuring HTTPS servers.

[...]

> ssl_trusted_certificate ssl/domain.com.crt;

And this also looks incorrect.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/

From rejaine at bhz.jamef.com.br  Thu Dec 15 12:50:46 2022
From: rejaine at bhz.jamef.com.br (Rejaine Monteiro)
Date: Thu, 15 Dec 2022 09:50:46 -0300
Subject: lmit_req with differents rules
Message-ID: 

Hello!!
I need to apply different limit_req rules, like

limit_req_zone $binary_remote_addr zone=ipsrc:10m rate=1r/s;
limit_req_zone $arg_token zone=apitoken:10m rate=5r/m;
limit_req_zone $http_autorization zone=httpauth:10m rate=5r/s;

server {
    listen 443;
    server_name api.domain.com;

    location / {
        limit_req zone=ipsrc;
        limit_req zone=apitoken;
        limit_req zone=httpauth;
        proxy_pass http://internal.api.com;
    }
}

Would this be correct and should it work as expected?

-- 

*This message may contain confidential or privileged information, and its
confidentiality is protected by law. If you are not the addressee or the
person authorized to receive this message, you may not use, copy or
disclose the information contained in it, or take any action based on this
information. If you have received this message in error, please notify the
sender immediately by replying to this e-mail and then delete it. Thank
you for your cooperation.*

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kaushalshriyan at gmail.com  Thu Dec 15 16:23:11 2022
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Thu, 15 Dec 2022 21:53:11 +0530
Subject: Upstream service php-fpm is up and running but reports {"errors":
 {"status_code": 502,"status": "php-fpm server is down"}}
Message-ID: 

Hi,

I am running the nginx version: nginx/1.22 as a reverse proxy server on
CentOS Linux release 7.9.2009 (Core). When I hit http://mydomain.com/apis I
see the below message on the browser even if the upstream server php-fpm
server is up and running.

*{"errors": {"status_code": 502,"status": "php-fpm server is down"}}*

I have set the below in the nginx.conf file and attached the file for your
reference.
if ($upstream_http_content_type = "") { add_header 'Content-Type' 'application/json' always; add_header 'Content-Type-3' $upstream_http_content_type$isdatatypejson"OK" always; return 502 '{"errors": {"status_code": 502,"status": "php-fpm server is down"}}'; } # systemctl status php-fpm ● php-fpm.service - The PHP FastCGI Process Manager Loaded: loaded (/usr/lib/systemd/system/php-fpm.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/php-fpm.service.d └─override.conf Active: active (running) since Thu 2022-12-15 15:53:31 UTC; 10s ago Main PID: 9185 (php-fpm) Status: "Processes active: 0, idle: 5, Requests: 0, slow: 0, Traffic: 0req/sec" CGroup: /system.slice/php-fpm.service ├─9185 php-fpm: master process (/etc/php-fpm.conf) ├─9187 php-fpm: pool www ├─9188 php-fpm: pool www ├─9189 php-fpm: pool www ├─9190 php-fpm: pool www └─9191 php-fpm: pool www Dec 15 15:53:31 systemd[1]: Starting The PHP FastCGI Process Manager... Dec 15 15:53:31 systemd[1]: Started The PHP FastCGI Process Manager. # Please guide me. Best Regards, Kaushal {"errors": {"status_code": 502,"status": "php-fpm server is down"}} -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Thu Dec 15 16:25:52 2022 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 15 Dec 2022 21:55:52 +0530 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: Message-ID: On Thu, Dec 15, 2022 at 9:53 PM Kaushal Shriyan wrote: > Hi, > > I am running the nginx version: nginx/1.22 as a reverse proxy server on > CentOS Linux release 7.9.2009 (Core). When I hit http://mydomain.com/apis I > see the below message on the browser even if the upstream server php-fpm > server is up and running. 
> > *{"errors": {"status_code": 502,"status": "php-fpm server is down"}}* > > I have set the below in the nginx.conf file and attached the file for your > reference. > > if ($upstream_http_content_type = "") { > add_header 'Content-Type' 'application/json' always; > add_header 'Content-Type-3' > $upstream_http_content_type$isdatatypejson"OK" always; > return 502 '{"errors": {"status_code": 502,"status": > "php-fpm server is down"}}'; > } > > # systemctl status php-fpm > ● php-fpm.service - The PHP FastCGI Process Manager > Loaded: loaded (/usr/lib/systemd/system/php-fpm.service; enabled; > vendor preset: disabled) > Drop-In: /etc/systemd/system/php-fpm.service.d > └─override.conf > Active: active (running) since Thu 2022-12-15 15:53:31 UTC; 10s ago > Main PID: 9185 (php-fpm) > Status: "Processes active: 0, idle: 5, Requests: 0, slow: 0, Traffic: > 0req/sec" > CGroup: /system.slice/php-fpm.service > ├─9185 php-fpm: master process (/etc/php-fpm.conf) > ├─9187 php-fpm: pool www > ├─9188 php-fpm: pool www > ├─9189 php-fpm: pool www > ├─9190 php-fpm: pool www > └─9191 php-fpm: pool www > > Dec 15 15:53:31 systemd[1]: Starting The PHP FastCGI Process Manager... > Dec 15 15:53:31 systemd[1]: Started The PHP FastCGI Process Manager. > # > > Please guide me. > > Best Regards, > > Kaushal > > {"errors": {"status_code": 502,"status": "php-fpm server is down"}} > > Hi, I have attached the file for your reference. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginxtest.conf Type: application/octet-stream Size: 7363 bytes Desc: not available URL: From mdounin at mdounin.ru Thu Dec 15 19:41:14 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2022 22:41:14 +0300 Subject: lmit_req with differents rules In-Reply-To: References: Message-ID: Hello! On Thu, Dec 15, 2022 at 09:50:46AM -0300, Rejaine Monteiro wrote: > Hello!! 
> > I need apply different limit_req rules with different rules, like > > limit_req_zone $binary_remote_addr zone=ipsrc:10m rate=1r/s; > limit_req_zone $arg_token zone=apitoken:10m rate=5r/m; > limit_req_zone $http_autorization zone=httpauth:10m rate=5r/s; > > server { > listen 443; > server_name api.domain.com; > } > > location / { > limit_req zone=ipsrc; > limit_req zone=apitoken; > limit_req zone=httpauth; > proxy_pass http://internal.api.com; > } > } > > Would this be correct and should it work as expected? This is certainly supported, see http://nginx.org/r/limit_req for details. Note that it might be a good idea to add some meaningful "burst" to the configuration, as well as "nodelay". -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Dec 15 20:08:15 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2022 23:08:15 +0300 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: Message-ID: Hello! On Thu, Dec 15, 2022 at 09:53:11PM +0530, Kaushal Shriyan wrote: > > I am running the nginx version: nginx/1.22 as a reverse proxy server on > CentOS Linux release 7.9.2009 (Core). When I hit http://mydomain.com/apis I > see the below message on the browser even if the upstream server php-fpm > server is up and running. > > *{"errors": {"status_code": 502,"status": "php-fpm server is down"}}* > > I have set the below in the nginx.conf file and attached the file for your > reference. > > if ($upstream_http_content_type = "") { > add_header 'Content-Type' 'application/json' always; > add_header 'Content-Type-3' > $upstream_http_content_type$isdatatypejson"OK" always; > return 502 '{"errors": {"status_code": 502,"status": > "php-fpm server is down"}}'; > } The "if" directive makes it possible to conditionally select configuration to handle a request, and therefore can only use information available before the request is handled. 
In your case, before the request is sent to the upstream server. See
http://nginx.org/en/docs/http/ngx_http_rewrite_module.html for
more details.

As such, $upstream_http_content_type will always be empty, since
there is no upstream response yet, and therefore the
configuration will always return 502. This matches your
observations.

An obvious fix would be to remove the configuration chunk in
question.

Instead, you probably need something like:

    error_page 502 /502.json;

    location = /502.json {
        return 200 '{"errors": {"status_code": 502, "status": "php-fpm server is down"}}';
    }

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/

From kaushalshriyan at gmail.com  Fri Dec 16 01:52:16 2022
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Fri, 16 Dec 2022 07:22:16 +0530
Subject: Upstream service php-fpm is up and running but reports {"errors":
 {"status_code": 502,"status": "php-fpm server is down"}}
In-Reply-To: 
References: 
Message-ID: 

On Fri, Dec 16, 2022 at 1:38 AM Maxim Dounin wrote:

> Hello!
>
> On Thu, Dec 15, 2022 at 09:53:11PM +0530, Kaushal Shriyan wrote:
>
> >
> > I am running the nginx version: nginx/1.22 as a reverse proxy server on
> > CentOS Linux release 7.9.2009 (Core). When I hit
> http://mydomain.com/apis I
> > see the below message on the browser even if the upstream server php-fpm
> > server is up and running.
> >
> > *{"errors": {"status_code": 502,"status": "php-fpm server is down"}}*
> >
> > I have set the below in the nginx.conf file and attached the file for
> your
> > reference.
> > > > if ($upstream_http_content_type = "") { > > add_header 'Content-Type' 'application/json' always; > > add_header 'Content-Type-3' > > $upstream_http_content_type$isdatatypejson"OK" always; > > return 502 '{"errors": {"status_code": > 502,"status": > > "php-fpm server is down"}}'; > > } > > The "if" directive makes it possible to conditionally select > configuration to handle a request, and therefore can only use > information available before the request is handled. In your > case, before the request is sent to the upstream server. See > http://nginx.org/en/docs/http/ngx_http_rewrite_module.html for > more details. > > As such, $upstream_http_content_type will be always empty, since > there are no upstream response yet, and therefore the > configuration will always return 502. This matches your > observations. > > An obvious fix would be to remove the configuration chunk in > question. > > Instead, you probably need something like: > > error_page 502 /502.json; > > location = /502.json { > return 200 '{"errors": {"status_code": 502, "status": "php-fpm > server is down"}}'; > } > > Thanks Maxim for the suggestion. I will try it out and keep you posted with the testing as it progresses. I am obliged to this mailing list. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at netwright.net Fri Dec 16 08:27:15 2022 From: mike at netwright.net (Mike Lieberman) Date: Fri, 16 Dec 2022 16:27:15 +0800 Subject: Is there a conflict between Debian Bullseye and nginx? Message-ID: I am running Debian Bullseye: this is a stripped down Command Line only install without a GUI. It runs apache2 and ISC-BIND only in a VirtualBox VM. # uname -r 5.10.0-19-amd64 # nginx -v nginx version: nginx/1.23.2 # apache2 -v Server version: Apache/2.4.54 (Debian) Server built: 2022-06-08T16:00:36 This is NOT commented out: pid /run/nginx.pid; apache2 - runs with no problems. 
syslog is clear of any error message from anything.
I was looking to use LetsEncrypt which uses nginx.

** While apache2 is running, if I try to start nginx I get this. **

nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Fri 2022-12-16 15:54:02 PST; 58s ago
       Docs: man:nginx(8)
    Process: 38159 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
    Process: 38160 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=1/FAILURE)
        CPU: 51ms

Dec 16 15:54:00 www nginx[38160]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Dec 16 15:54:00 www nginx[38160]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
Dec 16 15:54:01 www nginx[38160]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Dec 16 15:54:01 www nginx[38160]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
Dec 16 15:54:01 www nginx[38160]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Dec 16 15:54:01 www nginx[38160]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
Dec 16 15:54:02 www nginx[38160]: nginx: [emerg] still could not bind()
Dec 16 15:54:02 www systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
Dec 16 15:54:02 www systemd[1]: nginx.service: Failed with result 'exit-code'.
Dec 16 15:54:02 www systemd[1]: Failed to start A high performance web server and a reverse proxy server.

If I stop apache2 then nginx loads **BUT** then apache2 refused to load
with this statement:

Starting apache2 (via systemctl): apache2.service
Job for apache2.service failed because the control process exited with
error code.
See "systemctl status apache2.service" and "journalctl -xe" for details.
failed!
root at www:/home/mike# systemctl status apache2.service
● apache2.service - The Apache HTTP Server
     Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Fri 2022-12-16 15:57:04 PST; 17s ago
       Docs: https://httpd.apache.org/docs/2.4/
    Process: 38198 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
        CPU: 32ms

Dec 16 15:57:04 www systemd[1]: Starting The Apache HTTP Server...
Dec 16 15:57:04 www apachectl[38201]: (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
Dec 16 15:57:04 www apachectl[38201]: (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
Dec 16 15:57:04 www apachectl[38201]: no listening sockets available, shutting down
Dec 16 15:57:04 www apachectl[38201]: AH00015: Unable to open logs
Dec 16 15:57:04 www apachectl[38198]: Action 'start' failed.
Dec 16 15:57:04 www apachectl[38198]: The Apache error log may have more information.
Dec 16 15:57:04 www systemd[1]: apache2.service: Control process exited, code=exited, status=1/FAILURE
Dec 16 15:57:04 www systemd[1]: apache2.service: Failed with result 'exit-code'.
Dec 16 15:57:04 www systemd[1]: Failed to start The Apache HTTP Server.

From francis at daoine.org  Fri Dec 16 15:59:53 2022
From: francis at daoine.org (Francis Daly)
Date: Fri, 16 Dec 2022 15:59:53 +0000
Subject: Is there a conflict between Debian Bullseye and nginx?
In-Reply-To: 
References: 
Message-ID: <20221216155953.GA875@daoine.org>

On Fri, Dec 16, 2022 at 04:27:15PM +0800, Mike Lieberman wrote:

Hi there,

You have configured your nginx to listen on port 80 on all IP addresses.

You have configured your apache to listen on port 80 on all IP addresses.

They can't both do that at the same time.

The first one works, the other one fails.

If you want both to be running, you must configure them to listen on
different IP:ports from each other.

This is normal.
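What both daemons are logging is the operating system refusing the second
bind() on the same address:port. The same EADDRINUSE error can be
reproduced with a minimal sketch (plain Python sockets, independent of
nginx and Apache):

```python
import errno
import socket

# First "daemon" binds and listens, like whichever server starts first.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
first.listen(5)
port = first.getsockname()[1]

# Second "daemon" asks for the very same address:port and is refused,
# which is what nginx and apache2 report as "Address already in use".
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
except OSError as exc:
    assert exc.errno == errno.EADDRINUSE  # errno 98 on Linux
    print("bind() failed: Address already in use")
finally:
    second.close()
    first.close()
```

Giving one of the two servers a different listen address or port (for
example nginx on 8080 while apache2 keeps 80) removes the conflict.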
Cheers,

	f
-- 
Francis Daly        francis at daoine.org

From mike at netwright.net  Fri Dec 16 16:22:36 2022
From: mike at netwright.net (Mike Lieberman)
Date: Sat, 17 Dec 2022 00:22:36 +0800
Subject: Is there a conflict between Debian Bullseye and nginx?
In-Reply-To: <20221216155953.GA875@daoine.org>
References: <20221216155953.GA875@daoine.org>
Message-ID: <3728ff791bca4f90aae3beae50e2354d4fef68b3.camel@netwright.net>

On Fri, 2022-12-16 at 15:59 +0000, Francis Daly wrote:

> On Fri, Dec 16, 2022 at 04:27:15PM +0800, Mike Lieberman wrote:
>
> Hi there,
>
> You have configured your nginx to listen on port 80 on all IP addresses.
>
> You have configured your apache to listen on port 80 on all IP addresses.
>
> They can't both do that at the same time.
>
> The first one works, the other one fails.
>
> If you want both to be running, you must configure them to listen on
> different IP:ports from each other.
>
> This is normal.
>
> Cheers,
>
>         f

Actually, I didn't configure it on any port. I was following a guide in
https://www.linuxcapable.com/how-to-install-nginx-with-lets-encrypt-tls-ssl-on-debian-11-bullseye.
However, I can't get brotli to be accepted in /etc/nginx/nginx.conf (as
specified in the guide), and as for nginx, you say I have it looking at
port 80, but (a) I don't [at least not by my direction] and (b) the guide
does not specify anything regarding that. Clearly the guide is wrong and
I will have to find another way to approach this.

Thank you for your clear comments. It is I who is over my head.

From noloader at gmail.com  Fri Dec 16 17:36:33 2022
From: noloader at gmail.com (Jeffrey Walton)
Date: Fri, 16 Dec 2022 12:36:33 -0500
Subject: Is there a conflict between Debian Bullseye and nginx?
In-Reply-To: <3728ff791bca4f90aae3beae50e2354d4fef68b3.camel@netwright.net> References: <20221216155953.GA875@daoine.org> <3728ff791bca4f90aae3beae50e2354d4fef68b3.camel@netwright.net> Message-ID: On Fri, Dec 16, 2022 at 11:23 AM Mike Lieberman wrote: > > On Fri, 2022-12-16 at 15:59 +0000, Francis Daly wrote: > > > On Fri, Dec 16, 2022 at 04:27:15PM +0800, Mike Lieberman wrote: > > > > You have configured your nginx to listen on port 80 on all IP addresses. > > > > You have configured your apache to listen on port 80 on all IP addresses. > > > > They can't both do that at the same time. > > > > The first one works, the other one fails. > > > > If you want both to be running, you must configure them to listen on > > different IP:ports from each other. > > Actually, I didn't configure it on any port. I was following a guide in https://www.linuxcapable.com/how-to-install-nginx-with-lets-encrypt-tls-ssl-on-debian-11-bullseye. However I can't get brotli to be accepted in /etc/nginx/nginx.conf (as specified in the guide) and nginx you say I have it looking at port 80, but (a) I don't [at least not by my direction] and the guide does not specify anything regarding that. Clearly the guide is wrong and I will have to find another way to approach this. If 80 and 443 are in use, you typically move on to the next set of well known ports, like 8080 and 8443. Or you can use random port numbers. You can move Apache or Nginx. Jeff From kaushalshriyan at gmail.com Fri Dec 16 18:23:40 2022 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Fri, 16 Dec 2022 23:53:40 +0530 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: Message-ID: Hi Maxim, I have a follow up question regarding the settings below in nginx.conf where the php-fpm upstream server is processing all php files for Drupal CMS. 
fastcgi_intercept_errors off proxy_intercept_errors off User -> Nginx -> php-fpm -> MySQL DB. For example if the php-fpm upstream server is down then nginx should render 502 bad gateway if MySQL DB service is down then nginx should render 500 ISE. Is there a way to render any of the messages or any custom messages to the User from the php-fpm upstream server that should be passed to a client without being intercepted by the Nginx web server. Any examples? I have attached the file for your reference. Please guide me. Thanks in advance. Best Regards, Kaushal On Fri, Dec 16, 2022 at 7:22 AM Kaushal Shriyan wrote: > > > On Fri, Dec 16, 2022 at 1:38 AM Maxim Dounin wrote: > >> Hello! >> >> On Thu, Dec 15, 2022 at 09:53:11PM +0530, Kaushal Shriyan wrote: >> >> > >> > I am running the nginx version: nginx/1.22 as a reverse proxy server on >> > CentOS Linux release 7.9.2009 (Core). When I hit >> http://mydomain.com/apis I >> > see the below message on the browser even if the upstream server php-fpm >> > server is up and running. >> > >> > *{"errors": {"status_code": 502,"status": "php-fpm server is down"}}* >> > >> > I have set the below in the nginx.conf file and attached the file for >> your >> > reference. >> > >> > if ($upstream_http_content_type = "") { >> > add_header 'Content-Type' 'application/json' >> always; >> > add_header 'Content-Type-3' >> > $upstream_http_content_type$isdatatypejson"OK" always; >> > return 502 '{"errors": {"status_code": >> 502,"status": >> > "php-fpm server is down"}}'; >> > } >> >> The "if" directive makes it possible to conditionally select >> configuration to handle a request, and therefore can only use >> information available before the request is handled. In your >> case, before the request is sent to the upstream server. See >> http://nginx.org/en/docs/http/ngx_http_rewrite_module.html for >> more details. 
>> >> As such, $upstream_http_content_type will be always empty, since >> there are no upstream response yet, and therefore the >> configuration will always return 502. This matches your >> observations. >> >> An obvious fix would be to remove the configuration chunk in >> question. >> >> Instead, you probably need something like: >> >> error_page 502 /502.json; >> >> location = /502.json { >> return 200 '{"errors": {"status_code": 502, "status": "php-fpm >> server is down"}}'; >> } >> >> > Thanks Maxim for the suggestion. I will try it out and keep you posted > with the testing as it progresses. I am obliged to this mailing list. > Thanks in advance. > > Best Regards, > > Kaushal > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginxtest.conf Type: application/octet-stream Size: 7363 bytes Desc: not available URL: From mdounin at mdounin.ru Fri Dec 16 22:17:54 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 17 Dec 2022 01:17:54 +0300 Subject: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}} In-Reply-To: References: Message-ID: Hello! On Fri, Dec 16, 2022 at 11:53:40PM +0530, Kaushal Shriyan wrote: > I have a follow up question regarding the settings below in nginx.conf > where the php-fpm upstream server is processing all php files for Drupal > CMS. > > fastcgi_intercept_errors off > proxy_intercept_errors off > > User -> Nginx -> php-fpm -> MySQL DB. > > For example if the php-fpm upstream server is down then nginx should render > 502 bad gateway > if MySQL DB service is down then nginx should render > 500 ISE. > > Is there a way to render any of the messages or any custom messages to the > User from the php-fpm upstream server that should be passed to a client > without being intercepted by the Nginx web server. Any examples? 
I have
> attached the file for your reference. Please guide me. Thanks in advance.

Not sure I understand what you are asking about.

With fastcgi_intercept_errors turned off (the default) nginx does
not intercept any of the errors returned by php-fpm.

That is, when MySQL is down and php-fpm returns 500 (Internal
Server Error), it is returned directly to the client. When
php-fpm is down, nginx generates 502 (Bad Gateway) itself and
returns it to the client.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru  Thu Dec 22 01:34:15 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 22 Dec 2022 04:34:15 +0300
Subject: Upstream service php-fpm is up and running but reports {"errors":
 {"status_code": 502,"status": "php-fpm server is down"}}
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Tue, Dec 20, 2022 at 11:44:05PM +0530, Kaushal Shriyan wrote:

> On Sat, Dec 17, 2022 at 3:48 AM Maxim Dounin wrote:
>
> > On Fri, Dec 16, 2022 at 11:53:40PM +0530, Kaushal Shriyan wrote:
> >
> > > I have a follow up question regarding the settings below in nginx.conf
> > > where the php-fpm upstream server is processing all php files for Drupal
> > > CMS.
> > >
> > > fastcgi_intercept_errors off
> > > proxy_intercept_errors off
> > >
> > > User -> Nginx -> php-fpm -> MySQL DB.
> > >
> > > For example if the php-fpm upstream server is down then nginx should
> > render
> > > 502 bad gateway
> > > if MySQL DB service is down then nginx should render
> > > 500 ISE.
> > >
> > > Is there a way to render any of the messages or any custom messages to
> > the
> > > User from the php-fpm upstream server that should be passed to a client
> > > without being intercepted by the Nginx web server. Any examples? I have
> > > attached the file for your reference. Please guide me. Thanks in advance.
> >
> > Not sure I understand what you are asking about.
> >
> > With fastcgi_intercept_errors turned off (the default) nginx does
> > not intercept any of the errors returned by php-fpm.
> >
> > That is, when MySQL is down and php-fpm returns 500 (Internal
> > Server Error), it is returned directly to the client. When
> > php-fpm is down, nginx generates 502 (Bad Gateway) itself and
> > returns it to the client.
> >
> Hi Maxim,
>
> Apologies for the delay in responding. I am still not able to get it. The
> below settings will be hardcoded in nginx.conf. Is there a way to
> dynamically render the different errors to the client when the client hits
> http://mydomain.com/apis
>
> error_page 502 /502.json;
>
> location = /502.json {
> return 200 '{"errors": {"status_code": 502, "status": "php-fpm
> server is down"}}';
> }
>
> Please guide me. Thanks in advance.

You can pass these error pages to a backend server by using proxy_pass
or fastcgi_pass in the location, much like any other resource in nginx.

Note though that in most cases it's a bad idea, at least unless you
have a dedicated backend to generate error pages: if a request to an
upstream server failed, there is a good chance that another request to
generate an error page will fail as well.

As such, it is usually recommended to keep error pages served by nginx
itself, either as static files, or directly returned with "return".

-- 
Maxim Dounin
http://mdounin.ru/

From mikydevel at yahoo.fr  Wed Dec 28 23:01:11 2022
From: mikydevel at yahoo.fr (Mik J)
Date: Wed, 28 Dec 2022 23:01:11 +0000 (UTC)
Subject: website/admin behind my reverse proxy doesn't work
References: <1364957511.7577168.1672268471466.ref@mail.yahoo.com>
Message-ID: <1364957511.7577168.1672268471466@mail.yahoo.com>

Hello,

I have a website hosted on a server using nginx behind an nginx reverse
proxy but things don't work properly.

https://mywebsite.org => works
https://mywebsite.org/admin => doesn't work, it redirects to https://mywebsite.org

On my backend server:

server {
        listen 80;
        server_name mywebsite.org;
        index index.php;
        root /var/www/htdocs/sites/mywebsite;
...
        location / {
          try_files $uri $uri/ /index.php$is_args$args;

          location ~ \.php$ {
              root           /var/www/htdocs/sites/mywebsite;
              try_files $uri =404;
              fastcgi_pass   unix:/run/php-fpm.mywebsite.org.sock;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_index  index.php;
              fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include        fastcgi_params;
          }
        }
}

On my reverse proxy
server {
#    listen 80;
#    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name http://mywebsite.org;
...
    root /var/www/htdocs/mywebsite;
    location ^~ / {
        proxy_pass              http://10.12.255.23:80;
        proxy_redirect          off;
        proxy_set_header        Host    $host;
        proxy_http_version 1.1;
        proxy_set_header  X-Real-IP        $remote_addr;
        proxy_set_header  X-Forwarded-Host $host;
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header        Referer         "http://mywebsite.org/";
        proxy_pass_header Set-Cookie;
        proxy_set_header  X-Forwarded-Proto $scheme;
    }
}

So I can't access

In the backend server logs I see
[28/Dec/2022:23:54:33 +0100] "GET /admin/ HTTP/1.1" 302 5 "http://mywebsite.org/" ...
[28/Dec/2022:23:54:33 +0100] "GET / HTTP/1.1" 499 0 "http://mywebsite.org/" ...

Regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mikydevel at yahoo.fr  Wed Dec 28 23:05:01 2022
From: mikydevel at yahoo.fr (Mik J)
Date: Wed, 28 Dec 2022 23:05:01 +0000 (UTC)
Subject: Where to compress text files and filter access
References: <127461778.7583136.1672268701989.ref@mail.yahoo.com>
Message-ID: <127461778.7583136.1672268701989@mail.yahoo.com>

Hello,

What is the best practice for these two situations:
1. 
Compress text files, should I make the compression on the reverse proxy
or on the backend server?

2. Deny access to specific files, for example files starting with a dot
(.file), should I write the rule on the reverse proxy or on the backend
server?

Regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Thu Dec 29 23:37:04 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 30 Dec 2022 02:37:04 +0300
Subject: website/admin behind my reverse proxy doesn't work
In-Reply-To: <1364957511.7577168.1672268471466@mail.yahoo.com>
References: <1364957511.7577168.1672268471466.ref@mail.yahoo.com>
 <1364957511.7577168.1672268471466@mail.yahoo.com>
Message-ID: 

Hello!

On Wed, Dec 28, 2022 at 11:01:11PM +0000, Mik J via nginx wrote:

> Hello,
> I have a website hosted on a server using nginx behind a nginx
> reverse proxy but things don't work properly.
> https://mywebsite.org => works
> https://mywebsite.org/admin => doesn't work, it redirects to https://mywebsite.org
> 
> On my backend server
> server {
>         listen 80;
>         server_name mywebsite.org ;
>         index index.php;
>         root /var/www/htdocs/sites/mywebsite;
> ...
>         location / {
>           try_files $uri $uri/ /index.php$is_args$args;
> 
>           location ~ \.php$ {
>               root           /var/www/htdocs/sites/mywebsite;
>               try_files $uri =404;
>               fastcgi_pass   unix:/run/php-fpm.mywebsite.org.sock;
>               fastcgi_split_path_info ^(.+\.php)(/.+)$;
>               fastcgi_index  index.php;
>               fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
>               include        fastcgi_params;
>           }
>         }
> }
> On my reverse proxy
> server {
> #    listen 80;
> #    listen [::]:80;
>     listen 443 ssl;
>     listen [::]:443 ssl;
>     server_name http://mywebsite.org;...
>     root /var/www/htdocs/mywebsite; >     location ^~ / { >         proxy_pass              http://10.12.255.23:80; >         proxy_redirect          off; >         proxy_set_header        Host    $host; >         proxy_http_version 1.1; >         proxy_set_header  X-Real-IP        $remote_addr; >         proxy_set_header  X-Forwarded-Host $host; >         proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for; >         proxy_set_header        Referer         "http://mywebsite.org/"; >         proxy_pass_header Set-Cookie; >         proxy_set_header  X-Forwarded-Proto $scheme; >     } > } > > > So I can't access > > In the backend server logs I see > [28/Dec/2022:23:54:33 +0100] "GET /admin/ HTTP/1.1" 302 5 "http://mywebsite.org/" ... > [28/Dec/2022:23:54:33 +0100] "GET / HTTP/1.1" 499 0 "http://mywebsite.org/" ... In your nginx configurations no redirects are returned. Accordingly, it looks like redirects you are seeing are returned by the backend's PHP code. To find out why these are returned you'll probably have to look into the PHP code. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Fri Dec 30 00:16:57 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 30 Dec 2022 03:16:57 +0300 Subject: Where to compress text files and filter access In-Reply-To: <127461778.7583136.1672268701989@mail.yahoo.com> References: <127461778.7583136.1672268701989.ref@mail.yahoo.com> <127461778.7583136.1672268701989@mail.yahoo.com> Message-ID: Hello! On Wed, Dec 28, 2022 at 11:05:01PM +0000, Mik J via nginx wrote: > What is the best practice for these two situations: > 1. Compress text files, should I make the compression on the > reverse proxy or on the backend server ? In most cases, it makes sense to compress things on the frontend server. In particular, this is because of at least the following factors: 1) Frontend servers are usually not just reverse proxies, but also serve some static resources. 
As such, compression needs to be configured on frontend servers
anyway.

2) Frontend servers are often used with multiple different backends.
Further, in some cases they are used to generate responses based
on subrequests to different backends, such as with SSI.  This
makes compression on frontend servers easier, or even the only
possible solution.

3) Frontend servers are often used to cache backend responses, and
proper caching of compressed responses might be problematic and/or
inefficient (in particular, because the only mechanism available
is Vary).

Note well that by default nginx uses HTTP/1.0 when connecting to
upstream servers, and this in turn disables gzip with default
settings.  This naturally results in compression being done on
frontend servers when nginx with default settings is used both as
a backend and a frontend.

In some cases, it might make sense to compress on the backend
servers instead, for example, to ensure that CPU usage for
compression is balanced among multiple backend servers, or to
minimize traffic between frontends and backends.  These are mostly
about specific configurations though.

> 2. Deny access to specific files, for example files starting
> with a dot (.file), should I write the rule on the reverse proxy
> or on the backend server?

I would recommend both.  In particular, rules on the backend
server will ensure that the access is denied where the file
resides, making things safe even if the frontend server is
somehow bypassed.  Rules on the frontend server ensure that
requests are denied efficiently.
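Putting the two recommendations together — compression on the frontend,
dot-file rules on both tiers — a minimal sketch might look like the
fragments below. This is an illustration only, not from the original
thread: the server name, backend address and paths are placeholders.

```nginx
# Frontend (reverse proxy): compress here, and reject dot-files early.
server {
    listen 443 ssl;
    server_name example.org;            # placeholder name

    gzip on;                            # compress on the frontend ...
    gzip_types text/plain text/css application/json application/javascript;

    location ~ /\. {                    # ... and deny .files before proxying
        deny all;
    }

    location / {
        proxy_pass http://10.0.0.2:80;  # placeholder backend address
        proxy_set_header Host $host;
    }
}

# Backend: repeat the deny rule, so the files stay protected even if
# the frontend is somehow bypassed.
server {
    listen 80;
    server_name example.org;
    root /var/www/htdocs/example;       # placeholder path

    location ~ /\. {
        deny all;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}
```

Note that a regex such as "location ~ /\." also matches /.well-known/
(used for ACME certificate challenges); if that matters, add a more
specific "location ^~ /.well-known/" first, since such prefix locations
take precedence over regex ones.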
-- 
Maxim Dounin
http://mdounin.ru/

From mikydevel at yahoo.fr  Sat Dec 31 16:34:33 2022
From: mikydevel at yahoo.fr (Mik J)
Date: Sat, 31 Dec 2022 16:34:33 +0000 (UTC)
Subject: Where to compress text files and filter access
In-Reply-To: 
References: <127461778.7583136.1672268701989.ref@mail.yahoo.com>
 <127461778.7583136.1672268701989@mail.yahoo.com>
Message-ID: <350036983.8818039.1672504473279@mail.yahoo.com>

Hello Maxim,

Thank you for this detailed answer. I'll keep it in my personal notes.
I wish you a good year for 2023.

On Friday, December 30, 2022 at 01:17:11 UTC+1, Maxim Dounin wrote:

[...]

_______________________________________________
nginx mailing list
nginx at nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: