From nginx-forum at forum.nginx.org Mon Mar 1 09:07:47 2021 From: nginx-forum at forum.nginx.org (dvhh) Date: Mon, 01 Mar 2021 04:07:47 -0500 Subject: Seeking example of module using theadpool Message-ID: <02469eb20a437c9f8db730c10617da39.NginxMailingListEnglish@forum.nginx.org> Hello, I have developed a module which perform long running calculations to produce the output, unfortunately blocking the server thread from handling other requests. I am looking at using threadpool, unfortunately there is no example of using threadpool with module that I could find. I am using src/http/modules/ngx_http_image_filter_module.c as a base for my code. How would you modify such module to perform the image processing in background threads ? Best regards and thanks in advance, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290856,290856#msg-290856 From vl at nginx.com Mon Mar 1 09:24:43 2021 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 1 Mar 2021 12:24:43 +0300 Subject: Seeking example of module using theadpool In-Reply-To: <02469eb20a437c9f8db730c10617da39.NginxMailingListEnglish@forum.nginx.org> References: <02469eb20a437c9f8db730c10617da39.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Mon, Mar 01, 2021 at 04:07:47AM -0500, dvhh wrote: > Hello, > > I have developed a module which perform long running calculations to produce > the output, unfortunately blocking the server thread from handling other > requests. I am looking at using threadpool, unfortunately there is no > example of using threadpool with module that I could find. > > I am using src/http/modules/ngx_http_image_filter_module.c as a base for my > code. > > How would you modify such module to perform the image processing in > background threads ? > It is documented: http://nginx.org/en/docs/dev/development_guide.html#threads You may also look at how nginx implements reading files using threads: http://hg.nginx.org/nginx/file/tip/src/os/unix/ngx_files.c#l95 From senor.j.onion at gmail.com Mon Mar 1 09:35:47 2021 From: senor.j.onion at gmail.com (=?utf-8?Q?Se=C3=B1or_J_Onion?=) Date: Mon, 1 Mar 2021 11:35:47 +0200 Subject: forward proxy config is causing "upstream server temporarily disabled while connecting to upstream" error Message-ID: I want to set up nginx as a forward proxy - much like Squid might work. This is my server block: server { listen 3128; server_name localhost; location / { resolver 8.8.8.8; proxy_pass http://$http_host$uri$is_args$args; } } This is the curl command I use to test, and it works the first time, maybe even the second time. 
curl -s -D - -o /dev/null -x "http://localhost:3128" http://storage.googleapis.com/my.appspot.com/test.jpeg The corresponding nginx access log is 172.23.0.1 - - [26/Feb/2021:12:38:59 +0000] "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1" 200 2296040 "-" "curl/7.64.1" "-" However - on repeated requests, I start getting these errors in my nginx logs (after say the 2nd or 3rd attempt) 2021/02/26 12:39:49 [crit] 31#31: *4 connect() to [2c0f:fb50:4002:804::2010]:80 failed (99: Address not available) while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/omgimg.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com" 2021/02/26 12:39:49 [warn] 31#31: *4 upstream server temporarily disabled while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com" What might be causing these issues after just a handful of requests? (curl still fetches the URL fine) From mdounin at mdounin.ru Mon Mar 1 14:17:11 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Mar 2021 17:17:11 +0300 Subject: forward proxy config is causing "upstream server temporarily disabled while connecting to upstream" error In-Reply-To: References: Message-ID: Hello! On Mon, Mar 01, 2021 at 11:35:47AM +0200, Se?or J Onion wrote: > I want to set up nginx as a forward proxy - much like Squid might work. First of all, you probably already know it, but to clarify: nginx is not a forward proxy. What you are trying to do is not supported and entirely at your own risk. > This is my server block: > > server { > listen 3128; > server_name localhost; > > location / { > resolver 8.8.8.8; A side note: nginx resolver is rudimentary and provided with the only goal to avoid using blocking system resolver. As the documentation says (http://nginx.org/r/resolver), is is a good idea to use DNS servers in properly secured local network. > proxy_pass http://$http_host$uri$is_args$args; A side note: consider just "proxy_pass http://$http_host;" instead. > } > } > > This is the curl command I use to test, and it works the first time, maybe even the second time. 
> > curl -s -D - -o /dev/null -x "http://localhost:3128" http://storage.googleapis.com/my.appspot.com/test.jpeg > > The corresponding nginx access log is > > 172.23.0.1 - - [26/Feb/2021:12:38:59 +0000] "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1" 200 2296040 "-" "curl/7.64.1" "-" > > However - on repeated requests, I start getting these errors in my nginx logs (after say the 2nd or 3rd attempt) > > 2021/02/26 12:39:49 [crit] 31#31: *4 connect() to [2c0f:fb50:4002:804::2010]:80 failed (99: Address not available) while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/omgimg.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com" > 2021/02/26 12:39:49 [warn] 31#31: *4 upstream server temporarily disabled while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com" > > > What might be causing these issues after just a handful of requests? (curl still fetches the URL fine) You are trying to connect to an upstream server with an IPv6 address, yet your system has no IPv6 addresses configured, so the connection attempt fails. This is not fatal, as nginx is able to switch to using other addresses of the same server, but probably a configuration error. Most likely you want nginx to ignore IPv6 addresses. To do this, consider using "resolver ... ipv6=off;". This should prevent nginx from trying to connect to IPv6 addresses, and so corresponding errors will disappear from the error log. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Mar 1 14:44:24 2021 From: nginx-forum at forum.nginx.org (dvhh) Date: Mon, 01 Mar 2021 09:44:24 -0500 Subject: Seeking example of module using theadpool In-Reply-To: References: Message-ID: <89dafa2a6eeccabfe84869006cab6fb7.NginxMailingListEnglish@forum.nginx.org> Vladimir Homutov Wrote: ------------------------------------------------------- > On Mon, Mar 01, 2021 at 04:07:47AM -0500, dvhh wrote: > > Hello, > > > > I have developed a module which perform long running calculations to > produce > > the output, unfortunately blocking the server thread from handling > other > > requests. I am looking at using threadpool, unfortunately there is > no > > example of using threadpool with module that I could find. > > > > I am using src/http/modules/ngx_http_image_filter_module.c as a base > for my > > code. > > > > How would you modify such module to perform the image processing in > > background threads ? > > > > It is documented: > > http://nginx.org/en/docs/dev/development_guide.html#threads > > You may also look at how nginx implements reading files using threads: > > http://hg.nginx.org/nginx/file/tip/src/os/unix/ngx_files.c#l95 > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks Vladimir, I have already consulted the documentation and tried implemented it, task is successfully executed. The current main issue, is how/when to send the calculated response Using the completion function look like a good bet, by I the have little clue on how to send response/chain to the next filter from there. 
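For reference, a rough sketch of the shape this usually takes, loosely modelled on how nginx's own copy filter and upstream modules drive thread tasks. Everything prefixed my_ below, including the context struct and the my_expensive_processing() helper, is a placeholder rather than anything from image_filter, and thread-pool lookup, error handling and finalization details are left out:

```
typedef struct {
    ngx_http_request_t  *request;
    ngx_chain_t         *out;         /* result produced by the thread */
} my_task_ctx_t;

/* runs in a pool thread: only thread-safe work here, no nginx
   request machinery, no output filters */
static void
my_task_handler(void *data, ngx_log_t *log)
{
    my_task_ctx_t  *ctx = data;

    /* placeholder for the module's own blocking computation */
    ctx->out = my_expensive_processing(ctx->request->pool);
}

/* runs back on the worker's event loop once the task has finished */
static void
my_task_completion(ngx_event_t *ev)
{
    my_task_ctx_t       *ctx = ev->data;
    ngx_http_request_t  *r = ctx->request;

    r->main->blocked--;
    r->aio = 0;

    /* wake the request up again; the module's body filter can now see
       that ctx->out is ready and pass it on down the filter chain */
    r->write_event_handler(r);
    ngx_http_run_posted_requests(r->connection);
}

static ngx_int_t
my_post_task(ngx_http_request_t *r, ngx_thread_pool_t *tp)
{
    ngx_thread_task_t  *task;
    my_task_ctx_t      *ctx;

    task = ngx_thread_task_alloc(r->pool, sizeof(my_task_ctx_t));
    if (task == NULL) {
        return NGX_ERROR;
    }

    ctx = task->ctx;
    ctx->request = r;

    task->handler = my_task_handler;
    task->event.data = ctx;
    task->event.handler = my_task_completion;

    if (ngx_thread_task_post(tp, task) != NGX_OK) {
        return NGX_ERROR;
    }

    /* keep the request alive while the task is in flight */
    r->main->blocked++;
    r->aio = 1;

    return NGX_AGAIN;
}
```

The key point is that nothing is sent from the thread itself: the completion handler runs back on the event loop, releases the request and re-enters normal processing, and the body filter (via the saved ngx_http_next_body_filter, in image_filter's naming) then emits the prepared chain from ordinary event-loop context.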
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290856,290863#msg-290863 From senor.j.onion at gmail.com Mon Mar 1 15:08:08 2021 From: senor.j.onion at gmail.com (=?utf-8?Q?Se=C3=B1or_J_Onion?=) Date: Mon, 1 Mar 2021 17:08:08 +0200 Subject: forward proxy config is causing "upstream server temporarily disabled while connecting to upstream" error In-Reply-To: References: Message-ID: <03AD27F7-C54A-4ABF-A830-097879FB1B6B@gmail.com> Hi Maxim, > > You are trying to connect to an upstream server with an IPv6 > address, yet your system has no IPv6 addresses configured, so > the connection attempt fails. This is not fatal, as nginx is able > to switch to using other addresses of the same server, but > probably a configuration error. > > Most likely you want nginx to ignore IPv6 addresses. To do this, > consider using "resolver ... ipv6=off;". This should prevent > nginx from trying to connect to IPv6 addresses, and so > corresponding errors will disappear from the error log. > That did it - switching off ipv6! Thank you! J?rg From nginx-forum at forum.nginx.org Tue Mar 2 08:30:02 2021 From: nginx-forum at forum.nginx.org (charlemagnelasse) Date: Tue, 02 Mar 2021 03:30:02 -0500 Subject: force nginx to use SSL/TLS alert on invalid client certificate Message-ID: <5fc728cc51b9921d543d3ee3f3f7cbaf.NginxMailingListEnglish@forum.nginx.org> If I am using a Apache to verify the client certificate and the client certificate is invalid (e.g. revoked) than I can get the appropriate SSL/TLS alert which can be evaluated by the client: curl -v --insecure --cert cert.pem --key key.pem --cacert ca.pem https://127.0.0.1:443/1/config * Expire in 0 ms for 6 (transfer 0x5611097bcfb0) * Trying 127.0.0.1... * TCP_NODELAY set * Expire in 200 ms for 4 (transfer 0x5611097bcfb0) * Connected to 127.0.0.1 (127.0.0.1) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: ca.pem CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Request CERT (13): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Certificate (11): * TLSv1.3 (OUT), TLS handshake, CERT verify (15): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: C=ZZ; ST=YY; L=Duckburg; O=McDuck LLC.; OU=moneybin; CN=moneybin.example.com * start date: Jan 1 01:02:03 2021 UTC * expire date: Jan 1 01:02:03 2022 UTC * issuer: C=ZZ; ST=YY; L=Duckburg; O=McDuck LLC.; CN=Scrooge CA * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway. > GET /1/config HTTP/1.1 > Host: 127.0.0.1 > User-Agent: curl/7.64.0 > Accept: */* > * TLSv1.3 (IN), TLS alert, certificate revoked (556): * OpenSSL SSL_read: error:14094414:SSL routines:ssl3_read_bytes:sslv3 alert certificate revoked, errno 0 * Closing connection 0 curl: (56) OpenSSL SSL_read: error:14094414:SSL routines:ssl3_read_bytes:sslv3 alert certificate revoked, errno 0 This is awesome because the (API) client can evaluate this information and start the correct actions based on that information. 
But for ningx (with the default configuration), I only get a generic error when the SSL certificate is revoked: curl -v --insecure --cert cert.pem --key key.pem --cacert ca.pem https://127.0.0.1:443/1/config * Expire in 0 ms for 6 (transfer 0x5611097bcfb0) * Trying 127.0.0.1... * TCP_NODELAY set * Expire in 200 ms for 4 (transfer 0x5611097bcfb0) * Connected to 127.0.0.1 (127.0.0.1) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: ca.pem CApath: /etc/ssl/certs * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Request CERT (13): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Certificate (11): * TLSv1.3 (OUT), TLS handshake, CERT verify (15): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use h2 * Server certificate: * subject: C=ZZ; ST=YY; L=Duckburg; O=McDuck LLC.; OU=moneybin; CN=moneybin.example.com * start date: Jan 1 01:02:03 2021 UTC * expire date: Jan 1 01:02:03 2022 UTC * issuer: C=ZZ; ST=YY; L=Duckburg; O=McDuck LLC.; CN=Scrooge CA * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway. * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x5611097bcfb0) > GET /1/config HTTP/2 > Host: 127.0.0.1:443 > User-Agent: curl/7.64.0 > Accept: */* > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing * Connection state changed (MAX_CONCURRENT_STREAMS == 128)! < HTTP/2 400 < server: nginx < date: Tue, 02 Mar 2021 07:54:36 GMT < content-type: text/html < content-length: 224 < strict-transport-security: max-age=63072000; includeSubDomains; preload < 400 The SSL certificate error

400 Bad Request
The SSL certificate error
nginx
* Connection #0 to host 127.0.0.1 left intact How can I force nginx also to report the client certificate error via the TLS alert mechanisms instead of this useless HTML page? --- Here is the nginx site configuration as reference: server { listen 443 ssl http2 default_server; error_log /var/log/nginx/moneybin_error.log; access_log /var/log/nginx/moneybin_access.log; ssl_certificate /some/path/to/certs/fullchain.pem; ssl_certificate_key /some/path/to/certs/privkey.pem; ssl_client_certificate /some/path/to/certs/ca.pem; ssl_trusted_certificate /some/path/to/certs/ca.pem; ssl_crl /some/path/to/certs/crl.pem; ssl_session_timeout 1d; ssl_session_cache off; ssl_session_tickets off; # intermediate configuration ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; ssl_prefer_server_ciphers off; ssl_verify_client optional; ssl_verify_depth 2; root /var/www/moneybin; server_name _; location / { fastcgi_param HTTP_PROXY ""; fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME /var/www/moneybin/index.php; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290872,290872#msg-290872 From mdounin at mdounin.ru Tue Mar 2 13:32:24 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Mar 2021 16:32:24 +0300 Subject: force nginx to use SSL/TLS alert on invalid client certificate In-Reply-To: <5fc728cc51b9921d543d3ee3f3f7cbaf.NginxMailingListEnglish@forum.nginx.org> References: <5fc728cc51b9921d543d3ee3f3f7cbaf.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Tue, Mar 02, 2021 at 03:30:02AM -0500, charlemagnelasse wrote: > How can I force nginx also to report the client certificate error via the > TLS alert mechanisms instead of this useless HTML page? This is not currently posssible. On the other hand, if you want to make the page more useful in your particular use case, you can do so by configuring appropriate page with the error_page directive, see here: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors Certificate verification results can be found in the $ssl_client_verify variable. -- Maxim Dounin http://mdounin.ru/ From dwdixon at umich.edu Wed Mar 3 17:02:44 2021 From: dwdixon at umich.edu (Drew Dixon) Date: Wed, 3 Mar 2021 12:02:44 -0500 Subject: proxy_buffering context and unexpected error Message-ID: Hi, In my testing I desired to disable proxy_buffering per: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering Which states that this is valid within the server context, however whenever I attempt to set proxy_buffering to off (proxy_buffering off;) within my server configuration context I receive the error message below: nginx[6203]: nginx: [emerg] "proxy_buffering" directive is not allowed here in /etc/nginx/conf.d/test.conf This is clearly within the server context in my config. I am able, however, to use this within the default /etc/nginx/nginx.conf *http* context without error. I wanted to report this to see if someone could shed some light on this or if this may be a bug or an error with the documentation? Thank you! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Wed Mar 3 17:39:53 2021 From: francis at daoine.org (Francis Daly) Date: Wed, 3 Mar 2021 17:39:53 +0000 Subject: proxy_buffering context and unexpected error In-Reply-To: References: Message-ID: <20210303173953.GV6011@daoine.org> On Wed, Mar 03, 2021 at 12:02:44PM -0500, Drew Dixon wrote: Hi there, > Which states that this is valid within the server context, however whenever > I attempt to set proxy_buffering to off (proxy_buffering off;) within my > server configuration context I receive the error message below: > > nginx[6203]: nginx: [emerg] "proxy_buffering" directive is not allowed here > in /etc/nginx/conf.d/test.conf > > This is clearly within the server context in my config. Can you show the line, and the surrounding lines or other indentation that indicates that your "proxy_buffering" directive is directly within "server", and is not, for example, within an "if" within "server"? The nginx contexts should be understood to be direct-only. Cheers, f -- Francis Daly francis at daoine.org From dwdixon at umich.edu Wed Mar 3 19:51:33 2021 From: dwdixon at umich.edu (Drew Dixon) Date: Wed, 3 Mar 2021 14:51:33 -0500 Subject: proxy_buffering context and unexpected error In-Reply-To: <20210303173953.GV6011@daoine.org> References: <20210303173953.GV6011@daoine.org> Message-ID: Hi there, thanks for the quick reply, sure, the config is rather simple for some initial testing, I'm not sure the directive being turned off will have any effect w/ the present config but the directive is not within any "if" within the "server" context and seems to be throwing an error when I would expect it not to: /etc/nginx/conf.d/test.conf: ``` stream { upstream upstreams { least_conn; server 192.168.1.100:9998; server 192.168.1.101:9998; server 192.168.1.102:9998; } server { proxy_buffering off; listen 9998 udp; proxy_pass upstreams; proxy_timeout 1s; proxy_responses 0; error_log /var/log/nginx/test-lb.log; } } ``` Thank you! On Wed, Mar 3, 2021 at 12:40 PM Francis Daly wrote: > On Wed, Mar 03, 2021 at 12:02:44PM -0500, Drew Dixon wrote: > > Hi there, > > > Which states that this is valid within the server context, however > whenever > > I attempt to set proxy_buffering to off (proxy_buffering off;) within my > > server configuration context I receive the error message below: > > > > nginx[6203]: nginx: [emerg] "proxy_buffering" directive is not allowed > here > > in /etc/nginx/conf.d/test.conf > > > > This is clearly within the server context in my config. > > Can you show the line, and the surrounding lines or other indentation > that indicates that your "proxy_buffering" directive is directly within > "server", and is not, for example, within an "if" within "server"? > > The nginx contexts should be understood to be direct-only. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Wed Mar 3 20:14:23 2021 From: francis at daoine.org (Francis Daly) Date: Wed, 3 Mar 2021 20:14:23 +0000 Subject: proxy_buffering context and unexpected error In-Reply-To: References: <20210303173953.GV6011@daoine.org> Message-ID: <20210303201423.GW6011@daoine.org> On Wed, Mar 03, 2021 at 02:51:33PM -0500, Drew Dixon wrote: Hi there, > Hi there, thanks for the quick reply, sure, the config is rather simple for > some initial testing, I'm not sure the directive being turned off will have > any effect w/ the present config but the directive is not within any "if" > within the "server" context and seems to be throwing an error when I would > expect it not to: Thanks for the example. I see the confusion. > stream { https://nginx.org/en/docs/http/ngx_http_proxy_module.html is for proxy_* directives with the http section. This example is within the stream section, so you'll be wanting the docs at https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html > server { > proxy_buffering off; I think that there is not something like the http proxy_buffering available within the stream module, so it is effectively always off in that case. Cheers, f -- Francis Daly francis at daoine.org From dwdixon at umich.edu Wed Mar 3 20:28:08 2021 From: dwdixon at umich.edu (Drew Dixon) Date: Wed, 3 Mar 2021 15:28:08 -0500 Subject: proxy_buffering context and unexpected error In-Reply-To: <20210303201423.GW6011@daoine.org> References: <20210303173953.GV6011@daoine.org> <20210303201423.GW6011@daoine.org> Message-ID: Ah...I see...I definitely did not notice that I was led while searching via Google to the http proxy module specific section of the docs rather than docs for that directive which may have been more broadly applicable. Thank you for clarifying, and apologies for the pseudo-spam : ) Cheers! On Wed, Mar 3, 2021 at 3:14 PM Francis Daly wrote: > On Wed, Mar 03, 2021 at 02:51:33PM -0500, Drew Dixon wrote: > > Hi there, > > > Hi there, thanks for the quick reply, sure, the config is rather simple > for > > some initial testing, I'm not sure the directive being turned off will > have > > any effect w/ the present config but the directive is not within any "if" > > within the "server" context and seems to be throwing an error when I > would > > expect it not to: > > Thanks for the example. > > I see the confusion. > > > stream { > > https://nginx.org/en/docs/http/ngx_http_proxy_module.html is for proxy_* > directives with the http section. > > This example is within the stream section, so you'll be wanting the docs > at https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html > > > server { > > proxy_buffering off; > > I think that there is not something like the http proxy_buffering > available within the stream module, so it is effectively always off in > that case. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Mar 3 22:20:13 2021 From: nginx-forum at forum.nginx.org (bouvierh) Date: Wed, 03 Mar 2021 17:20:13 -0500 Subject: arm32v7/nginx:1.19.7-alpine crashing silently Message-ID: <34d41508f1831cd49a78d137205d603b.NginxMailingListEnglish@forum.nginx.org> My system is: Hardware: https://www.moxa.com/en/products/industrial-computing/arm-based-computers/uc-8100a-me-t-series#specifications CPU: Armv7 Cortex-A8 1 GHz RAM: 1 GB DDR3 Docker: Docker version 3.0.13+azure, build dd360c7c0de8d9132a3965db6a59d3ae74f43ba7 OS: Debian 9 stretch I am trying to run nginx with the following commands but the image close right away with the following message: /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh /docker-entrypoint.sh: Configuration complete; ready for start up => Close at that point Running that commands on my ubuntu VM or raspberry it doesn't close. Running in detach mode or mapping the port doesn't change anything. I tried debug mode but it seems to crash before that :(. Do you have any idea what it could be or what I could do to troubleshoot it? Since there is not error messages, it is pretty difficult for me. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290882,290882#msg-290882 From fatma.mazari at esprit.tn Thu Mar 4 11:26:06 2021 From: fatma.mazari at esprit.tn (Fatma MAZARI) Date: Thu, 4 Mar 2021 12:26:06 +0100 Subject: proxy_buffering context and unexpected error In-Reply-To: References: <20210303173953.GV6011@daoine.org> <20210303201423.GW6011@daoine.org> Message-ID: I 'm using nginx for generating copies of stream chunks with ".ts" format that are encapsulated in m3u format. The issue here, is that I don't want to work with m3u format, I want to send directly and contouniesly the ts chunks to the media player(because m3u format it's not readable in so many media player) ! So, I want to know the configuration or the algorithme to work with in Nginx? Please, And thank you! Le mer. 3 mars 2021 ? 21:29, Drew Dixon a ?crit : > Ah...I see...I definitely did not notice that I was led while searching > via Google to the http proxy module specific section of the docs rather > than docs for that directive which may have been more broadly applicable. > Thank you for clarifying, and apologies for the pseudo-spam : ) > > Cheers! > > On Wed, Mar 3, 2021 at 3:14 PM Francis Daly wrote: > >> On Wed, Mar 03, 2021 at 02:51:33PM -0500, Drew Dixon wrote: >> >> Hi there, >> >> > Hi there, thanks for the quick reply, sure, the config is rather simple >> for >> > some initial testing, I'm not sure the directive being turned off will >> have >> > any effect w/ the present config but the directive is not within any >> "if" >> > within the "server" context and seems to be throwing an error when I >> would >> > expect it not to: >> >> Thanks for the example. >> >> I see the confusion. 
>> >> > stream { >> >> https://nginx.org/en/docs/http/ngx_http_proxy_module.html is for proxy_* >> directives with the http section. >> >> This example is within the stream section, so you'll be wanting the docs >> at https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html >> >> > server { >> > proxy_buffering off; >> >> I think that there is not something like the http proxy_buffering >> available within the stream module, so it is effectively always off in >> that case. >> >> Cheers, >> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From senor.j.onion at gmail.com Thu Mar 4 14:03:40 2021 From: senor.j.onion at gmail.com (=?utf-8?Q?Se=C3=B1or_J_Onion?=) Date: Thu, 4 Mar 2021 16:03:40 +0200 Subject: HEAD request to GCS caching body Message-ID: <78D56D31-919E-417D-8AA5-D52A0309570D@gmail.com> I use nginx as a forward proxy, with content caching. My app first performs a HEAD request to a Google Cloud Storage object. Then it may perform a GET request to the same object. The HEAD request (which comes first) causes a cache MISS. The content body length returned to the client is 0 (which is obviously correct). However, I think that the actual object is still included in the body from the upstream response. The reason I believe why the object gets added to the HEAD response from the upstream service (GCS) is for two reasons: a) When I subsequently do the GET request, I don't get a cache MISS (even though this is my first GET request to that object), but a cache REVALIDATED. The response from the upstream service is just a 304 with no body saying the cached object is still valid ($upstream_header_time and $upstream_response_time are identical == 0.421, which would then be correct if the cached object is still valid). So - this seems like the initial HEAD request cached the response also as a GET request with the body of the object that seemed to have been in the HEAD request b) Also, when I do the initial HEAD request, I can see that the $upstream_header_time==0.832, and the $upstream_response_time==2.459 ... If it's a HEAD request there really shouldn't be a body, so I would expect both $upstream_header_time and $upstream_response_time to be identical. However the 1.5sec time difference shows me that there is something in the body (even though when the request returns to the client it all seems correct again in terms of that the actual response.body.length is indeed 0.) So - the way this is working is messing with my app and HTTP analytics. I believe this to be behaving incorrectly. I don't know where the "error" lies. If it is a Google Cloud Storage bug that it passes along the object in the body of the HEAD request, or whether the issue lies with nginx, or with my configuration, or whether it is with the content caching part of nginx? Or perhaps it is behaving exactly as it should, and there is something about the HEAD/GET requests in combination with caching that I am not understanding. Any help to shed light on this strange behaviour would be greatly appreciated. 
My server block config is as follows: proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=image_cache:10m inactive=60d use_temp_path=off; server { listen 3128; location / { proxy_cache image_cache; proxy_cache_revalidate on; proxy_cache_lock on; proxy_cache_lock_timeout 5s; proxy_ignore_headers Cache-Control; proxy_cache_valid 200 60d; add_header X-Cache-Status $upstream_cache_status; resolver 8.8.8.8 ipv6=off; proxy_pass http://$http_host$uri$is_args$args; } } From gk at leniwiec.biz Thu Mar 4 14:07:39 2021 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Thu, 4 Mar 2021 15:07:39 +0100 Subject: HEAD request to GCS caching body In-Reply-To: <78D56D31-919E-417D-8AA5-D52A0309570D@gmail.com> References: <78D56D31-919E-417D-8AA5-D52A0309570D@gmail.com> Message-ID: <2501bc87-9681-f9cf-eafe-8938eb51d9c7@leniwiec.biz> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_convert_head W dniu 04.03.2021 o?15:03, Se?or J Onion pisze: > I use nginx as a forward proxy, with content caching. > > My app first performs a HEAD request to a Google Cloud Storage object. Then it may perform a GET request to the same object. > > The HEAD request (which comes first) causes a cache MISS. The content body length returned to the client is 0 (which is obviously correct). > > However, I think that the actual object is still included in the body from the upstream response. The reason I believe why the object gets added to the HEAD response from the upstream service (GCS) is for two reasons: > > a) When I subsequently do the GET request, I don't get a cache MISS (even though this is my first GET request to that object), but a cache REVALIDATED. The response from the upstream service is just a 304 with no body saying the cached object is still valid ($upstream_header_time and $upstream_response_time are identical == 0.421, which would then be correct if the cached object is still valid). > So - this seems like the initial HEAD request cached the response also as a GET request with the body of the object that seemed to have been in the HEAD request > > b) Also, when I do the initial HEAD request, I can see that the $upstream_header_time==0.832, and the $upstream_response_time==2.459 ... If it's a HEAD request there really shouldn't be a body, so I would expect both $upstream_header_time and $upstream_response_time to be identical. However the 1.5sec time difference shows me that there is something in the body (even though when the request returns to the client it all seems correct again in terms of that the actual response.body.length is indeed 0.) > > So - the way this is working is messing with my app and HTTP analytics. I believe this to be behaving incorrectly. > > > I don't know where the "error" lies. If it is a Google Cloud Storage bug that it passes along the object in the body of the HEAD request, or whether the issue lies with nginx, or with my configuration, or whether it is with the content caching part of nginx? > Or perhaps it is behaving exactly as it should, and there is something about the HEAD/GET requests in combination with caching that I am not understanding. > > Any help to shed light on this strange behaviour would be greatly appreciated. 
> > > My server block config is as follows: > > proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=image_cache:10m inactive=60d use_temp_path=off; > > server { > listen 3128; > > location / { > proxy_cache image_cache; > > proxy_cache_revalidate on; > > proxy_cache_lock on; > proxy_cache_lock_timeout 5s; > > proxy_ignore_headers Cache-Control; > proxy_cache_valid 200 60d; > > add_header X-Cache-Status $upstream_cache_status; > > resolver 8.8.8.8 ipv6=off; > proxy_pass http://$http_host$uri$is_args$args; > } > } From mailinglist at unix-solution.de Fri Mar 5 14:09:34 2021 From: mailinglist at unix-solution.de (basti) Date: Fri, 5 Mar 2021 15:09:34 +0100 Subject: How did nginx resolve names? Message-ID: <81ebaa96-256f-8018-64d5-6ee95ac30cac@unix-solution.de> Hello, first of all, I'm not sure if it is a php or nginx problem. Today I had the problem, that nginx run into "504 Gateway Time-out" when the first nameserver in /etc/resolv.conf did not answer. The php application is query some names (db-server for example). Did nginx use nsswitch? Did someone know how php resolve the names? If the first dns server answer nxdomain it's OK. But if the first one did *not* answer / get timeout it should ask the second one. Best Regards, From sumitkumarsingh509 at gmail.com Sat Mar 6 14:47:11 2021 From: sumitkumarsingh509 at gmail.com (Sumit Kumar) Date: Sat, 6 Mar 2021 20:17:11 +0530 Subject: Fwd: FW: QUIC based Nginx in Linux(Ubuntu) based systems In-Reply-To: References: Message-ID: *From: *Sumit Kumar (sumikum7) *Date: *Saturday, 6 March 2021 at 8:14 PM *To: *igor at sysoev.ru , nginx at nginx.org *Subject: *QUIC based Nginx in Linux(Ubuntu) based systems Hi Igor/Nginx-team, I am trying to develop a QUIC POC as part of my master's project and the results gained from here are also supposed to help me build a base for QUIC based Nginx adoption in Cisco's Products. I am simply trying to follow the official wiki here : https://www.nginx.com/blog/introducing-technology-preview-nginx-support-for-quic-http-3/ Well, we have been able to accomplish the successful building of the code (via README). The Nginx server is up and running as of now and I see no issues there. The next *pit stop* is finding a *successful QUIC client* which I already tried all the browsers and failing to find any of them working for QUIC as of now(I tested these browsers with QUIC enabled websites). One section, where we would like your kind help for the project/PoC would be having a *proper* nginx.conf being displayed on the wiki page so that we can quickly set up and play with QUIC based Nginx server. As of now, I am unable to see chrome/Firefox loading localhost or 127.0.0.1/ pages with the configs mentioned in your wiki. It says 404 as of now. (nginx.conf attached) *Access log :* 127.0.0.1 - - [06/Mar/2021:20:13:04 +0530] "GET / HTTP/1.1" 404 555 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.72 Safari/537.36" "-" "h3-29" Regards Sumit Kumar(sumikum7) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 86165 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx.conf Type: application/octet-stream Size: 3504 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Sun Mar 7 09:02:37 2021 From: nginx-forum at forum.nginx.org (tarikislam8091) Date: Sun, 07 Mar 2021 04:02:37 -0500 Subject: unknown directive "thread_pool" In-Reply-To: <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> References: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> <38649b7e498a796bed0ee4db7821d477.NginxMailingListEnglish@forum.nginx.org> Message-ID: <931e55fa2c97071a9c66e6cb8eae2768.NginxMailingListEnglish@forum.nginx.org> nginx: [emerg] unknown directive "thread_pool" in /etc/nginx/nginx.conf Posted at Nginx Forum: https://forum.nginx.org/read.php?2,259166,290904#msg-290904 From francis at daoine.org Sun Mar 7 11:24:06 2021 From: francis at daoine.org (Francis Daly) Date: Sun, 7 Mar 2021 11:24:06 +0000 Subject: How did nginx resolve names? In-Reply-To: <81ebaa96-256f-8018-64d5-6ee95ac30cac@unix-solution.de> References: <81ebaa96-256f-8018-64d5-6ee95ac30cac@unix-solution.de> Message-ID: <20210307112406.GY6011@daoine.org> On Fri, Mar 05, 2021 at 03:09:34PM +0100, basti wrote: Hi there, > Today I had the problem, that nginx run into "504 Gateway Time-out" when the > first nameserver in /etc/resolv.conf did not answer. > > The php application is query some names (db-server for example). > > Did nginx use nsswitch? Very roughly: if nginx sees a hostname that matters in the config file, nginx will use the system resolver at start-up time to turn that into an IP address, and will never resolve it again. If nginx instead sees a hostname at request-processing time, it will use whatever "resolver" was configured in the nginx config file. http://nginx.org/r/resolver nginx-at-startup does not use nsswitch directly; it uses your system resolver, and *that* might use nsswitch -- nginx does not care. nginx-at-request does not use anything from the system resolver. And: if the issue is that your php script is doing some name resolution, then nginx will not be involved in that -- your php will do whatever your php is configured to do -- probably something that your favourite search engine will mention when requesting "php hostname resolution". Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Mar 7 11:49:11 2021 From: francis at daoine.org (Francis Daly) Date: Sun, 7 Mar 2021 11:49:11 +0000 Subject: How nginx stream module reuse tcp connections? In-Reply-To: <718db6d63522d1e64df098ff99657d26.NginxMailingListEnglish@forum.nginx.org> References: <718db6d63522d1e64df098ff99657d26.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210307114911.GZ6011@daoine.org> On Thu, Feb 25, 2021 at 09:31:36PM -0500, allenhe wrote: Hi there, > As we know there are some keepalive options in the nginx http modules to > reuse tcp connections, That is: the http protocol, which nginx speaks both as a server and as a client, includes a facility to request that multiple requests can be made on the same tcp connection. > But are there corresponding options in the nginx stream module to achieve > the same? In general - no. You usually don't care about "stream", you care about the specific protocol that you are using on top of the tcp-or-other connection. And in general, nginx does not know the details of *that* protocol. In specific cases - maybe. What is the use case that you care about here? > How nginx persist tcp connection with downstream? It's a tcp connection. 
From ip:port to ip:port at time, and subsequent packets refer to previous ones. The tcp connection stays active until it gets closed - Reset or Fin, usually. > How nginx persist tcp connection with upstream? Same answer, unless you have a specific use-case that wants further explanation, I think. > What is the "session" meaning in the stream context? Where do you see that term? Possibly the context there, will provide the description? I see http://nginx.org/en/docs/njs/reference.html#stream and http://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#var_upstream_session_time and some mentions in http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html Generally, they seem to refer to "what you would expect they refer to". All of the exact what-nginx-does "documentation" is in the src/stream/ directory. The more general "summary" docs are at the website links above. If you can point at a piece of documentation that should describe what you want but does not, that will be useful information to help someone improve the documentation. Cheers, f -- Francis Daly francis at daoine.org From bee.lists at gmail.com Sun Mar 7 23:47:20 2021 From: bee.lists at gmail.com (bee.lists) Date: Sun, 7 Mar 2021 18:47:20 -0500 Subject: worker_connections exceed open file resource limit Message-ID: Big Sur has a warning that 1024 exceeds open file resource limit of 256. Is this normal, considering I?ve set my worker_connections to 1024 in nginx.conf? Also, is this a package manager designation? Cheers, Bee From bee.lists at gmail.com Sun Mar 7 23:52:24 2021 From: bee.lists at gmail.com (bee.lists) Date: Sun, 7 Mar 2021 18:52:24 -0500 Subject: invalid PID number in Big Sur Message-ID: Trying to get nginx working in Big Sur. Upon trying to start the server (nginx.conf test passes fine), I get this: nginx: [error] invalid PID number ?? in ?/opt/homebrew/var/run/nginx.pid? The file is there, but it seems empty. Given that homebrew now chooses that location, and is set in nginx.conf, how can I get this to work properly? I tried deleting it, but it just threw an error that it wasn?t there. Cheers, Bee From nginx-forum at forum.nginx.org Mon Mar 8 02:03:48 2021 From: nginx-forum at forum.nginx.org (allenhe) Date: Sun, 07 Mar 2021 21:03:48 -0500 Subject: How nginx stream module reuse tcp connections? In-Reply-To: <20210307114911.GZ6011@daoine.org> References: <20210307114911.GZ6011@daoine.org> Message-ID: Hi, Thanks for the reply! So, if there is no error and the downstream/upstream didn't actively close the connection, the nginx won't timeout and close the tcp connection with the downstream or with the upstream at all? is that correct? and I suppose the SO_KEEPALIVE option is turn on by default on connection sockets, right? The "session" refers to the ngx_stream_session_t in the source code, I want to know if the tcp load balance (select upstream) is triggered just once per "session" just like http does with ngx_http_request_t? Regards, Allen Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290851,290909#msg-290909 From yichun at openresty.com Mon Mar 8 22:52:51 2021 From: yichun at openresty.com (Yichun Zhang) Date: Mon, 8 Mar 2021 14:52:51 -0800 Subject: New Official Aarch64/ARM64 package repos for OpenResty Message-ID: Hi folks, Now we provide official OpenResty Aarch64/ARM64 package repos for Ubuntu 18.04/20.04, Debian 9/10, CentOS/RHEL 7/8, Fedora 32/33. See https://openresty.org/en/linux-packages.html for more details. Feedback welcome. Thanks! 
Best, Yichun -- Yichun Zhang Founder and CEO of OpenResty Inc. https://openresty.com/ From mdounin at mdounin.ru Tue Mar 9 01:39:10 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Mar 2021 04:39:10 +0300 Subject: worker_connections exceed open file resource limit In-Reply-To: References: Message-ID: Hello! On Sun, Mar 07, 2021 at 06:47:20PM -0500, bee.lists wrote: > Big Sur has a warning that 1024 exceeds open file resource limit of 256. > > Is this normal, considering I?ve set my worker_connections to 1024 in nginx.conf? > > Also, is this a package manager designation? For production use, you should either set worker_connections below the limit, or raise the limit. Without this, nginx might end up in a situation when it cannot accept new connections due to maxfiles limit being reached by the process, and cannot do anything with this. And hence nginx prints the warning that your system is misconfigured. Also, 1024 is somewhat low for any serious production use, so you probably want to raise the limit. In simple cases just "ulimit -n" should be enough (or you can use worker_rlimit_nofile as an easier alternative, http://nginx.org/r/worker_rlimit_nofile). In more complex cases you might need to adjust kernel limits, such as kern.maxfiles and kern.maxfilesperproc. Some macOS-specific instructions can be found, for example, at https://wilsonmar.github.io/maximum-limits/ (just the first link from Google "macos maxfiles", looks reasonable). On the other hand, given macOS, this is highly unlikely going to be a production use, so you can safely ignore the warning. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Mar 9 06:00:20 2021 From: nginx-forum at forum.nginx.org (klowd92) Date: Tue, 09 Mar 2021 01:00:20 -0500 Subject: pthread in nginx module Message-ID: Hi Everyone, I am developing an nginx module. The module requires some background processing via thread when the server is running. I have written my module to use pthread.h I have attempted to spawn a thread during the init_module function (specified in ngx_module_t of the module) Unfortunately this thread is terminated after the init_module function is done. I need the thread to continue running indefinitely. I have also tried to create a thread during the init_process function, which also fails. nginx is compiled with threadpool and fileio support. Is it possible to have my module use pthread.h and run pthead_create() to have create threads during the execution of nginx, which stay alive throught the life of the program? My attempts all failed as soon as the module return execution the nginx, the threads are terminated. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290912,290912#msg-290912 From bee.lists at gmail.com Tue Mar 9 10:29:02 2021 From: bee.lists at gmail.com (bee.lists) Date: Tue, 9 Mar 2021 05:29:02 -0500 Subject: worker_connections exceed open file resource limit In-Reply-To: References: Message-ID: Hi there. It is development. I?ve been running 1024 for over 10 years and now there?s a restriction on 256 workers, and I don?t know why. It seems to be set quite low. Given the fact this is a new warning, developers that have increased this setting have been wrong all this time? I?ll take a look at the OS specifics, but I?ve set that to 1024 and it didn?t take. I will have to review. Thanks for the response. > On Mar 8, 2021, at 8:39 PM, Maxim Dounin wrote: > > For production use, you should either set worker_connections below > the limit, or raise the limit. 
Without this, nginx might end up > in a situation when it cannot accept new connections due to > maxfiles limit being reached by the process, and cannot do > anything with this. And hence nginx prints the warning that your > system is misconfigured. > > Also, 1024 is somewhat low for any serious production use, so you > probably want to raise the limit. In simple cases just "ulimit > -n" should be enough (or you can use worker_rlimit_nofile as an > easier alternative, http://nginx.org/r/worker_rlimit_nofile). In > more complex cases you might need to adjust kernel limits, such as > kern.maxfiles and kern.maxfilesperproc. Some macOS-specific > instructions can be found, for example, at > https://wilsonmar.github.io/maximum-limits/ (just the first link > from Google "macos maxfiles", looks reasonable). > > On the other hand, given macOS, this is highly unlikely going to > be a production use, so you can safely ignore the warning. Cheers, Bee From pluknet at nginx.com Tue Mar 9 12:45:41 2021 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 9 Mar 2021 15:45:41 +0300 Subject: QUIC based Nginx in Linux(Ubuntu) based systems In-Reply-To: References: Message-ID: > On 6 Mar 2021, at 17:47, Sumit Kumar wrote: > > > I am trying to develop a QUIC POC as part of my master's project and the results gained from here are also supposed to help me build a base for QUIC based Nginx adoption in Cisco's Products. > > > > I am simply trying to follow the official wiki here : https://www.nginx.com/blog/introducing-technology-preview-nginx-support-for-quic-http-3/ > > > > Well, we have been able to accomplish the successful building of the code (via README). The Nginx server is up and running as of now and I see no issues there. The next pit stop is finding a successful QUIC client which I already tried all the browsers and failing to find any of them working for QUIC as of now(I tested these browsers with QUIC enabled websites). > > > > One section, where we would like your kind help for the project/PoC would be having a proper nginx.conf being displayed on the wiki page so that we can quickly set up and play with QUIC based Nginx server. As of now, I am unable to see chrome/Firefox loading localhost or 127.0.0.1/ pages with the configs mentioned in your wiki. See https://quic.nginx.org/README for an up to date nginx.conf example and known to work clients. You may first ensure the client is working correctly by trying a public HTTP/3 server such as quic.nginx.org; see also https://github.com/quicwg/base-drafts/wiki/Implementations > > It says 404 as of now. (nginx.conf attached) > That doesn't explain much. HTTP semantics such as status codes doesn't differ between protocol versions. > Access log : > > 127.0.0.1 - - [06/Mar/2021:20:13:04 +0530] "GET / HTTP/1.1" 404 555 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.72 Safari/537.36" "-" "h3-29" > It using HTTP/1.1 (for whatever reasons). Usually that means a failure to negotiate HTTP/3. -- Sergey Kandaurov From mdounin at mdounin.ru Tue Mar 9 13:34:34 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Mar 2021 16:34:34 +0300 Subject: pthread in nginx module In-Reply-To: References: Message-ID: Hello! On Tue, Mar 09, 2021 at 01:00:20AM -0500, klowd92 wrote: > Hi Everyone, > > I am developing an nginx module. > The module requires some background processing via thread when the server is > running. 
> > I have written my module to use pthread.h > I have attempted to spawn a thread during the init_module function > (specified in ngx_module_t of the module) > Unfortunately this thread is terminated after the init_module function is > done. I need the thread to continue running indefinitely. > > I have also tried to create a thread during the init_process function, which > also fails. > > nginx is compiled with threadpool and fileio support. > > Is it possible to have my module use pthread.h and run pthead_create() to > have create threads during the execution of nginx, which stay alive throught > the life of the program? > My attempts all failed as soon as the module return execution the nginx, the > threads are terminated. While you should be able to create threads if implemented properly, this is usually a bad idea. Quoting development guide (http://nginx.org/en/docs/dev/development_guide.html#threads_pitfalls): It is recommended to avoid using threads in nginx because it will definitely break things: most nginx functions are not thread-safe. It is expected that a thread will be executing only system calls and thread-safe library functions. If you need to run some code that is not related to client request processing, the proper way is to schedule a timer in the init_process module handler and perform required actions in timer handler. Internally nginx makes use of threads to boost IO-related operations, but this is a special case with a lot of limitations. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Mar 9 14:38:41 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Mar 2021 17:38:41 +0300 Subject: worker_connections exceed open file resource limit In-Reply-To: References: Message-ID: Hello! On Tue, Mar 09, 2021 at 05:29:02AM -0500, bee.lists wrote: > It is development. I?ve been running 1024 for over 10 years and > now there?s a restriction on 256 workers, and I don?t know why. > It seems to be set quite low. Given the fact this is a new > warning, developers that have increased this setting have been > wrong all this time? I would suggest that you've somehow misunderstood what happens in your configuration. The warning isn't new, and the limit is traditionally very low in macOS. My best guess is that you previously have the limit ajusted somehow (such as "ulimit -n 1024" in your ~/.profile or ~/.bashrc), and this adjustment no longer applies for some reason (e.g., clean install or switch to zsh). -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Tue Mar 9 15:11:11 2021 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Mar 2021 15:11:11 +0000 Subject: How nginx stream module reuse tcp connections? In-Reply-To: References: <20210307114911.GZ6011@daoine.org> Message-ID: <20210309151111.GA6011@daoine.org> On Sun, Mar 07, 2021 at 09:03:48PM -0500, allenhe wrote: Hi there, > So, if there is no error and the downstream/upstream didn't actively close > the connection, the nginx won't timeout and close the tcp connection with > the downstream or with the upstream at all? is that correct? I don't think that's what I wrote. It seems like a reasonable thing to hope exists; if you can point at the source-or-documentation implements that, then you can expect it to work that way. 
But given that the point of "stream", in the context of tcp, is to sit in the middle of a connection from the client, and a connection to the upstream, I would expect that nginx would keep the client connection open until the client closes it; and would keep the upstream connection open until upstream closes it, or until the client closes the client connection. (Unless other config overrides that.) > and I suppose > the SO_KEEPALIVE option is turn on by default on connection sockets, right? That's another of those things that does not need guessing. $ grep -rl SO_KEEPALIVE src/ I see it used in two files. (And the functions in which it is used, are called from some other files.) More usefully: I see it mentioned in the documentation of the stream listen directive and the stream proxy_socket_keepalive directive; that latter one is probably what you want here. > The "session" refers to the ngx_stream_session_t in the source code, I want > to know if the tcp load balance (select upstream) is triggered just once per > "session" just like http does with ngx_http_request_t? That sounds like a clear question (although I'm not sure the analogy with http is correct). Maybe someone who knows the right answer, will be willing to answer. I would guess that one client tcp connection == one stream session == one upstream tcp connection; but I have not tested it to check. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Mar 9 15:42:27 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Mar 2021 18:42:27 +0300 Subject: nginx-1.19.8 Message-ID: Changes with nginx 1.19.8 09 Mar 2021 *) Feature: flags in the "proxy_cookie_flags" directive can now contain variables. *) Feature: the "proxy_protocol" parameter of the "listen" directive, the "proxy_protocol" and "set_real_ip_from" directives in mail proxy. *) Bugfix: HTTP/2 connections were immediately closed when using "keepalive_timeout 0"; the bug had appeared in 1.19.7. *) Bugfix: some errors were logged as unknown if nginx was built with glibc 2.32. *) Bugfix: in the eventport method. -- Maxim Dounin http://nginx.org/ From xeioex at nginx.com Tue Mar 9 18:11:06 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 9 Mar 2021 21:11:06 +0300 Subject: njs-0.5.2 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release focuses on extending the modules functionality. Notable new features: - js_body_filter directive. The directive allows changing the response body. : nginx.conf: : js_import foo.js; : : location / { : js_body_filter foo.to_lower; : proxy_pass http://127.0.0.1:8081/; : } : : foo.js: : function to_lower(r, data, flags) { : r.sendBuffer(data.toLowerCase(), flags); : } : : export default {to_lower}; - njs.on('exit') callback. The "exit" hook allows to implement some cleanup logic before the VM instance is destroyed. : foo.js: : function handler(r) { : njs.on('exit', () => { : r.warn("DONE"); : }); : } You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs - Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: http://nginx.org/en/docs/njs/typescript.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.5.2 09 Mar 2021 nginx modules: *) Feature: added the "js_body_filter" directive. 
*) Feature: introduced the "status" property for stream session object. *) Feature: added njs.on('exit') callback support. *) Bugfix: fixed property descriptor reuse for not extensible objects. Thanks to Artem S. Povalyukhin. *) Bugfix: fixed Object.freeze() and friends according to the specification. Thanks to Artem S. Povalyukhin. *) Bugfix: fixed Function() in CLI mode. *) Bugfix: fixed for-in iteration of typed array values. Thanks to Artem S. Povalyukhin. From nginx-forum at forum.nginx.org Wed Mar 10 01:05:02 2021 From: nginx-forum at forum.nginx.org (andromeda123) Date: Tue, 09 Mar 2021 20:05:02 -0500 Subject: How did nginx resolve names? In-Reply-To: <20210307112406.GY6011@daoine.org> References: <20210307112406.GY6011@daoine.org> Message-ID: <13e2d16268ee8d99249bb149b848fbe7.NginxMailingListEnglish@forum.nginx.org> Hi Francis - Thank you for response explaining the difference in DNS resolution behavior during 1) start - up 2) run time. Can you please elaborate why start-up DNS resolution does not use `resolver` directive? Just the way run time resolution does? A follow up question - Is it feasible to use system resolver as the run time resolver by explicitly adding it to the `resolver` directive? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290889,290934#msg-290934 From nginx-forum at forum.nginx.org Wed Mar 10 03:17:43 2021 From: nginx-forum at forum.nginx.org (lingtao.klt) Date: Tue, 09 Mar 2021 22:17:43 -0500 Subject: [QUIC][BUG] function 'ngx_hkdf_extract ' has memory leak when use OPENSSL but not BoringSSL. Message-ID: In ngx_hkdf_expand, when use OPENSSL, the *pctx need to be free. ``` static ngx_int_t ngx_hkdf_expand(u_char *out_key, size_t out_len, const EVP_MD *digest, const uint8_t *prk, size_t prk_len, const u_char *info, size_t info_len) { #ifdef OPENSSL_IS_BORINGSSL if (HKDF_expand(out_key, out_len, digest, prk, prk_len, info, info_len) == 0) { return NGX_ERROR; } #else EVP_PKEY_CTX *pctx; pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL); if (EVP_PKEY_derive_init(pctx) <= 0) { return NGX_ERROR; } if (EVP_PKEY_CTX_hkdf_mode(pctx, EVP_PKEY_HKDEF_MODE_EXPAND_ONLY) <= 0) { return NGX_ERROR; } if (EVP_PKEY_CTX_set_hkdf_md(pctx, digest) <= 0) { return NGX_ERROR; } if (EVP_PKEY_CTX_set1_hkdf_key(pctx, prk, prk_len) <= 0) { return NGX_ERROR; } if (EVP_PKEY_CTX_add1_hkdf_info(pctx, info, info_len) <= 0) { return NGX_ERROR; } if (EVP_PKEY_derive(pctx, out_key, &out_len) <= 0) { return NGX_ERROR; } #endif return NGX_OK; } ``` Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290935,290935#msg-290935 From francis at daoine.org Wed Mar 10 14:15:17 2021 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Mar 2021 14:15:17 +0000 Subject: How did nginx resolve names? In-Reply-To: <13e2d16268ee8d99249bb149b848fbe7.NginxMailingListEnglish@forum.nginx.org> References: <20210307112406.GY6011@daoine.org> <13e2d16268ee8d99249bb149b848fbe7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210310141517.GA12454@daoine.org> On Tue, Mar 09, 2021 at 08:05:02PM -0500, andromeda123 wrote: Hi there, > Can you please elaborate why start-up DNS resolution does not use `resolver` > directive? Just the way run time resolution does? At a guess -- there does not need to be exactly one resolver directive; and there is no need to avoid the system resolver at startup because things are under less time pressure then. The "resolver" directive takes the address of a DNS server. 
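For illustration, a minimal sketch of the run-time case being discussed (the hostname and the 127.0.0.1 resolver address are placeholders, not taken from this thread):

    location / {
        resolver 127.0.0.1;
        set $backend "http://backend.example.com";
        proxy_pass $backend;
    }

Because proxy_pass uses a variable here, the hostname is resolved at request time through the configured "resolver"; with a plain "proxy_pass http://backend.example.com;" the name would instead be resolved once, while the configuration is being read, using the system resolver.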
The system resolver may or may not use DNS -- applications don't have to care what it uses, just that it works. > A follow up question - > Is it feasible to use system resolver as the run time resolver by explicitly > adding it to the `resolver` directive? You can use the same DNS server that the system resolver uses, yes. (Assuming your system resolver uses a DNS server.) But you won't be using "the system resolver". The difference there may not be important. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Mar 10 19:49:31 2021 From: nginx-forum at forum.nginx.org (salmaanp) Date: Wed, 10 Mar 2021 14:49:31 -0500 Subject: Nginx use temp file as request body in post subrequest Message-ID: <08241453a851703037555a4e57f810e6.NginxMailingListEnglish@forum.nginx.org> I'm building an nginx module and trying to create a post subrequest from the main request. The response from the main request buffers is saved to a nginx temp file and a post subrequest is created. The body of the post subrequest is from the nginx temp file instead of buffers. ``` ngx_http_request_t *sr; ngx_http_post_subrequest_t *ps; ps = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t)); if (ps == NULL) { ngx_log_stderr(0, "ngx_palloc failed"); return NGX_HTTP_INTERNAL_SERVER_ERROR; } //sub-request callback ps->handler = subrequest_done; ps->data = last_buf; ngx_http_test_conf_t *conf = ngx_http_get_module_loc_conf(r, ngx_http_test_filter_module); if (ngx_http_subrequest(r, &conf->test_location, NULL, &sr, ps, NGX_HTTP_SUBREQUEST_IN_MEMORY) != NGX_OK) { ngx_log_stderr(0, "failed to create subrequest"); return NGX_HTTP_INTERNAL_SERVER_ERROR; } ngx_str_t post_str = ngx_string("POST"); sr->method = NGX_HTTP_POST; sr->method_name = post_str; sr->request_body_in_file_only = 1; ngx_http_set_ctx(sr, ctx, ngx_http_test_filter_module); // Create request body and assign temp file sr->request_body = ngx_pcalloc(r->pool, sizeof(ngx_http_request_body_t)); if (sr->request_body == NULL) { ngx_log_stderr(0, "request body ngx_pcalloc failed"); return NGX_HTTP_INTERNAL_SERVER_ERROR; } sr->request_body->temp_file = ctx->temp_file; //sr->request_body->temp_file->offset = 0; (?) sr->header_only = 1; //sr->filter_need_in_memory = 1; sr->headers_in.content_length_n = ctx->file_size; ngx_log_stderr(0, "Created sub-request."); return NGX_OK; ``` However this does not send out the request body in the subrequest, the request gets stuck after finishing the handshake. If I use request buffers, it works fine. ``` sr->request_body->bufs->buf = payload_buf; sr->request_body->bufs->next = NULL; sr->request_body->buf = payload_buf; ``` Is there something I'm missing when using the temp file? I've verified the temp file is there and valid. Does anyone know if using a temp file is supported in post subrequest? or how else to achieve this. I was wondering if the post subrequest can stream the request body as new data is added to the temp file. Something like proxy buffering on, response buffers are added to temp file until the last one and the post subrequest keeps streaming the data from the tempfile. Cant use request buffers in the request body since I will run out of buffers after 64K, and the response can be larger than that. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290940,290940#msg-290940 From nginx-forum at forum.nginx.org Wed Mar 10 23:09:22 2021 From: nginx-forum at forum.nginx.org (salmaanp) Date: Wed, 10 Mar 2021 18:09:22 -0500 Subject: Nginx use temp file as request body in post subrequest In-Reply-To: <08241453a851703037555a4e57f810e6.NginxMailingListEnglish@forum.nginx.org> References: <08241453a851703037555a4e57f810e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <67322c8cf1a61d53d2ec2a98e345180a.NginxMailingListEnglish@forum.nginx.org> I guess we still need the request buf, so made these changes which seemed better. but the behavior is still same. Went though the code for upstream create request and looks like the body is always read from buffer. ngx_buf_t *payload_buf = NULL; payload_buf = ngx_create_temp_buf(r->pool, ctx->temp_buf_size); if (payload_buf == NULL) { ngx_log_stderr(0, "failed to ngx_create_temp_buf"); return NGX_HTTP_INTERNAL_SERVER_ERROR; } payload_buf->in_file = 1; payload_buf->temp_file = 1; payload_buf->file = &ctx->test_temp_file->file; sr->request_body->buf = payload_buf; sr->request_body->bufs->next = NULL; sr->request_body->bufs->buf = payload_buf; sr->header_only = 1; //sr->filter_need_in_memory = 1; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290940,290941#msg-290941 From vl at nginx.com Fri Mar 12 08:50:53 2021 From: vl at nginx.com (Vladimir Homutov) Date: Fri, 12 Mar 2021 11:50:53 +0300 Subject: [QUIC][BUG] function 'ngx_hkdf_extract ' has memory leak when use OPENSSL but not BoringSSL. In-Reply-To: References: Message-ID: On Tue, Mar 09, 2021 at 10:17:43PM -0500, lingtao.klt wrote: > In ngx_hkdf_expand, when use OPENSSL, the *pctx need to be free. > > > ``` > > static ngx_int_t > ngx_hkdf_expand(u_char *out_key, size_t out_len, const EVP_MD *digest, > const uint8_t *prk, size_t prk_len, const u_char *info, size_t > info_len) > { > #ifdef OPENSSL_IS_BORINGSSL > if (HKDF_expand(out_key, out_len, digest, prk, prk_len, info, info_len) > == 0) > { > return NGX_ERROR; > } > #else > > EVP_PKEY_CTX *pctx; > > pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL); > > if (EVP_PKEY_derive_init(pctx) <= 0) { > return NGX_ERROR; > } > > if (EVP_PKEY_CTX_hkdf_mode(pctx, EVP_PKEY_HKDEF_MODE_EXPAND_ONLY) <= 0) > { > return NGX_ERROR; > } > > if (EVP_PKEY_CTX_set_hkdf_md(pctx, digest) <= 0) { > return NGX_ERROR; > } > > if (EVP_PKEY_CTX_set1_hkdf_key(pctx, prk, prk_len) <= 0) { > return NGX_ERROR; > } > > if (EVP_PKEY_CTX_add1_hkdf_info(pctx, info, info_len) <= 0) { > return NGX_ERROR; > } > > if (EVP_PKEY_derive(pctx, out_key, &out_len) <= 0) { > return NGX_ERROR; > } > > #endif > > return NGX_OK; > } > > ``` Thank you for reporting, this was fixed: http://hg.nginx.org/nginx-quic/rev/1c48629cfa74 From nginx-forum at forum.nginx.org Fri Mar 12 14:08:35 2021 From: nginx-forum at forum.nginx.org (lingtao.klt) Date: Fri, 12 Mar 2021 09:08:35 -0500 Subject: [QUIC][BUG] function 'ngx_hkdf_extract ' has memory leak when use OPENSSL but not BoringSSL. 
In-Reply-To: References: Message-ID: <461e1d4c94c8b6b32ef46c96e6dbafeb.NginxMailingListEnglish@forum.nginx.org> No thx, my pleasure Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290935,290954#msg-290954 From community at thoughtmaybe.com Fri Mar 12 20:56:35 2021 From: community at thoughtmaybe.com (Jore) Date: Sat, 13 Mar 2021 07:56:35 +1100 Subject: Possible to make subdomain only accessible through 'embed' Message-ID: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> Hi there, I have pages served from "embed.domain.com" that I'd only like to be accessible when they're embedded in files served from "docs.domain.com" Visualisation below: Is it possible to lock down "embed.domain.com" so it can only be accessed through "docs.domain.com"? Can this be done with nginx conf or another method? Thank you! Jore -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ghhdhhpkijhodmio.png Type: image/png Size: 9611 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Sat Mar 13 05:10:15 2021 From: nginx-forum at forum.nginx.org (blason) Date: Sat, 13 Mar 2021 00:10:15 -0500 Subject: Stuck in weird issue - need help pls Message-ID: Hi Team, I am stuck in this weird issue. I have nginx as my reverse proxy set in front of Apache web server Some how my proxy_pass is not working as expected and getting 404 not found error while retrieving page. Can someone pls help? Reve Proxy IP - 10.122.0.4 Apache 10.122.0.3 On my Rev Proxy /etc/hosts file 10.122.0.3 ipbl.xxxx.xxx Here is my nginx stanza server { listen 80; server_name threat.list.xxx.xxx; # return 301 https://$server_name$request_uri; add_header X-Frame-Options "SAMEORIGIN"; modsecurity on; modsecurity_rules_file /etc/nginx/modsec/main.conf; error_page 404 403 /custom_404.html; location = /custom_404.html { root /usr/share/nginx/html; internal; } access_log /var/log/nginx/threatlist/access.log; error_log /var/log/nginx/threatlist/error.log; location / { if ($request_method !~ "GET") { return 403; break; } include /etc/nginx/threatlistacl/ipacls; deny all; client_max_body_size 10m; client_body_buffer_size 128k; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; proxy_temp_file_write_size 256k; proxy_connect_timeout 30s; proxy_pass http://ipbl.xxxx.xxxx; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Now if I access ipbl.xxx.xxx/ipbl.txt page it gets accessed successfully Request URL: http://threat.list.xxx.xxx/ipbl.txt Request Method: GET Status Code: 404 Not Found Remote Address: xxx.xx.xx.xx:80 Referrer Policy: strict-origin-when-cross-origin Connection: keep-alive Content-Type: text/html; charset=iso-8859-1 Date: Sat, 13 Mar 2021 04:50:53 GMT Server: nginx Transfer-Encoding: chunked Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9 Accept-Encoding: gzip, deflate Accept-Language: en-GB,en;q=0.9 Connection: keep-alive DNT: 1 Host: threat.list.xxx.xxx Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36 And my access.log xx.xx.xx.xx - - [13/Mar/2021:10:31:17 +0530] "GET /ipbl.txt HTTP/1.1" 404 183 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290958,290958#msg-290958 From wizard at koalatyworks.com Sat Mar 13 18:58:23 2021 From: wizard at koalatyworks.com (Ken Wright) Date: Sat, 13 Mar 2021 13:58:23 -0500 Subject: Nginx with PHP8.0? Message-ID: <01e90e29c5d2a7bef3f8c3bf86b2b1b33c5bc0a0.camel@koalatyworks.com> I recently upgraded from php7.4 to php8.0 and I find some of my applications no longer operate. When I checked the error log, I found the problem: the applications are still look for php7.4-fpm.sock, which no longer exists. I tried reinstalling php7.4, but with no success. Can anyone help me solve this issue? Ken Wright -- If you ever think international affairs make sense, remember this: ? Because a Serb shot an Austrian in Bosnia, Germany invaded Belgium. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From chris at cretaforce.gr Sat Mar 13 19:18:55 2021 From: chris at cretaforce.gr (Christos Chatzaras) Date: Sat, 13 Mar 2021 21:18:55 +0200 Subject: Nginx with PHP8.0? In-Reply-To: <01e90e29c5d2a7bef3f8c3bf86b2b1b33c5bc0a0.camel@koalatyworks.com> References: <01e90e29c5d2a7bef3f8c3bf86b2b1b33c5bc0a0.camel@koalatyworks.com> Message-ID: <1CDD1DAE-75FA-4022-8FC2-C9EB3FAB6090@cretaforce.gr> > On 13 Mar 2021, at 20:58, Ken Wright wrote: > > I recently upgraded from php7.4 to php8.0 and I find some of my > applications no longer operate. When I checked the error log, I found > the problem: the applications are still look for php7.4-fpm.sock, > which no longer exists. I tried reinstalling php7.4, but with no > success. Can anyone help me solve this issue? Did you change fastcgi_pass socket path in your virtual host? From nginx-forum at forum.nginx.org Sat Mar 13 21:52:01 2021 From: nginx-forum at forum.nginx.org (bubugian) Date: Sat, 13 Mar 2021 16:52:01 -0500 Subject: Works only in root.. Message-ID: <318cbee4b055810e7f86495ca699565c.NginxMailingListEnglish@forum.nginx.org> Hi GROUP ! I've a problem with NGINX (reverse proxy). All work perfectly if I assign internal server to root: server { listen 80; listen [::]:80; access_log /var/log/nginx/reverse-access.log; error_log /var/log/nginx/reverse-error.log; location /{ proxy_pass https://192.168.1.10; } } and ask browser to: NGINX_IP\ ___ and stop work if I use a path different from ROOT: server { listen 80; listen [::]:80; access_log /var/log/nginx/reverse-access.log; error_log /var/log/nginx/reverse-error.log; location /qnap{ proxy_pass https://192.168.1.10; } } and ask browser to: NGINX_IP\qnap ____ What am I doing wrong ? Thanks ! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290968,290968#msg-290968 From osa at freebsd.org.ru Sun Mar 14 01:36:49 2021 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Sun, 14 Mar 2021 04:36:49 +0300 Subject: Stuck in weird issue - need help pls In-Reply-To: References: Message-ID: Hi there, seems like the file you request is unavailable on the remote server. Could you run to make sure the the file is accessible: % curl -v http://10.122.0.3/ipbl.txt While I'm here in the configuration file you provided the backend desribed with a hostname, not an IP address. Is there any specific reason to do that? -- Sergey Osokin On Sat, Mar 13, 2021 at 12:10:15AM -0500, blason wrote: > Hi Team, > > I am stuck in this weird issue. 
I have nginx as my reverse proxy set in > front of Apache web server Some how my proxy_pass is not working as expected > and getting 404 not found error while retrieving page. Can someone pls > help? > > Reve Proxy IP - 10.122.0.4 > Apache 10.122.0.3 > > On my Rev Proxy /etc/hosts file > 10.122.0.3 ipbl.xxxx.xxx > > Here is my nginx stanza > > server { > listen 80; > server_name threat.list.xxx.xxx; > # return 301 https://$server_name$request_uri; > add_header X-Frame-Options "SAMEORIGIN"; > modsecurity on; > modsecurity_rules_file /etc/nginx/modsec/main.conf; > error_page 404 403 /custom_404.html; > location = /custom_404.html { > root /usr/share/nginx/html; > internal; > } > access_log /var/log/nginx/threatlist/access.log; > error_log /var/log/nginx/threatlist/error.log; > location / { > if ($request_method !~ "GET") { > return 403; > break; > } > include /etc/nginx/threatlistacl/ipacls; > deny all; > client_max_body_size 10m; > client_body_buffer_size 128k; > proxy_send_timeout 90; > proxy_read_timeout 90; > proxy_buffer_size 128k; > proxy_buffers 4 256k; > proxy_busy_buffers_size 256k; > proxy_temp_file_write_size 256k; > proxy_connect_timeout 30s; > proxy_pass http://ipbl.xxxx.xxxx; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > } > } > > Now if I access ipbl.xxx.xxx/ipbl.txt page it gets accessed successfully > > Request URL: http://threat.list.xxx.xxx/ipbl.txt > Request Method: GET > Status Code: 404 Not Found > Remote Address: xxx.xx.xx.xx:80 > Referrer Policy: strict-origin-when-cross-origin > Connection: keep-alive > Content-Type: text/html; charset=iso-8859-1 > Date: Sat, 13 Mar 2021 04:50:53 GMT > Server: nginx > Transfer-Encoding: chunked > Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9 > Accept-Encoding: gzip, deflate > Accept-Language: en-GB,en;q=0.9 > Connection: keep-alive > DNT: 1 > Host: threat.list.xxx.xxx > Upgrade-Insecure-Requests: 1 > User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36 > > And my access.log > > xx.xx.xx.xx - - [13/Mar/2021:10:31:17 +0530] "GET /ipbl.txt HTTP/1.1" 404 > 183 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36" > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290958,290958#msg-290958 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From osa at freebsd.org.ru Sun Mar 14 04:59:27 2021 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Sun, 14 Mar 2021 07:59:27 +0300 Subject: Works only in root.. In-Reply-To: <318cbee4b055810e7f86495ca699565c.NginxMailingListEnglish@forum.nginx.org> References: <318cbee4b055810e7f86495ca699565c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, is there any messages in error.log file? While I'm here could you guide me - is there any specific reason to use a back slash instead of a very common forward slash? Thanks. -- Sergey Osokin On Sat, Mar 13, 2021 at 04:52:01PM -0500, bubugian wrote: > Hi GROUP ! > > I've a problem with NGINX (reverse proxy). 
> > All work perfectly if I assign internal server to root: > > server { > listen 80; > listen [::]:80; > > access_log /var/log/nginx/reverse-access.log; > error_log /var/log/nginx/reverse-error.log; > > location /{ > proxy_pass https://192.168.1.10; > } > } > and ask browser to: NGINX_IP\ > > ___ > > and stop work if I use a path different from ROOT: > > server { > listen 80; > listen [::]:80; > > access_log /var/log/nginx/reverse-access.log; > error_log /var/log/nginx/reverse-error.log; > > location /qnap{ > proxy_pass https://192.168.1.10; > } > } > > and ask browser to: NGINX_IP\qnap > > ____ > > What am I doing wrong ? > > Thanks ! > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290968,290968#msg-290968 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ADD_SP at outlook.com Sun Mar 14 08:55:20 2021 From: ADD_SP at outlook.com (ADD SP) Date: Sun, 14 Mar 2021 08:55:20 +0000 Subject: About the order of execution of the modules. Message-ID: Hello! I am a developer of third-party modules. Assuming that all modules are registered in the same phase (e.g. NGX_HTTP_ACCESS_PHASE) I would like to know the order of execution between dynamic modules. Can I control the order of execution between dynamic modules? What is the order of execution between static and dynamic modules? Can I control the order of execution between static and dynamic modules? I have looked at https://www.nginx.com/blog/nginx-dynamic-modules-how-they-work/#modOrder but it only gives a general overview. ADD-SP ??? Windows 10 ????? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Mar 14 11:34:12 2021 From: nginx-forum at forum.nginx.org (blason) Date: Sun, 14 Mar 2021 07:34:12 -0400 Subject: Stuck in weird issue - need help pls In-Reply-To: References: Message-ID: <83860d6b9ad0913f1b7297fa65bfabe4.NginxMailingListEnglish@forum.nginx.org> Well - That was not the nginx issue and was an apache2 issue. I had virtual hosts defined on apache2 server and apache2 was not finding a match even through config was there. Hence I added the entry in hosts file and it worked. Plus moved my vshost config file to apache2.conf file. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290958,290973#msg-290973 From francis at daoine.org Sun Mar 14 14:50:43 2021 From: francis at daoine.org (Francis Daly) Date: Sun, 14 Mar 2021 14:50:43 +0000 Subject: Possible to make subdomain only accessible through 'embed' In-Reply-To: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> References: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> Message-ID: <20210314145043.GA16474@daoine.org> On Sat, Mar 13, 2021 at 07:56:35AM +1100, Jore wrote: Hi there, > I have pages served from "embed.domain.com" that I'd only like to be > accessible when they're embedded in files served from "docs.domain.com" > Is it possible to lock down "embed.domain.com" so it can only be accessed > through "docs.domain.com"? If you mean "a http request to the embed.domain.com site must only get a response if the request was made by a user clicking a link on the docs.domain.com site", then that can't be done reliably. That's the nature of http. You could do something like block external access to embed.domain.com altogether, and use nginx to reverse-proxy requests to it behind http://docs.domain.com/embed/, for example. 
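As a rough sketch of that idea (the backend address and port are placeholders, and the details would depend on the real setup):

    server {
        server_name docs.domain.com;

        location /embed/ {
            # the embed content, reachable only on an internal address
            proxy_pass http://127.0.0.1:8080/;
            proxy_set_header Host embed.domain.com;
        }
    }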
That would mean that all external http requests would go to docs.domain.com; but it still does not mean that a request to docs.domain.com/embed/ came from a user clicking a link somewhere else on docs.domain.com. It may or may not match what you want. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Mar 14 15:18:51 2021 From: francis at daoine.org (Francis Daly) Date: Sun, 14 Mar 2021 15:18:51 +0000 Subject: About the order of execution of the modules. In-Reply-To: References: Message-ID: <20210314151851.GB16474@daoine.org> On Sun, Mar 14, 2021 at 08:55:20AM +0000, ADD SP wrote: Hi there, > I am a developer of third-party modules. Assuming that all modules are registered in the same phase (e.g. NGX_HTTP_ACCESS_PHASE) I would like to know the order of execution between dynamic modules. Can I control the order of execution between dynamic modules? What is the order of execution between static and dynamic modules? Can I control the order of execution between static and dynamic modules? > Does the "ngx_module_order" content at http://nginx.org/en/docs/dev/development_guide.html#adding_new_modules answer your questions? If not: is there a specific order you wish to enforce between your modules and stock modules; or between your modules and other non-stock modules? Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Mar 14 15:33:02 2021 From: nginx-forum at forum.nginx.org (petecooper) Date: Sun, 14 Mar 2021 11:33:02 -0400 Subject: Check for existence of PHP socket availability with `nginx -t` Message-ID: <861966ad7ff5410eeb359023d4cbccef.NginxMailingListEnglish@forum.nginx.org> Hello. I have some servers running PHP applications on Nginx via PHP-FPM. Each server uses a named socket in the filesystem. Nginx can often pass its configuration test but the server does not function as expected if the named socket file is not there (i.e. PHP-FPM is not running as expected). Is it possible to integrate a check for the existence of that socket file in the `nginx -t` process? I am able to create a shell script to check for the socket and then run `nginx -t`, but I am wondering if there is a native route to check. The server configs can have additional directives added outside of the PHP-speciflc `location` blocks, if that makes it more viable. Thank you, and best wishes. Pete Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290976,290976#msg-290976 From wizard at koalatyworks.com Sun Mar 14 16:45:05 2021 From: wizard at koalatyworks.com (Ken Wright) Date: Sun, 14 Mar 2021 12:45:05 -0400 Subject: Nginx with PHP8.0? In-Reply-To: <1CDD1DAE-75FA-4022-8FC2-C9EB3FAB6090@cretaforce.gr> References: <01e90e29c5d2a7bef3f8c3bf86b2b1b33c5bc0a0.camel@koalatyworks.com> <1CDD1DAE-75FA-4022-8FC2-C9EB3FAB6090@cretaforce.gr> Message-ID: <159625cdb2517b86cacea71321d5eec05f56738c.camel@koalatyworks.com> On Sat, 2021-03-13 at 21:18 +0200, Christos Chatzaras wrote: > > > On 13 Mar 2021, at 20:58, Ken Wright > > wrote: > > > > I recently upgraded from php7.4 to php8.0 and I find some of my > > applications no longer operate.? When I checked the error log, I > > found > > the problem:? the applications are still look for php7.4-fpm.sock, > > which no longer exists.? I tried reinstalling php7.4, but with no > > success.? Can anyone help me solve this issue? > > Did you change fastcgi_pass socket path in your virtual host? > I checked all the virtual hosts, and they all have been updated. Still, some apps run and some don't. 
I finally got Nextcloud running, sort of, but others are still failing to start with just a white screen. Ken
From teward at thomas-ward.net Sun Mar 14 16:50:44 2021 From: teward at thomas-ward.net (Thomas Ward) Date: Sun, 14 Mar 2021 16:50:44 -0400 Subject: Nginx with PHP8.0? In-Reply-To: <159625cdb2517b86cacea71321d5eec05f56738c.camel@koalatyworks.com> Message-ID: <4Dz5CV66c0z3VXk@mail.syn-ack.link> A white screen indicates some failure in the PHP processor. That is not an nginx error, but rather a problem with PHP that caused a fatal processing error. Check your PHP logs or enable error reporting to the page in PHP so that it spits out the error data you need to understand why it failed processing. ('fastcgi_intercept_errors on;' might help in the PHP block to report the errors to the nginx error logs so you can see the error cause.) Sent from my Sprint Samsung Galaxy Note10+. -------- Original message -------- From: Ken Wright Date: 3/14/21 12:45 (GMT-05:00) To: nginx at nginx.org Subject: Re: Nginx with PHP8.0? On Sat, 2021-03-13 at 21:18 +0200, Christos Chatzaras wrote: > > > On 13 Mar 2021, at 20:58, Ken Wright > > wrote: > > > > I recently upgraded from php7.4 to php8.0 and I find some of my > > applications no longer operate. When I checked the error log, I > > found > > the problem: the applications are still look for php7.4-fpm.sock, > > which no longer exists. I tried reinstalling php7.4, but with no > > success. Can anyone help me solve this issue? > > Did you change fastcgi_pass socket path in your virtual host? > I checked all the virtual hosts, and they all have been updated. Still, some apps run and some don't. I finally got Nextcloud running, sort of, but others are still failing to start with just a white screen. Ken
From ADD_SP at outlook.com Sun Mar 14 17:43:19 2021 From: ADD_SP at outlook.com (SP ADD) Date: Sun, 14 Mar 2021 17:43:19 +0000 Subject: About the order of execution of the modules. In-Reply-To: <20210314151851.GB16474@daoine.org> References: <20210314151851.GB16474@daoine.org> Message-ID: Hi there! Thanks for your answer! In my module, ngx_module_type is set to HTTP, so ngx_module_order doesn't apply. I'm sorry, I'm not sure what "stock module" means; I guess it means a built-in module, i.e. a module that can be enabled or disabled just by the parameters of the "configure" script, without having to download the source code of the module, e.g. ngx_http_flv_module and ngx_http_rewrite_module. If I am correct, I will list the questions I would like to ask below, and hopefully they are clear and unambiguous. * How to control the order of execution between the dynamic stock module and my dynamic module. * How to control the order of execution between the dynamic stock module and my static module. * How to control the order of execution between my dynamic non-stock modules and my dynamic modules. * How to control the order of execution between dynamic non-stock modules and my static modules. * How to control the order of execution between static stock modules and my dynamic modules. * How to control the order of execution between static stock modules and my static modules.
Regarding the last question, have I looked at https://forum.nginx.org/read.php?2,246978,246999#msg-246999 and can the answers there be used to control the order between the static stock modules and my static modules? ADD-SP ________________________________ ???: nginx ?? Francis Daly ????: 2021?3?14? 23:18 ???: nginx at nginx.org ??: Re: About the order of execution of the modules. On Sun, Mar 14, 2021 at 08:55:20AM +0000, ADD SP wrote: Hi there, > I am a developer of third-party modules. Assuming that all modules are registered in the same phase (e.g. NGX_HTTP_ACCESS_PHASE) I would like to know the order of execution between dynamic modules. Can I control the order of execution between dynamic modules? What is the order of execution between static and dynamic modules? Can I control the order of execution between static and dynamic modules? > Does the "ngx_module_order" content at http://nginx.org/en/docs/dev/development_guide.html#adding_new_modules answer your questions? If not: is there a specific order you wish to enforce between your modules and stock modules; or between your modules and other non-stock modules? Cheers, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ADD_SP at outlook.com Sun Mar 14 17:55:54 2021 From: ADD_SP at outlook.com (SP ADD) Date: Sun, 14 Mar 2021 17:55:54 +0000 Subject: About the order of execution of the modules. In-Reply-To: <20210314151851.GB16474@daoine.org> References: , <20210314151851.GB16474@daoine.org> Message-ID: There are some errors in the content of my reply just now, I will re-post the question I wanted to ask, please see the previous reply for the rest of the content. * How to control the order of execution between dynamic stock modules and my dynamic module. * How to control the order of execution between dynamic stock modules and my static module. * How to control the order of execution between dynamic non-stock modules and my dynamic modules. * How to control the order of execution between dynamic non-stock modules and my static modules. * How to control the order of execution between static stock modules and my dynamic modules. * How to control the order of execution between static stock modules and my static modules. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Mar 14 20:04:14 2021 From: francis at daoine.org (Francis Daly) Date: Sun, 14 Mar 2021 20:04:14 +0000 Subject: About the order of execution of the modules. In-Reply-To: References: <20210314151851.GB16474@daoine.org> Message-ID: <20210314200414.GC16474@daoine.org> On Sun, Mar 14, 2021 at 05:43:19PM +0000, SP ADD wrote: Hi there, What follows is based on my reading of the docs; I may have missed something, and I am happy to be corrected by someone who knows what really happens. > In my module, ngx_module_type is set to HTTP, so ngx_module_order doesn't apply. It is not clear to me that that "so" statement is true. Have you documentation to show that it is? What happened when you tried it? > I'm sorry, I'm not sure what "stock module" means, I guess it means a built-in module, i.e. a module that can be enabled or disabled just by the parameters of the "configure" script, without having to download the source code of the module, e.g. ngx_http_flv_module and ngx_http_rewrite_module. > You are correct. 
Perhaps "standard" modules would have been a better term for me to use. > If I am correct, I will list below the questions I would like to ask and hopefully the questions below are clear and unambiguous. What control do you want, specifically? My reading suggests that the "static" module order is "from the ./configure line, --add-module order", and the "dynamic" module order is "from the nginx.conf file, load_module order", where the dynamic modules have the option to list which named modules they should run before. (And: order is only relevant within the same phase; you did mention that already.) If you want more control than that, you may want to ship an nginx binary configured the way you want it, without load_module support. (And you probably don't want to do that.) > * How to control the order of execution between the dynamic stock module and my dynamic module. As the developer, I think that you do not know which stock modules are dynamic. But you do know the names of the (current) stock modules. If you want to be "after" a stock module, "load_module" after it in nginx.conf. If you want to be "before" a stock module, set your ngx_module_order to your name then the stock module name. > * How to control the order of execution between the dynamic stock module and my static module. You don't. You will be "before" (absent circumstances you can't control). > * How to control the order of execution between my dynamic non-stock modules and my dynamic modules. (Corrected to remove the first "my".) load_module order, unless you-or-they know each other's names and set ngx_module_order; in that case, whichever was last in load_module gets to be before whichever names they nominate. > * How to control the order of execution between dynamic non-stock modules and my static modules. Same as dynamic stock module, except you probably do not know their names in advance. > * How to control the order of execution between static stock modules and my dynamic modules. You will be "after", unless you set your ngx_module_order. > * How to control the order of execution between static stock modules and my static modules. ./configure order. > Regarding the last question, have I looked at https://forum.nginx.org/read.php?2,246978,246999#msg-246999 and can the answers there be used to control the order between the static stock modules and my static modules? > I suspect that what is there is still fundamentally correct, yes. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Mar 15 05:02:35 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Mar 2021 08:02:35 +0300 Subject: Check for existence of PHP socket availability with `nginx -t` In-Reply-To: <861966ad7ff5410eeb359023d4cbccef.NginxMailingListEnglish@forum.nginx.org> References: <861966ad7ff5410eeb359023d4cbccef.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Sun, Mar 14, 2021 at 11:33:02AM -0400, petecooper wrote: > I have some servers running PHP applications on Nginx via PHP-FPM. Each > server uses a named socket in the filesystem. Nginx can often pass its > configuration test but the server does not function as expected if the named > socket file is not there (i.e. PHP-FPM is not running as expected). > > Is it possible to integrate a check for the existence of that socket file in > the `nginx -t` process? I am able to create a shell script to check for the > socket and then run `nginx -t`, but I am wondering if there is a native > route to check. Short answer is: no. 
Long answer: nginx does not care if the upstream socket is reacheable or not when it parses configuration, it is only important when processing a particular request. That is, nginx can (and will) start just fine if the socket doesn't exist (or, similarly, upstream server's IP address isn't reachable). And that's what "nginx -t" checks for: if nginx itself will be able to start. As long as your upstream server is reachable when a request comes in, the request will be passed to it. If the backend server is not reacheable for some reason - an error will be returned to the client (and you can configure custom processing for the error using the error_page directive - for example, you can configure nginx to try a different upstream server instead). In particular, this makes it possible to restart nginx and your backend servers independently. If your use case is simple enough and you want both nginx and corresponding PHP-FPM processes to be running at the same time, and, for example, don't want to start nginx if PHP-FPM isn't running - this is something to check by means external to nginx. -- Maxim Dounin http://mdounin.ru/ From community at thoughtmaybe.com Mon Mar 15 05:24:27 2021 From: community at thoughtmaybe.com (Jore) Date: Mon, 15 Mar 2021 16:24:27 +1100 Subject: Possible to make subdomain only accessible through 'embed' In-Reply-To: <20210314145043.GA16474@daoine.org> References: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> <20210314145043.GA16474@daoine.org> Message-ID: <79054732-f565-25df-3215-90d49d37201d@thoughtmaybe.com> Hi there, Thanks for your reply, I appreciate it. Apologies I wasn't more clear, but yes, I mean "a HTTP request to the embed.domain.com site must only get a response if the request was made by a user clicking a link on the docs.domain.com site"... Am I correct in understanding that you mean it's not reliable as headers can be spoofed? In any event, I just want to brainstorm some implementations of how to do that even and weigh up the pros/cons... I should also explain more! The end goal is to run Mediawiki on "embed.domain.com", but to not have the Wiki accessible to the whole world. At the moment, it */is/* accessible to the whole world but I have it locked down so that all pages require a login. But that's undesirable for our users though as it's one more username/password for them to remember and that's annoying for them when the whole purpose of heading to the Wiki in the first place is likely to find information to help them with using their other accounts on our infrastructure. More context is that "docs.domain.com" is where a Nextcloud instance is served from, and so the desired result would be to only allow access to the Wiki /through/ Nextcloud (by adding the Wiki to Nextcloud as an "external site"). And so for experimenting with the current situation where the Wiki is locked down and requires a login, to get around that, I've looked at OAuth, but Mediawiki does not support Nextcloud as an OAuth provider (as far as I can tell), and without going into other crazy login setups like LDAP, I'd actually prefer to try and go the other way---unrestrict the Wiki so that viewing and editing pages don't require a login, but the pages are only served *if* they've been requested through Nextcloud (from docs.domain.com)... Maybe this is impossible, but can anybody imagine how this could be done and what the pros/cons of the approach could be? 
And as for the reason to prevent the Wiki from being accessible to the world in the first place, that is: while there wouldn't be extremely sensitive information on the Wiki per se, some content would reveal in some instances the general backends of some of our infrastructure, which the whole world doesn't need to know... So yeah, any questions/ideas/suggestions/commentary welcome! Thanks, Jore On 15/3/21 1:50 am, Francis Daly wrote: > On Sat, Mar 13, 2021 at 07:56:35AM +1100, Jore wrote: > > Hi there, > >> I have pages served from "embed.domain.com" that I'd only like to be >> accessible when they're embedded in files served from "docs.domain.com" >> Is it possible to lock down "embed.domain.com" so it can only be accessed >> through "docs.domain.com"? > If you mean "a http request to the embed.domain.com site must only get > a response if the request was made by a user clicking a link on the > docs.domain.com site", then that can't be done reliably. That's the > nature of http. > > You could do something like block external access to embed.domain.com > altogether, and use nginx to reverse-proxy requests to it behind > http://docs.domain.com/embed/, for example. > > That would mean that all external http requests would go to > docs.domain.com; but it still does not mean that a request to > docs.domain.com/embed/ came from a user clicking a link somewhere else > on docs.domain.com. > > It may or may not match what you want. > > Good luck with it, > > f -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 15 06:23:38 2021 From: nginx-forum at forum.nginx.org (petecooper) Date: Mon, 15 Mar 2021 02:23:38 -0400 Subject: Check for existence of PHP socket availability with `nginx -t` In-Reply-To: References: Message-ID: Hello Maxim. > nginx does not care if the upstream socket is reacheable or not > when it parses configuration, it is only important when processing > a particular request. That is, nginx can (and will) start just > fine if the socket doesn't exist (or, similarly, upstream server's > IP address isn't reachable). And that's what "nginx -t" > checks for: if nginx itself will be able to start. > > [?] > > If your use case is simple enough and you want both nginx and > corresponding PHP-FPM processes to be running at the same time, > and, for example, don't want to start nginx if PHP-FPM isn't > running - this is something to check by means external to nginx. Perfect answer - thank you very much for your clarification. With gratitude and best wishes, Pete Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290976,290985#msg-290985 From ADD_SP at outlook.com Mon Mar 15 06:49:44 2021 From: ADD_SP at outlook.com (ADD SP) Date: Mon, 15 Mar 2021 06:49:44 +0000 Subject: About the order of execution of the modules. In-Reply-To: <20210314200414.GC16474@daoine.org> References: <20210314151851.GB16474@daoine.org> , <20210314200414.GC16474@daoine.org> Message-ID: Hi there! Thank you very much for your help! > It is not clear to me that that "so" statement is true. I made a stupid mistake. I didn't recompile nginx and my modules when I debugged with GDB, so I thought that "ngx_module_order" wouldn't solve my problem, but when I retested it I found that "ngx_module_order " solved my problem. Thank you. Here are some minor issues I would like to discuss with you. > My reading suggests that the "static" module order is "from the ... 
/configure line, --add-module order", and the "dynamic" module order is "from the nginx.conf file, load_module order", where the dynamic modules have the option to list which named modules they should run before. Is the order of execution of dynamic modules determined by the "load_module" order? Where did you find this? I looked at http://nginx.org/en/docs/ngx_core_module.html#load_module but did not find this statement. > As the developer, I think that you do not know which stock modules are But you do know the names of the (current) stock modules. It seems that some of the stock modules (standard modules) can also be compiled as dynamic modules, e.g. " --with-stream=dynamic", so perhaps I have misunderstood something. > You will be "after", unless you set your ngx_module_order. I found out through GDB debugging that my module takes effect before "ngx_http_rewrite_module" if I don't set "ngx_module_order". Perhaps third party dynamic modules are executed before the stock (standard) modules? ADD-SP -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 15 11:11:54 2021 From: nginx-forum at forum.nginx.org (bubugian) Date: Mon, 15 Mar 2021 07:11:54 -0400 Subject: Works only in root.. In-Reply-To: <318cbee4b055810e7f86495ca699565c.NginxMailingListEnglish@forum.nginx.org> References: <318cbee4b055810e7f86495ca699565c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <16c0343e47c9f1e398e85408cbd275e0.NginxMailingListEnglish@forum.nginx.org> I'm trying to hide my QNAP NAS behind NGINX... without success. With more attention I discover that with server { listen 80; listen [::]:80; access_log /var/log/nginx/reverse-access.log; error_log /var/log/nginx/reverse-error.log; location /qnap{ proxy_pass https://192.168.1.10; } } when I ask browser to visit: NGINX_IP\qnap something good happens. In fact, I read the correct webpage name and I discover that the error page is not from NGINX but from QNAP. It seems that NGINX does not have success to get some of the resource behind my qnap NAS. Is this possible? If yes, why ? Why all works perfectly when, in .conf, if I change: - location /qnap{ with - location / { ? Error log shows: [error] 1722#1722: *74 open() "usr/share/nginx/html/cgi-bin/images/error/logo_gray.png failed (2: no such file or directory), client 192.168.xx.xx, server: , request: "GET /gci-bin/images/error/logo_gray.png HTTP/1.1 host: 192.168.xx.NGINX_address, referrer "http://192.168.xx.NGINX_address/qnap" I have two questions: - does NGINX make a local copy of remote resource before building page ? If yes I understand why it try to open a local folder (usr/share/nginx/html/cgi-bin/images/error/logo_gray.png) But, this folder (that corresponds to the address of target) is obviously empty. - is it possible that target does not answer to NGINX ? Why? Should I change something in .conf? >While I'm here could you guide me - is there any specific reason to >use a back slash instead of a very common forward slash? Whith '/' all things seems to work.. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290968,290987#msg-290987 From francis at daoine.org Mon Mar 15 17:25:56 2021 From: francis at daoine.org (Francis Daly) Date: Mon, 15 Mar 2021 17:25:56 +0000 Subject: About the order of execution of the modules. 
In-Reply-To: References: <20210314151851.GB16474@daoine.org> <20210314200414.GC16474@daoine.org> Message-ID: <20210315172556.GD16474@daoine.org> On Mon, Mar 15, 2021 at 06:49:44AM +0000, ADD SP wrote: Hi there, > > It is not clear to me that that "so" statement is true. > > I made a stupid mistake. I didn't recompile nginx and my modules when I debugged with GDB, so I thought that "ngx_module_order" wouldn't solve my problem, but when I retested it I found that "ngx_module_order " solved my problem. Thank you. Good that you found a thing that works :-) As with most things software, there is "what is documented as the API that is intended to work in the future", and there is "the current implementation". If you can't find a "here is the recipe to ensure that your module always executes in *this* position in the list" document, that might be because the product does not intend to make guarantees that that method will remain working in the future. So whatever is there today, should work today; and if the execution order matters to you, then you will have to test after any update to see whether things still work for you. I suspect that, in the main, you are not expected to care about the full module order. But the source is there, and nothing stops you from changing it to work the way you want it to work. (At least: in your build.) > Here are some minor issues I would like to discuss with you. > > > My reading suggests that the "static" module order is "from the > ... /configure line, --add-module order", and the "dynamic" module order is > "from the nginx.conf file, load_module order", where the dynamic modules > have the option to list which named modules they should run before. > > Is the order of execution of dynamic modules determined by the "load_module" order? Where did you find this? I looked at http://nginx.org/en/docs/ngx_core_module.html#load_module but did not find this statement. Probably the simplest thing at this stage is just to point you at the implementation -- that is exactly what the current version of nginx does, and any attempted explanations that contradict it are wrong. auto/module sets the shell variable [this_module]_ORDER; auto/make reads that and populates the C variable "char *ngx_module_order[]" for this module; and src/core/nginx.c says what happens when a load_module directive is read from nginx.conf, which includes reading ngx_module_order and calling ngx_add_module() from src/core/ngx_module.c > > As the developer, I think that you do not know which stock modules are > But you do know the names of the (current) stock modules. > > It seems that some of the stock modules (standard modules) can also be compiled as dynamic modules, e.g. " --with-stream=dynamic", so perhaps I have misunderstood something. The person building nginx can choose (with some limitations) which standard modules are included as static; which third-party modules are included as static and in what order; and (sort of) whether dynamic modules can be usefully used. If dynamic modules can be used, then the person configuring nginx can choose which standard-or-third-party dynamic modules are loaded, and in which order. > > You will be "after", unless you set your ngx_module_order. > > I found out through GDB debugging that my module takes effect before "ngx_http_rewrite_module" if I don't set "ngx_module_order". Perhaps third party dynamic modules are executed before the stock (standard) modules? Yes, I was probably unclear/misleading on that, sorry. 
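A sketch of that separate-server{} idea, assuming a made-up name "qnap.example.home" that points at the nginx machine (192.168.1.10 is the QNAP address from your config):

    server {
        listen 80;
        listen [::]:80;
        server_name qnap.example.home;

        location / {
            proxy_pass https://192.168.1.10;
        }
    }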
There is the order of loading the modules, and there is the order of running of modules, and they are backwards with respect to each other. I was intending to use before/after in the "loading" sense, so, unless I got confused somewhere, please reconsider my mail in that light. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Mar 15 17:45:05 2021 From: francis at daoine.org (Francis Daly) Date: Mon, 15 Mar 2021 17:45:05 +0000 Subject: Works only in root.. In-Reply-To: <16c0343e47c9f1e398e85408cbd275e0.NginxMailingListEnglish@forum.nginx.org> References: <318cbee4b055810e7f86495ca699565c.NginxMailingListEnglish@forum.nginx.org> <16c0343e47c9f1e398e85408cbd275e0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210315174505.GE16474@daoine.org> On Mon, Mar 15, 2021 at 07:11:54AM -0400, bubugian wrote: Hi there, there have been some mentions of QNAP on the list in the past. I'm not aware of any QNAP owner actually following up with a working recipe, though. > location /qnap{ > proxy_pass https://192.168.1.10; > } You probably want an extra / on those two lines: location /qnap/ { proxy_pass https://192.168.1.10/; } That will probably not fully work; if you can show the request you make and the response you get, then maybe someone will be able to offer a suggestion of alternate configuration. That is: what is the response when you do something like curl -i http://[nginx_ip]/qnap/ Possibly it is a http redirect to another url; maybe it is some content. > when I ask browser to visit: NGINX_IP\qnap something good happens. > In fact, I read the correct webpage name and I discover that the error page > is not from NGINX but from QNAP. > > It seems that NGINX does not have success to get some of the resource behind > my qnap NAS. Is this possible? If yes, why ? > Why all works perfectly when, in .conf, if I change: > - location /qnap{ > with > - location / { Whatever application is on the "upstream" server (the QNAP) is happy when it is at the "root" of the web service (all requests below /); but may not be happy when it is somewhere else (all requests below /qnap/). > Error log shows: > [error] 1722#1722: *74 open() > "usr/share/nginx/html/cgi-bin/images/error/logo_gray.png failed (2: no such > file or directory), client 192.168.xx.xx, server: , request: "GET > /gci-bin/images/error/logo_gray.png HTTP/1.1 host: 192.168.xx.NGINX_address, > referrer "http://192.168.xx.NGINX_address/qnap" That looks like the content that QNAP returns include a link to something that starts with /cgi-bin/; and your nginx config does not say how to handle anything that starts with /cgi-bin/, and so your nginx tries to serve the file from its filesystem. > I have two questions: > - does NGINX make a local copy of remote resource before building page ? No. nginx does not build a page. Your browser makes one request, and then later (maybe) your browser makes one request. Each request is independent. > - is it possible that target does not answer to NGINX ? Why? Should I change > something in .conf? You seem to be getting a response, so the target is answering. If it is too difficult to configure things so that the upstream (QNAP) service will work below the url /qnap/, then it might be easier for you to use a separate nginx server{} block, with a different server_name, and in *that* block, do the location / { proxy_pass... } thing. And then if you access nginx using that server name instead or the IP, things will probably work for qnap. 
If you access nginx using a different name, whatever other config you have for that should work. Good luck with it, f -- Francis Daly francis at daoine.org From ADD_SP at outlook.com Tue Mar 16 06:50:19 2021 From: ADD_SP at outlook.com (ADD SP) Date: Tue, 16 Mar 2021 06:50:19 +0000 Subject: About the order of execution of the modules. In-Reply-To: <20210315172556.GD16474@daoine.org> References: <20210314151851.GB16474@daoine.org> <20210314200414.GC16474@daoine.org> , <20210315172556.GD16474@daoine.org> Message-ID: Hi there! My questions have all been resolved, thank you very much! ? If you can't find a "here is the recipe to ensure that your module always executes in *this* position in the list" document, that might be because the product does not intend to make guarantees that that method will remain working in the future. I'm emailing a question mainly because of this. Because I need to make my module compatible with other modules, but I don't want to use undefined behavior (behavior that is not defined in the standard documentation and is completely dependent on the specific implementation) to be compatible with other modules. ? There is the order of loading the modules, and there is the order of running of modules, and they are backwards with respect to each other. I was intending to use before/after in the "loading" sense, so, unless I got confused somewhere, please reconsider my mail in that light. This is something I stumbled upon while debugging. It is indeed easy to confuse the order of initialization with the order of taking effect (execution or running), so maybe my question is a bit ambiguous. Thanks for your help! ADD-SP -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Mar 17 09:22:21 2021 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Mar 2021 09:22:21 +0000 Subject: Possible to make subdomain only accessible through 'embed' In-Reply-To: <79054732-f565-25df-3215-90d49d37201d@thoughtmaybe.com> References: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> <20210314145043.GA16474@daoine.org> <79054732-f565-25df-3215-90d49d37201d@thoughtmaybe.com> Message-ID: <20210317092221.GF16474@daoine.org> On Mon, Mar 15, 2021 at 04:24:27PM +1100, Jore wrote: Hi there, > "a HTTP request to the > embed.domain.com site must only get a response if the request was made by a > user clicking a link on the docs.domain.com site"... Am I correct in > understanding that you mean it's not reliable as headers can be spoofed? It's not reliable because HTTP says that every request is independent. And requests to two different hostnames are "extra"-independent. If you want to try to add some control, you have to decide what level of "allow what you want blocked" and "block what you want allowed" you are happy with. > In > any event, I just want to brainstorm some implementations of how to do that > even and weigh up the pros/cons... In principle: you could (dynamically) change all of the links on docs.domain.com pointing to embed.domain.com to be limited based on time and whatever other request-based criteria you like; and then change all of the content on embed.domain.com to include similar links; and change the service that provides that content to validate the requests before continuing. In practice: you probably don't want to do that. > The end goal is to run Mediawiki on "embed.domain.com", but to not have the > Wiki accessible to the whole world. 
At the moment, it */is/* accessible to > the whole world but I have it locked down so that all pages require a login. > But that's undesirable for our users though as it's one more > username/password for them to remember and that's annoying for them when the > whole purpose of heading to the Wiki in the first place is likely to find > information to help them with using their other accounts on our > infrastructure. I don't fully understand what restrictions you want to apply here. (That's ok; I don't have to understand it.) Maybe you could allow unrestricted access to the "here is how to reset your password" information, and require a password for everything else? Alternatively: if you were to reverse-proxy the MediaWiki instance at docs.domain.com/embed/, then you could potentially set a cookie on docs.domain.com, and require that a suitable cookie is present for any requests to docs.domain.com/embed/. That might be the closest to what you want? Good luck with it, f -- Francis Daly francis at daoine.org From petite.abeille at gmail.com Wed Mar 17 09:49:02 2021 From: petite.abeille at gmail.com (Petite Abeille) Date: Wed, 17 Mar 2021 10:49:02 +0100 Subject: [OT] Fwd: [ann] publictext References: Message-ID: Slightly off-topic, but perhaps of interest ? some sort of postmodernist plain text protocol server. Squarely on the other end of the complexity spectrum. > Begin forwarded message: > > From: ??? > Subject: [ann] publictext > Date: March 15, 2021 at 14:08:47 GMT+1 > To: Lua mailing list > Reply-To: ??? , Lua mailing list > > publictext ? a small ucspi-tcp text://protocol server. [1][2][3][4] > > ~150 lines of Lua code: > > https://github.com/textprotocol/publictext/blob/main/publictext > > Usage example: > > # echo -e 'text://txt.textprotocol.org/\r\n' | nc txt.textprotocol.org 1961 > 20 text/plain; charset=utf-8 > TEXT://PROTOCOL > > => geo:37.429167,-122.138056 PALO ALTO, CA 94301, USA > => tag:txt.textprotocol.org,2021-03-07:textprotocol at github rel=me > => text://txt.textprotocol.org/icon.png rel=icon > => text://txt.textprotocol.org/license.txt rel=license CC0-1.0 > > ? > ??? > > > [1] https://github.com/textprotocol/publictext > [2] http://cr.yp.to/ucspi-tcp.html > [3] https://textprotocol.org > [4] http://cr.yp.to/ucspi-tcp/tcpserver.html > > ? > ??? > https://textprotocol.org > https://textprotocol.org/contact > ?0? -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Wed Mar 17 11:35:50 2021 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 17 Mar 2021 14:35:50 +0300 Subject: [OT] Fwd: [ann] publictext In-Reply-To: References: Message-ID: Enterprise level disruptive innovation! Unfortunately it doesn't come from Google, won't fly. (just kidding!) On 17.03.2021 12:49, Petite Abeille wrote: > Slightly off-topic, but perhaps of interest ? some sort of postmodernist > plain text protocol server. > > Squarely on the other end of the complexity spectrum.? > >> Begin forwarded message: >> >> *From: *??? > > >> *Subject: **[ann] publictext* >> *Date: *March 15, 2021 at 14:08:47 GMT+1 >> *To: *Lua mailing list > >> *Reply-To: *??? > >, Lua mailing list >> > >> >> publictext ? a small ucspi-tcp text://protocol >> server. 
[1][2][3][4] >> >> ~150 lines of Lua code: >> >> https://github.com/textprotocol/publictext/blob/main/publictext >> >> >> Usage example: >> >> # echo -e 'text://txt.textprotocol.org/\r\n' | nc txt.textprotocol.org >> 1961 >> 20 text/plain; charset=utf-8 >> TEXT://PROTOCOL >> >> => geo:37.429167,-122.138056 PALO ALTO, CA 94301, USA >> => tag:txt.textprotocol.org,2021-03-07:textprotocol at github rel=me >> => text://txt.textprotocol.org/icon.png rel=icon >> => text://txt.textprotocol.org/license.txt rel=license CC0-1.0 >> >> ? >> ??? >> >> >> [1] https://github.com/textprotocol/publictext >> [2] http://cr.yp.to/ucspi-tcp.html >> [3] https://textprotocol.org >> [4] http://cr.yp.to/ucspi-tcp/tcpserver.html >> >> ? >> ??? >> https://textprotocol.org >> https://textprotocol.org/contact >> > > ?0? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From petite.abeille at gmail.com Wed Mar 17 11:48:57 2021 From: petite.abeille at gmail.com (Petite Abeille) Date: Wed, 17 Mar 2021 12:48:57 +0100 Subject: [OT] [ann] publictext In-Reply-To: References: Message-ID: <814499F4-25BD-4EB1-80B2-ED852E8C7040@gmail.com> True, true, but, but... IT RUNS OVER QUIC! :D /dns/textprotocol.org/udp/1968/quic > On Mar 17, 2021, at 12:35, Maxim Konovalov wrote: > > Enterprise level disruptive innovation! > > Unfortunately it doesn't come from Google, won't fly. > > (just kidding!) > > On 17.03.2021 12:49, Petite Abeille wrote: >> Slightly off-topic, but perhaps of interest ? some sort of postmodernist >> plain text protocol server. >> >> Squarely on the other end of the complexity spectrum. >> >>> Begin forwarded message: >>> >>> *From: *??? >> > >>> *Subject: **[ann] publictext* >>> *Date: *March 15, 2021 at 14:08:47 GMT+1 >>> *To: *Lua mailing list > >>> *Reply-To: *??? >> >, Lua mailing list >>> > >>> >>> publictext ? a small ucspi-tcp text://protocol >>> server. [1][2][3][4] >>> >>> ~150 lines of Lua code: >>> >>> https://github.com/textprotocol/publictext/blob/main/publictext >>> >>> >>> Usage example: >>> >>> # echo -e 'text://txt.textprotocol.org/\r\n' | nc txt.textprotocol.org >>> 1961 >>> 20 text/plain; charset=utf-8 >>> TEXT://PROTOCOL >>> >>> => geo:37.429167,-122.138056 PALO ALTO, CA 94301, USA >>> => tag:txt.textprotocol.org,2021-03-07:textprotocol at github rel=me >>> => text://txt.textprotocol.org/icon.png rel=icon >>> => text://txt.textprotocol.org/license.txt rel=license CC0-1.0 >>> >>> ? >>> ??? >>> >>> >>> [1] https://github.com/textprotocol/publictext >>> [2] http://cr.yp.to/ucspi-tcp.html >>> [3] https://textprotocol.org >>> [4] http://cr.yp.to/ucspi-tcp/tcpserver.html >>> >>> ? >>> ??? >>> https://textprotocol.org >>> https://textprotocol.org/contact >>> >> >> ?0? >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > -- > Maxim Konovalov ?0? From community at thoughtmaybe.com Wed Mar 17 12:21:58 2021 From: community at thoughtmaybe.com (Jore) Date: Wed, 17 Mar 2021 23:21:58 +1100 Subject: Possible to make subdomain only accessible through 'embed' In-Reply-To: <20210317092221.GF16474@daoine.org> References: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> <20210314145043.GA16474@daoine.org> <79054732-f565-25df-3215-90d49d37201d@thoughtmaybe.com> <20210317092221.GF16474@daoine.org> Message-ID: Hi there, Thanks for getting back. 
On 17/3/21 8:22 pm, Francis Daly wrote: > Alternatively: if you were to reverse-proxy the MediaWiki instance at > docs.domain.com/embed/, then you could potentially set a cookie on > docs.domain.com, and require that a suitable cookie is present for any > requests to docs.domain.com/embed/. > > That might be the closest to what you want? Is this all possible through a nginx config? If so, are there some examples you could point me to? Or do you know if I'd have to get Mediawiki modified to do something like this? > Maybe you could allow unrestricted access to the "here is how to reset > your password" information, and require a password for everything else? This is also possible, but again, it would be good to do a lockdown in some way other than requiring the user to enter a password... Thanks! Jore -------------- next part -------------- An HTML attachment was scrubbed... URL: From hobson42 at gmail.com Wed Mar 17 14:59:11 2021 From: hobson42 at gmail.com (Ian Hobson) Date: Wed, 17 Mar 2021 14:59:11 +0000 Subject: Possible to make subdomain only accessible through 'embed' In-Reply-To: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> References: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> Message-ID: <9b256bf5-8aae-6f00-8c2c-f59546cb141c@gmail.com> Hi, I have not tried it, but I believe if you set a cookie on .domain.com to say that they are logged in (Note the leading .) , then you can read that cookie in all sub-domains, and check they are logged in to domain.com. You might have to use domain.com, instead of docs.domain.com for the outer level. RFC6265 is the standard that modern browsers follow https://tools.ietf.org/html/rfc6265 The clause you might need in your server {} are of nginx is if ($cookie_fileURI != "mymagicvalue") { return 403; } Where "mymagicvalue" was put in the cookie upon successful login. Regards Ian On 12/03/2021 20:56, Jore wrote: > Hi there, > > I have pages served from "embed.domain.com" that I'd only like to be > accessible when they're embedded in files served from "docs.domain.com" > > Visualisation below: > > Is it possible to lock down "embed.domain.com" so it can only be > accessed through "docs.domain.com"? > > Can this be done with nginx conf or another method? > > Thank you! > Jore > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ian Hobson Tel (+351) 910 418 473 -- This email has been checked for viruses by AVG. https://www.avg.com From amarbs at gmail.com Thu Mar 18 10:00:44 2021 From: amarbs at gmail.com (Amarnath B S) Date: Thu, 18 Mar 2021 15:30:44 +0530 Subject: On-demand SSL Cert key loading Message-ID: All, We have a requirement where the certificate keys need to be loaded only in Nginx memory. That is, saving it in the local FS is not an option. Also, we need the cert key to be present in Nginx memory only when there are active lookups to it (requests to the virtual server using the cert). When there are no requests, the cert key should be flushed from the memory and reloaded from a KMS (key mgmt server) on-demand through client authentication (Nginx authenticating to the KMS as a client). Pls provide pointers if you have insight into such or a similar requirement. I referred to best practices in this Nginx blog . However, not all of our requirements are met. There are a few questions: a) Does the ngx_http_ssl_module load the certificate on demand or during config parse? 
Once loaded, does it always stay in memory, whether used or not? b) Is it possible to load the certificate key through a sub-request on-demand (that is when SSL hand-shake is initiated)? Thanks in advance, -Amar -------------- next part -------------- An HTML attachment was scrubbed... URL: From community at thoughtmaybe.com Thu Mar 18 12:43:54 2021 From: community at thoughtmaybe.com (Jore) Date: Thu, 18 Mar 2021 23:43:54 +1100 Subject: Possible to make subdomain only accessible through 'embed' In-Reply-To: <9b256bf5-8aae-6f00-8c2c-f59546cb141c@gmail.com> References: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> <9b256bf5-8aae-6f00-8c2c-f59546cb141c@gmail.com> Message-ID: Hi there, Thank you for the suggestion. Jore On 18/3/21 1:59 am, Ian Hobson wrote: > Hi, > > I have not tried it, but I believe if you set a cookie > on .domain.com to say that they are logged in (Note the leading .) , > then you can read that cookie in all sub-domains, and check they are > logged in to domain.com. > > You might have to use domain.com, instead of docs.domain.com for the > outer level. > > RFC6265 is the standard that modern browsers follow > https://tools.ietf.org/html/rfc6265 > > The clause you might need in your server {} are of nginx is > > if ($cookie_fileURI != "mymagicvalue") { return 403; } > > Where "mymagicvalue" was put in the cookie upon successful login. > > Regards > > Ian > > On 12/03/2021 20:56, Jore wrote: >> Hi there, >> >> I have pages served from "embed.domain.com" that I'd only like to be >> accessible when they're embedded in files served from "docs.domain.com" >> >> Visualisation below: >> >> Is it possible to lock down "embed.domain.com" so it can only be >> accessed through "docs.domain.com"? >> >> Can this be done with nginx conf or another method? >> >> Thank you! >> Jore >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Mar 18 20:35:48 2021 From: francis at daoine.org (Francis Daly) Date: Thu, 18 Mar 2021 20:35:48 +0000 Subject: Possible to make subdomain only accessible through 'embed' In-Reply-To: References: <6eebdb2e-cc06-fc9c-6b5d-d29f365e2689@thoughtmaybe.com> <20210314145043.GA16474@daoine.org> <79054732-f565-25df-3215-90d49d37201d@thoughtmaybe.com> <20210317092221.GF16474@daoine.org> Message-ID: <20210318203548.GG16474@daoine.org> On Wed, Mar 17, 2021 at 11:21:58PM +1100, Jore wrote: > On 17/3/21 8:22 pm, Francis Daly wrote: Hi there, > > Alternatively: if you were to reverse-proxy the MediaWiki instance at > > docs.domain.com/embed/, then you could potentially set a cookie on > > docs.domain.com, and require that a suitable cookie is present for any > > requests to docs.domain.com/embed/. > > > > That might be the closest to what you want? > > Is this all possible through a nginx config? If so, are there some examples > you could point me to? I have not tried it; but some web searching indicates that it is possible to install MediaWiki to be below /embed/ on the embed.domain.com server; and you might also be able to set $wgServer to tell it that it "really" is on the docs.domain.com server, and you can optionally set $wgSquidServers so that MediaWiki will use the X-Forwarded-For header. 
In that case, the nginx side would basically be location ^~/embed/ { proxy_pass http://embed.domain.com; } And you might want to include "proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;" or similar things too. Also in that location{}, you would do whatever tests you want, to see if this request should be allowed or not. That might be an "if $cookie", or a fuller auth_request, or something written in one of the embedded languages. In this case, the first allow-or-not decision is made on the nginx side, without involving MediaWiki at all. > Or do you know if I'd have to get Mediawiki modified to do something like > this? I don't think a MediaWiki code change would be needed. There might be useful config changes that could be made, but may not be compulsory. I suspect that things would work more cleanly if MediaWiki knows that it is below /embed/ instead of being at /; but it might be possible to work in the latter case. Good luck with it, f -- Francis Daly francis at daoine.org From jcfowler at pacbell.net Fri Mar 19 22:01:19 2021 From: jcfowler at pacbell.net (John Fowler) Date: Fri, 19 Mar 2021 15:01:19 -0700 Subject: location root not working as described References: Message-ID: Good Day, I have nginx running in a docker container and configured to use let?s encrypt for certificates services. The location redirect to /var/www/certbot from /.well-known/acme-challenge does not seem to work. Shown below is the contents of the target location and the contents. ****** df -h & directory contents of nginx instance. root at b15f5f234fbb:/var/log/nginx# df -h Filesystem Size Used Avail Use% Mounted on overlay 28G 9.1G 17G 35% / tmpfs 64M 0 64M 0% /dev tmpfs 461M 0 461M 0% /sys/fs/cgroup shm 64M 0 64M 0% /dev/shm coolwave.lese-fowler.us:/volume1/homes/pi/nginx/certbot/conf 1.8T 1.2T 685G 63% /etc/letsencrypt /dev/root 28G 9.1G 17G 35% /etc/hosts coolwave.lese-fowler.us:/volume1/homes/pi/nginx/conf 1.8T 1.2T 685G 63% /etc/nginx/conf coolwave.lese-fowler.us:/volume1/homes/pi/nginx/certbot/www 1.8T 1.2T 685G 63% /var/www/certbot coolwave.lese-fowler.us:/volume1/homes/pi/www 1.8T 1.2T 685G 63% /var/www/html coolwave.lese-fowler.us:/volume1/homes/pi/nginx/logs 1.8T 1.2T 685G 63% /var/log/nginx coolwave.lese-fowler.us:/volume1/homes/pi/nginx/logs 1.8T 1.2T 685G 63% /usr/share/nginx/logs tmpfs 461M 0 461M 0% /proc/asound tmpfs 461M 0 461M 0% /sys/firmware root at b15f5f234fbb:/var/log/nginx# cd /var/www/certbot root at b15f5f234fbb:/var/www/certbot# ls -l total 12 -rw-r--r-- 1 root root 1727 Mar 18 03:22 index.html -rw-r--r-- 1 root root 1531 Mar 18 03:23 test.html -rw-r--r-- 1 root root 1533 Mar 9 02:59 test2.html root at b15f5f234fbb:/var/www/certbot# ******************************** Access & Error Logs. 
root at b15f5f234fbb:/var/log/nginx# 10.0.0.2 - - [19/Mar/2021:21:28:27 +0000] "GET /.well-known/acme-challenge/index.html HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" "-" 2021/03/19 21:28:27 [warn] 15#15: no resolver defined to resolve r3.o.lencr.org while requesting certificate status, responder: r3.o.lencr.org, certificate: "/etc/letsencrypt/live/cyva.lese-fowler.us/fullchain.pem" 2021/03/19 21:28:27 [error] 15#15: *1 open() "/var/www/certbot/.well-known/acme-challenge/index.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/index.html HTTP/1.1", host: "cyva.lese-fowler.us" 10.0.0.2 - - [19/Mar/2021:21:28:32 +0000] "GET /.well-known/acme-challenge/ HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" "-" 2021/03/19 21:28:32 [error] 15#15: *1 "/var/www/certbot/.well-known/acme-challenge/index.html" is not found (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/ HTTP/1.1", host: "cyva.lese-fowler.us" 10.0.0.2 - - [19/Mar/2021:21:28:40 +0000] "GET /.well-known/acme-challenge/test.html HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" "-" 2021/03/19 21:28:40 [error] 15#15: *1 open() "/var/www/certbot/.well-known/acme-challenge/test.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/19 21:29:27 [info] 15#15: *2 client timed out (110: Connection timed out) while waiting for request, client: 10.0.0.2, server: 0.0.0.0:443 2021/03/19 21:31:56 [error] 15#15: *4 open() "/var/www/certbot/.well-known/acme-challenge/test.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 10.0.0.2 - - [19/Mar/2021:21:31:56 +0000] "GET /.well-known/acme-challenge/test.html HTTP/1.1" 404 555 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" "-" 10.0.0.2 - - [19/Mar/2021:21:31:56 +0000] "GET /favicon.ico HTTP/1.1" 200 318 "https://cyva.lese-fowler.us/.well-known/acme-challenge/test.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" "-" 2021/03/19 21:32:56 [info] 15#15: *5 client timed out (110: Connection timed out) while waiting for request, client: 10.0.0.2, server: 0.0.0.0:443 ********************************* Nginx.conf # Designed to host port 80 & 443 for web viewing # port 8080 is configured to pass traffic to pi3.lese-fowler.us # port 1883 and 8883 are configured to pass tcp streams to # pi3.lese-fowler.us as well. 
# load_module modules/nginx-plus-module-headers-more # load_module modules/nginx-plus-module-set-misc # load_module modules/ngx_stream_proxy_module load_module /etc/nginx/modules/ngx_stream_module.so; # # configuration file /etc/nginx/nginx.conf: #user nobody; worker_processes 1; # error_log /var/log/nginx/error.log debug; error_log /var/log/nginx/error.log debug; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; #pid /var/log/nginx/nginx.pid; events { worker_connections 1024; } # http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; # # sendfile on; # tcp_nopush on; # keepalive_timeout 65; # set_real_ip_from 10.0.0.0/16; set_real_ip_from 172.16.0.0/24; set_real_ip_from 192.168.0.0/24; real_ip_header X-Real-IP; # server { # Add headers to serve security related headers # Before enabling Strict-Transport-Security headers please read into this # topic first. #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always; # # WARNING: Only add the preload option once you read about # the consequences in https://hstspreload.org/. This option # will add the domain to a hardcoded list that is shipped # in all major browsers and getting removed from this list # could take several months. add_header Referrer-Policy "no-referrer" always; add_header X-Content-Type-Options "nosniff" always; add_header X-Download-Options "noopen" always; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Permitted-Cross-Domain-Policies "none" always; add_header X-Robots-Tag "none" always; add_header X-XSS-Protection "1; mode=block" always; # add_header cache-control: public, max-age=120; # Remove X-Powered-By, which is an information leak fastcgi_hide_header X-Powered-By; listen 80; server_name cyva.lese-fowler.us 172.16.10.30; access_log /var/log/nginx/cyva.access.log main; root /var/www/html; # location ^~ /.well-known/acme-challenge { # location ^~ /.well-known/ { # allow all; # } # alias vs root location /.well-known/acme-challenge/ { allow all; root /var/www/certbot; try_files $uri =405; } location / { allow all; } location = /robots.txt { allow all; log_not_found on; access_log on; } # serve static files # location ~ ^/(images|javascript|js|css|flash|media|static)/ { #location ~* / { # alias /var/www/html; # expires 30d; #} # redirect http to https www return 301 https://cyva.lese-fowler.us$request_uri; # Enable gzip but do not remove ETag headers gzip on; gzip_vary on; gzip_comp_level 4; gzip_min_length 256; gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; # # Uncomment if your server is build with the ngx_pagespeed module # This module is currently not supported. 
#pagespeed off; # # location / { # rewrite ^ /index.html; #} location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ { deny all; } location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) { deny all; } # } server { # http2 listen [::]:443 ssl; listen 443 ssl; server_name cyva.lese-fowler.us 172.16.10.30; root /var/www/html; # SSL code ssl_certificate /etc/letsencrypt/live/cyva.lese-fowler.us/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/cyva.lese-fowler.us/privkey.pem; ssl_session_timeout 1d; ssl_session_cache shared:SharedNixCraftSSL:10m; ssl_session_tickets off; # TLS 1.2 & 1.3 only ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; # HSTS (ngx_http_headers_module is required) (63072000 seconds) add_header Strict-Transport-Security "max-age=63072000" always; # OCSP stapling ssl_stapling on; ssl_stapling_verify on; # location ~ /.well-known { # allow all; # } # verify chain of trust of OCSP response using Root CA and Intermediate certs # ssl_trusted_certificate /etc/nginx/ssl/fullchain.pem; # location ^~ /.well-known/acme-challenge/ { # location ^~ /.well-known/ { location /.well-known/acme-challenge/ { allow all; root /var/www/certbot; } location / { allow all; index index.html; } } # simple reverse-proxy server { listen 8080; server_name cyva.lese-fowler.us 172.16.10.30; access_log /var/log/nginx/reverse.access.log main; # serve static files location / { proxy_pass https://pi3.lese-fowler.us:80; } } } # End of html block # *********************** # The following code is not permitted until we find the correct nginx code # stream { log_format st-main '$remote_addr - [$time_local] $status $bytes_sent '; server { listen 8084; access_log /var/log/nginx/mqtt-non.log st-main; #TCP traffic will be forwarded to the "stream_backend" upstream group proxy_pass cyva.lese-fowler.us:1883; } # server { listen 8085; access_log /var/log/nginx/mqtt.log st-main; #TCP traffic will be forwarded to the specified server proxy_pass cyva.lese-fowler.us:8883; } } # From francis at daoine.org Fri Mar 19 23:17:52 2021 From: francis at daoine.org (Francis Daly) Date: Fri, 19 Mar 2021 23:17:52 +0000 Subject: location root not working as described In-Reply-To: References: Message-ID: <20210319231752.GH16474@daoine.org> On Fri, Mar 19, 2021 at 03:01:19PM -0700, John Fowler wrote: Hi there, > I have nginx running in a docker container and configured to use let?s encrypt for certificates services. > The location redirect to /var/www/certbot from /.well-known/acme-challenge does not seem to work. Compare http://nginx.org/r/root and http://nginx.org/r/alias You probably want alias. > root at b15f5f234fbb:/var/log/nginx# cd /var/www/certbot > root at b15f5f234fbb:/var/www/certbot# ls -l > total 12 > -rw-r--r-- 1 root root 1727 Mar 18 03:22 index.html > -rw-r--r-- 1 root root 1531 Mar 18 03:23 test.html > -rw-r--r-- 1 root root 1533 Mar 9 02:59 test2.html > root at b15f5f234fbb:/var/www/certbot# > 2021/03/19 21:28:27 [error] 15#15: *1 open() "/var/www/certbot/.well-known/acme-challenge/index.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/index.html HTTP/1.1", host: "cyva.lese-fowler.us" That GET request trying to open that file, is because "root" was used. 
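To spell that out with the paths involved here (this mapping is just how "root" always behaves, nothing specific to this setup):

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
        # "root" appends the full request URI to its value, so
        #   GET /.well-known/acme-challenge/test.html
        # is looked up as
        #   /var/www/certbot/.well-known/acme-challenge/test.html
        # which does not exist on this filesystem, hence the 404.
    }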
> # location ^~ /.well-known/acme-challenge { > # location ^~ /.well-known/ { > # allow all; > # } > # alias vs root > location /.well-known/acme-challenge/ { > allow all; > root /var/www/certbot; > try_files $uri =405; > } Probably use "location ^~", but definitely use "alias /var/www/certbot/;". Cheers, f -- Francis Daly francis at daoine.org From jcfowler at pacbell.net Sat Mar 20 22:08:03 2021 From: jcfowler at pacbell.net (John Fowler) Date: Sat, 20 Mar 2021 15:08:03 -0700 Subject: location root not working as described References: <639C45B8-FA6B-40F4-90A8-C3A8730FCD14.ref@pacbell.net> Message-ID: <639C45B8-FA6B-40F4-90A8-C3A8730FCD14@pacbell.net> Good Day, Thank you for your suggestion, but I had tried using the alias as well and that did not work as advertised. The following is the results of using the alias and ?^~?. I do see some odd behavior. The following access snippet 2021/03/20 21:44:21 [error] 14#14: *7 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:44:37 [error] 14#14: *5 open() "/var/www/certbotindex.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/index.html HTTP/1.1", host: "cyva.lese-fowler.us" aligns to the error log entries of 2021/03/20 21:44:21 [error] 14#14: *7 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:44:37 [error] 14#14: *5 open() "/var/www/certbotindex.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/index.html HTTP/1.1", host: "cyva.lese-fowler.us" But I don?t know where the certbottest.html is coming from? 
Still based on what I have read, I would expect that the /var/www/certbot/index.html should be visible via the URL of /.well-known/acme-challenge/index.html The larger segments of Error log & Access Logs 2021/03/20 21:18:11 [notice] 12#12: using the "epoll" event method 2021/03/20 21:18:11 [notice] 12#12: nginx/1.19.5 2021/03/20 21:18:11 [notice] 12#12: built by gcc 8.3.0 (Raspbian 8.3.0-6+rpi1) 2021/03/20 21:18:11 [notice] 12#12: OS: Linux 5.10.17-v7+ 2021/03/20 21:18:11 [notice] 12#12: getrlimit(RLIMIT_NOFILE): 1048576:1048576 2021/03/20 21:18:11 [notice] 13#13: start worker processes 2021/03/20 21:18:11 [notice] 13#13: start worker process 14 2021/03/20 21:29:02 [info] 14#14: *1 client 10.0.0.2 closed keepalive connection 2021/03/20 21:38:04 [warn] 14#14: no resolver defined to resolve r3.o.lencr.org while requesting certificate status, responder: r3.o.lencr.org, certificate: "/etc/letsencrypt/live/cyva.lese-fowler.us/fullchain.pem" 2021/03/20 21:38:43 [error] 14#14: *2 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:38:54 [error] 14#14: *2 open() "/var/www/certbotindex.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/index.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:38:55 [error] 14#14: *2 open() "/var/www/certbotindex.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/index.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:39:32 [info] 14#14: *4 client 10.0.0.2 closed keepalive connection 2021/03/20 21:39:34 [info] 14#14: *2 client 10.0.0.2 closed keepalive connection 2021/03/20 21:43:07 [error] 14#14: *6 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:43:08 [error] 14#14: *5 open() "/var/www/certbottest2.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test2.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:43:10 [error] 14#14: *5 open() "/var/www/certbottest2.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test2.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:43:52 [error] 14#14: *5 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:44:07 [info] 14#14: *6 client 10.0.0.2 closed keepalive connection 2021/03/20 21:44:15 [error] 14#14: *5 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:44:21 [error] 14#14: *7 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:44:37 [error] 14#14: *5 open() "/var/www/certbotindex.html" failed (2: No such file or 
directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/index.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:44:58 [error] 14#14: *5 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:45:00 [error] 14#14: *5 open() "/var/www/certbottest2.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test2.html HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:45:20 [info] 14#14: *7 client 10.0.0.2 closed keepalive connection 2021/03/20 21:45:37 [info] 14#14: *5 client 10.0.0.2 closed keepalive connection 2021/03/20 21:50:17 [error] 14#14: *9 open() "/var/www/html/robots.txt" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /robots.txt HTTP/1.1", host: "cyva.lese-fowler.us" 2021/03/20 21:50:17 [info] 14#14: *8 client 10.0.0.2 closed keepalive connection 2021/03/20 21:50:17 [info] 14#14: *9 client 10.0.0.2 closed keepalive connection 2021/03/20 21:50:19 [info] 14#14: *10 client 10.0.0.2 closed keepalive connection 2021/03/20 21:52:07 [info] 14#14: *11 client closed connection while waiting for request, client: 10.0.0.2, server: 0.0.0.0:80 root at ebd348cda0e9:~# 10.0.0.2 - - [20/Mar/2021:21:43:52 +0000] "GET /.well-known/acme-challenge/test.html HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" "-" 10.0.0.2 - - [20/Mar/2021:21:44:01 +0000] "GET /web/patches.html HTTP/1.1" 200 1476 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" "-" 10.0.0.2 - - [20/Mar/2021:21:44:15 +0000] "GET /.well-known/acme-challenge/test.html HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" "-" 10.0.0.2 - - [20/Mar/2021:21:44:21 +0000] "GET /.well-known/acme-challenge/test.html HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" "-" 10.0.0.2 - - [20/Mar/2021:21:44:37 +0000] "GET /.well-known/acme-challenge/index.html HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" "-" 10.0.0.2 - - [20/Mar/2021:21:44:58 +0000] "GET /.well-known/acme-challenge/test.html HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" "-" 10.0.0.2 - - [20/Mar/2021:21:45:00 +0000] "GET /.well-known/acme-challenge/test2.html HTTP/1.1" 404 153 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" "-" 10.0.0.2 - - [20/Mar/2021:21:50:17 +0000] "GET /robots.txt HTTP/1.1" 404 153 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-" root at ebd348cda0e9:~# The following is the view from within the docker container running nginx root at ebd348cda0e9:~# fg tail -f /var/log/nginx/error.log ^C root at ebd348cda0e9:~# fg tail -f /var/log/nginx/access.log ^C root at ebd348cda0e9:~# root at ebd348cda0e9:~# ls -la /var/www/html total 42088 drwxrwxrwx 9 1000 1000 4096 Mar 20 21:28 . 
drwxr-xr-x 1 root root 4096 Feb 12 00:52 .. drwxrwxrwx 3 1000 1000 4096 Mar 3 2020 ANA630 drwxrwxrwx 2 1000 1000 4096 Mar 3 2020 ANA660 drwxrwxrwx 3 1000 1000 4096 Jul 24 2020 CYVA drwxrwxrwx 2 1000 1000 4096 Mar 3 2020 Patches -rw-r--r-- 1 1000 1000 4291 Jan 21 2019 Pi-Root-Index.html -rw-r--r-- 1 1000 1000 3253523 Jan 29 2018 Pi-Root-Index_html_407e10bad835ca36.jpg -rw-r--r-- 1 1000 1000 17632732 Jan 29 2018 Pi-Root-Index_html_549480d9aaa04db.jpg -rw-r--r-- 1 1000 1000 13986083 Jan 29 2018 Pi-Root-Index_html_6a0daedaa917b2c9.jpg -rw-r--r-- 1 1000 1000 1652156 Jan 29 2018 Pi-Root-Index_html_7150c5fe82705e6d.jpg -rw-r--r-- 1 1000 1000 6334055 Jan 29 2018 Pi-Root-Index_html_97f9e5219c289a82.jpg -rw-r--r-- 1 1000 1000 84524 Jan 29 2018 Pi-Root-Index_html_dad3680972715349.jpg drwxrwxrwx 3 1000 1000 4096 Mar 1 02:39 Test -rw-rw-r-- 1 1000 1000 318 Feb 8 2019 favicon.ico -rw-r--r-- 1 1000 1000 4394 Feb 13 23:04 index.html drwxr-xr-x 2 1000 1000 4096 Feb 16 02:45 info -rw-r--r-- 1 1000 1000 10701 Jan 29 2018 original.html drwxr-sr-x 2 1000 1000 4096 Mar 18 03:34 web root at ebd348cda0e9:~# ls -lRa /var/www/certbot/ /var/www/certbot/: total 24 drwxr-xr-x 2 root root 4096 Mar 18 04:06 . drwxr-xr-x 1 root root 4096 Feb 12 00:52 .. -rw-r--r-- 1 root root 1727 Mar 18 03:22 index.html -rw-r--r-- 1 root root 1531 Mar 18 03:23 test.html -rw-r--r-- 1 root root 1533 Mar 9 02:59 test2.html root at ebd348cda0e9:~# df -h Filesystem Size Used Avail Use% Mounted on overlay 28G 9.1G 17G 35% / tmpfs 64M 0 64M 0% /dev tmpfs 461M 0 461M 0% /sys/fs/cgroup shm 64M 0 64M 0% /dev/shm coolwave.lese-fowler.us:/volume1/homes/pi/nginx/certbot/conf 1.8T 1.2T 687G 63% /etc/letsencrypt /dev/root 28G 9.1G 17G 35% /etc/hosts coolwave.lese-fowler.us:/volume1/homes/pi/www 1.8T 1.2T 687G 63% /var/www/html coolwave.lese-fowler.us:/volume1/homes/pi/nginx/logs 1.8T 1.2T 687G 63% /var/log/nginx coolwave.lese-fowler.us:/volume1/homes/pi/nginx/conf 1.8T 1.2T 687G 63% /etc/nginx/conf coolwave.lese-fowler.us:/volume1/homes/pi/nginx/certbot/www 1.8T 1.2T 687G 63% /var/www/certbot coolwave.lese-fowler.us:/volume1/homes/pi/nginx/logs 1.8T 1.2T 687G 63% /usr/share/nginx/logs tmpfs 461M 0 461M 0% /proc/asound tmpfs 461M 0 461M 0% /sys/firmware ***** NGINX.CONF root at ebd348cda0e9:~# cat /etc/nginx/conf/nginx.conf # Configuration File Provided by CYVA Research # Designed to host port 80 & 443 for web viewing # port 8080 is configured to pass traffic to pi3.lese-fowler.us # port 1883 and 8883 are configured to pass tcp streams to # pi3.lese-fowler.us as well. 
# load_module modules/nginx-plus-module-headers-more # load_module modules/nginx-plus-module-set-misc # load_module modules/ngx_stream_proxy_module load_module /etc/nginx/modules/ngx_stream_module.so; # # configuration file /etc/nginx/nginx.conf: #user nobody; worker_processes 1; # error_log /var/log/nginx/error.log debug; error_log /var/log/nginx/error.log debug; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; #pid /var/log/nginx/nginx.pid; events { worker_connections 1024; } # http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; # # sendfile on; # tcp_nopush on; # keepalive_timeout 65; # set_real_ip_from 10.0.0.0/16; set_real_ip_from 172.16.0.0/24; set_real_ip_from 192.168.0.0/24; real_ip_header X-Real-IP; # server { # Add headers to serve security related headers # Before enabling Strict-Transport-Security headers please read into this # topic first. #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always; # # WARNING: Only add the preload option once you read about # the consequences in https://hstspreload.org/. This option # will add the domain to a hardcoded list that is shipped # in all major browsers and getting removed from this list # could take several months. add_header Referrer-Policy "no-referrer" always; add_header X-Content-Type-Options "nosniff" always; add_header X-Download-Options "noopen" always; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Permitted-Cross-Domain-Policies "none" always; add_header X-Robots-Tag "none" always; add_header X-XSS-Protection "1; mode=block" always; # add_header cache-control: public, max-age=120; # Remove X-Powered-By, which is an information leak fastcgi_hide_header X-Powered-By; listen 80; server_name cyva.lese-fowler.us 172.16.10.30; access_log /var/log/nginx/cyva.access.log main; root /var/www/html; # location ^~ /.well-known/acme-challenge { # location ^~ /.well-known/ { # allow all; # } # alias vs root location ^~ /.well-known/acme-challenge/ { allow all; alias /var/www/certbot; try_files $uri =405; } location / { allow all; } location = /robots.txt { allow all; log_not_found on; access_log on; } # serve static files # location ~ ^/(images|javascript|js|css|flash|media|static)/ { #location ~* / { # alias /var/www/html; # expires 30d; #} # redirect http to https www return 301 https://cyva.lese-fowler.us$request_uri; # Enable gzip but do not remove ETag headers gzip on; gzip_vary on; gzip_comp_level 4; gzip_min_length 256; gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; # # Uncomment if your server is build with the ngx_pagespeed module # This module is currently not supported. 
#pagespeed off; # # location / { # rewrite ^ /index.html; #} location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ { deny all; } location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) { deny all; } # } server { # http2 listen [::]:443 ssl; listen 443 ssl; server_name cyva.lese-fowler.us 172.16.10.30; root /var/www/html; # SSL code ssl_certificate /etc/letsencrypt/live/cyva.lese-fowler.us/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/cyva.lese-fowler.us/privkey.pem; ssl_session_timeout 1d; ssl_session_cache shared:SharedNixCraftSSL:10m; ssl_session_tickets off; # TLS 1.2 & 1.3 only ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; # HSTS (ngx_http_headers_module is required) (63072000 seconds) add_header Strict-Transport-Security "max-age=63072000" always; # OCSP stapling ssl_stapling on; ssl_stapling_verify on; # location ~ /.well-known { # allow all; # } # verify chain of trust of OCSP response using Root CA and Intermediate certs # ssl_trusted_certificate /etc/nginx/ssl/fullchain.pem; # location ^~ /.well-known/acme-challenge/ { # location ^~ /.well-known/ { location ^~ /.well-known/acme-challenge/ { allow all; alias /var/www/certbot; } location / { allow all; index index.html; } } # simple reverse-proxy server { listen 8080; server_name cyva.lese-fowler.us 172.16.10.30; access_log /var/log/nginx/reverse.access.log main; # serve static files location / { proxy_pass https://pi3.lese-fowler.us:80; } } } # End of html block # *********************** # The following code is not permitted until we find the correct nginx code # stream { log_format st-main '$remote_addr - [$time_local] $status $bytes_sent '; server { listen 8084; access_log /var/log/nginx/mqtt-non.log st-main; #TCP traffic will be forwarded to the "stream_backend" upstream group proxy_pass cyva.lese-fowler.us:1883; } # server { listen 8085; access_log /var/log/nginx/mqtt.log st-main; #TCP traffic will be forwarded to the specified server proxy_pass cyva.lese-fowler.us:8883; } } #root at ebd348cda0e9:~# -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Mar 21 00:14:42 2021 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Mar 2021 00:14:42 +0000 Subject: location root not working as described In-Reply-To: <639C45B8-FA6B-40F4-90A8-C3A8730FCD14@pacbell.net> References: <639C45B8-FA6B-40F4-90A8-C3A8730FCD14.ref@pacbell.net> <639C45B8-FA6B-40F4-90A8-C3A8730FCD14@pacbell.net> Message-ID: <20210321001442.GI16474@daoine.org> On Sat, Mar 20, 2021 at 03:08:03PM -0700, John Fowler wrote: Hi there, > Thank you for your suggestion, but I had tried using the alias as well and that did not work as advertised. The following is the results of using the alias and ?^~?. > > I do see some odd behavior. > > The following access snippet > 2021/03/20 21:44:21 [error] 14#14: *7 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" "alias" does a straight textual swap of the part in "location" for the part in "alias". Either both should end in /, or neither should. (Usually, both should.) 
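Side by side, for the request GET /.well-known/acme-challenge/test.html and the paths from the config quoted below:

    location ^~ /.well-known/acme-challenge/ {
        # alias /var/www/certbot;   ->  "/var/www/certbot"  + "test.html" = /var/www/certbottest.html
        alias /var/www/certbot/;    # ->  "/var/www/certbot/" + "test.html" = /var/www/certbot/test.html
    }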
> 2021/03/20 21:44:21 [error] 14#14: *7 open() "/var/www/certbottest.html" failed (2: No such file or directory), client: 10.0.0.2, server: cyva.lese-fowler.us, request: "GET /.well-known/acme-challenge/test.html HTTP/1.1", host: "cyva.lese-fowler.us" > But I don?t know where the certbottest.html is coming from? The request is /.well-known/acme-challenge/test.html > location ^~ /.well-known/acme-challenge/ { > allow all; > alias /var/www/certbot; > try_files $uri =405; > } The "location" part is "/.well-known/acme-challenge/". Replace that in the request with the "alias" part of "/var/www/certbot", and you end up with the configured filename of "/var/www/certbottest.html". nginx is doing what it was told to do. The previous suggestion was > > Probably use "location ^~", but definitely use "alias /var/www/certbot/;". Good luck with it, f -- Francis Daly francis at daoine.org From jcfowler at pacbell.net Sun Mar 21 00:35:20 2021 From: jcfowler at pacbell.net (John Fowler) Date: Sat, 20 Mar 2021 17:35:20 -0700 Subject: location root not working as described References: <77405399-5A6B-4C66-A0E9-94E7B9B384E6.ref@pacbell.net> Message-ID: <77405399-5A6B-4C66-A0E9-94E7B9B384E6@pacbell.net> Thank you for the pointer. Looks like the problem has been resolved. The location needed a trailing ?/? on the path. location ^~ /.well-known/acme-challenge/ { allow all; alias /var/www/certbot/; try_files $uri =405; } John Working Niginx.conf root at f6c5686e1968:/# cat /etc/nginx/conf/nginx.conf # Configuration File Provided by CYVA Research # Designed to host port 80 & 443 for web viewing # port 8080 is configured to pass traffic to pi3.lese-fowler.us # port 1883 and 8883 are configured to pass tcp streams to # pi3.lese-fowler.us as well. # load_module modules/nginx-plus-module-headers-more # load_module modules/nginx-plus-module-set-misc # load_module modules/ngx_stream_proxy_module load_module /etc/nginx/modules/ngx_stream_module.so; # # configuration file /etc/nginx/nginx.conf: #user nobody; worker_processes 1; # error_log /var/log/nginx/error.log debug; error_log /var/log/nginx/error.log debug; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; #pid /var/log/nginx/nginx.pid; events { worker_connections 1024; } # http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; # # sendfile on; # tcp_nopush on; # keepalive_timeout 65; # set_real_ip_from 10.0.0.0/16; set_real_ip_from 172.16.0.0/24; set_real_ip_from 192.168.0.0/24; real_ip_header X-Real-IP; # server { # Add headers to serve security related headers # Before enabling Strict-Transport-Security headers please read into this # topic first. #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always; # # WARNING: Only add the preload option once you read about # the consequences in https://hstspreload.org/. This option # will add the domain to a hardcoded list that is shipped # in all major browsers and getting removed from this list # could take several months. 
add_header Referrer-Policy "no-referrer" always; add_header X-Content-Type-Options "nosniff" always; add_header X-Download-Options "noopen" always; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Permitted-Cross-Domain-Policies "none" always; add_header X-Robots-Tag "none" always; add_header X-XSS-Protection "1; mode=block" always; # add_header cache-control: public, max-age=120; # Remove X-Powered-By, which is an information leak fastcgi_hide_header X-Powered-By; listen 80; server_name cyva.lese-fowler.us 172.16.10.30; access_log /var/log/nginx/cyva.access.log main; root /var/www/html; # location ^~ /.well-known/acme-challenge { # location ^~ /.well-known/ { # allow all; # } # alias vs root location ^~ /.well-known/acme-challenge/ { allow all; alias /var/www/certbot/; try_files $uri =405; } location / { allow all; } location = /robots.txt { allow all; log_not_found on; access_log on; } # serve static files # location ~ ^/(images|javascript|js|css|flash|media|static)/ { #location ~* / { # alias /var/www/html; # expires 30d; #} # redirect http to https www return 301 https://cyva.lese-fowler.us$request_uri; # Enable gzip but do not remove ETag headers gzip on; gzip_vary on; gzip_comp_level 4; gzip_min_length 256; gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; # # Uncomment if your server is build with the ngx_pagespeed module # This module is currently not supported. 
#pagespeed off; # # location / { # rewrite ^ /index.html; #} location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ { deny all; } location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) { deny all; } # } server { # http2 listen [::]:443 ssl; listen 443 ssl; server_name cyva.lese-fowler.us 172.16.10.30; root /var/www/html; # SSL code ssl_certificate /etc/letsencrypt/live/cyva.lese-fowler.us/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/cyva.lese-fowler.us/privkey.pem; ssl_session_timeout 1d; ssl_session_cache shared:SharedNixCraftSSL:10m; ssl_session_tickets off; # TLS 1.2 & 1.3 only ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; # HSTS (ngx_http_headers_module is required) (63072000 seconds) add_header Strict-Transport-Security "max-age=63072000" always; # OCSP stapling ssl_stapling on; ssl_stapling_verify on; # location ~ /.well-known { # allow all; # } # verify chain of trust of OCSP response using Root CA and Intermediate certs # ssl_trusted_certificate /etc/nginx/ssl/fullchain.pem; # location ^~ /.well-known/acme-challenge/ { # location ^~ /.well-known/ { location ^~ /.well-known/acme-challenge/ { allow all; alias /var/www/certbot/; } location / { allow all; index index.html; } } # simple reverse-proxy server { listen 8080; server_name cyva.lese-fowler.us 172.16.10.30; access_log /var/log/nginx/reverse.access.log main; # serve static files location / { proxy_pass https://pi3.lese-fowler.us:80; } } } # End of html block # *********************** # The following code is not permitted until we find the correct nginx code # stream { log_format st-main '$remote_addr - [$time_local] $status $bytes_sent '; server { listen 8084; access_log /var/log/nginx/mqtt-non.log st-main; #TCP traffic will be forwarded to the "stream_backend" upstream group proxy_pass cyva.lese-fowler.us:1883; } # server { listen 8085; access_log /var/log/nginx/mqtt.log st-main; #TCP traffic will be forwarded to the specified server proxy_pass cyva.lese-fowler.us:8883; } } #root at f6c5686e1968:/# -------------- next part -------------- An HTML attachment was scrubbed... URL:
From james_ at catbus.co.uk Sun Mar 21 16:06:40 2021 From: james_ at catbus.co.uk (James Beal) Date: Sun, 21 Mar 2021 16:06:40 +0000 Subject: thread already active. Message-ID: We have a fairly complex nginx config, however the vhost that I know I am having issues with is quite simple. We have "aio threads;" enabled. /usr/sbin/nginx -V nginx version: nginx/1.19.6 built by gcc 8.3.0 (Debian 8.3.0-6) built with OpenSSL 1.1.1d 10 Sep 2019 TLS SNI support enabled configure arguments: --add-module=/root/incubator-pagespeed-ngx-latest-stable --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_ssl_module --with-http_stub_status_module --with-pcre-jit --with-http_secure_link_module --with-http_v2_module --with-http_realip_module --with-stream_geoip_module --http-scgi-temp-path=/tmp --http-uwsgi-temp-path=/tmp --http-fastcgi-temp-path=/tmp --http-proxy-temp-path=/tmp --http-log-path=/var/log/nginx/access --error-log-path=/var/log/nginx/error --pid-path=/var/run/nginx.pid --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin --prefix=/usr --with-threads server { listen 19994 backlog=90000 ssl; listen 29994 backlog=90000; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_certificate /etc/nginx/ao3_fanlore_2020-2021.crt; ssl_certificate_key /etc/nginx/ao3_fanlore_2020-2021.key; ssl_prefer_server_ciphers on; ssl_ciphers "ECDH+AESGCM DH+AESGCM ECDH+AES256 DH+AES256 ECDH+AES128 DH+AES ECDH+3DES DH+3DES RSA+AES RSA+3DES !ADH !AECDH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; ssl_dhparam /etc/nginx/dhparams.pem ; add_header Strict-Transport-Security "max-age=120;"; proxy_http_version 1.1; proxy_headers_hash_bucket_size 4096 ; proxy_set_header Connection ""; client_max_body_size 4G; keepalive_timeout 5; server_name media.archiveofourown.org media.transformativeworks.org ; root /var/www/media; location / { access_log off; autoindex on; } } When I try and download a larger than average file it fails; it fails in the same way if I try and use the internal ip address and port directly rather than via the haproxy ( firewall ) etc. I believe the issue does occur with more than just large downloads. grep "already active" /var/log/nginx/error.log | wc -l 58 curl -v -o foo.zip http://media.archiveofourown.org/ao3/stats/2021/02/26/20210226-stats.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current                                 Dload  Upload   Total   Spent    Left  Speed   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0 *   Trying 208.85.241.157:80... * TCP_NODELAY set * Connected to media.archiveofourown.org (208.85.241.157) port 80 (#0) > GET /ao3/stats/2021/02/26/20210226-stats.zip HTTP/1.1 > Host: media.archiveofourown.org > User-Agent: curl/7.68.0 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Server: nginx/1.19.6 < Date: Sun, 21 Mar 2021 15:59:02 GMT < Content-Type: application/zip < Content-Length: 437117255 < Last-Modified: Sun, 21 Mar 2021 13:11:35 GMT < ETag: "60574607-1a0de147" < Strict-Transport-Security: max-age=120; < Cache-Control: s-maxage=10 < Accept-Ranges: bytes < { [1124 bytes data]   0  416M    0 11260    0     0  48119      0  2:31:24 --:--:--  2:31:24 47914* transfer closed with 436592967 bytes remaining to read   0  416M    0  512k    0     0   722k      0  0:09:51 --:--:--  0:09:51  721k * Closing connection 0 curl: (18) transfer closed with 436592967 bytes remaining to read -------------- next part -------------- An HTML attachment was scrubbed... URL:
From james_ at catbus.co.uk Sun Mar 21 20:19:02 2021 From: james_ at catbus.co.uk (James Beal) Date: Sun, 21 Mar 2021 20:19:02 +0000 Subject: thread already active. In-Reply-To: References: Message-ID: On 21/03/2021 16:06:40, James Beal wrote: We have a fairly complex nginx config, however the vhost that I know I am having issues with is quite simple. We have "aio threads;" enabled. James Beal: Turning aio off fixes the issue but doesn't seem optimal. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From mdounin at mdounin.ru Mon Mar 22 12:59:32 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Mar 2021 15:59:32 +0300 Subject: thread already active. In-Reply-To: References: Message-ID: Hello! On Sun, Mar 21, 2021 at 08:19:02PM +0000, James Beal wrote: > > On 21/03/2021 16:06:40, James Beal wrote: > We have a fairly complex nginx config, however the vhost that I know I am having issues with is quite simple. > > We have "aio threads;" enabled. > > James Beal:
From mdounin at mdounin.ru Mon Mar 22 12:59:32 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 22 Mar 2021 15:59:32 +0300
Subject: thread already active.
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Sun, Mar 21, 2021 at 08:19:02PM +0000, James Beal wrote:

> On 21/03/2021 16:06:40, James Beal wrote:
> We have a fairly complex nginx config, however the vhost that I know I am
> having issues with is quite simple. We have "aio threads;" enabled.
>
> James Beal:
> Turning aio off fixes the issue but doesn't seem optimal.

It looks like you have the pagespeed module enabled.  Are you seeing
the same errors with pagespeed switched off (and preferably not
compiled in at all)?

-- 
Maxim Dounin
http://mdounin.ru/

From vbart at nginx.com Thu Mar 25 19:21:40 2021
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 25 Mar 2021 22:21:40 +0300
Subject: Unit 1.23.0 release
Message-ID: <2764553.e9J7NaK4W3@vbart-laptop>

Hi,

I'm glad to announce a new release of NGINX Unit.

Nowadays, TLS is everywhere, while plain HTTP is almost nonexistent in the
global network.  We are fully aware of this trend and strive to simplify TLS
configuration in Unit as much as possible.  Frankly, there's still much to do,
but the introduction of smart SNI certificate selection marks yet another step
in this direction.

Perhaps you already know about Unit's certificate storage API that uploads
certificate bundles to a running instance.  Otherwise, if you're not yet fully
informed but still curious, here's a decent overview:

  - https://unit.nginx.org/configuration/#certificate-management

Basically, you just upload a certificate chain and a key under some name;
after that, you can specify the name ("mycert" in the example below) with any
listening socket to configure it for HTTPS:

    {
        "listeners": {
            "*:443": {
                "tls": {
                    "certificate": "mycert"
                },
                "pass": "routes"
            }
        }
    }

Unit's API also enables informative introspection of uploaded certificate
bundles so you can monitor their validity and benefit from service discovery.
You can also upload any number of certificate bundles to switch between them
on the fly without restarting the server (yes, Unit's dynamic nature is
exactly like that).

Still, with this release, there are even more options, as you can supply any
number of certificate bundle names with a listener socket:

    {
        "certificate": [
            "mycertA",
            "mycertB",
            ...
        ]
    }

For each client, Unit automatically selects a suitable certificate from the
list depending on the domain name the client connects to (and therefore
supplies via the "Server Name Indication" TLS extension).  Thus, you don't
even need to care about matching certificates to server names; Unit handles
that for you.  As a result, there's almost no room for a mistake, which spares
more time for stuff that matters.

As one can reasonably expect, you can always add more certs, delete them, or
edit the cert list on the fly without compromising performance.  That's the
Unit way!

In case you're wondering whom to thank for this shiny new feature: give a warm
welcome to Andrey Suvorov, a new developer on our team.  He will continue
working on TLS improvements in Unit, and his TODO list is already stacked.
Still, if you'd like to suggest a concept or have a particular interest in
some feature, just start a ticket on GitHub; we are open to your ideas:

  - https://github.com/nginx/unit/issues

Also, plenty of solid bug fixing work was done by the whole team.
See the full change log below:

Changes with Unit 1.23.0                                          25 Mar 2021

    *) Feature: support for multiple certificate bundles on a listener via
       the Server Name Indication (SNI) TLS extension.

    *) Feature: "--mandir" ./configure option to specify the directory for
       man page installation.

    *) Bugfix: the router process could crash on premature TLS connection
       close; the bug had appeared in 1.17.0.

    *) Bugfix: a connection leak occurred on premature TLS connection
       close; the bug had appeared in 1.6.
    *) Bugfix: a descriptor and memory leak occurred in the router process
       when processing small WebSocket frames from a client; the bug had
       appeared in 1.19.0.

    *) Bugfix: a descriptor leak occurred in the router process when
       removing or reconfiguring an application; the bug had appeared in
       1.19.0.

    *) Bugfix: persistent storage of certificates might've not worked with
       some filesystems in Linux, and all uploaded certificate bundles were
       forgotten after restart.

    *) Bugfix: the controller process could crash while requesting
       information about a certificate with a non-DNS SAN entry.

    *) Bugfix: the controller process could crash on manipulations with a
       certificate containing a SAN and no standard name attributes in
       subject or issuer.

    *) Bugfix: the Ruby module didn't respect the user locale for defaults
       in the Encoding class.

    *) Bugfix: the PHP 5 module failed to build with thread safety enabled;
       the bug had appeared in 1.22.0.

Other notable features we are working on include:

  - statistics API
  - process control API
  - chrooting on a per-request basis during static file serving
  - MIME types filtering for static files
  - configuring ciphers and other OpenSSL settings

So much more to come!

Also, if you'd like to know more about Unit and prefer watching fun videos
instead of reading tedious documentation, I'm happy to recommend Timo Stark,
our own PM Engineer.  Recently, he started regularly streaming on Twitch and
YouTube:

  - https://www.twitch.tv/h30ne
  - https://www.youtube.com/Tippexs91

Tomorrow (March 26), at 10 p.m. CET (or 2 p.m. PDT), he is going on air to
livestream how he uses Unit's brand-new SNI feature to automate the certbot
setup:

  - https://youtu.be/absaan-8y1Q

Everyone is welcome!

  wbr, Valentin V. Bartenev

From andy at bl.ink Thu Mar 25 20:48:41 2021
From: andy at bl.ink (Andy Meadows)
Date: Thu, 25 Mar 2021 15:48:41 -0500
Subject: rate limit by header or API key
Message-ID: 

Using nginx open source, is there an option to provide a custom rate limit
by user agent, HTTP header value, or an API key?  Looking for the best
option to provide custom rate limits to named users, but the traffic is
coming from unpredictable IP addresses.
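One common way to get per-user limits of this kind is to key limit_req_zone
on something other than the client address. A minimal sketch, assuming
clients send the key in an X-API-Key header; the header name, zone size,
rate and backend address are placeholders:

    http {
        # requests without the header have an empty key and are not accounted
        limit_req_zone $http_x_api_key zone=per_key:10m rate=10r/s;

        server {
            listen 80;

            location /api/ {
                limit_req zone=per_key burst=20 nodelay;
                proxy_pass http://127.0.0.1:8080;   # placeholder backend
            }
        }
    }

Different rates for different users can be built on top of this by mapping
the header value into separate variables, each feeding its own
limit_req_zone, since a key that maps to an empty string is simply not
limited by that zone.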
From nginx-forum at forum.nginx.org Sat Mar 27 08:09:29 2021
From: nginx-forum at forum.nginx.org (blason)
Date: Sat, 27 Mar 2021 04:09:29 -0400
Subject: nginx geoip module with reverse proxy in multi tenant
Message-ID: 

Hi Team,

This is nginx 1.19.5. I have a reverse proxy server where I am hosting around
20 sites behind nginx. This server is only used for reverse proxying; no
local web server is running on it.

I need to implement geoip blocking, but what I understood from the document
is that a

map $geoip_country_code $allowed_country

variable has to be set in the http section, and then

if ($allowed_country = no) {
   return 444;
}

can be called in the server section. This is fine if I am hosting one site,
but what about the case of multiple sites? For instance, suppose
siteA.example.com needs to have access blocked from CN, while
siteB.example.com needs to have access allowed from CN. How do I achieve it?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291069,291069#msg-291069

From francis at daoine.org Sat Mar 27 09:51:37 2021
From: francis at daoine.org (Francis Daly)
Date: Sat, 27 Mar 2021 09:51:37 +0000
Subject: rate limit by header or API key
In-Reply-To: 
References: 
Message-ID: <20210327095137.GK16474@daoine.org>

On Thu, Mar 25, 2021 at 03:48:41PM -0500, Andy Meadows wrote:

Hi there,

> Using nginx open source, is there an option to provide a custom rate limit
> by user agent, HTTP header value, or an API key?  Looking for the best
> option to provide custom rate limits to named users, but the traffic is
> coming from unpredictable IP addresses.

"limit_rate" (http://nginx.org/r/limit_rate) can control the
kbps-response-writing speed of a single request, using whatever criteria
you like, using things that are available at response-writing time.

"limit_req" (http://nginx.org/r/limit_req) and limit_req_zone can
control the rate of incoming requests that are processed, using whatever
criteria you like, using things that are available at request-reading
time.

I'm not aware of a way to trivially combine the two.

Cheers,

	f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Sat Mar 27 09:54:29 2021
From: francis at daoine.org (Francis Daly)
Date: Sat, 27 Mar 2021 09:54:29 +0000
Subject: nginx geoip module with reverse proxy in multi tenant
In-Reply-To: 
References: 
Message-ID: <20210327095429.GL16474@daoine.org>

On Sat, Mar 27, 2021 at 04:09:29AM -0400, blason wrote:

Hi there,

> I need to implement geoip blocking, but what I understood from the document
> is that a
>
> map $geoip_country_code $allowed_country
>
> variable has to be set in the http section, and then
>
> if ($allowed_country = no) {
>    return 444;
> }
>
> can be called in the server section. This is fine if I am hosting one site,
> but what about the case of multiple sites? For instance, suppose

map $geoip_country_code $allowed_country_A { ... }
map $geoip_country_code $allowed_country_B { ... }

if ($allowed_country_A = no) {

http://nginx.org/r/map -- the second argument to "map" is whatever
variable name you choose.

Good luck with it,

	f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Sat Mar 27 10:34:54 2021
From: nginx-forum at forum.nginx.org (blason)
Date: Sat, 27 Mar 2021 06:34:54 -0400
Subject: nginx geoip module with reverse proxy in multi tenant
In-Reply-To: <20210327095429.GL16474@daoine.org>
References: <20210327095429.GL16474@daoine.org>
Message-ID: <9871ef47abff6d402b658dcfee604858.NginxMailingListEnglish@forum.nginx.org>

Oh OK - thanks for the pointer. If my understanding is correct, I then define

map $geoip_country_code $allowed_country_A
map $geoip_country_code $allowed_country_B
map $geoip_country_code $allowed_country_C

under the http section in /etc/nginx/nginx.conf, and then use
if ($allowed_country_A = no) in the server section?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291069,291072#msg-291072
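Put together, the per-site variant sketched in this thread could look roughly
like the following; the server names, country codes and upstream addresses
are placeholders, and $geoip_country_code assumes the geoip module and its
database are already configured:

    http {
        map $geoip_country_code $allowed_country_A {
            default yes;
            CN      no;      # siteA: block visitors geolocated in CN
        }
        map $geoip_country_code $allowed_country_B {
            default yes;     # siteB: no country blocked
        }

        server {
            listen 80;
            server_name siteA.example.com;
            if ($allowed_country_A = no) {
                return 444;
            }
            location / {
                proxy_pass http://10.0.0.11;   # placeholder upstream
            }
        }

        server {
            listen 80;
            server_name siteB.example.com;
            if ($allowed_country_B = no) {
                return 444;
            }
            location / {
                proxy_pass http://10.0.0.12;   # placeholder upstream
            }
        }
    }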
From osa at freebsd.org.ru Sun Mar 28 01:19:10 2021
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Sun, 28 Mar 2021 04:19:10 +0300
Subject: nginx geoip module with reverse proxy in multi tenant
In-Reply-To: <9871ef47abff6d402b658dcfee604858.NginxMailingListEnglish@forum.nginx.org>
References: <20210327095429.GL16474@daoine.org> <9871ef47abff6d402b658dcfee604858.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On Sat, Mar 27, 2021 at 06:34:54AM -0400, blason wrote:
> Oh OK - thanks for the pointer. If my understanding is correct, I then define
>
> map $geoip_country_code $allowed_country_A
> map $geoip_country_code $allowed_country_B
> map $geoip_country_code $allowed_country_C
>
> under the http section in /etc/nginx/nginx.conf

True.  According to the documentation, the map directive is available in the
http context:
http://nginx.org/en/docs/http/ngx_http_map_module.html#map

> and then use
> if ($allowed_country_A = no) in the server section?

According to the documentation, the if directive is available in the server
or location contexts:
http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if

-- 
Sergey A. Osokin

From nginx-forum at forum.nginx.org Sun Mar 28 03:51:30 2021
From: nginx-forum at forum.nginx.org (blason)
Date: Sat, 27 Mar 2021 23:51:30 -0400
Subject: nginx geoip module with reverse proxy in multi tenant
In-Reply-To: 
References: 
Message-ID: <4af7a9c115c245adf77e760440a01cdf.NginxMailingListEnglish@forum.nginx.org>

Thanks, appreciate your response.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291069,291077#msg-291077

From carsten.delellis at delellis.net Mon Mar 29 12:47:58 2021
From: carsten.delellis at delellis.net (Carsten Laun-De Lellis)
Date: Mon, 29 Mar 2021 12:47:58 +0000
Subject: Bareos-WebUI on nginx
Message-ID: 

Hi all

I want to set up a bareos-webui on an nginx webserver. The bareos and nginx
server are not on different machines. On the nginx server I already have some
reverse proxies and a web server running.

Did anyone manage to set up such a configuration in the past? If yes, would
it be possible to get it?

Many thanks in advance.

Freundliche Grüße

EnBW Energie Baden-Württemberg AG
Durlacher Allee 93 · 76131 Karlsruhe

i.A. Carsten Laun-De Lellis
Mobil +49 151 275 30865
mailto: c.laun-delellis at enbw-intern.com
www.enbw.com

EnBW Energie Baden-Wuerttemberg AG · Registered office: Karlsruhe
Court of Registry: Mannheim · Register No: HRB 107956
Chairman of the Supervisory Board: Lutz Feldmann
Executive Committee: Dr. Frank Mastiaux (Chairman), Thomas Kusterer,
Colette Rückert-Hennen, Dr. Hans-Josef Zimmer
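For what it's worth, bareos-webui is a PHP application, so a dedicated server
block along the following lines is one possible starting point. The document
root, PHP-FPM socket path, port and server name are assumptions that will
differ per installation, and the Bareos documentation remains the
authoritative reference:

    server {
        listen 9100;                                # placeholder port
        server_name bareos-webui.example.com;       # placeholder name

        # assumed package install location of the WebUI
        root /usr/share/bareos-webui/public;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # assumed php-fpm socket; a TCP pool such as 127.0.0.1:9000 also works
            fastcgi_pass unix:/run/php/php-fpm.sock;
        }
    }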
From francis at daoine.org Mon Mar 29 22:21:26 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 29 Mar 2021 23:21:26 +0100
Subject: Bareos-WebUI on nginx
In-Reply-To: 
References: 
Message-ID: <20210329222126.GM16474@daoine.org>

On Mon, Mar 29, 2021 at 12:47:58PM +0000, Carsten Laun-De Lellis wrote:

Hi there,

> I want to set up a bareos-webui on an nginx webserver. The bareos and nginx
> server are not on different machines. On the nginx server I already have
> some reverse proxies and a web server running.

I have not tried this; but does the information at
https://docs.bareos.org/IntroductionAndTutorial/InstallingBareosWebui.html#nginx
do what you want?

It looks like it may be a mostly-straightforward PHP application which
works when configured at the root of its own server{} block.

Whether it also works when configured at a sub-level in the url
hierarchy might be interesting to learn.

Good luck with it,

	f
-- 
Francis Daly        francis at daoine.org

From mdounin at mdounin.ru Tue Mar 30 15:00:30 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 30 Mar 2021 18:00:30 +0300
Subject: nginx-1.19.9
Message-ID: 

Changes with nginx 1.19.9                                        30 Mar 2021

    *) Bugfix: nginx could not be built with the mail proxy module, but
       without the ngx_mail_ssl_module; the bug had appeared in 1.19.8.

    *) Bugfix: "upstream sent response body larger than indicated content
       length" errors might occur when working with gRPC backends; the bug
       had appeared in 1.19.1.

    *) Bugfix: nginx might not close a connection till keepalive timeout
       expiration if the connection was closed by the client while
       discarding the request body.

    *) Bugfix: nginx might not detect that a connection was already closed
       by the client when waiting for auth_delay or limit_req delay, or
       when working with backends.

    *) Bugfix: in the eventport method.

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org Tue Mar 30 16:09:27 2021
From: nginx-forum at forum.nginx.org (J6DKLygdGq139TVZiBxq)
Date: Tue, 30 Mar 2021 12:09:27 -0400
Subject: May the URI specified with the `auth_request` directive reference capture groups (e.g. `$1`) from the containing location?
Message-ID: 

Hi all,

I may be missing some crucial request processing knowledge, but I am trying
to issue an authentication request with a variable URI, as part of processing
a request for a protected resource. I hope I am able to express below what I
want:

```
location ~ /foo/(\d+)/bar {
    auth_request /auth/$1/baz;
}
```

This does not seem to work, however -- `$1` above is interpreted literally
(it is not expanded), while what I want is for every request for /foo/1/bar
to only succeed after an authorization request to /auth/1/baz, /foo/2/bar
only after an authorization request to /auth/2/baz, and so on. Am I going
about this wrong?

Crucially, the authorization service has different URI patterns than our
website, so I can't just pass the entire URI and I cannot use one single URI
for all authorization requests. What are my options here?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291109,291109#msg-291109
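auth_request itself only accepts a fixed URI, so one workaround (a sketch,
not an officially documented pattern) is to keep the auth location static and
derive the authorization service's path from the original request URI with
map, then use that variable in proxy_pass inside the auth location. The
upstream name and URI patterns below are placeholders, and this assumes that
$request_uri inside the auth subrequest still refers to the original client
request:

    http {
        upstream auth_backend {
            server 127.0.0.1:9000;       # placeholder authorization service
        }

        # translate the protected URI into the authorization service's scheme
        map $request_uri $auth_uri {
            ~^/foo/(?<fid>\d+)/bar   /auth/$fid/baz;
            default                  /auth/deny;
        }

        server {
            listen 80;

            location ~ ^/foo/\d+/bar {
                auth_request /_auth;
                # ... normal handling of the protected resource here
            }

            location = /_auth {
                internal;
                proxy_pass http://auth_backend$auth_uri;
                proxy_pass_request_body off;
                proxy_set_header Content-Length "";
            }
        }
    }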
From xeioex at nginx.com Tue Mar 30 19:31:47 2021
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 30 Mar 2021 22:31:47 +0300
Subject: njs-0.5.3
Message-ID: <4f230c26-39de-bdc2-bab8-fc19f0009503@nginx.com>

Hello,

I'm glad to announce a new release of NGINX JavaScript module (njs).
This release focuses on extending the module's functionality.

Notable new features:

- js_var directive.
  The directive creates an nginx variable writable from njs. The variable is
  not overwritten after a redirect, unlike variables created with the set
  directive.

  This is especially useful in situations where some directive value depends
  on the result of a subrequest. The following example illustrates the case
  where the Authorization header is processed by an HTTP endpoint which
  returns a foo value as a result. This result is passed as a header to the
  backend.

  : nginx.conf:
  : js_import main.js;
  :
  : js_var $foo;
  : ..
  :
  : location /secure/ {
  :     auth_request /set_foo;
  :
  :     proxy_set_header Foo $foo;
  :     proxy_pass http://backend;
  : }
  :
  : location =/set_foo {
  :     internal;
  :     js_content main.set_foo;
  : }
  :
  : main.js:
  : function set_foo(r) {
  :     ngx.fetch('http://127.0.0.1:8080', {headers: {Authorization: r.headersIn.Authorization}})
  :     .then(reply => {
  :         r.variables.foo = reply.headers.get('foo');
  :         r.return(200);
  :     });
  : }
  :
  : export default {set_foo};

You can learn more about njs:

- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs
- Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html
- Writing njs code using TypeScript definition files: http://nginx.org/en/docs/njs/typescript.html

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel

Changes with njs 0.5.3                                            30 Mar 2021

    nginx modules:

    *) Feature: added the "js_var" directive.

From nginx-forum at forum.nginx.org Wed Mar 31 09:50:33 2021
From: nginx-forum at forum.nginx.org (allenhe)
Date: Wed, 31 Mar 2021 05:50:33 -0400
Subject: How to configure nginx to support multipart/form-data upload
Message-ID: <854f3fb5e3aaafb00665bf67fd1b86f9.NginxMailingListEnglish@forum.nginx.org>

Hi,

Can somebody show the minimal configuration to support large file uploads
with the multipart/form-data Content-Type? When I upload an 8GB file with
the following global configs, I always get a 600s timeout with the error:

upstream timed out (110: Connection timed out) while reading response header from upstream

client_body_buffer_size 128k;
client_header_buffer_size 16K;
client_max_body_size 10000m;
client_body_timeout 100s;
client_header_timeout 60s;

proxy_max_temp_file_size 10248m;
proxy_http_version 1.1;
proxy_request_buffering off;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 2m;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291113,291113#msg-291113
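The quoted error is governed by proxy_read_timeout: with
proxy_request_buffering off the body is streamed to the backend as it
arrives, and once it has been sent nginx waits at most proxy_read_timeout for
the response header. If the backend legitimately needs longer than 600s to
process an 8 GB upload, the knobs that usually matter look roughly like this
(the values are illustrative only):

    # proxying large multipart/form-data uploads -- illustrative values
    client_max_body_size     10g;      # allow request bodies up to ~10 GB
    client_body_timeout      300s;     # max pause between two reads from the client
    proxy_http_version       1.1;      # lets chunked bodies be streamed to the backend
    proxy_request_buffering  off;      # stream the body instead of spooling it first
    proxy_send_timeout       600s;     # max pause between two writes to the backend
    proxy_read_timeout       1800s;    # max wait between reads of the backend's response,
                                       # including the wait for its response header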
From nginx-forum at forum.nginx.org Wed Mar 31 14:19:08 2021
From: nginx-forum at forum.nginx.org (gusto1)
Date: Wed, 31 Mar 2021 10:19:08 -0400
Subject: nginx reverse proxy
Message-ID: <95b6744dc3552c3ccac4d76897533f1a.NginxMailingListEnglish@forum.nginx.org>

I'm a newbie to nginx.

I have an apache server in lxc 192.168.1.10 (3x vhost/domain). Each vhost is
configured as follows and works perfectly:

----------
DocumentRoot /var/www/www.domain1.ddns.com
ServerName domain1.ddns.com
ServerAlias www.domain1.ddns.com
ServerAdmin webmaster at domain1.ddns.com

RewriteEngine on
RewriteCond %{HTTP_HOST} ^domain1\.ddns\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain1.ddns.com$1 [R=301,NE,L]
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L,NE]

DocumentRoot /var/www/www.domain1.ddns.com
ServerName domain1.ddns.com
ServerAlias www.domain1.ddns.com
ServerAdmin webmaster at domain1.ddns.com

SSLEngine on
SSLCertificateKeyFile /etc/letsencrypt/live/domain1.ddns.com/privkey.pem
SSLCertificateFile /etc/letsencrypt/live/domain1.ddns.com/cert.pem
SSLCertificateChainFile /etc/letsencrypt/live/domain1.ddns.com/chain.pem
# SSLCertificateChainFile /etc/letsencrypt/live/domain1.ddns.com/fullchain.pem

RewriteEngine on
RewriteCond %{HTTP_HOST} ^domain1\.ddns\.com$ [NC]
RewriteRule ^(.*)$ https://www.domain1.ddns.com$1 [R=301,NE,L]
-----------

All my domains are visible from the internet because I forward ports 80/443
to IP 192.168.1.10.

Now I will build another apache server in lxc 192.168.1.11 (3x vhost/domain).
For its vhosts I want to use a similar configuration as on the first apache,
and I don't want to change the configuration of the apache vhosts. Because it
is not possible to forward the ports to two local IP addresses, I need to
build a reverse proxy. Today I installed nginx in lxc 192.168.1.12.

How should I configure nginx?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291114,291114#msg-291114
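One way to keep both Apache containers and their certificates untouched is to
forward ports 80/443 to the nginx container and route purely by name: plain
proxying by Host header on port 80, and SNI passthrough on 443 using the
stream module with ssl_preread, so TLS is still terminated by Apache. A rough
sketch, assuming nginx is built with the stream and stream_ssl_preread
modules; the second container's domain name is a placeholder:

    # top-level contexts in nginx.conf on 192.168.1.12

    stream {
        # route TLS connections by SNI name without decrypting them
        map $ssl_preread_server_name $https_backend {
            hostnames;
            .domain1.ddns.com   192.168.1.10:443;   # first Apache container
            .domain4.ddns.com   192.168.1.11:443;   # second container (placeholder name)
            default             192.168.1.10:443;
        }

        server {
            listen 443;
            ssl_preread on;
            proxy_pass $https_backend;
        }
    }

    http {
        # plain HTTP: route by Host header so the Apache redirects keep working
        map $host $plain_backend {
            hostnames;
            .domain1.ddns.com   192.168.1.10;
            .domain4.ddns.com   192.168.1.11;
            default             192.168.1.10;
        }

        server {
            listen 80;
            location / {
                proxy_set_header Host $host;
                proxy_pass http://$plain_backend;
            }
        }
    }

Because the TLS connections are only passed through, the existing Let's
Encrypt setup on each Apache container should be able to keep working
unchanged.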