From francis at daoine.org Sat Jul 2 08:24:11 2022 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Jul 2022 09:24:11 +0100 Subject: Reverse proxy to traefik In-Reply-To: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> Message-ID: <20220702082411.GB14648@daoine.org> On Fri, Jun 24, 2022 at 04:23:54PM -0300, Daniel Armando Rodriguez wrote: Hi there, > I need to forward HTTP/HTTPS stream to a traefik within docker container. > Additionally, this traefik is also SSL termination. And just at this point > where I am stuck, as the SSL management against Let's Encrypt needs both > HTTP and HTTPS traffic. I'm not quite sure what you are trying to do, in nginx terms. nginx has the idea of "http", where an incoming http or https request to nginx is handled by nginx making a new http or https request to the upstream service; and nginx has the idea of "stream", where any traffic on an incoming tcp connection is forwarded to an upstream service. That "stream" traffic can optionally be SSL-decrypted or encrypted by nginx before forwarding. > Made this representation to illustrate the situation. > https://i.postimg.cc/Zq1Ndyws/scheme.png If you can describe what you want, in terms of "something external will make a http request of nginx that should be handled in this way; it will make a https request of nginx that should be handled in that way; and it will send a generic tcp stream to this port on nginx that should be handled in this other way", then the nginx config to handle that, might be clearer. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Jul 2 08:34:09 2022 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Jul 2022 09:34:09 +0100 Subject: Reverse proxy question In-Reply-To: References: Message-ID: <20220702083409.GC14648@daoine.org> On Thu, Jun 23, 2022 at 08:19:06AM +0300, Saint Michael wrote: Hi there, > I have these substitution rules: > subs_filter "https://www.postimees.ee" "https://postimees.oneye.us" gir break; > subs_filter "www.postimees.ee" "postimees.oneye.us" gir break; > subs_filter "http://(.*).postimees.ee/(.*)" > "http://postimees.oneye.us/$1/$2" gir break; > > but they don't process the link above. > How do you exactly write the rules, or rewrite rules, in cases like this? The link starts with https://arvamus.postimees.ee/ The third subs_filter wants to match http://, not https:// Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Jul 2 08:52:56 2022 From: francis at daoine.org (Francis Daly) Date: Sat, 2 Jul 2022 09:52:56 +0100 Subject: Can I serve CLI Applications using Nginx In-Reply-To: References: Message-ID: <20220702085256.GD14648@daoine.org> On Thu, Jun 23, 2022 at 07:13:34PM +0600, Ahmad Ismail wrote: Hi there, > will it be a bad idea to extend nginx (ex. create a module) to serve > my purpose instead of using inetd? Is it possible to make a module that > will give the HTTP request to `Command1 | Command2 | CommandN`, then build > a response from the output (by adding HTTP response message to it) and > then send the response back to the client. You can try that, but I suspect that you will find that you are re-inventing CGI. Your initial model of Request | Web_Server | CLI_APP | ADD_UI | Web_Server > Response is, I think, not quite right -- the "Web_Server" is not in the pipeline twice; instead, the CLI_APP part is "off to the side". 
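A sketch of where that can end up on the nginx side, for reference (it uses the FastCGI route explained in the next paragraphs; fcgiwrap, the socket path and the script name are illustrative assumptions, not details from the question):

==
location /cli {
    include fastcgi_params;
    # assumed: fcgiwrap, a generic CGI-to-FastCGI shim, listens here
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    # assumed: a wrapper script that prints a Content-Type header,
    # a blank line, and then the output of Command1 | Command2 | CommandN
    fastcgi_param SCRIPT_FILENAME /usr/local/bin/cli-app.cgi;
}
==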
Since you want the request to come in as http, you will probably be happier if you keep your CLI_APP and ADD_UI pieces as they are in a pipeline; but have a CGI wrapper that will call them and provide the headers that any CGI-supporting web server will know how to handle. As it happens -- nginx does not support CGI. So if you want to use nginx in this model, you will want something like a FastCGI wrapper, or SCGI wrapper, or something else that speaks a protocol that nginx does support. For testing/design, you could use something like "date" as your dummy CLI_APP; and then learn what you need to do to end up with the current time showing in the browser. I expect that you will either design your own interfaces that only work in this case; or you will use pre-existing interfaces like CGI. Good luck with it, f -- Francis Daly francis at daoine.org From ismail783 at gmail.com Mon Jul 4 13:30:06 2022 From: ismail783 at gmail.com (Ahmad Ismail) Date: Mon, 4 Jul 2022 19:30:06 +0600 Subject: Can I serve CLI Applications using Nginx In-Reply-To: <20220702085256.GD14648@daoine.org> References: <20220702085256.GD14648@daoine.org> Message-ID: Thank you very very much for taking the time to help me. On Sat, Jul 2, 2022 at 2:54 PM Francis Daly wrote: > On Thu, Jun 23, 2022 at 07:13:34PM +0600, Ahmad Ismail wrote: > > Hi there, > > > will it be a bad idea to extend nginx (ex. create a module) to serve > > my purpose instead of using inetd? Is it possible to make a module that > > will give the HTTP request to `Command1 | Command2 | CommandN`, then > build > > a response from the output (by adding HTTP response message to it) and > > then send the response back to the client. > > You can try that, but I suspect that you will find that you are > re-inventing CGI. > > Your initial model of > > Request | Web_Server | CLI_APP | ADD_UI | Web_Server > Response > > is, I think, not quite right -- the "Web_Server" is not in the pipeline > twice; instead, the CLI_APP part is "off to the side". > > Since you want the request to come in as http, you will probably be > happier if you keep your CLI_APP and ADD_UI pieces as they are in a > pipeline; but have a CGI wrapper that will call them and provide the > headers that any CGI-supporting web server will know how to handle. > > As it happens -- nginx does not support CGI. So if you want to use nginx > in this model, you will want something like a FastCGI wrapper, or SCGI > wrapper, or something else that speaks a protocol that nginx does support. > > For testing/design, you could use something like "date" as your dummy > CLI_APP; and then learn what you need to do to end up with the current > time showing in the browser. > > I expect that you will either design your own interfaces that only work > in this case; or you will use pre-existing interfaces like CGI. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From drodriguez at unau.edu.ar Tue Jul 5 12:53:05 2022 From: drodriguez at unau.edu.ar (Daniel Armando Rodriguez) Date: Tue, 05 Jul 2022 12:53:05 +0000 Subject: Reverse proxy to traefik In-Reply-To: <20220702082411.GB14648@daoine.org> References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> <20220702082411.GB14648@daoine.org> Message-ID: <33371ce5e911c1be5da913340254b5ec@unau.edu.ar> El 2022-07-02 08:24, Francis Daly escribió: > On Fri, Jun 24, 2022 at 04:23:54PM -0300, Daniel Armando Rodriguez > wrote: > > Hi there, > >> I need to forward HTTP/HTTPS stream to a traefik within docker >> container. >> Additionally, this traefik is also SSL termination. And just at this >> point >> where I am stuck, as the SSL management against Let's Encrypt needs >> both >> HTTP and HTTPS traffic. > > I'm not quite sure what you are trying to do, in nginx terms. > > nginx has the idea of "http", where an incoming http or https request > to nginx is handled by nginx making a new http or https request to the > upstream service; and nginx has the idea of "stream", where any traffic > on an incoming tcp connection is forwarded to an upstream service. That > "stream" traffic can optionally be SSL-decrypted or encrypted by nginx > before forwarding. > >> Made this representation to illustrate the situation. >> https://i.postimg.cc/Zq1Ndyws/scheme.png > > If you can describe what you want, in terms of "something external > will make a http request of nginx that should be handled in this way; > it will make a https request of nginx that should be handled in that > way; > and it will send a generic tcp stream to this port on nginx that should > be handled in this other way", then the nginx config to handle that, > might be clearer. > > Cheers, Hi, thanks for your time What I need to do is allowing traefik "black" box to negotiate SSL certificate directly with Let's Encrypt, that was intended to be referred as stream. ________________________________________________ Daniel A. Rodriguez _Informática, Conectividad y Sistemas_ Universidad Nacional del Alto Uruguay San Vicente - Misiones - Argentina informatica.unau.edu.ar From amira.solo at gmail.com Wed Jul 6 04:26:30 2022 From: amira.solo at gmail.com (Amira S) Date: Wed, 6 Jul 2022 07:26:30 +0300 Subject: Adding support for OpenSSL engine in Nginx Ingress controller Message-ID: Hello, I want to add support for an ssl_engine + ssl_certificate/key directives in the nignx.conf that configures an nginx server for ingress on kubernetes. This functionality is not provided by default, and I read that Snippets may be the recommended way to add such support. Could you please assist me in adding such support? The ssl_engine should be part of the main-snippets but the ssl_certificate/key are under http and then under server, so not sure if http-snippets or server-snippets should be used. For example, I tried setting the ssl_engine as follows: read -d '' conf << EOF ssl_engine mscryptpfx; EOF helm install ingress-nginx-new ingress-nginx/ingress-nginx \ --set controller.replicaCount=2 \ --set controller.nodeSelector."kubernetes\.io/os"=linux \ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \ --set enable-snippets=true \ --set-string controller.config.main-snippets="$conf" But this wasn't reflected in the nginx.conf of the ingress pod. If anyone could point me to a similar configuration sample, that would be very helpful. Thank you! 
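For reference, the plain-nginx placement being asked about looks roughly like this -- a sketch only, where the certificate path and the engine key id are assumptions, and the ingress controller still has to be made to emit these lines through its snippet options (events{} and the rest of nginx.conf omitted):

==
# main (top-level) context -- what a main-snippet would need to emit
ssl_engine mscryptpfx;

http {
    server {
        listen 443 ssl;
        # server context -- what a server-snippet would need to emit
        ssl_certificate     /etc/nginx/ssl/tls.crt;
        # keys held by an OpenSSL engine use the engine:name:id form
        ssl_certificate_key engine:mscryptpfx:key-id;
    }
}
==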
-------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Jul 6 13:47:31 2022 From: francis at daoine.org (Francis Daly) Date: Wed, 6 Jul 2022 14:47:31 +0100 Subject: Reverse proxy to traefik In-Reply-To: <33371ce5e911c1be5da913340254b5ec@unau.edu.ar> References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> <20220702082411.GB14648@daoine.org> <33371ce5e911c1be5da913340254b5ec@unau.edu.ar> Message-ID: <20220706134731.GE14648@daoine.org> On Tue, Jul 05, 2022 at 12:53:05PM +0000, Daniel Armando Rodriguez via nginx wrote: > El 2022-07-02 08:24, Francis Daly escribió: > > On Fri, Jun 24, 2022 at 04:23:54PM -0300, Daniel Armando Rodriguez > > wrote: Hi there, > > > Made this representation to illustrate the situation. > > > https://i.postimg.cc/Zq1Ndyws/scheme.png > What I need to do is allowing traefik "black" box to negotiate SSL > certificate directly with Let's Encrypt, that was intended to be referred as > stream. I think you are saying that you want nginx to be a "plain" tcp-forwarder in this case. (I'm not certain *why* that matters here, but that's ok; I don't need to understand it ;-) .) Does http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html work for you? Something like == stream { server { listen nginx-ip:443; proxy_pass traefik-ip:443; } } == (If you have a stream listener on an IP:port, you cannot also have a http listener on that same IP:port.) Your picture also shows some blue lines on the left-hand side, so it may be that you also want something like http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html, to choose which "upstream" to proxy_pass to, depending on the server name presented in the SSL connection to nginx. Cheers, f -- Francis Daly francis at daoine.org From drodriguez at unau.edu.ar Thu Jul 7 14:17:03 2022 From: drodriguez at unau.edu.ar (Daniel A. Rodriguez) Date: Thu, 7 Jul 2022 11:17:03 -0300 Subject: Reverse proxy to traefik In-Reply-To: <20220706134731.GE14648@daoine.org> References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> <20220702082411.GB14648@daoine.org> <33371ce5e911c1be5da913340254b5ec@unau.edu.ar> <20220706134731.GE14648@daoine.org> Message-ID: <9cc0b71b-0a35-0f25-48ff-aec5359fd090@unau.edu.ar> El 6/7/22 a las 10:47, Francis Daly escribió: > On Tue, Jul 05, 2022 at 12:53:05PM +0000, Daniel Armando Rodriguez via nginx wrote: >> El 2022-07-02 08:24, Francis Daly escribió: >>> On Fri, Jun 24, 2022 at 04:23:54PM -0300, Daniel Armando Rodriguez >>> wrote: > Hi there, > >>>> Made this representation to illustrate the situation. >>>> https://i.postimg.cc/Zq1Ndyws/scheme.png >> What I need to do is allowing traefik "black" box to negotiate SSL >> certificate directly with Let's Encrypt, that was intended to be referred as >> stream. > I think you are saying that you want nginx to be a "plain" tcp-forwarder > in this case. > > (I'm not certain *why* that matters here, but that's ok; I don't need > to understand it ;-) .) > > Doeshttp://nginx.org/en/docs/stream/ngx_stream_proxy_module.html work > for you? > > Something like > > == > stream { > server { > listen nginx-ip:443; > proxy_pass traefik-ip:443; > } > } > == > > (If you have a stream listener on an IP:port, you cannot also have a > http listener on that same IP:port.) 
> > Your picture also shows some blue lines on the left-hand > side, so it may be that you also want something like > http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html, > to choose which "upstream" to proxy_pass to, depending on the server > name presented in the SSL connection to nginx. > > Cheers, > > f Nginx is actually working as RP for several subdomains for which is also SSL termination. The traefik box is out of my scope, but it has the ability to negotiate TLS certificates for its own. That's why I need to forward just specific subdomain TCP traffic to it. ________________________________________________ *Daniel A. Rodriguez* /Informática, Conectividad y Sistemas/ Universidad Nacional del Alto Uruguay San Vicente - Misiones - Argentina informatica.unau.edu.ar -------------- next part -------------- An HTML attachment was scrubbed... URL: From jason.crews at gmail.com Fri Jul 8 17:14:13 2022 From: jason.crews at gmail.com (Jason Crews) Date: Fri, 8 Jul 2022 10:14:13 -0700 Subject: Domains not working as expected with nginx Message-ID: I'm not sure what I've got misconfigured here, I would appreciate anyone who could point me in the right direction. Site structure: maindomain.com -> mediawiki -> works sub.maindomain.com -> basic php website -> works secondarydomain.com -> wordpress -> goes to sub.maindomain.com I've posted all of the config files on reddit: https://www.reddit.com/r/nginx/comments/vtuha9/domains_not_going_where_expected/ Not sure what's going one, any help would be appreciated. Jason Crews From francis at daoine.org Fri Jul 8 17:58:14 2022 From: francis at daoine.org (Francis Daly) Date: Fri, 8 Jul 2022 18:58:14 +0100 Subject: Reverse proxy to traefik In-Reply-To: <9cc0b71b-0a35-0f25-48ff-aec5359fd090@unau.edu.ar> References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> <20220702082411.GB14648@daoine.org> <33371ce5e911c1be5da913340254b5ec@unau.edu.ar> <20220706134731.GE14648@daoine.org> <9cc0b71b-0a35-0f25-48ff-aec5359fd090@unau.edu.ar> Message-ID: <20220708175814.GG14648@daoine.org> On Thu, Jul 07, 2022 at 11:17:03AM -0300, Daniel A. Rodriguez wrote: Hi there, > Nginx is actually working as RP for several subdomains for which is also SSL > termination. The traefik box is out of my scope, but it has the ability to > negotiate TLS certificates for its own. That's why I need to forward just > specific subdomain TCP traffic to it. I think you are indicating that you currently have a http section with something like === server { listen nginx-ip:443 ssl; server_name one.example.com; location / { proxy_pass http://internal-one; # or maybe "https://internal-one;" } } server { listen nginx-ip:443 ssl; server_name two.example.com; location / { proxy_pass http://internal-two; # or maybe "https://internal-two;" } } === If you need your traefik server to see the original data stream from the client (such as: if your traefik server is using client certificates for authentication; I can't immediately think of any other https reason), then I suspect that in nginx terms you will need a second IP address, and have a separate nginx "stream" block that will listen on that-ip:443. If you are not using client certificates, you can still use a second IP to let traefik see the original data stream. But maybe you can "get away" with a normal http proxy_pass? I guess it depends on your use case, and I'm afraid that I do not know what your specific use case is. The short answer is: on a single IP:port, nginx either listens for stream, or for http, but not both. 
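Put together, that separation might look like the following sketch, assuming two addresses: nginx-ip1 keeps the existing https server blocks, and nginx-ip2 is given over entirely to the tcp pass-through:

==
http {
    server {
        listen nginx-ip1:443 ssl;
        server_name one.example.com;
        location / { proxy_pass http://internal-one; }
    }
}

stream {
    server {
        # a dedicated second IP, so this listener cannot clash with
        # the http listeners above; traefik then does its own TLS
        listen nginx-ip2:443;
        proxy_pass traefik-ip:443;
    }
}
==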
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Jul 8 18:06:56 2022 From: francis at daoine.org (Francis Daly) Date: Fri, 8 Jul 2022 19:06:56 +0100 Subject: Domains not working as expected with nginx In-Reply-To: References: Message-ID: <20220708180656.GH14648@daoine.org> On Fri, Jul 08, 2022 at 10:14:13AM -0700, Jason Crews wrote: Hi there, > I'm not sure what I've got misconfigured here, I would appreciate > anyone who could point me in the right direction. > Site structure: > > maindomain.com -> mediawiki -> works > sub.maindomain.com -> basic php website -> works > secondarydomain.com -> wordpress -> goes to sub.maindomain.com > > I've posted all of the config files on reddit: > https://www.reddit.com/r/nginx/comments/vtuha9/domains_not_going_where_expected/ For each server{} block that you have, what are the "listen" directives and what are the "server_name" directives. $ nginx -T | grep 'server\|listen' will probably give a reasonable starting point for that data. Feel free to edit it to hide anything you consider private; but please be consistent. If you use the same IP address in the config twice, edit it to the same thing. If you use different IP addresses, edit them to be different things -- anything in the 10.x network is "private enough". And for server_name entries, one.example.com, two.examle.com, and *.example.net might be reasonable ways to edit thing. (Also: feel free not to change things if you don't consider them private.) And when you report something not working, please be specific about http or https, to which particular hostname. (And confirm whether the hostname resolves to the IP address that nginx is listening on.) Hopefully the answers to those will make it clear what is happening, and what should be changed to make things happen the way you want them to happen. Cheers, f -- Francis Daly francis at daoine.org From lucas at lucasrolff.com Fri Jul 8 19:13:33 2022 From: lucas at lucasrolff.com (Lucas Rolff) Date: Fri, 8 Jul 2022 19:13:33 +0000 Subject: Slice module 206 requirement Message-ID: <8C146EA8-96A8-40C4-8829-374C56C3256F@lucasrolff.com> Hi guys, I’m having an nginx instance where I utilise the nginx slice module to slice upstream mp4 files when using proxy_cache. However, I have an interesting origin where if sending a range request (which happens when the slice module is enabled), to a file that’s less than the slice range, the origin returns a 200 OK, but with the range related headers such as content-range, but obviously the full file is returned since it’s within the requested range. When playing the MP4s through Google Chrome and Firefox it works fine when going through the nginx proxy instance, however, it somehow breaks Safari (both on MacOS, and iOS) - I guess Safari is more strict. When playing directly through the origin it works fine in all browsers. The md5 of response from the origin remains the same, so it’s not that the response itself is an invalid MP4 file, and even if you compare the cache files on disk with a “working” origin and the “broken” origin (one sends a 206 Partial Content, another sends 200 OK) - the content of the cache files remain the same, except obviously the header section of the cache file. The origin returns a 206 status code, only if the file exceeds the slice size, so if I configure a slice size of 5 megabyte, only files above 5 megabytes will give 206s. 
Anything under 5 megabytes will result in a 200 OK with content-range and the correct content-length, Looking in the slice module itself I see: https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L116-L126 if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) { if (r == r->main) { ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module); return ngx_http_next_header_filter(r); } ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "unexpected status code %ui in slice response", r->headers_out.status); return NGX_ERROR; } This seems like the slice module expects a 206 status code to be returned, however, later in the same function https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L200-L211 if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { ctx->start = slcf->size * (r->headers_out.content_offset / slcf->size); } ctx->end = r->headers_out.content_offset + r->headers_out.content_length_n; } else { ctx->end = cr.complete_length; } There it will do an else statement if the status code isn’t 206. So would this piece of code ever be reached, since there’s the initial error? Additionally I don’t see in RFC7233 that 206 responses are an absolute requirement, additionally I don’t see content-range being prohibited/forbidden to be used for 200 OK responses. Now, if one have a secondary proxy that modifies the response headers in between the origin returning 200 OK with the Content-Range header, and then strip out the Content-Range header, the nginx slice module seems to handle it fine, so somehow the combination of 200 OK and a Content-Range header being present seems to break the slice module from functioning. I’m just curious why this happens within the slice module, and if there’s any possible solution for it (like allowing the combination of 200 OK and Content-Range, since those two would still indicate that the origin/upstream supports range requests) - obviously it would be nice to fix the upstream server but sometimes that’s sadly not possible. I know the parts of the slice module haven’t been touched for years, so obviously it works for most people, just dipping my toes here to see if there’s a possible solution other than disabling slice when an origin returns 200 OK for files smaller than the slice size. Thanks in advance Best Regards, Lucas Rolff -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jason.crews at gmail.com Fri Jul 8 19:53:39 2022 From: jason.crews at gmail.com (Jason Crews) Date: Fri, 8 Jul 2022 12:53:39 -0700 Subject: Domains not working as expected with nginx In-Reply-To: <20220708180656.GH14648@daoine.org> References: <20220708180656.GH14648@daoine.org> Message-ID: server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; ssl_prefer_server_ciphers on; # server { # listen localhost:110; # server { # listen localhost:143; server { listen 127.0.0.2:80; server_name 127.0.0.2; server unix:/tmp/php-cgi.socket; server 127.0.0.1:9000; server { server_name secondarydomain.com; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; server { listen 443 ssl http2; listen [::]:443 ssl http2; ssl_prefer_server_ciphers off; server_name sub.maindomain.com; server { listen 80 default_server; listen [::]:80 default_server; server { listen 443 ssl http2; listen [::]:443 ssl http2; ssl_prefer_server_ciphers off; server_name primarydomain.com www.primarydomain.com; fastcgi_pass 127.0.0.1:9000; # or whatever port your PHP-FPM listens on # fastcgi_pass 127.0.0.1:9000; # or whatever port your PHP-FPM listens on Jason Crews On Fri, Jul 8, 2022 at 11:07 AM Francis Daly wrote: > > On Fri, Jul 08, 2022 at 10:14:13AM -0700, Jason Crews wrote: > > Hi there, > > > I'm not sure what I've got misconfigured here, I would appreciate > > anyone who could point me in the right direction. > > Site structure: > > > > maindomain.com -> mediawiki -> works > > sub.maindomain.com -> basic php website -> works > > secondarydomain.com -> wordpress -> goes to sub.maindomain.com > > > > I've posted all of the config files on reddit: > > https://www.reddit.com/r/nginx/comments/vtuha9/domains_not_going_where_expected/ > > For each server{} block that you have, what are the "listen" directives > and what are the "server_name" directives. > > $ nginx -T | grep 'server\|listen' > > will probably give a reasonable starting point for that data. Feel > free to edit it to hide anything you consider private; but please be > consistent. If you use the same IP address in the config twice, edit it > to the same thing. If you use different IP addresses, edit them to be > different things -- anything in the 10.x network is "private enough". > > And for server_name entries, one.example.com, two.examle.com, and > *.example.net might be reasonable ways to edit thing. > > (Also: feel free not to change things if you don't consider them private.) > > And when you report something not working, please be specific about http > or https, to which particular hostname. > > (And confirm whether the hostname resolves to the IP address that nginx > is listening on.) > > Hopefully the answers to those will make it clear what is happening, > and what should be changed to make things happen the way you want them > to happen. 
> > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From francis at daoine.org Fri Jul 8 23:50:33 2022 From: francis at daoine.org (Francis Daly) Date: Sat, 9 Jul 2022 00:50:33 +0100 Subject: Domains not working as expected with nginx In-Reply-To: References: <20220708180656.GH14648@daoine.org> Message-ID: <20220708235033.GI14648@daoine.org> On Fri, Jul 08, 2022 at 12:53:39PM -0700, Jason Crews wrote: Hi there, Thanks for this. I think it says that if you ask for "http://secondarydomain.com", you will get to > server { > server_name secondarydomain.com; that server block (unless secondarydomain.com resolves to 127.0.0.2); but if you ask for "https://secondarydomain.com", you will get to > server { > listen 443 ssl http2; > server_name sub.maindomain.com; that server block. Which I think is what you describe for the "wordpress" side of things. Either configure a server block with ssl for secondarydomain.com; or make sure to only access secondarydomain.com over http. (And if something like wordpress redirects to https, make it stop doing that.) Hope this helps, f -- Francis Daly francis at daoine.org From hobson42 at gmail.com Sat Jul 9 10:23:20 2022 From: hobson42 at gmail.com (Ian Hobson) Date: Sat, 9 Jul 2022 17:23:20 +0700 Subject: Domains not working as expected with nginx In-Reply-To: References: Message-ID: Hi Jason, This sounds to me like a wordpress problem. Within the wordpress database, you will find a ???-options table, where ??? is a prefix for the site. The first two entries in that table are the "siteurl" and "home" urls. If these are wrong, you will find your website redirects as you observed. Just edit the option_value field with phpMyAdmin or similar. Regards Ian On 09/07/2022 00:14, Jason Crews wrote: > I'm not sure what I've got misconfigured here, I would appreciate > anyone who could point me in the right direction. > Site structure: > > maindomain.com -> mediawiki -> works > sub.maindomain.com -> basic php website -> works > secondarydomain.com -> wordpress -> goes to sub.maindomain.com > > I've posted all of the config files on reddit: > https://www.reddit.com/r/nginx/comments/vtuha9/domains_not_going_where_expected/ > > Not sure what's going one, any help would be appreciated. > > Jason Crews > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -- Ian Hobson Tel (+66) 626 544 695 From mdounin at mdounin.ru Sun Jul 10 08:35:48 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 10 Jul 2022 11:35:48 +0300 Subject: Slice module 206 requirement In-Reply-To: <8C146EA8-96A8-40C4-8829-374C56C3256F@lucasrolff.com> References: <8C146EA8-96A8-40C4-8829-374C56C3256F@lucasrolff.com> Message-ID: Hello! On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wrote: > I’m having an nginx instance where I utilise the nginx slice > module to slice upstream mp4 files when using proxy_cache. > > However, I have an interesting origin where if sending a range > request (which happens when the slice module is enabled), to a > file that’s less than the slice range, the origin returns a 200 > OK, but with the range related headers such as content-range, > but obviously the full file is returned since it’s within the > requested range. 
> > When playing the MP4s through Google Chrome and Firefox it works > fine when going through the nginx proxy instance, however, it > somehow breaks Safari (both on MacOS, and iOS) - I guess Safari > is more strict. > When playing directly through the origin it works fine in all > browsers. > > The md5 of response from the origin remains the same, so it’s > not that the response itself is an invalid MP4 file, and even if > you compare the cache files on disk with a “working” origin and > the “broken” origin (one sends a 206 Partial Content, another > sends 200 OK) - the content of the cache files remain the same, > except obviously the header section of the cache file. > > The origin returns a 206 status code, only if the file exceeds > the slice size, so if I configure a slice size of 5 megabyte, > only files above 5 megabytes will give 206s. Anything under 5 > megabytes will result in a 200 OK with content-range and the > correct content-length, > > Looking in the slice module itself I see: > https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L116-L126 > > > if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) { > if (r == r->main) { > ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module); > return ngx_http_next_header_filter(r); > } > > ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > "unexpected status code %ui in slice response", > r->headers_out.status); > return NGX_ERROR; > } > > This seems like the slice module expects a 206 status code to be > returned, For the main request, the code accepts two basic valid variants: - 206, so the slice module will combine multiple responses to range requests as needed; - anything else, so the slice module will give up and simply return the response to the client. If the module sees a non-206 response to a subrequest, this is an error, as the slice module expects underlying resources to be immutable, and does not expect that some ranges can be requested, while some other aren't. This isn't something related to your case though. > however, later in the same function > https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L200-L211 > > > if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { > if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { > ctx->start = slcf->size > * (r->headers_out.content_offset / slcf->size); > } > > ctx->end = r->headers_out.content_offset > + r->headers_out.content_length_n; > > } else { > ctx->end = cr.complete_length; > } > > There it will do an else statement if the status code isn’t 206. > So would this piece of code ever be reached, since there’s the initial error? Following the initial check, r->headers_out.status is explicitly changed to NGX_HTTP_OK. Later on the ngx_http_next_header_filter() call might again change r->headers_out.status as long as the client used a range request, and this is what checked here. > Additionally I don’t see in RFC7233 that 206 responses are an > absolute requirement, additionally I don’t see content-range > being prohibited/forbidden to be used for 200 OK responses. > Now, if one have a secondary proxy that modifies the response > headers in between the origin returning 200 OK with the > Content-Range header, and then strip out the Content-Range > header, the nginx slice module seems to handle it fine, so > somehow the combination of 200 OK and a Content-Range header > being present seems to break the slice module from functioning. 
> > I’m just curious why this happens within the slice module, and > if there’s any possible solution for it (like allowing the > combination of 200 OK and Content-Range, since those two would > still indicate that the origin/upstream supports range requests) > - obviously it would be nice to fix the upstream server but > sometimes that’s sadly not possible. From the above explanation it is probably already clear that "disabling slice when an origin returns 200 OK" is what actually happens. The issue does not appear without the slice module in your testing because the Content-Range header seems to be only present in your backend 200 responses when there was a Range header in the request, and this is what happens only with the slice module. I've done some limited testing with Safari and manually added Content-Range header, and there seem to be at least two issues: - Range filter in nginx does not expect the Content-Range header to be already present in 200 responses and simply adds another one. This results in incorrect range responses with multiple Content-Range headers, and this breaks Safari. - Safari also breaks if its test request with "Range: bytes=0-1" results in 200 with the Content-Range header. My initial fix was to simply disable handling of 200 responses with Content-Range headers in the range filter, so such responses wouldn't be touched at all. This is perfectly correct and probably the most secure thing to do, but does not work with Safari due to the second issue outlined above. Another approach would be to clear pre-existing Content-Range headers in the range filter. This seems to work, at least in my testing. See below for the patch. > I know the parts of the slice module haven’t been touched for > years, so obviously it works for most people, just dipping my > toes here to see if there’s a possible solution other than > disabling slice when an origin returns 200 OK for files smaller > than the slice size. Note that the slice module is generally unsafe to use for arbitrary upstream servers: it relies on expectations which are beyond the HTTP standard requirements. In particular: - It requires resources to be immutable, so different range responses can be combined together. - It does not try to handle edge cases, such as 416 returned by the upstream on empty files (which is correct per RFC, but requires complicated additional handling to convert 416 to 200, so it is better to just return 200 OK). In general, the slice module is to be used only in your own infrastructure when you control the backend and can be sure that the slice module expectations are met. As such, disabling it for backends which do something unexpected might actually be a good idea. On the other hand, in this particular case the nginx behaviour can be adjusted to handle things gracefully. Below is a patch to clear pre-existing Content-Range headers in the range filter. Please test if it works for you. # HG changeset patch # User Maxim Dounin # Date 1657439390 -10800 # Sun Jul 10 10:49:50 2022 +0300 # Node ID 219217ea49a8d648f5cadd046f1b1294ef05693c # Parent 9d98d524bd02a562d9cd83f4e369c7e992c5753b Range filter: clearing of pre-existing Content-Range headers. Some servers might emit Content-Range header on 200 responses, and this does not seem to contradict RFC 9110: as per RFC 9110, the Content-Range header has no meaning for status codes other than 206 and 416. Previously this resulted in duplicate Content-Range headers in nginx responses handled by the range filter. 
Fix is to clear pre-existing headers. diff --git a/src/http/modules/ngx_http_range_filter_module.c b/src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c +++ b/src/http/modules/ngx_http_range_filter_module.c @@ -425,6 +425,10 @@ ngx_http_range_singlepart_header(ngx_htt return NGX_ERROR; } + if (r->headers_out.content_range) { + r->headers_out.content_range->hash = 0; + } + r->headers_out.content_range = content_range; content_range->hash = 1; @@ -582,6 +586,11 @@ ngx_http_range_multipart_header(ngx_http r->headers_out.content_length = NULL; } + if (r->headers_out.content_range) { + r->headers_out.content_range->hash = 0; + r->headers_out.content_range = NULL; + } + return ngx_http_next_header_filter(r); } @@ -598,6 +607,10 @@ ngx_http_range_not_satisfiable(ngx_http_ return NGX_ERROR; } + if (r->headers_out.content_range) { + r->headers_out.content_range->hash = 0; + } + r->headers_out.content_range = content_range; content_range->hash = 1; -- Maxim Dounin http://mdounin.ru/ From lucas at lucasrolff.com Sun Jul 10 09:08:18 2022 From: lucas at lucasrolff.com (Lucas Rolff) Date: Sun, 10 Jul 2022 09:08:18 +0000 Subject: Slice module 206 requirement In-Reply-To: References: <8C146EA8-96A8-40C4-8829-374C56C3256F@lucasrolff.com> Message-ID: You’re truly awesome! I’ll give the patch a try tomorrow - and thanks for the other bits and pieces of information, especially regarding the expectations as well. I wish you an awesome Sunday! Best Regards, Lucas Rolff > On 10 Jul 2022, at 10:35, Maxim Dounin wrote: > > Hello! > > On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wrote: > >> I’m having an nginx instance where I utilise the nginx slice >> module to slice upstream mp4 files when using proxy_cache. >> >> However, I have an interesting origin where if sending a range >> request (which happens when the slice module is enabled), to a >> file that’s less than the slice range, the origin returns a 200 >> OK, but with the range related headers such as content-range, >> but obviously the full file is returned since it’s within the >> requested range. >> >> When playing the MP4s through Google Chrome and Firefox it works >> fine when going through the nginx proxy instance, however, it >> somehow breaks Safari (both on MacOS, and iOS) - I guess Safari >> is more strict. >> When playing directly through the origin it works fine in all >> browsers. >> >> The md5 of response from the origin remains the same, so it’s >> not that the response itself is an invalid MP4 file, and even if >> you compare the cache files on disk with a “working” origin and >> the “broken” origin (one sends a 206 Partial Content, another >> sends 200 OK) - the content of the cache files remain the same, >> except obviously the header section of the cache file. >> >> The origin returns a 206 status code, only if the file exceeds >> the slice size, so if I configure a slice size of 5 megabyte, >> only files above 5 megabytes will give 206s. 
Anything under 5 >> megabytes will result in a 200 OK with content-range and the >> correct content-length, >> >> Looking in the slice module itself I see: >> https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L116-L126 >> >> >> if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) { >> if (r == r->main) { >> ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module); >> return ngx_http_next_header_filter(r); >> } >> >> ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, >> "unexpected status code %ui in slice response", >> r->headers_out.status); >> return NGX_ERROR; >> } >> >> This seems like the slice module expects a 206 status code to be >> returned, > > For the main request, the code accepts two basic valid variants: > > - 206, so the slice module will combine multiple responses to > range requests as needed; > > - anything else, so the slice module will give up and simply > return the response to the client. > > If the module sees a non-206 response to a subrequest, this is an > error, as the slice module expects underlying resources to be > immutable, and does not expect that some ranges can be requested, > while some other aren't. This isn't something related to your > case though. > >> however, later in the same function >> https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L200-L211 >> >> >> if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { >> if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { >> ctx->start = slcf->size >> * (r->headers_out.content_offset / slcf->size); >> } >> >> ctx->end = r->headers_out.content_offset >> + r->headers_out.content_length_n; >> >> } else { >> ctx->end = cr.complete_length; >> } >> >> There it will do an else statement if the status code isn’t 206. >> So would this piece of code ever be reached, since there’s the initial error? > > Following the initial check, r->headers_out.status is explicitly > changed to NGX_HTTP_OK. Later on the > ngx_http_next_header_filter() call might again change > r->headers_out.status as long as the client used a range request, > and this is what checked here. > >> Additionally I don’t see in RFC7233 that 206 responses are an >> absolute requirement, additionally I don’t see content-range >> being prohibited/forbidden to be used for 200 OK responses. >> Now, if one have a secondary proxy that modifies the response >> headers in between the origin returning 200 OK with the >> Content-Range header, and then strip out the Content-Range >> header, the nginx slice module seems to handle it fine, so >> somehow the combination of 200 OK and a Content-Range header >> being present seems to break the slice module from functioning. >> >> I’m just curious why this happens within the slice module, and >> if there’s any possible solution for it (like allowing the >> combination of 200 OK and Content-Range, since those two would >> still indicate that the origin/upstream supports range requests) >> - obviously it would be nice to fix the upstream server but >> sometimes that’s sadly not possible. > >> From the above explanation it is probably already clear that > "disabling slice when an origin returns 200 OK" is what actually > happens. > > The issue does not appear without the slice module in your testing > because the Content-Range header seems to be only present in your > backend 200 responses when there was a Range header in the > request, and this is what happens only with the slice module. 
> > I've done some limited testing with Safari and manually added > Content-Range header, and there seem to be at least two issues: > > - Range filter in nginx does not expect the Content-Range header > to be already present in 200 responses and simply adds another > one. This results in incorrect range responses with multiple > Content-Range headers, and this breaks Safari. > > - Safari also breaks if its test request with "Range: bytes=0-1" > results in 200 with the Content-Range header. > > My initial fix was to simply disable handling of 200 responses > with Content-Range headers in the range filter, so such responses > wouldn't be touched at all. This is perfectly correct and > probably the most secure thing to do, but does not work with > Safari due to the second issue outlined above. > > Another approach would be to clear pre-existing Content-Range > headers in the range filter. This seems to work, at least in my > testing. See below for the patch. > >> I know the parts of the slice module haven’t been touched for >> years, so obviously it works for most people, just dipping my >> toes here to see if there’s a possible solution other than >> disabling slice when an origin returns 200 OK for files smaller >> than the slice size. > > Note that that slice module is generally unsafe to use for > arbitrary upstream servers: it relies on expectations which are > beyond the HTTP standard requirements. In particular: > > - It requires resources to be immutable, so different range > responses can be combined together. > > - It does not try to handle edge cases, such as 416 returned by > the upstream on empty files (which is correct per RFC, but > requires complicated additional handling to convert 416 to 200, so > it is better to just return 200 OK). > > In general, the slice module is to be used only in your own > infrastructure when you control the backend and can be sure that > the slice module expectations are met. As such, disabling it for > backends which do something unexpected might actually be a good > idea. On the other hand, in this particular case the nginx > behaviour can be adjusted to handle things gracefully. > > Below is a patch to clear pre-existing Content-Range headers > in the range filter. Please test if it works for you. > > # HG changeset patch > # User Maxim Dounin > # Date 1657439390 -10800 > # Sun Jul 10 10:49:50 2022 +0300 > # Node ID 219217ea49a8d648f5cadd046f1b1294ef05693c > # Parent 9d98d524bd02a562d9cd83f4e369c7e992c5753b > Range filter: clearing of pre-existing Content-Range headers. > > Some servers might emit Conten-Range header on 200 responses, and this > does not seem to contradict RFC 9110: as per RFC 9110, the Content-Range > header have no meaning for status codes other than 206 and 417. Previously > this resulted in duplicate Content-Range headers in nginx responses handled > by the range filter. Fix is to clear pre-existing headers. 
> > diff --git a/src/http/modules/ngx_http_range_filter_module.c b/src/http/modules/ngx_http_range_filter_module.c > --- a/src/http/modules/ngx_http_range_filter_module.c > +++ b/src/http/modules/ngx_http_range_filter_module.c > @@ -425,6 +425,10 @@ ngx_http_range_singlepart_header(ngx_htt > return NGX_ERROR; > } > > + if (r->headers_out.content_range) { > + r->headers_out.content_range->hash = 0; > + } > + > r->headers_out.content_range = content_range; > > content_range->hash = 1; > @@ -582,6 +586,11 @@ ngx_http_range_multipart_header(ngx_http > r->headers_out.content_length = NULL; > } > > + if (r->headers_out.content_range) { > + r->headers_out.content_range->hash = 0; > + r->headers_out.content_range = NULL; > + } > + > return ngx_http_next_header_filter(r); > } > > @@ -598,6 +607,10 @@ ngx_http_range_not_satisfiable(ngx_http_ > return NGX_ERROR; > } > > + if (r->headers_out.content_range) { > + r->headers_out.content_range->hash = 0; > + } > + > r->headers_out.content_range = content_range; > > content_range->hash = 1; > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From venefax at gmail.com Mon Jul 11 19:13:24 2022 From: venefax at gmail.com (Saint Michael) Date: Mon, 11 Jul 2022 15:13:24 -0400 Subject: reverse proxy In-Reply-To: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> Message-ID: I have a reverse proxy and need to execute a bash script each time somebody connects to it. What is the right way to do it? I need to update a database. A parameter must be the public IP of the client. From teward at thomas-ward.net Mon Jul 11 19:49:33 2022 From: teward at thomas-ward.net (Thomas Ward) Date: Mon, 11 Jul 2022 15:49:33 -0400 Subject: reverse proxy In-Reply-To: References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> Message-ID: <7796d922-6a7a-ab55-8e73-b00f975c31c8@thomas-ward.net> Ideally you would have your reverse proxy hand off to an application that does this.  I don't think there's an inbuilt way to execute a given script every time someone connects via Bash.  This is something your backend application should really be handling. On 7/11/22 15:13, Saint Michael wrote: > I have a reverse proxy and need to execute a bash script each time > somebody connects to it. > What is the right way to do it? I need to update a database. A > parameter must be the public IP of the client. > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > From venefax at gmail.com Mon Jul 11 20:22:00 2022 From: venefax at gmail.com (Saint Michael) Date: Mon, 11 Jul 2022 16:22:00 -0400 Subject: reverse proxy In-Reply-To: <7796d922-6a7a-ab55-8e73-b00f975c31c8@thomas-ward.net> References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> <7796d922-6a7a-ab55-8e73-b00f975c31c8@thomas-ward.net> Message-ID: I did not explain myself well. My reverse proxy is at https://bellingcat.oneye.us/ it goes to https://www.bellingcat.com so, every time somebody opens Chrome and goes to https://belloingcat.oneye.us somewhere in my definition I need to fire a bash script (or any script) with some parameters to record the address. I cannot believe that was not considered. Thanks for the help. 
On Mon, Jul 11, 2022 at 3:49 PM Thomas Ward wrote: > > Ideally you would have your reverse proxy hand off to an application > that does this. I don't think there's an inbuilt way to execute a given > script every time someone connects via Bash. This is something your > backend application should really be handling. > > On 7/11/22 15:13, Saint Michael wrote: > > I have a reverse proxy and need to execute a bash script each time > > somebody connects to it. > > What is the right way to do it? I need to update a database. A > > parameter must be the public IP of the client. > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org > > > From teward at thomas-ward.net Mon Jul 11 21:01:56 2022 From: teward at thomas-ward.net (Thomas Ward) Date: Mon, 11 Jul 2022 17:01:56 -0400 Subject: reverse proxy In-Reply-To: References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> <7796d922-6a7a-ab55-8e73-b00f975c31c8@thomas-ward.net> Message-ID: <47e17259-2567-26cb-0e42-2aefcd426fdb@thomas-ward.net> Reiterating my last statement, I don't think there's a way to configure this in NGINX out of the box, the closest thing I can think of is an Lua script that would be written to do this with the OpenRESTY Lua module, however I"m not a pro at that, and that's not Bash. If you don't need **absolute real time** though, you can probably achieve this with a passive logging method - using a dedicated access log for your specific site and then process and clean your access log when your script runs on an automatedtimer, but it's not 'realtime' or 'on connect' in that approach.  You can still extract IPs, hostnames requested, URIs, etc. from the logs if you configure it right. On 7/11/22 16:22, Saint Michael wrote: > I did not explain myself well. > My reverse proxy is at > https://bellingcat.oneye.us/ > it goes to > https://www.bellingcat.com > so, every time somebody opens Chrome and goes to https://belloingcat.oneye.us > somewhere in my definition I need to fire a bash script (or any > script) with some parameters to record the address. > I cannot believe that was not considered. > Thanks for the help. > > On Mon, Jul 11, 2022 at 3:49 PM Thomas Ward wrote: >> Ideally you would have your reverse proxy hand off to an application >> that does this. I don't think there's an inbuilt way to execute a given >> script every time someone connects via Bash. This is something your >> backend application should really be handling. >> >> On 7/11/22 15:13, Saint Michael wrote: >>> I have a reverse proxy and need to execute a bash script each time >>> somebody connects to it. >>> What is the right way to do it? I need to update a database. A >>> parameter must be the public IP of the client. >>> _______________________________________________ >>> nginx mailing list -- nginx at nginx.org >>> To unsubscribe send an email to nginx-leave at nginx.org >>> From jessica at oplin.ohio.gov Mon Jul 11 21:05:41 2022 From: jessica at oplin.ohio.gov (Jessica Dooley) Date: Mon, 11 Jul 2022 17:05:41 -0400 Subject: reverse proxy In-Reply-To: <47e17259-2567-26cb-0e42-2aefcd426fdb@thomas-ward.net> References: <139a4296498a0d99fb9affe9a1339a1d@unau.edu.ar> <7796d922-6a7a-ab55-8e73-b00f975c31c8@thomas-ward.net> <47e17259-2567-26cb-0e42-2aefcd426fdb@thomas-ward.net> Message-ID: Seconding Thomas's reply - Optimally, this should be done at the application layer. 
Configure proxy_set_header to send the clients' real public IPs from the reverse proxy to the upstream application. That way, your destination site will see the real IP of every visitor, rather than the reverse proxy's IP. proxy_set_header X-Real-IP $remote_addr; https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header https://www.nginx.com/resources/wiki/start/topics/examples/forwarded/ It is possible to trigger a bash script by watching your reverse proxy logs. Here is one way: Determine a pattern that will always match the nginx log lines that you want to log; write a bash script to tail the nginx log, and grep for matching lines; cut the values you want to collect from the log line, and insert them into a database. To continuously watch for new visits, create a unit file to run the script as a system service. tail -Fn0 /path/to/access.log | grep --line-buffered "pattern here" | while read -r line; do ip=$(echo $line | cut -f 1 -d " ") && \ timestamp=$(echo $line | cut -f 4 -d " ") && \ mysql -u user -ppass -D dbname -e "INSERT INTO table(timestamp,ip) VALUES ($timestamp, $ip)"; done However, I would strongly suggest avoiding that method for this specific task. Jessica D. Dooley Ohio Public Library Information Network jessica at oplin.ohio.gov On Mon, Jul 11, 2022 at 5:02 PM Thomas Ward wrote: > Reiterating my last statement, I don't think there's a way to configure > this in NGINX out of the box, the closest thing I can think of is an Lua > script that would be written to do this with the OpenRESTY Lua module, > however I"m not a pro at that, and that's not Bash. > > If you don't need **absolute real time** though, you can probably > achieve this with a passive logging method - using a dedicated access > log for your specific site and then process and clean your access log > when your script runs on an automatedtimer, but it's not 'realtime' or > 'on connect' in that approach. You can still extract IPs, hostnames > requested, URIs, etc. from the logs if you configure it right. > > On 7/11/22 16:22, Saint Michael wrote: > > I did not explain myself well. > > My reverse proxy is at > > https://bellingcat.oneye.us/ > > it goes to > > https://www.bellingcat.com > > so, every time somebody opens Chrome and goes to > https://belloingcat.oneye.us > > somewhere in my definition I need to fire a bash script (or any > > script) with some parameters to record the address. > > I cannot believe that was not considered. > > Thanks for the help. > > > > On Mon, Jul 11, 2022 at 3:49 PM Thomas Ward > wrote: > >> Ideally you would have your reverse proxy hand off to an application > >> that does this. I don't think there's an inbuilt way to execute a given > >> script every time someone connects via Bash. This is something your > >> backend application should really be handling. > >> > >> On 7/11/22 15:13, Saint Michael wrote: > >>> I have a reverse proxy and need to execute a bash script each time > >>> somebody connects to it. > >>> What is the right way to do it? I need to update a database. A > >>> parameter must be the public IP of the client. > >>> _______________________________________________ > >>> nginx mailing list -- nginx at nginx.org > >>> To unsubscribe send an email to nginx-leave at nginx.org > >>> > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jason.crews at gmail.com Tue Jul 12 02:34:25 2022 From: jason.crews at gmail.com (Jason Crews) Date: Mon, 11 Jul 2022 19:34:25 -0700 Subject: Domains not working as expected with nginx In-Reply-To: References: Message-ID: Turns out I was missing a listen statement. Thanks for the help! Jason Crews On Sat, Jul 9, 2022 at 3:25 AM Ian Hobson wrote: > > Hi Jason, > > This sounds to me like a wordpress problem. > > Within the wordpress database, you will find a ???-options table, where > ??? is a prefix for the site. > > The first two entries in that table are the "siteurl" and "home" urls. > If these are wrong, you will find your website redirects as you observed. > > Just edit the option_value field with phpMyAdmin or similar. > > Regards > > Ian > > On 09/07/2022 00:14, Jason Crews wrote: > > I'm not sure what I've got misconfigured here, I would appreciate > > anyone who could point me in the right direction. > > Site structure: > > > > maindomain.com -> mediawiki -> works > > sub.maindomain.com -> basic php website -> works > > secondarydomain.com -> wordpress -> goes to sub.maindomain.com > > > > I've posted all of the config files on reddit: > > https://www.reddit.com/r/nginx/comments/vtuha9/domains_not_going_where_expected/ > > > > Not sure what's going one, any help would be appreciated. > > > > Jason Crews > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org > > -- > Ian Hobson > Tel (+66) 626 544 695 > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From oasis at embracelabs.com Tue Jul 12 08:00:32 2022 From: oasis at embracelabs.com (박규철) Date: Tue, 12 Jul 2022 17:00:32 +0900 Subject: Does Nginx support RFC 8673 feature for low latency? Message-ID: Hi. I'm looking for "low latency" related settings and functions to build faster services through Nginx. After many searches and functional tests, I thought that RFC 8673's behavior was necessary to implement "low latency". 
I have set up and tested the existing slice module, but I think it is insufficient to implement the "low latency" of RFC 8673.

1. I want to know if the operation of RFC 8673 is supported in the released nginx.

2. If the operation of RFC 8673 is not supported, I would like to know if there is a future support plan.

Any help is appreciated.

Best regards,
kyucheol

--
박규철
Development Team / Deputy Manager
Mobile +82 1093285425
Email oasis at embracelabs.com
Messenger LINE : kkobal0903
(주)엠브레이스

From mdounin at mdounin.ru Tue Jul 12 10:28:51 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 12 Jul 2022 13:28:51 +0300
Subject: Does Nginx support RFC 8673 feature for low latency?
In-Reply-To:
References:
Message-ID:

Hello!

On Tue, Jul 12, 2022 at 05:00:32PM +0900, 박규철 wrote:

> I'm looking for "low latency" related settings and functions to build
> faster services through Nginx.
>
> After many searches and functional tests, I thought that RFC 8673's
> behavior was necessary to implement "low latency".
>
> I have set up and tested the existing slice module, but I think it is
> insufficient to implement the "low latency" of RFC 8673.
>
> 1. I want to know if the operation of RFC 8673 is supported in the
> released nginx.
>
> 2. If the operation of RFC 8673 is not supported, I would like to know if
> there is a future support plan.
>
> Any help is appreciated.

As of now, no RFC 8673 experimental features are supported by nginx. No support is currently planned.

As long as relevant features are HTTP-compatible, nginx can be used for proxying corresponding requests.

--
Maxim Dounin
http://mdounin.ru/

From arut at nginx.com Wed Jul 13 16:12:06 2022
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 13 Jul 2022 20:12:06 +0400
Subject: Slice module 206 requirement
In-Reply-To:
References: <8C146EA8-96A8-40C4-8829-374C56C3256F@lucasrolff.com>
Message-ID: <20220713161206.uez4k2z45ldiful3@N00W24XTQX>

Hi,

On Sun, Jul 10, 2022 at 11:35:48AM +0300, Maxim Dounin wrote:
> Hello!
>
> On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wrote:
>
> > I’m having an nginx instance where I utilise the nginx slice
> > module to slice upstream mp4 files when using proxy_cache.
> >
> > However, I have an interesting origin where if sending a range
> > request (which happens when the slice module is enabled), to a
> > file that’s less than the slice range, the origin returns a 200
> > OK, but with the range related headers such as content-range,
> > but obviously the full file is returned since it’s within the
> > requested range.
> >
> > When playing the MP4s through Google Chrome and Firefox it works
> > fine when going through the nginx proxy instance, however, it
> > somehow breaks Safari (both on MacOS, and iOS) - I guess Safari
> > is more strict.
> > When playing directly through the origin it works fine in all
> > browsers.
> >
> > The md5 of response from the origin remains the same, so it’s
> > not that the response itself is an invalid MP4 file, and even if
> > you compare the cache files on disk with a “working” origin and
> > the “broken” origin (one sends a 206 Partial Content, another
> > sends 200 OK) - the content of the cache files remain the same,
> > except obviously the header section of the cache file.
> > > > The origin returns a 206 status code, only if the file exceeds > > the slice size, so if I configure a slice size of 5 megabyte, > > only files above 5 megabytes will give 206s. Anything under 5 > > megabytes will result in a 200 OK with content-range and the > > correct content-length, > > > > Looking in the slice module itself I see: > > https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L116-L126 > > > > > > if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) { > > if (r == r->main) { > > ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module); > > return ngx_http_next_header_filter(r); > > } > > > > ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > > "unexpected status code %ui in slice response", > > r->headers_out.status); > > return NGX_ERROR; > > } > > > > This seems like the slice module expects a 206 status code to be > > returned, > > For the main request, the code accepts two basic valid variants: > > - 206, so the slice module will combine multiple responses to > range requests as needed; > > - anything else, so the slice module will give up and simply > return the response to the client. > > If the module sees a non-206 response to a subrequest, this is an > error, as the slice module expects underlying resources to be > immutable, and does not expect that some ranges can be requested, > while some other aren't. This isn't something related to your > case though. > > > however, later in the same function > > https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L200-L211 > > > > > > if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { > > if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { > > ctx->start = slcf->size > > * (r->headers_out.content_offset / slcf->size); > > } > > > > ctx->end = r->headers_out.content_offset > > + r->headers_out.content_length_n; > > > > } else { > > ctx->end = cr.complete_length; > > } > > > > There it will do an else statement if the status code isn’t 206. > > So would this piece of code ever be reached, since there’s the initial error? > > Following the initial check, r->headers_out.status is explicitly > changed to NGX_HTTP_OK. Later on the > ngx_http_next_header_filter() call might again change > r->headers_out.status as long as the client used a range request, > and this is what checked here. > > > Additionally I don’t see in RFC7233 that 206 responses are an > > absolute requirement, additionally I don’t see content-range > > being prohibited/forbidden to be used for 200 OK responses. > > Now, if one have a secondary proxy that modifies the response > > headers in between the origin returning 200 OK with the > > Content-Range header, and then strip out the Content-Range > > header, the nginx slice module seems to handle it fine, so > > somehow the combination of 200 OK and a Content-Range header > > being present seems to break the slice module from functioning. > > > > I’m just curious why this happens within the slice module, and > > if there’s any possible solution for it (like allowing the > > combination of 200 OK and Content-Range, since those two would > > still indicate that the origin/upstream supports range requests) > > - obviously it would be nice to fix the upstream server but > > sometimes that’s sadly not possible. > > >From the above explanation it is probably already clear that > "disabling slice when an origin returns 200 OK" is what actually > happens. 
> > The issue does not appear without the slice module in your testing > because the Content-Range header seems to be only present in your > backend 200 responses when there was a Range header in the > request, and this is what happens only with the slice module. > > I've done some limited testing with Safari and manually added > Content-Range header, and there seem to be at least two issues: > > - Range filter in nginx does not expect the Content-Range header > to be already present in 200 responses and simply adds another > one. This results in incorrect range responses with multiple > Content-Range headers, and this breaks Safari. > > - Safari also breaks if its test request with "Range: bytes=0-1" > results in 200 with the Content-Range header. > > My initial fix was to simply disable handling of 200 responses > with Content-Range headers in the range filter, so such responses > wouldn't be touched at all. This is perfectly correct and > probably the most secure thing to do, but does not work with > Safari due to the second issue outlined above. > > Another approach would be to clear pre-existing Content-Range > headers in the range filter. This seems to work, at least in my > testing. See below for the patch. > > > I know the parts of the slice module haven’t been touched for > > years, so obviously it works for most people, just dipping my > > toes here to see if there’s a possible solution other than > > disabling slice when an origin returns 200 OK for files smaller > > than the slice size. > > Note that that slice module is generally unsafe to use for > arbitrary upstream servers: it relies on expectations which are > beyond the HTTP standard requirements. In particular: > > - It requires resources to be immutable, so different range > responses can be combined together. > > - It does not try to handle edge cases, such as 416 returned by > the upstream on empty files (which is correct per RFC, but > requires complicated additional handling to convert 416 to 200, so > it is better to just return 200 OK). > > In general, the slice module is to be used only in your own > infrastructure when you control the backend and can be sure that > the slice module expectations are met. As such, disabling it for > backends which do something unexpected might actually be a good > idea. On the other hand, in this particular case the nginx > behaviour can be adjusted to handle things gracefully. > > Below is a patch to clear pre-existing Content-Range headers > in the range filter. Please test if it works for you. > > # HG changeset patch > # User Maxim Dounin > # Date 1657439390 -10800 > # Sun Jul 10 10:49:50 2022 +0300 > # Node ID 219217ea49a8d648f5cadd046f1b1294ef05693c > # Parent 9d98d524bd02a562d9cd83f4e369c7e992c5753b > Range filter: clearing of pre-existing Content-Range headers. > > Some servers might emit Conten-Range header on 200 responses, and this Missing "t" in "Conten-Range". > does not seem to contradict RFC 9110: as per RFC 9110, the Content-Range > header have no meaning for status codes other than 206 and 417. Previously have -> has 417 -> 416 > this resulted in duplicate Content-Range headers in nginx responses handled > by the range filter. Fix is to clear pre-existing headers. 
>
> diff --git a/src/http/modules/ngx_http_range_filter_module.c b/src/http/modules/ngx_http_range_filter_module.c
> --- a/src/http/modules/ngx_http_range_filter_module.c
> +++ b/src/http/modules/ngx_http_range_filter_module.c
> @@ -425,6 +425,10 @@ ngx_http_range_singlepart_header(ngx_htt
>          return NGX_ERROR;
>      }
>
> +    if (r->headers_out.content_range) {
> +        r->headers_out.content_range->hash = 0;
> +    }
> +
>      r->headers_out.content_range = content_range;
>
>      content_range->hash = 1;
> @@ -582,6 +586,11 @@ ngx_http_range_multipart_header(ngx_http
>          r->headers_out.content_length = NULL;
>      }
>
> +    if (r->headers_out.content_range) {
> +        r->headers_out.content_range->hash = 0;
> +        r->headers_out.content_range = NULL;
> +    }
> +
>      return ngx_http_next_header_filter(r);
>  }
>
> @@ -598,6 +607,10 @@ ngx_http_range_not_satisfiable(ngx_http_
>          return NGX_ERROR;
>      }
>
> +    if (r->headers_out.content_range) {
> +        r->headers_out.content_range->hash = 0;
> +    }
> +
>      r->headers_out.content_range = content_range;
>
>      content_range->hash = 1;

The patch looks ok to me.

Tested with proxy_force_ranges.

From oasis at embracelabs.com Thu Jul 14 09:38:23 2022
From: oasis at embracelabs.com (박규철)
Date: Thu, 14 Jul 2022 18:38:23 +0900
Subject: Does Nginx support RFC 8673 feature for low latency?
In-Reply-To:
References:
Message-ID:

Hello

Thank you to everyone who answered. I came to understand that RFC 8673 is an experimental feature and is not in the feature support plan.

Thank you for your answer again.

On Tue, Jul 12, 2022 at 7:31 PM, Maxim Dounin wrote:
>
> Hello!
>
> On Tue, Jul 12, 2022 at 05:00:32PM +0900, 박규철 wrote:
>
> > I'm looking for "low latency" related settings and functions to build
> > faster services through Nginx.
> >
> > After many searches and functional tests, I thought that RFC 8673's
> > behavior was necessary to implement "low latency".
> >
> > I have set up and tested the existing slice module, but I think it is
> > insufficient to implement the "low latency" of RFC 8673.
> >
> > 1. I want to know if the operation of RFC 8673 is supported in the
> > released nginx.
> >
> > 2. If the operation of RFC 8673 is not supported, I would like to know if
> > there is a future support plan.
> >
> > Any help is appreciated.
>
> As of now, no RFC 8673 experimental features are supported by
> nginx. No support is currently planned.
>
> As long as relevant features are HTTP-compatible, nginx can be
> used for proxying corresponding requests.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list -- nginx at nginx.org
> To unsubscribe send an email to nginx-leave at nginx.org

--
박규철
Development Team / Manager
Mobile +82 1093285425
Email oasis at embracelabs.com
Messenger LINE : kkobal0903
(주)엠브레이스

From nginx-forum at forum.nginx.org Thu Jul 14 13:24:18 2022
From: nginx-forum at forum.nginx.org (cw318)
Date: Thu, 14 Jul 2022 09:24:18 -0400
Subject: gzip recompression on nginx in reverse proxy mode
Message-ID: <06217b5539bde08e438e7e0bba1dc46a.NginxMailingListEnglish@forum.nginx.org>

Faced with strange behavior of nginx in reverse proxy mode. There is web server 1 - the original source, and web server 2 on nginx, caching data in reverse proxy mode between web server 1 and users.
Web server 1 delivers content in maximum gzip compression: gzip compressed data, max compression, from Unix

Web server 2 delivers content not in maximum compression: gzip compressed data, max speed, from Unix, original size modulo 2^32 328219

It looks like web server 2 recompresses gzip, but gzip compression is disabled in the nginx config with the following directives:

gzip_proxied off;
gzip_static off;
gzip off;

Maybe somebody has experienced similar behavior?

Expected behavior: deliver content to the client from the cache exactly as we get it from the original web server 1 (already in maximum compression).

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294748,294748#msg-294748

From mdounin at mdounin.ru Fri Jul 15 04:07:11 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 15 Jul 2022 07:07:11 +0300
Subject: Slice module 206 requirement
In-Reply-To: <20220713161206.uez4k2z45ldiful3@N00W24XTQX>
References: <8C146EA8-96A8-40C4-8829-374C56C3256F@lucasrolff.com> <20220713161206.uez4k2z45ldiful3@N00W24XTQX>
Message-ID:

Hello!

On Wed, Jul 13, 2022 at 08:12:06PM +0400, Roman Arutyunyan wrote:

> Hi,
>
> On Sun, Jul 10, 2022 at 11:35:48AM +0300, Maxim Dounin wrote:
> > Hello!
> >
> > On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wrote:
> >
> > > I’m having an nginx instance where I utilise the nginx slice
> > > module to slice upstream mp4 files when using proxy_cache.
> > >
> > > However, I have an interesting origin where if sending a range
> > > request (which happens when the slice module is enabled), to a
> > > file that’s less than the slice range, the origin returns a 200
> > > OK, but with the range related headers such as content-range,
> > > but obviously the full file is returned since it’s within the
> > > requested range.
> > >
> > > When playing the MP4s through Google Chrome and Firefox it works
> > > fine when going through the nginx proxy instance, however, it
> > > somehow breaks Safari (both on MacOS, and iOS) - I guess Safari
> > > is more strict.
> > > When playing directly through the origin it works fine in all
> > > browsers.
> > >
> > > The md5 of response from the origin remains the same, so it’s
> > > not that the response itself is an invalid MP4 file, and even if
> > > you compare the cache files on disk with a “working” origin and
> > > the “broken” origin (one sends a 206 Partial Content, another
> > > sends 200 OK) - the content of the cache files remain the same,
> > > except obviously the header section of the cache file.
> > >
> > > The origin returns a 206 status code, only if the file exceeds
> > > the slice size, so if I configure a slice size of 5 megabyte,
> > > only files above 5 megabytes will give 206s.
Anything under 5 > > > megabytes will result in a 200 OK with content-range and the > > > correct content-length, > > > > > > Looking in the slice module itself I see: > > > https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L116-L126 > > > > > > > > > if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) { > > > if (r == r->main) { > > > ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module); > > > return ngx_http_next_header_filter(r); > > > } > > > > > > ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > > > "unexpected status code %ui in slice response", > > > r->headers_out.status); > > > return NGX_ERROR; > > > } > > > > > > This seems like the slice module expects a 206 status code to be > > > returned, > > > > For the main request, the code accepts two basic valid variants: > > > > - 206, so the slice module will combine multiple responses to > > range requests as needed; > > > > - anything else, so the slice module will give up and simply > > return the response to the client. > > > > If the module sees a non-206 response to a subrequest, this is an > > error, as the slice module expects underlying resources to be > > immutable, and does not expect that some ranges can be requested, > > while some other aren't. This isn't something related to your > > case though. > > > > > however, later in the same function > > > https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L200-L211 > > > > > > > > > if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) { > > > if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) { > > > ctx->start = slcf->size > > > * (r->headers_out.content_offset / slcf->size); > > > } > > > > > > ctx->end = r->headers_out.content_offset > > > + r->headers_out.content_length_n; > > > > > > } else { > > > ctx->end = cr.complete_length; > > > } > > > > > > There it will do an else statement if the status code isn’t 206. > > > So would this piece of code ever be reached, since there’s the initial error? > > > > Following the initial check, r->headers_out.status is explicitly > > changed to NGX_HTTP_OK. Later on the > > ngx_http_next_header_filter() call might again change > > r->headers_out.status as long as the client used a range request, > > and this is what checked here. > > > > > Additionally I don’t see in RFC7233 that 206 responses are an > > > absolute requirement, additionally I don’t see content-range > > > being prohibited/forbidden to be used for 200 OK responses. > > > Now, if one have a secondary proxy that modifies the response > > > headers in between the origin returning 200 OK with the > > > Content-Range header, and then strip out the Content-Range > > > header, the nginx slice module seems to handle it fine, so > > > somehow the combination of 200 OK and a Content-Range header > > > being present seems to break the slice module from functioning. > > > > > > I’m just curious why this happens within the slice module, and > > > if there’s any possible solution for it (like allowing the > > > combination of 200 OK and Content-Range, since those two would > > > still indicate that the origin/upstream supports range requests) > > > - obviously it would be nice to fix the upstream server but > > > sometimes that’s sadly not possible. > > > > >From the above explanation it is probably already clear that > > "disabling slice when an origin returns 200 OK" is what actually > > happens. 
> > > > The issue does not appear without the slice module in your testing > > because the Content-Range header seems to be only present in your > > backend 200 responses when there was a Range header in the > > request, and this is what happens only with the slice module. > > > > I've done some limited testing with Safari and manually added > > Content-Range header, and there seem to be at least two issues: > > > > - Range filter in nginx does not expect the Content-Range header > > to be already present in 200 responses and simply adds another > > one. This results in incorrect range responses with multiple > > Content-Range headers, and this breaks Safari. > > > > - Safari also breaks if its test request with "Range: bytes=0-1" > > results in 200 with the Content-Range header. > > > > My initial fix was to simply disable handling of 200 responses > > with Content-Range headers in the range filter, so such responses > > wouldn't be touched at all. This is perfectly correct and > > probably the most secure thing to do, but does not work with > > Safari due to the second issue outlined above. > > > > Another approach would be to clear pre-existing Content-Range > > headers in the range filter. This seems to work, at least in my > > testing. See below for the patch. > > > > > I know the parts of the slice module haven’t been touched for > > > years, so obviously it works for most people, just dipping my > > > toes here to see if there’s a possible solution other than > > > disabling slice when an origin returns 200 OK for files smaller > > > than the slice size. > > > > Note that that slice module is generally unsafe to use for > > arbitrary upstream servers: it relies on expectations which are > > beyond the HTTP standard requirements. In particular: > > > > - It requires resources to be immutable, so different range > > responses can be combined together. > > > > - It does not try to handle edge cases, such as 416 returned by > > the upstream on empty files (which is correct per RFC, but > > requires complicated additional handling to convert 416 to 200, so > > it is better to just return 200 OK). > > > > In general, the slice module is to be used only in your own > > infrastructure when you control the backend and can be sure that > > the slice module expectations are met. As such, disabling it for > > backends which do something unexpected might actually be a good > > idea. On the other hand, in this particular case the nginx > > behaviour can be adjusted to handle things gracefully. > > > > Below is a patch to clear pre-existing Content-Range headers > > in the range filter. Please test if it works for you. > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1657439390 -10800 > > # Sun Jul 10 10:49:50 2022 +0300 > > # Node ID 219217ea49a8d648f5cadd046f1b1294ef05693c > > # Parent 9d98d524bd02a562d9cd83f4e369c7e992c5753b > > Range filter: clearing of pre-existing Content-Range headers. > > > > Some servers might emit Conten-Range header on 200 responses, and this > > Missing "t" in "Conten-Range". Fixed, thnx. > > does not seem to contradict RFC 9110: as per RFC 9110, the Content-Range > > header have no meaning for status codes other than 206 and 417. Previously > > have -> has > 417 -> 416 Fixed, thnx. > > this resulted in duplicate Content-Range headers in nginx responses handled > > by the range filter. Fix is to clear pre-existing headers. 
> > diff --git a/src/http/modules/ngx_http_range_filter_module.c b/src/http/modules/ngx_http_range_filter_module.c
> > --- a/src/http/modules/ngx_http_range_filter_module.c
> > +++ b/src/http/modules/ngx_http_range_filter_module.c
> > @@ -425,6 +425,10 @@ ngx_http_range_singlepart_header(ngx_htt
> >          return NGX_ERROR;
> >      }
> >
> > +    if (r->headers_out.content_range) {
> > +        r->headers_out.content_range->hash = 0;
> > +    }
> > +
> >      r->headers_out.content_range = content_range;
> >
> >      content_range->hash = 1;
> > @@ -582,6 +586,11 @@ ngx_http_range_multipart_header(ngx_http
> >          r->headers_out.content_length = NULL;
> >      }
> >
> > +    if (r->headers_out.content_range) {
> > +        r->headers_out.content_range->hash = 0;
> > +        r->headers_out.content_range = NULL;
> > +    }
> > +
> >      return ngx_http_next_header_filter(r);
> >  }
> >
> > @@ -598,6 +607,10 @@ ngx_http_range_not_satisfiable(ngx_http_
> >          return NGX_ERROR;
> >      }
> >
> > +    if (r->headers_out.content_range) {
> > +        r->headers_out.content_range->hash = 0;
> > +    }
> > +
> >      r->headers_out.content_range = content_range;
> >
> >      content_range->hash = 1;
>
> The patch looks ok to me
>
> Tested with proxy_force_ranges.

Pushed to http://mdounin.ru/hg/nginx/.

--
Maxim Dounin
http://mdounin.ru/

From mikydevel at yahoo.fr Sun Jul 17 22:08:49 2022
From: mikydevel at yahoo.fr (Mik J)
Date: Sun, 17 Jul 2022 22:08:49 +0000 (UTC)
Subject: 2 x Applications using the same domain behind a reverse proxy
References: <1490473369.1295398.1658095729329.ref@mail.yahoo.com>
Message-ID: <1490473369.1295398.1658095729329@mail.yahoo.com>

Hello,

I don't manage to make my thing works although it's probably a classic for Nginx users.

I have a domain https://example.org

What I want is this:
https://example.org goes on reverse proxy => server1 (10.10.10.10) to the application /var/www/htdocs/app1
https://example.org/app2 goes on reverse proxy => server1 (10.10.10.10) to the application /var/www/htdocs/app2
So in the latter case the user adds /app2 and the flow is redirected to the /var/www/htdocs/app2 directory

First the reverse proxy, I wrote this

    ##
    # App1
    ##
    location / {
        proxy_pass              http://10.10.10.10:80;
        proxy_redirect          off;
        proxy_set_header        Host                    $http_host;
        proxy_set_header        X-Real-IP               $remote_addr;
        proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;
        proxy_set_header        Referer                 "http://example.org";
        #proxy_set_header       Upgrade                 $http_upgrade;
        #proxy_pass_header      Set-Cookie;
    }

    ##
    # App2
    ##
    location /app2 {
        proxy_pass              http://10.10.10.10:80;
        proxy_redirect          off;
        proxy_set_header        Host                    $http_host;
        proxy_set_header        X-Real-IP               $remote_addr;
        proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;
        proxy_set_header        Referer                 "http://example.org";
        #proxy_set_header       Upgrade                 $http_upgrade;
        #proxy_pass_header      Set-Cookie;
    }

Second the back end server

server {
        listen 80;
        server_name example.org;
        index index.html index.php;
        root /var/www/htdocs/app1;

        access_log /var/log/nginx/example.org.access.log;
        error_log /var/log/nginx/example.org.error.log;

        location / {
          try_files $uri $uri/ /index.php$is_args$args;

          location ~ \.php$ {
              root              /var/www/htdocs/app1;
              fastcgi_pass      unix:/run/php-fpm.app1.sock;
              fastcgi_read_timeout 700;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_index     index.php;
              fastcgi_param     SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include           fastcgi_params;
          }
        }

        location /app2 {
          try_files $uri $uri/ /index.php$is_args$args;

          location ~ \.php$ {
              root              /var/www/htdocs/app2;
              fastcgi_pass      unix:/run/php-fpm.app1.sock;
              fastcgi_read_timeout 700;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_index     index.php;
              fastcgi_param     SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include           fastcgi_params;
          }
        }
}

The result I have right now is that I can access app1 with http://example.org, but I cannot access app2 with http://example.org/app2

Also what is the best practice on the backend server:
- should I make one single virtual host with two location statements like I did or 2 virtual hosts with a fake name like internal.app1.example.org and internal.app2.example.org ?
- can I mutualise the location ~ \.php$ between the two ?
- Should I copy access_log and error_log in the location /app2 statement ?

By the way, app1 and app2 are the same application/program but sometimes I want another instance or test app version 1, app version 2 etc.

What I tend to do in the past is to have
app1.example.org
app2.example.org
The problem is that it makes me use multiple certificates.
Here I want to group all the applications behind one domain name example.org with one certificate and then access different applications with example.org/app1, example.org/app2

Thank you
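One note on the configuration above, as a hedged aside rather than a tested fix: with "root", nginx appends the full request URI to the path, so inside "location /app2" a request for /app2/index.php is looked up as /var/www/htdocs/app2/app2/index.php; "alias" substitutes the matched prefix instead. A minimal sketch using the paths from the question (the reply that follows works through the same root-per-location idea):

location /app2/ {
    # root would map /app2/x to /var/www/htdocs/app2/app2/x;
    # alias maps /app2/x to /var/www/htdocs/app2/x instead.
    alias /var/www/htdocs/app2/;
    try_files $uri $uri/ /app2/index.php$is_args$args;
}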
Is there a way to keep the original > root owner of the log file? Log files owned by root generally cannot be open by worker processes for writing. To make sure worker processes can reopen the log files, master process chowns them and ensures appropriate permissions for the owner. Unless you are willing to run nginx worker processes under root (which is unwise), there is no way to preserve the root as the owner of log files during fast log rotation. If for some reason you must keep root as the owner of log files, using reconfiguration instead of log rotation might work. Obviously enough, this isn't a good solution either. A better solution for reopening log files would be to implement file descriptor passing on systems which support it, see https://trac.nginx.org/nginx/ticket/376. So far attempts to implement this did not result in a reasonably reliable code. > The "systemctl reload nginx" is capable of creating a new log file > with the original root owner, but I think this isn't a clever > solution. More importantly, this won't work. By pre-creating log files you can fine-control permissions on the files, but during log rotation nginx will change the owner anyway. -- Maxim Dounin http://mdounin.ru/ From hobson42 at gmail.com Tue Jul 19 04:09:09 2022 From: hobson42 at gmail.com (Ian Hobson) Date: Tue, 19 Jul 2022 11:09:09 +0700 Subject: 2 x Applications using the same domain behind a reverse proxy In-Reply-To: <1490473369.1295398.1658095729329@mail.yahoo.com> References: <1490473369.1295398.1658095729329.ref@mail.yahoo.com> <1490473369.1295398.1658095729329@mail.yahoo.com> Message-ID: <9f94d34c-c58b-2333-de6d-cb482ea38601@gmail.com> Hi Mik, I think the problem is that your back end cannot distinguish app1 from app2. I don't think there is a need for proxy-pass, unless it is to spread the load. I would try the following approach: Change the root within location / and location /app2 and serve static files directly. When you pass the .php files, the different roots will appear in the $document_root location, so you can share the php instance. It will be MUCH more efficient if you use fast-cgi because it removes a process create from every php serve. Finally, you need to protect against sneaks who try to execute code, by adding a try_files thus... location ~ \.php$ { try_files $uri =450; include /etc/nginx/fastcgi.conf; fastcgi_split_path_info ^(.+\.php)(/.+)$; etc. Hope this helps. Ian On 18/07/2022 05:08, Mik J via nginx wrote: > Hello, > > I don't manage to make my thing works although it's probably a classic > for Nginx users. 
> > I have a domain https://example.org > > What I want is this > https://example.org goes on reverse proxy => server1 (10.10.10.10) to > the application /var/www/htdocs/app1 > https://example.org/app2 goes on reverse proxy => server1 (10.10.10.10) > to the application /var/www/htdocs/app2 > So in the latter case the user adds /app2 and the flow is redirected to > the /var/www/htdocs/app2 directory > > First the reverse proxy, I wrote this >     ## >     # App1 >     ## >      location / { >         proxy_pass              http://10.10.10.10:80; >         proxy_redirect          off; >         proxy_set_header        Host                    $http_host; >         proxy_set_header        X-Real-IP               $remote_addr; >         proxy_set_header        X-Forwarded-For > $proxy_add_x_forwarded_for; >         proxy_set_header        Referer > "http://example.org"; >         #proxy_set_header       Upgrade                 $http_upgrade; >         #proxy_pass_header      Set-Cookie; >      } >     ## >     # App2 >     ## >      location /app2 { >         proxy_pass              http://10.10.10.10:80; >         proxy_redirect          off; >         proxy_set_header        Host                    $http_host; >         proxy_set_header        X-Real-IP               $remote_addr; >         proxy_set_header        X-Forwarded-For > $proxy_add_x_forwarded_for; >         proxy_set_header        Referer > "http://example.org"; >         #proxy_set_header       Upgrade                 $http_upgrade; >         #proxy_pass_header      Set-Cookie; >      } > > > Second the back end server > server { >         listen 80; >         server_name example.org; >         index index.html index.php; >         root /var/www/htdocs/app1; > >         access_log /var/log/nginx/example.org.access.log; >         error_log /var/log/nginx/example.org.error.log; > >         location / { >           try_files $uri $uri/ /index.php$is_args$args; > >           location ~ \.php$ { >               root              /var/www/htdocs/app1; >               fastcgi_pass      unix:/run/php-fpm.app1.sock; >               fastcgi_read_timeout 700; >               fastcgi_split_path_info ^(.+\.php)(/.+)$; >               fastcgi_index     index.php; >               fastcgi_param     SCRIPT_FILENAME > $document_root$fastcgi_script_name; >               include           fastcgi_params; >           } >         } > >         location /app2 { >           try_files $uri $uri/ /index.php$is_args$args; > >           location ~ \.php$ { >               root              /var/www/htdocs/app2; >               fastcgi_pass      unix:/run/php-fpm.app1.sock; >               fastcgi_read_timeout 700; >               fastcgi_split_path_info ^(.+\.php)(/.+)$; >               fastcgi_index     index.php; >               fastcgi_param     SCRIPT_FILENAME > $document_root$fastcgi_script_name; >               include           fastcgi_params; >           } >         } > } > > The result I have right now is that I can access app1 with > http://example.org, but i cannot access app2 with http://example.org/app2 > > Also what is the best practice on the backend server: > - should I make one single virtual host with two location statements > like I did or 2 virtual hosts with a fake name like > internal.app1.example.org and internal.app2.example.org ? > - can I mutualise the location ~ \.php$ between the two ? > - Should I copy access_log and error_log in the location /app2 statement ? 
>
> By the way, app1 and app2 are the same application/program but sometimes
> I want another instance or test app version 1, app version 2 etc.
>
> What I tend to do in the past is to have
> app1.example.org
> app2.example.org
> The problem is that it makes me use multiple certificates.
> Here I want to group all the applications behind one domain name
> example.org with one certificate and then access different applications
> with example.org/app1, example.org/app2
>
> Thank you

--
Ian Hobson
Tel (+66) 626 544 695

From mikydevel at yahoo.fr Tue Jul 19 14:31:29 2022
From: mikydevel at yahoo.fr (Mik J)
Date: Tue, 19 Jul 2022 14:31:29 +0000 (UTC)
Subject: 2 x Applications using the same domain behind a reverse proxy
In-Reply-To: <9f94d34c-c58b-2333-de6d-cb482ea38601@gmail.com>
References: <1490473369.1295398.1658095729329.ref@mail.yahoo.com> <1490473369.1295398.1658095729329@mail.yahoo.com> <9f94d34c-c58b-2333-de6d-cb482ea38601@gmail.com>
Message-ID: <290877550.1937538.1658241089604@mail.yahoo.com>

Hello Ian,

Thank you for your answer. I did what you told me.

Now I have on my reverse proxy

    location / {
        proxy_pass              http://10.10.10.10:80;
        proxy_redirect          off;
        proxy_set_header        Host                    $http_host;
        proxy_set_header        X-Real-IP               $remote_addr;
#       proxy_set_header        X-Real-IP               $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;
        proxy_set_header        Referer                 "http://example.org";
        #proxy_set_header       Upgrade                 $http_upgrade;
        #proxy_pass_header      Set-Cookie;
    }

And on the backend server

server {
        listen 80;
        server_name example.org;
        index index.html index.php;
        root /var/www/htdocs/app1;

        access_log /var/log/nginx/example.org.access.log;
        error_log /var/log/nginx/example.org.error.log;

        location / {
            try_files $uri $uri/ /index.php$is_args$args;
            root /var/www/htdocs/app1;
        }

        location /app2 {
            try_files $uri $uri/ /index.php$is_args$args;
            root /var/www/htdocs/app2;
        }

        location ~ \.php$ {
            try_files         $uri    =450;
            fastcgi_pass      unix:/run/php-fpm.app1.sock;
            fastcgi_read_timeout 700;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_index     index.php;
            fastcgi_param     SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include           fastcgi_params;
        }
}

Access to example.org leads me to app1, so it works as expected. Access to example.org/app2 doesn't lead me to app2.

It seems to me that the following line

proxy_set_header        Referer                 "http://example.org";

on the reverse proxy could be causing confusion? I can see that example.org/app2 still lands on /var/www/htdocs/app1

Regards

On Tuesday, July 19, 2022 at 06:10:28 UTC+2, Ian Hobson wrote:

Hi Mik,

I think the problem is that your back end cannot distinguish app1 from app2.

I don't think there is a need for proxy-pass, unless it is to spread the load.
I would try the following approach: Change the root within location / and location /app2 and serve static files directly. When you pass the .php files, the different roots will appear in the $document_root location, so you can share the php instance.

It will be MUCH more efficient if you use fast-cgi because it removes a process creation from every PHP request.

Finally, you need to protect against sneaks who try to execute code, by adding a try_files thus...

location ~ \.php$ {
    try_files $uri =450;
    include /etc/nginx/fastcgi.conf;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
        etc.

Hope this helps.

Ian

On 18/07/2022 05:08, Mik J via nginx wrote:
> Hello,
>
> I don't manage to make my thing works although it's probably a classic
> for Nginx users.
>
> I have a domain https://example.org
>
> What I want is this:
> https://example.org goes on reverse proxy => server1 (10.10.10.10) to
> the application /var/www/htdocs/app1
> https://example.org/app2 goes on reverse proxy => server1 (10.10.10.10)
> to the application /var/www/htdocs/app2
> So in the latter case the user adds /app2 and the flow is redirected to
> the /var/www/htdocs/app2 directory
>
> First the reverse proxy, I wrote this
>      ##
>      # App1
>      ##
>      location / {
>          proxy_pass              http://10.10.10.10:80;
>          proxy_redirect          off;
>          proxy_set_header        Host                    $http_host;
>          proxy_set_header        X-Real-IP               $remote_addr;
>          proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;
>          proxy_set_header        Referer                 "http://example.org";
>          #proxy_set_header       Upgrade                 $http_upgrade;
>          #proxy_pass_header      Set-Cookie;
>      }
>      ##
>      # App2
>      ##
>      location /app2 {
>          proxy_pass              http://10.10.10.10:80;
>          proxy_redirect          off;
>          proxy_set_header        Host                    $http_host;
>          proxy_set_header        X-Real-IP               $remote_addr;
>          proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;
>          proxy_set_header        Referer                 "http://example.org";
>          #proxy_set_header       Upgrade                 $http_upgrade;
>          #proxy_pass_header      Set-Cookie;
>      }
>
> Second the back end server
> server {
>          listen 80;
>          server_name example.org;
>          index index.html index.php;
>          root /var/www/htdocs/app1;
>
>          access_log /var/log/nginx/example.org.access.log;
>          error_log /var/log/nginx/example.org.error.log;
>
>          location / {
>            try_files $uri $uri/ /index.php$is_args$args;
>
>            location ~ \.php$ {
>                root              /var/www/htdocs/app1;
>                fastcgi_pass      unix:/run/php-fpm.app1.sock;
>                fastcgi_read_timeout 700;
>                fastcgi_split_path_info ^(.+\.php)(/.+)$;
>                fastcgi_index     index.php;
>                fastcgi_param     SCRIPT_FILENAME $document_root$fastcgi_script_name;
>                include           fastcgi_params;
>            }
>          }
>
>          location /app2 {
>            try_files $uri $uri/ /index.php$is_args$args;
>
>            location ~ \.php$ {
>                root              /var/www/htdocs/app2;
>                fastcgi_pass      unix:/run/php-fpm.app1.sock;
>                fastcgi_read_timeout 700;
>                fastcgi_split_path_info ^(.+\.php)(/.+)$;
>                fastcgi_index     index.php;
>                fastcgi_param     SCRIPT_FILENAME $document_root$fastcgi_script_name;
>                include           fastcgi_params;
>            }
>          }
> }
>
> The result I have right now is that I can access app1 with
> http://example.org, but I cannot access app2 with http://example.org/app2
>
> Also what is the best practice on the backend server:
> - should I make one single virtual host with two location statements
> like I did or 2 virtual hosts with a fake name like
> internal.app1.example.org and internal.app2.example.org ?
> - can I mutualise the location ~ \.php$ between the two ?
> - Should I copy access_log and error_log in the location /app2 statement ?
>
> By the way, app1 and app2 are the same application/program but sometimes
> I want another instance or test app version 1, app version 2 etc.
>
> What I tend to do in the past is to have
> app1.example.org
> app2.example.org
> The problem is that it makes me use multiple certificates.
> Here I want to group all the applications behind one domain name
> example.org with one certificate and then access different applications
> with example.org/app1, example.org/app2
>
> Thank you

--
Ian Hobson
Tel (+66) 626 544 695

_______________________________________________
nginx mailing list -- nginx at nginx.org
To unsubscribe send an email to nginx-leave at nginx.org

From mdounin at mdounin.ru Tue Jul 19 15:06:33 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 19 Jul 2022 18:06:33 +0300
Subject: nginx-1.23.1
Message-ID:

Changes with nginx 1.23.1                                        19 Jul 2022

    *) Feature: memory usage optimization in configurations with SSL
       proxying.

    *) Feature: looking up of IPv4 addresses while resolving now can be
       disabled with the "ipv4=off" parameter of the "resolver" directive.

    *) Change: the logging level of the "bad key share", "bad extension",
       "bad cipher", and "bad ecpoint" SSL errors has been lowered from
       "crit" to "info".

    *) Bugfix: while returning byte ranges nginx did not remove the
       "Content-Range" header line if it was present in the original
       backend response.

    *) Bugfix: a proxied response might be truncated during reconfiguration
       on Linux; the bug had appeared in 1.17.5.

--
Maxim Dounin
http://nginx.org/

From xeioex at nginx.com Tue Jul 19 16:30:09 2022
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 19 Jul 2022 09:30:09 -0700
Subject: njs-0.7.6
Message-ID: <3da89750-d0f1-12ff-867e-1ea889eefffe@nginx.com>

Hello,

I'm glad to announce a new release of NGINX JavaScript module (njs).

Notable new features:

- improved r.args: Now, duplicate keys are returned as an array, keys are case-sensitive, both keys and values are percent-decoded.
For example, the query string 'a=1&b=%32&A=3&b=4&B=two%20words' is converted to r.args as: {a: "1", b: ["2", "4"], A: "3", B: "two words"}

Learn more about njs:

- Overview and introduction: https://nginx.org/en/docs/njs/
- NGINX JavaScript in Your Web Server Configuration: https://youtu.be/Jc_L6UffFOs
- Extending NGINX with Custom Code: https://youtu.be/0CVhq4AUU7M
- Using node modules with njs: https://nginx.org/en/docs/njs/node_modules.html
- Writing njs code using TypeScript definition files: https://nginx.org/en/docs/njs/typescript.html

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: https://mailman.nginx.org/mailman/listinfo/nginx-devel

Additional examples and howtos can be found here:

- Github: https://github.com/nginx/njs-examples

Changes with njs 0.7.6                                           19 Jul 2022

    nginx modules:

    *) Feature: improved r.args object. Added support for multiple
       arguments with the same key. Added case sensitivity for keys.
       Keys and values are percent-decoded now.

    *) Bugfix: fixed r.headersOut setter for special headers.

    Core:

    *) Feature: added Symbol.for() and Symbol.keyFor().

    *) Feature: added btoa() and atob() from WHATWG spec.

    *) Bugfix: fixed large non-decimal literals.

    *) Bugfix: fixed unicode argument trimming in parseInt().

    *) Bugfix: fixed break instruction in a try-catch block.

    *) Bugfix: fixed async function declaration in CLI.

From fusca14 at gmail.com Tue Jul 19 21:59:32 2022
From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho)
Date: Tue, 19 Jul 2022 18:59:32 -0300
Subject: Question about rotating log files with USR1 signal
In-Reply-To:
References:
Message-ID:

Thanks! This ticket https://trac.nginx.org/nginx/ticket/376 is exactly my doubt. You helped me a lot.

On Tue, Jul 19, 2022 at 12:17 AM Maxim Dounin wrote:
>
> Hello!
>
> On Mon, Jul 18, 2022 at 01:37:47PM -0300, Fabiano Furtado Pessoa Coelho wrote:
>
> > As described in the official documentation
> > http://nginx.org/en/docs/control.html#logs "The master process will
> > then re-open all currently open log files and assign them an
> > unprivileged user under which the worker processes are running, as an
> > owner.", the owner of the log file changes after the USR1 signal is
> > sent to NGINX master process.
> >
> > Why does this behavior happen? Is there a way to keep the original
> > root owner of the log file?
>
> Log files owned by root generally cannot be opened by worker
> processes for writing. To make sure worker processes can reopen
> the log files, the master process chowns them and ensures
> appropriate permissions for the owner.
>
> Unless you are willing to run nginx worker processes under root
> (which is unwise), there is no way to preserve root as the
> owner of log files during fast log rotation.
>
> If for some reason you must keep root as the owner of log files,
> using reconfiguration instead of log rotation might work.
> Obviously enough, this isn't a good solution either.
>
> A better solution for reopening log files would be to implement
> file descriptor passing on systems which support it, see
> https://trac.nginx.org/nginx/ticket/376. So far attempts to
> implement this did not result in reasonably reliable code.
>
> > The "systemctl reload nginx" is capable of creating a new log file
> > with the original root owner, but I think this isn't a clever
> > solution.
>
> More importantly, this won't work. By pre-creating log files you
> can fine-control permissions on the files, but during log rotation
> nginx will change the owner anyway.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list -- nginx at nginx.org
> To unsubscribe send an email to nginx-leave at nginx.org

From venefax at gmail.com Wed Jul 20 01:51:24 2022
From: venefax at gmail.com (Saint Michael)
Date: Tue, 19 Jul 2022 21:51:24 -0400
Subject: Reverse proxy forcing language in cookies
Message-ID:

I was asked to proxy google.com through
https://ГУГЛЭ.pl
but I need to make Google believe that clients are behind a computer with the Russian language, not English.

Now I have this:

proxy_cookie_domain https://google.com https://xn--c1aay4a4c.pl;

(xn--c1aay4a4c is the Latin representation of ГУГЛЭ)

Is there a workaround for this?

From nginx-forum at forum.nginx.org Wed Jul 20 14:19:28 2022
From: nginx-forum at forum.nginx.org (strtwtsn)
Date: Wed, 20 Jul 2022 10:19:28 -0400
Subject: Problem with basic auth on nuxtjs frontend with wordpress backend
Message-ID:

I'm trying to add basic authentication to an nginx reverse proxy which is in front of a nuxtjs app.

I've configured nginx as such:

server {
    server_name ;

    auth_basic "Restricted Content";
    auth_basic_user_file /etc/nginx/.htpasswd;

    gzip on;
    gzip_types text/plain application/xml text/css application/javascript;
    gzip_min_length 1000;

    location / {
        proxy_pass http://127.0.0.1:3222;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl; # managed by Certbot

But it hangs. I've also tried it in the location section, but this hangs too. What am I missing? The .htpasswd file exists with the correct details in it.

Have also tried changing

upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}

to something similar to this, but still no luck.

Remove basic auth and everything works fine. I type in the username and password, and it just sits there spinning. I've tried Chrome dev tools, and nothing actually appears in the network page. Eventually I'll get error 504 timeout.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294797,294797#msg-294797

From jstaylor at xmission.com Fri Jul 22 01:58:51 2022
From: jstaylor at xmission.com (Jim Taylor)
Date: Thu, 21 Jul 2022 19:58:51 -0600
Subject: update failure
Message-ID: <7dd3bc77-c32a-9fde-a489-0285a5e35245@xmission.com>

Installed nginx a couple of days ago, everything appeared to go as it should. This evening I was going to add some programs, and was stopped cold at my first command. What do I need to do to go forward?

Thanks for your help!

Jim Taylor

root at D-00:~# apt-get update
Hit:1 http://security.debian.org/debian-security bullseye-security InRelease
Hit:2 http://deb.debian.org/debian bullseye InRelease
Hit:3 http://deb.debian.org/debian bullseye-updates InRelease
Ign:4 http://nginx.org/packages/debian 'lsb-release InRelease
Err:5 http://nginx.org/packages/debian 'lsb-release Release
  404  Not Found [IP: 52.58.199.22 80]
Hit:6 http://archive.raspberrypi.org/debian bullseye InRelease
Reading package lists... Done
E: The repository 'http://nginx.org/packages/debian 'lsb-release Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
root at D-00:~#

update failure

From francis at daoine.org Fri Jul 22 07:47:10 2022
From: francis at daoine.org (Francis Daly)
Date: Fri, 22 Jul 2022 08:47:10 +0100
Subject: update failure
In-Reply-To: <7dd3bc77-c32a-9fde-a489-0285a5e35245@xmission.com>
References: <7dd3bc77-c32a-9fde-a489-0285a5e35245@xmission.com>
Message-ID: <20220722074710.GJ14648@daoine.org>

On Thu, Jul 21, 2022 at 07:58:51PM -0600, Jim Taylor wrote:

Hi there,

> Installed nginx a couple of days ago, everything appeared to go as it
> should. This evening I was going to add some programs, and was stopped cold
> at my first command. What do I need to do to go forward?

This is more a "debian" question than an "nginx" question, but my best guess is:

> root at D-00:~# apt-get update
> Hit:1 http://security.debian.org/debian-security bullseye-security InRelease
> Hit:2 http://deb.debian.org/debian bullseye InRelease
> Hit:3 http://deb.debian.org/debian bullseye-updates InRelease
> Ign:4 http://nginx.org/packages/debian 'lsb-release InRelease

Wherever your list of sources is configured (possibly /etc/apt/sources.list?) has the string "'lsb-release" (with a leading single quote) where it should probably have the string "bullseye".

I wonder... did you follow the installation instructions at http://nginx.org/en/linux_packages.html#Debian?

If so, perhaps there was a typo when writing the line

    http://nginx.org/packages/debian `lsb_release -cs` nginx" \

and "'" was used instead of "`"? (That's not exactly right, because of the _/- difference.)

If that is what happened, then: edit the file /etc/apt/sources.list.d/nginx.list as root and change the line that has

    'lsb-release

to end in

    /debian bullseye nginx

and then repeat the apt-get update.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From francis at daoine.org Fri Jul 22 07:55:53 2022
From: francis at daoine.org (Francis Daly)
Date: Fri, 22 Jul 2022 08:55:53 +0100
Subject: Reverse proxy forcing language in cookies
In-Reply-To:
References:
Message-ID: <20220722075553.GK14648@daoine.org>

On Tue, Jul 19, 2022 at 09:51:24PM -0400, Saint Michael wrote:

Hi there,

> I was asked to proxy google.com through
> https://ГУГЛЭ.pl
> but I need to make Google believe that clients are behind a computer
> with the Russian language, not English.

The question of "what do I include in a request to invite Google to respond in the Russian language" is probably best asked elsewhere. (Because (I guess) there are more likely to be people who know the answer, in a different group.)

Once you have the answer -- specific http headers, specific headers with specific values, maybe something else -- then you can start to configure your nginx to include those things in the requests that it makes to its upstream.

I guess it will involve proxy_set_header, but I do not know what your upstream requirements are. (Typically, you reverse-proxy to a thing that you control, so that you can know in advance if those requirements will change.)

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From francis at daoine.org Fri Jul 22 08:06:02 2022
From: francis at daoine.org (Francis Daly)
Date: Fri, 22 Jul 2022 09:06:02 +0100
Subject: Problem with basic auth on nuxtjs frontend with wordpress backend
In-Reply-To:
References:
Message-ID: <20220722080602.GL14648@daoine.org>

On Wed, Jul 20, 2022 at 10:19:28AM -0400, strtwtsn wrote:

Hi there,

> I'm trying to add basic authentication to an nginx reverse proxy which is in
> front of a nuxtjs app.

> But it hangs.
I've also tried it in the location section, but this hangs
> too. What am I missing?

What does "it hangs" mean?

As in:

* what request do you make? (ideally using something like "curl", to avoid any extra-browser complications)

* what response do you get?

* what response do you want instead?

And possibly:

* what do the nginx logs (access and error) say about this request?

From your config, a request of the form

    curl -v https://your-server/TESTING

should return information about the SSL negotiation; and after that succeeds, should return a http 401. And then

    curl -v --user your-name https://your-server/TESTING

should have curl ask you for the password, and then should give a different response when using a wrong password and when using the correct password.

If this happens:

> nothing actually appears in the network page

after you have submitted credentials, then something has gone wrong on the client side. When you hit "submit" or "go", the client should make a network request and the network page should show that request.

Cheers,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Fri Jul 22 09:41:15 2022
From: nginx-forum at forum.nginx.org (sipopo)
Date: Fri, 22 Jul 2022 05:41:15 -0400
Subject: 400 Bad request (spaces in request)
Message-ID: <1ca62c335105b62f3fb2750f3c56e8fb.NginxMailingListEnglish@forum.nginx.org>

Hello, nginx 1.21.1 started returning a 400 error if there are spaces in the request. But I have old clients which need support. Maybe anyone knows a workaround?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294820,294820#msg-294820

From francis at daoine.org Fri Jul 22 10:54:46 2022
From: francis at daoine.org (Francis Daly)
Date: Fri, 22 Jul 2022 11:54:46 +0100
Subject: 400 Bad request (spaces in request)
In-Reply-To: <1ca62c335105b62f3fb2750f3c56e8fb.NginxMailingListEnglish@forum.nginx.org>
References: <1ca62c335105b62f3fb2750f3c56e8fb.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20220722105446.GM14648@daoine.org>

On Fri, Jul 22, 2022 at 05:41:15AM -0400, sipopo wrote:

Hi there,

> nginx 1.21.1 started returning a 400 error if there are spaces in the
> request. But I have old clients which need support. Maybe anyone knows a
> workaround?

spaces in urls have always been incorrect.

Early nginx rejected them as broken input; middle nginx was changed to allow most (but not all) spaces, to give broken clients a chance to become fixed clients (which in turn led to problem reports of the form "nginx accepts space G in a url, but rejects space H"); new nginx rejects them again. The change log lists the change as having happened in 1.21.1.

It appears that the "become fixed clients" part did not happen.

So for your use case for right now -- change back to something earlier than 1.21.1.

Once that is working as much as it did previously, you have some time in which you can choose between (as I see it):

* fixing your old clients (or links? It might depend how the broken urls are created in the first place.)

* staying on the older nginx

* carrying your own patch to your newer nginx to handle spaces in the way that you prefer

* getting a patch to allow a configuration choice on what to do with spaces committed to stock nginx [+]

* using something other than nginx

[+] There is a reason why 1.21.1 rejected spaces. You will likely need to convince someone that the benefits of having an option to change back to the known-broken behaviour exceed the costs to them of doing that.
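For reference, and as an illustration of the "fixing your old clients" option rather than anything nginx-specific: the standard fix is for the client to percent-encode the space, per RFC 3986. A request line like

    GET /some file.txt HTTP/1.1

is invalid, while

    GET /some%20file.txt HTTP/1.1

is accepted by old and new nginx alike.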
Good luck with it, f -- Francis Daly francis at daoine.org From jstaylor at xmission.com Sat Jul 23 05:41:25 2022 From: jstaylor at xmission.com (Jim Taylor) Date: Fri, 22 Jul 2022 23:41:25 -0600 Subject: Thanks for your help, Francis Daly Message-ID: Thank you for helping me unmake a mess! I had mistyped " ' " for " ` ". To be honest, in 60 years in this business I had never typed " ` " on purpose before, so I typed single quotes around "lsb_release -cs". Now everything seems to have run correctly, but ... When I do an apt-get update, the line after the 5 'hit' messages and 'Reading Package Lists' says "N: Skipping acquire of configured file 'nginx/binary-arnhf/Packages' because the repository doesn't support armhf (or something like that)" Is this normal? Do I need to do another do over? Thanks again for your help. Now I can build a configuration file and get my website back on the air. From francis at daoine.org Sat Jul 23 08:21:55 2022 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Jul 2022 09:21:55 +0100 Subject: Thanks for your help, Francis Daly In-Reply-To: References: Message-ID: <20220723082155.GN14648@daoine.org> On Fri, Jul 22, 2022 at 11:41:25PM -0600, Jim Taylor wrote: Hi there, > Thank you for helping me unmake a mess! You're welcome. > I had mistyped " ' " for " ` ". To be honest, in 60 years in this business > I had never typed " ` " on purpose before, so I typed single quotes around > "lsb_release -cs". There's a whole keyboard full of characters there; why leave the edge ones out? ;-) (Although the `backtick is often awkward to type, because it is often a "dead key" which only shows on-screen after the subsequent keypress.) When reproducing commands or error messages, copy-paste is usually a good option. Although that can go wrong when something decides to auto-convert plain quotes to something prettier in typography, so there is no one good answer. > Now everything seems to have run correctly, but ... > > When I do an apt-get update, the line after the 5 'hit' messages and > 'Reading Package Lists' says > > "N: Skipping acquire of configured file 'nginx/binary-arnhf/Packages' > because the repository doesn't support armhf (or something like that)" In this case, it sounds like you may be running one of the Debian-derived raspberry pi OS's; possibly the one for Raspberry Pi 2 which uses the 32-bit "armhf" architecture. Debian provides binaries built for that architecture; RaspberryPi provides binaries built for that architecture; Nginx does not provide binaries built for that architecture. If that is the case, then the quick-and-easy option is for you to remove the nginx repository from your system config -- which is "remove or #-comment the line that you recently edited". Then the next time you run "apt-get update", it will only use the other configured sources. That means that you will continue to get whichever nginx version is provided by the other sources, and you won't get the errors about skipping armhf. If you do want to run "the latest" nginx version on your system, then you will need to have a binary built for your system. 
That could be any of (in no particular order): * install an "arm64" version of Debian -- nginx does provide binaries built for that architecture * build an "armhf" binary of nginx for yourself whenever you want to update * see if someone else has built an "armhf" binary of nginx that you are happy to use * encourage someone else to build an "armhf" binary of nginx for you "Simplest to use right now" is probably "stick with the Debian version" -- you won't get the new features of later nginx versions, but Debian will (try to) incorporate any security-related fixes and issue a new build then. "More educational in your Copious Free Time(TM)" is probably to build an nginx binary for yourself -- either as a "normal" binary build, or as a package suitable for your current system -- and then build-and-replace whenever there is an interesting update to the nginx source code. And, depending on the hardware that you have and what other things you want to run on it, possibly "simplest to support for the future" could be "re-install the operating system as the arm64 version". > Is this normal? Do I need to do another do over? It is an informational message which basically says "now that I look, I'm not using anything from that source this time"; so having that source listed does no harm, but removing that source will mean that it won't try to look the next time. > Thanks again for your help. Now I can build a configuration file and get my website back on the air. Cheers; and good luck with it, f -- Francis Daly francis at daoine.org From mikydevel at yahoo.fr Sat Jul 23 10:03:12 2022 From: mikydevel at yahoo.fr (Mik J) Date: Sat, 23 Jul 2022 10:03:12 +0000 (UTC) Subject: Php page returns 450 References: <1289422337.1625705.1658570592195.ref@mail.yahoo.com> Message-ID: <1289422337.1625705.1658570592195@mail.yahoo.com> Hello, I use an application named Cacti and everything works well except the logout.php page. So when I try to access https://example.org/index.php or https://example.org/graph_view.php it works, the http code is 200. But when I access the logout.php page, a 404 page is returned: GET /logout.php HTTP/2.0 For php pages I use this   location ~ \.php$ {             try_files           $uri =450;             fastcgi_pass        unix:/run/php-fpm.cacti.sock;             fastcgi_split_path_info ^(.+\.php)(/.+)$;             fastcgi_index       index.php;             fastcgi_param       SCRIPT_FILENAME $document_root$fastcgi_script_name;             include             fastcgi_params;             limit_except        GET HEAD POST { deny all; }    } So I would expect a 450 code. If I add this line location = /logout.php { return 405; } before that stanza, a 405 code is returned   location = /logout.php { return 405; }    location ~ \.php$ {             try_files           $uri =450;             fastcgi_pass        unix:/run/php-fpm.cacti.sock;             fastcgi_split_path_info ^(.+\.php)(/.+)$;             fastcgi_index       index.php;             fastcgi_param       SCRIPT_FILENAME $document_root$fastcgi_script_name;             include             fastcgi_params;             limit_except        GET HEAD POST { deny all; }    } So it matches my location. My location ~ \.php$ { doesn't seem to match when the logout.php page is accessed and I don't understand why. Do you have any advice? Thank you -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sat Jul 23 13:57:41 2022 From: francis at daoine.org (Francis Daly) Date: Sat, 23 Jul 2022 14:57:41 +0100 Subject: Thanks for your help, Francis Daly In-Reply-To: <20220723082155.GN14648@daoine.org> References: <20220723082155.GN14648@daoine.org> Message-ID: <20220723135741.GO14648@daoine.org> On Sat, Jul 23, 2022 at 09:21:55AM +0100, Francis Daly wrote: > On Fri, Jul 22, 2022 at 11:41:25PM -0600, Jim Taylor wrote: Hi there, one update / possible correction: > > "N: Skipping acquire of configured file 'nginx/binary-arnhf/Packages' > > because the repository doesn't support armhf (or something like that)" > > In this case, it sounds like you may be running one of the Debian-derived > raspberry pi OS's; possibly the one for Raspberry Pi 2 which uses the > 32-bit "armhf" architecture. Debian provides binaries built for that > architecture; RaspberryPi provides binaries built for that architecture; > Nginx does not provide binaries built for that architecture. Web content like https://discourse.osmc.tv/t/rpi-4-architecture-armhf-instead-of-arm64/90382 makes it look like maybe your current system could be multi-architecture, and perhaps you can configure your sources.list to look for the arm64 variant explicitly? You'll probably want to check your-OS-specific documentation; but it might be the case that you can use the nginx repository without a reinstall. Of course, if what you have right now works, it is zero extra effort to keep it working as-is. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Jul 23 16:55:45 2022 From: nginx-forum at forum.nginx.org (jhonnyrobson) Date: Sat, 23 Jul 2022 12:55:45 -0400 Subject: Wordpress RSS pages dont work with Nginx In-Reply-To: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> References: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: I'm having the same problem, but the Feed doesn't work. Any solution? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,238692,294837#msg-294837 From mikydevel at yahoo.fr Sat Jul 23 20:17:33 2022 From: mikydevel at yahoo.fr (Mik J) Date: Sat, 23 Jul 2022 20:17:33 +0000 (UTC) Subject: Php page returns 450 In-Reply-To: <1289422337.1625705.1658570592195@mail.yahoo.com> References: <1289422337.1625705.1658570592195.ref@mail.yahoo.com> <1289422337.1625705.1658570592195@mail.yahoo.com> Message-ID: <1029536810.1811534.1658607453079@mail.yahoo.com> Hello, After taking a rest I found the solution. There was this directive placed a few lines before: location ~ /log { deny all; return 404; } And the /logout.php page was matching that directive. I have replaced it by: location /log { deny all; return 404; } Which hopefully will help to protect access to any page inside the /log directory. 
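A short sketch of why the two spellings behave differently (illustrative; this follows from nginx's documented location-matching order rather than being spelled out in the message above):

    # A regex location matches its pattern anywhere in the URI and is
    # tried before plain prefix matches are applied, so this also
    # caught /logout.php:
    location ~ /log { deny all; return 404; }

    # A plain prefix location still matches /logout.php as a string
    # prefix, but a regex location such as "location ~ \.php$" takes
    # precedence over a plain prefix match, so /logout.php now reaches
    # the PHP handler while e.g. /log/secret.txt is still denied.
    location /log { deny all; return 404; }

One caveat: a request such as /log/evil.php would also be claimed by the \.php$ regex; "location ^~ /log" would disable the regex search for URIs under /log and deny them all.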
Thank you. On Saturday, 23 July 2022 at 12:04:56 UTC+2, Mik J via nginx wrote: Hello, I use an application named Cacti and everything works well except the logout.php page. So when I try to access https://example.org/index.php or https://example.org/graph_view.php it works, the http code is 200. But when I access the logout.php page, a 404 page is returned: GET /logout.php HTTP/2.0 For php pages I use this   location ~ \.php$ {             try_files           $uri =450;             fastcgi_pass        unix:/run/php-fpm.cacti.sock;             fastcgi_split_path_info ^(.+\.php)(/.+)$;             fastcgi_index       index.php;             fastcgi_param       SCRIPT_FILENAME $document_root$fastcgi_script_name;             include             fastcgi_params;             limit_except        GET HEAD POST { deny all; }    } So I would expect a 450 code. If I add this line location = /logout.php { return 405; } before that stanza, a 405 code is returned   location = /logout.php { return 405; }    location ~ \.php$ {             try_files           $uri =450;             fastcgi_pass        unix:/run/php-fpm.cacti.sock;             fastcgi_split_path_info ^(.+\.php)(/.+)$;             fastcgi_index       index.php;             fastcgi_param       SCRIPT_FILENAME $document_root$fastcgi_script_name;             include             fastcgi_params;             limit_except        GET HEAD POST { deny all; }    } So it matches my location. My location ~ \.php$ { doesn't seem to match when the logout.php page is accessed and I don't understand why. Do you have any advice? Thank you _______________________________________________ nginx mailing list -- nginx at nginx.org To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeh253 at gmail.com Sat Jul 23 20:59:35 2022 From: jeh253 at gmail.com (Jay Haines) Date: Sat, 23 Jul 2022 16:59:35 -0400 Subject: Error log question In-Reply-To: <1029536810.1811534.1658607453079@mail.yahoo.com> References: <1289422337.1625705.1658570592195.ref@mail.yahoo.com> <1289422337.1625705.1658570592195@mail.yahoo.com> <1029536810.1811534.1658607453079@mail.yahoo.com> Message-ID: <8c3c05db-781f-ac58-d52d-73fe4109d5f3@gmail.com> Hello, My nginx error log is being filled with errors which I believe are being surfaced from OpenSSL. The log entries number in the hundreds of thousands per day and I understand they are most likely due to conditions beyond my control. Examples of the log entries are: 2022/07/23 16:26:32 [crit] 849483#849483: *8078348 SSL_do_handshake() failed (SSL: error:0A00006E:SSL routines::bad extension) while SSL handshaking, client: 113.211.208.188, server: 0.0.0.0:443 2022/07/23 16:26:33 [alert] 849481#849481: *8078448 could not allocate new session in SSL session shared cache "le_nginx_SSL" while SSL handshaking, client: 175.156.80.121, server: 0.0.0.0:443 Is there any way to bypass logging these errors? With thanks, Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Jul 23 22:15:50 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 24 Jul 2022 01:15:50 +0300 Subject: Error log question In-Reply-To: <8c3c05db-781f-ac58-d52d-73fe4109d5f3@gmail.com> References: <1289422337.1625705.1658570592195.ref@mail.yahoo.com> <1289422337.1625705.1658570592195@mail.yahoo.com> <1029536810.1811534.1658607453079@mail.yahoo.com> <8c3c05db-781f-ac58-d52d-73fe4109d5f3@gmail.com> Message-ID: Hello! 
On Sat, Jul 23, 2022 at 04:59:35PM -0400, Jay Haines wrote: > My nginx error log is being filled with errors which I believe are being > surfaced from OpenSSL. The log entries number in the hundreds of > thousands per day and I understand they are most likely due to > conditions beyond my control. Examples of the log entries are: > > 2022/07/23 16:26:32 [crit] 849483#849483: *8078348 SSL_do_handshake() > failed (SSL: error:0A00006E:SSL routines::bad extension) while SSL > handshaking, client: 113.211.208.188, server: 0.0.0.0:443 Quoting nginx 1.23.1 CHANGES (http://nginx.org/en/CHANGES): *) Change: the logging level of the "bad key share", "bad extension", "bad cipher", and "bad ecpoint" SSL errors has been lowered from "crit" to "info". Upgrade to nginx 1.23.1, these errors should go away. > 2022/07/23 16:26:33 [alert] 849481#849481: *8078448 could not allocate > new session in SSL session shared cache "le_nginx_SSL" while SSL > handshaking, client: 175.156.80.121, server: 0.0.0.0:443 This error indicates that nginx wasn't able to allocate a new session in the SSL session cache defined by the "ssl_session_cache" directive, and removing an old session didn't help. This basically indicates that the SSL session cache is too small, and it would be a good idea to either configure a larger cache or reduce ssl_session_timeout. The logging level is probably a bit too scary, see https://trac.nginx.org/nginx/ticket/621 for details. > Is there any way to bypass logging these errors? See above, hope this helps. -- Maxim Dounin http://mdounin.ru/ From jeh253 at gmail.com Sun Jul 24 13:50:15 2022 From: jeh253 at gmail.com (Jay Haines) Date: Sun, 24 Jul 2022 09:50:15 -0400 Subject: Error log question In-Reply-To: References: <1289422337.1625705.1658570592195.ref@mail.yahoo.com> <1289422337.1625705.1658570592195@mail.yahoo.com> <1029536810.1811534.1658607453079@mail.yahoo.com> <8c3c05db-781f-ac58-d52d-73fe4109d5f3@gmail.com> Message-ID: <5d3a3e03-d70c-0734-fcef-c340c88a982d@gmail.com> Thank you! On 7/23/22 18:15, Maxim Dounin wrote: > Hello! > > On Sat, Jul 23, 2022 at 04:59:35PM -0400, Jay Haines wrote: > >> My nginx error log is being filled with errors which I believe are being >> surfaced from OpenSSL. The log entries number in the hundreds of >> thousands per day and I understand they are most likely due to >> conditions beyond my control. Examples of the log entries are: >> >> 2022/07/23 16:26:32 [crit] 849483#849483: *8078348 SSL_do_handshake() >> failed (SSL: error:0A00006E:SSL routines::bad extension) while SSL >> handshaking, client: 113.211.208.188, server: 0.0.0.0:443 > Quoting nginx 1.23.1 CHANGES (http://nginx.org/en/CHANGES): > > *) Change: the logging level of the "bad key share", "bad extension", > "bad cipher", and "bad ecpoint" SSL errors has been lowered from > "crit" to "info". > > Upgrade to nginx 1.23.1, these errors should go away. > >> 2022/07/23 16:26:33 [alert] 849481#849481: *8078448 could not allocate >> new session in SSL session shared cache "le_nginx_SSL" while SSL >> handshaking, client: 175.156.80.121, server: 0.0.0.0:443 > This error indicates that nginx wasn't able to allocate a new session > in the SSL session cache defined by the "ssl_session_cache" > directive, and removing an old session didn't help. This > basically indicates that the SSL session cache is too small, and it > would be a good idea to either configure a larger cache or reduce > ssl_session_timeout. The logging level is probably a bit too > scary, see https://trac.nginx.org/nginx/ticket/621 for details. 
> >> Is there any way to bypass logging these errors? > See above, hope this helps. > From pgnet.dev at gmail.com Mon Jul 25 14:21:33 2022 From: pgnet.dev at gmail.com (PGNet Dev) Date: Mon, 25 Jul 2022 10:21:33 -0400 Subject: hostname support in geo (ngx_http_geo_module) variable maps? Message-ID: i'm running nginx/1.23.1 i use 'geo'-based (ngx_http_geo_module) permissions to restrict access to some sites e.g., for explicit static IPs geo $RESTRICT_ACCESS { default 0; 127.0.0.1/32 1; 2601:...:abcd 1; } server { ... if ($RESTRICT_ACCESS = 0) { return 403;} it works as intended. i'd like to add access for a couple of hosts with dynamic IPs. the IPs *are* tracked, and updated to DNS. e.g., both A & AAAA records exist, and are automatically updated on change, at mydynamicIP.example.com so that, in effect, geo $RESTRICT_ACCESS { default 0; 127.0.0.1/32 1; 2601:...:abcd 1; 1; 1; } at wiki, there is mention of "ngx_http_rdns_module" https://www.nginx.com/resources/wiki/modules/rdns/ which points to https://github.com/flant/nginx-http-rdns but, there "Disclaimer (February, 2022) This module hasn't been maintained by its original developers for years already." is there a recommended/current method for using *hostnames* in geo? ideally, without lua. From mikydevel at yahoo.fr Tue Jul 26 01:11:45 2022 From: mikydevel at yahoo.fr (Mik J) Date: Tue, 26 Jul 2022 01:11:45 +0000 (UTC) Subject: 2 x Applications using the same domain behind a reverse proxy In-Reply-To: <290877550.1937538.1658241089604@mail.yahoo.com> References: <1490473369.1295398.1658095729329.ref@mail.yahoo.com> <1490473369.1295398.1658095729329@mail.yahoo.com> <9f94d34c-c58b-2333-de6d-cb482ea38601@gmail.com> <290877550.1937538.1658241089604@mail.yahoo.com> Message-ID: <1892552412.22210.1658797905889@mail.yahoo.com> Hello everyone, I'm still trying to solve my implementation. When I access example.org, I want to use /var/www/htdocs/app1, and it works. When I access example.org/app2, I want to use /var/www/htdocs/app2, and it doesn't really work.         location / {           try_files $uri $uri/ /index.php$is_args$args;         root /var/www/htdocs/app1;           location ~ \.php$ {               root /var/www/htdocs/app1;               try_files $uri    =450;               fastcgi_pass      unix:/run/php-fpm.sock;               fastcgi_read_timeout 700;               fastcgi_split_path_info ^(.+\.php)(/.+)$;               fastcgi_index     index.php;               fastcgi_param     SCRIPT_FILENAME $document_root$fastcgi_script_name;               include             fastcgi_params;           }         }         location /app2 {           #root /var/www/htdocs/app2;           alias /var/www/htdocs/app2;           try_files $uri $uri/ /index.php$is_args$args;           location ~ \.php$ {               root              /var/www/htdocs/app2;               #alias /var/www/htdocs/app2;               try_files $uri   =450;               fastcgi_pass   unix:/run/php-fpm.sock;#              fastcgi_read_timeout 700;               fastcgi_split_path_info ^(.+\.php)(/.+)$;               fastcgi_index  index.php;               fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;               include        fastcgi_params;           }         } I have created an index.html file in /var/www/htdocs/app2, and when I access it with example.org/app2/index.html I can see the html text. 
Problem: My application has to be accessed with index.php, so when I type example.org/app2/index.php, Nginx should process /var/www/htdocs/app2/index.php. The problem is that I receive a code 404. I don't receive a code 450. It looks like the condition location /app2 matches but location ~ \.php$ inside doesn't match. Then I tried to replace alias by root just after location /app2 and I do get this error code 450. The location ~ \.php$ seems to match but the php code is not being processed. Does anyone have an idea? Thank you On Tuesday, 19 July 2022 at 16:32:05 UTC+2, Mik J via nginx wrote: Hello Ian, Thank you for your answer. I did what you told me. Now I have on my reverse proxy      location / {         proxy_pass              http://10.10.10.10:80;         proxy_redirect          off;         proxy_set_header        Host                    $http_host;         proxy_set_header        X-Real-IP               $remote_addr; #        proxy_set_header        X-Real-IP               $proxy_add_x_forwarded_for;         proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;         proxy_set_header        Referer                 "http://example.org";        #proxy_set_header       Upgrade                 $http_upgrade;         #proxy_pass_header      Set-Cookie;      } And on the backend server server {           listen 80;           server_name example.org;           index index.html index.php;           root /var/www/htdocs/app1;             access_log /var/log/nginx/example.org.access.log;           error_log /var/log/nginx/example.org.error.log;             location / {             try_files $uri $uri/ /index.php$is_args$args;             root /var/www/htdocs/app1;           }             location /app2 {             try_files $uri $uri/ /index.php$is_args$args;             root /var/www/htdocs/app2;           }            location ~ \.php$ {                try_files $uri    =450;                 fastcgi_pass      unix:/run/php-fpm.app1.sock;                 fastcgi_read_timeout 700;                 fastcgi_split_path_info ^(.+\.php)(/.+)$;                 fastcgi_index     index.php;                 fastcgi_param     SCRIPT_FILENAME  $document_root$fastcgi_script_name;                 include           fastcgi_params;             }  } Access to example.org leads me to app1, so it works as expected. Access to example.org/app2 doesn't lead me to app2. It seems to me that the following line proxy_set_header        Referer                 "http://example.org"; on the reverse proxy could be causing confusion? I can see that example.org/app2 still lands on /var/www/htdocs/app1 Regards On Tuesday, 19 July 2022 at 06:10:28 UTC+2, Ian Hobson wrote: Hi Mik, I think the problem is that your back end cannot distinguish app1 from app2. I don't think there is a need for proxy-pass, unless it is to spread the load. I would try the following approach: Change the root within location / and location /app2 and serve static files directly. When you pass the .php files, the different roots will appear in $document_root, so you can share the php instance. It will be MUCH more efficient if you use fast-cgi because it removes a process creation from every php request served. Finally, you need to protect against sneaks who try to execute code, by adding a try_files thus... location ~ \.php$ {     try_files $uri =450;     include /etc/nginx/fastcgi.conf;     fastcgi_split_path_info  ^(.+\.php)(/.+)$;         etc. Hope this helps. 
Ian On 18/07/2022 05:08, Mik J via nginx wrote: > Hello, > > I don't manage to make my thing works although it's probably a classic > for Nginx users. > > I have a domain https://example.org > > What I want is this > https://example.org goes on reverse proxy => server1 (10.10.10.10) to > the application /var/www/htdocs/app1 > https://example.org/app2 goes on reverse proxy => server1 (10.10.10.10) > to the application /var/www/htdocs/app2 > So in the latter case the user adds /app2 and the flow is redirected to > the /var/www/htdocs/app2 directory > > First the reverse proxy, I wrote this >      ## >      # App1 >      ## >       location / { >          proxy_pass              http://10.10.10.10:80; >          proxy_redirect          off; >          proxy_set_header        Host                    $http_host; >          proxy_set_header        X-Real-IP               $remote_addr; >          proxy_set_header        X-Forwarded-For        > $proxy_add_x_forwarded_for; >          proxy_set_header        Referer                > "http://example.org"; >          #proxy_set_header       Upgrade                 $http_upgrade; >          #proxy_pass_header      Set-Cookie; >       } >      ## >      # App2 >      ## >       location /app2 { >          proxy_pass              http://10.10.10.10:80; >          proxy_redirect          off; >          proxy_set_header        Host                    $http_host; >          proxy_set_header        X-Real-IP               $remote_addr; >          proxy_set_header        X-Forwarded-For        > $proxy_add_x_forwarded_for; >          proxy_set_header        Referer                > "http://example.org"; >          #proxy_set_header       Upgrade                 $http_upgrade; >          #proxy_pass_header      Set-Cookie; >       } > > > Second the back end server > server { >          listen 80; >          server_name example.org; >          index index.html index.php; >          root /var/www/htdocs/app1; > >          access_log /var/log/nginx/example.org.access.log; >          error_log /var/log/nginx/example.org.error.log; > >          location / { >            try_files $uri $uri/ /index.php$is_args$args; > >            location ~ \.php$ { >                root              /var/www/htdocs/app1; >                fastcgi_pass      unix:/run/php-fpm.app1.sock; >                fastcgi_read_timeout 700; >                fastcgi_split_path_info ^(.+\.php)(/.+)$; >                fastcgi_index     index.php; >                fastcgi_param     SCRIPT_FILENAME > $document_root$fastcgi_script_name; >                include           fastcgi_params; >            } >          } > >          location /app2 { >            try_files $uri $uri/ /index.php$is_args$args; > >            location ~ \.php$ { >                root              /var/www/htdocs/app2; >                fastcgi_pass      unix:/run/php-fpm.app1.sock; >                fastcgi_read_timeout 700; >                fastcgi_split_path_info ^(.+\.php)(/.+)$; >                fastcgi_index     index.php; >                fastcgi_param     SCRIPT_FILENAME > $document_root$fastcgi_script_name; >                include           fastcgi_params; >            } >          } > } > > The result I have right now is that I can access app1 with > http://example.org, but i cannot access app2 with http://example.org/app2 > > Also what is the best practice on the backend server: > - should I make one single virtual host with two location statements > like I did or 2 virtual hosts with a fake name like > 
internal.app1.example.org and internal.app2.example.org ? > > - can I mutualise the location ~ \.php$ between the two ? > > - Should I copy access_log and error_log in the location /app2 statement ? > > > > By the way, app1 and app2 are the same application/program but sometimes > > I want another instance or test app version 1, app version 2 etc. > > > > What I tend to do in the past is to have > > app1.example.org > > app2.example.org > > The problem is that it makes me use multiple certificates. > > Here I want to group all the applications behind one domain name > > example.org with one certificate and then access different applications > > with example.org/app1, example.org/app2 > > > > Thank you > > > > > > > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org -- Ian Hobson Tel (+66) 626 544 695 _______________________________________________ nginx mailing list -- nginx at nginx.org To unsubscribe send an email to nginx-leave at nginx.org _______________________________________________ nginx mailing list -- nginx at nginx.org To unsubscribe send an email to nginx-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jul 26 10:27:29 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 26 Jul 2022 11:27:29 +0100 Subject: 2 x Applications using the same domain behind a reverse proxy In-Reply-To: <1892552412.22210.1658797905889@mail.yahoo.com> References: <1490473369.1295398.1658095729329.ref@mail.yahoo.com> <1490473369.1295398.1658095729329@mail.yahoo.com> <9f94d34c-c58b-2333-de6d-cb482ea38601@gmail.com> <290877550.1937538.1658241089604@mail.yahoo.com> <1892552412.22210.1658797905889@mail.yahoo.com> Message-ID: <20220726102729.GP14648@daoine.org> On Tue, Jul 26, 2022 at 01:11:45AM +0000, Mik J via nginx wrote: Hi there, I don't have a full answer, but a few config changes should hopefully help with the ongoing diagnosis. > When I access example.org, I want to use /var/www/htdocs/app1, and it works. > > When I access example.org/app2, I want to use /var/www/htdocs/app2, and it doesn't really work. >         location / { >           try_files $uri $uri/ /index.php$is_args$args; >         root /var/www/htdocs/app1; That says "a request for /thing will look for the file /var/www/htdocs/app1/thing, or else will become a subrequest for /index.php". So far, so good. >         location /app2 { >           #root /var/www/htdocs/app2; >           alias /var/www/htdocs/app2; >           try_files $uri $uri/ /index.php$is_args$args; Depending on whether you use "root" or "alias" there, a request for "/app2/thing" will look for one of two different files, or else become a subrequest for "/index.php". I suspect that instead of the above, you want root /var/www/htdocs; try_files $uri $uri/ /app2/index.php$is_args$args; so that if /var/www/htdocs/app2/thing does not exist, the subrequest is for /app2/index.php. >           location ~ \.php$ { >               root              /var/www/htdocs/app2; With that, later things will be looking for /var/www/htdocs/app2/app2/index.php (double /app2) which almost certainly does not exist. With "root" set correctly outside this location{}, you can remove that "root" line entirely. Or change it to be "root /var/www/htdocs;". Those two changes, within "location /app2" and the nested "location ~ \.php$", should be enough to allow whatever the next error is to appear. 
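Putting those two changes together, the block might look like this (a sketch assembled from the suggestions above plus the poster's own paths, socket name and =450 convention; untested here):

    location /app2 {
        root /var/www/htdocs;
        try_files $uri $uri/ /app2/index.php$is_args$args;

        location ~ \.php$ {
            # "root /var/www/htdocs" is inherited from the outer location,
            # so for a request to /app2/index.php the parameter
            # $document_root$fastcgi_script_name expands to
            # /var/www/htdocs/app2/index.php
            try_files $uri =450;
            fastcgi_pass unix:/run/php-fpm.sock;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }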
If you test by doing (for example) curl -i http://example.org/app2/ the response http headers and content may give a clue as to what is happening versus what should be happening. For the other problem reports -- if they matter, if you can include enough of the configuration that it can be copy-pasted into a test system, it will be simpler for someone else to repeat what you are doing. But possibly the above change will mean that they no longer happen. You had a few other questions initially: > > Also what is the best practice on the backend server: > > - should I make one single virtual host with two location statements > > like I did or 2 virtual hosts with a fake name like > > internal.app1.example.org and internal.app2.example.org ? The answer there is always "it depends" :-( In this case, you have moved away from proxy_pass to a backend server, towards fastcgi_pass to a local socket; so I guess it does not really matter here and now. The more important thing is: does your application allow itself to be (reverse-proxy) accessed or installed in a "subdirectory" like "/app2/"? If it does not, then there are likely to be problems. > > - can I mutualise the location ~ \.php$ between the two ? Probably not; because the two location{}s probably have different requirements. You might be able to have all of the fastcgi_param directives in a common place, and "just" have duplicate "fastcgi_pass" directives in the two locations, though. > > - Should I copy access_log and error_log in the location /app2 statement ? As you wish. You can have nginx writing one log file, and make sure that whatever is reading it knows how to interpret it; or you can have nginx writing multiple log files, and have whatever is reading each one, know how to interpret that one. I suspect that the main advantage to "different log files per location" is that it will be very clear which location{} was in use when the request completed; and if that is not the one that you expected, then you'll want to investigate why. (The main disadvantage is: multiple files to search through, in case things were not handled as you expected.) Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jul 26 14:01:35 2022 From: nginx-forum at forum.nginx.org (blason) Date: Tue, 26 Jul 2022 10:01:35 -0400 Subject: SSL Acceleration or Offloading with Nginx Message-ID: Hi Team, I wanted to know the possibilities of offloading Nginx SSL processing to a separate CPU card or other hardware. How do I achieve better performance with Nginx SSL offloading? Do I need to go with more CPU cores, or a dedicated card, or some other mechanism? Can someone please suggest? TIA Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294862,294862#msg-294862 From osa at freebsd.org.ru Tue Jul 26 14:27:33 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 26 Jul 2022 17:27:33 +0300 Subject: SSL Acceleration or Offloading with Nginx In-Reply-To: References: Message-ID: Hi, hope you're doing well. On Tue, Jul 26, 2022 at 10:01:35AM -0400, blason wrote: > Hi Team, > > I wanted to know the possibilities of offloading Nginx SSL processing to a separate > CPU card or other hardware. How do I achieve better performance with > Nginx SSL offloading? Do I need to go with more CPU cores, or a dedicated card, > or some other mechanism? > > Can someone please suggest? Here are some basic optimizations that can be done with nginx [1]. It's also possible to use the ssl_engine [2] directive to define the name of the hardware SSL accelerator. References: 1. 
https://nginx.org/en/docs/http/configuring_https_servers.html#optimization 2. https://nginx.org/en/docs/ngx_core_module.html#ssl_engine -- Sergey A. Osokin From cferreira at senhasegura.com Tue Jul 26 14:41:52 2022 From: cferreira at senhasegura.com (Caio Abreu Ferreira) Date: Tue, 26 Jul 2022 11:41:52 -0300 Subject: nginx + sni + ssh connection Message-ID: <5489bf33-54f9-79d0-e131-85c4c273c454@senhasegura.com> An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Tue Jul 26 15:08:07 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 26 Jul 2022 18:08:07 +0300 Subject: nginx + sni + ssh connection In-Reply-To: <5489bf33-54f9-79d0-e131-85c4c273c454@senhasegura.com> References: <5489bf33-54f9-79d0-e131-85c4c273c454@senhasegura.com> Message-ID: Hi Caio Abreu Ferreira, hope you're doing well. On Tue, Jul 26, 2022 at 11:41:52AM -0300, Caio Abreu Ferreira via nginx wrote: >     List > >     I'm trying to do something like this > >  mt4.senhasegura.local ─┬─► nginx at 192.168.122.10 ─┬─► mt4 at 192.168.122.11 > xpto.senhasegura.local ─┘                                                ─► > xpto at 192.168.122.12 > >     I already got it with the HTTP, HTTPS, and RDP protocols, but I'm not > getting it with the ssh protocol. With only one ssh server, I was able to > configure it. The problem is occurring with multiple ssh servers. I found > several docs on the internet but all docs were about one ssh server and > multiple HTTPS servers. I have multiple ssh servers and multiple HTTPS servers. > Is it possible to do an SSH connection redirection? So, it's possible to use the stream module [1] to proxy SSH connections. Also, the SSH settings on backends, including SSH keys, should be the same. If all of those above are correct, then what's the issue? References: 1. https://nginx.org/en/docs/stream/ngx_stream_core_module.html -- Sergey A. Osokin From jstaylor at xmission.com Tue Jul 26 15:21:21 2022 From: jstaylor at xmission.com (Jim Taylor) Date: Tue, 26 Jul 2022 09:21:21 -0600 Subject: More thanks to Francis Daly Message-ID: <4d054205-9d59-c581-146e-a4323bf5bbf7@xmission.com> The folks at nginx and the folks at Raspberry Pi are both smarter than I am. I downloaded the 64-bit version of the Pi OS, copied nginx.conf and nginx.ssl from my old SD card, typed the magic words "apt-get install nginx," fired up a browser, typed "jstaylor.com", and there was Carolyn on a camel! All the stuff you tried to explain has been baked into the new version of the Pi OS, so I should have realized I was on the wrong track as soon as I saw "armhf" where "arm64" should have been. Thanks to you for getting me off the wrong track, and to RaspberryPi.com for doing the hard part for me. Jim Taylor From nginx-forum at forum.nginx.org Tue Jul 26 17:42:24 2022 From: nginx-forum at forum.nginx.org (blason) Date: Tue, 26 Jul 2022 13:42:24 -0400 Subject: SSL Acceleration or Offloading with Nginx In-Reply-To: References: Message-ID: <499c479b1a69b95dac6c89add740e2b4.NginxMailingListEnglish@forum.nginx.org> Thanks a lot for your input Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294862,294870#msg-294870 From nginx-forum at forum.nginx.org Tue Jul 26 17:43:25 2022 From: nginx-forum at forum.nginx.org (blason) Date: Tue, 26 Jul 2022 13:43:25 -0400 Subject: SSL Acceleration or Offloading with Nginx In-Reply-To: References: Message-ID: Any specific card or hardware device that you can suggest for the setup? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,294862,294871#msg-294871
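To make the ssl_engine suggestion from earlier in this thread concrete (a sketch only: the directive is real, but the engine name below is a placeholder, and whether any engine is available depends on how your OpenSSL was built -- "openssl engine" lists what is compiled in):

    # main context of nginx.conf, outside the http{} block
    ssl_engine qat;    # hypothetical name of a hardware offload engine

Without such hardware, the usual route to better TLS performance is the optimization guide linked above: enough worker processes for your cores, an SSL session cache, and keepalive connections.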
From hobson42 at gmail.com Wed Jul 27 02:52:27 2022 From: hobson42 at gmail.com (Ian Hobson) Date: Wed, 27 Jul 2022 09:52:27 +0700 Subject: 2 x Applications using the same domain behind a reverse proxy In-Reply-To: <1892552412.22210.1658797905889@mail.yahoo.com> References: <1490473369.1295398.1658095729329.ref@mail.yahoo.com> <1490473369.1295398.1658095729329@mail.yahoo.com> <9f94d34c-c58b-2333-de6d-cb482ea38601@gmail.com> <290877550.1937538.1658241089604@mail.yahoo.com> <1892552412.22210.1658797905889@mail.yahoo.com> Message-ID: Hi Mik, I am no expert on nginx, just a user, and I have never had to split a site as you are. That said, I think the error is that you have wrapped your locations inside each other. This means that when it matches /, it goes on to pass .php files to app1, without ever looking for /app2. You need three location statements. location / { root /var/www/htdocs/app1 try_files to serve app1 static content } location /app2 { root /var/www/htdocs/app2 try_files to serve app2 static content } location ~ \.php { # no root needed here fastcgi stuff here } The line within \.php that sets SCRIPT_FILENAME to $document_root$fastcgi_script_name will pass the different roots to php, so app1 and app2 will each get the correct script names. Regards Ian On 26/07/2022 08:11, Mik J via nginx wrote: > Hello everyone, > > I'm still trying to solve my implementation. > > When I access to example.org, I was to use /var/www/htdocs/app1 and it > works. > > When I access to example.org/app2, I was to use /var/www/htdocs/app2 and > it doesn't really work. > >         location / { >           try_files $uri $uri/ /index.php$is_args$args; >         root /var/www/htdocs/app1; > >           location ~ \.php$ { >               root /var/www/htdocs/app1; >               try_files $uri    =450; >               fastcgi_pass      unix:/run/php-fpm.sock; >               fastcgi_read_timeout 700; >               fastcgi_split_path_info ^(.+\.php)(/.+)$; >               fastcgi_index     index.php; >               fastcgi_param     SCRIPT_FILENAME > $document_root$fastcgi_script_name; >               include           fastcgi_params; >           } > >         } > >         location /app2 { >           #root /var/www/htdocs/app2; >           alias /var/www/htdocs/app2; >           try_files $uri $uri/ /index.php$is_args$args; > >           location ~ \.php$ { >               root              /var/www/htdocs/app2; >               #alias /var/www/htdocs/app2; >               try_files $uri   =450; >               fastcgi_pass   unix:/run/php-fpm.sock; > #              fastcgi_read_timeout 700; >               fastcgi_split_path_info ^(.+\.php)(/.+)$; >               fastcgi_index  index.php; >               fastcgi_param  SCRIPT_FILENAME > $document_root$fastcgi_script_name; >               include        fastcgi_params; >           } >         } > > I have created an index.html file in /var/www/htdocs/app2, when I access > it with example.org/app2/index.html I can see the html text. > > Problem > My application has to be accessed with index.php so when I type > example.org/app2/index.php, Nginx should process > /var/www/htdocs/app2/index.php > The problem is that I receive a code 404. I don't receive a code 450. > It looks like the condition location /app2 matches but location ~ \.php$ > inside doesn't match > > > Then I tried to replace alias by root just after location /app2 and I do > get this error code 450. 
the location ~ \.php$ seems to match but the > php code is not being processed. > > Does anyone has a idea ? > Le mardi 19 juillet 2022 à 16:32:05 UTC+2, Mik J via nginx > a écrit : > > > Hello Ian, > > Thank you for your answer. I did what you told me > > Now I have on my reverse proxy >      location / { >         proxy_pass              http://10.10.10.10:80; >         proxy_redirect          off; >         proxy_set_header        Host                    $http_host; >         proxy_set_header        X-Real-IP               $remote_addr; > #        proxy_set_header        X-Real-IP > $proxy_add_x_forwarded_for; >         proxy_set_header        X-Forwarded-For > $proxy_add_x_forwarded_for; >         proxy_set_header        Referer > "http://example.org"; >         #proxy_set_header       Upgrade                 $http_upgrade; >         #proxy_pass_header      Set-Cookie; >      } > > And on the backend server > server { >           listen 80; >           server_name example.org; >           index index.html index.php; >           root /var/www/htdocs/app1; > >           access_log /var/log/nginx/example.org.access.log; >           error_log /var/log/nginx/example.org.error.log; > >           location / { >             try_files $uri $uri/ /index.php$is_args$args; > root /var/www/htdocs/app1; > >           } > >           location /app2 { >             try_files $uri $uri/ /index.php$is_args$args; > root /var/www/htdocs/app2; > >           } >             location ~ \.php$ { > try_files $uri    =450; >                 fastcgi_pass      unix:/run/php-fpm.app1.sock; >                 fastcgi_read_timeout 700; >                 fastcgi_split_path_info ^(.+\.php)(/.+)$; >                 fastcgi_index     index.php; >                 fastcgi_param     SCRIPT_FILENAME > $document_root$fastcgi_script_name; >                 include           fastcgi_params; >             } > >  } > > Access to example.org leads me to app1 so it works as expected. > Access to example.org/app2 doesnt lead me to app2. It seems to me that > the following line > proxy_set_header        Referer                 "http://example.org"; > on the reverse proxy could make a confusion ? > > I can see that example.org/app2 still lands on /var/www/htdocs/app1 > > Regards > > > Le mardi 19 juillet 2022 à 06:10:28 UTC+2, Ian Hobson > a écrit : > > > Hi Mik, > > I think the problem is that your back end cannot distinguish app1 from > app2. I don't think there is a need for proxy-pass, unless it is to > spread the load. > > I would try the following approach: > > Change the root within location / and location /app2 and > serve static files directly. > > When you pass the .php files, the different roots will  appear in the > $document_root location, so > you can share the php instance. > > It will be MUCH more efficient if you use fast-cgi because it removes a > process create from every php serve. > > Finally, you need to protect against sneaks who try to execute code, by > adding a try_files thus... > > location ~ \.php$ { >     try_files $uri =450; >     include /etc/nginx/fastcgi.conf; >     fastcgi_split_path_info  ^(.+\.php)(/.+)$; >         etc. > > Hope this helps. > > Ian > > > On 18/07/2022 05:08, Mik J via nginx wrote: > > Hello, > > > > I don't manage to make my thing works although it's probably a classic > > for Nginx users. 
> > > > I have a domain https://example.org > > > > What I want is this > > https://example.org goes on reverse proxy => > server1 (10.10.10.10) to > > the application /var/www/htdocs/app1 > > https://example.org/app2 goes on reverse > proxy => server1 (10.10.10.10) > > to the application /var/www/htdocs/app2 > > So in the latter case the user adds /app2 and the flow is redirected to > > the /var/www/htdocs/app2 directory > > > > First the reverse proxy, I wrote this > >      ## > >      # App1 > >      ## > >       location / { > >          proxy_pass http://10.10.10.10:80; > >          proxy_redirect          off; > >          proxy_set_header        Host                    $http_host; > >          proxy_set_header        X-Real-IP               $remote_addr; > >          proxy_set_header        X-Forwarded-For > > $proxy_add_x_forwarded_for; > >          proxy_set_header        Referer > > "http://example.org "; > >          #proxy_set_header       Upgrade                 $http_upgrade; > >          #proxy_pass_header      Set-Cookie; > >       } > >      ## > >      # App2 > >      ## > >       location /app2 { > >          proxy_pass http://10.10.10.10:80; > >          proxy_redirect          off; > >          proxy_set_header        Host                    $http_host; > >          proxy_set_header        X-Real-IP               $remote_addr; > >          proxy_set_header        X-Forwarded-For > > $proxy_add_x_forwarded_for; > >          proxy_set_header        Referer > > "http://example.org "; > >          #proxy_set_header       Upgrade                 $http_upgrade; > >          #proxy_pass_header      Set-Cookie; > >       } > > > > > > Second the back end server > > server { > >          listen 80; > >          server_name example.org; > >          index index.html index.php; > >          root /var/www/htdocs/app1; > > > >          access_log /var/log/nginx/example.org.access.log; > >          error_log /var/log/nginx/example.org.error.log; > > > >          location / { > >            try_files $uri $uri/ /index.php$is_args$args; > > > >            location ~ \.php$ { > >                root              /var/www/htdocs/app1; > >                fastcgi_pass      unix:/run/php-fpm.app1.sock; > >                fastcgi_read_timeout 700; > >                fastcgi_split_path_info ^(.+\.php)(/.+)$; > >                fastcgi_index     index.php; > >                fastcgi_param     SCRIPT_FILENAME > > $document_root$fastcgi_script_name; > >                include           fastcgi_params; > >            } > >          } > > > >          location /app2 { > >            try_files $uri $uri/ /index.php$is_args$args; > > > >            location ~ \.php$ { > >                root              /var/www/htdocs/app2; > >                fastcgi_pass      unix:/run/php-fpm.app1.sock; > >                fastcgi_read_timeout 700; > >                fastcgi_split_path_info ^(.+\.php)(/.+)$; > >                fastcgi_index     index.php; > >                fastcgi_param     SCRIPT_FILENAME > > $document_root$fastcgi_script_name; > >                include           fastcgi_params; > >            } > >          } > > } > > > > The result I have right now is that I can access app1 with > > http://example.org, but i cannot access app2 > with http://example.org/app2 > > > > Also what is the best practice on the backend server: > > - should I make one single virtual host with two location statements > > like I did or 2 virtual hosts with a fake name like > > internal.app1.example.org and 
internal.app2.example.org ? > > - can I mutualise the location ~ \.php$ between the two ? > > - Should I copy access_log and error_log in the location /app2 > statement ? > > > > By the way, app1 and app2 are the same application/program but sometimes > > I want another instance or test app version 1, app version 2 etc. > > > > What I tend to do in the past is to have > > app1.example.org > > app2.example.org > > The problem is that it makes me use multiple certificates. > > Here I want to group all the applications behind one domain name > > example.org with one certificate and then access different applications > > with example.org/app1, example.org/app2 > > > > Thank you > > > > > > > > > > > > > > > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org > > > -- > Ian Hobson > Tel (+66) 626 544 695 > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org -- Ian Hobson Tel (+66) 626 544 695 From nginx-forum at forum.nginx.org Wed Jul 27 13:34:18 2022 From: nginx-forum at forum.nginx.org (Alex Meise) Date: Wed, 27 Jul 2022 09:34:18 -0400 Subject: Logging WebSocket Messages. In-Reply-To: References: Message-ID: <5d963ce0b953487a4c67006a719da522.NginxMailingListEnglish@forum.nginx.org> Did you find the solution to your question? Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290112,294877#msg-294877 From nginx-forum at forum.nginx.org Wed Jul 27 21:24:07 2022 From: nginx-forum at forum.nginx.org (RasmithaM) Date: Wed, 27 Jul 2022 17:24:07 -0400 Subject: Nginx most connections in FIN_WAIT2 state Message-ID: <9b713a796f032ac4d388861bd6dc383b.NginxMailingListEnglish@forum.nginx.org> We are using Nginx for outbound connectivity to client , I see all the requests are going to FIN_WAIT2 state , even server sending us the ACK. the fin_timeout is set to 60 sec , but we observed that the process continues to stay in FIN_WAIT2 even after 60sec. Is this kernel issue / Nginc issue ? netstat -tan | awk '{print $6}' | sort | uniq -c 1793 CLOSE_WAIT 40 ESTABLISHED 6398 FIN_WAIT2 1 Foreign 22 LISTEN 152 TIME_WAIT 1 established) This is filling up the number of sockets finally have to restart Nginx to release the FIN_WAIT2 processes. 
Nginx configuration : egress-service-meshproxy.conf: | server { listen 9080; server_name www.services.com; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-NginX-Proxy true; proxy_cache_bypass $http_upgrade; proxy_redirect off; proxy_ssl_protocols TLSv1.2 TLSv1.3; proxy_ssl_ciphers HIGH:!aNULL:!MD5; proxy_read_timeout 10s; proxy_connect_timeout 10s; # this doesn't seem to work well of "on" -- 502 upstream drop from on reused connections proxy_http_version 1.1; proxy_set_header Connection ""; proxy_ssl_session_reuse off; #proxy_ssl_name off; proxy_ssl_server_name on; proxy_ssl_verify on; proxy_ssl_verify_depth 3; location / { proxy_ssl_certificate /deployment/secrets/egress-service-prod/tls.crt; proxy_ssl_certificate_key /deployment/secrets/egress-service-prod/tls.key; #proxy_ssl_trusted_certificate /deployment/secrets/egress-service-prod/ca.crt; proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt; proxy_pass https://www.services.com:443; } } nginx-server-default.conf: |+ server { listen 9080 default_server; listen [::]:9080 default_server; root /usr/share/nginx/html; index index.html; # Proxy everything we know about to static content location /api/v1/irp/health { add_header Content-Type text/plain; return 200 '{ "status": "OK" }'; } location /api/v1/irp/actuator/health { add_header Content-Type text/plain; return 200 '{ "status": "OK" }'; } location / { add_header Content-Type text/plain; return 200 '{ "status": "OK, no content here, use the services hostname to access SSL reverse proxy!" }'; } } nginx.conf: |+ pcre_jit on; user nginx; worker_processes 1; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 2048; accept_mutex off; multi_accept off; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '{"time": "$time_local","status": "$status","request_time": $request_time, "host": "$http_host", "port": "$server_port", "request_uri": "$uri", "x_et_request_id":"$http_x_et_request_id","x_et_response_code": "$upstream_http_x_et_response_code"}'; access_log /var/log/nginx/access.log main; error_log /var/log/nginx/error.log; sendfile on; tcp_nopush on; tcp_nodelay on; client_max_body_size 10m; keepalive_timeout 60; #ssl_prefer_server_ciphers on; #use epoll; gzip on; include /deployment/config/nginx-server-default.conf; include /deployment/config/egress-service-meshproxy-*.conf; } template-nginx-server.conf: |- server { listen 9080; server_name ${MESH_HOSTNAME}; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-NginX-Proxy true; proxy_cache_bypass $http_upgrade; proxy_redirect off; proxy_ssl_protocols TLSv1.2 TLSv1.3; proxy_ssl_ciphers HIGH:!aNULL:!MD5; proxy_read_timeout 10s; proxy_connect_timeout 10s; # this doesn't seem to work well of "on" -- 502 upstream drop from on reused connections proxy_http_version 1.1; proxy_set_header Connection ""; proxy_ssl_session_reuse off; #proxy_ssl_name off; proxy_ssl_server_name on; proxy_ssl_verify on; proxy_ssl_verify_depth 3; location / { proxy_ssl_certificate /deployment/secrets/payaas-ipccpaas-com/tls.crt; proxy_ssl_certificate_key /deployment/secrets/payaas-ipccpaas-com/tls.key; #proxy_ssl_trusted_certificate /deployment/secrets/payaas-ipccpaas-com/ca.crt; proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt; proxy_pass https://${MESH_HOSTNAME}; } } Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,294880,294880#msg-294880 From mdounin at mdounin.ru Thu Jul 28 02:41:42 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 Jul 2022 05:41:42 +0300 Subject: Nginx most connections in FIN_WAIT2 state In-Reply-To: <9b713a796f032ac4d388861bd6dc383b.NginxMailingListEnglish@forum.nginx.org> References: <9b713a796f032ac4d388861bd6dc383b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Jul 27, 2022 at 05:24:07PM -0400, RasmithaM wrote: > We are using Nginx for outbound connectivity to a client. > > I see all the requests are going to the FIN_WAIT2 state, even though the server > is sending us the ACK. > > The fin_timeout is set to 60 sec, but we observed that the connections > continue to stay in FIN_WAIT2 even after 60 sec. > Is this a kernel issue or an nginx issue? > netstat -tan | awk '{print $6}' | sort | uniq -c > 1793 CLOSE_WAIT > 40 ESTABLISHED > 6398 FIN_WAIT2 > 1 Foreign > 22 LISTEN > 152 TIME_WAIT > 1 established) > > This is filling up the number of sockets; finally we have to restart Nginx to > release the FIN_WAIT2 connections. Are you seeing FIN_WAIT2 on connections from nginx to upstream servers? Or on connections from clients to nginx? The FIN_WAIT2 state suggests that shutdown() or close() was called on the socket, so FIN was sent to the other end, and then ACK was received. At this point the system is waiting for the other end's FIN. Depending on whether shutdown() or close() was used, it's either the application's (in case of shutdown() and no close()) or the kernel's (in case of close()) responsibility to clean up things. Given that nginx never uses shutdown() on sockets to upstream servers, for connections from nginx to upstream servers the only remaining option seems to be a kernel issue. It is not clear why restarting nginx helps to clean up things though. For connections from clients to nginx, nginx can use shutdown() and then keep reading from the socket for up to lingering_time (http://nginx.org/r/lingering_time), which is 30 seconds by default. That is, on connections from clients to nginx the FIN_WAIT2 state can be seen for up to 90 seconds assuming default settings. If you are seeing sockets in this state for a significantly longer time, it might be a good idea to further debug what's going on. In particular, nginx debugging log might be helpful (see http://nginx.org/en/docs/debugging_log.html for details). Hope this helps. -- Maxim Dounin http://mdounin.ru/ From me at nanaya.pro Fri Jul 29 20:13:52 2022 From: me at nanaya.pro (nanaya) Date: Sat, 30 Jul 2022 05:13:52 +0900 Subject: Questions about real ip module Message-ID: <9551b76b-187c-4424-a3ca-ae42b8aade7d@www.fastmail.com> I have a few questions about the real ip module (tried on nginx/1.22.0): 1. is there no way to reset the list of `set_real_ip_from` for a specific subsection? For example to have a completely different set of trusted addresses for a specific server 2. does setting `real_ip_header '';` in a section effectively disable the module for the section? 3. documentation says `real_ip_header` is allowed in location block but it doesn't seem to do anything? This still uses address from X-Real-Ip instead of X-Other for allow check and log: http { real_ip_header X-Real-Ip; ... 
server { location /data/ { real_ip_header X-Other; allow 10.0.0.1; # <- checks against value from X-Real-Ip deny all; access_log /var/log/nginx/data.log; # <- logs address from X-Real-Ip } } } Similarly I tried this version as well and it behaves the same: location /da { real_ip_header X-Other; location /data/ { allow 10.0.0.1; deny all; access_log /var/log/nginx/data.log; } }
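On question 1, a sketch of the usual nginx behaviour (based on the general rule that an array-style directive defined at an inner level replaces, rather than extends, the inherited list -- worth verifying for your version, as it is not confirmed in this thread; addresses below are hypothetical):

    http {
        set_real_ip_from  10.0.0.0/8;      # trusted range for most servers
        real_ip_header    X-Real-Ip;

        server {
            # defining the directive here means only this list applies in
            # this server; the http-level 10.0.0.0/8 entry is not inherited
            set_real_ip_from  192.0.2.1;   # per-server trusted proxy address
        }
    }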