From jordanc.carter at outlook.com Mon Jan 1 10:57:37 2024 From: jordanc.carter at outlook.com (J Carter) Date: Mon, 1 Jan 2024 10:57:37 +0000 Subject: Calculating requests per second, per IP address In-Reply-To: References: Message-ID: Hello, On Fri, 29 Dec 2023 09:54:30 -0300 Rejaine Monteiro wrote: > Hi all, > > I´m running Nginx community edition and need to implement rate limiting > > There's plenty of guides out there on how to do this, but no guides on how > to get real values/stats from the access logs > > > What I need to get from the NGINX access logs is: > > - Requests per minute, per IP address > > - Average requests per minute, derived from all IP addresses > > - Max requests per minute, per IP address > > We have a few endpoints with different functionalities and we simply cannot > have a common rule that works for everyone. > > Any tips on a tool or script that could help generate this type of > information (if possible in real time or collect this data for future > analysis)? > > > I appreciate any tips. > There isn't an existing bespoke tool for this (at least publicly available). Normally such metrics are generated by: A) Feeding access logs into a log aggregator platform (Splunk, Loki) B) Performing / creating custom queries on that platform, to generate such reports. Of note, Loki (which is AGPL/free) has a nice user experience for this. https://grafana.com/docs/loki/latest/query/metric_queries/ Other than log aggregators, writing a script (python, perl, bash) is likely the fastest approach. Consider sharing it if you do, I'm sure others will find it useful. If you're looking for existing scripts as a starting point, there may be similar tools for Apache that you could adapt for nginx. Both use the 'Common Log Format' for access logs by default. https://github.com/ajohnstone/apache-log-stats Something like this. From jeremy at ardley.org Tue Jan 2 03:23:43 2024 From: jeremy at ardley.org (jeremy ardley) Date: Tue, 2 Jan 2024 11:23:43 +0800 Subject: Calculating requests per second, per IP address In-Reply-To: References: Message-ID: <045aea4d-bc0a-41bd-a7c2-ed6bda80ef1f@ardley.org> On 1/1/24 18:57, J Carter wrote: > Other than log aggregators, writing a script (python, perl, bash) is > likely the fastest approach. Consider sharing it if you do, I'm sure > others will find it useful. -- An alternative option is to put haproxy as a front end to nginx. haproxy has very configurable rate limiting. See https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting From anthony.roberts at linaro.org Tue Jan 2 11:03:03 2024 From: anthony.roberts at linaro.org (Anthony Roberts) Date: Tue, 2 Jan 2024 11:03:03 +0000 Subject: Windows ARM64 Message-ID: Hello, A small introduction - I work on Linaro's Windows on Arm enablement team, and we work on porting/enabling various open-source projects for the platform. We have recently done a small investigation, and it turns out nginx can be compiled OOB on Windows ARM64 platforms with VS2022 - an example run from our internal nightlies can be seen here: https://gitlab.com/Linaro/windowsonarm/packages/nginx/-/jobs/5742208111 With the advent of things like Microsoft's Azure Windows ARM64 instances[0], and various client devices, it is a growing platform. Our partners (Microsoft and Qualcomm) would be interested in seeing a release! Is an official Windows ARM64 build something you have considered? Would you consider it? 
Thanks, Anthony [0]: https://azure.microsoft.com/en-us/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rejaine at bhz.jamef.com.br Tue Jan 2 14:00:34 2024 From: rejaine at bhz.jamef.com.br (Rejaine Monteiro) Date: Tue, 2 Jan 2024 11:00:34 -0300 Subject: Calculating requests per second, per IP address In-Reply-To: <045aea4d-bc0a-41bd-a7c2-ed6bda80ef1f@ardley.org> References: <045aea4d-bc0a-41bd-a7c2-ed6bda80ef1f@ardley.org> Message-ID: Hi all!! I appreciate everyone for the help. For now, I used the good old bash (awk, sort, uniq) to get the information I need, but I'll try later to analyze the other tools and tips you guys sent me! Thanks and a great 2024 to everyone. On Tue, Jan 2, 2024 at 12:24 AM jeremy ardley via nginx wrote: > > On 1/1/24 18:57, J Carter wrote: > > Other than log aggregators, writing a script (python, perl, bash) is > > likely the fastest approach. Consider sharing it if you do, I'm sure > > others will find it useful. > -- > An alternative option is to put haproxy as a front end to nginx. > haproxy has very configurable rate limiting. See > > https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -- *Esta mensagem pode conter informações confidenciais ou privilegiadas, sendo seu sigilo protegido por lei. Se você não for o destinatário ou a pessoa autorizada a receber esta mensagem, não pode usar, copiar ou divulgar as informações nela contidas ou tomar qualquer ação baseada nessas informações. Se você recebeu esta mensagem por engano, por favor avise imediatamente ao remetente, respondendo o e-mail e em seguida apague-o. Agradecemos sua cooperação.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehoffman333 at gmail.com Tue Jan 2 16:01:00 2024 From: ehoffman333 at gmail.com (Edward Hoffman) Date: Tue, 2 Jan 2024 08:01:00 -0800 Subject: Windows ARM64 In-Reply-To: References: Message-ID: <4CEAAD38-F54C-494B-8C10-86B4731634DA@gmail.com> Only if all source code is published. > On Jan 2, 2024, at 3:01 AM, Anthony Roberts wrote: > >  > Hello, > > A small introduction - I work on Linaro's Windows on Arm enablement team, and we work on porting/enabling various open-source projects for the platform. > > We have recently done a small investigation, and it turns out nginx can be compiled OOB on Windows ARM64 platforms with VS2022 - an example run from our internal nightlies can be seen here: https://gitlab.com/Linaro/windowsonarm/packages/nginx/-/jobs/5742208111 > > With the advent of things like Microsoft's Azure Windows ARM64 instances[0], and various client devices, it is a growing platform. Our partners (Microsoft and Qualcomm) would be interested in seeing a release! > > Is an official Windows ARM64 build something you have considered? Would you consider it? > > Thanks, > Anthony > > [0]: https://azure.microsoft.com/en-us/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From drodriguez at unau.edu.ar Tue Jan 2 21:00:30 2024 From: drodriguez at unau.edu.ar (Daniel A. 
Rodriguez) Date: Tue, 2 Jan 2024 18:00:30 -0300 Subject: Wrong content served In-Reply-To: References: <20231226235922.GH6038@daoine.org> Message-ID: Hi both Francis and Jake. Sorry for the late response This is the content of such file # cat /etc/nginx/snippets/location-letsencrypt.conf location ^~ /.well-known/acme-challenge/ {     alias /var/www/le_root/.well-known/acme-challenge/; } and the directory exists # ls -alh /var/www/le_root/.well-known/acme-challenge/ total 28K drwxr-xr-x 2 root root 4,0K ene  2 00:14 . drwxr-xr-x 3 root root 4,0K sep  1  2021 .. -rw-r--r-- 1 root root   87 sep  2  2021 9nxS2wAszlGI -rw-r--r-- 1 root root   87 sep  9  2021 AEzjuq9P8yXQ -rw-r--r-- 1 root root   87 sep  9  2021 TPlVMnrhufmE -rw-r--r-- 1 root root   87 oct 14  2021 YbHZSf8CqW40 -rw-r--r-- 1 root root   87 sep  9  2021 ZHFolsWkDv90 and what curl returns # curl -i http://material.av.unau.edu.ar/ HTTP/1.1 200 OK Date: Tue, 02 Jan 2024 20:44:45 GMT Server: Apache/2.4.58 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate Pragma: no-cache Set-Cookie: PHPSESSID=cfj2h18l4u9j99o6pa4k77eaff; path=/ Vary: Accept-Encoding Transfer-Encoding: chunked Content-Type: text/html; charset=UTF-8     Oficina virtual - UNAU   Such content is from another host: oficinavirtual.unau.edu.ar. Which is working as expected in its own domain. I use acme.sh script to deploy SSL certificates. El 26/12/23 a las 21:15, Jeff Dyke escribió: > In addition to Francis' always helpful ask.  You have a domain problem > with material.av.domain and it may be > from /etc/hosts all the way to public DNS. Or, incorrectly supplied > *location-letsencrypt.conf.* > > If you provide that file contents, you'll likely see your own error as > you send it (i've done it dozens of times, its not an insult) > > > > On Tue, Dec 26, 2023 at 6:59 PM Francis Daly wrote: > > On Tue, Dec 26, 2023 at 07:57:41PM -0300, Daniel A. Rodriguez wrote: > > Hi there, > > > This behavior is driving me crazy. Currently have more than 30 > sites behind > > this reverse proxy, but the latest is refusing to work. > > Can you provide more details? > > > Config is simple and pretty similar between them all. > > "include" means "anything in that file is effectively in this > config". Nobody but you knows what is in that file. > > > server { > >     listen 80; > >     server_name material.av.domain; > > > >     include /etc/nginx/snippets/location-letsencrypt.conf; > > > > #    return 301 https://$server_name$request_uri; > > > > } > > Your test request is: > > $ curl -i http://material.av.domain/ > > What response do you get? What response do you want to get instead? > > The "return" is commented out, so unless there is something surprising > in the location-letsencrypt.conf file, I would expect a http 200 > response > with the content of "the default" index.html file. > > > If I point the browser to material.av.domain got redirected to > another > > sub-domain, among the 30 mentioned before. However, everything > else works > > just fine. > > Can you show the response to the "curl" request, to see whether > "redirect" > is a http 301 from the web server, or is something like a http 200 > from > the web server with maybe some javascript content that redirects to > "the wrong" place? 
> > Cheers, > >         f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx --
________________________________________________
Daniel A. Rodriguez
Informática, Conectividad y Sistemas
Universidad Nacional del Alto Uruguay
San Vicente - Misiones - Argentina
informatica.unau.edu.ar
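One way to narrow this down, building on the curl test above, is to separate
"DNS points somewhere else" from "nginx picked a different server{}": check
where the name resolves, then send the request straight to the nginx proxy's
address while forcing the Host header. (NGINX_PROXY_IP below is a placeholder
for the proxy's actual address, not a value taken from this thread.)

    # Where does the name actually resolve?
    dig +short material.av.unau.edu.ar

    # Bypass DNS and ask this nginx instance directly for that virtual host
    curl -i -H "Host: material.av.unau.edu.ar" http://NGINX_PROXY_IP/

If the second request still returns "Server: Apache" and the Oficina virtual
page, the request is reaching nginx and being routed to the wrong server{}
(typically the default_server for that listen socket); if it answers
differently, the public DNS record is the first thing to check.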
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jan 2 21:50:39 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Jan 2024 00:50:39 +0300 Subject: Windows ARM64 In-Reply-To: References: Message-ID: Hello! On Tue, Jan 02, 2024 at 11:03:03AM +0000, Anthony Roberts wrote: > A small introduction - I work on Linaro's Windows on Arm enablement team, > and we work on porting/enabling various open-source projects for the > platform. > > We have recently done a small investigation, and it turns out nginx can be > compiled OOB on Windows ARM64 platforms with VS2022 - an example run from > our internal nightlies can be seen here: > https://gitlab.com/Linaro/windowsonarm/packages/nginx/-/jobs/5742208111 Yep, there shouldn't be any problems with building, at least when building nginx itself and/or when building OpenSSL with "no-asm". In more sophisticated cases, some adjustment might be needed, see https://hg.nginx.org/nginx/rev/3c4d81ea1338 for an example. If you'll find any issues and/or need any help, don't hesitate to write here or in the nginx-devel@ mailing list. > With the advent of things like Microsoft's Azure Windows ARM64 > instances[0], and various client devices, it is a growing platform. Our > partners (Microsoft and Qualcomm) would be interested in seeing a release! > > Is an official Windows ARM64 build something you have considered? Would you > consider it? As of now, there are no plans to publish additional official nginx for Windows builds. Note well that nginx for Windows is in beta and unlikely to be considered production ready in the foreseeable future (https://nginx.org/en/docs/windows.html). Its main purpose is to facilitate web development directly on Windows devices. -- Maxim Dounin http://mdounin.ru/ From ehoffman333 at gmail.com Wed Jan 3 00:30:32 2024 From: ehoffman333 at gmail.com (Edward Hoffman) Date: Tue, 2 Jan 2024 16:30:32 -0800 Subject: Windows ARM64 In-Reply-To: References: Message-ID: <3B07C870-3EB0-4C91-B7FE-3AB8DD0FDF67@gmail.com> Sorry. I misinterpreted the announcement. Is Windows ARM64 open source? > On Jan 2, 2024, at 1:51 PM, Maxim Dounin wrote: > > Hello! > >> On Tue, Jan 02, 2024 at 11:03:03AM +0000, Anthony Roberts wrote: >> >> A small introduction - I work on Linaro's Windows on Arm enablement team, >> and we work on porting/enabling various open-source projects for the >> platform. >> >> We have recently done a small investigation, and it turns out nginx can be >> compiled OOB on Windows ARM64 platforms with VS2022 - an example run from >> our internal nightlies can be seen here: >> https://gitlab.com/Linaro/windowsonarm/packages/nginx/-/jobs/5742208111 > > Yep, there shouldn't be any problems with building, at least when > building nginx itself and/or when building OpenSSL with "no-asm". > In more sophisticated cases, some adjustment might be needed, see > https://hg.nginx.org/nginx/rev/3c4d81ea1338 for an example. > > If you'll find any issues and/or need any help, don't hesitate to > write here or in the nginx-devel@ mailing list. > >> With the advent of things like Microsoft's Azure Windows ARM64 >> instances[0], and various client devices, it is a growing platform. Our >> partners (Microsoft and Qualcomm) would be interested in seeing a release! >> >> Is an official Windows ARM64 build something you have considered? Would you >> consider it? > > As of now, there are no plans to publish additional official nginx > for Windows builds. 
> > Note well that nginx for Windows is in beta and unlikely to be > considered production ready in the foreseeable future > (https://nginx.org/en/docs/windows.html). Its main purpose is to > facilitate web development directly on Windows devices. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From venefax at gmail.com Thu Jan 4 09:45:10 2024 From: venefax at gmail.com (Saint Michael) Date: Thu, 4 Jan 2024 04:45:10 -0500 Subject: Question Message-ID: How do I configure a /location { } so anybody can upload a file that goes to a directory on the server? I would use curl, for an API From vukomir at ianculov.ro Thu Jan 4 12:14:54 2024 From: vukomir at ianculov.ro (Vucomir Ianculov) Date: Thu, 4 Jan 2024 14:14:54 +0200 Subject: Question In-Reply-To: References: Message-ID: you can try to use ngx_http_dav_module On Thu, Jan 4, 2024 at 11:45 AM Saint Michael wrote: > How do I configure a /location { > > } > so anybody can upload a file that goes to a directory on the server? > I would use curl, for an API > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anthony.roberts at linaro.org Thu Jan 4 12:40:33 2024 From: anthony.roberts at linaro.org (Anthony Roberts) Date: Thu, 4 Jan 2024 12:40:33 +0000 Subject: Windows ARM64 In-Reply-To: <3B07C870-3EB0-4C91-B7FE-3AB8DD0FDF67@gmail.com> References: <3B07C870-3EB0-4C91-B7FE-3AB8DD0FDF67@gmail.com> Message-ID: Hello, Thanks Maxim for letting me know about nginx for Windows - I will feed it back that it's not production ready. For development, I suspect emulated x64 will suffice on Win11 ARM64 machines. Edward - I'm unsure of your question? Windows is a proprietary and closed-source OS, and has been since 1985, but as per my first email nginx (the subject of this mailing list, and which is open-source) can be compiled from source successfully OOB for Windows ARM64 targets, producing a native binary. Thanks, Anthony On Wed, 3 Jan 2024 at 00:31, Edward Hoffman wrote: > Sorry. I misinterpreted the announcement. Is Windows ARM64 open source? > > > On Jan 2, 2024, at 1:51 PM, Maxim Dounin wrote: > > > > Hello! > > > >> On Tue, Jan 02, 2024 at 11:03:03AM +0000, Anthony Roberts wrote: > >> > >> A small introduction - I work on Linaro's Windows on Arm enablement > team, > >> and we work on porting/enabling various open-source projects for the > >> platform. > >> > >> We have recently done a small investigation, and it turns out nginx can > be > >> compiled OOB on Windows ARM64 platforms with VS2022 - an example run > from > >> our internal nightlies can be seen here: > >> https://gitlab.com/Linaro/windowsonarm/packages/nginx/-/jobs/5742208111 > > > > Yep, there shouldn't be any problems with building, at least when > > building nginx itself and/or when building OpenSSL with "no-asm". > > In more sophisticated cases, some adjustment might be needed, see > > https://hg.nginx.org/nginx/rev/3c4d81ea1338 for an example. > > > > If you'll find any issues and/or need any help, don't hesitate to > > write here or in the nginx-devel@ mailing list. > > > >> With the advent of things like Microsoft's Azure Windows ARM64 > >> instances[0], and various client devices, it is a growing platform. 
Our > >> partners (Microsoft and Qualcomm) would be interested in seeing a > release! > >> > >> Is an official Windows ARM64 build something you have considered? Would > you > >> consider it? > > > > As of now, there are no plans to publish additional official nginx > > for Windows builds. > > > > Note well that nginx for Windows is in beta and unlikely to be > > considered production ready in the foreseeable future > > (https://nginx.org/en/docs/windows.html). Its main purpose is to > > facilitate web development directly on Windows devices. > > > > -- > > Maxim Dounin > > http://mdounin.ru/ > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx.list at allycomm.com Sat Jan 6 19:03:47 2024 From: nginx.list at allycomm.com (Jeff Kletsky) Date: Sat, 6 Jan 2024 11:03:47 -0800 Subject: IMAP Proxy with TLS Upstream Configuration Message-ID: <4bd0bb12-25ac-477d-8aba-90a9127fae6e@allycomm.com> I believe I have properly configured nginx v1.24.0 (open source) for IMAP proxy on FreeBSD 14.0. I am, however, unable to establish a TLS connection to the upstream server. I have confirmed that I can connect to the proxy with TLS and that the auth server is called. The auth server returns the expected Auth-Server and Auth-Port. The upstream server is on a remote host with Dovecot running TLS on the standard port of 993. I can see the TCP handshake between the proxy and Dovecot on both machines, but nginx does not proceed. It eventually returns "* BAD internal server error" with the error log indicating a timeout 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http process status line 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http process headers 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http header: "Server: nginx/1.24.0" 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http header: "Date: Sat, 06 Jan 2024 18:54:33 GMT" 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http header: "Connection: close" 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http header: "Auth-Status: OK" 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http header: "Auth-Server: 2601:aaaa:bbbb:cccc::1234" 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http header: "Auth-Port: 993" 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail auth http header done 2024/01/06 10:54:33 [debug] 6217#100294: *1 event timer del: 11: 43974303 2024/01/06 10:54:33 [debug] 6217#100294: *1 reusable connection: 0 2024/01/06 10:54:33 [debug] 6217#100294: *1 free: 0000167258040800, unused: 64 2024/01/06 10:54:33 [debug] 6217#100294: *1 posix_memalign: 0000167258041100:256 @16 2024/01/06 10:54:33 [debug] 6217#100294: *1 stream socket 11 2024/01/06 10:54:33 [debug] 6217#100294: *1 connect to 2601:aaaa:bbbb:cccc::1234:993, fd:11 #4 2024/01/06 10:54:33 [debug] 6217#100294: *1 kevent set event: 11: ft:-1 fl:0025 2024/01/06 10:54:33 [debug] 6217#100294: *1 kevent set event: 11: ft:-2 fl:0025 2024/01/06 10:54:33 [debug] 6217#100294: *1 event timer add: 11: 60000:43974303 2024/01/06 10:54:33 [debug] 6217#100294: *1 posix_memalign: 0000167258041200:256 @16 2024/01/06 10:54:33 [debug] 6217#100294: *1 malloc: 0000167258049000:4096 2024/01/06 10:54:33 [debug] 6217#100294: *1 mail proxy 
write handler 2024/01/06 10:55:33 [debug] 6217#100294: *1 event timer del: 11: 43974303 2024/01/06 10:55:33 [debug] 6217#100294: *1 mail proxy imap auth handler 2024/01/06 10:55:33 [info] 6217#100294: *1 upstream timed out (60: Operation timed out) while connecting to upstream I have confirmed using openssl s_client that the connection can be made from the host running nginx to the host at the expected IP address and port. Looking at the source, I did not see an option in the auth-header parsing related to using TLS upstream. Is there a way to use TLS for the IMAP upstream natively (without needing to configure a port with STARTTLS)? TIA, Jeff mail {     error_log /var/log/nginx/error.log debug;     ssl_certificate path/to/fullchain.pem;     ssl_certificate_key path/to/privkey.pem;     ssl_session_timeout 1d;     ssl_session_cache shared:MozSSL:1m;  # about 4000 sessions     ssl_session_tickets off;     # modern configuration     ssl_protocols TLSv1.3;     ssl_prefer_server_ciphers off;     # verify chain of trust of OCSP response using Root CA and Intermediate certs     ssl_trusted_certificate path/to/fullchain.pem;     # replace with the IP address of your resolver     resolver [::1] 127.0.0.1;     proxy_pass_error_message on;     server {         server_name     proxy-name.allycomm.com;         listen  993 ssl;         listen  [::]:993 ssl;         protocol imap;         auth_http       [::1]:/;         # From Dovecot (2024-01-04)         imap_capabilities IMAP4rev1 SASL-IR LOGIN-REFERRALS ID ENABLE IDLE LITERAL+ AUTH=PLAIN;     } } From mdounin at mdounin.ru Sun Jan 7 02:29:04 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 7 Jan 2024 05:29:04 +0300 Subject: IMAP Proxy with TLS Upstream Configuration In-Reply-To: <4bd0bb12-25ac-477d-8aba-90a9127fae6e@allycomm.com> References: <4bd0bb12-25ac-477d-8aba-90a9127fae6e@allycomm.com> Message-ID: Hello! On Sat, Jan 06, 2024 at 11:03:47AM -0800, Jeff Kletsky wrote: > I believe I have properly configured nginx v1.24.0 (open source) for > IMAP proxy on FreeBSD 14.0. I am, however, unable to establish a TLS > connection to the upstream server. > > I have confirmed that I can connect to the proxy with TLS and that the > auth server is called. The auth server returns the expected Auth-Server > and Auth-Port. The upstream server is on a remote host with Dovecot > running TLS on the standard port of 993. I can see the TCP handshake > between the proxy and Dovecot on both machines, but nginx does not proceed. > > It eventually returns "* BAD internal server error" with the error log > indicating a timeout [...] > I have confirmed using openssl s_client that the connection can be made > from the host running nginx to the host at the expected IP address and port. > > Looking at the source, I did not see an option in the auth-header > parsing related to using TLS upstream. > > Is there a way to use TLS for the IMAP upstream natively (without > needing to configure a port with STARTTLS)? Backend IMAP servers are expected to be plain text, not SSL/TLS. Neither IMAPS nor IMAP with STARTTLS are supported for upstream connections. If you want to use SSL/TLS connections between nginx and backend servers, consider configuring stream{} proxying on the same nginx instance with "proxy_ssl on;" to handle SSL/TLS with the backend servers for you, see http://nginx.org/r/proxy_ssl for details. 
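To make that concrete, here is a minimal sketch of the arrangement described
above. The local 1143 port, the proxy_ssl_name value and the path/to/ca.pem
placeholder are illustrative assumptions, not values from this thread; the
backend address is the one from the debug log.

stream {
    # Plain-text IMAP listener on the loopback for the mail proxy to use;
    # this block adds TLS towards the Dovecot backend.
    server {
        listen 127.0.0.1:1143;

        proxy_pass [2601:aaaa:bbbb:cccc::1234]:993;
        proxy_ssl on;
        proxy_ssl_server_name on;                      # send SNI to the backend
        proxy_ssl_name imap.backend.example;           # hypothetical certificate name
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate path/to/ca.pem;
    }
}

The auth_http script would then answer with Auth-Server: 127.0.0.1 and
Auth-Port: 1143, so that the mail{} proxy speaks plain IMAP locally while the
stream{} block handles the TLS leg to Dovecot.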
-- Maxim Dounin http://mdounin.ru/ From venefax at gmail.com Mon Jan 8 02:41:33 2024 From: venefax at gmail.com (Saint Michael) Date: Sun, 7 Jan 2024 21:41:33 -0500 Subject: Bug in handling POST then sending a file back Message-ID: I am using openresty and nginx. I send a file to the server, which is a POST operation. Then the server processes the file and needs to send back a different file. I try to send a file with ng.exec("/static/file_name") and I get error 405 Not Allowed. But if I do a 302 redirect, it works. I imagine that Nginx feels bad about sending a file in a POST operation, but http does not have such a limitation. Is there a workaround for this? From jamesread5737 at gmail.com Mon Jan 8 13:11:21 2024 From: jamesread5737 at gmail.com (James Read) Date: Mon, 8 Jan 2024 08:11:21 -0500 Subject: Nginx serving wrong site Message-ID: My nginx server is serving the wrong site. I found this explanation online https://www.computerworld.com/article/2987967/why-your-nginx-server-is-responding-with-content-from-the-wrong-site.html However this explanation doesn't seem to fit my case as I have a location which nginx should match correctly. Is there any other reason why nginx would serve the wrong site? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Jan 8 13:20:22 2024 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jan 2024 13:20:22 +0000 Subject: Nginx serving wrong site In-Reply-To: References: Message-ID: <20240108132022.GI6038@daoine.org> On Mon, Jan 08, 2024 at 08:11:21AM -0500, James Read wrote: Hi there, > My nginx server is serving the wrong site. I found this explanation online > https://www.computerworld.com/article/2987967/why-your-nginx-server-is-responding-with-content-from-the-wrong-site.html > However this explanation doesn't seem to fit my case as I have a location > which nginx should match correctly. Is there any other reason why nginx > would serve the wrong site? It pretty much always is because what you think you have told nginx to do is not what you have actually told nginx to do. (The other occasions are usually when your browser is not talking to the nginx that you think it is talking to.) To a first approximation: when a request comes to nginx, it first chooses which server{} to handle the request in, then chooses which location{} within that server{} to handle the request in. Can you show a configuration and a request that is handled in a different location{} from what you want? Thanks, f -- Francis Daly francis at daoine.org From jamesread5737 at gmail.com Mon Jan 8 14:13:38 2024 From: jamesread5737 at gmail.com (James Read) Date: Mon, 8 Jan 2024 09:13:38 -0500 Subject: Nginx serving wrong site In-Reply-To: <20240108132022.GI6038@daoine.org> References: <20240108132022.GI6038@daoine.org> Message-ID: I literally copied a working configuration. The only changes I made were the name of the server and the root to find the files to be served. On Mon, 8 Jan 2024, 08:20 Francis Daly, wrote: > On Mon, Jan 08, 2024 at 08:11:21AM -0500, James Read wrote: > > Hi there, > > > My nginx server is serving the wrong site. I found this explanation > online > > > https://www.computerworld.com/article/2987967/why-your-nginx-server-is-responding-with-content-from-the-wrong-site.html > > However this explanation doesn't seem to fit my case as I have a location > > which nginx should match correctly. Is there any other reason why nginx > > would serve the wrong site? 
> > It pretty much always is because what you think you have told nginx to > do is not what you have actually told nginx to do. > > (The other occasions are usually when your browser is not talking to the > nginx that you think it is talking to.) > > To a first approximation: when a request comes to nginx, it first chooses > which server{} to handle the request in, then chooses which location{} > within that server{} to handle the request in. > > Can you show a configuration and a request that is handled in a different > location{} from what you want? > > Thanks, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Jan 8 14:28:54 2024 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jan 2024 14:28:54 +0000 Subject: Nginx serving wrong site In-Reply-To: References: <20240108132022.GI6038@daoine.org> Message-ID: <20240108142854.GJ6038@daoine.org> On Mon, Jan 08, 2024 at 09:13:38AM -0500, James Read wrote: Hi there, > I literally copied a working configuration. The only changes I made were > the name of the server and the root to find the files to be served. If you're not going to show a configuration, then anyone who might be able to help will be reduced to guessing. So I'm going to guess that your "server_name" line is of the form "www.example.com"; and your browser is instead accessing http://example.com; and nginx is returning the content of the default_server for that ip:port instead of this server. https://nginx.org/en/docs/http/request_processing.html Cheers, f -- Francis Daly francis at daoine.org From jamesread5737 at gmail.com Mon Jan 8 14:49:23 2024 From: jamesread5737 at gmail.com (James Read) Date: Mon, 8 Jan 2024 09:49:23 -0500 Subject: Nginx serving wrong site In-Reply-To: <20240108142854.GJ6038@daoine.org> References: <20240108132022.GI6038@daoine.org> <20240108142854.GJ6038@daoine.org> Message-ID: On Mon, 8 Jan 2024, 09:29 Francis Daly, wrote: > On Mon, Jan 08, 2024 at 09:13:38AM -0500, James Read wrote: > > Hi there, > > > I literally copied a working configuration. The only changes I made were > > the name of the server and the root to find the files to be served. > > If you're not going to show a configuration, then anyone who might be > able to help will be reduced to guessing. > > So I'm going to guess that your "server_name" line is of the > form "www.example.com"; and your browser is instead accessing > http://example.com; and nginx is returning the content of the > default_server for that ip:port instead of this server. > My server_name is of the form "example.com www.example.com;" so I don't think that is the problem. Could this be anything to do with dns configuration? > > https://nginx.org/en/docs/http/request_processing.html > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Mon Jan 8 15:03:48 2024 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Jan 2024 15:03:48 +0000 Subject: Nginx serving wrong site In-Reply-To: References: <20240108132022.GI6038@daoine.org> <20240108142854.GJ6038@daoine.org> Message-ID: <20240108150348.GK6038@daoine.org> On Mon, Jan 08, 2024 at 09:49:23AM -0500, James Read wrote: > On Mon, 8 Jan 2024, 09:29 Francis Daly, wrote: > > On Mon, Jan 08, 2024 at 09:13:38AM -0500, James Read wrote: Hi there, > > So I'm going to guess that your "server_name" line is of the > > form "www.example.com"; and your browser is instead accessing > > http://example.com; and nginx is returning the content of the > > default_server for that ip:port instead of this server. > > My server_name is of the form "example.com www.example.com;" so I don't > think that is the problem. Could this be anything to do with dns > configuration? Do your nginx logs indicate that the request is being handled by this nginx instance at all? If not, maybe DNS is not causing your browser to talk to this server's IP address. Do you have any "listen" directives that include specific IP addresses, instead of just ports? Does your example.com resolve to the address of the "listen" in this "server{}"; or to the address of the "listen" in whichever "server{}" is actually being used; or to a different address? Cheers, f -- Francis Daly francis at daoine.org From jamesread5737 at gmail.com Mon Jan 8 16:34:37 2024 From: jamesread5737 at gmail.com (James Read) Date: Mon, 8 Jan 2024 11:34:37 -0500 Subject: Nginx serving wrong site In-Reply-To: <20240108150348.GK6038@daoine.org> References: <20240108132022.GI6038@daoine.org> <20240108142854.GJ6038@daoine.org> <20240108150348.GK6038@daoine.org> Message-ID: On Mon, 8 Jan 2024, 10:04 Francis Daly, wrote: > On Mon, Jan 08, 2024 at 09:49:23AM -0500, James Read wrote: > > On Mon, 8 Jan 2024, 09:29 Francis Daly, wrote: > > > On Mon, Jan 08, 2024 at 09:13:38AM -0500, James Read wrote: > > Hi there, > > > > So I'm going to guess that your "server_name" line is of the > > > form "www.example.com"; and your browser is instead accessing > > > http://example.com; and nginx is returning the content of the > > > default_server for that ip:port instead of this server. > > > > My server_name is of the form "example.com www.example.com;" so I don't > > think that is the problem. Could this be anything to do with dns > > configuration? > > Do your nginx logs indicate that the request is being handled by this > nginx instance at all? > > If not, maybe DNS is not causing your browser to talk to this server's > IP address. > > The logs look fine. They are showing the requests. > > > Do you have any "listen" directives that include specific IP addresses, > instead of just ports? Does your example.com resolve to the address > of the "listen" in this "server{}"; or to the address of the "listen" > in whichever "server{}" is actually being used; or to a different address? 
> server { listen 80; listen [::]:80; root /var/www/moshiim.it; index index.php index.html index.htm index.nginx-debian.html; server_name example.com www.example.com; location / { try_files $uri $uri/ =404; } location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/var/run/php/php7.4-fpm.sock; } } > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesread5737 at gmail.com Mon Jan 8 16:48:13 2024 From: jamesread5737 at gmail.com (James Read) Date: Mon, 8 Jan 2024 11:48:13 -0500 Subject: Nginx serving wrong site In-Reply-To: References: <20240108132022.GI6038@daoine.org> <20240108142854.GJ6038@daoine.org> <20240108150348.GK6038@daoine.org> Message-ID: On Mon, 8 Jan 2024, 11:34 James Read, wrote: > > > On Mon, 8 Jan 2024, 10:04 Francis Daly, wrote: > >> On Mon, Jan 08, 2024 at 09:49:23AM -0500, James Read wrote: >> > On Mon, 8 Jan 2024, 09:29 Francis Daly, wrote: >> > > On Mon, Jan 08, 2024 at 09:13:38AM -0500, James Read wrote: >> >> Hi there, >> >> > > So I'm going to guess that your "server_name" line is of the >> > > form "www.example.com"; and your browser is instead accessing >> > > http://example.com; and nginx is returning the content of the >> > > default_server for that ip:port instead of this server. >> > >> > My server_name is of the form "example.com www.example.com;" so I don't >> > think that is the problem. Could this be anything to do with dns >> > configuration? >> >> Do your nginx logs indicate that the request is being handled by this >> nginx instance at all? >> >> If not, maybe DNS is not causing your browser to talk to this server's >> IP address. >> >> The logs look fine. They are showing the requests. >> >> >> Do you have any "listen" directives that include specific IP addresses, >> instead of just ports? Does your example.com resolve to the address >> of the "listen" in this "server{}"; or to the address of the "listen" >> in whichever "server{}" is actually being used; or to a different address? >> > > server { > listen 80; > listen [::]:80; > > root /var/www/moshiim.it; > > index index.php index.html index.htm index.nginx-debian.html; > server_name example.com www.example.com; > location / { > > try_files $uri $uri/ =404; > } > > location ~ \.php$ { > include snippets/fastcgi-php.conf; > fastcgi_pass unix:/var/run/php/php7.4-fpm.sock; > } > > } > OK this is a browser issue and not a nginx issue. I just accessed the site with lynx and it is showing the right site. However with Chrome it is showing the wrong site. This may have something to do with the fact that I had to clear the HSTS cache in the browser in order to be able to see anything. The domain used to have a SSL certificate and Chrome was refusing display because it detected the site had no SSL certificate. I need to figure out how to get Chrome to behave normally. > >> Cheers, >> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> https://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jamesread5737 at gmail.com Mon Jan 8 19:22:14 2024 From: jamesread5737 at gmail.com (James Read) Date: Mon, 8 Jan 2024 14:22:14 -0500 Subject: Custom redirect for one page from https to http with different name. Message-ID: Hi, how would I redirect https://example.com/oldname.php to http://example.com/newname.php Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 8 20:49:54 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Jan 2024 23:49:54 +0300 Subject: Bug in handling POST then sending a file back In-Reply-To: References: Message-ID: Hello! On Sun, Jan 07, 2024 at 09:41:33PM -0500, Saint Michael wrote: > I am using openresty and nginx. > I send a file to the server, which is a POST operation. Then the > server processes the file and needs to send back a different file. I > try to send a file with ng.exec("/static/file_name") and I get error > 405 Not Allowed. > But if I do a 302 redirect, it works. > I imagine that Nginx feels bad about sending a file in a POST > operation, but http does not have such a limitation. > Is there a workaround for this? As far as I can see from the Lua module docs, ngx.exec() you are using in your script does an internal redirect. As the result, nginx ends up with a POST request to a static file, which is not something nginx can handle: it does not know what to do with data POSTed to a static file, hence the error. If you've already processed POSTed data, and want to show some static message to a client, consider returning a redirect to the static file to the user, such as 303 (See Other), see https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/303 for a good description. If you are sure you want to return the file as a response to the POST request itself (this is generally a bad practice, since it will break page refresh and browser history navigation), consider returning the file directly from your script instead of trying to do an internal redirect. -- Maxim Dounin http://mdounin.ru/ From venefax at gmail.com Mon Jan 8 21:52:05 2024 From: venefax at gmail.com (Saint Michael) Date: Mon, 8 Jan 2024 16:52:05 -0500 Subject: Bug in handling POST then sending a file back In-Reply-To: References: Message-ID: This is for an API, so no browsers are or will be involved. So, I should print binary information from my LUA script? A second question, can I offload receiving a very large zip file to NGINX? I just need to know in my LUA script when it has fully arrived and the name assigned to it. Many thanks for your help. The Openresty Slack app hasn't helped a bit. On Mon, Jan 8, 2024 at 3:50 PM Maxim Dounin wrote: > > Hello! > > On Sun, Jan 07, 2024 at 09:41:33PM -0500, Saint Michael wrote: > > > I am using openresty and nginx. > > I send a file to the server, which is a POST operation. Then the > > server processes the file and needs to send back a different file. I > > try to send a file with ng.exec("/static/file_name") and I get error > > 405 Not Allowed. > > But if I do a 302 redirect, it works. > > I imagine that Nginx feels bad about sending a file in a POST > > operation, but http does not have such a limitation. > > Is there a workaround for this? > > As far as I can see from the Lua module docs, ngx.exec() you are > using in your script does an internal redirect. As the result, > nginx ends up with a POST request to a static file, which is not > something nginx can handle: it does not know what to do with data > POSTed to a static file, hence the error. 
> > If you've already processed POSTed data, and want to show some > static message to a client, consider returning a redirect to the > static file to the user, such as 303 (See Other), see > https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/303 for a > good description. > > If you are sure you want to return the file as a response to the > POST request itself (this is generally a bad practice, since it > will break page refresh and browser history navigation), consider > returning the file directly from your script instead of trying to > do an internal redirect. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Jan 9 09:33:32 2024 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Jan 2024 09:33:32 +0000 Subject: Nginx serving wrong site In-Reply-To: References: <20240108132022.GI6038@daoine.org> <20240108142854.GJ6038@daoine.org> <20240108150348.GK6038@daoine.org> Message-ID: <20240109093332.GL6038@daoine.org> On Mon, Jan 08, 2024 at 11:48:13AM -0500, James Read wrote: Hi there, > OK this is a browser issue and not a nginx issue. I just accessed the site > with lynx and it is showing the right site. However with Chrome it is > showing the wrong site. This may have something to do with the fact that I > had to clear the HSTS cache in the browser in order to be able to see > anything. Thanks for sharing the resolution with the list. It looks like this was a case where you wanted the browser to talk to your nginx on port 80; but the browser was instead talking to a thing on port 443. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jan 9 09:36:37 2024 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Jan 2024 09:36:37 +0000 Subject: Custom redirect for one page from https to http with different name. In-Reply-To: References: Message-ID: <20240109093637.GM6038@daoine.org> On Mon, Jan 08, 2024 at 02:22:14PM -0500, James Read wrote: Hi there, > how would I redirect https://example.com/oldname.php to > http://example.com/newname.php Within the https server{} block: location = /oldname.php { return 301 http://example.com/newname.php; } should do it. (Other 30x numbers can work too.) Cheers, f -- Francis Daly francis at daoine.org From wangjiahao at openresty.com Wed Jan 10 07:18:45 2024 From: wangjiahao at openresty.com (Jiahao Wang) Date: Wed, 10 Jan 2024 15:18:45 +0800 Subject: [ANN] OpenResty 1.25.3.1 released Message-ID: Hi folks, I am happy to announce the new formal release, 1.25.3.1, of our OpenResty web platform based on NGINX and LuaJIT. It is the first OpenResty version based on Nginx core 1.25.3. The full announcement, download links, and change logs can be found below: https://openresty.org/en/ann-1025003001.html You can download the software packages here: https://openresty.org/en/download.html We recently added official rpm package repository for Amazon Linux 2023: https://openresty.org/en/linux-packages.html OpenResty is a full-fledged web platform by bundling the standard Nginx core, LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: https://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. 
The latest test report can always be found here: https://qa.openresty.org/ Enjoy! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlybarger at gmail.com Thu Jan 11 19:00:37 2024 From: mlybarger at gmail.com (Mark Lybarger) Date: Thu, 11 Jan 2024 14:00:37 -0500 Subject: access logs to parquet Message-ID: hi, i'm using nginx as a proxy to api gateway / lambda services. each day, i get 500mb of gzipped access logs from 6 proxy servers. i want to load these nginx access logs into a data lake that takes parquet format as input. my question is fairly general, is there something that easily converts nginx access logs to parquet format given some conversion map? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx.list at melmac.space Mon Jan 15 17:52:14 2024 From: nginx.list at melmac.space (nginx.list at melmac.space) Date: Mon, 15 Jan 2024 18:52:14 +0100 Subject: proxy_protocol mixed address family Message-ID: <20240115175214.viemzwidbxjycior@leonard-wagner.org> Hey together, I would like to follow up on the Thread from October 2023 with the subject "proxy_protocol send incorrect header".[0] TL;DR: are there any plans to make it possible that the realip module can also change the destination address and not just the source address? Or to just not touch anything at all so that proxy_protocol stuff can traverse multiple layers with changing IP versions? I have the following Setup: IPv4: User --> 4to6 Proxy (Alpine Linux / nginx 1.24.x) --> SNI Proxy (Debian Bookworm / nginx 1.22.x) --> Mixed Downstream with Traefik or nginx IPv6: User --> SNI Proxy (Debian Bookworm / nginx 1.22.x) --> Mixed Downstream with Traefik or nginx So basically IPv6 is going directly to the Proxy and for v4 there is a quite simple configured nginx as 4to6 Proxy. Use case is to have IPv4 just at the edge on as few servers as possible. 4to6 and SNI Proxy both use the stream module(s) and just at the third layer the http logic kicks in. // My Problem exists just for the IPv4 way and is that the second layer SNI nginx, sends proxy-protocol stuff with v4 source and v6 destination address even though the INET Protocol is set to TCP4. Thats not a problem for nginx as it parses everything fine in the last/third layer. But for Traefik its a problem as it says it cannot parse the header and so the connection will be closed again. Also Wireshark says the packets are broken. One non feasible workaround could be to completely disable any logic in the second layer Proxy, like described in a blog[1] in section "Untrusted Redirector 2". So don't listen on proxy_protocol and don't send it. Problem with that is that it seems I'm not able to use ssl_preread anymore so there must be static proxy_passing. // Just as additional note, the point when it breaks is if "set_real_ip_from $TRUSTED_IP;" is set. Then the source address is replaced with the v4 address, but the destination address stays the v6 address between first and second layer proxy. So what to do? Quote from the linked thread[2]: > Currently the realip module only changes the client address > (c->sockaddr) and leaves the server address (c->local_sockaddr) > unchanged. > The behavior is the same for Stream and HTTP and is explained by the > fact that initially the module only supported HTTP fields like > X-Real-IP and X-Forwarded-For, which carry only client address. there seems to be no solution. Is there any plan for the future? 
And for the time beeing is there any other TCP Proxy where it is possible to transport the client and serveraddress through multiple layers with changing IP versions? Gordon (: [0]https://mailman.nginx.org/pipermail/nginx/2023-October/GYTVUIBJ65RJ3X4KDEPNVGXZ2S4STIVT.html [1]https://0xda.de/blog/2020/02/red-team-proxy-protocol-nginx/ [2]https://mailman.nginx.org/pipermail/nginx/2023-October/CKEFWBSQL46HJTHDOJVX6CNUYETKBE53.html From oasis at embracelabs.com Tue Jan 16 04:15:09 2024 From: oasis at embracelabs.com (=?UTF-8?B?67CV6rec7LKg?=) Date: Tue, 16 Jan 2024 13:15:09 +0900 Subject: This is a question about the "$status" log value when "proxy_read_timeout" occurs. Message-ID: Hello. This is a question about the "$status" log value when "proxy_read_timeout" occurs. Nginx version in use: v1.25.3 Contents of 1Mbyte size were requested to [Origin Server]. A response up to approximately 500Kbytes in size, including the header, was received without delay. However, after 500Kbytes, no response was received from Origin for 3 seconds and the connection (time-out) Since the message "upstream timed out...while reading upstream" was logged in the error log, I think the connection was lost due to the "proxy_read_timeout 3s" setting. While checking the log, I noticed that the "$status" value in the access log was different from what I thought. In my opinion, if the connection was terminated by "proxy_read_timeout", the "$status" value would be 5xx, but the "$status" value in the saved access log was 200. A normal response was not completed due to "proxy_read_timeout", so I would like to know why the "$status" value is stored as 200 instead of 5xx. Should I check a variable other than "$status" for responses to abnormal timeouts such as "proxy_read_timeout"? Any help is appreciated. Best regards, kyucheol ----- [ config ] log_format read_log '[$time_iso8601] ' '$request_time\t' '$host ' '$request_method ' '$request ' * '$status* ' '$upstream_status ' '$body_bytes_sent ' '$request_id'; server { listen 80; server_name test.read_timeout.com; access_log /etc/nginx/log/$server_name/access.log read_log; proxy_cache_valid 200 206 304 1d; proxy_connect_timeout 30s; proxy_read_timeout 3s; proxy_set_header Host testmedia.net; proxy_ignore_headers Cache-Control; location / { proxy_pass http://[Origin server]; } } [ curl request ] > GET /media3/testfile HTTP/1.1 > User-Agent: curl/7.29.0 > Accept: */* > Host: test.read_timeout.com > < HTTP/1.1 200 OK < Date: Wed, 10 Jan 2024 08:20:52 GMT < Content-Type: text/plain < Content-Length: 1048576 < Connection: keep-alive < Server: nginx < Last-Modified: Thu, 17 Aug 2023 12:43:31 GMT < ETag: "64de15f3-100000" < Age: 1 < vary: B < Accept-Ranges: bytes < { [data not shown] 46 1024k 46 479k 0 0 155k 0 0:00:06 0:00:03 0:00:03 155k* transfer closed with 557689 bytes remaining to read 46 1024k 46 479k 0 0 155k 0 0:00:06 0:00:03 0:00:03 155k * Closing connection 0 curl: (18) transfer closed with 557689 bytes remaining to read [ error log ] 2024/01/10 17:20:55 [error] 98285#98285: *1* upstream timed out (110: Connection timed out) while reading upstream*, client: 127.0.0.1, server: test.read_timeout.com, request: "GET /media3/testfile HTTP/1.1", ~~~~~~~ [ access log ] [2024-01-10T17:20:55+09:00] 3.080 test.read_timeout.com GET GET /media3/testfile HTTP/1.1 *200 *200 490887 45f01510d4ec56f01899c9dea81e7628 ----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Tue Jan 16 22:26:41 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Jan 2024 01:26:41 +0300 Subject: This is a question about the "$status" log value when "proxy_read_timeout" occurs. In-Reply-To: References: Message-ID: Hello! On Tue, Jan 16, 2024 at 01:15:09PM +0900, 박규철 wrote: > This is a question about the "$status" log value when "proxy_read_timeout" > occurs. > Nginx version in use: v1.25.3 > > Contents of 1Mbyte size were requested to [Origin Server]. > A response up to approximately 500Kbytes in size, including the header, was > received without delay. > However, after 500Kbytes, no response was received from Origin for 3 > seconds and the connection (time-out) > Since the message "upstream timed out...while reading upstream" was logged > in the error log, I think the connection was lost due to the > "proxy_read_timeout 3s" setting. > > While checking the log, I noticed that the "$status" value in the access > log was different from what I thought. > In my opinion, if the connection was terminated by "proxy_read_timeout", > the "$status" value would be 5xx, but the "$status" value in the saved > access log was 200. > > A normal response was not completed due to "proxy_read_timeout", so I would > like to know why the "$status" value is stored as 200 instead of 5xx. > Should I check a variable other than "$status" for responses to abnormal > timeouts such as "proxy_read_timeout"? The $status variable shows the status as sent to the client in the response headers. When proxy_read_timeout happens, the response headers are already sent, so $status contains 200 as sent to the client. For errors happened during sending the response body, consider looking into the error log. Some generic information about successful request completion might be found in the $request_completion variable (http://nginx.org/r/$request_completion). Note though that it might not be set for variety of reasons. -- Maxim Dounin http://mdounin.ru/ From oasis at embracelabs.com Wed Jan 17 02:41:22 2024 From: oasis at embracelabs.com (=?UTF-8?B?67CV6rec7LKg?=) Date: Wed, 17 Jan 2024 11:41:22 +0900 Subject: This is a question about the "$status" log value when "proxy_read_timeout" occurs. In-Reply-To: References: Message-ID: Hello. I came to understand why 200 is stored in the $status variable when proxy_read_timeout occurs. Thank you for the reply. 2024년 1월 17일 (수) 오전 7:27, Maxim Dounin 님이 작성: > Hello! > > On Tue, Jan 16, 2024 at 01:15:09PM +0900, 박규철 wrote: > > > This is a question about the "$status" log value when > "proxy_read_timeout" > > occurs. > > Nginx version in use: v1.25.3 > > > > Contents of 1Mbyte size were requested to [Origin Server]. > > A response up to approximately 500Kbytes in size, including the header, > was > > received without delay. > > However, after 500Kbytes, no response was received from Origin for 3 > > seconds and the connection (time-out) > > Since the message "upstream timed out...while reading upstream" was > logged > > in the error log, I think the connection was lost due to the > > "proxy_read_timeout 3s" setting. > > > > While checking the log, I noticed that the "$status" value in the access > > log was different from what I thought. > > In my opinion, if the connection was terminated by "proxy_read_timeout", > > the "$status" value would be 5xx, but the "$status" value in the saved > > access log was 200. 
> > > > A normal response was not completed due to "proxy_read_timeout", so I > would > > like to know why the "$status" value is stored as 200 instead of 5xx. > > Should I check a variable other than "$status" for responses to abnormal > > timeouts such as "proxy_read_timeout"? > > The $status variable shows the status as sent to the client in the > response headers. When proxy_read_timeout happens, the response > headers are already sent, so $status contains 200 as sent to the > client. > > For errors happened during sending the response body, consider > looking into the error log. Some generic information about > successful request completion might be found in the > $request_completion variable > (http://nginx.org/r/$request_completion). Note though that it > might not be set for variety of reasons. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lubos.pintes at seznam.cz Thu Jan 25 11:01:05 2024 From: lubos.pintes at seznam.cz (=?UTF-8?B?xL11Ym/FoSBQaW50ZcWh?=) Date: Thu, 25 Jan 2024 12:01:05 +0100 Subject: Configuration adjustment for GRPC service Message-ID: <644bf4a2-57eb-4bf0-bd91-bb21466a8fa6@seznam.cz> Hello, everybody, I am implementing a GRPC service which has methods, i.e. request/reply and one streaming method, through which the server sends events at random intervals. The GRPC server is written in Go, the client in C#, we are using Grpc.Core. If the server is not running and I call one of the request/reply methods, an error occurs as I expect. But if I call the streaming method, Nginx accepts the connection and the client gets stuck on calling await events.ResponseStream.MoveNext(...); I would like to ask how to configure Nginx so that an error occurs even if the streaming method is called if the server is not running, e.g. it restarts out of my control if the server on which my service is running restarts. Thank you From janderson at appinfoinc.com Thu Jan 25 13:01:24 2024 From: janderson at appinfoinc.com (Jason Anderson) Date: Thu, 25 Jan 2024 08:01:24 -0500 Subject: Configuration adjustment for GRPC service In-Reply-To: <644bf4a2-57eb-4bf0-bd91-bb21466a8fa6@seznam.cz> References: <644bf4a2-57eb-4bf0-bd91-bb21466a8fa6@seznam.cz> Message-ID: Have you tried configuring grpc timeouts on NGINX? This combined with an upstream healthcheck should prevent any client connections that aren't possible for NGINX to service. https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout https://docs.nginx.com/nginx/admin-guide/load-balancer/grpc-health-check/ Regards, Jason On Thu, Jan 25, 2024, 6:01 AM Ľuboš Pinteš wrote: > Hello, everybody, > I am implementing a GRPC service which has methods, i.e. request/reply > and one streaming method, through which the server sends events at > random intervals. > The GRPC server is written in Go, the client in C#, we are using Grpc.Core. > > If the server is not running and I call one of the request/reply > methods, an error occurs as I expect. But if I call the streaming > method, Nginx accepts the connection and the client gets stuck on > calling await events.ResponseStream.MoveNext(...); > > I would like to ask how to configure Nginx so that an error occurs even > if the streaming method is called if the server is not running, e.g. 
From lubos.pintes at seznam.cz Thu Jan 25 13:53:51 2024
From: lubos.pintes at seznam.cz (=?UTF-8?B?xL11Ym/FoSBQaW50ZcWh?=)
Date: Thu, 25 Jan 2024 14:53:51 +0100
Subject: Configuration adjustment for GRPC service
In-Reply-To: 
References: <644bf4a2-57eb-4bf0-bd91-bb21466a8fa6@seznam.cz>
Message-ID: 

Hello Jason, and thanks for your reply.

I am fairly new to this stuff.

Concerning health checks, does it matter if I have only one simple
server? So no load balancing etc.?

On 25 Jan 2024 at 14:01, Jason Anderson via nginx wrote:
> Have you tried configuring grpc timeouts on NGINX?
>
> This combined with an upstream healthcheck should prevent any client
> connections that aren't possible for NGINX to service.
>
> https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout
>
> https://docs.nginx.com/nginx/admin-guide/load-balancer/grpc-health-check/
>
>
> Regards,
>
> Jason
>
> On Thu, Jan 25, 2024, 6:01 AM Ľuboš Pinteš wrote:
>
> Hello, everybody,
> I am implementing a GRPC service which has methods, i.e. request/reply
> and one streaming method, through which the server sends events at
> random intervals.
> The GRPC server is written in Go, the client in C#, we are using Grpc.Core.
>
> If the server is not running and I call one of the request/reply
> methods, an error occurs as I expect. But if I call the streaming
> method, Nginx accepts the connection and the client gets stuck on
> calling await events.ResponseStream.MoveNext(...);
>
> I would like to ask how to configure Nginx so that an error occurs even
> if the streaming method is called if the server is not running, e.g. it
> restarts out of my control if the server on which my service is running
> restarts.
>
> Thank you
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx

From rakshith.2302 at gmail.com Sat Jan 27 10:25:42 2024
From: rakshith.2302 at gmail.com (Rakshith Kumar)
Date: Sat, 27 Jan 2024 15:55:42 +0530
Subject: Limit NGINX log size
Message-ID: 

Hello Team,

I would like to know how to limit the NGINX log size.
We would like to set a size limit for the Nginx log files on App Volumes
Manager, since they consume disk space over time. Can we add any
parameters to nginx.conf to limit or rotate the logs?

Location: ..\Program Files (x86)\CloudVolumes\Manager\nginx\logs

Ex: error_https.log, error.log, access.log files.

Regards,
Rakshith
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jordanc.carter at outlook.com Sun Jan 28 07:15:39 2024
From: jordanc.carter at outlook.com (J Carter)
Date: Sun, 28 Jan 2024 07:15:39 +0000
Subject: Limit NGINX log size
In-Reply-To: 
References: 
Message-ID: 

Hello,

On Sat, 27 Jan 2024 15:55:42 +0530
Rakshith Kumar wrote:

> Hello Team,
>
> I would like to know how to limit the NGINX log size.
> We would like to set a size limit for the Nginx log files on App Volumes
> Manager, since they consume disk space over time. Can we add any
> parameters to nginx.conf to limit or rotate the logs?
>
> Location: ..\Program Files (x86)\CloudVolumes\Manager\nginx\logs
>
> Ex: error_https.log, error.log, access.log files.
>
> Regards,
> Rakshith

Nginx does not have log rotation capabilities built in, nor can you
limit the size of logs.

The logrotate utility is used for this task on unix/unix-like
platforms. The best choice would be to either write your own utility
to do the rotation, or use a premade Windows-native utility.

Something like this PowerShell clone of logrotate might work well:
https://github.com/theohbrothers/Log-Rotate

It's necessary for nginx to reopen the logs after rotation; on Windows
I believe you'll need to use the CLI for that ('nginx -s reopen'), or
restart the service if you have it running as a service.

From jordanc.carter at outlook.com Sun Jan 28 07:44:26 2024
From: jordanc.carter at outlook.com (J Carter)
Date: Sun, 28 Jan 2024 07:44:26 +0000
Subject: Configuration adjustment for GRPC service
In-Reply-To: 
References: <644bf4a2-57eb-4bf0-bd91-bb21466a8fa6@seznam.cz>
Message-ID: 

Hello,

On Thu, 25 Jan 2024 14:53:51 +0100
Ľuboš Pinteš wrote:

> Hello Jason, and thanks for your reply.
>
> I am fairly new to this stuff.
>
> Concerning health checks, does it matter if I have only one simple
> server? So no load balancing etc.?
>

Just so you know, active health checks (on the docs.nginx.com admin
guide page) are only in NGINX Plus (commercial version of nginx). If
you're using OSS nginx you won't have active health checks (the
'health_check' directive).

> On 25 Jan 2024 at 14:01, Jason Anderson via nginx wrote:
> > Have you tried configuring grpc timeouts on NGINX?
> >
> > This combined with an upstream healthcheck should prevent any client
> > connections that aren't possible for NGINX to service.
> >
> > https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout
> >
> > https://docs.nginx.com/nginx/admin-guide/load-balancer/grpc-health-check/
> >
> >
> > Regards,
> >
> > Jason
> >
> > On Thu, Jan 25, 2024, 6:01 AM Ľuboš Pinteš wrote:
> >
> > Hello, everybody,
> > I am implementing a GRPC service which has methods, i.e. request/reply
> > and one streaming method, through which the server sends events at
> > random intervals.
> > The GRPC server is written in Go, the client in C#, we are using Grpc.Core.
> >
> > If the server is not running and I call one of the request/reply
> > methods, an error occurs as I expect. But if I call the streaming
> > method, Nginx accepts the connection and the client gets stuck on
> > calling await events.ResponseStream.MoveNext(...);
> >
> > I would like to ask how to configure Nginx so that an error occurs even
> > if the streaming method is called if the server is not running, e.g. it
> > restarts out of my control if the server on which my service is running
> > restarts.
> >
> > Thank you
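For the OSS case, the gRPC proxy timeouts that Jason pointed to can be set
directly on the proxied location. The listener port, upstream name and backend
address below are placeholders, and the values are only an illustration of the
directives, not a recommended configuration:

    # inside the http {} block
    upstream grpc_backend {
        server 127.0.0.1:50051;
    }

    server {
        listen 9000 http2;    # plaintext HTTP/2 (h2c) front end, for illustration

        location / {
            grpc_pass grpc://grpc_backend;

            # give up quickly when the backend is not accepting connections
            grpc_connect_timeout 1s;

            # close the stream if the backend sends nothing for this long;
            # for a streaming method this must exceed the largest expected
            # gap between events, or healthy idle streams get cut as well
            grpc_read_timeout 60s;
            grpc_send_timeout 10s;
        }
    }

Note that with only a single server in the upstream block the max_fails and
fail_timeout parameters are ignored, so in the one-backend case it is mainly the
connect and read timeouts that determine how quickly a client sees an error when
the backend is down.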
From bittnitt at gmail.com Tue Jan 30 07:36:49 2024
From: bittnitt at gmail.com (bittnitt at gmail.com)
Date: Tue, 30 Jan 2024 07:36:49 +0000 (UTC)
Subject: Managing Static Files
References: <116698181.1414026.1706600209716.ref@mail.yahoo.com>
Message-ID: <116698181.1414026.1706600209716@mail.yahoo.com>

Hi... I read a few articles about managing static files and I'm a bit
confused!
I use Nginx as the main server to host my website.
I enabled gzip and brotli.
I have also enabled gzip_static and brotli_static.
And I have pre-compressed all static files with gzip and brotli.
I read in an article that after compressing all files, I should delete
all uncompressed files to save memory and only gzip and Brotli files
remain.
(Of course, I need to create an empty file called index.html for it to
work properly.)
Everything works fine now, but my problem is when the browser doesn't
support compression and requires uncompressed files.
In another article it was written that if gunzip is enabled for browsers
that do not support the compressed format, it decompresses the gzip then
sends it to the client.
But after doing some testing, I found (I think) that gunzip only works
if nginx is used as the proxy (between main server and client) (due to
the Content-Encoding header requirement).
Now, if I want to support gzip, brotli and non-compressed files, do I
have to have all three types of files?
Is this method correct? What method do you use? What method is
suggested? Thanks
bitt-nitt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From clima.gabrielphoto at gmail.com Tue Jan 30 08:28:23 2024
From: clima.gabrielphoto at gmail.com (Clima Gabriel)
Date: Tue, 30 Jan 2024 10:28:23 +0200
Subject: ngx_http_find_virtual_server ngx_http_regex_exec DOS
Message-ID: 

Greetings fellow nginx-devs,

It looks to me as if an attacker could force the server to use up a large
amount of resources doing ngx_http_regex_exec if the server were to be
configured with a relatively large number of regex server_names.

I would appreciate any ideas on the topic, especially suggestions as to how
some form of caching could be implemented for the responses, so that the
server didn't have to execute the ngx_http_regex_exec on subsequent
requests.

2375     for (i = 0; i < virtual_names->nregex; i++) {
2376
2377         n = ngx_http_regex_exec(r, sn[i].regex, host);
2378
2379         if (n == NGX_DECLINED) {
2380             continue;
2381         }
2382
2383         if (n == NGX_OK) {
2384             *cscfp = sn[i].server;
2385             return NGX_OK;
2386         }
2387
2388         return NGX_ERROR;
2389     }

./src/http/ngx_http_request.c

Regards,
Gabriel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
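For a sense of where the cost comes from: according to the nginx documentation
for server_name, exact and wildcard names are matched via hash tables, while
regex names are tested one by one, in order of appearance, for every request
whose Host header misses those tables. A pair of placeholder virtual hosts
(names and the port are invented for illustration) contrasts the two styles:

    server {
        listen 8080;
        # checked only after the exact/wildcard hash lookups fail,
        # and re-evaluated with PCRE for every such request
        server_name ~^(?<acct>[a-z0-9-]+)\.example\.com$;
        return 200 "regex vhost: $acct";
    }

    server {
        listen 8080;
        # exact and wildcard names are resolved through hash lookups
        server_name example.com *.example.com;
        return 200 "hash-matched vhost";
    }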
From baalchina at gmail.com Tue Jan 30 14:03:53 2024
From: baalchina at gmail.com (baalchina)
Date: Tue, 30 Jan 2024 22:03:53 +0800
Subject: How can I sync nginx.conf in two keepalived server?
Message-ID: 

Hi, all. I have just deployed two nginx servers and made them highly
available using keepalived. I tested it, and HA works fine.

But I have some new questions here:
1st, when I edit nginx.conf on the master server, how do I transfer the
conf file to the backup server immediately?
2nd, after I edit it, I should run 'nginx -s reload' on the master
server; how can the backup server do the same without a manual run?
And last, if the config is not correct and, after the file is transferred
to the backup, the backup server cannot restart correctly, will the
backup go down?

Thanks.


-- 
from:baalchina
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From janderson at appinfoinc.com Tue Jan 30 14:12:01 2024
From: janderson at appinfoinc.com (Jason Anderson)
Date: Tue, 30 Jan 2024 09:12:01 -0500
Subject: How can I sync nginx.conf in two keepalived server?
In-Reply-To: 
References: 
Message-ID: 

This may help:
https://docs.nginx.com/nginx/admin-guide/high-availability/configuration-sharing/

Regards,

Jason

On Tue, Jan 30, 2024, 9:04 AM baalchina wrote:

> Hi, all. I have just deployed two nginx servers and made them highly
> available using keepalived. I tested it, and HA works fine.
>
> But I have some new questions here:
> 1st, when I edit nginx.conf on the master server, how do I transfer the
> conf file to the backup server immediately?
> 2nd, after I edit it, I should run 'nginx -s reload' on the master
> server; how can the backup server do the same without a manual run?
> And last, if the config is not correct and, after the file is transferred
> to the backup, the backup server cannot restart correctly, will the
> backup go down?
>
> Thanks.
>
>
> --
> from:baalchina
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Wed Jan 31 03:14:34 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 31 Jan 2024 06:14:34 +0300
Subject: Managing Static Files
In-Reply-To: <116698181.1414026.1706600209716@mail.yahoo.com>
References: <116698181.1414026.1706600209716.ref@mail.yahoo.com> <116698181.1414026.1706600209716@mail.yahoo.com>
Message-ID: 

Hello!

On Tue, Jan 30, 2024 at 07:36:49AM +0000, bittnitt at gmail.com wrote:

> Hi... I read a few articles about managing static files and I'm a
> bit confused!
> I use Nginx as the main server to host my website.
> I enabled gzip and brotli.
> I have also enabled gzip_static and brotli_static.
> And I have pre-compressed all static files with gzip and brotli.
> I read in an article that after compressing all files, I should
> delete all uncompressed files to save memory and only gzip and
> Brotli files remain.
> (Of course, I need to create an empty file called index.html for
> it to work properly.)
> Everything works fine now, but my problem is when the browser
> doesn't support compression and requires uncompressed files.
> In another article it was written that if gunzip is enabled for
> browsers that do not support the compressed format, it
> decompresses the gzip then sends it to the client.
> But after doing some testing, I found (I think) that gunzip only
> works if nginx is used as the proxy (between main server and
> client) (due to the Content-Encoding header requirement).
> Now, if I want to support gzip, brotli and non-compressed files,
> do I have to have all three types of files? Is this method
> correct? What method do you use? What method is suggested? Thanks
> bitt-nitt

The gunzip module works perfectly fine without proxying, though
you'll need to ensure that the appropriate Content-Encoding is
properly set on the response. In particular, if you only have
gzipped files, you can do:

    gzip_static always;
    gunzip on;

In this configuration gzip_static will respond with the compressed
version of the file to all requests, and gunzip will uncompress it
for clients which do not support gzip (see
http://nginx.org/r/gzip_static for the documentation).

Not sure about brotli_static, but if the 3rd party module is
implemented properly, it should be possible to do "brotli_static on;"
in the same configuration to return brotli-compressed files to
clients which support brotli.

It is not required to delete uncompressed files though. While the
gunzip module makes it possible, this might not be the best approach
available: uncompressing files on the fly certainly consumes
additional CPU resources, and having no uncompressed files on disk
might be suboptimal for other tasks. Removing uncompressed files
usually makes sense only if the amount of static files is huge.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/
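Putting the pieces of this thread together, a minimal location for serving only
pre-compressed files could look like the sketch below. The root path is a
placeholder, gzip_static and gunzip require nginx to be built with those
modules, and brotli_static comes from the third-party ngx_brotli module, so its
behaviour alongside "gzip_static always" is assumed here, just as in the reply
above, rather than guaranteed:

    server {
        listen 8080;
        root /var/www/static;    # placeholder path

        location / {
            # intended: serve pre-compressed .br files to clients that
            # accept brotli (third-party ngx_brotli module)
            brotli_static on;

            # serve the pre-compressed .gz files to everybody else ...
            gzip_static always;

            # ... and decompress them on the fly for clients that do not
            # send gzip in Accept-Encoding
            gunzip on;
        }
    }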
From mdounin at mdounin.ru Wed Jan 31 03:19:52 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 31 Jan 2024 06:19:52 +0300
Subject: ngx_http_find_virtual_server ngx_http_regex_exec DOS
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Tue, Jan 30, 2024 at 10:28:23AM +0200, Clima Gabriel wrote:

> Greetings fellow nginx-devs,
> It looks to me as if an attacker could force the server to use up a large
> amount of resources doing ngx_http_regex_exec if the server were to be
> configured with a relatively large number of regex server_names.
> I would appreciate any ideas on the topic, especially suggestions as to how
> some form of caching could be implemented for the responses, so that the
> server didn't have to execute the ngx_http_regex_exec on subsequent
> requests.

Not using a "large number of regex server_names" might be the best
solution available here. Requests are not required to go to the same
virtual server, so caching won't generally work.

-- 
Maxim Dounin
http://mdounin.ru/

From ranieri85 at gmail.com Wed Jan 31 22:06:12 2024
From: ranieri85 at gmail.com (Ranieri Mazili)
Date: Wed, 31 Jan 2024 19:06:12 -0300
Subject: How to test session resumption (session id and session ticket) and measure performance between them?
Message-ID: 

Hi,

I have an Nginx server where I'm trying to measure the performance
difference between not using session resumption, using session IDs and
using session tickets. To do that, I just need to set this in my
nginx.conf file:

    #No session resumption
    ssl_session_tickets off;
    ssl_session_cache off;

    #Session resumption - Session ID
    ssl_session_tickets off;
    ssl_session_cache shared:SSL:10m;

    #Session resumption - Session Tickets
    ssl_session_tickets on;
    ssl_session_cache off;

After setting each one, I can test whether it is active using the
following command and checking whether a Session-ID or a TLS session
ticket is returned:

echo R | openssl s_client -connect mtls-as.ranieri.dev.br:443 -tls1_2 -reconnect

I have also configured the variable $ssl_session_reused to print when a
session is reused.

I've tried to test and measure the performance using the Apache Bench
software with the command below, but nginx is printing "." instead of "r"
in the logs, which means the sessions aren't being reused.

Apache Bench command:
ab -n 500 -c 10 -k https://mtls-as.ranieri.dev.br/

Does anyone know how to test session resumption and measure the
performance difference between them?

-- 
Ranieri
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
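One way to make the comparison visible is to log the TLS details next to the
connection counters; the format name and log path below are placeholders, and
the variables come from the core and SSL modules:

    # inside the http {} block
    log_format tls_debug '$remote_addr [$time_local] "$request" $status '
                         'conn=$connection reqs=$connection_requests '
                         'proto=$ssl_protocol cipher=$ssl_cipher '
                         'reused=$ssl_session_reused';

    access_log /var/log/nginx/tls_debug.log tls_debug;

reused=r only shows up when a new TCP connection presents a cached session ID
or ticket. With "ab -n 500 -c 10 -k" most requests are carried over a handful
of kept-alive connections ($connection stays the same while $connection_requests
increments), so no new handshakes happen and "." is the expected value; as far
as I know, ab also has no option to resume TLS sessions when it does open a new
connection. A client that reconnects while reusing the session, such as the
openssl s_client -reconnect command already shown above, is the more direct way
to confirm that resumption itself works.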