From cyflhn at 163.com  Fri Nov  2 05:18:08 2018
From: cyflhn at 163.com (yf chu)
Date: Fri, 2 Nov 2018 13:18:08 +0800 (CST)
Subject: Can Nginx handle millions of static pages or pictures ?
Message-ID: <2c91f9b7.11f42.166d2db6fe3.Coremail.cyflhn@163.com>

I have a website with tens of millions of pages. The content of the pages is stored in a database, but the data does not change very frequently. So, to improve the performance of the website and reduce the cost of deploying web applications, I want to generate static pages from the dynamic content and refresh those pages whenever the content changes. But I am concerned about how to manage such a large number of pages. How should I store them? I plan to use Nginx to serve these pages. Could this cause I/O problems when the web server handles many requests? What is Nginx's request-handling capacity? Are there any better solutions for this issue?

From peter_booth at me.com  Fri Nov  2 13:16:18 2018
From: peter_booth at me.com (Peter Booth)
Date: Fri, 02 Nov 2018 09:16:18 -0400
Subject: Can Nginx handle millions of static pages or pictures ?
In-Reply-To: <2c91f9b7.11f42.166d2db6fe3.Coremail.cyflhn@163.com>
References: <2c91f9b7.11f42.166d2db6fe3.Coremail.cyflhn@163.com>
Message-ID: <938ECE01-9E28-458B-B6CC-C3B3C71D5CC2@me.com>

So this is a very interesting question. I started writing dynamic websites in 1998. Most developers don't want to generate static sites. I think their reasons are more emotional than technical. About seven years ago I had two jobs - the day job was a high-traffic retail fashion website; the side job was a very similar site, implemented as a static site that was recreated when content changed. The dynamic site had (first-request) latencies of about 2 sec. The static site had typical latencies of 250 ms. That's almost 10x faster.
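[As an editorial sketch of the approach under discussion - pre-generating pages and letting nginx serve them as plain files - the fragment below is illustrative only; the root path, sharding scheme, and cache sizes are assumptions, not taken from the thread:]

```nginx
# Hypothetical layout: pages are pre-generated into /var/www/pages and
# sharded by the first characters of a hash of the URL, so that no single
# directory ever holds millions of files, e.g.
#   /var/www/pages/a3/f1/product-12345.html
server {
    listen 80;
    root /var/www/pages;

    # Cache open file descriptors and metadata for frequently served pages.
    open_file_cache       max=10000 inactive=60s;
    open_file_cache_valid 120s;

    location / {
        # Serve the pre-generated file if it exists; otherwise 404
        # (or fall through to a backend that regenerates the page).
        try_files $uri $uri/index.html =404;
    }
}
```

[The sharding itself (the a3/f1 path components) would be done by the page generator, not by nginx.]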
It also cost about 2% of what the dynamic site cost to run. Sounds like you're planning to do things the smart way. You haven't said how busy your site is. Assuming that your hardware runs Linux, your content will all be sitting in Linux's page cache, so on a recent-model server a well-tuned nginx can serve well over 100,000 requests per sec. The key is to use the browser cache whenever possible. As well as making good use of compute resources, a website like this is much more reliable than a dynamic site. There are few moving parts that can go wrong. Have fun!

Pete

Sent from my iPhone

> On Nov 2, 2018, at 1:18 AM, yf chu wrote:
>
> I have a website with tens of millions of pages. [...]
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From cyflhn at 163.com  Fri Nov  2 14:45:49 2018
From: cyflhn at 163.com (yf chu)
Date: Fri, 2 Nov 2018 22:45:49 +0800 (CST)
Subject: Can Nginx handle millions of static pages or pictures ?
In-Reply-To: <938ECE01-9E28-458B-B6CC-C3B3C71D5CC2@me.com>
References: <2c91f9b7.11f42.166d2db6fe3.Coremail.cyflhn@163.com> <938ECE01-9E28-458B-B6CC-C3B3C71D5CC2@me.com>
Message-ID: <4322c83.202a9.166d4e32b02.Coremail.cyflhn@163.com>

Thank you for your advice. But may I ask how you store your static web pages on your server? If there are too many pages in a directory, is it possible that looking up the pages could affect the performance of the web server?

Here I have another question. For a dynamic website, some dynamic content can be generated as static pages in advance, e.g. the page showing the details of a certain product: we know how many products there are on our websites. But some dynamic content is difficult to generate in advance, such as the content of search results. There are lots of search terms for a website, and some have far too many result items. How should we handle this issue?

At 2018-11-02 21:16:18, "Peter Booth via nginx" wrote:

So this is a very interesting question. I started writing dynamic websites in 1998. Most developers don't want to generate static sites. I think their reasons are more emotional than technical. About seven years ago I had two jobs - the day job was a high-traffic retail fashion website; the side job was a very similar site, implemented as a static site that was recreated when content changed. The dynamic site had (first-request) latencies of about 2 sec. The static site had typical latencies of 250 ms. That's almost 10x faster. It also cost about 2% of what the dynamic site cost to run.

Sounds like you're planning to do things the smart way. You haven't said how busy your site is. Assuming that your hardware runs Linux, your content will all be sitting in Linux's page cache, so on a recent-model server a well-tuned nginx can serve well over 100,000 requests per sec. The key is to use the browser cache whenever possible.
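[The browser-cache advice can be sketched as a config fragment like this; the expiry values are illustrative assumptions - what is actually safe depends on how the pages are regenerated:]

```nginx
# Long-lived caching for assets whose filenames change when their content
# changes (fingerprinted files); short caching for HTML pages that may be
# regenerated in place under the same URL.
location ~* \.(css|js|png|jpg|gif|svg|woff2)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}

location ~* \.html$ {
    expires 5m;
}
```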
As well as making good use of compute resources, a website like this is much more reliable than a dynamic site. There are few moving parts that can go wrong. Have fun!

Pete

Sent from my iPhone

On Nov 2, 2018, at 1:18 AM, yf chu wrote:

I have a website with tens of millions of pages. [...]

From peter_booth at me.com  Fri Nov  2 15:02:31 2018
From: peter_booth at me.com (Peter Booth)
Date: Fri, 02 Nov 2018 11:02:31 -0400
Subject: Can Nginx handle millions of static pages or pictures ?
In-Reply-To: <4322c83.202a9.166d4e32b02.Coremail.cyflhn@163.com>
References: <2c91f9b7.11f42.166d2db6fe3.Coremail.cyflhn@163.com> <938ECE01-9E28-458B-B6CC-C3B3C71D5CC2@me.com> <4322c83.202a9.166d4e32b02.Coremail.cyflhn@163.com>
Message-ID: <8F7D232E-C3BD-47F9-83C2-6B2263E21CBE@me.com>

Too many files in a directory can be a pain in the backside once you get into the hundreds of thousands - but it's up to you to create relevant subdirectories. Imagine that your website was a retail store selling millions of possible products. For search results it depends upon whether results vary per user. For one site I worked on, if I searched for "green jeans"
I could get the same list of pages as you, and so these pages would be cached so that if we both requested green jeans, nginx would only request the page once. This is very useful for protecting against denial-of-service attacks: for example, you can configure nginx to send only one request at a time for the same URL to the back-end, and to have other requests wait and then return the same content to each user.

The logic for this caching can be very subtle - if I was logged in and had the preference "show all prices in Australian dollars" set, then I'd expect a different page than you for the same item. It's also possible that your pages might mix filtering and search - so I might click predefined categories - cocktail dresses/size 4/red - then add free text to search with. nginx's caching features are tremendously powerful and can be extended with Lua code using, for example, the OpenResty bundle of nginx. I was amazed that I never found a use case that couldn't be solved with nginx functionality, to the point where a TV show could invite viewers to select a specific URL at some point, and the hundreds of thousands of resulting requests ended up generating only one request to the backend, and the site stayed up under such spiky loads.

My tip is to start simple, add one feature at a time, and understand your web server logs, which contain lots of information.

Peter

> On 2 Nov 2018, at 10:45 AM, yf chu wrote:
>
> Thank you for your advice. But may I ask how you store your static web pages on your server? If there are too many pages in a directory, is it possible that looking up the pages could affect the performance of the web server?
> Here I have another question.
> For a dynamic website, some dynamic content can be generated as static pages in advance, e.g. the page showing the details of a certain product: we know how many products there are on our websites.
> But some dynamic content is difficult to generate in advance, such as the content of search results. There are lots of search terms for a website, and some have far too many result items. How should we handle this issue?
>
> At 2018-11-02 21:16:18, "Peter Booth via nginx" wrote:
> So this is a very interesting question. [...]

From michael.friscia at yale.edu  Fri Nov  2 16:10:08 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Fri, 2 Nov 2018 16:10:08 +0000
Subject: Solr and Nginx
Message-ID: <9D5371C7-46EB-481E-B607-362FA26C1275@yale.edu>

I'm wondering if anyone has put Solr behind nginx, or if there might be good reasons not to. The obvious part works fine: the admin interface uses all POST requests, which are not cached, and the GET requests are all cached, which seems OK. But I'm wondering if I'm missing something that makes Solr a bad candidate for nginx caching.

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

From bf.014 at protonmail.com  Sat Nov  3 18:14:15 2018
From: bf.014 at protonmail.com (Bogdan)
Date: Sat, 03 Nov 2018 18:14:15 +0000
Subject: no TLS1.3 with 1.15.5
Message-ID:

Hello, everyone.

I am stuck with a fresh installation which runs absolutely fine, except it doesn't offer TLS 1.3, which is the biggest reason for updating the server. Below is some info about my config.
Distribution: Ubuntu 18.04 server with kernel 4.15.0-38-generic

nginx compile options: nginx/1.15.5 (Ubuntu)
built by gcc 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)
built with OpenSSL 1.1.1  11 Sep 2018
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --user=nobody --group=nogroup --build=Ubuntu --builddir=nginx-1.15.5 --with-openssl=../openssl-1.1.1 --with-pcre=../pcre-8.42 --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-openssl-opt=no-nextprotoneg --with-select_module --with-poll_module --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_geoip_module=dynamic --with-http_auth_request_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-perl_modules_path=/usr/share/perl/5.26.1 --with-perl=/usr/bin/perl --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/cache/nginx/client_temp --without-http_empty_gif_module --without-http_browser_module --without-http_fastcgi_module --without-http_uwsgi_module --without-http_scgi_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-stream=dynamic --with-stream_ssl_module --with-stream_realip_module --with-stream_geoip_module=dynamic --with-stream_ssl_preread_module --with-compat --with-debug

/etc/nginx/sites-available/default:

ssl_session_cache shared:SSL:1m;

server {
    ssl_early_data on;
    ssl_dhparam /etc/nginx/ssl/dh4096.pem;
    ssl_session_timeout 5m;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA;
    ssl_ecdh_curve secp521r1:secp384r1;
}

I can't get beyond TLS 1.2 with Firefox 63 (security.tls.version.max = 4, which is TLS 1.3 RFC as far as I know), and ssllabs.com's test says TLSv1.3 is non-existent on the server.

Any help would be much appreciated.

Bogdan

From sca at andreasschulze.de  Sun Nov  4 12:31:12 2018
From: sca at andreasschulze.de (A. Schulze)
Date: Sun, 4 Nov 2018 13:31:12 +0100
Subject: no TLS1.3 with 1.15.5
In-Reply-To:
References:
Message-ID: <85695ab5-0768-40dc-b7e7-18faeb7b29d3@andreasschulze.de>

On 03.11.18 at 19:14, Bogdan via nginx wrote:
> Hello, everyone.
>
> I am stuck with a fresh installation which runs absolutely fine except it doesn't offer TLS1.3, which is the biggest reason for updating the server.
>
> [...]
> configure arguments: --prefix=/etc/nginx [...] --with-openssl=../openssl-1.1.1 [...] --with-openssl-opt=no-nextprotoneg [...] --with-compat --with-debug

Hello Bogdan,

while I don't really have a helpful suggestion for you, I noticed you disabled "nextprotoneg" for openssl. May I kindly ask why you do so?

> /etc/nginx/sites-available/default:
>
> ssl_session_cache shared:SSL:1m;
>
> server {
>
> ssl_early_data on;

That one I did not know, so thanks for the hint.
> ssl_dhparam /etc/nginx/ssl/dh4096.pem;
> ssl_session_timeout 5m;
> ssl_stapling on;
> ssl_stapling_verify on;
> ssl_prefer_server_ciphers on;
> ssl_protocols TLSv1.2 TLSv1.3;
> ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA;
> ssl_ecdh_curve secp521r1:secp384r1;

Try disabling as many options as possible. I would start by leaving the ssl_dhparam, stapling, ciphers, and curve options at their defaults.

> }
>
> I can't reach beyond TLS1.2 with Firefox 63 (security.tls.version.max = 4, that is TLS1.3 RFC as far as I know) and ssllabs.com's test says TLSv1.3 is non-existent on the server.

Using "openssl s_client" is also a good method for measurement.

> Any help would be much appreciated.

Are you sure nginx is really not built against the distribution's openssl, which does _not_ support TLS1.3?

> Bogdan

Good luck!
Andreas

From bf.014 at protonmail.com  Sun Nov  4 19:10:06 2018
From: bf.014 at protonmail.com (Bogdan)
Date: Sun, 04 Nov 2018 19:10:06 +0000
Subject: no TLS1.3 with 1.15.5
In-Reply-To: <85695ab5-0768-40dc-b7e7-18faeb7b29d3@andreasschulze.de>
References: <85695ab5-0768-40dc-b7e7-18faeb7b29d3@andreasschulze.de>
Message-ID: <3LHZyzIjj7DqH0r8gvoFU3iSVhI5YcxVk_Jk6TeGOmoIcivpS1fjyp6mfsuVqNuluZ5QIFPZE6pxoabTbyzQohlj1MxArGZXoBv10fylSec=@protonmail.com>

Hi, Andreas!

I disabled NPN (Next Protocol Negotiation) because, as far as I know (not very far, and it comes from what I've read, since I am not an expert), ALPN with HTTP/2 is more efficient and offers lower latency. Google also dropped support for NPN in their Chrome browser.

Indeed I tried to disable as many lines as possible, but the compiling options for nginx weren't the culprit.
The distribution's openssl was also compiled from scratch (v1.1.1), so there was no chance that on my system I was using versions of software unable to offer TLS 1.3 support.

The problem was a line in /etc/nginx/nginx.conf which I had missed:

ssl_protocols TLSv1.2;

It was a configuration error on my part, so updating the line as follows solved the problem:

ssl_protocols TLSv1.2 TLSv1.3;

A lot of trouble for only a few missing characters, but once the trouble is gone, the server runs great. :)

Thank you for your suggestions!

Bogdan

------- Original Message -------
On Sunday, November 4, 2018 2:31 PM, A. Schulze wrote:

> On 03.11.18 at 19:14, Bogdan via nginx wrote:
>
> > Hello, everyone.
> > I am stuck with a fresh installation which runs absolutely fine except it doesn't offer TLS1.3, which is the biggest reason for updating the server.
> > Below is some info about my config.
> > Distribution: Ubuntu 18.04 server with kernel 4.15.0-38-generic
> > nginx compile options: nginx/1.15.5 (Ubuntu)
> > built by gcc 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)
> > built with OpenSSL 1.1.1
11 Sep 2018
> > TLS SNI support enabled
> > configure arguments: --prefix=/etc/nginx [...] --with-openssl=../openssl-1.1.1 [...] --with-openssl-opt=no-nextprotoneg [...] --with-compat --with-debug
>
> Hello Bogdan,
>
> while I don't really have a helpful suggestion for you, I noticed you disabled "nextprotoneg" for openssl.
> May I kindly ask why you do so?
>
> > /etc/nginx/sites-available/default:
> > ssl_session_cache shared:SSL:1m;
> > server {
> > ssl_early_data on;
>
> That one I did not know, so thanks for the hint.
> > ssl_dhparam /etc/nginx/ssl/dh4096.pem;
> > ssl_session_timeout 5m;
> > ssl_stapling on;
> > ssl_stapling_verify on;
> > ssl_prefer_server_ciphers on;
> > ssl_protocols TLSv1.2 TLSv1.3;
> > ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:[...]:ECDHE-RSA-AES256-SHA;
> > ssl_ecdh_curve secp521r1:secp384r1;
>
> Try disabling as many options as possible. I would start by leaving the ssl_dhparam, stapling, ciphers, and curve options at their defaults.
>
> > }
> > I can't reach beyond TLS1.2 with Firefox 63 (security.tls.version.max = 4, that is TLS1.3 RFC as far as I know) and ssllabs.com's test says TLSv1.3 is non-existent on the server.
>
> Using "openssl s_client" is also a good method for measurement.
>
> > Any help would be much appreciated.
>
> Are you sure nginx is really not built against the distribution's openssl, which does not support TLS1.3?
>
> > Bogdan
>
> Good luck!
> Andreas
>
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From alex at samad.com.au  Sun Nov  4 23:14:42 2018
From: alex at samad.com.au (Alex Samad)
Date: Mon, 5 Nov 2018 10:14:42 +1100
Subject: no TLS1.3 with 1.15.5
In-Reply-To: <3LHZyzIjj7DqH0r8gvoFU3iSVhI5YcxVk_Jk6TeGOmoIcivpS1fjyp6mfsuVqNuluZ5QIFPZE6pxoabTbyzQohlj1MxArGZXoBv10fylSec=@protonmail.com>
References: <85695ab5-0768-40dc-b7e7-18faeb7b29d3@andreasschulze.de> <3LHZyzIjj7DqH0r8gvoFU3iSVhI5YcxVk_Jk6TeGOmoIcivpS1fjyp6mfsuVqNuluZ5QIFPZE6pxoabTbyzQohlj1MxArGZXoBv10fylSec=@protonmail.com>
Message-ID:

Hi,

Don't you need an openssl that works with 1.3 as well? My sticking point is CentOS 6 - no openssl that comes with 1.3, as far as I know.

A

On Mon, 5 Nov 2018 at 06:10, Bogdan via nginx wrote:

> Hi, Andreas!
> I disabled NPN (Next Protocol Negotiation) because, as far as I know (not very far, and it comes from what I've read, since I am not an expert), ALPN with HTTP/2 is more efficient and offers lower latency. Google also dropped support for NPN in their Chrome browser.
>
> Indeed I tried to disable as many lines as possible, but the compiling options for nginx weren't the culprit. [...]
>
> The problem was that in /etc/nginx/nginx.conf I had this line which I missed:
>
> ssl_protocols TLSv1.2;
>
> It was a configuration error on my part, so updating the line as follows solved the problem:
>
> ssl_protocols TLSv1.2 TLSv1.3;
>
> [...]
>
> Bogdan

From nginx-forum at forum.nginx.org  Mon Nov  5 14:14:33 2018
From: nginx-forum at forum.nginx.org (nginxuser2018)
Date: Mon, 05 Nov 2018 09:14:33 -0500
Subject: SSLEngine closed already exception triggered by reload
Message-ID: <3749aa47809743bb35d95d621f9ac089.NginxMailingListEnglish@forum.nginx.org>

I noticed that if I set up a simple scenario where a client makes concurrent requests against a server with nginx configured as a reverse proxy and SSL termination endpoint, and I trigger a reload with 'nginx -s reload' mid-request, the client will often throw a 'javax.net.ssl.SSLException: SSLEngine closed already at io.netty.handler.ssl.SslHandler.wrap(...)(Unknown Source)' exception.

I'm using Scala with the Play framework, which uses Netty under the hood. Is there any configuration that could avoid these exceptions being thrown? I cannot reproduce it when using, for example, Play itself to serve HTTPS, so I can possibly rule out a problem in the client and think it is a problem with nginx? Thank you.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281786,281786#msg-281786

From nginx-forum at forum.nginx.org  Tue Nov  6 05:33:16 2018
From: nginx-forum at forum.nginx.org (swati)
Date: Tue, 06 Nov 2018 00:33:16 -0500
Subject: How to use dynamic IP in resolver directive when NGINX installed on Multi Nodes Openshift cluster
In-Reply-To: <8C57D97BF3785A4A8066B7EB3E6CEB1801F1F8306E@ILRAADAGBE3.corp.amdocs.com>
References: <8C57D97BF3785A4A8066B7EB3E6CEB1801F1F8306E@ILRAADAGBE3.corp.amdocs.com>
Message-ID:

Hi Marwan,

Did you find a solution for this one? Can you please share your configurations?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281239,281789#msg-281789 From nginx-forum at forum.nginx.org Tue Nov 6 05:44:47 2018 From: nginx-forum at forum.nginx.org (swati) Date: Tue, 06 Nov 2018 00:44:47 -0500 Subject: Using URL instead of IP Message-ID: <941b5c084c60c3440ed1f69925b3d600.NginxMailingListEnglish@forum.nginx.org> Hi, I intend to use nginx as load balancer for routing traffic to an application running on two separate Openshift (kubernetes) clusters. The app urls are -> app.dc1.example.com and app.dc2.example.com. When I curl to these individually I get required page. However, it seems, nginx resolves it into IP instead of treating it as mere destinations. See below my config. Since the apps are running on Openshift, they do not have a specific (public) IP address. Ping for app.dc1.example.com (and for that matter app.dc2.example.com) will give the IP of the machine where DNS service is running for that Openshift cluster. Is there a way to configure nginx to route the requests to the URLs without trying to resolve it into IP addresses? Thanks Swati --- upstream lbtest { server app.dc1.example.com ; server app.dc2.example.com ; } server { listen 9087; location / { proxy_set_header Host $host; proxy_pass http://lbtest; } } --- Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281790,281790#msg-281790 From nginx-forum at forum.nginx.org Tue Nov 6 09:21:36 2018 From: nginx-forum at forum.nginx.org (swati) Date: Tue, 06 Nov 2018 04:21:36 -0500 Subject: How to use dynamic IP in resolver directive when NGINX installed on Multi Nodes Openshift cluster In-Reply-To: <8C57D97BF3785A4A8066B7EB3E6CEB1801F1F8306E@ILRAADAGBE3.corp.amdocs.com> References: <8C57D97BF3785A4A8066B7EB3E6CEB1801F1F8306E@ILRAADAGBE3.corp.amdocs.com> Message-ID: <5647b2fc5b30aa1d28abf5245a77bb62.NginxMailingListEnglish@forum.nginx.org> Hi Marwan, Did you find solution for this one? Can you please share your configurations? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281239,281791#msg-281791 From nginx-forum at forum.nginx.org Tue Nov 6 13:49:07 2018 From: nginx-forum at forum.nginx.org (nginxuser2018) Date: Tue, 06 Nov 2018 08:49:07 -0500 Subject: SSLEngine closed already exception triggered by reload In-Reply-To: <3749aa47809743bb35d95d621f9ac089.NginxMailingListEnglish@forum.nginx.org> References: <3749aa47809743bb35d95d621f9ac089.NginxMailingListEnglish@forum.nginx.org> Message-ID: One setting that I noticed mitigates the problem is to use `lingering_close always;` however in our infrastructure this can lead to the build up of worker processes (for the duration of the lingering_timeout). What are the advantages and drawbacks of using this setting? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281786,281801#msg-281801 From mdounin at mdounin.ru Tue Nov 6 15:27:22 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Nov 2018 18:27:22 +0300 Subject: nginx-1.15.6 Message-ID: <20181106152721.GE56558@mdounin.ru> Changes with nginx 1.15.6 06 Nov 2018 *) Security: when using HTTP/2 a client might cause excessive memory consumption (CVE-2018-16843) and CPU usage (CVE-2018-16844). *) Security: processing of a specially crafted mp4 file with the ngx_http_mp4_module might result in worker process memory disclosure (CVE-2018-16845). *) Feature: the "proxy_socket_keepalive", "fastcgi_socket_keepalive", "grpc_socket_keepalive", "memcached_socket_keepalive", "scgi_socket_keepalive", and "uwsgi_socket_keepalive" directives. *) Bugfix: if nginx was built with OpenSSL 1.1.0 and used with OpenSSL 1.1.1, the TLS 1.3 protocol was always enabled. *) Bugfix: working with gRPC backends might result in excessive memory consumption. 
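As an illustration of the new keepalive feature above, the `*_socket_keepalive` directives switch on TCP keepalive for outgoing connections to a proxied server; a minimal sketch (the upstream name and address are placeholders, not taken from the release notes):

```nginx
upstream backend {
    server 192.0.2.10:8080;   # placeholder backend address
}

server {
    listen 80;

    location / {
        # New in 1.15.6: enable SO_KEEPALIVE on sockets to the proxied server
        proxy_socket_keepalive on;
        proxy_pass http://backend;
    }
}
```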
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Nov 6 15:27:44 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Nov 2018 18:27:44 +0300 Subject: nginx-1.14.1 Message-ID: <20181106152743.GI56558@mdounin.ru> Changes with nginx 1.14.1 06 Nov 2018 *) Security: when using HTTP/2 a client might cause excessive memory consumption (CVE-2018-16843) and CPU usage (CVE-2018-16844). *) Security: processing of a specially crafted mp4 file with the ngx_http_mp4_module might result in worker process memory disclosure (CVE-2018-16845). *) Bugfix: working with gRPC backends might result in excessive memory consumption. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Nov 6 15:28:08 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Nov 2018 18:28:08 +0300 Subject: nginx security advisory (CVE-2018-16843, CVE-2018-16844) Message-ID: <20181106152808.GM56558@mdounin.ru> Hello! Two security issues were identified in nginx HTTP/2 implementation, which might cause excessive memory consumption (CVE-2018-16843) and CPU usage (CVE-2018-16844). The issues affect nginx compiled with the ngx_http_v2_module (not compiled by default) if the "http2" option of the "listen" directive is used in a configuration file. The issues affect nginx 1.9.5 - 1.15.5. The issues are fixed in nginx 1.15.6, 1.14.1. Thanks to Gal Goldshtein from F5 Networks for initial report of the CPU usage issue. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Nov 6 15:28:30 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Nov 2018 18:28:30 +0300 Subject: nginx security advisory (CVE-2018-16845) Message-ID: <20181106152830.GQ56558@mdounin.ru> Hello! A security issue was identified in the ngx_http_mp4_module, which might allow an attacker to cause infinite loop in a worker process, cause a worker process crash, or might result in worker process memory disclosure by using a specially crafted mp4 file (CVE-2018-16845). 
The issue only affects nginx if it is built with the ngx_http_mp4_module (the module is not built by default) and the "mp4" directive is used in the configuration file. Further, the attack is only possible if an attacker is able to trigger processing of a specially crafted mp4 file with the ngx_http_mp4_module. The issue affects nginx 1.1.3+, 1.0.7+. The issue is fixed in 1.15.6, 1.14.1. Patch for the issue can be found here: http://nginx.org/download/patch.2018.mp4.txt -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Nov 6 16:59:34 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 6 Nov 2018 11:59:34 -0500 Subject: [nginx-announce] nginx-1.14.1 In-Reply-To: <20181106152748.GJ56558@mdounin.ru> References: <20181106152748.GJ56558@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.14.1 for Windows https://kevinworthington.com/nginxwin1141 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Nov 6, 2018 at 10:28 AM Maxim Dounin wrote: > Changes with nginx 1.14.1 06 Nov > 2018 > > *) Security: when using HTTP/2 a client might cause excessive memory > consumption (CVE-2018-16843) and CPU usage (CVE-2018-16844). > > *) Security: processing of a specially crafted mp4 file with the > ngx_http_mp4_module might result in worker process memory disclosure > (CVE-2018-16845). > > *) Bugfix: working with gRPC backends might result in excessive memory > consumption. 
> > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue Nov 6 17:00:03 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 6 Nov 2018 12:00:03 -0500 Subject: [nginx-announce] nginx-1.15.6 In-Reply-To: <20181106152727.GF56558@mdounin.ru> References: <20181106152727.GF56558@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.15.6 for Windows https://kevinworthington.com/nginxwin1156 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Nov 6, 2018 at 10:27 AM Maxim Dounin wrote: > Changes with nginx 1.15.6 06 Nov > 2018 > > *) Security: when using HTTP/2 a client might cause excessive memory > consumption (CVE-2018-16843) and CPU usage (CVE-2018-16844). > > *) Security: processing of a specially crafted mp4 file with the > ngx_http_mp4_module might result in worker process memory disclosure > (CVE-2018-16845). > > *) Feature: the "proxy_socket_keepalive", "fastcgi_socket_keepalive", > "grpc_socket_keepalive", "memcached_socket_keepalive", > "scgi_socket_keepalive", and "uwsgi_socket_keepalive" directives. > > *) Bugfix: if nginx was built with OpenSSL 1.1.0 and used with OpenSSL > 1.1.1, the TLS 1.3 protocol was always enabled. > > *) Bugfix: working with gRPC backends might result in excessive memory > consumption. 
> > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 6 17:13:52 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Nov 2018 20:13:52 +0300 Subject: Using URL instead of IP In-Reply-To: <941b5c084c60c3440ed1f69925b3d600.NginxMailingListEnglish@forum.nginx.org> References: <941b5c084c60c3440ed1f69925b3d600.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181106171352.GW56558@mdounin.ru> Hello! On Tue, Nov 06, 2018 at 12:44:47AM -0500, swati wrote: > Hi, > > I intend to use nginx as load balancer for routing traffic to an application > running on two separate Openshift (kubernetes) clusters. > The app urls are -> app.dc1.example.com and app.dc2.example.com. > When I curl to these individually I get required page. > However, it seems, nginx resolves it into IP instead of treating it as mere > destinations. > See below my config. > > Since the apps are running on Openshift, they do not have a specific > (public) IP address. > Ping for app.dc1.example.com (and for that matter app.dc2.example.com) will > give the IP of the machine where DNS service is running for that Openshift > cluster. > > Is there a way to configure nginx to route the requests to the URLs without > trying to resolve it into IP addresses? > > Thanks > Swati > > --- > upstream lbtest { > server app.dc1.example.com ; > server app.dc2.example.com ; > } > > server { > listen 9087; > location / { > proxy_set_header Host $host; > proxy_pass http://lbtest; > } > } > --- With nginx-plus commercial version, you can use "server ... resolve", which is specially designed for such use cases. 
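A minimal sketch of that nginx-plus form, using the hostnames from the question (the resolver address is a placeholder, and to my understanding the "resolve" parameter also needs a shared-memory zone):

```nginx
resolver 192.0.2.53 valid=30s;   # placeholder DNS server; "valid=" caps how long answers are cached

upstream lbtest {
    zone lbtest 64k;                     # shared memory zone, required for "resolve"
    server app.dc1.example.com resolve;  # hostnames are re-resolved periodically, without a reload
    server app.dc2.example.com resolve;
}
```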
See here for details: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#resolve With vanilla nginx, you can instruct nginx to always re-resolve names by using variables: set $backend app.dc1.example.com; proxy_pass $backend; You won't be able to use an upstream block with multiple names though. See here for details: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass Note that in both cases you'll have to define a resolver to use, see here: http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Nov 6 18:18:59 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Nov 2018 21:18:59 +0300 Subject: no TLS1.3 with 1.15.5 In-Reply-To: References: Message-ID: <20181106181859.GY56558@mdounin.ru> Hello! On Sat, Nov 03, 2018 at 06:14:15PM +0000, Bogdan via nginx wrote: > Hello, everyone. > > I am stuck with a fresh installation which runs absolutely fine except it doesn't offer TLS1.3 which is the the biggest reason for updating the server. > > Below is some info about my config. 
> > Distribution: Ubuntu 18.04 server with kernel 4.15.0-38-generic > > nginx compile options: nginx/1.15.5 (Ubuntu) > built by gcc 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04) > built with OpenSSL 1.1.1 11 Sep 2018 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --user=nobody --group=nogroup --build=Ubuntu --builddir=nginx-1.15.5 --with-openssl=../openssl-1.1.1 --with-pcre=../pcre-8.42 --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-openssl-opt=no-nextprotoneg --with-select_module --with-poll_module --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_geoip_module=dynamic --with-http_auth_request_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-perl_modules_path=/usr/share/perl/5.26.1 --with-perl=/usr/bin/perl --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/cache/nginx/client_temp --without-http_empty_gif_module --without-http_browser_module --without-http_fastcgi_module --without-http_uwsgi_module --without-http_scgi_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-stream=dynamic --with-stream_ssl_module --with-stream_realip_module --with-stream_geoip_module=dynamic --with-stream_ssl_preread_module --with-compat --with-debug > > /etc/nginx/sites-available/default: > > ssl_session_cache shared:SSL:1m; > > server { > > ssl_early_data on; > ssl_dhparam /etc/nginx/ssl/dh4096.pem; > ssl_session_timeout 5m; > ssl_stapling on; > ssl_stapling_verify on; > ssl_prefer_server_ciphers on; > 
ssl_protocols TLSv1.2 TLSv1.3; > ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA; > ssl_ecdh_curve secp521r1:secp384r1; > > } > > I can't reach beyond TLS1.2 with Firefox 63 (security.tls.version.max = 4, that is TLS1.3 RFC as far as I know) and ssllabs.com's test says TLSv1.3 is non-existent on the server. > > Any help would be much appreciated. Make sure you have properly configured ssl_protocols in the default server for the listen socket in question. If unsure, configure ssl_protocols at the http{} level. Note well that testing using "openssl s_client" from the OpenSSL library you've built nginx with is the most reliable approach, as it ensures that proper TLSv1.3 variant is used by the client. -- Maxim Dounin http://mdounin.ru/ From piersh at hotmail.com Tue Nov 6 20:21:01 2018 From: piersh at hotmail.com (Piers Haken) Date: Tue, 6 Nov 2018 20:21:01 +0000 Subject: slice module not handling 416 Message-ID: i'm trying to use the slice module to cache large files, as described here: https://guo365.github.io/study/nginx.org/en/docs/admin-guide/Web%20content%20cache.html#slice however, the slice module _always_ requests the full slice range regardless of the size of the file, and the servers that i'm proxying return "416 The requested range is not satisfiable" which seems to be the correct response as defined in https://tools.ietf.org/html/rfc7233#section-4.4. I cannot change the behavior of those proxied servers, and i really need this to work. is there something else i can do to work around this? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arut at nginx.com Wed Nov 7 12:23:34 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 7 Nov 2018 15:23:34 +0300 Subject: slice module not handling 416 In-Reply-To: References: Message-ID: <20181107122334.GC4729@Romans-MacBook-Air.local> Hi, On Tue, Nov 06, 2018 at 08:21:01PM +0000, Piers Haken wrote: > i'm trying to use the slice module to cache large files, as described here: > > https://guo365.github.io/study/nginx.org/en/docs/admin-guide/Web%20content%20cache.html#slice > > however, the slice module _always_ requests the full slice range regardless of the size of the file, and the servers that i'm proxying return "416 The requested range is not satisfiable" which seems to be the correct response as defined in https://tools.ietf.org/html/rfc7233#section-4.4. The RFC states that 416 should be returned if the requested range does not overlap the file. This should not happen with the slice module. Even the last slice should overlap the file. The 416 (Range Not Satisfiable) status code indicates that none of the ranges in the request's Range header field (Section 3.1) overlap the current extent of the selected resource... There are cases when the slice module does not even know the size of the file, for example when requesting the first slice. > I cannot change the behavior of those proxied servers, and i really need this to work. is there something else i can do to work around this? -- Roman Arutyunyan From mdounin at mdounin.ru Wed Nov 7 16:43:21 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 7 Nov 2018 19:43:21 +0300 Subject: SSLEngine closed already exception triggered by reload In-Reply-To: <3749aa47809743bb35d95d621f9ac089.NginxMailingListEnglish@forum.nginx.org> References: <3749aa47809743bb35d95d621f9ac089.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181107164321.GH56558@mdounin.ru> Hello! 
On Mon, Nov 05, 2018 at 09:14:33AM -0500, nginxuser2018 wrote: > I noticed that if I setup a simple scenario where a client is making > concurrent requests on a server with nginx configured as a reverse proxy and > SSL traffic termination endpoint, if I trigger a reload with 'nginx -s > reload' mid requests, often times the client will throw an > 'javax.net.ssl.SSLException: SSLEngine closed already > at io.netty.handler.ssl.SslHandler.wrap(...)(Unknown Source)' exception. > > I'm using Scala with the Play framework, which uses netty under the hood. > > Is there any configuration that could avoid these exceptions being thrown? > > I cannot reproduce it using for example using Play to serve HTTPS, so I can > possibly rule out a problem in the client and think it is a problem with > nginx? On Tue, Nov 06, 2018 at 08:49:07AM -0500, nginxuser2018 wrote: > One setting that I noticed mitigates the problem is to use `lingering_close > always;` however in our infrastructure this can lead to the build up of > worker processes (for the duration of the lingering_timeout). What are the > advantages and drawbacks of using this setting? Upon configuration reload, nginx will close connections after it finishes processing already active requests in these connections. And given that "lingering_close always;" helps, I think there are two possible cases here: 1. Closing the connection by nginx happens at the wrong time, right before the next request is received on this connection, so an RST is sent on the connection before the client is able to get the response and see the connection close. If this is indeed the case, using "lingering_close always;" might be the right thing to do - or, alternatively, the automatic lingering-close logic might need to be improved. 2.
The client isn't smart enough to check that the connection is closed before sending the next request, and/or isn't smart enough to recover from asynchronous close events (a good description can be found in RFC 2616, "8.1.4 Practical Considerations", https://tools.ietf.org/html/rfc2616#section-8.1.4). In this case, a proper fix would be to improve the client. -- Maxim Dounin http://mdounin.ru/ From jeff.dyke at gmail.com Wed Nov 7 19:17:27 2018 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Wed, 7 Nov 2018 14:17:27 -0500 Subject: no TLS1.3 with 1.15.5 In-Reply-To: <20181106181859.GY56558@mdounin.ru> References: <20181106181859.GY56558@mdounin.ru> Message-ID: Hi. I know this does not solve the problem, but I am curious whether you found a package that was compiled with 1.1.1 or compiled it yourself. Generally I like to avoid the latter, as everything is managed through salt, but I am interested in TLSv1.3 Thanks, Jeff On Tue, Nov 6, 2018 at 1:19 PM Maxim Dounin wrote: > Hello! > > On Sat, Nov 03, 2018 at 06:14:15PM +0000, Bogdan via nginx wrote: > > > Hello, everyone. > > > > I am stuck with a fresh installation which runs absolutely fine except > it doesn't offer TLS1.3 which is the the biggest reason for updating the > server. > > > > Below is some info about my config.
> > > > Distribution: Ubuntu 18.04 server with kernel 4.15.0-38-generic > > > > nginx compile options: nginx/1.15.5 (Ubuntu) > > built by gcc 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04) > > built with OpenSSL 1.1.1 11 Sep 2018 > > TLS SNI support enabled > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock --user=nobody --group=nogroup > --build=Ubuntu --builddir=nginx-1.15.5 --with-openssl=../openssl-1.1.1 > --with-pcre=../pcre-8.42 --with-pcre-jit --with-zlib=../zlib-1.2.11 > --with-openssl-opt=no-nextprotoneg --with-select_module --with-poll_module > --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module > --with-http_realip_module --with-http_addition_module > --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic > --with-http_sub_module --with-http_geoip_module=dynamic > --with-http_auth_request_module --with-http_secure_link_module > --with-http_degradation_module --with-http_slice_module > --with-http_stub_status_module --with-http_perl_module=dynamic > --with-perl_modules_path=/usr/share/perl/5.26.1 --with-perl=/usr/bi > n/perl --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/cache/nginx/client_temp > --without-http_empty_gif_module --without-http_browser_module > --without-http_fastcgi_module --without-http_uwsgi_module > --without-http_scgi_module --without-mail_pop3_module > --without-mail_imap_module --without-mail_smtp_module --with-stream=dynamic > --with-stream_ssl_module --with-stream_realip_module > --with-stream_geoip_module=dynamic --with-stream_ssl_preread_module > --with-compat --with-debug > > > > /etc/nginx/sites-available/default: > > > > ssl_session_cache shared:SSL:1m; > > > > server { > > > > ssl_early_data on; > > ssl_dhparam /etc/nginx/ssl/dh4096.pem; > > ssl_session_timeout 5m; 
> > ssl_stapling on; > > ssl_stapling_verify on; > > ssl_prefer_server_ciphers on; > > ssl_protocols TLSv1.2 TLSv1.3; > > ssl_ciphers > TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA; > > ssl_ecdh_curve secp521r1:secp384r1; > > > > } > > > > I can't reach beyond TLS1.2 with Firefox 63 (security.tls.version.max = > 4, that is TLS1.3 RFC as far as I know) and ssllabs.com's test says > TLSv1.3 is non-existent on the server. > > > > Any help would be much appreciated. > > Make sure you have properly configured ssl_protocols in the > default server for the listen socket in question. If unsure, > configure ssl_protocols at the http{} level. > > Note well that testing using "openssl s_client" from the OpenSSL > library you've built nginx with is the most reliable approach, as it > ensures that proper TLSv1.3 variant is used by the client. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpereiran at gmail.com Wed Nov 7 22:47:15 2018 From: jpereiran at gmail.com (Jorge Pereira) Date: Wed, 7 Nov 2018 20:47:15 -0200 Subject: Problems with map + subfilter + proxy Message-ID: Folks, 1. I have tried to use the map+subfilter as the below snip. 
user nginx; worker_processes auto; daemon off; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" ' '(Cache $upstream_cache_status)'; access_log /dev/stdout main; sendfile on; keepalive_timeout 65; gzip on; # Since we listen only on http, be sure to always return http:// map $request_uri $subfilter_allowed_content_type { volatile; default whatever/donothing; ~/artifactory/api/nuget/.* application/atom+xml; } proxy_cache_path /var/cache/nginx/artifactory levels=1:2 keys_zone=artifactory_cache:50m max_size=50g inactive=24h use_temp_path=off; server { listen 80; server_name ~(?.+)\.artifactory.tapioca.lan; resolver 8.8.8.8; set $upstream https://artifactory.myaws.com/artifactory; location /artifactory/ { sub_filter_types $subfilter_allowed_content_type; # the variable is filled correctly #sub_filter_types "application/atom+xml"; # but when I use it hardcoded, it works fine. sub_filter_last_modified on; sub_filter "https://$host" "http://$host"; # it works only when sub_filter_types has a hardcoded value. sub_filter_once off; # it's been filled correctly add_header X-Debug-subfilter-allowed-content-type "$subfilter_allowed_content_type"; proxy_read_timeout 60s; proxy_pass_header Server; proxy_cookie_path ~*^/.* /; if ( $request_uri ~ ^/artifactory/(.*)$ ) { proxy_pass $upstream/$1; } proxy_pass $upstream; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory; proxy_set_header X-JFrog-Art-Api $artifactory_token; } } } 2. I make the request like this.
curl -s -H "Host: artifactory-proxy.mylan.fni" "http://172.17.0.2/artifactory/api/nuget/v3/dtfni-nuget/Packages(Id='AttributeRouting.Core.Web',Version='3.5.6')" Conclusion: the variable is filled correctly, but sub_filter_types does not seem to process it. Does anyone have any suggestions? PS: I am using nginx 1.12.x -- Jorge Pereira -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Nov 8 01:10:33 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Nov 2018 01:10:33 +0000 Subject: Problems with map + subfilter + proxy In-Reply-To: References: Message-ID: <20181108011033.i7vif36dn75yxlmg@daoine.org> On Wed, Nov 07, 2018 at 08:47:15PM -0200, Jorge Pereira wrote: Hi there, > location /artifactory/ { > sub_filter_types $subfilter_allowed_content_type; # the > variable is filled up correctly > #sub_filter_types "application/atom+xml"; # but, when > use it hardcode. then it works fine. If the documentation for a directive (available here at http://nginx.org/r/sub_filter_types) does not say that the value can contain variables, then the value probably does not process $variable content. > sub_filter "https://$host" http://$host"; # it works > only when use sub_filter_types with hardcore value. Compare with the documentation for this directive - http://nginx.org/r/sub_filter > Conclusion: the variable it been filled up correctly, but the > sub_filter_types looks to not process. Someone has any suggestion? sub_filter_types does not read variables. If you set the type to the string "$subfilter_allowed_content_type", sub_filter might take effect. (I haven't tested, since it is not a useful case, I think.)
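One pattern consistent with that documentation is to keep the sub_filter_types list static and vary only the sub_filter replacement — whose arguments do accept variables — through a map. A sketch under those assumptions, untested (the map belongs at http{} level, the sub_filter* directives inside the location):

```nginx
# Types are listed statically; sub_filter_types does not expand variables.
sub_filter_types application/atom+xml;

# Choose the replacement per request instead: a no-op replacement for most
# URIs, a scheme downgrade only for the nuget API responses.
map $request_uri $scheme_rewrite {
    default                    "https://$host";   # identity: leaves the body unchanged
    ~^/artifactory/api/nuget/  "http://$host";
}

sub_filter      "https://$host" $scheme_rewrite;
sub_filter_once off;
```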
f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Nov 9 12:50:27 2018 From: nginx-forum at forum.nginx.org (binaryanomaly) Date: Fri, 09 Nov 2018 07:50:27 -0500 Subject: New compilation failure In-Reply-To: References: <40020b11fde120bab04f7abe2b1dc9c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Same here but with nginx-1.15.6 so I guess it's a ModSecurity issue since my previous build with 1.15.5 was working. Maybe related to v3/master branch CI failed build? https://github.com/SpiderLabs/ModSecurity/branches Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281731,281883#msg-281883 From nginx-forum at forum.nginx.org Fri Nov 9 14:14:57 2018 From: nginx-forum at forum.nginx.org (alang) Date: Fri, 09 Nov 2018 09:14:57 -0500 Subject: New compilation failure In-Reply-To: References: <40020b11fde120bab04f7abe2b1dc9c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Yeah I fixed this issue. It was definitely the modsecurity connector. They had done an update to master branch that required to use their newer libraries. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281731,281887#msg-281887 From felipe at zimmerle.org Fri Nov 9 14:42:19 2018 From: felipe at zimmerle.org (Felipe Zimmerle) Date: Fri, 9 Nov 2018 11:42:19 -0300 Subject: New compilation failure In-Reply-To: References: <40020b11fde120bab04f7abe2b1dc9c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, Please upgrade your libModSecurity to v3.0.3: https://github.com/SpiderLabs/ModSecurity/releases/tag/v3.0.3 Br., Z. On Fri, Nov 9, 2018 at 11:14 AM alang wrote: > Yeah I fixed this issue. It was definitely the modsecurity connector. > They > had done an update to master branch that required to use their newer > libraries. 
> > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,281731,281887#msg-281887 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Nov 9 17:42:48 2018 From: nginx-forum at forum.nginx.org (nginxuser2018) Date: Fri, 09 Nov 2018 12:42:48 -0500 Subject: SSLEngine closed already exception triggered by reload In-Reply-To: <20181107164321.GH56558@mdounin.ru> References: <20181107164321.GH56558@mdounin.ru> Message-ID: <67dcf8def77fb5ad5671751aefaee63e.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, We have tried different settings with 'lingering_close always;' and 'lingering_time', 'lingering_timeout' up to 240s with no success. Would you be able to confirm whether it is an nginx problem in the lingering close automatic logic as you mentioned if I provide an example to reproduce it? Thanks, Dario Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Mon, Nov 05, 2018 at 09:14:33AM -0500, nginxuser2018 wrote: > > > I noticed that if I setup a simple scenario where a client is making > > concurrent requests on a server with nginx configured as a reverse > proxy and > > SSL traffic termination endpoint, if I trigger a reload with 'nginx > -s > > reload' mid requests, often times the client will throw an > > 'javax.net.ssl.SSLException: SSLEngine closed already > > at io.netty.handler.ssl.SslHandler.wrap(...)(Unknown Source)' > exception. > > > > I'm using Scala with the Play framework, which uses netty under the > hood. > > > > Is there any configuration that could avoid these exceptions being > thrown? > > > > I cannot reproduce it using for example using Play to serve HTTPS, > so I can > > possibly rule out a problem in the client and think it is a problem > with > > nginx? 
> > On Tue, Nov 06, 2018 at 08:49:07AM -0500, nginxuser2018 wrote: > > > One setting that I noticed mitigates the problem is to use > `lingering_close > > always;` however in our infrastructure this can lead to the build up > of > > worker processes (for the duration of the lingering_timeout). What > are the > > advantages and drawbacks of using this setting? > > Upon configuration reload, nginx will close connections after it > finishes processing already active requests in these connections. > And given that "lingering_close always;" helps, I think there are > two possible cases here: > > 1. Closing the connection by nginx happens and the wrong time, > right before the next request is received on this connection, so > RST is sent on the connection before the client is able get the > response and the connection close. If this is indeed the case, > using "lingering_close always;" might be the right thing to do - > or, alternatively, lingering close automatic logic might need to > be improved. > > 2. The client isn't smart enough to check that the connection is > closed before sending the next request, and/or isn't smart enough > to recover from asynchronous close events (a good description can > be found in RFC 2616, "8.1.4 Practical Considerations", > https://tools.ietf.org/html/rfc2616#section-8.1.4). In this case, > a proper fix would be to improve the client. 
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281786,281894#msg-281894 From mdounin at mdounin.ru Fri Nov 9 19:09:40 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 9 Nov 2018 22:09:40 +0300 Subject: SSLEngine closed already exception triggered by reload In-Reply-To: <67dcf8def77fb5ad5671751aefaee63e.NginxMailingListEnglish@forum.nginx.org> References: <20181107164321.GH56558@mdounin.ru> <67dcf8def77fb5ad5671751aefaee63e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181109190940.GH99070@mdounin.ru> Hello! On Fri, Nov 09, 2018 at 12:42:48PM -0500, nginxuser2018 wrote: > We have tried different settings with 'lingering_close always;' and > 'lingering_time', 'lingering_timeout' up to 240s with no success. > > Would you be able to confirm whether it is an nginx problem in the lingering > close automatic logic as you mentioned if I provide an example to reproduce > it? If 'lingering_close always;' does not help, in contrast to what you wrote in your second message, this is certainly a client problem. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Fri Nov 9 21:05:54 2018 From: nginx-forum at forum.nginx.org (binaryanomaly) Date: Fri, 09 Nov 2018 16:05:54 -0500 Subject: New compilation failure In-Reply-To: References: Message-ID: <60a2526fcd2d4e391a85a6b5c6539868.NginxMailingListEnglish@forum.nginx.org> That worked, thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281731,281897#msg-281897 From bf.014 at protonmail.com Sat Nov 10 06:45:12 2018 From: bf.014 at protonmail.com (Bogdan) Date: Sat, 10 Nov 2018 06:45:12 +0000 Subject: no TLS1.3 with 1.15.5 In-Reply-To: References: <20181106181859.GY56558@mdounin.ru> Message-ID: Hello! I am sorry for the late answer.
I never install any compiled packages except for the ones that can be pulled from Ubuntu's official repositories. Since 1.15.5 was not available yet (and the one that was available was compiled against an OpenSSL version which didn't support TLS1.3), I had to retrieve the source code for both and do all the hard and fun work myself. :) Seeing how it works, I believe that it's worth all the trouble. Bogdan ------- Original Message ------- On Wednesday, November 7, 2018 9:17 PM, Jeff Dyke wrote: > Hi. I know this does not solve the problem, but curious if you found a package that was compiled with 1.1.1 or compiled it yourself. Generally I like to avoid the latter as everything is managed through salt, but am interested in TLSv1.3 > > Thanks, > Jeff > > On Tue, Nov 6, 2018 at 1:19 PM Maxim Dounin wrote: > >> Hello! >> >> On Sat, Nov 03, 2018 at 06:14:15PM +0000, Bogdan via nginx wrote: >> >>> Hello, everyone. >>> >>> I am stuck with a fresh installation which runs absolutely fine except it doesn't offer TLS1.3, which is the biggest reason for updating the server. >>> >>> Below is some info about my config.
>>> >>> Distribution: Ubuntu 18.04 server with kernel 4.15.0-38-generic >>> >>> nginx compile options: nginx/1.15.5 (Ubuntu) >>> built by gcc 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04) >>> built with OpenSSL 1.1.1 11 Sep 2018 >>> TLS SNI support enabled >>> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --user=nobody --group=nogroup --build=Ubuntu --builddir=nginx-1.15.5 --with-openssl=../openssl-1.1.1 --with-pcre=../pcre-8.42 --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-openssl-opt=no-nextprotoneg --with-select_module --with-poll_module --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_geoip_module=dynamic --with-http_auth_request_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-perl_modules_path=/usr/share/perl/5.26.1 --with-perl=/usr/bi >> n/perl --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/cache/nginx/client_temp --without-http_empty_gif_module --without-http_browser_module --without-http_fastcgi_module --without-http_uwsgi_module --without-http_scgi_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-stream=dynamic --with-stream_ssl_module --with-stream_realip_module --with-stream_geoip_module=dynamic --with-stream_ssl_preread_module --with-compat --with-debug >>> >>> /etc/nginx/sites-available/default: >>> >>> ssl_session_cache shared:SSL:1m; >>> >>> server { >>> >>> ssl_early_data on; >>> ssl_dhparam /etc/nginx/ssl/dh4096.pem; >>> ssl_session_timeout 5m; >>> ssl_stapling on; >>> 
ssl_stapling_verify on; >>> ssl_prefer_server_ciphers on; >>> ssl_protocols TLSv1.2 TLSv1.3; >>> ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA; >>> ssl_ecdh_curve secp521r1:secp384r1; >>> >>> } >>> >>> I can't reach beyond TLS1.2 with Firefox 63 (security.tls.version.max = 4, that is TLS1.3 RFC as far as I know) and ssllabs.com's test says TLSv1.3 is non-existent on the server. >>> >>> Any help would be much appreciated. >> >> Make sure you have properly configured ssl_protocols in the >> default server for the listen socket in question. If unsure, >> configure ssl_protocols at the http{} level. >> >> Note well that testing using "openssl s_client" from the OpenSSL >> library you've built nginx with is the most reliable approach, as it >> ensures that proper TLSv1.3 variant is used by the client. >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From moorage at protonmail.ch Sun Nov 11 15:55:06 2018 From: moorage at protonmail.ch (Moorage) Date: Sun, 11 Nov 2018 15:55:06 +0000 Subject: Exposing external JSON API in an Nginx custom path? Message-ID: I have a vhost running on nginx/1.15.6, https://example.com I have a standalone API service (fwiw, Gentics Mesh) running at http://mesh.example.com It exposes its UI, per its own config as apiUrl: '/api/v1/', Browsing to the direct link, Mesh responds as expected, correctly rendering/displaying its UI Login Form. With curl, curl -i http://mesh.example.com:8080 HTTP/1.1 302 Found Location: /mesh-ui/ Content-Length: 0 I want to expose that UI at an Nginx site custom path. 
https://example.com/mesh For now, both run on the same machine/IP, host example.com 10.1.2.3 host mesh.example.com 10.1.2.3 In Nginx vhost config, I've tried to set up a proxy as upstream meshproxy { server 10.1.2.3:8080; } server { listen 10.1.2.3:443 ssl http2; server_name example.com; ... location ~ /mesh/ { proxy_set_header Accept 'application/json'; proxy_set_header Content-Type 'application/json'; proxy_set_header Connection "upgrade"; proxy_set_header Host $http_host; proxy_set_header Upgrade $http_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-NginX-Proxy true; proxy_set_header X-Real-IP $remote_addr; proxy_buffering off; proxy_connect_timeout 5; proxy_http_version 1.1; proxy_intercept_errors on; proxy_read_timeout 240; proxy_pass http://meshproxy; proxy_redirect off; } ... } With that config, browser access to https://example.com/mesh doesn't display the intended Mesh UI. Instead, it displays a response of { "message" : "Not Found", "internalMessage" : "The rest endpoint or resource for given path {/mesh/} could not be found. Please verify that your Accept header is set correctly. I got {application/json}. It must accept {application/json}" } Testing GET with curl, at that link, currently returns 403, curl --include --http2 --ipv4 --ssl --tlsv1.2 --key-type PEM --cert-type PEM --key /ssl/client.key.pem --cert /ssl/client.crt.pem --cacert /ssl/CA.crt.pem -X GET https://example.com/mesh/ HTTP/2 403 date: Sat, 10 Nov 2018 14:58:24 GMT content-type: text/html; charset=utf-8 content-length: 146 vary: Accept-Encoding secure: Server x-content-type-options: nosniff 403 Forbidden

nginx
I have no idea yet why it's 'Forbidden'. I'm guessing the problem is something in my Nginx config? What config needs change/addition to get that API UI 'exposed' correctly in the Nginx custom path? From jindan at gmail.com Sun Nov 11 15:58:58 2018 From: jindan at gmail.com (Jindan Zhou) Date: Sun, 11 Nov 2018 09:58:58 -0600 Subject: Rewrite request url to match the query string and normalization In-Reply-To: References: Message-ID: I have a simple nginx forward proxy, configured as: server { listen 8000; resolver 8.8.8.8; location / { proxy_pass http://$host; proxy_set_header Host $host; } } The client behind its ISP firewall sends the request (per nginx log): GET http://www.clientisp.com/path/rewrite.do?url=http%3A%2F%2Fwww.example.com HTTP/1.1 How do I transform the requested url to http://www.example.com before it is sent to the upstream? I looked up many posts online, but I am still confused about: 1. The online examples usually teach how you match the uri part, but my goal is to obtain the queried string only, i.e., everything after the equals sign "=", http%3A%2F%2Fwww.example.com. 2. I have no idea how to decode the percent-encoded symbols into normalized ones. Thanks for your input! -- .--- .. -. -.. .- -. --.. .... --- ..- From francis at daoine.org Sun Nov 11 20:36:03 2018 From: francis at daoine.org (Francis Daly) Date: Sun, 11 Nov 2018 20:36:03 +0000 Subject: Exposing external JSON API in an Nginx custom path? In-Reply-To: References: Message-ID: <20181111203603.hw767fkhlvqjvsqx@daoine.org> On Sun, Nov 11, 2018 at 03:55:06PM +0000, Moorage via nginx wrote: Hi there, > curl -i http://mesh.example.com:8080 > HTTP/1.1 302 Found > Location: /mesh-ui/ > Content-Length: 0 > > I want to expose that UI at an Nginx site custom path. > > https://example.com/mesh So you want the request to nginx of "/mesh/" to become a request to your upstream server of "/"?
It can be done; however, I suggest that you will find things easier to reverse-proxy if you can configure the mesh server to expect that its root is /mesh/, so that it exposes its api at /mesh/api/v1 and it redirects to /mesh/mesh-ui/ > location ~ /mesh/ { "~" is not "starts with". You probably want "^~" there instead. > proxy_pass http://meshproxy; If you want the request to upstream to be "/mesh/", use that. Otherwise, use proxy_pass http://meshproxy/; > proxy_redirect off; You probably do not want that. Perhaps "proxy_redirect default;"; perhaps something else based on whatever the mesh service actually returns. From your example, you may want proxy_redirect / /mesh/; Good luck with it, f -- Francis Daly francis at daoine.org From moorage at protonmail.ch Sun Nov 11 20:50:14 2018 From: moorage at protonmail.ch (Moorage) Date: Sun, 11 Nov 2018 20:50:14 +0000 Subject: Exposing external JSON API in an Nginx custom path? In-Reply-To: <20181111203603.hw767fkhlvqjvsqx@daoine.org> References: <20181111203603.hw767fkhlvqjvsqx@daoine.org> Message-ID: Hi, > So you want the request to nginx of "/mesh/" to become a request to your > upstream server of "/"? Initially, that was the idea. For no reason OTHER than out of the box, the upstream UI *is*, by default, available at http://mesh.example.com:8080/ > It can be done; however, I suggest that you will find things easier > to reverse-proxy if you can configure the mesh server to expect that > its root is /mesh/, so that it exposes its api at /mesh/api/v1 and it > redirects to /mesh/mesh-ui/ In the mesh ui config, there IS this config/mesh-ui-config.js (function(window, document) { /** * Settings which can be configured per app instance, without requiring the app be re-built from * source. */ var meshUiConfig = { // The URL to the Mesh API apiUrl: '/api/v1/', // The ISO-639-1 code of the default language ...
So I *can* change - apiUrl: '/api/v1/', + apiUrl: '/mesh/api/v1/', which I have tried -- but it didn't end up working for me. Probably because of everything _else_ wrong so far. > > > location ~ /mesh/ { > > > > "~" is not "starts with". You probably want "^~" there instead. > > > proxy_pass http://meshproxy; > > > > If you want the request to upstream to be "/mesh/", use that. Otherwise, use > > proxy_pass http://meshproxy/; > > > proxy_redirect off; > > > > You probably do not want that. Perhaps "proxy_redirect default;"; perhaps > something else based on whatever the mesh service actually returns. From > your example, you may want > > proxy_redirect / /mesh/; Some more 'stuff' to try! Let's see how it works ... Cheers. From moorage at protonmail.ch Mon Nov 12 03:03:39 2018 From: moorage at protonmail.ch (Moorage) Date: Mon, 12 Nov 2018 03:03:39 +0000 Subject: Exposing external JSON API in an Nginx custom path? In-Reply-To: References: <20181111203603.hw767fkhlvqjvsqx@daoine.org> Message-ID: <0Zofeuv9Key9-gtdCe9TFQ_pW0HkmMdcjSZigHyx_UhuGB8dj__s2nS8WqnuJvzu8bt0Ig__btJU5vjQmSsviphngy5pN0LnOAr7mu5FqBQ=@protonmail.ch> > Some more 'stuff' to try! > > Let's see how it works ... The secret-sauce recipe was: in the mesh-ui config apiUrl: '/mesh/api/v1/', and in nginx location ^~ /mesh/ { ... proxy_pass http://meshproxy/; proxy_redirect / /mesh/; ... it now works as hoped. Thanks for the hints! From francis at daoine.org Mon Nov 12 11:12:22 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 12 Nov 2018 11:12:22 +0000 Subject: Exposing external JSON API in an Nginx custom path?
In-Reply-To: <0Zofeuv9Key9-gtdCe9TFQ_pW0HkmMdcjSZigHyx_UhuGB8dj__s2nS8WqnuJvzu8bt0Ig__btJU5vjQmSsviphngy5pN0LnOAr7mu5FqBQ=@protonmail.ch> References: <20181111203603.hw767fkhlvqjvsqx@daoine.org> <0Zofeuv9Key9-gtdCe9TFQ_pW0HkmMdcjSZigHyx_UhuGB8dj__s2nS8WqnuJvzu8bt0Ig__btJU5vjQmSsviphngy5pN0LnOAr7mu5FqBQ=@protonmail.ch> Message-ID: <20181112111222.yo2mer2sfhzzksf5@daoine.org> On Mon, Nov 12, 2018 at 03:03:39AM +0000, Moorage via nginx wrote: Hi there, > The secret-sauce recipe was: > > in the mesh-ui config > > apiUrl: '/mesh/api/v1/', > > and in nginx > > location ^~ /mesh/ { > ... > proxy_pass http://meshproxy/; > proxy_redirect / /mesh/; > ... > > it now works as hoped. Good to hear; thanks for letting the list know, so that the next person with the issue will be able to find the answer too. From briefly looking at the Gentics Mesh documentation, it appears that you might want to configure a few other parts of it to include the "/mesh" prefix too, if you want to expose those parts through the nginx reverse proxy. /mesh-ui/ and /demo/ are things that may want to be changed -- but it is also probably reasonable not to make those available externally. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Nov 12 11:34:08 2018 From: nginx-forum at forum.nginx.org (petecooper) Date: Mon, 12 Nov 2018 06:34:08 -0500 Subject: Nesting variables to build header contents - is there a better way? Message-ID: Hello. I use `add_header` to build Content Security Policy and Feature Policy headers. To help with change control and maintainability I build an Nginx variable from nothing and add each Content Security Policy and Feature Policy data/source type on a different line. The Nginx variable is unique to the `server` block.
For example (excerpt from `server` block for subdomain.example.com): #nested variable for Content Security Policy maintainability set $contentsecuritypolicy_https_subdomain_example_com ''; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}connect-src \'self\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}default-src \'none\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}font-src \'self\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}frame-ancestors \'none\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}img-src \'self\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}manifest-src \'self\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}media-src \'self\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}object-src \'none\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}script-src \'self\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}style-src https://cdnjs.cloudflare.com \'self\';'; set $contentsecuritypolicy_https_subdomain_example_com '${contentsecuritypolicy_https_subdomain_example_com}worker-src \'self\';'; add_header Content-Security-Policy $contentsecuritypolicy_https_subdomain_example_com; #nested variable for Feature Policy maintainability set $featurepolicy_https_subdomain_example_com ''; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}camera \'none\';'; set 
$featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}fullscreen \'self\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}geolocation \'none\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}gyroscope \'none\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}magnetometer \'none\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}microphone \'none\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}midi \'none\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}notifications \'self\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}payment \'none\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}push \'self\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}speaker \'none\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}sync-xhr \'self\';'; set $featurepolicy_https_subdomain_example_com '${featurepolicy_https_subdomain_example_com}vibrate \'none\''; #no trailing semicolon add_header Feature-Policy $featurepolicy_https_subdomain_example_com; This method provides a level of visibility for change control, and is preferable to the everything-on-one-line method for each header type. I am aware this method also consumes additional memory due to the increased `variables_hash_bucket_size` requirements. Is there an alternative way I could build two headers with each content/source type on its own line, without nesting and appending variables? Thank you in advance for any feedback or advice. 
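One way to trim the repetition, sketched under the assumption that the policy can live in a per-site include file (the file path and the short variable name below are my own, not an established convention): `set` directives only take effect for requests handled by the `server` block that includes them, so the variable name does not need to encode the hostname, and double-quoted values avoid the `\'` escapes:

```nginx
# snippets/csp-example.conf (hypothetical path), included inside server { }
# One policy entry per line; $csp is scoped in practice by whichever
# server block includes this file.
set $csp '';
set $csp "${csp}connect-src 'self';";
set $csp "${csp}default-src 'none';";
set $csp "${csp}img-src 'self';";
set $csp "${csp}script-src 'self';";
set $csp "${csp}style-src https://cdnjs.cloudflare.com 'self';";
add_header Content-Security-Policy $csp;
```

Each `server` block then just does `include snippets/csp-example.conf;`. This keeps one entry per line and drops the long identifiers, but it does not remove the per-request cost of evaluating the `set` chain.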
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281912,281912#msg-281912 From nginx-forum at forum.nginx.org Tue Nov 13 03:29:50 2018 From: nginx-forum at forum.nginx.org (Cubic) Date: Mon, 12 Nov 2018 22:29:50 -0500 Subject: Make ngx_http_v2_push_resource api public Message-ID: I intend to write an Nginx module which supports downstream message push through the http2 protocol. The main process is as below. 1. The client connects to Nginx and uses the http2 protocol to send a long polling request stream. 2. Nginx holds this long polling request stream and waits for downstream messages. 3. If a message arrives, Nginx can send a push promise through the long polling stream and put the message in another stream associated with the push promise. There is an ngx_http_v2_push_resource api in ngx_http_v2_filter_module.c which assembles the push promise frame and creates another stream to invoke an internal request for the specified path to send data, so I can register a content handler with that path to send the message. But sadly, this api is declared static. Would you make this amazing api public? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281920,281920#msg-281920 From mdounin at mdounin.ru Tue Nov 13 17:10:20 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Nov 2018 20:10:20 +0300 Subject: Make ngx_http_v2_push_resource api public In-Reply-To: References: Message-ID: <20181113171019.GP99070@mdounin.ru> Hello! On Mon, Nov 12, 2018 at 10:29:50PM -0500, Cubic wrote: > I intend to write an Nginx module which supports downstream message push > through the http2 protocol. > > The main process is as below. > 1. The client connects to Nginx and uses the http2 protocol to send a long polling > request stream. > 2. Nginx holds this long polling request stream and waits for downstream > messages. > 3.
If a message arrives, Nginx can send a push promise through the long > polling stream and put the message in another stream associated with the > push promise. > > There is an ngx_http_v2_push_resource api in ngx_http_v2_filter_module.c > which assembles the push promise frame and creates another stream to invoke an > internal request for the specified path to send data, so I can register a > content handler with that path to send the message. > But sadly, this api is declared static. > > Would you make this amazing api public? A similar request was previously rejected, see this thread for details: http://mailman.nginx.org/pipermail/nginx-devel/2018-February/010847.html In short: this API is not going to be public. If you want to push resources from your module, consider using the "http2_push" directive with a variable, or using the "Link: rel=preload" header. As for the design you describe, it looks unnecessarily HTTP/2-centric. You may want to consider returning an actual response to the long polling request instead of trying to maintain a fake request and return associated pushed resources. -- Maxim Dounin http://mdounin.ru/ From bob.worobec at nginx.com Tue Nov 13 18:45:04 2018 From: bob.worobec at nginx.com (Bob Worobec) Date: Tue, 13 Nov 2018 10:45:04 -0800 Subject: NGINX 2018 Training Survey Message-ID: Hi, At NGINX we are always trying to improve our high-performing, lightweight technology. A big part of that is our training program ( https://university.nginx.com), designed to help NGINX users build skills for both basic and advanced uses of NGINX products. To help us do better, if you could please take a few minutes to answer just 15 quick questions on NGINX usage and training, we'd greatly appreciate it. Completing this survey should take about 10 minutes.
https://www.surveygizmo.com/s3/4674476/NGINX-Training-Survey Thank you in advance for your input, Bob -- Bob Worobec Training Manager Mobile: 650-278-0675 From nginx-forum at forum.nginx.org Tue Nov 13 22:23:41 2018 From: nginx-forum at forum.nginx.org (petecooper) Date: Tue, 13 Nov 2018 17:23:41 -0500 Subject: Debugging `try_files` with 404 as a last resort Message-ID: <7030a33671589fd1028313f11ba4eff8.NginxMailingListEnglish@forum.nginx.org> Hello. I've got into knots with `try_files` inside `location` when PHP is involved. Ideally, I would like the following route for `try_files` (in order): * $uri (requested URI) * $uri/ (requested URI, trailing slash) * /index.php?$args (use root `index.php` with args) * =404 (Nginx returns 404) Here is my current code: location / { index index.html index.php; limit_except GET HEAD POST { deny all; } try_files $uri $uri/ /index.php?$args; } location ~ ^.+\.php(?:/.*)?$ { fastcgi_hide_header "X-Powered-By"; fastcgi_index index.php; fastcgi_keep_conn on; fastcgi_next_upstream error timeout; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/var/run/php/php7.2-fpm.sock; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; try_files $uri =404; } Note there is no `=404` on the first `location`. PHP is installed and running, and the PHP CMS works fine. I am somewhat confused as to why things start to misbehave when I add a `=404` to the first location block. I will explain:
== Using `try_files $uri $uri/ /index.php?$args;` == * https://example.com (200 OK) * https://example.com/index.php (200 OK) * https://example.com/articles (200 OK) * https://example.com/index.php?s=articles (200 OK) * https://example.com/non-existent-uri/ (404 Not Found, rendered by CMS via `index.php`) * https://example.com/non-existent-uri (404 Not Found, rendered by CMS via `index.php`) == Using `try_files $uri $uri/ /index.php?$args =404;` == * https://example.com (200 OK) * https://example.com/index.php (200 OK) * https://example.com/articles (404 Not Found, rendered by Nginx) * https://example.com/index.php?s=articles (200 OK) * https://example.com/non-existent-uri/ (404 Not Found, rendered by Nginx) * https://example.com/non-existent-uri (404 Not Found, rendered by Nginx) I do not fully understand why https://example.com/articles results in a 404 Not Found when `index.php?$args` generates a valid page (200 OK) and precedes `=404`. Does the `=` carry some weight? I would greatly appreciate a pointer for further reading so I can better understand. Thank you in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281938,281938#msg-281938 From francis at daoine.org Tue Nov 13 23:30:42 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 13 Nov 2018 23:30:42 +0000 Subject: Debugging `try_files` with 404 as a last resort In-Reply-To: <7030a33671589fd1028313f11ba4eff8.NginxMailingListEnglish@forum.nginx.org> References: <7030a33671589fd1028313f11ba4eff8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181113233042.n7qnajxkuhonjgab@daoine.org> On Tue, Nov 13, 2018 at 05:23:41PM -0500, petecooper wrote: Hi there, > Ideally, I would like the following route for `try_files` (in order): > > * $uri (requested URI) > * $uri/ (requested URI, trailing slash) > * /index.php?$args (use root `index.php` with args) > * =404 (Nginx returns 404) Do you know whether the file that corresponds to the url /index.php exists? 
If it does exist, use try_files $uri $uri/ /index.php?$args; If it does not exist, use try_files $uri $uri/ =404; But really, in the latter case, you are probably better off just omitting the try_files line altogether. > I do not fully understand why https://example.com/articles results in a 404 > Not Found when `index.php?$args` generates a valid page (200 OK) and > precedes `=404`. Does the `=` carry some weight? > > I would greatly appreciate a pointer for further reading so I can better > understand. http://nginx.org/r/try_files The final argument to try_files is a uri or =code. "code" is returned. "uri" is searched for across all locations (due to an internal redirect). The other arguments to try_files are files. Does a file with the name $document_root/index.php?$args (expanding the two variables) exist? If not, processing will continue until the uri or =code at the end of the argument list. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Nov 14 01:30:08 2018 From: nginx-forum at forum.nginx.org (petecooper) Date: Tue, 13 Nov 2018 20:30:08 -0500 Subject: Debugging `try_files` with 404 as a last resort In-Reply-To: <20181113233042.n7qnajxkuhonjgab@daoine.org> References: <20181113233042.n7qnajxkuhonjgab@daoine.org> Message-ID: <57d308eadce74368b4f8a9e8c2f028dd.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > Do you know whether the file that corresponds to the url /index.php > exists? > > If it does exist, use > > try_files $uri $uri/ /index.php?$args; > > If it does not exist, use > > try_files $uri $uri/ =404; Hello Francis. Thank you for your reply. I have some servers where the `index.php` exists, and some where it does not exist. 
I have been using the approach you outlined above successfully where I know `index.php` exists (or not), I think my question could perhaps be worded: is there a `try_files` option where I can have $uri, then $uri/, then `index.php` and finally `=404` if all else fails. > http://nginx.org/r/try_files > > The final argument to try_files is a uri or =code. "code" is > returned. "uri" is searched for across all locations (due to an > internal > redirect). > > The other arguments to try_files are files. This is how I understood it, thank you for confirming! > Does a file with the name $document_root/index.php?$args (expanding > the > two variables) exist? If not, processing will continue until the uri > or > =code at the end of the argument list. I know the file `index.php` exists in the `root`. Would using the two expanded variables in a log be sufficient to confirm Nginx can see it, or am I missing something? Thanks again, I am grateful for your time and assistance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281938,281940#msg-281940 From francis at daoine.org Wed Nov 14 08:22:17 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 14 Nov 2018 08:22:17 +0000 Subject: Debugging `try_files` with 404 as a last resort In-Reply-To: <57d308eadce74368b4f8a9e8c2f028dd.NginxMailingListEnglish@forum.nginx.org> References: <20181113233042.n7qnajxkuhonjgab@daoine.org> <57d308eadce74368b4f8a9e8c2f028dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181114082217.e2izn7eiyzr3iosh@daoine.org> On Tue, Nov 13, 2018 at 08:30:08PM -0500, petecooper wrote: > Francis Daly Wrote: Hi there, > I think my question could perhaps be > worded: is there a `try_files` option where I can have $uri, then $uri/, > then `index.php` and finally `=404` if all else fails. I believe "not directly". You possibly could have try_files falling back to a uri which is handled in a location which in turn does try_files? 
> > http://nginx.org/r/try_files > > > > The final argument to try_files is a uri or =code. "code" is > > returned. "uri" is searched for across all locations (due to an > > internal > > redirect). > > > > The other arguments to try_files are files. > > This is how I understood it, thank you for confirming! > > > Does a file with the name $document_root/index.php?$args (expanding > > the > > two variables) exist? If not, processing will continue until the uri > > or > > =code at the end of the argument list. > > I know the file `index.php` exists in the `root`. Would using the two > expanded variables in a log be sufficient to confirm Nginx can see it, or am > I missing something? If $args is empty, then your config asks nginx to serve the file `index.php?`, which is not the same as the file `index.php`. Basically, try_files is for trying files, and gives one non-file fallback. You seem to want more than one non-file fallback. Maybe you can build something close to what you want using error_page? Good luck with it, f -- Francis Daly francis at daoine.org From lucas at lucasrolff.com Wed Nov 14 14:36:10 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Wed, 14 Nov 2018 14:36:10 +0000 Subject: Byte-range request not possible for proxy_cache if origin doesn't return accept-ranges header Message-ID: <4F158B9A-C71B-4F32-99E4-2BD4A9CED0ED@lucasrolff.com> Hi guys, I've been investigating why byte-range requests didn't work for files that are cached in nginx with proxy_cache, I'd simply do something like: $ curl -r 0-1023 https://cdn.domain.com/mymovie.mp4 What would happen was that the full length of a file would be returned, despite being in the cache already (I know that on the initial request, you can't seek into a file). Now, after investigation, I compared it with another file that I knew worked fine, I looked in the file on disk, the only difference between the two files, was the fact that one cached file contained Accept-Ranges: bytes, and another didn't have it.
Investigating this, I tried to add the header Accept-Ranges: bytes on an origin server, and everything started to work from nginx as well. Now, I understand that Accept-Ranges: bytes should be sent whenever a server supports byte-range requests. I'd expect that after nginx has fetched the full file, it would be perfectly capable of doing byte-range requests itself, but it seems like it's not a possibility. I'm not really sure if this is a bug or not, but I do find it odd that the behavior is something like: "If origin does not understand byte-range requests, then I also shouldn't understand them". Is there a way to solve this on the nginx side directly to "fix" origin servers that do not send an Accept-Ranges header, or is it something that could possibly be fixed in such a way that nginx doesn't "require" the cached file to contain the "Accept-Ranges: bytes" header, to be able to do range requests to it? Thanks in advance! Best Regards, Lucas Rolff From aquilinux at gmail.com Wed Nov 14 14:54:20 2018 From: aquilinux at gmail.com (aquilinux) Date: Wed, 14 Nov 2018 15:54:20 +0100 Subject: Strange behaviour of %27 encoding in rewrite Message-ID: Hi all, I'm seeing a strange behaviour in nginx rewrite involving encoded urls for *%27* I have this type of rewrite: rewrite "^/brands/l-oreal$" > https://somedomain.tld/L%27Or%C3%A9al-Paris/index.html?
permanent;

That translates to this:

[~]> curl -kIL https://mydomain.tld/brands/l-oreal
HTTP/2 301
server: nginx
date: Wed, 14 Nov 2018 14:44:21 GMT
content-type: text/html
content-length: 178
location: https://somedomain.tld/L'Or%C3%A9al-Paris/index.html
strict-transport-security: max-age=15768000; includeSubDomains

If I change %27 to %20 I have:

[~]> curl -kIL https://mydomain.tld/brands/l-oreal
HTTP/2 301
server: nginx
date: Wed, 14 Nov 2018 14:31:09 GMT
content-type: text/html
content-length: 178
location: https://somedomain.tld/L%20Or%C3%A9al-Paris/index.html
strict-transport-security: max-age=15768000; includeSubDomains

as expected.

The same strange behaviour applies to *%2C*, that is decoded to "*,*" instead of being passed unencoded, as expected.
This is driving me nuts, can anyone explain (or fix) this?

Thanks!

-- 
"Madness, like small fish, runs in hosts, in vast numbers of instances."
Nessuno mi pettina bene come il vento. ("No one combs my hair as well as the wind.")

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Wed Nov 14 15:40:38 2018
From: nginx-forum at forum.nginx.org (Olaf van der Spek)
Date: Wed, 14 Nov 2018 10:40:38 -0500
Subject: Enable http2 and ssl by default
Message-ID: 

> If the directive is not present then either *:80 is used if nginx runs with the superuser privileges, or *:8000 otherwise.

It'd be nice if http2 and ssl (if cert is configured) were enabled automatically instead of just listening on port 80.
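[Editor's note: for context, a sketch of the explicit configuration that this request would turn into the default. The server name and certificate paths below are placeholders, not taken from the thread.]

```nginx
# As of nginx 1.14/1.15, ssl and http2 are only enabled when the listen
# directive asks for them; a bare "listen" speaks plain HTTP.
server {
    listen 443 ssl http2;
    server_name example.com;                       # placeholder

    ssl_certificate     /etc/ssl/example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
}
```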
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281948,281948#msg-281948

From arut at nginx.com  Wed Nov 14 16:35:49 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 14 Nov 2018 19:35:49 +0300
Subject: Byte-range request not possible for proxy_cache if origin doesn't return accept-ranges header
In-Reply-To: <4F158B9A-C71B-4F32-99E4-2BD4A9CED0ED@lucasrolff.com>
References: <4F158B9A-C71B-4F32-99E4-2BD4A9CED0ED@lucasrolff.com>
Message-ID: <20181114163549.GQ33859@Romans-MacBook-Air.local>

Hi,

On Wed, Nov 14, 2018 at 02:36:10PM +0000, Lucas Rolff wrote:
> Hi guys,
>
> I've been investigating why byte-range requests didn't work for files that are cached in nginx with proxy_cache, I'd simply do something like:
>
> $ curl -r 0-1023 https://cdn.domain.com/mymovie.mp4
>
> What would happen was that the full length of a file would be returned, despite being in the cache already (I know that the initial request, you can't seek into a file).
>
> Now, after investigation, I compared it with another file that I knew worked fine, I looked in the file on disk, the only difference between the two files, was the fact that one cached file contained Accept-Ranges: bytes, and another didn't have it.
>
> Investigating this, I tried to add the header Accept-Ranges: bytes on an origin server, and everything started to work from nginx as well.
>
> Now, I understand that Accept-Ranges: bytes should be sent whenever a server supports byte-range requests.
> I'd expect that after nginx has fetched the full file, that it would be perfectly capable of doing byte-range requests itself, but it seems like it's not a possibility.
> > Is there a way to solve this on the nginx side directly to "fix" origin servers that do not send an Accept-Ranges header, or is it something that could possibly be fixed in such a way that nginx doesn't "require" the cached file to contain the "Accept-Ranges: bytes" header, to be able to do range requests to it? The "proxy_force_ranges" directive enables byte ranges regardless of the Accept-Ranges header. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_force_ranges -- Roman Arutyunyan From jim at k4vqc.com Wed Nov 14 17:44:12 2018 From: jim at k4vqc.com (Jim Popovitch) Date: Wed, 14 Nov 2018 12:44:12 -0500 Subject: Enable http2 and ssl by default In-Reply-To: References: Message-ID: <1542217452.2340.1.camel@k4vqc.com> On Wed, 2018-11-14 at 10:40 -0500, Olaf van der Spek wrote: > > If the directive is not present then either *:80 is used if nginx > > runs > > with the superuser privileges, or *:8000 otherwise. > > It'd be nice if http2 and ssl (if cert is configured) were enabled > automatically instead of just listening on port 80. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281948,28194 > 8#msg-281948 > It'd be nice if everyone used email instead of forums, it'd be nice if $software never needed configuration and just knew what everyone expected of it, it'd be nice if ... yeah. -Jim P. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL: 

From lucas at lucasrolff.com  Wed Nov 14 18:50:23 2018
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Wed, 14 Nov 2018 18:50:23 +0000
Subject: Byte-range request not possible for proxy_cache if origin doesn't return accept-ranges header
In-Reply-To: <20181114163549.GQ33859@Romans-MacBook-Air.local>
References: <4F158B9A-C71B-4F32-99E4-2BD4A9CED0ED@lucasrolff.com> <20181114163549.GQ33859@Romans-MacBook-Air.local>
Message-ID: <856248AD-2C6A-44EF-9429-58AC3F14F79E@lucasrolff.com>

Hi Roman,

I can confirm that indeed does fix the problem, thanks!

I do wonder though, why not let nginx make the decision instead of relying on what the origin sends or does not send?

Thanks!

On 14/11/2018, 17.36, "nginx on behalf of Roman Arutyunyan" wrote:

Hi,

On Wed, Nov 14, 2018 at 02:36:10PM +0000, Lucas Rolff wrote:
> Hi guys,
>
> I've been investigating why byte-range requests didn't work for files that are cached in nginx with proxy_cache, I'd simply do something like:
>
> $ curl -r 0-1023 https://cdn.domain.com/mymovie.mp4
>
> What would happen was that the full length of a file would be returned, despite being in the cache already (I know that the initial request, you can't seek into a file).
>
> Now, after investigation, I compared it with another file that I knew worked fine, I looked in the file on disk, the only difference between the two files, was the fact that one cached file contained Accept-Ranges: bytes, and another didn't have it.
>
> Investigating this, I tried to add the header Accept-Ranges: bytes on an origin server, and everything started to work from nginx as well.
>
> Now, I understand that Accept-Ranges: bytes should be sent whenever a server supports byte-range requests.
> I'd expect that after nginx has fetched the full file, that it would be perfectly capable of doing byte-range requests itself, but it seems like it's not a possibility.
>
> I'm not really sure if this is a bug or not, but I do find it odd that the behavior is something like: "If origin does not understand byte-range requests, then I also shouldn't understand them".
>
> Is there a way to solve this on the nginx side directly to "fix" origin servers that do not send an Accept-Ranges header, or is it something that could possibly be fixed in such a way that nginx doesn't "require" the cached file to contain the "Accept-Ranges: bytes" header, to be able to do range requests to it?

The "proxy_force_ranges" directive enables byte ranges regardless of the Accept-Ranges header.

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_force_ranges

-- 
Roman Arutyunyan
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From roger at netskrt.io  Wed Nov 14 20:17:57 2018
From: roger at netskrt.io (Roger Fischer)
Date: Wed, 14 Nov 2018 12:17:57 -0800
Subject: Securing the HTTPS private key
Message-ID: 

Hello,

does NGINX support any mechanisms to securely access the private key of server certificates?

Specifically, could NGINX make a request to a key store, rather than reading from a local file?

Are there any best practices for keeping private keys secure?

I understand the basics. The key file should only be readable by root. I cannot protect the key with a pass-phrase, as NGINX needs to start and restart autonomously.

Thanks…

Roger

From rpaprocki at fearnothingproductions.net  Wed Nov 14 20:21:57 2018
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Wed, 14 Nov 2018 12:21:57 -0800
Subject: Securing the HTTPS private key
In-Reply-To: 
References: 
Message-ID: <64145365-7A3A-43D8-9538-E083CC5F3B59@fearnothingproductions.net>

Hi,

You might want to consider something like OpenResty, which allows for serving certificates on the fly with Lua logic.
You can use this to fetch cert/key material via Vault or some other secure data store that can be accessed via TCP (or you could also keep the encrypted private key on-disk and fetch the secret at worker initialization via some Lua logic).

See https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/ssl.md for an example of the former.

> On Nov 14, 2018, at 12:17, Roger Fischer wrote:
>
> Hello,
>
> does NGINX support any mechanisms to securely access the private key of server certificates?
>
> Specifically, could NGINX make a request to a key store, rather than reading from a local file?
>
> Are there any best practices for keeping private keys secure?
>
> I understand the basics. The key file should only be readable by root. I cannot protect the key with a pass-phrase, as NGINX needs to start and restart autonomously.
>
> Thanks…
>
> Roger
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arut at nginx.com  Thu Nov 15 09:29:59 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 15 Nov 2018 12:29:59 +0300
Subject: Byte-range request not possible for proxy_cache if origin doesn't return accept-ranges header
In-Reply-To: <856248AD-2C6A-44EF-9429-58AC3F14F79E@lucasrolff.com>
References: <4F158B9A-C71B-4F32-99E4-2BD4A9CED0ED@lucasrolff.com> <20181114163549.GQ33859@Romans-MacBook-Air.local> <856248AD-2C6A-44EF-9429-58AC3F14F79E@lucasrolff.com>
Message-ID: <20181115092959.GR33859@Romans-MacBook-Air.local>

Hi Lucas,

On Wed, Nov 14, 2018 at 06:50:23PM +0000, Lucas Rolff wrote:
> Hi Roman,
>
> I can confirm that indeed does fix the problem, thanks!
>
> I do wonder though, why not let nginx make the decision instead of relying on what the origin sends or does not send?
nginx tries to be transparent and not introduce any changes in the response and behavior of the origin unless explicitly requested.

> Thanks!
>
> On 14/11/2018, 17.36, "nginx on behalf of Roman Arutyunyan" wrote:
>
> Hi,
>
> On Wed, Nov 14, 2018 at 02:36:10PM +0000, Lucas Rolff wrote:
> > Hi guys,
> >
> > I've been investigating why byte-range requests didn't work for files that are cached in nginx with proxy_cache, I'd simply do something like:
> >
> > $ curl -r 0-1023 https://cdn.domain.com/mymovie.mp4
> >
> > What would happen was that the full length of a file would be returned, despite being in the cache already (I know that the initial request, you can't seek into a file).
> >
> > Now, after investigation, I compared it with another file that I knew worked fine, I looked in the file on disk, the only difference between the two files, was the fact that one cached file contained Accept-Ranges: bytes, and another didn't have it.
> >
> > Investigating this, I tried to add the header Accept-Ranges: bytes on an origin server, and everything started to work from nginx as well.
> >
> > Now, I understand that Accept-Ranges: bytes should be sent whenever a server supports byte-range requests.
> > I'd expect that after nginx has fetched the full file, that it would be perfectly capable of doing byte-range requests itself, but it seems like it's not a possibility.
>
> The "proxy_force_ranges" directive enables byte ranges regardless of the
> Accept-Ranges header.
>
> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_force_ranges
>
> --
> Roman Arutyunyan
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Roman Arutyunyan

From roughlea at hotmail.co.uk  Thu Nov 15 11:10:08 2018
From: roughlea at hotmail.co.uk (rough lea)
Date: Thu, 15 Nov 2018 11:10:08 +0000
Subject: How to disable ipv6 in nginx?
Message-ID: 

Hi,

I am a newbie running tusd server on macos High Sierra behind an Nginx Proxy running within a docker container. In the logs, I notice that before an _UploadCreated_ event is received there is an attempt to connect to tusd using ipv6 loopback address which fails.

_[crit] 23#23: *4 connect() to [::1]:1080 failed (99: Address not available) while connecting to upstream, client: 172.22.0.1, server: , request: "POST /files/ HTTP/1.1", upstream: "http://[::1]:1080/files/", host: "test.example.com:1081"_

My nginx configuration is listed below…
```
server {
    listen 1081 ssl http2;
    #listen [::]:443 http2 ipv6only=on ssl;

    charset utf-8;
    access_log /dev/stdout;
    error_log /dev/stdout;

    ssl_certificate /server/certs/tls.crt;
    ssl_certificate_key /server/certs/tls.key;

    location /files/ {
        resolver 8.8.8.8 4.2.2.2;
        #resolver 8.8.8.8 4.2.2.2 ipv6only=off;
        proxy_pass http://localhost:1080/files/;
        #proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Disable request and response buffering
        proxy_request_buffering off;
        proxy_buffering off;
        proxy_http_version 1.1;

        # Add X-Forwarded-* headers so that response can reference https and
        # originating host:port
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Allow proxying of websockets if required
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        client_max_body_size 0;
    }
}
```

If I take out the line, _listen [::]:1081 http2 ipv6only=on ssl;_ from the server config block, the issue still occurs.

Upon further reading at [docker](https://docs.docker.com/config/daemon/ipv6/) and [docker-for-mac](https://github.com/docker/for-mac/issues/1432), it appears that ipv6 networking is only available for docker daemons running on Linux hosts???

I have tried adding a resolver and setting ipv6only=off but nginx seems to continue to try and send to the upstream proxy with an ipv6 address.

How can I get nginx to use ipv4 only? Has anybody else experienced and resolved the same issue?

Kind regards

Simon

From mdounin at mdounin.ru  Thu Nov 15 13:03:14 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 15 Nov 2018 16:03:14 +0300
Subject: Securing the HTTPS private key
In-Reply-To: 
References: 
Message-ID: <20181115130314.GV99070@mdounin.ru>

Hello!

On Wed, Nov 14, 2018 at 12:17:57PM -0800, Roger Fischer wrote:

> Hello,
>
> does NGINX support any mechanisms to securely access the private
> key of server certificates?
>
> Specifically, could NGINX make a request to a key store, rather
> than reading from a local file?
>
> Are there any best practices for keeping private keys secure?
>
> I understand the basics. The key file should only be readable by
> root. I cannot protect the key with a pass-phrase, as NGINX
> needs to start and restart autonomously.

You actually can protect the key using a passphrase, see http://nginx.org/r/ssl_password_file. Though this might not be the best idea, since it provides basically the same security while involving higher complexity.

Also, you can use "engine:..." syntax to load keys via OpenSSL engines. This allows using various complex key stores, including hardware tokens, to access keys, though it may not be trivial to configure.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Thu Nov 15 13:05:42 2018
From: nginx-forum at forum.nginx.org (Olaf van der Spek)
Date: Thu, 15 Nov 2018 08:05:42 -0500
Subject: Enable http2 and ssl by default
In-Reply-To: <1542217452.2340.1.camel@k4vqc.com>
References: <1542217452.2340.1.camel@k4vqc.com>
Message-ID: <3059675dab576c36b23978da4b4c1bce.NginxMailingListEnglish@forum.nginx.org>

Why so hostile?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281948,281963#msg-281963

From jim at k4vqc.com  Thu Nov 15 13:17:42 2018
From: jim at k4vqc.com (Jim Popovitch)
Date: Thu, 15 Nov 2018 08:17:42 -0500
Subject: Enable http2 and ssl by default
In-Reply-To: <3059675dab576c36b23978da4b4c1bce.NginxMailingListEnglish@forum.nginx.org>
References: <1542217452.2340.1.camel@k4vqc.com> <3059675dab576c36b23978da4b4c1bce.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1542287862.2101.5.camel@k4vqc.com>

On Thu, 2018-11-15 at 08:05 -0500, Olaf van der Spek wrote:
> Why so hostile?

Why so vague? (see, this is why posting via forums is like cancer.
Hint: the forum rarely sends the context, also not to forget the quoted first line in the thread opener)

To address your concerns about nginx configuration, simply put it's not worth the developers' time to reduce configuration to such a level of ease and thereby possibly breaking the configuration of some beast who wants to run ssl+spdy on port 80.

-Jim P.

From nginx-forum at forum.nginx.org  Thu Nov 15 13:36:02 2018
From: nginx-forum at forum.nginx.org (Olaf van der Spek)
Date: Thu, 15 Nov 2018 08:36:02 -0500
Subject: Enable http2 and ssl by default
In-Reply-To: <1542287862.2101.5.camel@k4vqc.com>
References: <1542287862.2101.5.camel@k4vqc.com>
Message-ID: 

> (see, this is why posting via forums is like cancer. Hint: the forum
> rarely sends the context, also not to forget the quoted first line in
> the thread opener)

A proper forum would do that..

> To address your concerns about nginx configuration, simply put it's not
> worth the developers' time to reduce configuration to such a level of

Are you a nginx developer?

> ease and thereby possibly breaking the configuration of some beast who
> wants to run ssl+spdy on port 80.

That configuration would have a listen line, so the default wouldn't apply and updating the default wouldn't break it.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281948,281966#msg-281966

From jim at k4vqc.com  Thu Nov 15 14:03:10 2018
From: jim at k4vqc.com (Jim Popovitch)
Date: Thu, 15 Nov 2018 09:03:10 -0500
Subject: Enable http2 and ssl by default
In-Reply-To: 
References: <1542287862.2101.5.camel@k4vqc.com>
Message-ID: <1542290590.2101.8.camel@k4vqc.com>

On Thu, 2018-11-15 at 08:36 -0500, Olaf van der Spek wrote:
> > (see, this is why posting via forums is like cancer. Hint: the forum
> > rarely sends the context, also not to forget the quoted first line
> > in the thread opener)
>
> A proper forum would do that..

A proper forum poster would too.
;-)

> > To address your concerns about nginx configuration, simply put it's
> > not worth the developers' time to reduce configuration to such a
> > level of
>
> Are you a nginx developer?

No.

> > ease and thereby possibly breaking the configuration of some beast
> > who wants to run ssl+spdy on port 80.
>
> That configuration would have a listen line, so the default wouldn't
> apply and updating the default wouldn't break it.

So a specific use case. What about port 443 (you haven't mentioned it yet), except what if it's on a non-routable subnet perhaps 8443 should be preferred then? Should nginx also look for certs in /etc/ssl/ that have file names that align with server_name? What about multi-homed servers, should the listen directive default to the IP address(es) that map to server_name?

I could come up with a 100 "ease of use" cases, but they're still not worthy of hard coding into nginx. Every new line of code has the potential to introduce new bugs.

-Jim P.

From francis at daoine.org  Thu Nov 15 14:15:29 2018
From: francis at daoine.org (Francis Daly)
Date: Thu, 15 Nov 2018 14:15:29 +0000
Subject: How to disable ipv6 in nginx?
In-Reply-To: 
References: 
Message-ID: <20181115141529.rxztxjnrtuyru32c@daoine.org>

On Thu, Nov 15, 2018 at 11:10:08AM +0000, rough lea wrote:

Hi there,

> I am a newbie running tusd server on macos High Sierra behind an Nginx Proxy running within a docker container. In the logs, I notice that before an _UploadCreated_ event is received there is an attempt to connect to tusd using ipv6 loopback address which fails.

Your config says

> proxy_pass http://localhost:1080/files/;

Your log says

> _[crit] 23#23: *4 connect() to [::1]:1080

Presumably your system resolver resolves the word "localhost" to the IPv6 address [::1]?

If you change your config to

proxy_pass http://127.0.0.1:1080/files/;

does it do what you want?
(The alternative, of configuring your system resolver to resolve "localhost" to the IPv4 address 127.0.0.1, is probably more work.)

That does not answer the main question you asked; but might resolve the immediate issue.

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From mdounin at mdounin.ru  Thu Nov 15 14:24:09 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 15 Nov 2018 17:24:09 +0300
Subject: How to disable ipv6 in nginx?
In-Reply-To: 
References: 
Message-ID: <20181115142409.GZ99070@mdounin.ru>

Hello!

On Thu, Nov 15, 2018 at 11:10:08AM +0000, rough lea wrote:

> I am a newbie running tusd server on macos High Sierra behind an
> Nginx Proxy running within a docker container. In the logs, I
> notice that before an _UploadCreated_ event is received there is
> an attempt to connect to tusd using ipv6 loopback address which
> fails.
>
> _[crit] 23#23: *4 connect() to [::1]:1080 failed (99: Address not available) while connecting to upstream, client: 172.22.0.1, server: , request: "POST /files/ HTTP/1.1", upstream: "http://[::1]:1080/files/", host: "test.example.com:1081"_
>
> My nginx configuration is listed below…

[...]

> proxy_pass http://localhost:1080/files/;

[...]

> If I take out the line, _listen
> [::]:1081 http2 ipv6only=on ssl;_ from the server config block,
> the issue still occurs.

The error is about connecting to "[::1]:1080" backend, as per proxy_pass in your configuration. Adding or removing listening sockets in nginx is not expected to change things.
When a name is known during configuration parsing, nginx will use normal system-provided name resolution, as available via the getaddrinfo() function.

A resolver is only used once nginx does a run-time resolution of domain names, and cannot use getaddrinfo() as the interface is blocking.

> How can I get nginx to use ipv4 only? Has anybody else
> experienced and resolved the same issue?

There is no way to globally disable IPv6 in nginx. Instead, consider one of the following options:

- when you want nginx to use IPv4 addresses only, use names which resolve to IPv4 addresses only (or use IPv4 addresses directly);

- configure your system resolver to not return IPv6 addresses (usually this happens automatically when you do not have IPv6 configured on the host).

In your particular case, writing something like

proxy_pass http://127.0.0.1:1080/files/;

with "127.0.0.1" IPv4 address explicitly used instead of "localhost" should be enough.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Thu Nov 15 14:27:27 2018
From: nginx-forum at forum.nginx.org (Olaf van der Spek)
Date: Thu, 15 Nov 2018 09:27:27 -0500
Subject: Enable http2 and ssl by default
In-Reply-To: <1542290590.2101.8.camel@k4vqc.com>
References: <1542290590.2101.8.camel@k4vqc.com>
Message-ID: <5f45a4ed30638cd0b92f82336d5050f1.NginxMailingListEnglish@forum.nginx.org>

Jim Popovitch Wrote:
-------------------------------------------------------
> On Thu, 2018-11-15 at 08:36 -0500, Olaf van der Spek wrote:
> So a specific use case. What about port 443 (you haven't mentioned it

What about it?

> yet), except what if it's on a non-routable subnet perhaps 8443 should
> be preferred then?

Why?

> Should nginx also look for certs in /etc/ssl/ that
> have file names that align with server_name? What about multi-homed

I had thought about that.. maybe but that'd be another feature request.

> servers, should the listen directive default to the IP address(es)
> that
> map to server_name?
No

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281948,281975#msg-281975

From vbart at nginx.com  Thu Nov 15 14:35:43 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 15 Nov 2018 17:35:43 +0300
Subject: Unit 1.6 release
Message-ID: <2048954.30bhB6ODT8@vbart-workstation>

Hello,

I'm glad to announce a new release of NGINX Unit. This release primarily focuses on improvements in Node.js module compatibility; thanks to our vibrant community, we made great progress here.

Please don't hesitate to report any problems to:

- Github: https://github.com/nginx/unit/issues
- Mailing list: https://mailman.nginx.org/mailman/listinfo/unit

If you have installed the "unit-http" module from npm, then don't forget to update it besides Unit itself. Detailed instructions for Node.js installation can be found here:

- http://unit.nginx.org/installation/#node-js-package

Changes with Unit 1.6                                        15 Nov 2018

*) Change: "make install" now installs Node.js module as well if it was configured.

*) Feature: "--local" ./configure option to install Node.js module locally.

*) Bugfix: Node.js module might have crashed due to broken reference counting.

*) Bugfix: asynchronous operations in Node.js might not have worked.

*) Bugfix: various compatibility issues with Node.js applications.

*) Bugfix: "freed pointer is out of pool" alerts might have appeared in log.

*) Bugfix: module discovery didn't work on 64-bit big-endian systems like IBM/S390x.

wbr, Valentin V. Bartenev

From roughlea at hotmail.co.uk  Thu Nov 15 14:36:48 2018
From: roughlea at hotmail.co.uk (rough lea)
Date: Thu, 15 Nov 2018 14:36:48 +0000
Subject: How to disable ipv6 in nginx?
In-Reply-To: <20181115142409.GZ99070@mdounin.ru>
References: <20181115142409.GZ99070@mdounin.ru>
Message-ID: 

Hi Francis and Maxim,

Cheers, that solved it. Used 127.0.0.1 and works like a charm. Will remove the resolver from my config. Thanks for the explanations.

Kind regards

Simon

> On 15 Nov 2018, at 14:24, Maxim Dounin wrote:
>
> Hello!
>
> On Thu, Nov 15, 2018 at 11:10:08AM +0000, rough lea wrote:
>
>> I am a newbie running tusd server on macos High Sierra behind an
>> Nginx Proxy running within a docker container. In the logs, I
>> notice that before an _UploadCreated_ event is received there is
>> an attempt to connect to tusd using ipv6 loopback address which
>> fails.
>>
>> _[crit] 23#23: *4 connect() to [::1]:1080 failed (99: Address not available) while connecting to upstream, client: 172.22.0.1, server: , request: "POST /files/ HTTP/1.1", upstream: "http://[::1]:1080/files/", host: "test.example.com:1081"_
>>
>> My nginx configuration is listed below…
>
> [...]
>
>> proxy_pass http://localhost:1080/files/;
>
> [...]
>
>> If I take out the line, _listen
>> [::]:1081 http2 ipv6only=on ssl;_ from the server config block,
>> the issue still occurs.
>
> The error is about connecting to "[::1]:1080" backend, as per
> proxy_pass in your configuration. Adding or removing listening
> sockets in nginx is not expected to change things.
>
>> Upon further reading at
>> [docker](https://docs.docker.com/config/daemon/ipv6/) and
>> [docker-for-mac](https://github.com/docker/for-mac/issues/1432),
>> it appears that ipv6 networking is only available for docker
>> daemons running on Linux hosts???
>
> The error (Address not available) suggests that this is indeed an
> issue in Docker.
>
>> I have tried adding a resolver and setting ipv6only=off but
>> nginx seems to continue to try and send to the upstream proxy
>> with an ipv6 address.
>
> When a name is known during configuration parsing, nginx will
> use normal system-provided name resolution, as available via
> the getaddrinfo() function.
>
> A resolver is only used once nginx does a run-time resolution of
> domain names, and cannot use getaddrinfo() as the interface is
> blocking.
>
>> How can I get nginx to use ipv4 only? Has anybody else
>> experienced and resolved the same issue?
>
> There is no way to globally disable IPv6 in nginx.
Instead,
> consider one of the following options:
>
> - when you want nginx to use IPv4 addresses only, use names which
> resolve to IPv4 addresses only (or use IPv4 addresses directly);
>
> - configure your system resolver to not return IPv6 addresses
> (usually this happens automatically when you do not have IPv6
> configured on the host).
>
> In your particular case, writing something like
>
> proxy_pass http://127.0.0.1:1080/files/;
>
> with "127.0.0.1" IPv4 address explicitly used instead of
> "localhost" should be enough.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From jim at k4vqc.com  Thu Nov 15 15:40:16 2018
From: jim at k4vqc.com (Jim Popovitch)
Date: Thu, 15 Nov 2018 10:40:16 -0500
Subject: Enable http2 and ssl by default
In-Reply-To: <5f45a4ed30638cd0b92f82336d5050f1.NginxMailingListEnglish@forum.nginx.org>
References: <1542290590.2101.8.camel@k4vqc.com> <5f45a4ed30638cd0b92f82336d5050f1.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1542296416.2101.11.camel@k4vqc.com>

On Thu, 2018-11-15 at 09:27 -0500, Olaf van der Spek wrote:
> Jim Popovitch Wrote:
> -------------------------------------------------------
> > On Thu, 2018-11-15 at 08:36 -0500, Olaf van der Spek wrote:
> > So a specific use case. What about port 443 (you haven't mentioned
> > it
>
> What about it?
>
> > yet), except what if it's on a non-routable subnet perhaps 8443
> > should
> > be preferred then?
>
> Why?
>
> > Should nginx also look for certs in /etc/ssl/ that
> > have file names that align with server_name? What about multi-homed
>
> I had thought about that.. maybe but that'd be another feature
> request.
>
> > servers, should the listen directive default to the IP address(es)
> > that map to server_name?
>
> No

Heh, those weren't really questions necessitating answers from you. I apologize if they appeared to you as such.
On a side note, github suggests that we share a common interest in C&C.

Best wishes,

-Jim P.

From mdounin at mdounin.ru  Thu Nov 15 15:55:41 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 15 Nov 2018 18:55:41 +0300
Subject: Strange behaviour of %27 encoding in rewrite
In-Reply-To: 
References: 
Message-ID: <20181115155541.GE99070@mdounin.ru>

Hello!

On Wed, Nov 14, 2018 at 03:54:20PM +0100, aquilinux wrote:

> Hi all,
> I'm seeing a strange behaviour in nginx rewrite involving encoded urls for
> *%27*
> I have this type of rewrite:
>
> rewrite "^/brands/l-oreal$"
> > https://somedomain.tld/L%27Or%C3%A9al-Paris/index.html? permanent;
>
> That translates to this:
>
> > [~]> curl -kIL https://mydomain.tld/brands/l-oreal
> > HTTP/2 301
> > server: nginx
> > date: Wed, 14 Nov 2018 14:44:21 GMT
> > content-type: text/html
> > content-length: 178
> > location: https://somedomain.tld/L'Or%C3%A9al-Paris/index.html
> > strict-transport-security: max-age=15768000; includeSubDomains
>
> If I change %27 to %20 I have:
>
> > [~]> curl -kIL https://mydomain.tld/brands/l-oreal
> > HTTP/2 301
> > server: nginx
> > date: Wed, 14 Nov 2018 14:31:09 GMT
> > content-type: text/html
> > content-length: 178
> > location: https://somedomain.tld/L%20Or%C3%A9al-Paris/index.html
> > strict-transport-security: max-age=15768000; includeSubDomains
>
> as expected.
>
> The same strange behaviour applies to *%2C*, that is decoded to "*,*"
> instead of being passed unencoded, as expected.
> This is driving me nuts, can anyone explain (or fix) this?

This is because both "'" and "," don't need to be escaped.

And, given that the rewrite directive operates on the internal URI representation, the replacement argument is unescaped by nginx, and then escaped again when returning a permanent redirect. But it only escapes characters which need to be escaped.
If you want nginx to return a redirect exactly in the way you wrote it, please consider using the "return" directive instead, for example: location = /brands/l-oreal { return 301 https://somedomain.tld/L%27Or%C3%A9al-Paris/index.html; } -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Nov 15 17:17:39 2018 From: nginx-forum at forum.nginx.org (kmansoft) Date: Thu, 15 Nov 2018 12:17:39 -0500 Subject: Enabling TLS 1.0 / 1.1 on Debian Testing Message-ID: <60c9eb1fd6486f9f647935aeba6581d9.NginxMailingListEnglish@forum.nginx.org> Cross posting from https://unix.stackexchange.com/questions/481963, this seems to be the better place to ask. --- Just updated Debian from "stable" 9.* to "testing" 10.*. Have nginx 1.14 - used to come from "stable backports" now included in Debian itself. Seeing a strange issue with TLS versions in nginx. TLS 1.3 is enabled, and 1.2 is too, but I can't seem to get TLS 1.0 / 1.1 even though they're included in nginx configs. https://www.htbridge.com/ssl/?id=QgSrZIuN Oh and by the way, Dovecot running on same system still has TLS 1.0 - 1.1 - 1.2 - 1.3 all functional: https://www.htbridge.com/ssl/?id=cSArIbQQ relevant bits from nginx site config: ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_ciphers kECDHE+CHACHA20:kECDHE+AESGCM:kDHE+AESGCM:kECDHE+AES+SHA:kDHE+AES+SHA:!AESCCM:!aNULL:!eNULL; ssl_prefer_server_ciphers on; I tried removing either ssl_protocols or ssl_ciphers or both, nothing changed really. Is this an intentional change in nginx - upstream or as packaged by Debian? A change in openssl itself? Any way I can enable all TLS versions from 1.0 and up to 1.3 in nginx at the same time? --- Found this in Debian news, basically they've disabled TLS 1.0 / 1.1 - apps have to ask for these versions specifically: https://packages.qa.debian.org/o/openssl/news/20170824T211015Z.html * Instead of completly disabling TLS 1.0 and 1.1, just set the minimum version to TLS 1.2 by default. 
TLS 1.0 and 1.1 can be enabled again by calling SSL_CTX_set_min_proto_version() or SSL_set_min_proto_version(). Is there some way nginx could accommodate this change and make it possible to enable TLS 1.0 / 1.1? Maybe consider adding a new config directive like the one used by Dovecot? https://github.com/dovecot/core/blob/master/doc/example-config/conf.d/10-ssl.conf#L55 It would still allow someone to only use TLS 1.2 and newer, or "TLS 1.0 and newer" or "TLS 1.1 and newer" without getting overly verbose. It would also work identically with both OpenSSL variations, with and without TLS 1.3 support. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281984,281984#msg-281984 From mdounin at mdounin.ru Thu Nov 15 18:25:48 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Nov 2018 21:25:48 +0300 Subject: Enabling TLS 1.0 / 1.1 on Debian Testing In-Reply-To: <60c9eb1fd6486f9f647935aeba6581d9.NginxMailingListEnglish@forum.nginx.org> References: <60c9eb1fd6486f9f647935aeba6581d9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181115182548.GF99070@mdounin.ru> Hello! On Thu, Nov 15, 2018 at 12:17:39PM -0500, kmansoft wrote: > Cross posting from https://unix.stackexchange.com/questions/481963, this > seems to be the better place to ask. > > --- > > Just updated Debian from "stable" 9.* to "testing" 10.*. > > Have nginx 1.14 - used to come from "stable backports" now included in > Debian itself. > > Seeing a strange issue with TLS versions in nginx. > > TLS 1.3 is enabled, and 1.2 is too, but I can't seem to get TLS 1.0 / 1.1 > even though they're included in nginx configs. [...]
-- Maxim Dounin http://mdounin.ru/ From roger at netskrt.io Thu Nov 15 19:59:31 2018 From: roger at netskrt.io (Roger Fischer) Date: Thu, 15 Nov 2018 11:59:31 -0800 Subject: Listen on transient address Message-ID: <791890AA-1F59-4522-B5F6-DF1DF5DBD95F@netskrt.io> Hello, I have an NGINX instance that listens on a tunnel (and some other interfaces). When NGINX was restarted while the tunnel was down (tun device and address did not exist), NGINX failed to start. [emerg] 1344#1344: bind() to 38.88.78.19:443 failed (99: Cannot assign requested address) Relevant config: listen 172.16.200.5:80 default_server; listen 38.88.78.19:80 default_server; # tunnel, not always up Is there a way to configure NGINX to listen 'best effort', still start even if it can't bind to the address/port, and periodically retry to bind to the address/port? This would be my preferred solution. Alternatively, if I pre-define the tunnel device and its address (I have not explored that yet), would NGINX bind successfully when the tunnel is down? Thanks... Roger -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at samad.com.au Thu Nov 15 22:41:22 2018 From: alex at samad.com.au (Alex Samad) Date: Fri, 16 Nov 2018 09:41:22 +1100 Subject: Securing the HTTPS private key In-Reply-To: <20181115130314.GV99070@mdounin.ru> References: <20181115130314.GV99070@mdounin.ru> Message-ID: Hi, isn't this a bit futile? If they can get onto the box that has nginx, they can get either the private key or the secret that unlocks the private key. Safer would be to make it so that you need human interaction to start nginx. But even then, a memory dump of the app would get you the private key. On Fri, 16 Nov 2018 at 00:03, Maxim Dounin wrote: > Hello! > > On Wed, Nov 14, 2018 at 12:17:57PM -0800, Roger Fischer wrote: > > > Hello, > > > > does NGINX support any mechanisms to securely access the private > > key of server certificates?
> > > > Specifically, could NGINX make a request to a key store, rather > > than reading from a local file? > > > > Are there any best practices for keeping private keys secure? > > > > I understand the basics. The key file should only be readable by > > root. I cannot protect the key with a pass-phrase, as NGINX > > needs to start and restart autonomously. > > You actually can protect the key using a passphrase, see > http://nginx.org/r/ssl_password_file. Though this might not be > the best idea due to basically the same security provided, while > involving higher complexity. > > Also, you can use "engine:..." syntax to load keys via OpenSSL > engines. This allows using various complex key stores, including > hardware tokens, to access keys, though may not be trivial to > configure. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roger at netskrt.io Fri Nov 16 06:02:16 2018 From: roger at netskrt.io (Roger Fischer) Date: Thu, 15 Nov 2018 22:02:16 -0800 Subject: Securing the HTTPS private key In-Reply-To: References: <20181115130314.GV99070@mdounin.ru> Message-ID: <4D90FA64-F27C-4BB8-8FAD-0C2D1BC0FAE6@netskrt.io> Hi Alex, our device is unattended, not always on, and in some cases in only semi-secured locations. Besides preventing root access, we also need to protect against the hacking of a stolen device (or disk). Human interaction is not practical (other than in exceptional situations). Roger > On Nov 15, 2018, at 2:41 PM, Alex Samad wrote: > > HI > > isn't this a bit futile, if they can get onto the box that has nginx they can get either the private key or secret to get the private key. > > safer would be to make it that you need human interact to start nginx. > > But till a memory dump of the app would get you the private key. 
> > > > > On Fri, 16 Nov 2018 at 00:03, Maxim Dounin > wrote: > Hello! > > On Wed, Nov 14, 2018 at 12:17:57PM -0800, Roger Fischer wrote: > > > Hello, > > > > does NGINX support any mechanisms to securely access the private > > key of server certificates? > > > > Specifically, could NGINX make a request to a key store, rather > > than reading from a local file? > > > > Are there any best practices for keeping private keys secure? > > > > I understand the basics. The key file should only be readable by > > root. I cannot protect the key with a pass-phrase, as NGINX > > needs to start and restart autonomously. > > You actually can protect the key using a passphrase, see > http://nginx.org/r/ssl_password_file . Though this might not be > the best idea due to basically the same security provided, while > involving higher complexity. > > Also, you can use "engine:..." syntax to load keys via OpenSSL > engines. This allows using various complex key stores, including > hardware tokens, to access keys, though may not be trivial to > configure. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashish.is at lostca.se Fri Nov 16 08:00:38 2018 From: ashish.is at lostca.se (Ashish SHUKLA) Date: Fri, 16 Nov 2018 13:30:38 +0530 Subject: Listen on transient address In-Reply-To: <791890AA-1F59-4522-B5F6-DF1DF5DBD95F@netskrt.io> References: <791890AA-1F59-4522-B5F6-DF1DF5DBD95F@netskrt.io> Message-ID: On 11/16/18 1:29 AM, Roger Fischer wrote: > Hello, > > I have an NGINX instance that listens on a tunnel (and some other > interfaces). 
When NGINX was restarted while the tunnel was > down (tun > device and address did not exist), NGINX failed to start. > > [emerg] 1344#1344: bind() to 38.88.78.19:443 failed (99: Cannot > assign requested address) > > > Relevant config: > > listen 172.16.200.5:80 default_server; > listen 38.88.78.19:80 default_server; # tunnel, not always up > > Is there a way to configure NGINX to listen 'best effort', still start > even if it can't bind to the address/port, and periodically retry to > bind to the address/port? This would be my preferred solution. > > Alternatively, if I pre-define the tunnel device and its address (I have > not explored that yet), would NGINX bind successfully when the tunnel is > down? If using GNU/Linux, make sure /proc/sys/net/ipv4/ip_nonlocal_bind is set to 1, and then you should be able to bind to any non-local IPv4 address. HTH -- Ashish SHUKLA | GPG: F682CDCC39DC0FEAE11620B6C746CFA9E74FA4B0 "Under certain circumstances, profanity provides a relief denied even to prayer." (Mark Twain) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From rainer at ultra-secure.de Fri Nov 16 08:02:34 2018 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Fri, 16 Nov 2018 09:02:34 +0100 Subject: Securing the HTTPS private key In-Reply-To: <4D90FA64-F27C-4BB8-8FAD-0C2D1BC0FAE6@netskrt.io> References: <20181115130314.GV99070@mdounin.ru> <4D90FA64-F27C-4BB8-8FAD-0C2D1BC0FAE6@netskrt.io> Message-ID: <622a96082f5d42f9dbc17594a5594224@ultra-secure.de> Am 2018-11-16 07:02, schrieb Roger Fischer: > Hi Alex, > > our device is unattended, not always on, and in some cases in only > semi-secured locations. Besides preventing root access, we also need > to protect against the hacking of a stolen device (or disk). > > Human interaction is not practical (other than in exceptional > situations).
Well, in that case you might really want to look at HSMs. They need not be installed locally, AFAIK. What's your budget, BTW? From nginx-forum at forum.nginx.org Fri Nov 16 09:48:39 2018 From: nginx-forum at forum.nginx.org (kmansoft) Date: Fri, 16 Nov 2018 04:48:39 -0500 Subject: Enabling TLS 1.0 / 1.1 on Debian Testing In-Reply-To: <20181115182548.GF99070@mdounin.ru> References: <20181115182548.GF99070@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > > [...] > > [...] > > Upgrade to nginx 1.15.3+, this problem is expected to be addressed by > this commit: > > http://hg.nginx.org/nginx/rev/7ad0f4ace359 > > Alternatively, you can modify (and/or disable via the OPENSSL_CONF > environment variable specifically for nginx) system-wide OpenSSL > configuration file which disables protocols before TLS 1.2. > > -- > Maxim Dounin > http://mdounin.ru/ Thank you Maxim. Solved by editing /etc/ssl/openssl.conf [system_default_sect] -MinProtocol = TLSv1.2 +MinProtocol = TLSv1 I understand about OPENSSL_CONF env var just for nginx - but for me system wide is fine too. Thanks again! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281984,282011#msg-282011 From mdounin at mdounin.ru Fri Nov 16 11:41:01 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Nov 2018 14:41:01 +0300 Subject: Listen on transient address In-Reply-To: <791890AA-1F59-4522-B5F6-DF1DF5DBD95F@netskrt.io> References: <791890AA-1F59-4522-B5F6-DF1DF5DBD95F@netskrt.io> Message-ID: <20181116114101.GI99070@mdounin.ru> Hello! On Thu, Nov 15, 2018 at 11:59:31AM -0800, Roger Fischer wrote: > I have an NGINX instance that listens on a tunnel (and some > other interfaces). When NGINX was restarted while the tunnel was > down (tun device and address did not exist), NGINX failed to > start. 
> > [emerg] 1344#1344: bind() to 38.88.78.19:443 failed (99: Cannot > assign requested address) > > Relevant config: > > listen 172.16.200.5:80 default_server; > listen 38.88.78.19:80 default_server; # tunnel, not always > up > > Is there a way to configure NGINX to listen 'best effort', still > start even if it can't bind to the address/port, and > periodically retry to bind to the address/port? This would be my > preferred solution. If you want to bind nginx on addresses which are not yet available on the host, the best solution is to configure a listening socket on the wildcard address: listen 80; Further, if for some reason you want to restrict a particular server only to connections to a particular IP address, you can do so in addition to the listening socket on the wildcard address. That is, configure something like this: server { listen 80; ... } server { listen 38.88.78.19:80; ... } With this configuration nginx will bind only on *:80, yet connections to 38.88.78.19:80 will be handled in the second server, much like with separate listening sockets. See the description of the "bind" parameter of the "listen" directive (http://nginx.org/r/listen) for additional details. -- Maxim Dounin http://mdounin.ru/ From patrick at laimbock.com Fri Nov 16 13:07:50 2018 From: patrick at laimbock.com (Patrick Laimbock) Date: Fri, 16 Nov 2018 14:07:50 +0100 Subject: Securing the HTTPS private key In-Reply-To: <4D90FA64-F27C-4BB8-8FAD-0C2D1BC0FAE6@netskrt.io> References: <20181115130314.GV99070@mdounin.ru> <4D90FA64-F27C-4BB8-8FAD-0C2D1BC0FAE6@netskrt.io> Message-ID: <1407765c-07ec-391f-2419-f9448caf057b@laimbock.com> Hi Roger, On 16-11-18 07:02, Roger Fischer wrote: > Hi Alex, > > our device is unattended, not always on, and in some cases in only > semi-secured locations. Besides preventing root access, we also need to > protect against the hacking of a stolen device (or disk). > > Human interaction is not practical (other than in exceptional situations).
Maybe this can help: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Network-Bound_Disk_Encryption.html Cheers, Patrick From nginx-forum at forum.nginx.org Sat Nov 17 00:18:02 2018 From: nginx-forum at forum.nginx.org (lem0nhead) Date: Fri, 16 Nov 2018 19:18:02 -0500 Subject: Throttle based on ETA time? Message-ID: <4ce4a6d7916fd867f613a45487a04302.NginxMailingListEnglish@forum.nginx.org> Hi! I have a particular use-case for a nginx server which is used for downloading big files (1-3 GBs). Services call this server and start downloading, which usually takes ~2 minutes @ 1gbps server connection and 10 concurrent clients. Let's say I want to have a 10min timeout for this download (if it doesn't work, I will just retry at another time). Now I see 2 possible bad scenarios: 1) I get a lot of simultaneous connections (~100), the download time skyrocket and ALL of them fails; 2) I have just 10 connections (usual operation) but the latency increases and the connection speed effectively drops to 100mbps. Again the download time will skyrocket and ALL of them will fail. The first one is easy: I can just limit the amount of concurrent clients to a number that ensures my network can support them when working properly. But the second one is trickier. The solution I can see something like starting to drop 5% of the clients (newest first) each 20 seconds that the ETA is over 8 minutes (so we have a margin until 10 minutes is reached). Accidentally this would probably also solve the first scenario so I wouldn't need to configure nginx in respect to the server it's running on; it would be agnostic to it. Do you think that could be a good solution? Any hints if I can find some module that does that or something similar that I can use as inspiration to write my own? 
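[Editor's note] The first scenario described above (too many simultaneous downloads) can be handled with nginx's stock connection-limiting directives rather than a custom module; a minimal sketch, where the zone name, zone size, limit and paths are illustrative assumptions, not a tested configuration:

```nginx
http {
    # Key the zone on $server_name so all clients share one counter,
    # i.e. a global cap; key on $binary_remote_addr instead for a
    # per-client cap. "downloads" and "1m" are made-up values.
    limit_conn_zone $server_name zone=downloads:1m;

    server {
        listen 80;

        location /files/ {               # hypothetical download path
            root /srv;                   # hypothetical document root
            limit_conn downloads 10;     # the 11th concurrent download is refused
            limit_conn_status 503;       # the default status; shown for clarity
        }
    }
}
```

Excess clients are rejected immediately with a 503 instead of being admitted and slowed down, which matches the "retry at another time" behaviour described for the first scenario; it does nothing for the second (slow-link) scenario.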
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282019,282019#msg-282019 From nginx-forum at forum.nginx.org Sat Nov 17 03:56:30 2018 From: nginx-forum at forum.nginx.org (Jeremy Ardley) Date: Fri, 16 Nov 2018 22:56:30 -0500 Subject: Can't disable TLS 1.0 Message-ID: <5e21fcace7a5bf9a56d8b6910735705a.NginxMailingListEnglish@forum.nginx.org> I am setting up web servers for best practice TLS. The issue is TLS 1.0 which is deprecated I want to remove it from the available protocols and have done the usual ## # SSL Settings ## ssl_protocols TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; However the absence of TLSv1 in the list doesn't stop the server offering it. I have checked carefully for prior syntax errors in the configuration and there are none. The configuration is set in the main nginx.conf file and used by one or more enabled sites attached to specific IP addresses. The enabled sites do not change the ssl_protocols. My environment: nginx version: nginx/1.10.3 built with OpenSSL 1.1.0f 25 May 2017 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-tLEWFX/nginx-1.10.3=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/build/nginx-tLEWFX/nginx-1.10.3/debian/modules/nginx-auth-pam --add-dynamic-module=/build/nginx-tLEWFX/nginx-1.10.3/debian/modules/nginx-dav-ext-module --add-dynamic-module=/build/nginx-tLEWFX/nginx-1.10.3/debian/modules/nginx-echo --add-dynamic-module=/build/nginx-tLEWFX/nginx-1.10.3/debian/modules/nginx-upstream-fair --add-dynamic-module=/build/nginx-tLEWFX/nginx-1.10.3/debian/modules/ngx_http_substitutions_filter_module My config file - part http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; # keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; error_log /var/log/nginx/error.log info; ## # SSL Settings ## ssl_protocols TLSv1.1 TLSv1.2; # Dropping SSLv3, 
ref: POODLE ssl_prefer_server_ciphers on; # enable session resumption to improve https performance # http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; # Stapling ssl_stapling on; ssl_stapling_verify on; # ssl ecdh curve ssl_ecdh_curve secp384r1; # DH Parameters ssl_dhparam /etc/ssl/dhparams.pem; # Header security add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; .... } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282020,282020#msg-282020 From chris at cretaforce.gr Sat Nov 17 09:15:51 2018 From: chris at cretaforce.gr (Christos Chatzaras) Date: Sat, 17 Nov 2018 11:15:51 +0200 Subject: Throttle based on ETA time? In-Reply-To: <4ce4a6d7916fd867f613a45487a04302.NginxMailingListEnglish@forum.nginx.org> References: <4ce4a6d7916fd867f613a45487a04302.NginxMailingListEnglish@forum.nginx.org> Message-ID: <321155DB-3FC6-47E1-9964-8C5E43ACAC6A@cretaforce.gr> Any reason to have a 10 minute timeout? > On 17 Nov 2018, at 02:18, lem0nhead wrote: > > Hi! > I have a particular use-case for a nginx server which is used for > downloading big files (1-3 GBs). > Services call this server and start downloading, which usually takes ~2 > minutes @ 1gbps server connection and 10 concurrent clients. > Let's say I want to have a 10min timeout for this download (if it doesn't > work, I will just retry at another time). > Now I see 2 possible bad scenarios: > 1) I get a lot of simultaneous connections (~100), the download time > skyrocket and ALL of them fails; > 2) I have just 10 connections (usual operation) but the latency increases > and the connection speed effectively drops to 100mbps. Again the download > time will skyrocket and ALL of them will fail. > > The first one is easy: I can just limit the amount of concurrent clients to > a number that ensures my network can support them when working properly. > > But the second one is trickier. 
The solution I can see something like > starting to drop 5% of the clients (newest first) each 20 seconds that the > ETA is over 8 minutes (so we have a margin until 10 minutes is reached). > Accidentally this would probably also solve the first scenario so I wouldn't > need to configure nginx in respect to the server it's running on; it would > be agnostic to it. > > Do you think that could be a good solution? Any hints if I can find some > module that does that or something similar that I can use as inspiration to > write my own? -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2258 bytes Desc: not available URL: From antoine.bonavita at gmail.com Sat Nov 17 14:06:12 2018 From: antoine.bonavita at gmail.com (Antoine Bonavita) Date: Sat, 17 Nov 2018 15:06:12 +0100 Subject: njs and subrequests Message-ID: Hello, For a pet project of mine I'm trying to use njs to retrieve data from a number of different sources (URLs really) and assemble them into one single response. I tried to implement a proof of concept using subrequest (from ngx_http_js_module) to do so. I quickly realized that it works only for internal subrequests (i.e. not to external servers). I worked around this with the following location: location = /fwd-proxy { proxy_pass $arg_tgt; } We all know nginx is not really a forward proxy and this approach does not seem very nice for the long term. So, I have a few cascading questions: 1 - Is there any plan to have subrequest from ngx_http_js_module support external URLs ? 2 - If answer to 1 is no, is there any plan to have another official js module implement it ? 3 - If no, what would make most sense: implement it as a 3rd-party module or completely move to something different (nginx Unit with Node.js comes to mind) ? And why one rather than the other ? Thank you for reading so far and even more thanks if you are kind enough to hit the reply button. A.
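[Editor's note] The /fwd-proxy workaround above can at least be kept from acting as an open forward proxy: marking the location "internal" means it is reachable only via internal redirects and subrequests (such as those issued from njs), never directly by clients. A hedged sketch, where the resolver address is an assumption:

```nginx
# proxy_pass with a variable target needs a resolver for runtime DNS lookups.
resolver 127.0.0.1;            # assumption: a local caching resolver is available

location = /fwd-proxy {
    internal;                  # direct external requests get a 404
    proxy_pass $arg_tgt;       # target URL supplied by the subrequest
}
```

This does not validate $arg_tgt itself, so the code issuing the subrequest should still restrict which targets it passes in.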
-------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Sat Nov 17 15:17:52 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 17 Nov 2018 18:17:52 +0300 Subject: njs and subrequests In-Reply-To: References: Message-ID: <2055906.nZS9uF6hhB@vbart-laptop> On Saturday, 17 November 2018 17:06:12 MSK Antoine Bonavita wrote: > Hello, > > For a pet project of mine I'm trying to use njs to retrieve data from a > number of different sources (URLs really) and assemble them into one single > response. I tried to implement a proof of concept using subrequest (from > ngx_http_js_module) to do so. I quickly realized that it works only for > internal subrequests (i.e. not to external servers). > > I worked around this with the following location: > location = /fwd-proxy { > proxy_pass $arg_tgt; > } > > We all know nginx is not really a forward proxy and this approach does not > seem very nice for the long term. > > So, I have a few cascading questions: > 1 - Is there any plan to have subrequest from ngx_http_js_module support > external URLs ? > 2 - If answer to 1 is no, is there any plan to have another official js > module implement it ? > 3 - If no, what would make most sense: implement it as a 3rd-party module > or completely move to something different (nginx Unit with Node.js comes to > mind) ? And why one rather than the other ? > > Thank you for reading so far and evne more thanks if you are kind enough to > hit the reply button. > Just to make it clear, "subrequest" is not a js module part. It's a generic nginx mechanism used by many modules (including ssi, auth_request, addition) to request _internal_ resources. The js module just provides you an API for this mechanism to make ssi-like subrequests. You're looking for a different thing, in fact you need an http client in the js module to request external resources. That's not something provided right now, but will be nice to have in the future. wbr, Valentin V. 
Bartenev From rainer at ultra-secure.de Sat Nov 17 15:56:39 2018 From: rainer at ultra-secure.de (Rainer Duffner) Date: Sat, 17 Nov 2018 16:56:39 +0100 Subject: Can't disable TLS 1.0 In-Reply-To: <5e21fcace7a5bf9a56d8b6910735705a.NginxMailingListEnglish@forum.nginx.org> References: <5e21fcace7a5bf9a56d8b6910735705a.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Am 17.11.2018 um 04:56 schrieb Jeremy Ardley : > > ssl_protocols TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_protocols TLSv1.2; You need to disable 1.0 and 1.1. AFAIK. If you look around, everybody (ebay, github, MSFT, Google etc.pp.) who disabled 1.0 also disabled 1.1. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Nov 17 17:43:12 2018 From: nginx-forum at forum.nginx.org (lem0nhead) Date: Sat, 17 Nov 2018 12:43:12 -0500 Subject: Throttle based on ETA time? In-Reply-To: <321155DB-3FC6-47E1-9964-8C5E43ACAC6A@cretaforce.gr> References: <321155DB-3FC6-47E1-9964-8C5E43ACAC6A@cretaforce.gr> Message-ID: <9ad6ebe772fe9fc8771b7da4546c0935.NginxMailingListEnglish@forum.nginx.org> Could be any time, the point is that if I have 100 clients trying to download, I prefer having 10 completing their download each 10 minutes than having all the 100 clients completing the download after 100 minutes. Scenario 1 (which is what I'd like): 00:00 - 100 clients start downloading 00:10 - 90 clients are dropped and 10 clients finish downloading 00:20 - 10 more clients finish downloading ... 01:40 - all 100 clients finish downloading Scenario 2 (which is what would happen if I don't use this kind of throttling): 00:00 - 100 clients start downloading 01:40 - all 100 clients finish downloading So, scenario 1 looks much better in my opinion, because the end time for 100 clients is the same, but if you think about this download in the middle of a pipeline, the first clients can be working on other steps.
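[Editor's note] nginx has no ETA-based eviction out of the box, but the stock timeout and rate directives approximate part of what is asked for: a client that stops reading is disconnected, and fast clients can be kept from starving each other. A sketch with made-up paths and thresholds:

```nginx
location /downloads/ {
    root /srv;                 # hypothetical storage path
    # send_timeout is the allowed gap between two successive writes to
    # the client, not a whole-transfer deadline; a stalled client is
    # dropped once it expires.
    send_timeout 60s;          # assumption: a 60s stall means "give up"
    limit_rate_after 10m;      # first 10 MB at full speed
    limit_rate 2m;             # then cap each connection at ~2 MB/s
}
```

This drops stalled clients but not slow-and-steady ones, so it is only a partial substitute for the ETA-based policy described above.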
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282019,282029#msg-282029 From nginx-forum at forum.nginx.org Sun Nov 18 02:31:51 2018 From: nginx-forum at forum.nginx.org (Jeremy Ardley) Date: Sat, 17 Nov 2018 21:31:51 -0500 Subject: Can't disable TLS 1.0 In-Reply-To: <5e21fcace7a5bf9a56d8b6910735705a.NginxMailingListEnglish@forum.nginx.org> References: <5e21fcace7a5bf9a56d8b6910735705a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56f2f0ef958ad6b14a1c90d68cc40df9.NginxMailingListEnglish@forum.nginx.org> Problem resolved. Letsencrypt was in use and it overrode the nginx.conf allowed protocols in file /etc/letsencrypt/options-ssl-nginx.conf Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282020,282030#msg-282030 From antoine.bonavita at gmail.com Sun Nov 18 15:06:27 2018 From: antoine.bonavita at gmail.com (Antoine Bonavita) Date: Sun, 18 Nov 2018 16:06:27 +0100 Subject: njs and subrequests In-Reply-To: <2055906.nZS9uF6hhB@vbart-laptop> References: <2055906.nZS9uF6hhB@vbart-laptop> Message-ID: Hello Valentin, And thank you for your prompt answer. Writing such an HTTP client and making it available is probably a pretty big piece of work. But if I were to write a "limited" one that would fit my simple needs (GET only, only HTTP/1.1, response in memory, etc.) trying to re-use what already exists in nginx code base (asynchronous resolver comes to mind first), how should I proceed ? Which parts of the code should I start looking at/take as sample ? Basically, I think I am asking what is the extension model of njs... Thanks for your help, Antoine. On Sat, Nov 17, 2018 at 4:17 PM Valentin V. Bartenev wrote: > On Saturday, 17 November 2018 17:06:12 MSK Antoine Bonavita wrote: > > Hello, > > > > For a pet project of mine I'm trying to use njs to retrieve data from a > > number of different sources (URLs really) and assemble them into one > single > > response. I tried to implement a proof of concept using subrequest (from > > ngx_http_js_module) to do so.
I quickly realized that it works only for > > internal subrequests (i.e. not to external servers). > > > > I worked around this with the following location: > > location = /fwd-proxy { > > proxy_pass $arg_tgt; > > } > > > > We all know nginx is not really a forward proxy and this approach does > not > > seem very nice for the long term. > > > > So, I have a few cascading questions: > > 1 - Is there any plan to have subrequest from ngx_http_js_module support > > external URLs ? > > 2 - If answer to 1 is no, is there any plan to have another official js > > module implement it ? > > 3 - If no, what would make most sense: implement it as a 3rd-party module > > or completely move to something different (nginx Unit with Node.js comes > to > > mind) ? And why one rather than the other ? > > > > Thank you for reading so far and evne more thanks if you are kind enough > to > > hit the reply button. > > > > Just to make it clear, "subrequest" is not a js module part. It's a > generic > nginx mechanism used by many modules (including ssi, auth_request, > addition) > to request _internal_ resources. The js module just provides you an API > for this mechanism to make ssi-like subrequests. > > You're looking for a different thing, in fact you need an http client in > the > js module to request external resources. That's not something provided > right > now, but will be nice to have in the future. > > wbr, Valentin V. Bartenev > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rob at cow-frenzy.co.uk Mon Nov 19 09:47:32 2018 From: rob at cow-frenzy.co.uk (Rob Fulton) Date: Mon, 19 Nov 2018 09:47:32 +0000 Subject: Using correct variable for proxy_pass Message-ID: <6466b3fa-22e8-53dc-b80d-ee316acc99a6@cow-frenzy.co.uk> Hi, I'm trying to work out the best way to set up the proxy_pass URL and which variables to use. Initially we were using proxy_pass to proxy to a single https URL, and we used a rewrite to change https://hostname/ to https://hostname/index.html. We've recently discovered issues due to the single DNS query nginx performs, so we moved to using a variable for the hostname; this required us to set proxy_pass to the full request URL. We started with:

proxy_pass ${content_server}content$request_uri

This worked as expected, but our rewrite rules failed to work; looking at the documentation, this seems to be expected, since $request_uri is the request URI before any processing by nginx. We then moved to:

proxy_pass ${content_server}content$uri

This works fine with the rewrite rules, but I noticed a comment on StackOverflow stating this opens you up to header injection vulnerabilities. Is there a variable / combination of variables that allows you to preserve rewrites without the potential security issues, or a better way of doing this ensuring we can use variables in the proxy_pass hostname? Regards Rob From mdounin at mdounin.ru Mon Nov 19 13:12:10 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Nov 2018 16:12:10 +0300 Subject: Using correct variable for proxy_pass In-Reply-To: <6466b3fa-22e8-53dc-b80d-ee316acc99a6@cow-frenzy.co.uk> References: <6466b3fa-22e8-53dc-b80d-ee316acc99a6@cow-frenzy.co.uk> Message-ID: <20181119131209.GM99070@mdounin.ru> Hello! On Mon, Nov 19, 2018 at 09:47:32AM +0000, Rob Fulton wrote: > Hi, > > I'm trying to work out the best way to set up the proxy_pass URL and > which variables to use.
Initially we were using proxy_pass to proxy to a > single https URL, we used a rewrite to change https://hostname/ to > https://hostname/index.html. > > We've recently discovered issues due to the single DNS query nginx > performs so moved to using a variable for the hostname, this required us > to set proxy_pass to the full request URL. We started with : > > proxy_pass ${content_server}content$request_uri > > This worked as expected but our rewrite rules failed to work, looking at > the documentation, this seems to be expected since this is the > request pre-processing by nginx. > > We then moved to : > > proxy_pass ${content_server}content$uri > > This works fine with the rewrite rules but I noticed a comment on > StackOverflow stating this opens you up to header injection > vulnerabilities. Is there a variable / combination of variables that > allow you to preserve rewrites without the potential security issues, or > a better way of doing this ensuring we can use variables in the > proxy_pass hostname? If you want to use variables in the proxy_pass and at the same time want to preserve the effect of nginx internal URI changes, such as those due to rewrites, consider using an empty URI component in the proxy_pass. For example:

set $backend "http://example.com";
proxy_pass $backend;

-- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Mon Nov 19 13:28:09 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Mon, 19 Nov 2018 16:28:09 +0300 Subject: njs and subrequests In-Reply-To: References: Message-ID: Hi Antoine, >Is there any plan to have subrequest from ngx_http_js_module support > external URLs ? Nothing prevents you from making subrequests to external URLs. https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass https://nginx.org/en/docs/http/ngx_http_core_module.html#resolver >The address can be specified as a domain name or IP address ..
>In this case, if an address is specified as a domain name, the name is searched among the described server groups, and, if not found, is determined using a resolver. You still need a location to make a proxy_pass for you (which you already have). As Valentin said, there is nothing special in ngx_http_js_module about subrequests. The module simply uses the internal NGINX API for subrequests (http://hg.nginx.org/njs/file/tip/nginx/ngx_http_js_module.c#l2099). You can find a more complex example of using njs subrequests here: https://github.com/nginxinc/nginx-openid-connect From aquilinux at gmail.com Mon Nov 19 16:38:50 2018 From: aquilinux at gmail.com (aquilinux) Date: Mon, 19 Nov 2018 17:38:50 +0100 Subject: Strange behaviour of %27 encoding in rewrite In-Reply-To: <20181115155541.GE99070@mdounin.ru> References: <20181115155541.GE99070@mdounin.ru> Message-ID: Thanks Maxim, using the return directive worked flawlessly. Regards, On Thu, Nov 15, 2018 at 4:55 PM Maxim Dounin wrote: > Hello! > > On Wed, Nov 14, 2018 at 03:54:20PM +0100, aquilinux wrote: > > > Hi all, > > I'm seeing a strange behaviour in nginx rewrite involving encoded urls > for > > *%27* > > I have this type of rewrite: > > > > rewrite "^/brands/l-oreal$" > > > https://somedomain.tld/L%27Or%C3%A9al-Paris/index.html?
permanent; > > > > > > That translates to this: > > > > > > > [~]> curl -kIL https://mydomain.tld/brands/l-oreal > > > HTTP/2 301 > > > server: nginx > > > date: Wed, 14 Nov 2018 14:44:21 GMT > > > content-type: text/html > > > content-length: 178 > > > *location: https://somedomain.tld/L'Or%C3%A9al-Paris/index.html > > > * > > > strict-transport-security: max-age=15768000; includeSubDomains > > > > > > If I change %27 to %20 I have: > > > > [~]> curl -kIL https://mydomain.tld/brands/l-oreal > > > HTTP/2 301 > > > server: nginx > > > date: Wed, 14 Nov 2018 14:31:09 GMT > > > content-type: text/html > > > content-length: 178 > > > *location: https://somedomain.tld/L%20Or%C3%A9al-Paris/index.html > > > * > > > strict-transport-security: max-age=15768000; includeSubDomains > > > > > > as expected. > > > > The same strange behaviour applies to *%2C*, which is decoded to "*,*" > > instead of being passed through encoded, as expected. > > This is driving me nuts, can anyone explain (or fix) this? > > This is because both "'" and "," don't need to be escaped. > > And, given that the rewrite directive operates on the internal URI > representation, the replacement argument is unescaped by nginx, > and then escaped again when returning a permanent redirect. But > it only escapes characters which need to be escaped. > > If you want nginx to return a redirect exactly in the way you > wrote it, please consider using the "return" directive instead, > for example: > > location = /brands/l-oreal { > return 301 https://somedomain.tld/L%27Or%C3%A9al-Paris/index.html; > } > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- "Madness, like small fish, runs in hosts, in vast numbers of instances." No one combs me as well as the wind. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From xserverlinux at gmail.com Tue Nov 20 01:37:34 2018 From: xserverlinux at gmail.com (Ricky Gutierrez) Date: Mon, 19 Nov 2018 19:37:34 -0600 Subject: Nginx 14.1 very slow as a reverse proxy Message-ID: Hi list, I need a little help. I have a reverse proxy in front of an ASP.net application; the nginx server has 4 GB RAM, 15K SAS disks and 2 cores, on CentOS 7. I've noticed that the proxy acts very slow; checking the log I see a lot of:

2018/11/19 14:10:04 [error] 16379#16379: *51 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 165.225.32.108, server: pms.domain.com, request: "POST /Usuario/Login HTTP/1.1", upstream: "http://192.168.X.11:80/Usuario/Login", host: "pms.domain.com", referrer: "https://pms.domain.com/"

If I remove the proxy and leave only the application, it is fast and without delay; when I put the proxy back, I have a lot of delay. I have around 40 concurrent connections, and my internal network between the proxy and the IIS server is a 1 Gb network, very fast. My config:

upstream backend {
    server 192.168.X.11:80; ## web server on Windows
    keepalive 15;
}

location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
    proxy_buffers 32 4m;
    proxy_busy_buffers_size 25m;
    proxy_buffer_size 512k;
    proxy_max_temp_file_size 0;
    client_max_body_size 1024m;
    client_body_buffer_size 4m;
    proxy_intercept_errors off;
    proxy_pass http://backend;
}
}

I hope a light in this darkness ;) regards. -- rickygm http://gnuforever.homelinux.com From mdounin at mdounin.ru Tue Nov 20 13:48:51 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Nov 2018 16:48:51 +0300 Subject: Nginx 14.1 very slow as a reverse proxy In-Reply-To: References: Message-ID: <20181120134851.GQ99070@mdounin.ru> Hello!
On Mon, Nov 19, 2018 at 07:37:34PM -0600, Ricky Gutierrez wrote: > Hi list , I need a little help, I have a reverse proxy in front of an > ASP.net application, the nginx server has it with 4gb ram, disk sas at > 15K and 2 core in centos 7, I've noticed that the proxy acts very > slow, checking the log I see a lot > > 2018/11/19 14:10:04 [error] 16379#16379: *51 upstream timed out (110: > Connection timed out) while reading response header from upstream, > client: 165.225.32.108, server: pms.domain.com, request: "POST > /Usuario/Login HTTP/1.1", upstream: > "http://192.168.X.11:80/Usuario/Login", host: "pms.domain.com", > referrer: "https://pms.domain.com/" > > if I remove the proxy and leave only the application is fast and > without delay, I put the proxy and I have a lot of delay. > I have around 40 concurrent connections, my internal network between > the proxy and the IIS server is 1gb network, very fast , As per the error message, your backend fails to accept new connections in a reasonable time. You may want to look at the backend to find out why it cannot accept new connections. One of the possible reasons might be that the keepalive connections cache you've configured is too large for your backend: > upstream backend { > server 192.168.X.11:80; ## web server on Windows > keepalive 15; With "keepalive 15" nginx will keep open up to 15 connections in each worker process - that is, up to (15 * worker_processes) in total, and this may be too many for your backend. Try switching off keepalive and/or using a smaller cache size to see if it helps. -- Maxim Dounin http://mdounin.ru/ From xserverlinux at gmail.com Tue Nov 20 14:43:52 2018 From: xserverlinux at gmail.com (Ricky Gutierrez) Date: Tue, 20 Nov 2018 08:43:52 -0600 Subject: Nginx 14.1 very slow as a reverse proxy In-Reply-To: <20181120134851.GQ99070@mdounin.ru> References: <20181120134851.GQ99070@mdounin.ru> Message-ID: Hi Dounin, what do you mean by a smaller cache size? The proxy buffer size, for example?
I'm going to make the change you suggest. On Tue, Nov 20, 2018 at 7:49 AM, Maxim Dounin wrote: > Hello! > > > > the proxy and the IIS server is 1gb network, very fast , > > As per the error message, your backend fails to accept new > connections in a reasonable time. You may want to look at the > backend to find out why it cannot accept new connections. > > One of the possible reasons might be that the keepalive connections > cache you've configured is too large for your backend: > > > upstream backend { > > server 192.168.X.11:80; ## web server on Windows > > keepalive 15; > > With "keepalive 15" nginx will keep open up to 15 connections in > each worker process - that is, up to (15 * worker_processes) in > total, and this may be too many for your backend. > > Try switching off keepalive and/or using a smaller cache size to see > if it helps. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- rickygm http://gnuforever.homelinux.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 20 14:56:24 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Nov 2018 17:56:24 +0300 Subject: Nginx 14.1 very slow as a reverse proxy In-Reply-To: References: <20181120134851.GQ99070@mdounin.ru> Message-ID: <20181120145624.GT99070@mdounin.ru> Hello! On Tue, Nov 20, 2018 at 08:43:52AM -0600, Ricky Gutierrez wrote: > Hi Dounin , What do you mean with smaller cache size?, proxy buffer size > for example? I mean the keepalive connections cache size, as set to 15 in your configuration. Comment out the "keepalive 15;" line and/or try "keepalive 1;" instead.
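In configuration terms, the change suggested above would amount to something like the following sketch (the upstream address is the placeholder from the original post; pick one of the two variants):

```nginx
upstream backend {
    server 192.168.X.11:80;  # web server on Windows (placeholder address)

    # Either reduce the per-worker connection cache drastically ...
    keepalive 1;

    # ... or delete the keepalive directive entirely, so every proxied
    # request opens and closes its own upstream connection.
}
```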
-- Maxim Dounin http://mdounin.ru/ From rob at cow-frenzy.co.uk Wed Nov 21 08:53:40 2018 From: rob at cow-frenzy.co.uk (Rob Fulton) Date: Wed, 21 Nov 2018 08:53:40 +0000 Subject: Using correct variable for proxy_pass In-Reply-To: <20181119131209.GM99070@mdounin.ru> References: <6466b3fa-22e8-53dc-b80d-ee316acc99a6@cow-frenzy.co.uk> <20181119131209.GM99070@mdounin.ru> Message-ID: Hi, On 19/11/2018 13:12, Maxim Dounin wrote: > > If you want to use variables in the proxy_pass and at the same > time want to preserve the effect of nginx internal URI changes such as > due to rewrites, consider using an empty URI component in the > proxy_pass. For example: > > set $backend "http://example.com"; > proxy_pass $backend; > Thanks for the guidance; however, we are proxying to a specific folder on our backend servers, i.e.:

set $backend "http://example.com/blue/content";
proxy_pass $backend;

Is there a way to make this work in this scenario? Regards Rob From mdounin at mdounin.ru Wed Nov 21 13:20:23 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Nov 2018 16:20:23 +0300 Subject: Using correct variable for proxy_pass In-Reply-To: References: <6466b3fa-22e8-53dc-b80d-ee316acc99a6@cow-frenzy.co.uk> <20181119131209.GM99070@mdounin.ru> Message-ID: <20181121132023.GW99070@mdounin.ru> Hello! On Wed, Nov 21, 2018 at 08:53:40AM +0000, Rob Fulton wrote: > On 19/11/2018 13:12, Maxim Dounin wrote: > > > > If you want to use variables in the proxy_pass and at the same > > time want to preserve the effect of nginx internal URI changes such as > > due to rewrites, consider using an empty URI component in the > > proxy_pass. For example: > > > > set $backend "http://example.com"; > > proxy_pass $backend; > > > Thanks for the guidance; however, we are proxying to a specific folder on > our backend servers, i.e.: > > set $backend "http://example.com/blue/content"; > > proxy_pass $backend; > > Is there a way to make this work in this scenario?
Try changing the URI in nginx with rewrite instead of trying to construct the URI in proxy_pass by hand, e.g.:

set $backend "http://example.com";
rewrite ^(.*) /blue/content$1 break;
proxy_pass $backend;

-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Nov 21 18:47:40 2018 From: nginx-forum at forum.nginx.org (Ortal) Date: Wed, 21 Nov 2018 13:47:40 -0500 Subject: upgrade to 1-9.15 caused block requests Message-ID: <76d958b50507e7442aaa47186072fd9e.NginxMailingListEnglish@forum.nginx.org> Hi, I am developing my own nginx module: it receives POST requests, parses the data and sends 204. I worked with nginx version release-1.9.15, and I am trying to upgrade to version release-1.15.5. After the upgrade, POST requests with a payload larger than 1M are getting blocked. From the nginx log:

2018/11/21 20:03:10 [debug] 13470#0: *2 http reading blocked

I attached to the process and this is the request Expect: 100-continue Content-Type: multipart/form-data; boundary=", '-' ================= (gdb) p *c->buffer $95 = {pos = 0xd00cf0 "PUT /1/file.txt HTTP/1.1\r\nHost: 127.0.0.1:8081\r\nUser-Agent: curl/7.47.0\r\nAccept: */*\r\nContent-Length: 4\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\ntxt1", last = 0xd00d90 "", file_pos = 0, file_last = 0, start = 0xd00cf0 "PUT /1/file.txt HTTP/1.1\r\nHost: 127.0.0.1:8081\r\nUser-Agent: curl/7.47.0\r\nAccept: */*\r\nContent-Length: 4\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\ntxt1", end = 0xd010f0 "\020\004", tag = 0x0, file = 0x0, shadow = 0x0, temporary = 1, memory = 0, mmap = 0, recycled = 0, in_file = 0, flush = 0, sync = 0, last_buf = 0, last_in_chain = 0, last_shadow = 0, temp_file = 0, num = 0} ================= When the request gets to ngx_http_read_client_request_body, ngx_http_test_expect sends the HTTP/1.1 100 Continue response, but the request has no data in the buffer in ngx_http_create_request: ================= (gdb) p *r->header_in $58 = { pos = 0xf3f1d0 "PUT /1/file_2G.txt
HTTP/1.1\r\nHost: 127.0.0.1:8081\r\nUser-Agent: curl/7.47.0\r\nAccept: */*\r\nContent-Length: 1048804\r\nExpect: 100-continue\r\nContent-Type: multipart/form-data; boundary=", '-' ..., last = 0xf3f2b0 "", file_pos = 0, file_last = 0, start = 0xf3f1d0 "PUT /1/file_2G.txt HTTP/1.1\r\nHost: 127.0.0.1:8081\r\nUser-Agent: curl/7.47.0\r\nAccept: */*\r\nContent-Length: 1048804\r\nExpect: 100-continue\r\nContent-Type: multipart/form-data; boundary=", '-' ..., end = 0xf3f5d0 "", tag = 0x0, file = 0x0, shadow = 0x0, temporary = 1, memory = 0, mmap = 0, recycled = 0, in_file = 0, flush = 0, sync = 0, last_buf = 0, last_in_chain = 0, last_shadow = 0, temp_file = 0, num = 0} ================= on ngx_http_read_client_request_body: ================= $59 = {pos = 0xf3f2b0 "", last = 0xf3f2b0 "", file_pos = 0, file_last = 0, start = 0xf3f1d0 "PUT /1", end = 0xf3f5d0 "", tag = 0x0, file = 0x0, shadow = 0x0, temporary = 1, memory = 0, mmap = 0, recycled = 0, in_file = 0, flush = 0, sync = 0, last_buf = 0, last_in_chain = 0, last_shadow = 0, temp_file = 0, num = 0} ================= The requests are blocked without a response. I did not change anything in my code; I tried to return to the old nginx version (1.9.15) and it worked. I am using a basic curl command. I could not find any related information on the topic; is anyone familiar with this problem? Nginx conf: Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282068,282068#msg-282068 From mdounin at mdounin.ru Wed Nov 21 19:27:06 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Nov 2018 22:27:06 +0300 Subject: upgrade to 1-9.15 caused block requests In-Reply-To: <76d958b50507e7442aaa47186072fd9e.NginxMailingListEnglish@forum.nginx.org> References: <76d958b50507e7442aaa47186072fd9e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181121192706.GY99070@mdounin.ru> Hello!
On Wed, Nov 21, 2018 at 01:47:40PM -0500, Ortal wrote: > Hi, > I am developing my own nginx module, I am getting a post requests, parse the > data and send 204. > I worked with nginx version release-1.9.15, and I am trying to upgrade to > version release-1.15.5. > After the upgrade post requests with payload larger then 1M are getting > blocked. > From the nginx log: > 2018/11/21 20:03:10 [debug] 13470#0: *2 http reading blocked Most likely, this is a bug in your module which started to consistently manifest itself after the upgrade. The "http reading blocked" message is printed by the ngx_http_block_reading() function, which is used as a request read event handler when nginx doesn't want to read anything from the network. If you see this message and think this is a problem, most likely you've forgotten to update request event handlers at some point. In particular, a common mistake is to try to read the request body in a phase handler, and then try to continue request processing by calling ngx_http_core_run_phases() without restoring the r->write_event_handler as well. If you need further help with finding out what goes wrong, consider posting the code of your module (preferably a minimal yet working test module) and a full debugging log which demonstrates the problem. -- Maxim Dounin http://mdounin.ru/ From antoine.bonavita at gmail.com Wed Nov 21 19:37:52 2018 From: antoine.bonavita at gmail.com (Antoine Bonavita) Date: Wed, 21 Nov 2018 20:37:52 +0100 Subject: njs and subrequests In-Reply-To: References: Message-ID: Hello Dmitry, Thanks for your answer. I understand locations can be configured to proxy to external locations and therefore be used in subrequests from njs. That is what I am doing for now. I consider this to be a workaround. Before I explain why, I should explain the basics of my pet project.
You can think of it as a service to which you post the URL of an RSS feed, and I expect the server to retrieve the feed, collect all the articles in parallel (using subrequests) and concatenate the results into my response. Therefore, the URLs I need to query are not known in advance. And, if I am not mistaken, that forces me to use a variable as the argument for my proxy_pass, just like I mentioned in my first email: location = /fwd-proxy { proxy_pass $arg_tgt; } Because of that, my understanding is that the external requests are performed using HTTP/1.0 and in particular the connection(s) to the external sources are not kept alive, making the initiator "pay" for the TCP connection establishment on every single request. Am I mistaken in my understanding? Of course, all this is fine as long as I play in my sandbox and with servers that I can reach with low latency, but I'm concerned if this becomes an actual service one day. Thanks for your precious help, Antoine. On Mon, Nov 19, 2018 at 2:28 PM Dmitry Volyntsev wrote: > > Hi Antoine, > > >Is there any plan to have subrequest from ngx_http_js_module support > > external URLs ? > > Nothing prevents you from making subrequests to external URLs. > > https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass > https://nginx.org/en/docs/http/ngx_http_core_module.html#resolver > > >The address can be specified as a domain name or IP address > .. > >In this case, if an address is specified as a domain name, the name is > searched among the described server groups, and, if not found, is > determined using a resolver. > > You still need a location to make a proxy_pass for you (which you already > have). > > As Valentin said, there is nothing special in ngx_http_js_module about > subrequests. The module simply uses the internal NGINX API for subrequests > (http://hg.nginx.org/njs/file/tip/nginx/ngx_http_js_module.c#l2099).
> > You can find a more complex example of using njs subrequests here: > https://github.com/nginxinc/nginx-openid-connect > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xserverlinux at gmail.com Thu Nov 22 03:02:36 2018 From: xserverlinux at gmail.com (Ricky Gutierrez) Date: Wed, 21 Nov 2018 21:02:36 -0600 Subject: Nginx 14.1 very slow as a reverse proxy In-Reply-To: <20181120145624.GT99070@mdounin.ru> References: <20181120134851.GQ99070@mdounin.ru> <20181120145624.GT99070@mdounin.ru> Message-ID: On Tue, Nov 20, 2018 at 8:56 AM, Maxim Dounin () wrote: > > Hello! > > On Tue, Nov 20, 2018 at 08:43:52AM -0600, Ricky Gutierrez wrote: > > > Hi Dounin , What do you mean with smaller cache size?, proxy buffer size > > for example? > > I mean keepalive connections cache size, as set to 15 in your > configuration. Comment out the "keepalive 15;" line and/or try > "keepalive 1;" instead. > Hi Maxim, I made the changes and it is working fine! Thanks for your help. -- rickygm http://gnuforever.homelinux.com From mailinglist at unix-solution.de Thu Nov 22 10:59:44 2018 From: mailinglist at unix-solution.de (basti) Date: Thu, 22 Nov 2018 11:59:44 +0100 Subject: client sent plain HTTP request to HTTPS port while reading client request headers Message-ID: Hello, after I switched my error log to info level (to find another problem) I get a lot of messages like: "client sent plain HTTP request to HTTPS port while reading client request headers" I searched for a solution and always found that this error is shown when http and https are in one server {} block, but my config looks like the old-fashioned way:

server {
    listen 443 ssl;
    server_name your.site.tld;
    ssl on;
    ...
}

server {
    listen 80;
    server_name your.site.tld;
    return 301 https://your.site.tld$request_uri;
}

I have switched the order of the http and https server{} blocks; the error is still the same. I have also tried "error_page 497 https://$host:443$request_uri;"; the error is still the same.
I can't reproduce it; I have tried to connect to the site via http and also via https and grepped the log for my IP, but I found nothing. How can I fix that? Best Regards, From nginx-forum at forum.nginx.org Thu Nov 22 12:15:41 2018 From: nginx-forum at forum.nginx.org (Ortal) Date: Thu, 22 Nov 2018 07:15:41 -0500 Subject: upgrade to 1-9.15 caused block requests In-Reply-To: <20181121192706.GY99070@mdounin.ru> References: <20181121192706.GY99070@mdounin.ru> Message-ID: <3701f2b9aadfef36c6971e7a06ca622a.NginxMailingListEnglish@forum.nginx.org> This is a small example of my code. In the current version it tries to read the buffer and print it on POST/PUT requests, and on any other request it just sends a response (I tried to minimize it as much as I could). In the current flow, when running a POST request with 1MB of data, ngx_http_read_client_request_body returns NGX_AGAIN and does not call the post_handler. I am not getting the ngx_http_block_reading message for the request, but I still do not get the request buffer, and the post_handler is not called:

static char *
ngx_v3io_test_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    if (unlikely(cf == NULL || cmd == NULL || conf == NULL))
        return NGX_CONF_ERROR;

    ngx_http_core_loc_conf_t *core_loc_cf;

    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, cf->log, 0,
                   "ngx_v3io_test: pass cmd args:%d", cf->args->nelts);

    core_loc_cf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);
    if (unlikely(!core_loc_cf))
        return NGX_CONF_ERROR;

    core_loc_cf->handler = ngx_v3io_test_req_handler;

    return NGX_CONF_OK;
}

static void
ngx_v3io_test_send_response(ngx_http_request_t *req, ngx_int_t rc)
{
    req->headers_out.status = rc;
    req->headers_out.content_length_n = 0;
    req->header_only = 1;
    ngx_http_send_header(req);
    req->connection->log->action = "sending to client";
    ngx_http_finalize_request(req, NGX_DONE);
}

ngx_int_t
ngx_v3io_http_test_read_client_request_body(ngx_http_request_t *req,
    ngx_http_client_body_handler_pt post_handler)
{
    if (unlikely(req->main->count == 0)) {
        ngx_log_error(NGX_LOG_ERR, req->connection->log, 0,
                      "ngx_v3io_test: r->main->count == 0, req: %p", req);
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }
    req->main->count--;
    return ngx_http_read_client_request_body(req, post_handler);
}

static void
ngx_v3io_obj_test_post_handle_request_body(ngx_http_request_t *req)
{
    ngx_log_t *log = req->connection->log;

    ngx_log_error(NGX_LOG_INFO, log, 0,
                  "ngx_v3io_test req: %p, ngx_ret: %d, req_len: %l \n",
                  req, req->headers_in.content_length_n);
    if (req->request_body->bufs != NULL) {
        ngx_log_error(NGX_LOG_INFO, log, 0,
                      "ngx_v3io_test req: %p, req_buf: %s \n",
                      req, req->request_body->bufs->buf->pos);
    } else {
        ngx_log_error(NGX_LOG_INFO, log, 0,
                      "ngx_v3io_test req: %p, %d, req_buf is NULL \n", req);
    }
    ngx_v3io_test_send_response(req, NGX_OK);
}

ngx_int_t
ngx_v3io_http_test_handle_post_req(ngx_http_request_t *req)
{
    ngx_log_t *log = req->connection->log;
    ngx_int_t rc;

    rc = ngx_handle_read_event(req->connection->read, 0);
    if (unlikely(rc != NGX_OK)) {
        ngx_log_error(NGX_LOG_INFO, log, 0,
                      "ngx_v3io_test: req: %p, failed to handle_read_event, ngx_ret: %d",
                      req, rc);
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }
    ngx_log_debug1(NGX_LOG_DEBUG_HTTP, log, 0, "ngx_v3io_test: req: %p", req);

    rc = ngx_v3io_http_test_read_client_request_body(req,
             ngx_v3io_obj_test_post_handle_request_body);
    if (likely(rc == NGX_OK || rc == NGX_AGAIN)) {
        ngx_log_error(NGX_LOG_INFO, log, 0,
                      "ngx_v3io_test req: %p, read_client_request_body, ngx_ret: %d \n",
                      req, rc);
        return NGX_AGAIN;
    } else {
        ngx_log_error(NGX_LOG_INFO, log, 0,
                      "ngx_v3io_test req: %p, read_client_request_body, ngx_ret: %d \n",
                      req, rc);
        return rc;
    }
}

#define ONE_MB_LEN (1024 * 1024)

ngx_int_t
ngx_v3io_http_test_post_req(ngx_http_request_t *req)
{
    ngx_int_t rc = NGX_OK;

    rc = ngx_v3io_http_test_handle_post_req(req);
    if (unlikely(rc != NGX_AGAIN)) {
        rc = NGX_HTTP_INTERNAL_SERVER_ERROR;
        ngx_v3io_test_send_response(req, rc);
    }
    return rc;
}

ngx_int_t
ngx_v3io_test_req_handler(ngx_http_request_t *req)
{
    if (unlikely(!req))
        return NGX_HTTP_INTERNAL_SERVER_ERROR;

    ngx_log_t *log = req->connection->log;
    if (unlikely(!log))
        return NGX_HTTP_INTERNAL_SERVER_ERROR;

    ngx_uint_t method = req->method;
    ngx_int_t rc = NGX_OK;

    if (method & (NGX_HTTP_PUT | NGX_HTTP_POST))
        rc = ngx_v3io_http_test_post_req(req);
    else
        ngx_v3io_test_send_response(req, rc);

    return rc;
}

nginx logs: when I am sending a post request with size = 4 bytes:
=======
2018/11/22 14:00:21 [debug] 16673#0: epoll: fd:13 ev:0001 d:00007F4A7DB21010
2018/11/22 14:00:21 [debug] 16673#0: accept on 0.0.0.0:8081, ready: 1
2018/11/22 14:00:21 [debug] 16673#0: posix_memalign: 00000000011C9F10:512 @16
2018/11/22 14:00:21 [debug] 16673#0: *7 accept: 127.0.0.1:35432 fd:17
2018/11/22 14:00:21 [debug] 16673#0: *7 event timer add: 17: 60000:56037164
2018/11/22 14:00:21 [debug] 16673#0: *7 reusable connection: 1
2018/11/22 14:00:21 [debug] 16673#0: *7 epoll add event: fd:17 op:1 ev:80002001
2018/11/22 14:00:21 [debug] 16673#0: accept() not ready (11: Resource temporarily unavailable)
2018/11/22 14:00:21 [debug] 16673#0: timer delta: 4760
2018/11/22 14:00:21 [debug] 16673#0: worker cycle
2018/11/22 14:00:21 [debug] 16673#0: epoll timer: 20077
2018/11/22 14:00:21 [debug] 16673#0: epoll: fd:17 ev:0001 d:00007F4A7DB212E0
2018/11/22 14:00:21 [debug] 16673#0: *7 http wait request handler
2018/11/22 14:00:21 [debug] 16673#0: *7 malloc: 00000000011CA120:1024
2018/11/22 14:00:21 [debug] 16673#0: *7 recv: eof:0, avail:1
2018/11/22 14:00:21 [debug] 16673#0: *7 recv: fd:17 160 of 1024
2018/11/22 14:00:21 [debug] 16673#0: *7 reusable connection: 0
2018/11/22 14:00:21 [debug] 16673#0: *7 posix_memalign: 00000000011D2AE0:4096 @16
2018/11/22 14:00:21 [debug] 16673#0: *7 http process request line
2018/11/22 14:00:21 [debug] 16673#0: *7 http request line: "PUT /1/file.txt HTTP/1.1"
2018/11/22 14:00:21 [debug] 16673#0: *7 http uri: "/1/file.txt"
2018/11/22 14:00:21 [debug] 16673#0: *7 http args:
"" 2018/11/22 14:00:21 [debug] 16673#0: *7 http exten: "txt" 2018/11/22 14:00:21 [debug] 16673#0: *7 posix_memalign: 00000000011AF4A0:4096 @16 2018/11/22 14:00:21 [debug] 16673#0: *7 http process request header line 2018/11/22 14:00:21 [debug] 16673#0: *7 http header: "Host: 127.0.0.1:8081" 2018/11/22 14:00:21 [debug] 16673#0: *7 http header: "User-Agent: curl/7.47.0" 2018/11/22 14:00:21 [debug] 16673#0: *7 http header: "Accept: */*" 2018/11/22 14:00:21 [debug] 16673#0: *7 http header: "Content-Length: 4" 2018/11/22 14:00:21 [debug] 16673#0: *7 http header: "Content-Type: application/x-www-form-urlencoded" 2018/11/22 14:00:21 [debug] 16673#0: *7 http header done 2018/11/22 14:00:21 [debug] 16673#0: *7 event timer del: 17: 56037164 2018/11/22 14:00:21 [debug] 16673#0: *7 rewrite phase: 0 2018/11/22 14:00:21 [debug] 16673#0: *7 test location: "/" 2018/11/22 14:00:21 [debug] 16673#0: *7 using configuration "/" 2018/11/22 14:00:21 [debug] 16673#0: *7 http cl:4 max:5368709120 2018/11/22 14:00:21 [debug] 16673#0: *7 rewrite phase: 2 2018/11/22 14:00:21 [debug] 16673#0: *7 http script value: "1" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script set $cors 2018/11/22 14:00:21 [debug] 16673#0: *7 http script var 2018/11/22 14:00:21 [debug] 16673#0: *7 http script var: "PUT" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script value: "OPTIONS" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script equal 2018/11/22 14:00:21 [debug] 16673#0: *7 http script equal: no 2018/11/22 14:00:21 [debug] 16673#0: *7 http script if 2018/11/22 14:00:21 [debug] 16673#0: *7 http script if: false 2018/11/22 14:00:21 [debug] 16673#0: *7 http script var 2018/11/22 14:00:21 [debug] 16673#0: *7 http script var: "1" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script value: "1" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script equal 2018/11/22 14:00:21 [debug] 16673#0: *7 http script if 2018/11/22 14:00:21 [debug] 16673#0: *7 http script complex value 2018/11/22 14:00:21 [debug] 16673#0: *7 http 
script var: "/1/file.txt" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script copy: "?" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script set $v3io_key 2018/11/22 14:00:21 [debug] 16673#0: *7 http script var 2018/11/22 14:00:21 [debug] 16673#0: *7 http script var: "1" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script value: "1o" 2018/11/22 14:00:21 [debug] 16673#0: *7 http script equal 2018/11/22 14:00:21 [debug] 16673#0: *7 http script equal: no 2018/11/22 14:00:21 [debug] 16673#0: *7 http script if 2018/11/22 14:00:21 [debug] 16673#0: *7 http script if: false 2018/11/22 14:00:21 [debug] 16673#0: *7 post rewrite phase: 3 2018/11/22 14:00:21 [debug] 16673#0: *7 generic phase: 4 2018/11/22 14:00:21 [debug] 16673#0: *7 generic phase: 5 2018/11/22 14:00:21 [debug] 16673#0: *7 access phase: 6 2018/11/22 14:00:21 [debug] 16673#0: *7 access phase: 7 2018/11/22 14:00:21 [debug] 16673#0: *7 post access phase: 8 2018/11/22 14:00:21 [debug] 16673#0: *7 generic phase: 9 2018/11/22 14:00:21 [debug] 16673#0: *7 generic phase: 10 2018/11/22 14:00:21 [debug] 16673#0: *7 ngx_v3io_test: req: 00000000011D2B30 2018/11/22 14:00:21 [debug] 16673#0: *7 http client request body preread 4 2018/11/22 14:00:21 [debug] 16673#0: *7 http request body content length filter 2018/11/22 14:00:21 [debug] 16673#0: *7 http body new buf t:1 f:0 00000000011CA1BC, pos 00000000011CA1BC, size: 4 file: 0, size: 0 2018/11/22 14:00:21 [info] 16673#0: *7 ngx_v3io_test req: 00000000011D2B30, ngx_ret: 4, req_len: 4 , client: 127.0.0.1, server: localhost, request: "PUT /1/file.txt HTTP/1.1", host: "127.0.0.1:8081" 2018/11/22 14:00:21 [info] 16673#0: *7 ngx_v3io_test req: 00000000011D2B30, req_buf: txt1 , client: 127.0.0.1, server: localhost, request: "PUT /1/file.txt HTTP/1.1", host: "127.0.0.1:8081" 2018/11/22 14:00:21 [debug] 16673#0: *7 HTTP/1.1 000 Server: nginx/1.15.6 Date: Thu, 22 Nov 2018 12:00:21 GMT Content-Length: 0 Connection: keep-alive 2018/11/22 14:00:21 [debug] 16673#0: *7 write new buf t:1 
f:0 00000000011D3A10, pos 00000000011D3A10, size: 119 file: 0, size: 0 2018/11/22 14:00:21 [debug] 16673#0: *7 http write filter: l:1 f:0 s:119 2018/11/22 14:00:21 [debug] 16673#0: *7 http write filter limit 0 2018/11/22 14:00:21 [debug] 16673#0: *7 writev: 119 of 119 2018/11/22 14:00:21 [debug] 16673#0: *7 http write filter 0000000000000000 2018/11/22 14:00:21 [debug] 16673#0: *7 http finalize request: -4, "/1/file.txt?" a:1, c:1 2018/11/22 14:00:21 [debug] 16673#0: *7 set http keepalive handler 2018/11/22 14:00:21 [debug] 16673#0: *7 http close request 2018/11/22 14:00:21 [debug] 16673#0: *7 http log handler 2018/11/22 14:00:21 [debug] 16673#0: *7 free: 00000000011D2AE0, unused: 32 2018/11/22 14:00:21 [debug] 16673#0: *7 free: 00000000011AF4A0, unused: 2974 2018/11/22 14:00:21 [debug] 16673#0: *7 free: 00000000011CA120 2018/11/22 14:00:21 [debug] 16673#0: *7 hc free: 0000000000000000 2018/11/22 14:00:21 [debug] 16673#0: *7 hc busy: 0000000000000000 0 2018/11/22 14:00:21 [debug] 16673#0: *7 tcp_nodelay 2018/11/22 14:00:21 [debug] 16673#0: *7 reusable connection: 1 2018/11/22 14:00:21 [debug] 16673#0: *7 event timer add: 17: 65000:56042164 2018/11/22 14:00:21 [info] 16673#0: *7 ngx_v3io_test req: 00000000011D2B30, read_client_request_body, ngx_ret: 0 while keepalive, client: 127.0.0.1, server: 0.0.0.0:8081 2018/11/22 14:00:21 [debug] 16673#0: *7 http finalize request: -2, "x }|J?" a:0, c:1 2018/11/22 14:00:21 [alert] 16673#0: *7 http finalize non-active request: "x }|J?" 
while keepalive, client: 127.0.0.1, server: 0.0.0.0:8081 2018/11/22 14:00:21 [debug] 16673#0: timer delta: 0 2018/11/22 14:00:21 [debug] 16673#0: worker cycle 2018/11/22 14:00:21 [debug] 16673#0: epoll timer: 20077 2018/11/22 14:00:21 [debug] 16673#0: epoll: fd:17 ev:2001 d:00007F4A7DB212E0 2018/11/22 14:00:21 [debug] 16673#0: *7 http keepalive handler 2018/11/22 14:00:21 [debug] 16673#0: *7 malloc: 00000000011CA120:1024 2018/11/22 14:00:21 [debug] 16673#0: *7 recv: eof:1, avail:1 2018/11/22 14:00:21 [debug] 16673#0: *7 recv: fd:17 0 of 1024 2018/11/22 14:00:21 [info] 16673#0: *7 client 127.0.0.1 closed keepalive connection 2018/11/22 14:00:21 [debug] 16673#0: *7 close http connection: 17 2018/11/22 14:00:21 [debug] 16673#0: *7 event timer del: 17: 56042164 2018/11/22 14:00:21 [debug] 16673#0: *7 reusable connection: 0 2018/11/22 14:00:21 [debug] 16673#0: *7 free: 00000000011CA120 2018/11/22 14:00:21 [debug] 16673#0: *7 free: 00000000011C9F10, unused: 128 2018/11/22 14:00:21 [debug] 16673#0: timer delta: 0 2018/11/22 14:00:21 [debug] 16673#0: worker cycle 2018/11/22 14:00:21 [debug] 16673#0: epoll timer: 20077 ======= When I am sending 1MB file: ======= 2018/11/22 14:01:56 [debug] 16673#0: epoll: fd:13 ev:0001 d:00007F4A7DB21010 2018/11/22 14:01:56 [debug] 16673#0: accept on 0.0.0.0:8081, ready: 1 2018/11/22 14:01:56 [debug] 16673#0: posix_memalign: 00000000011A0150:512 @16 2018/11/22 14:01:56 [debug] 16673#0: *8 accept: 127.0.0.1:35438 fd:3 2018/11/22 14:01:56 [debug] 16673#0: *8 event timer add: 3: 60000:56131622 2018/11/22 14:01:56 [debug] 16673#0: *8 reusable connection: 1 2018/11/22 14:01:56 [debug] 16673#0: *8 epoll add event: fd:3 op:1 ev:80002001 2018/11/22 14:01:56 [debug] 16673#0: accept() not ready (11: Resource temporarily unavailable) 2018/11/22 14:01:56 [debug] 16673#0: timer delta: 74359 2018/11/22 14:01:56 [debug] 16673#0: worker cycle 2018/11/22 14:01:56 [debug] 16673#0: epoll timer: 60000 2018/11/22 14:01:56 [debug] 16673#0: epoll: fd:3 ev:0001 
d:00007F4A7DB211F0 2018/11/22 14:01:56 [debug] 16673#0: *8 http wait request handler 2018/11/22 14:01:56 [debug] 16673#0: *8 malloc: 00000000011C9F10:1024 2018/11/22 14:01:56 [debug] 16673#0: *8 recv: eof:0, avail:1 2018/11/22 14:01:56 [debug] 16673#0: *8 recv: fd:3 224 of 1024 2018/11/22 14:01:56 [debug] 16673#0: *8 reusable connection: 0 2018/11/22 14:01:56 [debug] 16673#0: *8 posix_memalign: 00000000011D2AE0:4096 @16 2018/11/22 14:01:56 [debug] 16673#0: *8 http process request line 2018/11/22 14:01:56 [debug] 16673#0: *8 http request line: "PUT /1/file_2G.txt HTTP/1.1" 2018/11/22 14:01:56 [debug] 16673#0: *8 http uri: "/1/file_2G.txt" 2018/11/22 14:01:56 [debug] 16673#0: *8 http args: "" 2018/11/22 14:01:56 [debug] 16673#0: *8 http exten: "txt" 2018/11/22 14:01:56 [debug] 16673#0: *8 posix_memalign: 00000000011AF4A0:4096 @16 2018/11/22 14:01:56 [debug] 16673#0: *8 http process request header line 2018/11/22 14:01:56 [debug] 16673#0: *8 http header: "Host: 127.0.0.1:8081" 2018/11/22 14:01:56 [debug] 16673#0: *8 http header: "User-Agent: curl/7.47.0" 2018/11/22 14:01:56 [debug] 16673#0: *8 http header: "Accept: */*" 2018/11/22 14:01:56 [debug] 16673#0: *8 http header: "Content-Length: 1048804" 2018/11/22 14:01:56 [debug] 16673#0: *8 http header: "Expect: 100-continue" 2018/11/22 14:01:56 [debug] 16673#0: *8 http header: "Content-Type: multipart/form-data; boundary=------------------------2034d95ad2250138" 2018/11/22 14:01:56 [debug] 16673#0: *8 http header done 2018/11/22 14:01:56 [debug] 16673#0: *8 event timer del: 3: 56131622 2018/11/22 14:01:56 [debug] 16673#0: *8 rewrite phase: 0 2018/11/22 14:01:56 [debug] 16673#0: *8 test location: "/" 2018/11/22 14:01:56 [debug] 16673#0: *8 using configuration "/" 2018/11/22 14:01:56 [debug] 16673#0: *8 http cl:1048804 max:5368709120 2018/11/22 14:01:56 [debug] 16673#0: *8 rewrite phase: 2 2018/11/22 14:01:56 [debug] 16673#0: *8 http script value: "1" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script set $cors 2018/11/22 
14:01:56 [debug] 16673#0: *8 http script var 2018/11/22 14:01:56 [debug] 16673#0: *8 http script var: "PUT" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script value: "OPTIONS" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script equal 2018/11/22 14:01:56 [debug] 16673#0: *8 http script equal: no 2018/11/22 14:01:56 [debug] 16673#0: *8 http script if 2018/11/22 14:01:56 [debug] 16673#0: *8 http script if: false 2018/11/22 14:01:56 [debug] 16673#0: *8 http script var 2018/11/22 14:01:56 [debug] 16673#0: *8 http script var: "1" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script value: "1" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script equal 2018/11/22 14:01:56 [debug] 16673#0: *8 http script if 2018/11/22 14:01:56 [debug] 16673#0: *8 http script complex value 2018/11/22 14:01:56 [debug] 16673#0: *8 http script var: "/1/file_2G.txt" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script copy: "?" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script set $v3io_key 2018/11/22 14:01:56 [debug] 16673#0: *8 http script var 2018/11/22 14:01:56 [debug] 16673#0: *8 http script var: "1" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script value: "1o" 2018/11/22 14:01:56 [debug] 16673#0: *8 http script equal 2018/11/22 14:01:56 [debug] 16673#0: *8 http script equal: no 2018/11/22 14:01:56 [debug] 16673#0: *8 http script if 2018/11/22 14:01:56 [debug] 16673#0: *8 http script if: false 2018/11/22 14:01:56 [debug] 16673#0: *8 post rewrite phase: 3 2018/11/22 14:01:56 [debug] 16673#0: *8 generic phase: 4 2018/11/22 14:01:56 [debug] 16673#0: *8 generic phase: 5 2018/11/22 14:01:56 [debug] 16673#0: *8 access phase: 6 2018/11/22 14:01:56 [debug] 16673#0: *8 access phase: 7 2018/11/22 14:01:56 [debug] 16673#0: *8 post access phase: 8 2018/11/22 14:01:56 [debug] 16673#0: *8 generic phase: 9 2018/11/22 14:01:56 [debug] 16673#0: *8 generic phase: 10 2018/11/22 14:01:56 [debug] 16673#0: *8 ngx_v3io_test: req: 00000000011D2B30 2018/11/22 14:01:56 [debug] 16673#0: *8 send 100 Continue 
2018/11/22 14:01:56 [debug] 16673#0: *8 send: fd:3 25 of 25 2018/11/22 14:01:56 [debug] 16673#0: *8 http request body content length filter 2018/11/22 14:01:56 [debug] 16673#0: *8 malloc: 000000000121C990:1048804 2018/11/22 14:01:56 [debug] 16673#0: *8 http read client request body 2018/11/22 14:01:56 [debug] 16673#0: *8 recv: eof:0, avail:0 2018/11/22 14:01:56 [debug] 16673#0: *8 http client request body recv -2 2018/11/22 14:01:56 [debug] 16673#0: *8 http client request body rest 1048804 2018/11/22 14:01:56 [debug] 16673#0: *8 event timer add: 3: 60000:56131622 2018/11/22 14:01:56 [info] 16673#0: *8 ngx_v3io_test req: 00000000011D2B30, read_client_request_body, ngx_ret: -2 , client: 127.0.0.1, server: localhost, request: "PUT /1/file_2G.txt HTTP/1.1", host: "127.0.0.1:8081" 2018/11/22 14:01:56 [debug] 16673#0: *8 http finalize request: -2, "/1/file_2G.txt?" a:1, c:1 2018/11/22 14:01:56 [debug] 16673#0: *8 event timer del: 3: 56131622 2018/11/22 14:01:56 [debug] 16673#0: *8 set http keepalive handler 2018/11/22 14:01:56 [debug] 16673#0: *8 http close request 2018/11/22 14:01:56 [debug] 16673#0: *8 http log handler 2018/11/22 14:01:56 [debug] 16673#0: *8 free: 000000000121C990 2018/11/22 14:01:56 [debug] 16673#0: *8 free: 00000000011D2AE0, unused: 155 2018/11/22 14:01:56 [debug] 16673#0: *8 free: 00000000011AF4A0, unused: 3104 2018/11/22 14:01:56 [debug] 16673#0: *8 free: 00000000011C9F10 2018/11/22 14:01:56 [debug] 16673#0: *8 hc free: 0000000000000000 2018/11/22 14:01:56 [debug] 16673#0: *8 hc busy: 0000000000000000 0 2018/11/22 14:01:56 [debug] 16673#0: *8 tcp_nodelay 2018/11/22 14:01:56 [debug] 16673#0: *8 reusable connection: 1 2018/11/22 14:01:56 [debug] 16673#0: *8 event timer add: 3: 65000:56136622 2018/11/22 14:01:56 [debug] 16673#0: timer delta: 0 2018/11/22 14:01:56 [debug] 16673#0: worker cycle 2018/11/22 14:01:56 [debug] 16673#0: epoll timer: 65000 2018/11/22 14:01:56 [debug] 16673#0: epoll: fd:3 ev:0001 d:00007F4A7DB211F0 2018/11/22 14:01:56 [debug] 
16673#0: *8 http keepalive handler 2018/11/22 14:01:56 [debug] 16673#0: *8 malloc: 00000000011C9F10:1024 2018/11/22 14:01:56 [debug] 16673#0: *8 recv: eof:0, avail:1 2018/11/22 14:01:56 [debug] 16673#0: *8 recv: fd:3 1024 of 1024 2018/11/22 14:01:56 [debug] 16673#0: *8 reusable connection: 0 2018/11/22 14:01:56 [debug] 16673#0: *8 posix_memalign: 00000000011D2AE0:4096 @16 2018/11/22 14:01:56 [debug] 16673#0: *8 event timer del: 3: 56136622 2018/11/22 14:01:56 [debug] 16673#0: *8 http process request line 2018/11/22 14:01:56 [info] 16673#0: *8 client sent invalid method while reading client request line, client: 127.0.0.1, server: localhost, request: "--------------------------2034d95ad2250138" 2018/11/22 14:01:56 [debug] 16673#0: *8 http finalize request: 400, "?" a:1, c:1 2018/11/22 14:01:56 [debug] 16673#0: *8 http special response: 400, "?" 2018/11/22 14:01:56 [debug] 16673#0: *8 http set discard body 2018/11/22 14:01:56 [debug] 16673#0: *8 HTTP/1.1 400 Bad Request Server: nginx/1.15.6 Date: Thu, 22 Nov 2018 12:01:56 GMT Content-Type: text/html Content-Length: 157 Connection: close 2018/11/22 14:01:56 [debug] 16673#0: *8 write new buf t:1 f:0 00000000011D3810, pos 00000000011D3810, size: 152 file: 0, size: 0 2018/11/22 14:01:56 [debug] 16673#0: *8 http write filter: l:0 f:0 s:152 2018/11/22 14:01:56 [debug] 16673#0: *8 http output filter "?" 2018/11/22 14:01:56 [debug] 16673#0: *8 http copy filter: "?" 2018/11/22 14:01:56 [debug] 16673#0: *8 http postpone filter "?" 
00000000011D39E0 2018/11/22 14:01:56 [debug] 16673#0: *8 write old buf t:1 f:0 00000000011D3810, pos 00000000011D3810, size: 152 file: 0, size: 0 2018/11/22 14:01:56 [debug] 16673#0: *8 write new buf t:0 f:0 0000000000000000, pos 000000000072DBC0, size: 104 file: 0, size: 0 2018/11/22 14:01:56 [debug] 16673#0: *8 write new buf t:0 f:0 0000000000000000, pos 000000000072D6A0, size: 53 file: 0, size: 0 2018/11/22 14:01:56 [debug] 16673#0: *8 http write filter: l:1 f:0 s:309 2018/11/22 14:01:56 [debug] 16673#0: *8 http write filter limit 0 2018/11/22 14:01:56 [debug] 16673#0: *8 writev: 309 of 309 2018/11/22 14:01:56 [debug] 16673#0: *8 http write filter 0000000000000000 2018/11/22 14:01:56 [debug] 16673#0: *8 http copy filter: 0 "?" 2018/11/22 14:01:56 [debug] 16673#0: *8 http finalize request: 0, "?" a:1, c:1 2018/11/22 14:01:56 [debug] 16673#0: *8 event timer add: 3: 600000:56671622 2018/11/22 14:01:56 [debug] 16673#0: *8 http lingering close handler 2018/11/22 14:01:56 [debug] 16673#0: *8 recv: eof:0, avail:1 2018/11/22 14:01:56 [debug] 16673#0: *8 recv: fd:3 4096 of 4096 ======= Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282068,282075#msg-282075 From mdounin at mdounin.ru Thu Nov 22 13:39:40 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 22 Nov 2018 16:39:40 +0300 Subject: upgrade to 1-9.15 caused block requests In-Reply-To: <3701f2b9aadfef36c6971e7a06ca622a.NginxMailingListEnglish@forum.nginx.org> References: <20181121192706.GY99070@mdounin.ru> <3701f2b9aadfef36c6971e7a06ca622a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181122133940.GA99070@mdounin.ru> Hello! 
On Thu, Nov 22, 2018 at 07:15:41AM -0500, Ortal wrote: > This is a small example of my code, I changed the code and in the current > version on post/put request trying to read the buffer and print it, on any > other requests just send response (I tried to minimal it as much as I > could) > On the current flow when running a post request with 1MB data the > ngx_http_read_client_request_body return NGX_AGAIN and not call the > post_handler > I am not getting the ngx_http_block_reading for the request but I still do > not get the request buffer, and the post_handler is not called

In no particular order:

1. In ngx_v3io_test_req_handler() you call ngx_v3io_test_send_response() directly if the method is not PUT or POST, and return NGX_OK. And in ngx_v3io_test_send_response() you call ngx_http_finalize_request(req, NGX_DONE). This will result in a wrong request reference count, as the request will be finalized twice - first with ngx_http_finalize_request(req, NGX_DONE), and then with NGX_OK returned from the handler. This will basically preclude your code from working in the non-POST/PUT codepath.

2. You ignore the return code from ngx_http_send_header(). While unlikely, this can cause problems if anything different from NGX_OK is returned.

3. In ngx_v3io_http_test_handle_post_req(), you call ngx_handle_read_event(). This is not something you should do unless you understand why you are doing it - usually, this is something you should call only in low-level event handling functions, when you are reading from the socket yourself. Doing this in your code is certainly not needed and can cause problems.

4. In ngx_v3io_http_test_read_client_request_body(), you decrement r->main->count manually. This is not something you are expected to do, and it will result in non-working code.

5. The return code of ngx_http_read_client_request_body() is not handled correctly.
You are expected to do something like this in the request content handler:

    rc = ngx_http_read_client_request_body(r, ngx_http_upstream_init);

    if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
        return rc;
    }

    return NGX_DONE;

See, for example, PUT handling in the DAV module code, or the development guide here:

http://nginx.org/en/docs/dev/development_guide.html#http_request_body

Your code seems to return NGX_AGAIN in the most common cases of NGX_OK and NGX_AGAIN instead, and this will lead to undefined results. And this is what causes your immediate problems as seen in the debug logs.

6. After reading the request body, you finalize the request in ngx_v3io_test_send_response() with NGX_DONE. This is incorrect: due to NGX_DONE, nginx will assume that something else is going to happen on the request. Instead, you should use the code returned by ngx_http_send_header().

Overall, it looks like there are too many problems in the module for it to work. It may be a good idea to re-write it from scratch, using an existing nginx module as an example (and looking into the development guide).

-- Maxim Dounin http://mdounin.ru/

From vbart at nginx.com Thu Nov 22 20:11:43 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 22 Nov 2018 23:11:43 +0300 Subject: njs and subrequests In-Reply-To: References: <2055906.nZS9uF6hhB@vbart-laptop> Message-ID: <2644020.kLo5FGGDHW@vbart-laptop> On Sunday, 18 November 2018 18:06:27 MSK Antoine Bonavita wrote: > Hello Valentin, > > And thank you for your prompt answer. Writing such a http client and making > it available is probably a pretty big work. But if I were to write a > "limited" one that would fit my simple needs (GET only, only HTTP/1.1, > response in memory, etc.) trying to re-use what already exists in nginx > code base (asynchronous resolver comes to mind first), how should I proceed > ? Which parts of the code should I start looking at/take as sample ? > > Basically, I think I am asking what is the extension model of njs...
> I'm afraid that even a limited one can be quite a difficult task, since it requires good knowledge not only njs internals but nginx too, and involves async interaction between njs and nginx. Anyway, if you're brave enough, then take a look at the existing njs modules for nginx: http://hg.nginx.org/njs/file/tip/nginx wbr, Valentin V. Bartenev From jackdev at mailbox.org Thu Nov 22 20:11:59 2018 From: jackdev at mailbox.org (Jack Henschel) Date: Thu, 22 Nov 2018 21:11:59 +0100 Subject: Intended behavior for Host header in Proxy scenario Message-ID: Hello everyone, during my last debugging session with Nginx I was wondering how and when exactly Nginx passes upstream's hostname when proxying a request. In particular, I have the following example: > upstream backend { > server a.example.com:443; > server b.example.com:443; > } > server { > proxy_pass https://backend/path; > proxy_set_header Host $proxy_host; # default according to docs > } I observed that Nginx does not always pass the appropriate Host header to the upstream server (i.e. "a.example.com" for "server a.example.com:443" and "b.example.com" for "server b.example.com:443"). Is this observation correct or am I missing something? Regards Jack From mdounin at mdounin.ru Thu Nov 22 21:13:38 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Nov 2018 00:13:38 +0300 Subject: Intended behavior for Host header in Proxy scenario In-Reply-To: References: Message-ID: <20181122211338.GD99070@mdounin.ru> Hello! On Thu, Nov 22, 2018 at 09:11:59PM +0100, Jack Henschel wrote: > Hello everyone, > > during my last debugging session with Nginx I was wondering how and when > exactly Nginx passes upstream's hostname when proxying a request. 
> > In particular, I have the following example: > > upstream backend { > > server a.example.com:443; > > server b.example.com:443; > > } > > > server { > > proxy_pass https://backend/path; > > proxy_set_header Host $proxy_host; # default according to docs > > } > > I observed that Nginx does not always pass the appropriate Host header > to the upstream server (i.e. "a.example.com" for "server > a.example.com:443" and "b.example.com" for "server b.example.com:443"). > > Is this observation correct or am I missing something? The Host header is set to what you wrote in the "proxy_pass" by default. That is, it will be "backend" with the above configuration. -- Maxim Dounin http://mdounin.ru/ From jackdev at mailbox.org Fri Nov 23 08:23:01 2018 From: jackdev at mailbox.org (Jack Henschel) Date: Fri, 23 Nov 2018 09:23:01 +0100 Subject: Intended behavior for Host header in Proxy scenario In-Reply-To: <20181122211338.GD99070@mdounin.ru> References: <20181122211338.GD99070@mdounin.ru> Message-ID: Hi Maxim, thanks for the quick confirmation! > The Host header is set to what you wrote in the "proxy_pass" by default. That is, it will be "backend" with the above configuration. Wouldn't it make more sense to use the hostname from the particular upstream server? I see two scenarios where this is required: 1. TLS secured upstream servers. TLS verification requires the correct Host header to be set (i.e. "a.example.com" instead of "backend"). Though I know there is the possibility of doing this (additionally) with TLS client certificates. 2. Upstream vhosts. Consider the scenario where multiple domains point to the same IP address, where the requests are split apart based on the Host header (I.e. virtual hosts) What do you think? Regards Jack On 22 November 2018 22:13:38 CET, Maxim Dounin wrote: >Hello! 
> >On Thu, Nov 22, 2018 at 09:11:59PM +0100, Jack Henschel wrote: > >> Hello everyone, >> >> during my last debugging session with Nginx I was wondering how and >when >> exactly Nginx passes upstream's hostname when proxying a request. >> >> In particular, I have the following example: >> > upstream backend { >> > server a.example.com:443; >> > server b.example.com:443; >> > } >> >> > server { >> > proxy_pass https://backend/path; >> > proxy_set_header Host $proxy_host; # default according to docs >> > } >> >> I observed that Nginx does not always pass the appropriate Host >header >> to the upstream server (i.e. "a.example.com" for "server >> a.example.com:443" and "b.example.com" for "server >b.example.com:443"). >> >> Is this observation correct or am I missing something? > >The Host header is set to what you wrote in the "proxy_pass" by >default. That is, it will be "backend" with the above >configuration. > >-- >Maxim Dounin >http://mdounin.ru/ >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Nov 23 08:49:13 2018 From: nginx-forum at forum.nginx.org (filex) Date: Fri, 23 Nov 2018 03:49:13 -0500 Subject: How to close client HTTP/2 connections? Message-ID: Hello, we use NGINX as http/2 and TLS offloader. Therefore it is responsible for connection handling. (Most of the requests are proxy_pass'ed to upstream servers. However, some few requests are served from local files.) Now, I would like to close the client connection under certain circumstances. That could be the presence of a certain upstream response header or status code. I have tried more_set_headers -s '502 503' 'Connection: close'; This works for HTTP/1 connections. However, this Connection header seems to be forbidden in h2. 
Triggering the header with curl (-v --http2) yields an error:

    http2 error: Invalid HTTP header field was received: frame type: 1, stream: 1, name: [connection], value: [close]
    curl: (92) HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)

Ok, the connection is closed :) But not quite elegantly. And I forgot to mention that I would like to serve an error message, like a "sorry?" HTML. (Therefore I couldn't use the 444 status.) How could I do this for h2: serve a last page and then say GOAWAY?

Best regards, Felix

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282096,282096#msg-282096 From nginx-forum at forum.nginx.org Fri Nov 23 13:43:03 2018 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Fri, 23 Nov 2018 08:43:03 -0500 Subject: TLSv1.3 by default? Message-ID: <57e3e6506fb15266efa1b1343e377165.NginxMailingListEnglish@forum.nginx.org> Hi,

Why isn't 1.3 enabled by default (when available)?

    Syntax:  ssl_protocols [SSLv2] [SSLv3] [TLSv1] [TLSv1.1] [TLSv1.2] [TLSv1.3];
    Default: ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282098,282098#msg-282098 From nginx-forum at forum.nginx.org Fri Nov 23 14:11:15 2018 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Fri, 23 Nov 2018 09:11:15 -0500 Subject: How to close client HTTP/2 connections? In-Reply-To: References: Message-ID: Why do you want to do this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282096,282099#msg-282099 From mdounin at mdounin.ru Fri Nov 23 14:11:29 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Nov 2018 17:11:29 +0300 Subject: Intended behavior for Host header in Proxy scenario In-Reply-To: References: <20181122211338.GD99070@mdounin.ru> Message-ID: <20181123141128.GE99070@mdounin.ru> Hello! On Fri, Nov 23, 2018 at 09:23:01AM +0100, Jack Henschel wrote: > Hi Maxim, > > thanks for the quick confirmation!
> > The Host header is set to what you wrote in the "proxy_pass" > > by default. That is, it will be "backend" with the above > > configuration. > > Wouldn't it make more sense to use the hostname from the > particular upstream server? > I see two scenarios where this is required: > > 1. TLS secured upstream servers. TLS verification requires the > correct Host header to be set (i.e. "a.example.com" instead of > "backend"). Though I know there is the possibility of doing this > (additionally) with TLS client certificates. > > 2. Upstream vhosts. Consider the scenario where multiple domains > point to the same IP address, where the requests are split apart > based on the Host header (I.e. virtual hosts) > > What do you think?

All servers listed in an upstream block are expected to be equal, and expected to be able to process identical requests. You can think of it as multiple A records in DNS, with slightly more control on the nginx side.

Moreover, nginx doesn't even know which particular server it will use when it creates a request. And the same request can be sent to multiple servers, as per proxy_next_upstream.

This does not preclude you from using either TLS or vhosts on upstream servers. But you shouldn't expect that names as written within server directives in upstream blocks mean anything or will be used for anything but resolving these names to IP addresses.

-- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Fri Nov 23 14:32:18 2018 From: nginx-forum at forum.nginx.org (filex) Date: Fri, 23 Nov 2018 09:32:18 -0500 Subject: How to close client HTTP/2 connections? In-Reply-To: References: Message-ID: <6278ddca256e00ef907eafe752b772dc.NginxMailingListEnglish@forum.nginx.org> > Why do you want to do this? In a cluster of many nginx servers we had one faulty node that was delivering only errors. In that special case a default vhost replied with a "domain not configured" error, because the underlying configuration was inaccessible.
The health check was not firing, because such errors are normal (bad bots try to access removed domains or simply make up host headers). A client that was round-robin balanced to that faulty nginx instance was delivered the error page, but the connection was still active. Every subsequent request to our domain hit the same bad instance. The Google bot uses each connection for 100 requests. They all ran into the error. Same for browsers: when you hit reload, you will run into the same problem until the connection times out. If the connection was closed immediately, there would be a good chance of being load-balanced to another instance. In http/1, "Connection: close" does the job, but most of our traffic is h2. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282096,282101#msg-282101 From nginx-forum at forum.nginx.org Fri Nov 23 14:49:07 2018 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Fri, 23 Nov 2018 09:49:07 -0500 Subject: How to close client HTTP/2 connections? In-Reply-To: <6278ddca256e00ef907eafe752b772dc.NginxMailingListEnglish@forum.nginx.org> References: <6278ddca256e00ef907eafe752b772dc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <15968a2873c99face7c2e9a03a9eb336.NginxMailingListEnglish@forum.nginx.org> Closing the connection wouldn't really solve the issue, would it? There has to be a better way to solve this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282096,282104#msg-282104 From jackdev at mailbox.org Fri Nov 23 15:33:33 2018 From: jackdev at mailbox.org (Jack Henschel) Date: Fri, 23 Nov 2018 16:33:33 +0100 Subject: Intended behavior for Host header in Proxy scenario In-Reply-To: <20181123141128.GE99070@mdounin.ru> References: <20181122211338.GD99070@mdounin.ru> <20181123141128.GE99070@mdounin.ru> Message-ID: <5b6cfff3-9120-f62c-66f0-1cf7375edbf4@mailbox.org> On 11/23/18 3:11 PM, Maxim Dounin wrote: > Hello!
> > On Fri, Nov 23, 2018 at 09:23:01AM +0100, Jack Henschel wrote: > >> Hi Maxim, >> >> thanks for the quick confirmation! >> >>> The Host header is set to what you wrote in the "proxy_pass" >>> by default. That is, it will be "backend" with the above >>> configuration. >> >> Wouldn't it make more sense to use the hostname from the >> particular upstream server? >> I see two scenarios where this is required: >> >> 1. TLS secured upstream servers. TLS verification requires the >> correct Host header to be set (i.e. "a.example.com" instead of >> "backend"). Though I know there is the possibility of doing this >> (additionally) with TLS client certificates. >> >> 2. Upstream vhosts. Consider the scenario where multiple domains >> point to the same IP address, where the requests are split apart >> based on the Host header (I.e. virtual hosts) >> >> What do you think? > > All servers listed in an upstream block are expected to be equal, > and expected to be able to process identical requests. You can > think of it as multiple A records in DNS, with slightly more > control on nginx side. > Alright, makes sense. > Moreover, nginx doesn't even know which particular server it will > use when it creates a request. And the same request can be sent > to multiple servers, as per proxy_next_upstream. > > This does not preclude you from neither using TLS, nor vhosts on > upstream servers. But you shouldn't expect that names as written > within server directives in upstream blocks means anything and > will be used for anything but resolving these names to IP addresses. Thanks for the clarification! Would you mind adding this implicit (reasonable) behavior of Nginx to the documentation? In particular clarify that when using an upstream block for the proxy_pass argument, the $proxy_host variable will contain the name of the host specified on the proxy_pass line and NOT the hostnames of the servers specified in the upstream block. 
The behavior may be totally obvious to you, but it surely wasn't for me. :-) BTW: Is there a "public" method for contributing to the docs? (Git, etc.) Regards Jack From mdounin at mdounin.ru Fri Nov 23 16:51:00 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Nov 2018 19:51:00 +0300 Subject: TLSv1.3 by default? In-Reply-To: <57e3e6506fb15266efa1b1343e377165.NginxMailingListEnglish@forum.nginx.org> References: <57e3e6506fb15266efa1b1343e377165.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181123165100.GF99070@mdounin.ru> Hello! On Fri, Nov 23, 2018 at 08:43:03AM -0500, Olaf van der Spek wrote: > Hi, > > Why isn't 1.3 enabled by default (when available)? > > Syntax: ssl_protocols [SSLv2] [SSLv3] [TLSv1] [TLSv1.1] [TLSv1.2] > [TLSv1.3]; > Default: > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols The main reason is that when it was implemented, TLSv1.3 RFC wasn't yet finalized, and TLSv1.3 was only available via various drafts, and only with pre-release versions of OpenSSL. Now with RFC 8446 published and OpenSSL 1.1.1 with TLSv1.3 released this probably can be reconsidered. On the other hand, enabling TLSv1.3 is known to break at least some configurations, see here for an example: https://serverfault.com/questions/932102/nginx-ssl-handshake-error-no-suitable-key-share Also, due to different approach to configure ciphers, "ssl_ciphers aNULL;" will no longer work as a way to indicate no SSL support with TLSv1.3 enabled (https://trac.nginx.org/nginx/ticket/195). 
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Fri Nov 23 17:18:02 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Nov 2018 20:18:02 +0300 Subject: Intended behavior for Host header in Proxy scenario In-Reply-To: <5b6cfff3-9120-f62c-66f0-1cf7375edbf4@mailbox.org> References: <20181122211338.GD99070@mdounin.ru> <20181123141128.GE99070@mdounin.ru> <5b6cfff3-9120-f62c-66f0-1cf7375edbf4@mailbox.org> Message-ID: <20181123171802.GG99070@mdounin.ru> Hello! On Fri, Nov 23, 2018 at 04:33:33PM +0100, Jack Henschel wrote: > On 11/23/18 3:11 PM, Maxim Dounin wrote: > > Hello! > > > > On Fri, Nov 23, 2018 at 09:23:01AM +0100, Jack Henschel wrote: > > > >> Hi Maxim, > >> > >> thanks for the quick confirmation! > >> > >>> The Host header is set to what you wrote in the "proxy_pass" > >>> by default. That is, it will be "backend" with the above > >>> configuration. > >> > >> Wouldn't it make more sense to use the hostname from the > >> particular upstream server? > >> I see two scenarios where this is required: > >> > >> 1. TLS secured upstream servers. TLS verification requires the > >> correct Host header to be set (i.e. "a.example.com" instead of > >> "backend"). Though I know there is the possibility of doing this > >> (additionally) with TLS client certificates. > >> > >> 2. Upstream vhosts. Consider the scenario where multiple domains > >> point to the same IP address, where the requests are split apart > >> based on the Host header (I.e. virtual hosts) > >> > >> What do you think? > > > > All servers listed in an upstream block are expected to be equal, > > and expected to be able to process identical requests. You can > > think of it as multiple A records in DNS, with slightly more > > control on nginx side. > > > Alright, makes sense. > > > Moreover, nginx doesn't even know which particular server it will > > use when it creates a request. And the same request can be sent > > to multiple servers, as per proxy_next_upstream. 
> >
> > This does not preclude you from neither using TLS, nor vhosts on
> > upstream servers. But you shouldn't expect that names as written
> > within server directives in upstream blocks means anything and
> > will be used for anything but resolving these names to IP addresses.
>
> Thanks for the clarification!
> Would you mind adding this implicit (reasonable) behavior of Nginx to
> the documentation?
> In particular clarify that when using an upstream block for the
> proxy_pass argument, the $proxy_host variable will contain the name of
> the host specified on the proxy_pass line and NOT the hostnames of the
> servers specified in the upstream block.
>
> The behavior may be totally obvious to you, but it surely wasn't for me. :-)

I don't think I've seen anyone else who assumed that $proxy_host should contain anything not written in the "proxy_pass" directive. I have, however, seen people who tried to implement, or asked for, something working on a per-peer basis, such as sending a request with different Host headers to different servers in a single upstream block. While it may be worth explaining that this is not possible, I don't think I know a good place in the documentation to do this. Maybe adding the DNS analogy to the upstream directive documentation would help, not sure.

> BTW: Is there a "public" method for contributing to the docs? (Git, etc.)

Much like with nginx itself, sending patches to the nginx-devel@ mailing list is the best method, see here:
http://nginx.org/en/docs/contributing_changes.html
Repository with docs is here:
http://hg.nginx.org/nginx.org/

-- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Fri Nov 23 18:05:55 2018 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Fri, 23 Nov 2018 13:05:55 -0500 Subject: TLSv1.3 by default? In-Reply-To: <20181123165100.GF99070@mdounin.ru> References: <20181123165100.GF99070@mdounin.ru> Message-ID:

What's the recommendation for distros?
Should they explicitly enable TLSv1.3? Ideally they'd just stick to upstream defaults, hence my question about the default.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282098,282108#msg-282108

From mdounin at mdounin.ru Fri Nov 23 18:58:33 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Nov 2018 21:58:33 +0300 Subject: TLSv1.3 by default? In-Reply-To: References: <20181123165100.GF99070@mdounin.ru> Message-ID: <20181123185833.GI99070@mdounin.ru>

Hello!

On Fri, Nov 23, 2018 at 01:05:55PM -0500, Olaf van der Spek wrote:

> What's the recommendation for distros? Should they explicitly enable
> TLSv1.3?
> Ideally they'd just stick to upstream defaults, hence my question about the
> default.

The recommendation for distros is not to mess with the defaults.

-- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Fri Nov 23 20:39:45 2018 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Fri, 23 Nov 2018 15:39:45 -0500 Subject: TLSv1.3 by default? In-Reply-To: <20181123185833.GI99070@mdounin.ru> References: <20181123185833.GI99070@mdounin.ru> Message-ID:

Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
>
> On Fri, Nov 23, 2018 at 01:05:55PM -0500, Olaf van der Spek wrote:
>
> > What's the recommendation for distros? Should they explicitly enable
> > TLSv1.3?
> > Ideally they'd just stick to upstream defaults, hence my question
> > about the
> > default.
>
> The recommendation for distros is not to mess with the defaults.

Should they use the 'defaults' from the stock nginx.conf or the defaults from the binary / docs?
;)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282098,282110#msg-282110

From roger at netskrt.io Fri Nov 23 23:38:27 2018 From: roger at netskrt.io (Roger Fischer) Date: Fri, 23 Nov 2018 15:38:27 -0800 Subject: logical or in location directive with regex (multiple location using same block) Message-ID:

Hello, how do I best handle multiple locations that use the same code block? I do not want to repeat the { ... } part. Is there a way to do this with a logical or between regular expressions?

The first variation is that sometimes there are query parameters, and sometimes there are not. The second variation is that there are multiple suffixes that should match the same block. As an example, the following locations all should have the same behaviour:

* /variable-part.gif
* /variable-part.gif?variable-query-parameters
* /variable-part.png
* /variable-part.png?variable-query-parameters

The equivalent regular expressions I am using are:

* ~* \.gif$
* ~* \.gif\?
* ~* \.png$
* ~* \.png\?

I know, I can combine the different types:

* ~* \.(gif|png)$
* ~* \.(gif|png)\?

But I don't know how to combine the end-of-uri with the followed-by-query-parameters into a single regex.

Lastly, I also have locations with a quite different pattern that has the same code block. So, what I would like is something like this:

location ( ~* \.(gif|png)$ | ~* \.(gif|png)\? | = /xyz ) { ... }

'|' represents a logical or. Is there a way to do this?

BTW, should the regular expression be in quotes (single or double)?

Thanks,
Roger

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From francis at daoine.org Sat Nov 24 09:21:07 2018 From: francis at daoine.org (Francis Daly) Date: Sat, 24 Nov 2018 09:21:07 +0000 Subject: logical or in location directive with regex (multiple location using same block) In-Reply-To: References: Message-ID: <20181124092107.gony3bonfh55diqu@daoine.org>

On Fri, Nov 23, 2018 at 03:38:27PM -0800, Roger Fischer wrote:

Hi there,

> how do I best handle multiple locations that use the same code block?

Repeat the {} part with all of the config. Nginx cares what is in the config file; it does not care how the config file is created. Use your favourite pre-processor/macro language to create the config, if you don't want to type the same thing twice.

> I do not want to repeat the { ... } part.

Put all of the repeat-config in a separate file, and repeat the {} part that just contains "include separate-file;".

> Is there a way to do this with logical or between regular expressions?

Probably; but you will find it to be much more readable if you don't.

> The first variation is that sometimes there are query parameters, and sometimes there are not. The second variation is that there are multiple suffixes that should match the same block.

"location" matching does not consider query parameters. It stops before the first ? or # in the uri.

> As an example, the following locations all should have the same behaviour:
> * /variable-part.gif
> * /variable-part.gif?variable-query-parameters
> * /variable-part.png
> * /variable-part.png?variable-query-parameters
>
> The equivalent regular expressions I am using are:
> * ~* \.gif$
> * ~* \.gif\?
> * ~* \.png$
> * ~* \.png\?
>
> I know, I can combine the different types:
> * ~* \.(gif|png)$
> * ~* \.(gif|png)\?

The ones with "\?" will not match the requests above.

> But I don't know how to combine the end-of-uri with the followed-by-query-parameters into a single regex.

You don't need to. (You *can* match end-of-string or following-character by using regex alternation with |. You just do not need to in this particular case.)

> Lastly, I also have locations with a quite different pattern that has the same code block.
>
> So, what I would like is something like this:
>
> location ( ~* \.(gif|png)$ | ~* \.(gif|png)\? | = /xyz ) { ... }
>
> '|' represents a logical or.
>
> Is there a way to do this?

No. You can probably build one probably-very-complex regex to include everything you want and to exclude everything you don't want; but you'll be much happier in 6 months when you need to change something, if your config is human-readable. (The full efficiency effects of the nginx "=" match can't be achieved with a regex match.)

> BTW, should the regular expression be in quotes (single or double)?

Only if it needs to be. (Which probably means "if it includes {".) "nginx -t" will probably tell you if you got it wrong. And it should be generally straightforward to test. Something like

location / { return 200 "no match\n"; }
location ~ x { return 200 "match no-quotes\n"; }
location ~ "y" { return 200 "match double-quotes\n"; }
location ~ 'z' { return 200 "match single-quotes\n"; }

and then make requests that include x, y, and z, and see if they are each processed in the location that you expect they should be.

Good luck with it,
f

-- Francis Daly francis at daoine.org

From nginx-forum at forum.nginx.org Sat Nov 24 18:35:20 2018 From: nginx-forum at forum.nginx.org (amos123) Date: Sat, 24 Nov 2018 13:35:20 -0500 Subject: compilation under raspbian ? Message-ID: <74de4b0dd00fcd688014c0ca2a38e384.NginxMailingListEnglish@forum.nginx.org>

I am trying to install nginx and php under raspbian from deb packages. Whatever I do, I cannot run php. Do I have to compile nginx with support for php?
many thanks
Pavel

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282116,282116#msg-282116

From nginx-forum at forum.nginx.org Sun Nov 25 21:52:47 2018 From: nginx-forum at forum.nginx.org (hmac) Date: Sun, 25 Nov 2018 16:52:47 -0500 Subject: Reverse proxy with dynamic destination Message-ID: <2af58e548bba66e7a30746dab6fba4ae.NginxMailingListEnglish@forum.nginx.org>

Hi, I have an issue I could use some help with and would appreciate any advice. I find that reverse proxying using a variable works for 9 out of 10 devices on my network, except for one device. For this one device, if I statically set the destination, everything works fine! I have tried almost every config option out there but can't get it working. Running on Ubuntu 16.04, Nginx 1.15.6. Strangely, after updating from 1.15.5 to 1.15.6 it worked once (until a reboot, I think) after enabling proxy_ssl_session_reuse, which is a new feature in 1.15.6, so it may be a bug relating to this. Disabling SSL on the device also works.

example config snippet:

WORKS
proxy_pass https://192.168.10.106;

FAILS
set $myupstream https://192.168.10.106;
proxy_pass $myupstream;

debug logs for working and not working found below:
WORKS https://1drv.ms/u/s!Ah8efbqjeU_lhKsoWzh19b5N7pOaHw
FAILS https://1drv.ms/u/s!Ah8efbqjeU_lhKsn8f16blsBh6XdhA
nginx.conf: https://1drv.ms/u/s!Ah8efbqjeU_lhKspeCWnKqbB0phIHw

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282118,282118#msg-282118

From nginx-forum at forum.nginx.org Sun Nov 25 21:55:45 2018 From: nginx-forum at forum.nginx.org (bennyk) Date: Sun, 25 Nov 2018 16:55:45 -0500 Subject: Adding modules problem Message-ID: <505405baa54a62b573f64673c0a9fcee.NginxMailingListEnglish@forum.nginx.org>

I'm trying to add modules to Nginx, therefore I'm compiling it.
I'm adding the realip and the brotli modules:

sudo ./configure --with-http_realip_module --with-http_ssl_module \
  --add-module=/usr/local/src/ngx_brotli --sbin-path=/usr/sbin/nginx
sudo make
sudo make install

After running these commands only the realip module works, although when I compile only with brotli, the brotli module works. When I compile with realip, only the realip module works as well. Any suggestions?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282119,282119#msg-282119

From nginx-forum at forum.nginx.org Sun Nov 25 22:20:47 2018 From: nginx-forum at forum.nginx.org (hmac) Date: Sun, 25 Nov 2018 17:20:47 -0500 Subject: Reverse proxy with dynamic destination In-Reply-To: <2af58e548bba66e7a30746dab6fba4ae.NginxMailingListEnglish@forum.nginx.org> References: <2af58e548bba66e7a30746dab6fba4ae.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Oh, forgot to add: it does actually load, it just takes around 60 secs for what normally takes 4 seconds.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282118,282120#msg-282120

From nginx-forum at forum.nginx.org Sun Nov 25 22:24:44 2018 From: nginx-forum at forum.nginx.org (hmac) Date: Sun, 25 Nov 2018 17:24:44 -0500 Subject: Reverse proxy with dynamic destination In-Reply-To: <2af58e548bba66e7a30746dab6fba4ae.NginxMailingListEnglish@forum.nginx.org> References: <2af58e548bba66e7a30746dab6fba4ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <81ca662e2cf83410d75f9c263478623d.NginxMailingListEnglish@forum.nginx.org>

and the new feature was proxy_socket_keepalive, according to https://nginx.org/en/CHANGES (Changes with nginx 1.15.6, 06 Nov 2018)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282118,282121#msg-282121

From antoine.bonavita at gmail.com Mon Nov 26 09:30:27 2018 From: antoine.bonavita at gmail.com (Antoine Bonavita) Date: Mon, 26 Nov 2018 10:30:27 +0100 Subject: njs and subrequests In-Reply-To: <2644020.kLo5FGGDHW@vbart-laptop> References:
<2055906.nZS9uF6hhB@vbart-laptop> <2644020.kLo5FGGDHW@vbart-laptop> Message-ID: Hello Valentin, On Thu, Nov 22, 2018 at 9:11 PM Valentin V. Bartenev wrote: > On Sunday, 18 November 2018 18:06:27 MSK Antoine Bonavita wrote: > > Hello Valentin, > > > > And thank you for your prompt answer. Writing such a http client and > making > > it available is probably a pretty big work. But if I were to write a > > "limited" one that would fit my simple needs (GET only, only HTTP/1.1, > > response in memory, etc.) trying to re-use what already exists in nginx > > code base (asynchronous resolver comes to mind first), how should I > proceed > > ? Which parts of the code should I start looking at/take as sample ? > > > > Basically, I think I am asking what is the extension model of njs... > > > > I'm afraid that even a limited one can be quite a difficult task, > since it requires good knowledge not only njs internals but nginx too, > and involves async interaction between njs and nginx. > > Anyway, if you're brave enough, then take a look at the existing njs > modules for nginx: http://hg.nginx.org/njs/file/tip/nginx Not sure if this is bravery or just craziness, but I'll have a look. Thanks for the pointer, your help and nginx itself, of course. A. > > wbr, Valentin V. Bartenev > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Nov 26 10:58:26 2018 From: nginx-forum at forum.nginx.org (dmitry.murashenkov) Date: Mon, 26 Nov 2018 05:58:26 -0500 Subject: Socket read errors/dropped connections during reload Message-ID: <9cb51eb0b0e8dc3567cac21a6f5d031a.NginxMailingListEnglish@forum.nginx.org> I've been writing a custom load test that performs nginx reload (with same actual config) and noticed that sometimes a single connection get's dropped during reload. 
The client was in Java, nginx on localhost in Docker under RHEL 7.5 and about 6k req/sec.

Can somebody comment - if this is expected behavior or possibly bug/configuration error?

I've managed to dump traffic and find that single request that failed (GET in packet 14968 fails): https://drive.google.com/file/d/1I-orMdoZ-zCTiCBFJszWyba2qsO-c1oz/view?usp=sharing

The connection has served several requests at this point already and it goes like this:

>GET
<200
>GET
<200
>GET

References: <2af58e548bba66e7a30746dab6fba4ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3C01D0A2-D658-45C8-8BE0-22C26E0DE8A3@nginx.com>

> On 26 Nov 2018, at 01:20, hmac wrote:
>
> Ohh forgot to add. It does actually load, it just takes around 60 secs what
> normally takes 4 seconds

And what if you try to re-run with "worker_processes 1;" ?

-- Sergey Kandaurov

From mdounin at mdounin.ru Mon Nov 26 12:47:19 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 26 Nov 2018 15:47:19 +0300 Subject: Socket read errors/dropped connections during reload In-Reply-To: <9cb51eb0b0e8dc3567cac21a6f5d031a.NginxMailingListEnglish@forum.nginx.org> References: <9cb51eb0b0e8dc3567cac21a6f5d031a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181126124718.GK99070@mdounin.ru>

Hello!

On Mon, Nov 26, 2018 at 05:58:26AM -0500, dmitry.murashenkov wrote:

> I've been writing a custom load test that performs nginx reload (with same
> actual config) and noticed that sometimes a single connection get's dropped
> during reload. The client was in Java, nginx on localhost in Docker under
> RHEL 7.5 and about 6k req/sec.
>
> Can somebody comment - if this is expected behavior or possibly
> bug/configuration error?
>
> I've managed to dump traffic and find that single request that failed (GET
> in packet 14968 fails):
> https://drive.google.com/file/d/1I-orMdoZ-zCTiCBFJszWyba2qsO-c1oz/view?usp=sharing
>
> The connection has served several requests at this point already and it goes
> like this:
> >GET
> <200
> >GET

Changes with nginx 1.15.7 27 Nov 2018

*) Feature: the "proxy_requests" directive in the stream module.

*) Feature: the "delay" parameter of the "limit_req" directive. Thanks to Vladislav Shabanov and Peter Shchuchkin.

*) Bugfix: memory leak on errors during reconfiguration.

*) Bugfix: in the $upstream_response_time, $upstream_connect_time, and $upstream_header_time variables.

*) Bugfix: a segmentation fault might occur in a worker process if the ngx_http_mp4_module was used on 32-bit platforms.

-- Maxim Dounin http://nginx.org/

From xeioex at nginx.com Tue Nov 27 15:28:56 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 27 Nov 2018 18:28:56 +0300 Subject: njs-0.2.6 Message-ID: <40efbb08-b832-8b73-c925-2e1ffd728d5a@nginx.com>

Hello,

I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript 5.1 specification.

- Added initial support for extending the existing prototypes. So, generic functions can be added to extend functionality of built-in types.

  : > String.prototype.myUpper = function() {return this.toUpperCase()}
  : [Function]
  : > 'abc'.myUpper()
  : 'ABC'

You can learn more about njs:

- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel

Changes with njs 0.2.6 27 Nov 2018

Core:

*) Feature: making built-in prototypes mutable.

*) Feature: making global object mutable.

*) Feature: console.time() and console.timeEnd() methods.
*) Feature: allowing variables and functions to be redeclared. *) Feature: extending Object.defineProperty() spec conformance. *) Feature: introduced quiet mode for CLI to handle simple expressions from stdin (echo "2**3" | njs -q -> 8). *) Feature: introduced compact form of backtraces to handle stack overflows. *) Improvement: improved wording for various exceptions. *) Bugfix: fixed closure values handling. *) Bugfix: fixed equality operator for various value types. *) Bugfix: fixed handling of "this" keyword in various scopes. *) Bugfix: fixed handling non-object values in Object.keys(). *) Bugfix: fixed parsing of throw statement inside if statement. *) Bugfix: fixed parsing of newline after throw statement. *) Bugfix: fixed parsing of statements in if statement without newline. *) Bugfix: fixed size uint32_t overflow in njs_array_expand(). *) Bugfix: fixed typeof operator for object_value type. *) Bugfix: miscellaneous additional bugs have been fixed. From kworthington at gmail.com Tue Nov 27 16:30:22 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 27 Nov 2018 11:30:22 -0500 Subject: [nginx-announce] nginx-1.15.7 In-Reply-To: <20181127150226.GS99070@mdounin.ru> References: <20181127150226.GS99070@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.15.7 for Windows https://kevinworthington.com/nginxwin1157 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Nov 27, 2018 at 10:03 AM Maxim Dounin wrote: > Changes with nginx 1.15.7 27 Nov > 2018 > > *) Feature: the "proxy_requests" directive in the stream module. > > *) Feature: the "delay" parameter of the "limit_req" directive. > Thanks to Vladislav Shabanov and Peter Shchuchkin. > > *) Bugfix: memory leak on errors during reconfiguration. > > *) Bugfix: in the $upstream_response_time, $upstream_connect_time, and > $upstream_header_time variables. > > *) Bugfix: a segmentation fault might occur in a worker process if the > ngx_http_mp4_module was used on 32-bit platforms. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Nov 28 00:21:35 2018 From: nginx-forum at forum.nginx.org (chadhasumit13) Date: Tue, 27 Nov 2018 19:21:35 -0500 Subject: Nginscript Message-ID: <925c657c57fb4209c6b6fb28d8b013a2.NginxMailingListEnglish@forum.nginx.org> Hi, I intend to generate a unique id (by making use of npm uuid ) and make an external call to an HTTP end-point, whenever a new call is received by NGINX. Is it possible to use nginscript for this purpose? If yes, could you please route me to a good example by means of which I can fulfil the above-mentioned requirement. 
Sumit Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282156,282156#msg-282156 From nginx-forum at forum.nginx.org Wed Nov 28 08:07:25 2018 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Wed, 28 Nov 2018 03:07:25 -0500 Subject: TLSv1.3 by default? In-Reply-To: References: <20181123185833.GI99070@mdounin.ru> Message-ID: <6b51e3a055bc52aac1f1b16cad6fa090.NginxMailingListEnglish@forum.nginx.org> Olaf van der Spek Wrote: ------------------------------------------------------- > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Fri, Nov 23, 2018 at 01:05:55PM -0500, Olaf van der Spek wrote: > > > > > What's the recommendation for distros? Should they explicitly > enable > > > TLSv1.3? > > > Ideally they'd just stick to upstream defaults, hence my question > > about the > > > default. > > > > The recommendation for distros is to don't mess with the defaults. > > Should they use the 'defaults' from the stock nginx.conf or the > defaults from the binary / docs? ;) Maxim? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282098,282157#msg-282157 From mdounin at mdounin.ru Wed Nov 28 14:19:29 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Nov 2018 17:19:29 +0300 Subject: TLSv1.3 by default? In-Reply-To: <6b51e3a055bc52aac1f1b16cad6fa090.NginxMailingListEnglish@forum.nginx.org> References: <20181123185833.GI99070@mdounin.ru> <6b51e3a055bc52aac1f1b16cad6fa090.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181128141928.GZ99070@mdounin.ru> Hello! On Wed, Nov 28, 2018 at 03:07:25AM -0500, Olaf van der Spek wrote: > Olaf van der Spek Wrote: > ------------------------------------------------------- > > Maxim Dounin Wrote: > > ------------------------------------------------------- > > > Hello! > > > > > > On Fri, Nov 23, 2018 at 01:05:55PM -0500, Olaf van der Spek wrote: > > > > > > > What's the recommendation for distros? Should they explicitly > > enable > > > > TLSv1.3? 
> > > > Ideally they'd just stick to upstream defaults, hence my question > > > about the > > > > default. > > > > > > The recommendation for distros is to don't mess with the defaults. > > > > Should they use the 'defaults' from the stock nginx.conf or the > > defaults from the binary / docs? ;) > > > Maxim? There is no such thing as "defaults from the stock nginx.conf". The nginx.conf file can be used to set various configuration parameters. Obviously enough, distributions may need to set something in nginx.conf they ship with nginx packages differently from what is configured in example configuration as available in nginx sources, conf/nginx.conf. Though my recommendation would be to keep configuration shipped as close to conf/nginx.conf as possible, and don't diverge from it unless there are good reasons to. As for TLSv1.3, the TLSv1.3 protocol is currently disabled by default in nginx. Distributions shouldn't try to enable it (either way) unless there are very good reasons to do so. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Wed Nov 28 14:31:20 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 28 Nov 2018 17:31:20 +0300 Subject: Nginscript In-Reply-To: <925c657c57fb4209c6b6fb28d8b013a2.NginxMailingListEnglish@forum.nginx.org> References: <925c657c57fb4209c6b6fb28d8b013a2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <168a5e02-cf84-3819-8a79-923c935221c0@nginx.com> On 28.11.2018 3:21, chadhasumit13 wrote: > Hi, I intend to generate a unique id (by making use of npm uuid ) and make > an external call to an HTTP end-point, whenever a new call is received by > NGINX. > > Is it possible to use nginscript for this purpose? > > If yes, could you please route me to a good example by means of which I can > fulfil the above-mentioned requirement. Hi chadhasumit13, Could you please clarify more what are you trying to do? 
1) In general, if you simply need a unique id (not necessarily uuid), it is better and easier to use request_id variable (https://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_id) you can access $request_id variable by r.variables.request_id > external call to an HTTP end-point 2) You can use subrequest method (http://nginx.org/en/docs/njs/reference.html#http). You can find some examples with it here: https://github.com/xeioex/njs-examples#subrequests-join From nginx-forum at forum.nginx.org Wed Nov 28 19:29:26 2018 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Wed, 28 Nov 2018 14:29:26 -0500 Subject: TLSv1.3 by default? In-Reply-To: <20181128141928.GZ99070@mdounin.ru> References: <20181128141928.GZ99070@mdounin.ru> Message-ID: <82c20c0cf61a94e5ab4fc1de487e0053.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > There is no such thing as "defaults from the stock nginx.conf". > The nginx.conf file can be used to set various configuration > parameters. > > Obviously enough, distributions may need to set something in > nginx.conf they ship with nginx packages differently from what is > configured in example configuration as available in nginx sources, > conf/nginx.conf. Though my recommendation would be to keep That's the file I meant. > configuration shipped as close to conf/nginx.conf as possible, and > don't diverge from it unless there are good reasons to. OK, but that file sets some settings differently from documented defaults, which is kinda confusing. Wouldn't it make sense to not do that? I'd prefer the nginx.conf to be as clean and simple as possible. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282098,282172#msg-282172 From mdounin at mdounin.ru Wed Nov 28 19:38:25 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Nov 2018 22:38:25 +0300 Subject: TLSv1.3 by default? 
In-Reply-To: <82c20c0cf61a94e5ab4fc1de487e0053.NginxMailingListEnglish@forum.nginx.org> References: <20181128141928.GZ99070@mdounin.ru> <82c20c0cf61a94e5ab4fc1de487e0053.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181128193825.GF99070@mdounin.ru> Hello! On Wed, Nov 28, 2018 at 02:29:26PM -0500, Olaf van der Spek wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > There is no such thing as "defaults from the stock nginx.conf". > > The nginx.conf file can be used to set various configuration > > parameters. > > > > Obviously enough, distributions may need to set something in > > nginx.conf they ship with nginx packages differently from what is > > configured in example configuration as available in nginx sources, > > conf/nginx.conf. Though my recommendation would be to keep > > That's the file I meant. > > > configuration shipped as close to conf/nginx.conf as possible, and > > don't diverge from it unless there are good reasons to. > > OK, but that file sets some settings differently from documented defaults, > which is kinda confusing. > Wouldn't it make sense to not do that? > > I'd prefer the nginx.conf to be as clean and simple as possible. As I already tried to explain in Trac ticket #1681, one of the important goals of conf/nginx.conf, as well as any other default configuration file, is to demonstrate how various things can be tuned. If this is still not clear, I don't think that repeating this explanation would help. Sorry about that. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Nov 28 20:28:56 2018 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Wed, 28 Nov 2018 15:28:56 -0500 Subject: TLSv1.3 by default? 
In-Reply-To: <20181128193825.GF99070@mdounin.ru> References: <20181128193825.GF99070@mdounin.ru> Message-ID: <5e0f506ad6b3a4286a5f5673b4e5ae60.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > > OK, but that file sets some settings differently from documented > defaults, > > which is kinda confusing. > > Wouldn't it make sense to not do that? > > > > I'd prefer the nginx.conf to be as clean and simple as possible. > > As I already tried to explain in Trac ticket #1681, one of the > important goals of conf/nginx.conf, as well as any other default > configuration file, is to demonstrate how various things can be > tuned. This is mostly done by comments.. though I'd argue a link to a HTML document would be better to explain things. > If this is still not clear, I don't think that repeating > this explanation would help. Sorry about that. Didn't realise that was you.. So should a 'default' install of nginx end up with default_type application/octet-stream, with default_type text/plain or would both be fine with you? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282098,282175#msg-282175 From pgnet.dev at gmail.com Wed Nov 28 23:34:02 2018 From: pgnet.dev at gmail.com (pgndev) Date: Wed, 28 Nov 2018 15:34:02 -0800 Subject: How to set vhost-specific ENV vars for pass to php-fpm/fastcgi? Message-ID: I run nginx + php-fpm. I include the following 'secure' config into my shared site config cat ../sec/params.shared fastcgi_param DATABASE_HOST "localhost"; fastcgi_param DATABASE_PORT "3306"; fastcgi_param DATABASE_NAME "db_name"; fastcgi_param DATABASE_USER "db_user"; fastcgi_param DATABASE_PASSWORD "db_pass"; It works as intended, setting the ENV vars in PHP env. 
I'd like to add some vhost-specific env vars that ONLY exist & are set for specific vhosts' ENVs. For example, I'd like to set up a conditional addition of

set $this_token "";
if ($host = 'vhost1.example.com') {
    set $this_token "data";
10  fastcgi_param VHOST1_TOKEN $this_token;
}

But when I add this stanza to the main config above, the conf check fails:

nginx: [emerg] "fastcgi_param" directive is not allowed here in /etc/nginx/sec/param.shared:10
nginx: configuration file /etc/nginx/nginx.conf test failed

If instead I add

set $this_token "";
if ($host = 'vhost1.example.com') {
    set $this_token "data";
}
fastcgi_param VHOST1_TOKEN $this_token;

it passes the check:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

In this case the ENV var "VHOST1_TOKEN" is correctly defined in the match case, but it is also *defined*, though null, for any/all other vhosts.

How do I construct this conditional ENV var setting to ONLY set/define the vars on host match?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From r at roze.lv Thu Nov 29 07:02:32 2018
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 29 Nov 2018 09:02:32 +0200
Subject: How to set vhost-specific ENV vars for pass to php-fpm/fastcgi?
In-Reply-To: 
References: 
Message-ID: <001801d487b1$802d4a80$8087df80$@roze.lv>
You can add if_not_empty at the end of the particular fastcgi_param directive: fastcgi_param VHOST1_TOKEN $this_token if_not_empty; http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_param rr From stefan.safar at showmax.com Thu Nov 29 14:23:51 2018 From: stefan.safar at showmax.com (Stefan Safar) Date: Thu, 29 Nov 2018 15:23:51 +0100 Subject: Inner workings of nginx cache manager Message-ID: Hi there, I'd like to know a little bit more about the inner workings of the cache manager. I looked through the code, and if I understand it correctly, when it runs out of disk space specified by max_size, it tries to run the cache manager, which looks at a queue of last-accessed URLs and tries to remove the least used URL from the queue and from the disk. My question is, whether the queue is somehow persisted on disk, or I misunderstood something? What I'm trying to know is what happens when an nginx instance runs out of disk space and it's restarted - how does nginx know what it should or shouldn't delete? I don't think I saw any code that would scan through the disk and that would be a rather slow way to deal with this. Thanks, Stefan Safar -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 29 14:50:45 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Nov 2018 17:50:45 +0300 Subject: Inner workings of nginx cache manager In-Reply-To: References: Message-ID: <20181129145045.GJ99070@mdounin.ru> Hello! On Thu, Nov 29, 2018 at 03:23:51PM +0100, Stefan Safar wrote: > Hi there, > > I'd like to know a little bit more about the inner workings of the cache > manager. I looked through the code, and if I understand it correctly, when > it runs out of disk space specified by max_size, it tries to run the cache > manager, which looks at a queue of last-accessed URLs and tries to remove > the least used URL from the queue and from the disk. 
> My question is, whether the queue is somehow persisted on disk, or I
> misunderstood something? What I'm trying to know is what happens when an
> nginx instance runs out of disk space and it's restarted - how does nginx
> know what it should or shouldn't delete? I don't think I saw any code that
> would scan through the disk and that would be a rather slow way to deal
> with this.

The LRU queue is only maintained in memory. When nginx is restarted, details of which cache items were accessed last are lost, and if nginx has to remove some items because max_size is reached, it will remove mostly arbitrary items unless they were accessed after the restart.

Note though that there is code which scans through the disk to find out which items are in the cache (and how much space they take). The cache loader process does this; see http://nginx.org/r/proxy_cache_path for a high-level description of how it works.

--
Maxim Dounin
http://mdounin.ru/

From jpereiran at gmail.com Thu Nov 29 23:51:13 2018
From: jpereiran at gmail.com (Jorge Pereira)
Date: Thu, 29 Nov 2018 21:51:13 -0200
Subject: Problems with cache by mime/type
Message-ID: 

Hi,

I am using nginx/1.12.0 and I am trying to use the config below, but the "map" on "$upstream_http_content_type" always matches the default value "1". However, if I remove "proxy_cache_bypass" then the map works. I still need the "proxy_cache_bypass" capability.

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    map $sent_http_content_type $no_cache {
        default 1;

        application/zip          0; # *.zip files
        application/octet-stream 0; # /artifactory/api/pypi and any other binary files.
        application/java-archive 0; # *.jar
        application/x-nupkg      0; # /artifactory/api/nuget/.*/Download/.*
    }

    proxy_cache_path $CACHE_FOLDER levels=1:2 keys_zone=artifactory_cache:$CACHE_KEYS_SIZE max_size=$CACHE_SIZE inactive=$CACHE_DURATION use_temp_path=off;

    server {
        listen 80;
        server_name mywww.lan;

        set $upstream https://172.16.0.1/artifactory;

        proxy_read_timeout 10;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_next_upstream http_503 non_idempotent;
        proxy_set_header Host $http_host;

        proxy_cache artifactory_cache;
        proxy_cache_bypass $no_cache;
        proxy_no_cache $no_cache;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache_key $scheme$proxy_host$request_uri$auth_cache_key;
        proxy_cache_valid 200 24h;
        proxy_ignore_headers "Set-Cookie" "Cache-control";

        location ~/mypath(?<relative_uri>.*)$ {
            proxy_pass $upstream${relative_uri}${is_args}${args};
        }
    }
}

--
Jorge Pereira

From stefan.safar at showmax.com Fri Nov 30 12:26:27 2018
From: stefan.safar at showmax.com (Stefan Safar)
Date: Fri, 30 Nov 2018 13:26:27 +0100
Subject: Inner workings of nginx cache manager
In-Reply-To: <20181129145045.GJ99070@mdounin.ru>
References: <20181129145045.GJ99070@mdounin.ru>
Message-ID: 

Hi Maxim,

thanks a lot for the clarification!

So the process/thread that scans through the files on disk needs to read all the file headers to find the KEY for all the cache files, to keep the information in memory before it starts deleting anything - is that correct?

It would be great if I could specify an option which would tell the cache manager that a whole drive is being used as cache, which would let the cache manager cut down the time it takes before deleting stuff from a huge drive from days to seconds. Does that make sense?

Stefan Safar

On Thu, Nov 29, 2018 at 3:50 PM Maxim Dounin wrote:
> Hello!
>
> On Thu, Nov 29, 2018 at 03:23:51PM +0100, Stefan Safar wrote:
>
> > Hi there,
> >
> > I'd like to know a little bit more about the inner workings of the cache
> > manager.
> > I looked through the code, and if I understand it correctly, when
> > it runs out of disk space specified by max_size, it tries to run the cache
> > manager, which looks at a queue of last-accessed URLs and tries to remove
> > the least used URL from the queue and from the disk.
> >
> > My question is, whether the queue is somehow persisted on disk, or I
> > misunderstood something? What I'm trying to know is what happens when an
> > nginx instance runs out of disk space and it's restarted - how does nginx
> > know what it should or shouldn't delete? I don't think I saw any code that
> > would scan through the disk and that would be a rather slow way to deal
> > with this.
>
> The LRU queue is only maintained in memory. When nginx is
> restarted, details of which cache items were accessed last are lost,
> and if nginx has to remove some items because max_size is reached,
> it will remove mostly arbitrary items unless they were accessed
> after the restart.
>
> Note though that there is code which scans through the disk to
> find out which items are in the cache (and how much space they
> take). The cache loader process does this; see
> http://nginx.org/r/proxy_cache_path for a high level description
> of how it works.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Fri Nov 30 12:58:13 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 30 Nov 2018 15:58:13 +0300
Subject: Inner workings of nginx cache manager
In-Reply-To: 
References: <20181129145045.GJ99070@mdounin.ru>
Message-ID: <20181130125812.GP99070@mdounin.ru>

Hello!
On Fri, Nov 30, 2018 at 01:26:27PM +0100, Stefan Safar wrote:

> So the process/thread that scans through the files on disk needs to read
> all the file headers to find the KEY for all the cache files, to keep the
> information in memory before it starts deleting anything - is that correct?

No, the cache loader only scans which files are present in the cache (and their sizes); it doesn't try to read them. Raw keys as stored in cache file headers are only needed for a safety check to make sure there are no MD5 collisions between different keys, and this check only happens when returning an actual response from the cache.

--
Maxim Dounin
http://mdounin.ru/

From stefan.safar at showmax.com Fri Nov 30 13:05:22 2018
From: stefan.safar at showmax.com (Stefan Safar)
Date: Fri, 30 Nov 2018 14:05:22 +0100
Subject: Inner workings of nginx cache manager
In-Reply-To: <20181130125812.GP99070@mdounin.ru>
References: <20181129145045.GJ99070@mdounin.ru> <20181130125812.GP99070@mdounin.ru>
Message-ID: 

Hi!

So the cache loader only does something like stat() during the filesystem walk, which should be fairly fast unless you have tens/hundreds of millions of files in cache.

Thanks again!
Stefan

On Fri, Nov 30, 2018 at 1:58 PM Maxim Dounin wrote:
> Hello!
>
> On Fri, Nov 30, 2018 at 01:26:27PM +0100, Stefan Safar wrote:
>
> > So the process/thread that scans through the files on disk needs to read
> > all the file headers to find the KEY for all the cache files, to keep the
> > information in memory before it starts deleting anything - is that correct?
>
> No, the cache loader only scans which files are present in the cache
> (and their sizes); it doesn't try to read them. Raw keys as
> stored in cache file headers are only needed for a safety check to
> make sure there are no MD5 collisions between different keys, and
> this check only happens when returning an actual response from the
> cache.
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org Fri Nov 30 13:12:07 2018
From: francis at daoine.org (Francis Daly)
Date: Fri, 30 Nov 2018 13:12:07 +0000
Subject: Problems with cache by mime/type
In-Reply-To: 
References: 
Message-ID: <20181130131207.laqzlc5lw3vcoybu@daoine.org>

On Thu, Nov 29, 2018 at 09:51:13PM -0200, Jorge Pereira wrote:

Hi there,

> I am using the nginx/1.12.0 and I am trying to use the below config.
> but, the below "map" by "$upstream_http_content_type" is always
> matching with default value "1". but, if I remove "proxy_cache_bypass"
> then the map it works. therefore, I need the "proxy_cache_bypass "
> capability.

It looks like you want to bypass the cache (and therefore always serve these requests from upstream), for certain requests.

Is that correct?

If so -- you must set the "$no_cache" variable based on something in the request, not on something in the response like a $sent_http_ variable.

> map $sent_http_content_type $no_cache {
>     default 1;
>
>     application/zip          0; # *.zip files
>     application/octet-stream 0; # /artifactory/api/pypi and any other binary files.
>     application/java-archive 0; # *.jar
>     application/x-nupkg      0; # /artifactory/api/nuget/.*/Download/.*
> }

Perhaps try setting $no_cache based on $request_uri or $document_uri? That might fit three of the four above.

(Alternatively: set $no_cache to 0 by default, and to 1 based on requests that you are happy to potentially have served from the cache.)
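A request-based version of that map might look roughly like the sketch below. The regex patterns are guesses reconstructed from the comments in the original content-type map (zip, jar, nupkg, and the pypi path), not something tested against the poster's Artifactory setup:

```nginx
# Sketch: key the map on the request URI, which is known before nginx
# decides whether to bypass the cache (unlike the response content type).
map $request_uri $no_cache {
    default                                 1;  # bypass cache for everything else
    ~\.zip$                                 0;  # *.zip files
    ~\.jar$                                 0;  # *.jar files
    ~^/artifactory/api/pypi/                0;  # pypi binaries (assumed path)
    ~/artifactory/api/nuget/.*/Download/    0;  # nuget downloads (assumed path)
}
```

One wrinkle: $request_uri includes the query string, so the `$`-anchored extension patterns would miss requests with arguments; keying on $uri instead may be the safer choice in practice.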
> proxy_cache artifactory_cache;
> proxy_cache_bypass $no_cache;
> proxy_no_cache $no_cache;

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From jpereiran at gmail.com Fri Nov 30 13:32:39 2018
From: jpereiran at gmail.com (Jorge Pereira)
Date: Fri, 30 Nov 2018 11:32:39 -0200
Subject: Problems with cache by mime/type
In-Reply-To: <20181130131207.laqzlc5lw3vcoybu@daoine.org>
References: <20181130131207.laqzlc5lw3vcoybu@daoine.org>
Message-ID: 

Hi Francis,

I sent the wrong snip; the correct config uses $upstream_http_content_type, as can be seen below. Basically, whenever I use "proxy_cache_bypass $no_cache;" it impacts the value of "map $upstream_http_content_type $no_cache".... I don't understand the reason. Thanks for any suggestions.

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    map $upstream_http_content_type $no_cache {
        default 1;

        application/zip          0; # *.zip files
        application/octet-stream 0; # /artifactory/api/pypi and any other binary files.
        application/java-archive 0; # *.jar
        application/x-nupkg      0; # /artifactory/api/nuget/.*/Download/.*
    }

    proxy_cache_path $CACHE_FOLDER levels=1:2 keys_zone=artifactory_cache:$CACHE_KEYS_SIZE max_size=$CACHE_SIZE inactive=$CACHE_DURATION use_temp_path=off;

    server {
        listen 80;
        server_name mywww.lan;

        set $upstream https://172.16.0.1/artifactory;

        proxy_read_timeout 10;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_next_upstream http_503 non_idempotent;
        proxy_set_header Host $http_host;

        proxy_cache artifactory_cache;
        proxy_cache_bypass $no_cache;
        proxy_no_cache $no_cache;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache_key $scheme$proxy_host$request_uri$auth_cache_key;
        proxy_cache_valid 200 24h;
        proxy_ignore_headers "Set-Cookie" "Cache-control";

        location ~/mypath(?<relative_uri>.*)$ {
            proxy_pass $upstream${relative_uri}${is_args}${args};
        }
    }
}

--
Jorge Pereira

On Fri, Nov 30, 2018 at 11:12 AM Francis Daly wrote:
>
> On Thu, Nov 29, 2018 at 09:51:13PM -0200, Jorge Pereira wrote:
>
> Hi there,
>
> > I am using the nginx/1.12.0 and I am trying to use the below config.
> > but, the below "map" by "$upstream_http_content_type" is always
> > matching with default value "1". but, if I remove "proxy_cache_bypass"
> > then the map it works. therefore, I need the "proxy_cache_bypass "
> > capability.
>
> It looks like you want to bypass the cache (and therefore always serve
> these requests from upstream), for certain requests.
>
> Is that correct?
>
> If so -- you must set the "$no_cache" variable based on something in
> the request, not on something in the response like a $sent_http_ variable.
>
> > map $sent_http_content_type $no_cache {
> >     default 1;
> >
> >     application/zip          0; # *.zip files
> >     application/octet-stream 0; # /artifactory/api/pypi and any other binary files.
> >     application/java-archive 0; # *.jar
> >     application/x-nupkg      0; # /artifactory/api/nuget/.*/Download/.*
> > }
>
> Perhaps try setting $no_cache based on $request_uri or $document_uri?
> That might fit three of the four above.
>
> (Alternatively: set $no_cache to 0 by default, and to 1 based on requests
> that you are happy to potentially have served from the cache.)
>
> > proxy_cache artifactory_cache;
> > proxy_cache_bypass $no_cache;
> > proxy_no_cache $no_cache;
>
> Good luck with it,
>
> f
> --
> Francis Daly        francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org Fri Nov 30 14:05:08 2018
From: francis at daoine.org (Francis Daly)
Date: Fri, 30 Nov 2018 14:05:08 +0000
Subject: Problems with cache by mime/type
In-Reply-To: 
References: <20181130131207.laqzlc5lw3vcoybu@daoine.org>
Message-ID: <20181130140508.zwlojuj2dgkktpz7@daoine.org>

On Fri, Nov 30, 2018 at 11:32:39AM -0200, Jorge Pereira wrote:

Hi there,

> I sent the wrong snip. the correct is using
> $upstream_http_content_type as can be seen below. basically, always

$upstream_http_content_type is the http Content-Type header sent by upstream in response to the request from nginx to upstream.

http://nginx.org/r/$upstream_http_

At the time that nginx is deciding "should this request be served from cache or sent to upstream?", $upstream_http_content_type cannot have a value.

Your $no_cache variable that is used in the proxy_cache_bypass directive can only usefully be made up of things that are available in the request from the client to nginx.

> when I use "proxy_cache_bypass $no_cache;" that impact the value of
> "map $upstream_http_content_type $no_cache".... I didn't understand
> what is the reason. thanks for any suggestions.

When "proxy_cache_bypass" is used, it must find the value of the variable $no_cache. At that time, $upstream_http_content_type is empty, so $no_cache maps to 1.

When "proxy_no_cache" is used, it must find the value of the variable $no_cache. If $no_cache was set to 1 previously, it will keep that value.
If $no_cache was not set previously, nginx will check the map, and now potentially set $no_cache to 0.

That is why your "proxy_no_cache" sees a different value depending on whether "proxy_cache_bypass" is commented out or not.

f
--
Francis Daly        francis at daoine.org

From jpereiran at gmail.com Fri Nov 30 14:15:18 2018
From: jpereiran at gmail.com (Jorge Pereira)
Date: Fri, 30 Nov 2018 12:15:18 -0200
Subject: Problems with cache by mime/type
In-Reply-To: <20181130140508.zwlojuj2dgkktpz7@daoine.org>
References: <20181130131207.laqzlc5lw3vcoybu@daoine.org> <20181130140508.zwlojuj2dgkktpz7@daoine.org>
Message-ID: 

ahh!

> At the time that nginx is deciding "should this request be served from
> cache or sent to upstream?", $upstream_http_content_type cannot have
> a value.

Now I figured out the reason. Thank you Francis.

From nginx-forum at forum.nginx.org Fri Nov 30 18:39:01 2018
From: nginx-forum at forum.nginx.org (EndaD)
Date: Fri, 30 Nov 2018 13:39:01 -0500
Subject: Fail_timeout behaviour on no available servers
Message-ID: <30bd157b74dbc94e592f9a224e6fb9d4.NginxMailingListEnglish@forum.nginx.org>

hi,

In nginx 1.11.5, this change was introduced:

- Change: now if there are no available servers in an upstream, nginx will not reset number of failures of all servers as it previously did, but will wait for fail_timeout to expire

Is there any way via configuration to change this behaviour back, to reset the number of failures immediately if there are no available servers?

Regards,
Enda

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282218,282218#msg-282218

From nginx-forum at forum.nginx.org Fri Nov 30 18:39:46 2018
From: nginx-forum at forum.nginx.org (dertin)
Date: Fri, 30 Nov 2018 13:39:46 -0500
Subject: Nginx tries to open the html file and through error 404. I expected the redirection to the php script
Message-ID: 

Hi,

Nginx tries to open the html file and throws a 404 error. I expected the redirection to the php script.
server {
    listen 443 ssl http2 fastopen=500 reuseport;
    listen [::]:443 ssl http2 fastopen=500 reuseport ipv6only=on;
    server_name www.testing.com.uy;
    resolver 1.1.1.1 1.0.0.1 valid=300s;
    charset utf-8;

    root /var/www/webdisk/testing.com.uy/htdocs;
    autoindex off;
    index load.php index.php index.html;

    pagespeed Disallow "*";
    pagespeed Allow "?https://www.testing.com.uy/*";
    pagespeed Disallow "*/blog/*";
    pagespeed Disallow "*/ws/*";
    pagespeed Disallow "*/not-exist-file/*";

    # Script PHP - WORKS
    location ~ \.php$ {
        include /etc/nginx/fastcgi.conf;
        fastcgi_pass unix:/run/php/php7-fpm.sock;
    }

    # Wordpress - WORKS
    location ~ /blog/ {
        index index.php;
        try_files $uri $uri/ /blog/index.php?$args;
        break;
    }

    # DOES NOT WORK with \.html
    # Nginx tries to open the html file and throws a 404 error.
    # I expected the redirection to the php script.
    location ~ /not-exist-file/([a-zA-Z0-9-]*)\.html$ {
        rewrite /not-exist-file/([a-zA-Z0-9-]*)\.html$ /not-exist-file/post.php?name=$1 last;
        break;
    }

    location ~* /(.+)$ {
        # /test.html -> /public/prov/test.html - WORKS
        if (-f $document_root/public/prov/$1.html) {
            rewrite (.+)$ /public/prov/$1.html last;
            break;
        }
        # Framework PHP - WORKS
        if (!-e $request_filename) {
            rewrite (.+)$ /load.php?request=$1 last;
            break;
        }
    }
}

Error:

[debug] 50772#0: *64 Passing on content handling for non-pagespeed resource '?https://www.testing.com.uy/not-exist-file/test.html'
[error] 50772#0: *64 open() "/var/www/webdisk/testing.com.uy/htdocs/not-exist-file/test.html" failed (2: No such file or directory), client: X.X.X.X, server: www.testing.com.uy, request: "GET /not-exist-file/test.html HTTP/2.0", ...

Regards.
Guillermo.
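Since the debug line shows pagespeed touching the resource, one way to narrow this down - offered only as a debugging sketch under assumed port and server name, not a diagnosis - is to reproduce just the two relevant regex locations in a bare test server with the pagespeed directives removed:

```nginx
# Minimal test server (hypothetical listen/server_name): regex locations
# are tried in order of appearance and the first match wins, so a request
# for /not-exist-file/foo.html should be rewritten to post.php and then
# re-matched against the \.php$ location. If this works but the full
# config still returns 404, the interference comes from something outside
# these locations (e.g. the pagespeed module).
server {
    listen 8080;
    server_name test.local;

    location ~ \.php$ {
        return 200 "php handler got: $uri?$args\n";
    }

    location ~ /not-exist-file/([a-zA-Z0-9-]*)\.html$ {
        rewrite /not-exist-file/([a-zA-Z0-9-]*)\.html$ /not-exist-file/post.php?name=$1 last;
    }
}
```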
Version: uname -a Linux host 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08) x86_64 GNU/Linux nginx -V nginx version: nginx/1.15.6 built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) built with OpenSSL 1.1.1 11 Sep 2018 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=www-data --group=www-data --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_v2_module --with-file-aio --with-http_realip_module --with-http_sub_module --with-ld-opt='-L/usr/local/lib -Wl,-rpath,/usr/local/lib -ljemalloc' --with-cc-opt='-m64 -march=native -DTCP_FASTOPEN=23 -g -O3 -fstack-protector-strong -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -gsplit-dwarf' --add-module=/var/tmp/nginx_build/nginx_src/./incubator-pagespeed-ngx-1.13.35.2-stable Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282219,282219#msg-282219 From m16+nginx at monksofcool.net Fri Nov 30 21:00:33 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 30 Nov 2018 22:00:33 +0100 Subject: Recommended method to debug POST requests? Message-ID: <8736riuvpq.fsf@ra.horus-it.com> While searching for a way to debug POST requests in NGINX 1.15, I found a link to https://github.com/openresty/echo-nginx-module (the "HTTP Echo" module) on https://www.nginx.com/resources/wiki/modules/ . Is HTTP Echo the recommended way to go, or are there any alternatives, ideally methods that do not require third party addons? -Ralph
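One addon-free option for inspecting POST requests is nginx's built-in $request_body variable, which can be written to an access log - with the caveat that it is only populated in locations where nginx itself reads the body (e.g. ones using proxy_pass or fastcgi_pass) and the body was read to a memory buffer. A sketch with made-up paths and backend address:

```nginx
http {
    # Log the raw POST body next to the request line.
    log_format postdebug '$remote_addr [$time_local] "$request" body: "$request_body"';

    server {
        listen 8080;

        location /debug {
            access_log /var/log/nginx/post_debug.log postdebug;
            proxy_pass http://127.0.0.1:9000;  # hypothetical upstream
        }
    }
}
```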