From nginx-forum at forum.nginx.org Sat Dec 1 06:02:19 2018
From: nginx-forum at forum.nginx.org (blason)
Date: Sat, 01 Dec 2018 01:02:19 -0500
Subject: In Nginx reverse proxy unable to disable TLS1
Message-ID: 

Hi Team,

I have deployed nginx in reverse proxy mode and am trying to disable
TLS 1.0 and 1.1 in the configuration file, but somehow they still show
when the site is scanned by SSL Labs.

Any idea why?

nginx version: nginx/1.10.1

ssl_prefer_server_ciphers On;
ssl_protocols TLSv1.2;
ssl_ciphers ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS;
ssl_dhparam /etc/ssl/stest.pem;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282222,282222#msg-282222

From cyflhn at 163.com Sun Dec 2 10:29:33 2018
From: cyflhn at 163.com (yf chu)
Date: Sun, 2 Dec 2018 18:29:33 +0800 (CST)
Subject: Can Nginx only cache response body while excluding some response headers?
Message-ID: <52d43d2f.51ab.1676e77532d.Coremail.cyflhn@163.com>

We all know that the cache feature in Nginx will cache all response
content generated by the upstream server. But I wonder whether there is a
solution where only the response body is cached by Nginx while some
response headers are not cached and are sent to the client directly.
I know that Nginx provides some cache-bypassing directives such as
"proxy_no_cache" and "proxy_cache_bypass". These directives are not
suitable for me, as the hit percentage will drop significantly if they
are configured.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mikydevel at yahoo.fr Sun Dec 2 22:03:14 2018
From: mikydevel at yahoo.fr (Mik J)
Date: Sun, 2 Dec 2018 22:03:14 +0000 (UTC)
Subject: avoid redirect
References: <1752959808.1055743.1543788194901.ref@mail.yahoo.com>
Message-ID: <1752959808.1055743.1543788194901@mail.yahoo.com>

Hello,

I'd like to be able to offer Let's Encrypt on port 80 only and redirect
everything else to port 443.

server {
    listen 80;
    listen [::]:80;
    listen 443;
    listen [::]:443;
    server_name www.mydomain.org blog.mydomain.org;
    location ^~ /.well-known/acme-challenge { default_type "text/plain"; root /var/www/letsencrypt; }
    location = /.well-known/acme-challenge/ { return 404; }
    return 301 https://mydomain.org;
}

My problem is that everything is redirected and I cannot access a file in
/var/www/letsencrypt/.well-known/acme-challenge
When I comment out the return 301 it works, but I lose the redirection.

It seems to me that nginx parses everything, whereas I would expect it to stop at
location ^~ /.well-known/acme-challenge { default_type "text/plain"; root /var/www/letsencrypt; }

Does anyone know the trick?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kohenkatz at gmail.com Sun Dec 2 22:09:35 2018
From: kohenkatz at gmail.com (Moshe Katz)
Date: Sun, 2 Dec 2018 17:09:35 -0500
Subject: avoid redirect
In-Reply-To: <1752959808.1055743.1543788194901@mail.yahoo.com>
References: <1752959808.1055743.1543788194901.ref@mail.yahoo.com> <1752959808.1055743.1543788194901@mail.yahoo.com>
Message-ID: 

I believe you need to put the `return 301 ...` inside a location block
too. Otherwise, it overrides all the location blocks.

I'm on my phone now, but I'll try to share a sample file from one of my
servers (that works as you want it) when I get back to my computer.
Moshe

On Sun, Dec 2, 2018, 5:03 PM Mik J via nginx wrote:

> Hello,
>
> I'd like to be able to offer Let's Encrypt on port 80 only and redirect
> everything else to port 443.
>
> server {
>     listen 80;
>     listen [::]:80;
>     listen 443;
>     listen [::]:443;
>     server_name www.mydomain.org blog.mydomain.org;
>     location ^~ /.well-known/acme-challenge { default_type
> "text/plain"; root /var/www/letsencrypt; }
>     location = /.well-known/acme-challenge/ { return 404; }
>     return 301 https://mydomain.org;
> }
>
> My problem is that everything is redirected and I cannot access a file in
> /var/www/letsencrypt/.well-known/acme-challenge
> When I comment out the return 301 it works, but I lose the redirection.
>
> It seems to me that nginx parses everything, whereas I would expect it to
> stop at
> location ^~ /.well-known/acme-challenge { default_type "text/plain"; root
> /var/www/letsencrypt; }
>
> Does anyone know the trick?
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kohenkatz at gmail.com Sun Dec 2 22:57:21 2018
From: kohenkatz at gmail.com (Moshe Katz)
Date: Sun, 2 Dec 2018 17:57:21 -0500
Subject: avoid redirect
In-Reply-To: 
References: <1752959808.1055743.1543788194901.ref@mail.yahoo.com> <1752959808.1055743.1543788194901@mail.yahoo.com>
Message-ID: 

Here is a sample working configuration from one of my servers. Note that
it uses separate `server` blocks for HTTP and HTTPS to make it easier to read.

server {
    listen 80;
    listen [::]:80;
    server_name server.example.com;

    location ~ /\.well-known {
        root /path/to/site;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name server.example.com;

    root /path/to/site;

    # rest of server config left out for brevity...
}

Doing it this way has a side benefit if you have many sites running on a
single server and you would like all of them to use LetsEncrypt and to be
redirected to HTTPS. You can change the HTTP `server` block to look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location ~ /\.well-known {
        # ALL LetsEncrypt authorizations will be done in this single shared folder.
        # This means you can issue the certificate using the LetsEncrypt command line
        # and then create the `server` block which already includes the correct path to the certificate.
        root /var/www/html;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

You then only need to create HTTPS `server` blocks for each site, which
makes your configuration much simpler.

Moshe

--
Moshe Katz
-- kohenkatz at gmail.com
-- +1(301)867-3732

On Sun, Dec 2, 2018 at 5:09 PM Moshe Katz wrote:

> I believe you need to put the `return 301 ...` inside a location block
> too. Otherwise, it overrides all the location blocks.
>
> I'm on my phone now, but I'll try to share a sample file from one of my
> servers (that works as you want it) when I get back to my computer.
>
> Moshe
>
>
> On Sun, Dec 2, 2018, 5:03 PM Mik J via nginx wrote:
>
>> Hello,
>>
>> I'd like to be able to offer Let's Encrypt on port 80 only and redirect
>> everything else to port 443.
>>
>> server {
>>     listen 80;
>>     listen [::]:80;
>>     listen 443;
>>     listen [::]:443;
>>     server_name www.mydomain.org blog.mydomain.org;
>>     location ^~ /.well-known/acme-challenge { default_type
>> "text/plain"; root /var/www/letsencrypt; }
>>     location = /.well-known/acme-challenge/ { return 404; }
>>     return 301 https://mydomain.org;
>> }
>>
>> My problem is that everything is redirected and I cannot access a file in
>> /var/www/letsencrypt/.well-known/acme-challenge
>> When I comment out the return 301 it works, but I lose the redirection.
>>
>> It seems to me that nginx parses everything, whereas I would expect it to
>> stop at
>> location ^~ /.well-known/acme-challenge { default_type "text/plain"; root
>> /var/www/letsencrypt; }
>>
>> Does anyone know the trick?
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mikydevel at yahoo.fr Sun Dec 2 23:13:59 2018
From: mikydevel at yahoo.fr (Mik J)
Date: Sun, 2 Dec 2018 23:13:59 +0000 (UTC)
Subject: avoid redirect
In-Reply-To: 
References: <1752959808.1055743.1543788194901.ref@mail.yahoo.com> <1752959808.1055743.1543788194901@mail.yahoo.com>
Message-ID: <833562441.1092337.1543792439301@mail.yahoo.com>

Hello Moshe,

Thank you very much for your quick and detailed answer.

Have a nice day!

From mdounin at mdounin.ru Mon Dec 3 14:08:40 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 3 Dec 2018 17:08:40 +0300
Subject: Recommended method to debug POST requests?
In-Reply-To: <8736riuvpq.fsf@ra.horus-it.com>
References: <8736riuvpq.fsf@ra.horus-it.com>
Message-ID: <20181203140840.GW99070@mdounin.ru>

Hello!

On Fri, Nov 30, 2018 at 10:00:33PM +0100, Ralph Seichter wrote:

> While searching for a way to debug POST requests in NGINX 1.15, I found
> a link to https://github.com/openresty/echo-nginx-module (the "HTTP
> Echo" module) on https://www.nginx.com/resources/wiki/modules/ .
>
> Is HTTP Echo the recommended way to go, or are there any alternatives,
> ideally methods that do not require third-party addons?
Depending on what you are trying to debug:

- the $request_body variable, http://nginx.org/r/$request_body
- the $request_body_file variable, http://nginx.org/r/$request_body_file
- the client_body_in_file_only directive, http://nginx.org/r/client_body_in_file_only

Also, the mirror module:

http://nginx.org/en/docs/http/ngx_http_mirror_module.html

and the embedded perl module:

http://nginx.org/en/docs/http/ngx_http_perl_module.html

might also be helpful.

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Mon Dec 3 14:13:31 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 3 Dec 2018 17:13:31 +0300
Subject: In Nginx reverse proxy unable to disable TLS1
In-Reply-To: 
References: 
Message-ID: <20181203141331.GX99070@mdounin.ru>

Hello!

On Sat, Dec 01, 2018 at 01:02:19AM -0500, blason wrote:

> Hi Team,
>
> I have deployed nginx in reverse proxy mode and am trying to disable
> TLS 1.0 and 1.1 in the configuration file, but somehow they still show
> when the site is scanned by SSL Labs.
>
> Any idea why?
>
> nginx version: nginx/1.10.1
>
> ssl_prefer_server_ciphers On;
> ssl_protocols TLSv1.2;
> ssl_ciphers
> ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS;
> ssl_dhparam /etc/ssl/stest.pem;

Make sure you change ssl_protocols in the right context. It is not
possible to change the enabled SSL protocols in a SNI-based virtual
server, so you have to define the "ssl_protocols" directive in the
default server for the listening socket. The simplest solution would be
to define "ssl_protocols" in the "http" context, so it is used for all
servers.
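As a minimal sketch of that advice (the hostnames and certificate paths below are placeholders, not taken from this thread): set "ssl_protocols" once in the "http" context so that every server block, including the default server for the listening socket, which performs the handshake before SNI selects a virtual server, enforces the same protocol set.

```nginx
http {
    # Set once at http level: inherited by every server block below,
    # including the default server that decides the handshake protocol
    # for SNI-based virtual servers.
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;

    # Default server for the listening socket.
    server {
        listen 443 ssl default_server;
        server_name _;
        ssl_certificate     /etc/ssl/default.crt;  # placeholder
        ssl_certificate_key /etc/ssl/default.key;  # placeholder
    }

    # SNI-based virtual server; do not override ssl_protocols here.
    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/ssl/example.crt;  # placeholder
        ssl_certificate_key /etc/ssl/example.key;  # placeholder
    }
}
```

With this layout a scanner such as SSL Labs should no longer be able to negotiate TLS 1.0/1.1, because the protocol list is fixed in the default server before the SNI-selected virtual server is known.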
--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Mon Dec 3 16:55:56 2018
From: nginx-forum at forum.nginx.org (benzimmer)
Date: Mon, 03 Dec 2018 11:55:56 -0500
Subject: Proxying setup delivering wrong cache entry in some edge cases
In-Reply-To: <000201d46537$5c2e1e80$148a5b80$@roze.lv>
References: <000201d46537$5c2e1e80$148a5b80$@roze.lv>
Message-ID: <47402740f9ee00310ae5f4576ae8f0e4.NginxMailingListEnglish@forum.nginx.org>

Thanks for your answer, and apologies for the long delay.

How would the $http_host ever be empty? If I make a request without it, I
receive a 400 Bad Request, as the HTTP spec requires. Does Nginx still
forward the request to the upstream server and populate a cache entry?

Additionally, if I make requests to our backend without a proper
X-Forwarded-For header, I will always receive a 404 and not data for a
wrong domain.

Unfortunately we're still not able to reproduce the problem on our end,
but we still receive complaints from users encountering it. We removed all
caching from the problematic endpoint, but the problem still seems to
persist.

Are there any known conditions where Nginx would pass a wrong host to the
upstream server for any reason?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281606,282244#msg-282244

From lahiruprasad at gmail.com Tue Dec 4 01:14:32 2018
From: lahiruprasad at gmail.com (Lahiru Prasad)
Date: Tue, 4 Dec 2018 06:44:32 +0530
Subject: Monitor active connections per proxy pass
Message-ID: 

Hi,

Currently we are using the upstream check module to check the health of
each upstream. I want to know whether there's a module to get the number
of active connections per proxy_pass.

/abc --- how many active connections
/xyz --- how many active connections

Regards,
Lahiru Prasad.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue Dec 4 10:56:00 2018 From: nginx-forum at forum.nginx.org (hpuac) Date: Tue, 04 Dec 2018 05:56:00 -0500 Subject: No gzip compression for HTTP status code 202 Message-ID: Hey nginx team, I noticed that the ngx_http_gzip_module is not compressing the response body if the HTTP response status code is 202 (Accepted). After having a look at the code, it looks like the filter is only active if the status code is 200, 403 or 404. ( https://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_gzip_filter_module.c#L249 ) Is there any particular reason to not compress other status codes? And if not, could we adjust this logic to also compress others, for example everything except 204 (No Content)? Best regards, Hans Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282249,282249#msg-282249 From mdounin at mdounin.ru Tue Dec 4 11:15:23 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Dec 2018 14:15:23 +0300 Subject: No gzip compression for HTTP status code 202 In-Reply-To: References: Message-ID: <20181204111523.GA99070@mdounin.ru> Hello! On Tue, Dec 04, 2018 at 05:56:00AM -0500, hpuac wrote: > I noticed that the ngx_http_gzip_module is not compressing the response body > if the HTTP response status code is 202 (Accepted). > After having a look at the code, it looks like the filter is only active if > the status code is 200, 403 or 404. > ( > https://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_gzip_filter_module.c#L249 > ) > Is there any particular reason to not compress other status codes? > And if not, could we adjust this logic to also compress others, for example > everything except 204 (No Content)? 
http://mailman.nginx.org/pipermail/nginx/2012-September/035338.html

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Tue Dec 4 12:09:48 2018
From: nginx-forum at forum.nginx.org (hpuac)
Date: Tue, 04 Dec 2018 07:09:48 -0500
Subject: No gzip compression for HTTP status code 202
In-Reply-To: <20181204111523.GA99070@mdounin.ru>
References: <20181204111523.GA99070@mdounin.ru>
Message-ID: <0021dc034b330f8ed9d1326b9742c0d7.NginxMailingListEnglish@forum.nginx.org>

Hey!

> http://mailman.nginx.org/pipermail/nginx/2012-September/035338.html

Thank you for the quick answer!
Would it make sense to add that information to the documentation?
https://nginx.org/en/docs/http/ngx_http_gzip_module.html

You gave some examples of why 206, 304, 400, and 500 are not compressed,
but is there any particular reason not to compress 202?
Status codes like 201 and 202 should, like 200, be common and safe
response codes that could be compressed.
I had a quick look and, for example, Undertow also compresses 202
responses.

What do you think about making the status codes that get compressed
configurable?

Best regards,
Hans

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282249,282252#msg-282252

From mdounin at mdounin.ru Tue Dec 4 12:39:01 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 4 Dec 2018 15:39:01 +0300
Subject: No gzip compression for HTTP status code 202
In-Reply-To: <0021dc034b330f8ed9d1326b9742c0d7.NginxMailingListEnglish@forum.nginx.org>
References: <20181204111523.GA99070@mdounin.ru> <0021dc034b330f8ed9d1326b9742c0d7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20181204123901.GC99070@mdounin.ru>

Hello!

On Tue, Dec 04, 2018 at 07:09:48AM -0500, hpuac wrote:

> > http://mailman.nginx.org/pipermail/nginx/2012-September/035338.html
>
> Thank you for the quick answer!
> Would it make sense to add that information to the documentation?
> https://nginx.org/en/docs/http/ngx_http_gzip_module.html

I don't think so. It is an implementation detail.
> You gave some examples of why 206, 304, 400, and 500 are not compressed,
> but is there any particular reason not to compress 202?
> Status codes like 201 and 202 should, like 200, be common and safe
> response codes that could be compressed.
> I had a quick look and, for example, Undertow also compresses 202
> responses.

Both 201 and 202 are very rare, and aren't expected to be big. E.g., 201
as returned by nginx's DAV module contains an empty entity body, and
certainly won't benefit from compression.

Also, with status codes specific to non-browser clients it is a good idea
to test whether various popular clients using these status codes can
actually handle compression of these codes. And this turns out to be a
problem, see ticket #394 which is still waiting for tests on Windows /
macOS builtin DAV clients:

https://trac.nginx.org/nginx/ticket/394

> What do you think about making the status codes that get compressed
> configurable?

I don't think this is needed. Moreover, I would expect this to result in
problems, as people will inevitably try to compress responses with codes
certainly not to be compressed, simply because they can.

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Tue Dec 4 15:07:28 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 4 Dec 2018 18:07:28 +0300
Subject: nginx-1.14.2
Message-ID: <20181204150728.GD99070@mdounin.ru>

Changes with nginx 1.14.2                                        04 Dec 2018

*) Bugfix: nginx could not be built by gcc 8.1.

*) Bugfix: nginx could not be built on Fedora 28 Linux.

*) Bugfix: in handling of client addresses when using unix domain listen
   sockets to work with datagrams on Linux.

*) Change: the logging level of the "http request", "https proxy
   request", "unsupported protocol", "version too low", "no suitable key
   share", and "no suitable signature algorithm" SSL errors has been
   lowered from "crit" to "info".
*) Bugfix: when using OpenSSL 1.1.0 or newer it was not possible to switch off "ssl_prefer_server_ciphers" in a virtual server if it was switched on in the default server. *) Bugfix: nginx could not be built with LibreSSL 2.8.0. *) Bugfix: if nginx was built with OpenSSL 1.1.0 and used with OpenSSL 1.1.1, the TLS 1.3 protocol was always enabled. *) Bugfix: sending a disk-buffered request body to a gRPC backend might fail. *) Bugfix: connections with some gRPC backends might not be cached when using the "keepalive" directive. *) Bugfix: a segmentation fault might occur in a worker process if the ngx_http_mp4_module was used on 32-bit platforms. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Dec 4 15:34:25 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 4 Dec 2018 10:34:25 -0500 Subject: [nginx-announce] nginx-1.14.2 In-Reply-To: <20181204150733.GE99070@mdounin.ru> References: <20181204150733.GE99070@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.14.2 for Windows https://kevinworthington.com/nginxwin1142 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin On Tue, Dec 4, 2018 at 10:07 AM Maxim Dounin wrote: > Changes with nginx 1.14.2 04 Dec > 2018 > > *) Bugfix: nginx could not be built by gcc 8.1. > > *) Bugfix: nginx could not be built on Fedora 28 Linux. > > *) Bugfix: in handling of client addresses when using unix domain > listen > sockets to work with datagrams on Linux. > > *) Change: the logging level of the "http request", "https proxy > request", "unsupported protocol", "version too low", "no suitable > key > share", and "no suitable signature algorithm" SSL errors has been > lowered from "crit" to "info". 
> > *) Bugfix: when using OpenSSL 1.1.0 or newer it was not possible to > switch off "ssl_prefer_server_ciphers" in a virtual server if it was > switched on in the default server. > > *) Bugfix: nginx could not be built with LibreSSL 2.8.0. > > *) Bugfix: if nginx was built with OpenSSL 1.1.0 and used with OpenSSL > 1.1.1, the TLS 1.3 protocol was always enabled. > > *) Bugfix: sending a disk-buffered request body to a gRPC backend might > fail. > > *) Bugfix: connections with some gRPC backends might not be cached when > using the "keepalive" directive. > > *) Bugfix: a segmentation fault might occur in a worker process if the > ngx_http_mp4_module was used on 32-bit platforms. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Tue Dec 4 17:57:15 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 4 Dec 2018 18:57:15 +0100 Subject: proxy_cache_background_update ignores regular expression match when updating In-Reply-To: References: Message-ID: Hello, I'm running into an issue where a proxied location with a regular expression match does not correctly update the cache when using proxy_cache_background_update. The update request to the backend seems to be missing the captured parameters from the regex. I've created a small test case that demonstrates this in nginx 1.15.7. Hopefully I'm not missing anything, I checked the docs and didn't seem to find anything that would explain this behavior. 
nginx version: nginx/1.15.7 built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) built with OpenSSL 1.1.0f 25 May 2017 (running with OpenSSL 1.1.0j 20 Nov 2018) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.15.7/debian/debuild-base/nginx-1.15.7=. 
-specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-specs=/usr/share/dpkg/no-pie-link.specs -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' Configuration: proxy_cache_path /tmp keys_zone=test:1m max_size=1g inactive=1h use_temp_path=off; server { listen 127.0.0.1:8010; root /tmp/nginx; } server { listen 127.0.0.1:8011; location ~ /test/(regular|expression)$ { proxy_pass http://127.0.0.1:8010/test/$1; proxy_cache test; proxy_cache_background_update on; proxy_cache_use_stale updating; proxy_cache_valid 10s; } } Initial testing with proxy_cache_background_update off. Log excerpts show requests to both servers. First request (one to frontend, one to backend as expected): 127.0.0.1 - - [04/Dec/2018:17:42:31 +0000] "GET /test/regular HTTP/1.0" 200 8 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:42:31 +0000] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" Second request (served from frontend cache, all good): 127.0.0.1 - - [04/Dec/2018:17:42:35 +0000] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" Third request (cache expired, so a new request to backend, also good): 127.0.0.1 - - [04/Dec/2018:17:43:14 +0000] "GET /test/regular HTTP/1.0" 200 8 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:43:14 +0000] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" After setting proxy_cache_background_update on, every request tries to do a background update with the wrong URL once the content is expired. The stale content is still served in the meantime. 
127.0.0.1 - - [04/Dec/2018:17:44:01 +0000] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:01 +0000] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:15 +0000] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:15 +0000] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:17 +0000] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:17 +0000] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:19 +0000] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:19 +0000] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:21 +0000] "GET /test/ HTTP/1.0" 403 153 "-" "curl/7.52.1" "-" 127.0.0.1 - - [04/Dec/2018:17:44:21 +0000] "GET /test/regular HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" Is this a bug or am I misunderstanding how this is supposed to work? From nginx-forum at forum.nginx.org Thu Dec 6 11:09:40 2018 From: nginx-forum at forum.nginx.org (dxxvi) Date: Thu, 06 Dec 2018 06:09:40 -0500 Subject: Possible to use nginx as a proxy (like Squid)? Message-ID: Hi All, Is it possible to use nginx as a proxy like Squid, of course without all the access control lists, without protocols which are not http nor https? If yes, could somebody give me a starting point? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282288,282288#msg-282288 From jackdev at mailbox.org Thu Dec 6 12:53:35 2018 From: jackdev at mailbox.org (Jack Henschel) Date: Thu, 06 Dec 2018 13:53:35 +0100 Subject: Possible to use nginx as a proxy (like Squid)? In-Reply-To: References: Message-ID: Hello, yes that is indeed possible with nginx. 
The Admin Guide is a good starting point: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/ The principal module you'll be dealing with is the http_proxy module, you can find the docs here: https://nginx.org/en/docs/http/ngx_http_proxy_module.html Regards Jack On 6 December 2018 12:09:40 CET, dxxvi wrote: >Hi All, > >Is it possible to use nginx as a proxy like Squid, of course without >all the >access control lists, without protocols which are not http nor https? >If >yes, could somebody give me a starting point? > >Thanks. > >Posted at Nginx Forum: >https://forum.nginx.org/read.php?2,282288,282288#msg-282288 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Thu Dec 6 13:01:36 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 6 Dec 2018 16:01:36 +0300 Subject: proxy_cache_background_update ignores regular expression match when updating In-Reply-To: References: Message-ID: <20181206130136.GJ20304@Romans-MacBook-Air.local> Hello Richard, On Tue, Dec 04, 2018 at 06:57:15PM +0100, Richard Stanway via nginx wrote: > Hello, > I'm running into an issue where a proxied location with a regular > expression match does not correctly update the cache when using > proxy_cache_background_update. The update request to the backend seems > to be missing the captured parameters from the regex. I've created a > small test case that demonstrates this in nginx 1.15.7. Hopefully I'm > not missing anything, I checked the docs and didn't seem to find > anything that would explain this behavior. This indeed looks like a bug in the way nginx creates cloned subrequests. Because of it unnamed captures matched in the parent request are not available in the subrequest. A simple workaround is to use named captures instead. [..] 
> location ~ /test/(regular|expression)$ { > proxy_pass http://127.0.0.1:8010/test/$1; This should solve the issue: location ~ /test/($regular|expression)$ { proxy_pass http://127.0.0.1:8010/test/$name; > proxy_cache test; > proxy_cache_background_update on; > proxy_cache_use_stale updating; > proxy_cache_valid 10s; > } > } > > Initial testing with proxy_cache_background_update off. Log excerpts > show requests to both servers. > > First request (one to frontend, one to backend as expected): > 127.0.0.1 - - [04/Dec/2018:17:42:31 +0000] "GET /test/regular > HTTP/1.0" 200 8 "-" "curl/7.52.1" "-" > 127.0.0.1 - - [04/Dec/2018:17:42:31 +0000] "GET /test/regular > HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" > > Second request (served from frontend cache, all good): > 127.0.0.1 - - [04/Dec/2018:17:42:35 +0000] "GET /test/regular > HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" > > Third request (cache expired, so a new request to backend, also good): > 127.0.0.1 - - [04/Dec/2018:17:43:14 +0000] "GET /test/regular > HTTP/1.0" 200 8 "-" "curl/7.52.1" "-" > 127.0.0.1 - - [04/Dec/2018:17:43:14 +0000] "GET /test/regular > HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" > > After setting proxy_cache_background_update on, every request tries to > do a background update with the wrong URL once the content is expired. > The stale content is still served in the meantime. 
> 127.0.0.1 - - [04/Dec/2018:17:44:01 +0000] "GET /test/ HTTP/1.0" 403 > 153 "-" "curl/7.52.1" "-" > 127.0.0.1 - - [04/Dec/2018:17:44:01 +0000] "GET /test/regular > HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" > > 127.0.0.1 - - [04/Dec/2018:17:44:15 +0000] "GET /test/ HTTP/1.0" 403 > 153 "-" "curl/7.52.1" "-" > 127.0.0.1 - - [04/Dec/2018:17:44:15 +0000] "GET /test/regular > HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" > > 127.0.0.1 - - [04/Dec/2018:17:44:17 +0000] "GET /test/ HTTP/1.0" 403 > 153 "-" "curl/7.52.1" "-" > 127.0.0.1 - - [04/Dec/2018:17:44:17 +0000] "GET /test/regular > HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" > > 127.0.0.1 - - [04/Dec/2018:17:44:19 +0000] "GET /test/ HTTP/1.0" 403 > 153 "-" "curl/7.52.1" "-" > 127.0.0.1 - - [04/Dec/2018:17:44:19 +0000] "GET /test/regular > HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" > > 127.0.0.1 - - [04/Dec/2018:17:44:21 +0000] "GET /test/ HTTP/1.0" 403 > 153 "-" "curl/7.52.1" "-" > 127.0.0.1 - - [04/Dec/2018:17:44:21 +0000] "GET /test/regular > HTTP/1.1" 200 8 "-" "curl/7.52.1" "-" > > Is this a bug or am I misunderstanding how this is supposed to work? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From arut at nginx.com Thu Dec 6 18:04:18 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 6 Dec 2018 21:04:18 +0300 Subject: proxy_cache_background_update ignores regular expression match when updating In-Reply-To: <20181206130136.GJ20304@Romans-MacBook-Air.local> References: <20181206130136.GJ20304@Romans-MacBook-Air.local> Message-ID: <20181206180418.GK20304@Romans-MacBook-Air.local> Hi, On Thu, Dec 06, 2018 at 04:01:36PM +0300, Roman Arutyunyan wrote: [..] 
> This should solve the issue: > > location ~ /test/($regular|expression)$ { > proxy_pass http://127.0.0.1:8010/test/$name; Sorry, the right syntax is of course this: location ~ /test/(?<name>regular|expression)$ { proxy_pass http://127.0.0.1:8010/test/$name; > > proxy_cache test; > > proxy_cache_background_update on; > > proxy_cache_use_stale updating; > > proxy_cache_valid 10s; > > } > > } [..] -- Roman Arutyunyan From francis at daoine.org Fri Dec 7 22:16:03 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Dec 2018 22:16:03 +0000 Subject: Possible to use nginx as a proxy (like Squid)? In-Reply-To: References: Message-ID: <20181207221603.xzvlojzvl6cwy5ci@daoine.org> On Thu, Dec 06, 2018 at 06:09:40AM -0500, dxxvi wrote: Hi there, > Is it possible to use nginx as a proxy like Squid, of course without all the > access control lists, without protocols which are not http nor https? I'd say "no". squid is a proxy server. nginx is (among other things) a reverse proxy server. They are different things. If you want an easy proxy, you'll be much happier starting with something that is built to do that task. f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Dec 7 22:23:54 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Dec 2018 22:23:54 +0000 Subject: Can Nginx only cache reponse body while excluding some response headers? In-Reply-To: <52d43d2f.51ab.1676e77532d.Coremail.cyflhn@163.com> References: <52d43d2f.51ab.1676e77532d.Coremail.cyflhn@163.com> Message-ID: <20181207222354.zj5oqrdrwv72if2o@daoine.org> On Sun, Dec 02, 2018 at 06:29:33PM +0800, yf chu wrote: Hi there, > We all know that the cache feature in Nginx will cache all response content generated by upstream server. But I wonder whether there is a solution that only the response body is cached by Nginx while some response headers should not be cached and should be sent to client directly . I'm afraid I don't understand what exactly you are asking.
Could you give more details, or an example? Such as: request from client to nginx; not in cache, so nginx asks upstream which returns headers+body. What do you want nginx to store in the cache, and what should nginx send to the client? same request from client to nginx; it is in the cache. What should nginx send to the client? The answer to that might make your question clearer. Thanks, f -- Francis Daly francis at daoine.org From cyflhn at 163.com Fri Dec 7 22:40:52 2018 From: cyflhn at 163.com (yf chu) Date: Sat, 8 Dec 2018 06:40:52 +0800 (CST) Subject: Can Nginx only cache reponse body while excluding some response headers? In-Reply-To: <20181207222354.zj5oqrdrwv72if2o@daoine.org> References: <52d43d2f.51ab.1676e77532d.Coremail.cyflhn@163.com> <20181207222354.zj5oqrdrwv72if2o@daoine.org> Message-ID: <36091e57.1edd.1678ad4ab47.Coremail.cyflhn@163.com> A request is sent from client to nginx, if it did not hit the cache, I hope nginx only caches the response body returned by upstream and send the headers and response body returned by upstream to client. If the request hit the cache, I hope nginx can generate new headers and send them with the cached response body to client. At 2018-12-08 06:23:54, "Francis Daly" wrote: >On Sun, Dec 02, 2018 at 06:29:33PM +0800, yf chu wrote: > >Hi there, > >> We all know that the cache feature in Nginx will cache all response content generated by upstream server. But I wonder whether there is a solution that only the response body is cached by Nginx while some response headers should not be cached and should be sent to client directly . > >I'm afraid I don't understand what exactly you are asking. > >Could you give more details, or an example? > >Such as: > >request from client to nginx; not in cache, so nginx asks upstream which >returns headers+body. What do you want nginx to store in the cache, >and what should nginx send to the client? > >same request from client to nginx; it is in the cache. 
What should nginx >send to the client? > >The answer to that might make your question clearer. > >Thanks, > > f >-- >Francis Daly francis at daoine.org >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Dec 7 23:14:24 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Dec 2018 23:14:24 +0000 Subject: Can Nginx only cache reponse body while excluding some response headers? In-Reply-To: <36091e57.1edd.1678ad4ab47.Coremail.cyflhn@163.com> References: <52d43d2f.51ab.1676e77532d.Coremail.cyflhn@163.com> <20181207222354.zj5oqrdrwv72if2o@daoine.org> <36091e57.1edd.1678ad4ab47.Coremail.cyflhn@163.com> Message-ID: <20181207231424.e5zxlhrc6evedsyd@daoine.org> On Sat, Dec 08, 2018 at 06:40:52AM +0800, yf chu wrote: Hi there, > A request is sent from client to nginx, if it did not hit the cache, I hope nginx only caches the response body returned by upstream and send the headers and response body returned by upstream to client. > If the request hit the cache, I hope nginx can generate new headers and send them with the cached response body to client. Ok. What new headers do you want nginx to generate? (And how would you configure nginx to do so?) I suspect that stock nginx does not do this; but maybe something could be done with one of the embedded languages. I wonder -- would changing the upstream to send an X-Accel-Redirect header, along with its for-this-response headers, be useful here? If so, nginx could potentially send the same local file contents, with fresh-from-upstream headers each time. That could avoid doing anything with the nginx cache; and all of the "header generation" logic would be on the upstream server. And it might be simpler than patching nginx, if that is what is necessary. 
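A minimal sketch of that approach, with all names and paths invented for illustration (the upstream application would have to maintain the body files and emit the header itself):

```nginx
# The upstream replies with its fresh per-request headers plus something
# like "X-Accel-Redirect: /bodies/some-page", naming a body file it keeps
# up to date. nginx then serves the body bytes from the internal location
# below, while the other headers of the upstream response go to the client.

location / {
    proxy_pass http://backend;      # hypothetical upstream
}

location /bodies/ {
    internal;                       # reachable only via X-Accel-Redirect
    root /var/www/cached;           # hypothetical body store
}
```

That way the body bytes are reused from disk, but the header set the client sees is regenerated by the upstream on every request.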
Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Dec 8 05:34:09 2018 From: nginx-forum at forum.nginx.org (lahiru) Date: Sat, 08 Dec 2018 00:34:09 -0500 Subject: Monitor active connections per proxy pass In-Reply-To: References: Message-ID: Hi, Currently we are using upstream check module to check the health of each upstream. I want to know whether there's a module to get the number of active connections per proxy pass. /abc --- how many active connections /xyz --- how many active connections Regards, Lahiru Prasad. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282248,282308#msg-282308 From m16+nginx at monksofcool.net Sun Dec 9 13:41:56 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Sun, 09 Dec 2018 14:41:56 +0100 Subject: How to build NodeJS support without an Internet connection? Message-ID: <87sgz6yfyz.fsf@ra.horus-it.com> Hello developer team. I am the maintainer of the NGINX Unit ebuild for Gentoo Linux, and currently I am struggling with colliding requirements of Unit's and Gentoo's build strategies. As you may know, the Gentoo way is to build everything from the source code, with only very few exceptions. If, for example, the user desires Python support in Unit, my ebuild ensures that this requirement is met before Unit's release tarball is even unpacked. NodeJS support is a different beast, because the existing Unit build method relies on node-gyp and npm to download and install dependencies during the build process. This fails, because Gentoo builds are executed in a sandbox environment that prohibits both network access and files being written outside the sandbox. Hence, "npm install --global ..." won't do. Is there a way I can modify the existing Unit build to comply with these restrictions? Everything required for the build must either already exist, or it must be downloaded by Gentoo before Unit's compile phase. 
Also, all required artefacts are recorded with a checksum when an ebuild is added to the Gentoo tree, so no "download whatever is the most recent version of XYZ". Your help is appreciated. -Ralph From vbart at nginx.com Sun Dec 9 18:14:14 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 09 Dec 2018 21:14:14 +0300 Subject: How to build NodeJS support without an Internet connection? In-Reply-To: <87sgz6yfyz.fsf@ra.horus-it.com> References: <87sgz6yfyz.fsf@ra.horus-it.com> Message-ID: <2279904.R71bQZdoZm@vbart-laptop> On Sunday, 9 December 2018 16:41:56 MSK Ralph Seichter wrote: > Hello developer team. > > I am the maintainer of the NGINX Unit ebuild for Gentoo Linux, and > currently I am struggling with colliding requirements of Unit's and > Gentoo's build strategies. > > As you may know, the Gentoo way is to build everything from the source > code, with only very few exceptions. If, for example, the user desires > Python support in Unit, my ebuild ensures that this requirement is met > before Unit's release tarball is even unpacked. > > NodeJS support is a different beast, because the existing Unit build > method relies on node-gyp and npm to download and install dependencies > during the build process. This fails, because Gentoo builds are executed > in a sandbox environment that prohibits both network access and files > being written outside the sandbox. Hence, "npm install --global ..." > won't do. > > Is there a way I can modify the existing Unit build to comply with these > restrictions? Everything required for the build must either already > exist, or it must be downloaded by Gentoo before Unit's compile phase. > Also, all required artefacts are recorded with a checksum when an ebuild > is added to the Gentoo tree, so no "download whatever is the most recent > version of XYZ". > > Your help is appreciated. Hi Ralph, Thank you for your effort. 
All the Node.js module dependencies are set in its package.json file: http://hg.nginx.org/unit/file/tip/src/nodejs/unit-http/package.json There's just one: "dependencies": { "node-addon-api": "1.2.0" } and I'm not sure that it's needed at all, especially for Gentoo. I think you can just remove it. See also this overlay for some ideas: https://github.com/msva/mva-overlay/tree/master/www-servers/nginx-unit wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Mon Dec 10 04:56:33 2018 From: nginx-forum at forum.nginx.org (blason) Date: Sun, 09 Dec 2018 23:56:33 -0500 Subject: In Nginx revers proxy unable to disable TLS1 In-Reply-To: <20181203141331.GX99070@mdounin.ru> References: <20181203141331.GX99070@mdounin.ru> Message-ID: Hello, Do you mean I need to mention in each and every reverse proxy stanza or in default config? Is this right? [root at xxxxxx conf.d]# vi default.conf server { listen 80 default_server; #server_name ""; server_name _; return 444; ssl_protocols TLSv1.2; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282222,282316#msg-282316 From dung.trantien at axonactive.com Mon Dec 10 06:43:13 2018 From: dung.trantien at axonactive.com (Dung Tran Tien) Date: Mon, 10 Dec 2018 06:43:13 +0000 Subject: Invalid character found in the request target on Confluence behind nginx Message-ID: Hi, Currently I'm using Confluence 6.10.2 behind nginx. I have some pages whose page names include the character '>' that could not be accessed; the error is: HTTP Status 400 - Bad Request Type Exception Report Message Invalid character found in the request target.
The valid characters are defined in RFC 7230 and RFC 3986 Description The server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing). Exception java.lang.IllegalArgumentException: Invalid character found in the request target. The valid characters are defined in RFC 7230 and RFC 3986 org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:474) org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:294) org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:764) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1388) org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) java.lang.Thread.run(Thread.java:748) Note The full stack trace of the root cause is available in the server logs. Apache Tomcat/9.0.10 But when I access the page bypassing the reverse proxy, it's ok, so it could be a problem in nginx. I read the logs in Confluence and nginx, but did not find anything strange; it seems like the request for this page has been dropped by nginx, as no record was found in the backend log. Please advise me how to fix the issue. I tried some solutions, like setting ignore_invalid_headers to off at the http level, and also set a rewrite, but no luck: if ($request_uri ~ ^(/.*)[>](.*)$) { return 301 $1%3E$2; } Dung Tran Tien ICT Specialist AXON ACTIVE VIETNAM Co.
Ltd www.axonactive.com T +84.28.7109 1234, F +84.28.629 738 86, M +84 933 893 489 Ho Chi Minh Office: Hai Au Building, 39B Truong Son, Ward 4, Tan Binh District, Ho Chi Minh City, Vietnam 106°39'51"East / 10°48'32"North Da Nang Office: PVcomBank Building, 30/4 Street, Hai Chau District, Da Nang, Vietnam 108°13'15"East / 16°2'27"North Can Tho Office: Toyota-NinhKieu Building, 57-59A Cach Mang Thang Tam Street, Can Tho, Vietnam 105°46'34"East / 10°2'57"North San Francisco Office: 281 Ellis Str, San Francisco, CA 94102, United States 122°24'39"West / 37°47'6"North Luzern Office: Schlössli Schönegg, Wilhelmshöhe, Luzern 6003, Switzerland 8°17'52"East / 47°3'1"North -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 10 15:03:52 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Dec 2018 18:03:52 +0300 Subject: In Nginx revers proxy unable to disable TLS1 In-Reply-To: References: <20181203141331.GX99070@mdounin.ru> Message-ID: <20181210150352.GP99070@mdounin.ru> Hello! On Sun, Dec 09, 2018 at 11:56:33PM -0500, blason wrote: > Do you mean I need to mention in each and every reverse proxy stanza or in > default config? You have to configure ssl_protocols in the default server for the listening socket in question. As previously suggested, most simple solution would be to configure ssl_protocols in the http{} block in nginx.conf. > Is this right? > > [root at xxxxxx conf.d]# vi default.conf > server { > listen 80 default_server; > #server_name ""; > server_name _; > return 444; > ssl_protocols TLSv1.2; > > #charset koi8-r; > #access_log /var/log/nginx/log/host.access.log main; > > location / { > root /usr/share/nginx/html; > index index.html index.htm; > } No. The server{} block in question is default for the port 80, which is plain HTTP, and does not use SSL. Note > listen 80 default_server; is the only listening socket in this server block.
You need to configure ssl_protocols in the server{} block which is the default for HTTPS listening socket, usually on port 443. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Mon Dec 10 23:10:16 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 10 Dec 2018 23:10:16 +0000 Subject: Invalid character found in the request target on Confluence behind nginx In-Reply-To: References: Message-ID: <20181210231016.lmxdofnqplxuj4ci@daoine.org> On Mon, Dec 10, 2018 at 06:43:13AM +0000, Dung Tran Tien wrote: Hi there, > Currently I'm using Confluene 6.10.2 behind nginx. I have some pages with the page name including character '>' could not accessible, the error is: > But when I access the page bypassing the reverse proxy, it's ok, so it could be a problem in nginx. It sounds like the request from the client includes a correctly-encoded string %3E, while the request from nginx includes the decoded character >. Can you show the location{} block in nginx that handles this request? Perhaps there is something there that caused the decoding, that can be changed. Possibly you can use something like "tcpdump" to see the actual requests, if the logs do not show the details. Good luck with it, f -- Francis Daly francis at daoine.org From dung.trantien at axonactive.com Tue Dec 11 03:15:52 2018 From: dung.trantien at axonactive.com (Dung Tran Tien) Date: Tue, 11 Dec 2018 03:15:52 +0000 Subject: Invalid character found in the request target on Confluence behind nginx In-Reply-To: <20181210231016.lmxdofnqplxuj4ci@daoine.org> References: <20181210231016.lmxdofnqplxuj4ci@daoine.org> Message-ID: Hi there, Thanks for your feedback, below is location block. 
location /confluence { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://1.1.2.1:8090/confluence; proxy_read_timeout 200; } Here is the access log record on nginx: 14.161.32.199 - - [11/Dec/2018:04:06:51 +0100] "GET /confluence/display/AII/5.+test+%3E+A?src=contextnavpagetreemode HTTP/2.0" 400 2490 "https://jira.a.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36" Honestly, I did not find any useful info from tcpdump related to this issue. I tried to enable the Tomcat http access log on Confluence, but did not find the request above there. Yours sincerely, Dung -----Original Message----- From: nginx On Behalf Of Francis Daly Sent: Tuesday, December 11, 2018 6:10 AM To: nginx at nginx.org Subject: Re: Invalid character found in the request target on Confluence behind nginx On Mon, Dec 10, 2018 at 06:43:13AM +0000, Dung Tran Tien wrote: Hi there, > Currently I'm using Confluene 6.10.2 behind nginx. I have some pages with the page name including character '>' could not accessible, the error is: > But when I access the page bypassing the reverse proxy, it's ok, so it could be a problem in nginx. It sounds like the request from the client includes a correctly-encoded string %3E, while the request from nginx includes the decoded character >. Can you show the location{} block in nginx that handles this request? Perhaps there is something there that caused the decoding, that can be changed. Possibly you can use something like "tcpdump" to see the actual requests, if the logs do not show the details. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue Dec 11 07:32:32 2018 From: nginx-forum at forum.nginx.org (mundhada) Date: Tue, 11 Dec 2018 02:32:32 -0500 Subject: How to use nginx dynamic module when proxying connections to application servers Message-ID: <13882d4cbc04a918850ac1b8afb5d5d6.NginxMailingListEnglish@forum.nginx.org> I developed an Nginx dynamic module and did the required configuration in nginx.conf. I am able to run that module and see it processing. This module reads the request header, cookies etc., does some business logic execution and modifies the response header, then sends the response back to the client. Problem: "How to use an nginx module when proxying connections to application servers" I'm using Nginx as the proxy server and Tomcat or Node as the application server, and my application is hosted on the app server. I'm able to route the request through both the web & app server and get the response back, but the module isn't getting invoked. Not sure how to link/configure it so that I'm able to intercept the request and modify the response header as needed. Browser <-> Web Server (module sits here) <-> Application Server Has anybody explored this part? If yes, then please help. Let me know if more detail is needed.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282329,282329#msg-282329 From francis at daoine.org Tue Dec 11 08:35:28 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Dec 2018 08:35:28 +0000 Subject: Invalid character found in the request target on Confluence behind nginx In-Reply-To: References: <20181210231016.lmxdofnqplxuj4ci@daoine.org> Message-ID: <20181211083528.mmze3r4tr342327g@daoine.org> On Tue, Dec 11, 2018 at 03:15:52AM +0000, Dung Tran Tien wrote: Hi there, In this case, I think that there is a straightforward change that can work: > location /confluence { > proxy_pass http://1.1.2.1:8090/confluence; Change that line to just be proxy_pass http://1.1.2.1:8090; f -- Francis Daly francis at daoine.org From dung.trantien at axonactive.com Tue Dec 11 08:39:32 2018 From: dung.trantien at axonactive.com (Dung Tran Tien) Date: Tue, 11 Dec 2018 08:39:32 +0000 Subject: Invalid character found in the request target on Confluence behind nginx In-Reply-To: <20181211083528.mmze3r4tr342327g@daoine.org> References: <20181210231016.lmxdofnqplxuj4ci@daoine.org> <20181211083528.mmze3r4tr342327g@daoine.org> Message-ID: <2a7446dbce6e4297a10a14fa19cc4551@axonactive.com> Hi, The backend must have the context /confluence; without it, the page cannot be loaded and a 404 code is returned.
-----Original Message----- From: nginx On Behalf Of Francis Daly Sent: Tuesday, December 11, 2018 3:35 PM To: nginx at nginx.org Subject: Re: Invalid character found in the request target on Confluence behind nginx On Tue, Dec 11, 2018 at 03:15:52AM +0000, Dung Tran Tien wrote: Hi there, In this case, I think that there is a straightforward change that can work: > location /confluence { > proxy_pass http://1.1.2.1:8090/confluence; Change that line to just be proxy_pass http://1.1.2.1:8090; f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Dec 11 08:43:24 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Dec 2018 08:43:24 +0000 Subject: Invalid character found in the request target on Confluence behind nginx In-Reply-To: <2a7446dbce6e4297a10a14fa19cc4551@axonactive.com> References: <20181210231016.lmxdofnqplxuj4ci@daoine.org> <20181211083528.mmze3r4tr342327g@daoine.org> <2a7446dbce6e4297a10a14fa19cc4551@axonactive.com> Message-ID: <20181211084324.pmbbnjdgtudqxnw3@daoine.org> On Tue, Dec 11, 2018 at 08:39:32AM +0000, Dung Tran Tien wrote: Hi there, > The backend must have context /confluence, without it, the page cannot be load with 404 code. Yes. The request from the client is for /confluence/something. The request from nginx is for /confluence/something. 
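A minimal sketch of the suggested configuration (using the backend address from earlier in the thread):

```nginx
location /confluence {
    # No URI part after the port: nginx forwards the request URI as it
    # was received, so an escaped character such as %3E stays encoded.
    proxy_pass http://1.1.2.1:8090;
}
```

When proxy_pass does include a URI part (such as /confluence), nginx replaces the matched location prefix using the decoded, normalized form of the request URI, which is how a raw '>' can end up in the request line that reaches Tomcat.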
f -- Francis Daly francis at daoine.org From dung.trantien at axonactive.com Tue Dec 11 08:54:23 2018 From: dung.trantien at axonactive.com (Dung Tran Tien) Date: Tue, 11 Dec 2018 08:54:23 +0000 Subject: Invalid character found in the request target on Confluence behind nginx In-Reply-To: <20181211084324.pmbbnjdgtudqxnw3@daoine.org> References: <20181210231016.lmxdofnqplxuj4ci@daoine.org> <20181211083528.mmze3r4tr342327g@daoine.org> <2a7446dbce6e4297a10a14fa19cc4551@axonactive.com> <20181211084324.pmbbnjdgtudqxnw3@daoine.org> Message-ID: <35ddd723734a4516a9b361d6815bba59@axonactive.com> Hi there, Not sure I understand you correctly. If I change it as suggested, like location /confluence { proxy_pass http://1.1.2.1:8090/; then the request from the client is for /confluence/something, while the request from nginx to the backend is for /something, which is not correct. Besides that, my current configuration is from this guide: https://confluence.atlassian.com/confkb/how-to-use-nginx-to-proxy-requests-for-confluence-313459790.html. Yours sincerely, Dung -----Original Message----- From: nginx On Behalf Of Francis Daly Sent: Tuesday, December 11, 2018 3:43 PM To: nginx at nginx.org Subject: Re: Invalid character found in the request target on Confluence behind nginx On Tue, Dec 11, 2018 at 08:39:32AM +0000, Dung Tran Tien wrote: Hi there, > The backend must have context /confluence, without it, the page cannot be load with 404 code. Yes. The request from the client is for /confluence/something. The request from nginx is for /confluence/something. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Tue Dec 11 08:59:22 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Dec 2018 08:59:22 +0000 Subject: Invalid character found in the request target on Confluence behind nginx In-Reply-To: <35ddd723734a4516a9b361d6815bba59@axonactive.com> References: <20181210231016.lmxdofnqplxuj4ci@daoine.org> <20181211083528.mmze3r4tr342327g@daoine.org> <2a7446dbce6e4297a10a14fa19cc4551@axonactive.com> <20181211084324.pmbbnjdgtudqxnw3@daoine.org> <35ddd723734a4516a9b361d6815bba59@axonactive.com> Message-ID: <20181211085922.55exzp5l3zvaicmv@daoine.org> On Tue, Dec 11, 2018 at 08:54:23AM +0000, Dung Tran Tien wrote: Hi there, > Not sure I understand you correctly. If change as suggest like > > location /confluence { > > proxy_pass http://1.1.2.1:8090/; Ah, there's the problem. That's not what I suggested. You have one "/" in your config that should not be there. f -- Francis Daly francis at daoine.org From dung.trantien at axonactive.com Tue Dec 11 11:14:13 2018 From: dung.trantien at axonactive.com (Dung Tran Tien) Date: Tue, 11 Dec 2018 11:14:13 +0000 Subject: Invalid character found in the request target on Confluence behind nginx In-Reply-To: <20181211085922.55exzp5l3zvaicmv@daoine.org> References: <20181210231016.lmxdofnqplxuj4ci@daoine.org> <20181211083528.mmze3r4tr342327g@daoine.org> <2a7446dbce6e4297a10a14fa19cc4551@axonactive.com> <20181211084324.pmbbnjdgtudqxnw3@daoine.org> <35ddd723734a4516a9b361d6815bba59@axonactive.com> <20181211085922.55exzp5l3zvaicmv@daoine.org> Message-ID: Hi there, It works with your suggestion. Thank you for your help. I'm just confused about why those pages with the invalid character worked in the past with my old nginx config. That's strange. Regards, Dung.
-----Original Message----- From: nginx On Behalf Of Francis Daly Sent: Tuesday, December 11, 2018 3:59 PM To: nginx at nginx.org Subject: Re: Invalid character found in the request target on Confluence behind nginx On Tue, Dec 11, 2018 at 08:54:23AM +0000, Dung Tran Tien wrote: Hi there, > Not sure I understand you correctly. If change as suggest like > > location /confluence { > > proxy_pass http://1.1.2.1:8090/; Ah, there's the problem. That's not what I suggested. You have one "/" in your config that should not be there. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From ottavio at campana.vi.it Wed Dec 12 09:48:25 2018 From: ottavio at campana.vi.it (Ottavio Campana) Date: Wed, 12 Dec 2018 10:48:25 +0100 Subject: Transfering a fd to a process to get better performance than with mod_proxy Message-ID: Hello, I have the current scenario: nginx proxies a process that runs internally and is bound to the loopback interface. Everything works, but I am facing performance issues, because the processor is old and slow. I would like to skip the proxy and the read/write operations performed by mod_proxy and to pass the file descriptor associated to the socket to the internal process, in order to unload the mod_proxy. Is there already a mechanism in nginx to do this? If not, shall I develop a custom module with a dedicated handler or do you think that there is a smarter way to do it? Thank you, Ottavio -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Dec 12 10:01:17 2018 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 12 Dec 2018 15:01:17 +0500 Subject: rewrite nginx ! 
Message-ID: Hi, Need help on constructing the rewrite url in nginx, need to remove last two parts of uri as stated in example below: https://domain.com/videos/32/cooking/most_recent/last_year/ to https://domain.com/videos/32/cooking https://domain.com/videos/17/tv-showbiz/most_recent/today/ to https://domain.com/videos/17/tv-showbiz Actually we need to match uri as below, help will be highly appreciated : /videos/[ID]/[STRING]/most_recent/[last_year|all_time|videos|this_year|last_week|last_month|today] -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Dec 12 11:28:17 2018 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 12 Dec 2018 16:28:17 +0500 Subject: rewrite nginx ! In-Reply-To: References: Message-ID: Ok, this worked : rewrite ^/videos/([0-9]+)/(.*)/(.*)/(.*)/([0-9]+)? https://domain.com/videos/$1/$2 last; However, if there's any better rule, please let me know. Regards. On Wed, Dec 12, 2018 at 3:01 PM shahzaib mushtaq wrote: > Hi, > > Need help on constructing the rewrite url in nginx, need to remove last > two parts of uri as stated in example below: > > https://domain.com/videos/32/cooking/most_recent/last_year/ > to > https://domain.com/videos/32/cooking > > > https://domain.com/videos/17/tv-showbiz/most_recent/today/ > to > https://domain.com/videos/17/tv-showbiz > > Actually we need to match uri as below, help will be highly appreciated : > > > /videos/[ID]/[STRING]/most_recent/[last_year|all_time|videos|this_year|last_week|last_month|today] > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Dec 12 17:09:33 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 12 Dec 2018 20:09:33 +0300 Subject: How to build NodeJS support without an Internet connection? 
In-Reply-To: <2279904.R71bQZdoZm@vbart-laptop> References: <87sgz6yfyz.fsf@ra.horus-it.com> <2279904.R71bQZdoZm@vbart-laptop> Message-ID: <4891609.RCAK4cHBTs@vbart-workstation> On Sunday 09 December 2018 21:14:14 Valentin V. Bartenev wrote: > On Sunday, 9 December 2018 16:41:56 MSK Ralph Seichter wrote: > > Hello developer team. > > > > I am the maintainer of the NGINX Unit ebuild for Gentoo Linux, and > > currently I am struggling with colliding requirements of Unit's and > > Gentoo's build strategies. > > > > As you may know, the Gentoo way is to build everything from the source > > code, with only very few exceptions. If, for example, the user desires > > Python support in Unit, my ebuild ensures that this requirement is met > > before Unit's release tarball is even unpacked. > > > > NodeJS support is a different beast, because the existing Unit build > > method relies on node-gyp and npm to download and install dependencies > > during the build process. This fails, because Gentoo builds are executed > > in a sandbox environment that prohibits both network access and files > > being written outside the sandbox. Hence, "npm install --global ..." > > won't do. > > > > Is there a way I can modify the existing Unit build to comply with these > > restrictions? Everything required for the build must either already > > exist, or it must be downloaded by Gentoo before Unit's compile phase. > > Also, all required artefacts are recorded with a checksum when an ebuild > > is added to the Gentoo tree, so no "download whatever is the most recent > > version of XYZ". > > > > Your help is appreciated. > > Hi Ralph, > > Thank you for your effort. > > All the Node.js module dependencies are set in its package.json file: > http://hg.nginx.org/unit/file/tip/src/nodejs/unit-http/package.json > > There's just one: > > "dependencies": { > "node-addon-api": "1.2.0" > } > > and I'm not sure that it's needed at all, especially for Gentoo. > > I think you can just remove it. 
> > See also this overlay for some ideas: > https://github.com/msva/mva-overlay/tree/master/www-servers/nginx-unit > [..] JFYI, the surplus dependency has been removed: http://hg.nginx.org/unit/rev/fd323ad9e24f So, now the Node.js module has no dependencies and npm doesn't download anything during installation. wbr, Valentin V. Bartenev From m16+nginx at monksofcool.net Wed Dec 12 18:11:59 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Wed, 12 Dec 2018 19:11:59 +0100 Subject: How to build NodeJS support without an Internet connection? In-Reply-To: <4891609.RCAK4cHBTs@vbart-workstation> References: <87sgz6yfyz.fsf@ra.horus-it.com> <2279904.R71bQZdoZm@vbart-laptop> <4891609.RCAK4cHBTs@vbart-workstation> Message-ID: <87tvjiy5qo.fsf@ra.horus-it.com> * Valentin V. Bartenev: > http://hg.nginx.org/unit/rev/fd323ad9e24f That looks promising, Valentin. I'll try a build as soon as I'm able to. Would you perhaps consider releasing this as version 1.6.1, so there's an official tarball I can use during the build process? -Ralph From vbart at nginx.com Wed Dec 12 18:24:45 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 12 Dec 2018 21:24:45 +0300 Subject: How to build NodeJS support without an Internet connection? In-Reply-To: <87tvjiy5qo.fsf@ra.horus-it.com> References: <87sgz6yfyz.fsf@ra.horus-it.com> <4891609.RCAK4cHBTs@vbart-workstation> <87tvjiy5qo.fsf@ra.horus-it.com> Message-ID: <2766743.x5PuytNSLF@vbart-workstation> On Wednesday 12 December 2018 19:11:59 Ralph Seichter wrote: > * Valentin V. Bartenev: > > > http://hg.nginx.org/unit/rev/fd323ad9e24f > > That looks promising, Valentin. I'll try a build as soon as I'm able > to. Would you perhaps consider releasing this as version 1.6.1, so > there's an official tarball I can use during the build process? > [..] You can either wait till 20 of December when the 1.7 release is planned, or apply this change on 1.6 sources during the build process. wbr, Valentin V. 
Bartenev From postmaster at palvelin.fi Thu Dec 13 09:17:03 2018 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Thu, 13 Dec 2018 11:17:03 +0200 Subject: FastCGI cache file has too long header Message-ID: Hi, I run nginx/1.14.0 (Ubuntu 18.04) and PHP-FPM 7.2. I get such occasional (daily) errors about too long cache file headers: 2018/12/13 10:39:49 [crit] 1537#1537: *760972 cache file "/var/lib/nginx/fastcgi/mydomain-fi/d/be/7a11ac32c28dc9f8c3d7da12fe3d6bed" has too long header, client: XXX.XXX.XXX.XXX, server: mydomain.fi, request: "GET /kotitreeni/ HTTP/1.1", host: ?mydomain.fi" Can someone offer any insight as to why this is happening, what I could do to prevent it or how I could further analyze the cause? Here are a couple examples of headers from such cache files _after_ the error log entry (possibly redundant, but anyhow): ?(\?\?\??x?""a652b989092efd137f04e13cad4af4b3"Accept-Encoding?, ?#??3`??u??Zt KEY: httpsGETmydomain.fi/kotitreeni/ 3Content-Type: text/html; charset=UTF-8 Last-Modified: Thu, 13 Dec 2018 08:39:31 GMT Expires: Thu, 13 Dec 2018 09:39:31 GMT Pragma: public Cache-Control: max-age=3582, public ETag: "a652b989092efd137f04e13cad4af4b3" X-Powered-By: W3 Total Cache/0.9.7 Content-Encoding: gzip Vary: Accept-Encoding {\?\k \\???s?""4fca3f56d337d2057ce73a6dd5712d80"Accept-Encoding??L?????p ??(,f KEY: httpsGETmydomain.fi/ruoka/ 3Content-Type: text/html; charset=UTF-8 Last-Modified: Thu, 13 Dec 2018 07:02:19 GMT Expires: Thu, 13 Dec 2018 08:02:19 GMT Pragma: public Cache-Control: max-age=1696, public ETag: "4fca3f56d337d2057ce73a6dd5712d80" X-Powered-By: W3 Total Cache/0.9.7 Content-Encoding: gzip Vary: Accept-Encoding From peter.wright at icmcapital.co.uk Thu Dec 13 09:17:33 2018 From: peter.wright at icmcapital.co.uk (peter.wright at icmcapital.co.uk) Date: Thu, 13 Dec 2018 09:17:33 +0000 (GMT) Subject: no longer with the company Message-ID: <20181213091733.251039A0475@euk-90517.eukservers.com> peter wright is no longer with the company 
From mdounin at mdounin.ru Thu Dec 13 14:31:13 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Dec 2018 17:31:13 +0300 Subject: FastCGI cache file has too long header In-Reply-To: References: Message-ID: <20181213143113.GK99070@mdounin.ru> Hello! On Thu, Dec 13, 2018 at 11:17:03AM +0200, Palvelin Postmaster via nginx wrote: > Hi, > > I run nginx/1.14.0 (Ubuntu 18.04) and PHP-FPM 7.2. I get such occasional (daily) errors about too long cache file headers: > > 2018/12/13 10:39:49 [crit] 1537#1537: *760972 cache file "/var/lib/nginx/fastcgi/mydomain-fi/d/be/7a11ac32c28dc9f8c3d7da12fe3d6bed" has too long header, client: XXX.XXX.XXX.XXX, server: mydomain.fi, request: "GET /kotitreeni/ HTTP/1.1", host: ?mydomain.fi" > > Can someone offer any insight as to why this is happening, what I could do to prevent it or how I could further analyze the cause? > > Here are a couple examples of headers from such cache files _after_ the error log entry (possibly redundant, but anyhow): > >  >  > Expires: Thu, 13 Dec 2018 09:39:31 GMT > Pragma: public > Cache-Control: max-age=3582, public > ETag: "a652b989092efd137f04e13cad4af4b3" > X-Powered-By: W3 Total Cache/0.9.7 > Content-Encoding: gzip > Vary: Accept-Encoding > > >  >  > Expires: Thu, 13 Dec 2018 08:02:19 GMT > Pragma: public > Cache-Control: max-age=1696, public > ETag: "4fca3f56d337d2057ce73a6dd5712d80" > X-Powered-By: W3 Total Cache/0.9.7 > Content-Encoding: gzip > Vary: Accept-Encoding "Vary" may indicate that you are hitting the bug with multiple variants. Could you please try the patch from http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010774.html to see if it helps? 
-- Maxim Dounin http://mdounin.ru/ From postmaster at palvelin.fi Thu Dec 13 15:44:08 2018 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Thu, 13 Dec 2018 17:44:08 +0200 Subject: FastCGI cache file has too long header In-Reply-To: <20181213143113.GK99070@mdounin.ru> References: <20181213143113.GK99070@mdounin.ru> Message-ID: <28FAF88E-946F-4AAE-9E19-AA160212DEC1@palvelin.fi> > On 13 Dec 2018, at 16:31, Maxim Dounin wrote: > > Hello! > > On Thu, Dec 13, 2018 at 11:17:03AM +0200, Palvelin Postmaster via nginx wrote: > >> Hi, >> >> I run nginx/1.14.0 (Ubuntu 18.04) and PHP-FPM 7.2. I get such occasional (daily) errors about too long cache file headers: >> >> 2018/12/13 10:39:49 [crit] 1537#1537: *760972 cache file "/var/lib/nginx/fastcgi/mydomain-fi/d/be/7a11ac32c28dc9f8c3d7da12fe3d6bed" has too long header, client: XXX.XXX.XXX.XXX, server: mydomain.fi, request: "GET /kotitreeni/ HTTP/1.1", host: ?mydomain.fi" >> >> Can someone offer any insight as to why this is happening, what I could do to prevent it or how I could further analyze the cause? >> >> Here are a couple examples of headers from such cache files _after_ the error log entry (possibly redundant, but anyhow): >> >>  >>  >> Expires: Thu, 13 Dec 2018 09:39:31 GMT >> Pragma: public >> Cache-Control: max-age=3582, public >> ETag: "a652b989092efd137f04e13cad4af4b3" >> X-Powered-By: W3 Total Cache/0.9.7 >> Content-Encoding: gzip >> Vary: Accept-Encoding >> >> >>  >>  >> Expires: Thu, 13 Dec 2018 08:02:19 GMT >> Pragma: public >> Cache-Control: max-age=1696, public >> ETag: "4fca3f56d337d2057ce73a6dd5712d80" >> X-Powered-By: W3 Total Cache/0.9.7 >> Content-Encoding: gzip >> Vary: Accept-Encoding > > "Vary" may indicate that you are hitting the bug with multiple > variants. Could you please try the patch from > > http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010774.html > > to see if it helps? I?m afraid patching source and rebuilding is over my head. 
:( Can I presume that if it's a known bug, it will eventually get fixed? -- Palvelin.fi Hostmaster postmaster at palvelin.fi From nginx-forum at forum.nginx.org Fri Dec 14 02:19:56 2018 From: nginx-forum at forum.nginx.org (arnabmaity1) Date: Thu, 13 Dec 2018 21:19:56 -0500 Subject: ssl3_get_client_hello:no shared cipher Message-ID: Hello We have been having this strange issue. The first time a user attempts to log in to the application, the login fails and we see this error in the nginx log. The second time the user attempts, the login is successful. Again, if the browser is closed and another login attempt is made, we find the same error. 2018/12/13 15:35:11 [info] 4337#0: *102 SSL_do_handshake() failed (SSL: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking, client: , server: 0.0.0.0:443 Please suggest a possible reason and any fix for this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282382,282382#msg-282382 From cyflhn at 163.com Fri Dec 14 03:17:29 2018 From: cyflhn at 163.com (yf chu) Date: Fri, 14 Dec 2018 11:17:29 +0800 (CST) Subject: The problem of "client prematurely closed connection while processing HTTP/2 connection" Message-ID: <6295bab.964e.167aab8138a.Coremail.cyflhn@163.com> I have recently been troubled by complaints from some of our customers that a web page often could not be opened because of a connection reset. The issue does not occur very frequently, but it is bothersome, and it often occurs after a POST request is sent to the server. I looked into the issue and checked the nginx logs: error.log often shows "client prematurely closed connection while processing HTTP/2 connection". When I log on to our customer's computer, the issue is hard to reproduce. I know it has something to do with the network; nginx could not read more data from the socket. But how should I find the reason for this problem? 
What is the reason for the TCP connection reset? The nginx version I use is 1.12.2. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Fri Dec 14 05:05:59 2018 From: lists at lazygranch.com (Gary) Date: Thu, 13 Dec 2018 21:05:59 -0800 Subject: ssl3_get_client_hello:no shared cipher In-Reply-To: Message-ID: <01qfs5n92csh7u9tfb96m3nh.1544763959657@lazygranch.com> On the second attempt, is the connection on port 443? Have you set up HSTS? Maybe you can pastebin your conf file, sanitizing as appropriate. -- Original Message -- From: nginx-forum at forum.nginx.org Sent: December 13, 2018 6:20 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: ssl3_get_client_hello:no shared cipher Hello We have been having this strange issue. The first time a user attempts to log in to the application, the login fails and we see this error in the nginx log. The second time the user attempts, the login is successful. Again, if the browser is closed and another login attempt is made, we find the same error. 2018/12/13 15:35:11 [info] 4337#0: *102 SSL_do_handshake() failed (SSL: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking, client: , server: 0.0.0.0:443 Please suggest a possible reason and any fix for this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282382,282382#msg-282382 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-ml at davepedu.com Fri Dec 14 05:16:12 2018 From: nginx-ml at davepedu.com (Dave Pedu) Date: Thu, 13 Dec 2018 21:16:12 -0800 Subject: Nginx not enforcing default client_max_body_size ? Message-ID: <4939fb0a329272eda4eb070bcea109a5@mail1.dpedu.io> Hello, I came across some nginx behavior that seems odd to me. 
In my config, I have this server block: server { server_name subdomain.somehostname.com listen 443 ssl; ssl_certificate "/some/file.crt"; ssl_certificate_key "/some/other/file.key"; ssl_protocols ssl_ciphers return 307 https://anothersubdomain.somehostname.com$request_uri; } I'm using a 307 redirect to cause clients to retry their original request at the redirected destination, particularly for file uploads. With the above configuration, client requests regardless of post size - even larger than the default client_max_body_size - are redirected. For example, a 6MB file upload: $ curl -v --data-binary "@5mbRandomData.bin" 'https://subdomain.somehostname.com/upload' ... > POST /upload HTTP/1.1 ... > User-Agent: curl/7.54.0 > Content-Length: 6161400 > Content-Type: application/x-www-form-urlencoded > Expect: 100-continue > < HTTP/1.1 100 Continue < HTTP/1.1 307 Temporary Redirect < Server: nginx/1.12.2 < Location: https://anothersubdomain.somehostname.com/upload ... However, when I place the "return" line within a location block as shown here: server { server_name subdomain.somehostname.com listen 443 ssl; ssl_certificate "/some/file.crt"; ssl_certificate_key "/some/other/file.key"; ssl_protocols ssl_ciphers location / { return 307 https://anothersubdomain.somehostname.com$request_uri; } } ...then clients posting larger than the default client_max_body_size are sent an error instead. Again, with a 6MB upload: $ curl -v --data-binary "@5mbRandomData.bin" 'https://subdomain.somehostname.com/upload' > POST /upload HTTP/1.1 ... > User-Agent: curl/7.54.0 > Content-Length: 6161400 > Content-Type: application/x-www-form-urlencoded > Expect: 100-continue > < HTTP/1.1 413 Request Entity Too Large < Server: nginx/1.12.2 Which seems like correct behavior in contrast to the first example since client_max_body_size must be set to 0 to allow unlimited sized uploads, and the default value is 1m. 
I didn't see anything in the documentation about selective application of the body size limit. Is this a bug? I have client_max_body_size set to 500mb in a *different* server block, but the behavior above holds true in any size request I tried, which was as large as: Content-Length: 10485760000 I am using nginx 1.12.2. Thanks Dave From mdounin at mdounin.ru Fri Dec 14 14:01:49 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Dec 2018 17:01:49 +0300 Subject: FastCGI cache file has too long header In-Reply-To: <28FAF88E-946F-4AAE-9E19-AA160212DEC1@palvelin.fi> References: <20181213143113.GK99070@mdounin.ru> <28FAF88E-946F-4AAE-9E19-AA160212DEC1@palvelin.fi> Message-ID: <20181214140149.GM99070@mdounin.ru> Hello! On Thu, Dec 13, 2018 at 05:44:08PM +0200, Palvelin Postmaster via nginx wrote: > > > > On 13 Dec 2018, at 16:31, Maxim Dounin wrote: > > > > Hello! > > > > On Thu, Dec 13, 2018 at 11:17:03AM +0200, Palvelin Postmaster via nginx wrote: > > > >> Hi, > >> > >> I run nginx/1.14.0 (Ubuntu 18.04) and PHP-FPM 7.2. I get such occasional (daily) errors about too long cache file headers: > >> > >> 2018/12/13 10:39:49 [crit] 1537#1537: *760972 cache file "/var/lib/nginx/fastcgi/mydomain-fi/d/be/7a11ac32c28dc9f8c3d7da12fe3d6bed" has too long header, client: XXX.XXX.XXX.XXX, server: mydomain.fi, request: "GET /kotitreeni/ HTTP/1.1", host: ?mydomain.fi" > >> > >> Can someone offer any insight as to why this is happening, what I could do to prevent it or how I could further analyze the cause? 
> >> > >> Here are a couple examples of headers from such cache files _after_ the error log entry (possibly redundant, but anyhow): > >> > >>  > >>  > >> Expires: Thu, 13 Dec 2018 09:39:31 GMT > >> Pragma: public > >> Cache-Control: max-age=3582, public > >> ETag: "a652b989092efd137f04e13cad4af4b3" > >> X-Powered-By: W3 Total Cache/0.9.7 > >> Content-Encoding: gzip > >> Vary: Accept-Encoding > >> > >> > >>  > >>  > >> Expires: Thu, 13 Dec 2018 08:02:19 GMT > >> Pragma: public > >> Cache-Control: max-age=1696, public > >> ETag: "4fca3f56d337d2057ce73a6dd5712d80" > >> X-Powered-By: W3 Total Cache/0.9.7 > >> Content-Encoding: gzip > >> Vary: Accept-Encoding > > > > "Vary" may indicate that you are hitting the bug with multiple > > variants. Could you please try the patch from > > > > http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010774.html > > > > to see if it helps? > > I?m afraid patching source and rebuilding is over my head. :( > > Can I presume if it?s a known bug, it will eventually get fixed? Well, eventually. But, as you can see, the patch was made almost a year ago, and still not committed due to lack of testing from affected users. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Fri Dec 14 14:42:18 2018 From: nginx-forum at forum.nginx.org (arnabmaity1) Date: Fri, 14 Dec 2018 09:42:18 -0500 Subject: ssl3_get_client_hello:no shared cipher In-Reply-To: <01qfs5n92csh7u9tfb96m3nh.1544763959657@lazygranch.com> References: <01qfs5n92csh7u9tfb96m3nh.1544763959657@lazygranch.com> Message-ID: Hi I am pasting the current conf file. Please review and suggest ; all connections are through port 443. 
server { listen 443 http2 ssl; listen [::]:443 http2 ssl; server_name ; root /usr/share/nginx/html/Bank/; ssl_certificate //.crt; ssl_certificate_key //private.key; #ssl_dhparam /etc/ssl/certs/dhparam.pem; ssl_protocols TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; ssl_ecdh_curve secp384r1; ssl_session_cache shared:SSL:10m; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 5s; # Disable preloading HSTS for now. You can use the commented out header line that includes # the "preload" directive if you understand the implications. #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"; add_header Strict-Transport-Security "max-age=63072000; includeSubdomains"; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; underscores_in_headers on; error_log /var/log/nginx/error.log debug; location // { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_pass_request_headers on ; proxy_cookie_path / "/; secure; HttpOnly; SameSite=lax"; proxy_pass http://:8080/; sendfile off; expires 0; add_header Cache-Control private; add_header Cache-Control no-store; add_header Cache-Control no-cache; add_header Strict-Transport-Security "max-age=63072000; includeSubdomains"; index index.html index.htm; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282382,282389#msg-282389 From mdounin at mdounin.ru Fri Dec 14 15:34:59 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Dec 2018 18:34:59 +0300 Subject: Nginx not enforcing default client_max_body_size ? In-Reply-To: <4939fb0a329272eda4eb070bcea109a5@mail1.dpedu.io> References: <4939fb0a329272eda4eb070bcea109a5@mail1.dpedu.io> Message-ID: <20181214153459.GO99070@mdounin.ru> Hello! 
On Thu, Dec 13, 2018 at 09:16:12PM -0800, Dave Pedu wrote: > Hello, > > I came across some nginx behavior that seems odd to me. In my config, I > have this server block: > > > server { > server_name subdomain.somehostname.com > listen 443 ssl; > ssl_certificate "/some/file.crt"; > ssl_certificate_key "/some/other/file.key"; > ssl_protocols > ssl_ciphers > return 307 https://anothersubdomain.somehostname.com$request_uri; > } > > > I'm using a 307 redirect to cause clients to retry their original > request at the redirected destination, particularly for file uploads. > With the above configuration, client requests regardless of post size - > even larger than the default client_max_body_size - are redirected. For > example, a 6MB file upload: > > > $ curl -v --data-binary "@5mbRandomData.bin" > 'https://subdomain.somehostname.com/upload' > ... > > POST /upload HTTP/1.1 > ... > > User-Agent: curl/7.54.0 > > Content-Length: 6161400 > > Content-Type: application/x-www-form-urlencoded > > Expect: 100-continue > > > < HTTP/1.1 100 Continue > < HTTP/1.1 307 Temporary Redirect > < Server: nginx/1.12.2 > < Location: https://anothersubdomain.somehostname.com/upload > ... > > > However, when I place the "return" line within a location block as shown > here: > > > server { > server_name subdomain.somehostname.com > listen 443 ssl; > ssl_certificate "/some/file.crt"; > ssl_certificate_key "/some/other/file.key"; > ssl_protocols > ssl_ciphers > location / { > return 307 > https://anothersubdomain.somehostname.com$request_uri; > } > } > > > ...then clients posting larger than the default client_max_body_size are > sent an error instead. Again, with a 6MB upload: > > > $ curl -v --data-binary "@5mbRandomData.bin" > 'https://subdomain.somehostname.com/upload' > > POST /upload HTTP/1.1 > ... 
> > User-Agent: curl/7.54.0 > > Content-Length: 6161400 > > Content-Type: application/x-www-form-urlencoded > > Expect: 100-continue > > > < HTTP/1.1 413 Request Entity Too Large > < Server: nginx/1.12.2 > > > Which seems like correct behavior in contrast to the first example since > client_max_body_size must be set to 0 to allow unlimited sized uploads, > and the default value is 1m. I didn't see anything in the documentation > about selective application of the body size limit. Is this a bug? The client_max_body_size limit is only enforced when nginx selects a location (or when reading the body if Content-Length is not known in advance). This is because different limits can be configured in different locations, so a configuration like location / { client_max_body_size 1m; ... } location = /upload.cgi { client_max_body_size 100m; ... } will properly allow uploading of large files via "/upload.cgi", but will restrict maximum request body size on other requests. As such, client_max_body_size is only enforced when nginx chooses some location configuration to work with. And in your first configuration the request is answered during processing server rewrites, before nginx has a chance to select a location. This is not really important though, since nginx does not try read a request body in such a case. Rather, it will discard anything - much like it will do when returning an error anyway. -- Maxim Dounin http://mdounin.ru/ From nginx-ml at davepedu.com Fri Dec 14 17:24:04 2018 From: nginx-ml at davepedu.com (Dave Pedu) Date: Fri, 14 Dec 2018 09:24:04 -0800 Subject: Nginx not enforcing default client_max_body_size ? In-Reply-To: <20181214153459.GO99070@mdounin.ru> References: <4939fb0a329272eda4eb070bcea109a5@mail1.dpedu.io> <20181214153459.GO99070@mdounin.ru> Message-ID: <551f3be53f390c78c52c69af0ab8c647@mail1.dpedu.io> Hello, On 2018-12-14 07:34, Maxim Dounin wrote: > Hello! 
> > On Thu, Dec 13, 2018 at 09:16:12PM -0800, Dave Pedu wrote: > >> Hello, >> >> I came across some nginx behavior that seems odd to me. In my config, >> I >> have this server block: >> >> >> server { >> server_name subdomain.somehostname.com >> listen 443 ssl; >> ssl_certificate "/some/file.crt"; >> ssl_certificate_key "/some/other/file.key"; >> ssl_protocols >> ssl_ciphers >> return 307 >> https://anothersubdomain.somehostname.com$request_uri; >> } >> >> >> I'm using a 307 redirect to cause clients to retry their original >> request at the redirected destination, particularly for file uploads. >> With the above configuration, client requests regardless of post size >> - >> even larger than the default client_max_body_size - are redirected. >> For >> example, a 6MB file upload: >> >> >> $ curl -v --data-binary "@5mbRandomData.bin" >> 'https://subdomain.somehostname.com/upload' >> ... >> > POST /upload HTTP/1.1 >> ... >> > User-Agent: curl/7.54.0 >> > Content-Length: 6161400 >> > Content-Type: application/x-www-form-urlencoded >> > Expect: 100-continue >> > >> < HTTP/1.1 100 Continue >> < HTTP/1.1 307 Temporary Redirect >> < Server: nginx/1.12.2 >> < Location: https://anothersubdomain.somehostname.com/upload >> ... >> >> >> However, when I place the "return" line within a location block as >> shown >> here: >> >> >> server { >> server_name subdomain.somehostname.com >> listen 443 ssl; >> ssl_certificate "/some/file.crt"; >> ssl_certificate_key "/some/other/file.key"; >> ssl_protocols >> ssl_ciphers >> location / { >> return 307 >> https://anothersubdomain.somehostname.com$request_uri; >> } >> } >> >> >> ...then clients posting larger than the default client_max_body_size >> are >> sent an error instead. Again, with a 6MB upload: >> >> >> $ curl -v --data-binary "@5mbRandomData.bin" >> 'https://subdomain.somehostname.com/upload' >> > POST /upload HTTP/1.1 >> ... 
>> > User-Agent: curl/7.54.0 >> > Content-Length: 6161400 >> > Content-Type: application/x-www-form-urlencoded >> > Expect: 100-continue >> > >> < HTTP/1.1 413 Request Entity Too Large >> < Server: nginx/1.12.2 >> >> >> Which seems like correct behavior in contrast to the first example >> since >> client_max_body_size must be set to 0 to allow unlimited sized >> uploads, >> and the default value is 1m. I didn't see anything in the >> documentation >> about selective application of the body size limit. Is this a bug? > > The client_max_body_size limit is only enforced when nginx selects > a location (or when reading the body if Content-Length is not > known in advance). This is because different limits can be > configured in different locations, so a configuration like > > location / { > client_max_body_size 1m; > ... > } > > location = /upload.cgi { > client_max_body_size 100m; > ... > } > > will properly allow uploading of large files via "/upload.cgi", > but will restrict maximum request body size on other requests. > > As such, client_max_body_size is only enforced when nginx chooses > some location configuration to work with. And in your first > configuration the request is answered during processing server > rewrites, before nginx has a chance to select a location. > > This is not really important though, since nginx does not try > read a request body in such a case. Rather, it will discard > anything - much like it will do when returning an error anyway. That makes sense - I appreciate your reply, Maxim. Is there an area of the documentation that describes this selective enforcement when a location block is not selected? I would like to determine what other options are handled similarly. Looking at the description of client_max_body_size here [1], the language is quite clear that the setting's value is compared to Content-Length, a header that's present in both situations above, hence my confusion. Thanks! 
Dave [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size From mdounin at mdounin.ru Fri Dec 14 17:57:40 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Dec 2018 20:57:40 +0300 Subject: Nginx not enforcing default client_max_body_size ? In-Reply-To: <551f3be53f390c78c52c69af0ab8c647@mail1.dpedu.io> References: <4939fb0a329272eda4eb070bcea109a5@mail1.dpedu.io> <20181214153459.GO99070@mdounin.ru> <551f3be53f390c78c52c69af0ab8c647@mail1.dpedu.io> Message-ID: <20181214175740.GR99070@mdounin.ru> Hello! On Fri, Dec 14, 2018 at 09:24:04AM -0800, Dave Pedu wrote: > Hello, > > > On 2018-12-14 07:34, Maxim Dounin wrote: > > Hello! > > > > On Thu, Dec 13, 2018 at 09:16:12PM -0800, Dave Pedu wrote: > > > >> Hello, > >> > >> I came across some nginx behavior that seems odd to me. In my config, > >> I > >> have this server block: > >> > >> > >> server { > >> server_name subdomain.somehostname.com > >> listen 443 ssl; > >> ssl_certificate "/some/file.crt"; > >> ssl_certificate_key "/some/other/file.key"; > >> ssl_protocols > >> ssl_ciphers > >> return 307 > >> https://anothersubdomain.somehostname.com$request_uri; > >> } > >> > >> > >> I'm using a 307 redirect to cause clients to retry their original > >> request at the redirected destination, particularly for file uploads. > >> With the above configuration, client requests regardless of post size > >> - > >> even larger than the default client_max_body_size - are redirected. > >> For > >> example, a 6MB file upload: > >> > >> > >> $ curl -v --data-binary "@5mbRandomData.bin" > >> 'https://subdomain.somehostname.com/upload' > >> ... > >> > POST /upload HTTP/1.1 > >> ... > >> > User-Agent: curl/7.54.0 > >> > Content-Length: 6161400 > >> > Content-Type: application/x-www-form-urlencoded > >> > Expect: 100-continue > >> > > >> < HTTP/1.1 100 Continue > >> < HTTP/1.1 307 Temporary Redirect > >> < Server: nginx/1.12.2 > >> < Location: https://anothersubdomain.somehostname.com/upload > >> ... 
> >> > >> > >> However, when I place the "return" line within a location block as > >> shown > >> here: > >> > >> > >> server { > >> server_name subdomain.somehostname.com > >> listen 443 ssl; > >> ssl_certificate "/some/file.crt"; > >> ssl_certificate_key "/some/other/file.key"; > >> ssl_protocols > >> ssl_ciphers > >> location / { > >> return 307 > >> https://anothersubdomain.somehostname.com$request_uri; > >> } > >> } > >> > >> > >> ...then clients posting larger than the default client_max_body_size > >> are > >> sent an error instead. Again, with a 6MB upload: > >> > >> > >> $ curl -v --data-binary "@5mbRandomData.bin" > >> 'https://subdomain.somehostname.com/upload' > >> > POST /upload HTTP/1.1 > >> ... > >> > User-Agent: curl/7.54.0 > >> > Content-Length: 6161400 > >> > Content-Type: application/x-www-form-urlencoded > >> > Expect: 100-continue > >> > > >> < HTTP/1.1 413 Request Entity Too Large > >> < Server: nginx/1.12.2 > >> > >> > >> Which seems like correct behavior in contrast to the first example > >> since > >> client_max_body_size must be set to 0 to allow unlimited sized > >> uploads, > >> and the default value is 1m. I didn't see anything in the > >> documentation > >> about selective application of the body size limit. Is this a bug? > > > > The client_max_body_size limit is only enforced when nginx selects > > a location (or when reading the body if Content-Length is not > > known in advance). This is because different limits can be > > configured in different locations, so a configuration like > > > > location / { > > client_max_body_size 1m; > > ... > > } > > > > location = /upload.cgi { > > client_max_body_size 100m; > > ... > > } > > > > will properly allow uploading of large files via "/upload.cgi", > > but will restrict maximum request body size on other requests. > > > > As such, client_max_body_size is only enforced when nginx chooses > > some location configuration to work with. 
And in your first > > configuration the request is answered during processing server > > rewrites, before nginx has a chance to select a location. > > > > This is not really important though, since nginx does not try > > read a request body in such a case. Rather, it will discard > > anything - much like it will do when returning an error anyway. > > > That makes sense - I appreciate your reply, Maxim. Is there an area of > the documentation that describes this selective enforcement when a > location block is not selected? I would like to determine what other > options are handled similarly. I don't think it is documented anywhere, as this is mostly an implementation detail. -- Maxim Dounin http://mdounin.ru/ From thehunmonkgroup at gmail.com Sun Dec 16 22:45:56 2018 From: thehunmonkgroup at gmail.com (Chad Phillips) Date: Sun, 16 Dec 2018 14:45:56 -0800 Subject: Disable proxy buffering for websockets Message-ID: I use software that runs a speed test via websockets. When proxying this websocket connection through Nginx, the 'download' portion of the test is inaccurate. My theory is that this is due to Nginx buffering the response from the backend server, thus the timer on the backend server reports an inaccurate value compared to when it's not proxied. I've tried the following settings at both the location and server levels of my configuration: proxy_buffering off; proxy_ignore_headers X-Accel-Buffering; However, this doesn't fix the problem. I've confirmed the functionality works correctly when it's not being proxied via Nginx, so wondering if A) there is some other cause of the issue besides the proxy buffer, or B) I'm not using the proxy buffer settings correctly? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From info at zane.it Mon Dec 17 00:10:02 2018 From: info at zane.it (Gianluigi Zanettini) Date: Mon, 17 Dec 2018 01:10:02 +0100 Subject: Fwd: server_name for localhost actually matches .localhost In-Reply-To: References: Message-ID: Hi guys, I'm testing my config on localhost with nginx/1.15.7 before deploying. Since I only want to handle specific hostnames, I setup a catch-default: server { listen 443 ssl default_server; ... return 400; } Then I setup my real config: server { server_name localhost; root /usr/share/nginx/html; listen 443 ssl http2; listen [::]:443 ssl http2; ... } My /etc/hosts: 127.0.0.1 localhost test.localhost bogus.com It works almost as expected: if I open https://127.0.0.1 or https://bogus.com I get my error page, as I expected, and if I open https://localhost I see my site. *The problem is*: if I open https://test.localhost , I also see my website! What? why? my server isn't defined for this hostname! What am I doing wrong? Thanks! O----------------------------------------------------O | Dr. Gianluigi Zanettini info at zane.it | | Tel: +39 338 8562977 Fax: +39 0532 9631162 | | http://TurboLab.it http://zane.it | O----------------------------------------------------O -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 17 02:26:40 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Dec 2018 05:26:40 +0300 Subject: Disable proxy buffering for websockets In-Reply-To: References: Message-ID: <20181217022640.GS99070@mdounin.ru> Hello! On Sun, Dec 16, 2018 at 02:45:56PM -0800, Chad Phillips wrote: > I use software that runs a speed test via websockets. When proxying this > websocket connection through Nginx, the 'download' portion of the test is > inaccurate. > > My theory is that this is due to Nginx buffering the response from the > backend server, thus the timer on the backend server reports an inaccurate > value compared to when it's not proxied. 
> > I've tried the following settings at both the location and server levels of > my configuration: > > proxy_buffering off; > proxy_ignore_headers X-Accel-Buffering; > > However, this doesn't fix the problem. I've confirmed the functionality > works correctly when it's not being proxied via Nginx, so wondering if A) > there is some other cause of the issue besides the proxy buffer, or B) I'm > not using the proxy buffer settings correctly? As long as a connection is upgraded to the websockets protocol, buffering doesn't matter: regardless of the settings nginx will proxy anything without buffering. Note though that websockets proxying requires special configuration, see here: http://nginx.org/en/docs/http/websocket.html Note well that proxying though nginx implies several additional buffers being used anyway (two socket buffers and a proxy buffer within nginx), and this may reduce accuracy. -- Maxim Dounin http://mdounin.ru/ From info at zane.it Mon Dec 17 07:43:46 2018 From: info at zane.it (Gianluigi Zanettini) Date: Mon, 17 Dec 2018 08:43:46 +0100 Subject: server_name for localhost actually matches .localhost In-Reply-To: References: Message-ID: Ok, this seems related to the specific use of localhost. If I try it with any other domain name, it works as expected. I'm still interested in an answer for my own learning, but it's definitely lower priority now. Il giorno lun 17 dic 2018 alle ore 01:10 Gianluigi Zanettini ha scritto: > Hi guys, > I'm testing my config on localhost with nginx/1.15.7 before deploying. > Since I only want to handle specific hostnames, I setup a catch-default: > > server { > > listen 443 ssl default_server; > ... > return 400; > } > > Then I setup my real config: > > > server { > > server_name localhost; > root /usr/share/nginx/html; > listen 443 ssl http2; > listen [::]:443 ssl http2; > ... 
> } > > > > My /etc/hosts: > > 127.0.0.1 localhost test.localhost bogus.com > > > It works almost as expected: if I open https://127.0.0.1 or > https://bogus.com I get my error page, as I expected, and if I open > https://localhost I see my site. > > *The problem is*: if I open https://test.localhost , I also see my > website! What? why? my server isn't defined for this hostname! > > What am I doing wrong? > > Thanks! > > O----------------------------------------------------O > | Dr. Gianluigi Zanettini info at zane.it | > | Tel: +39 338 8562977 Fax: +39 0532 9631162 | > | http://TurboLab.it http://zane.it | > O----------------------------------------------------O > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscaretu at gmail.com Mon Dec 17 14:30:28 2018 From: oscaretu at gmail.com (oscaretu) Date: Mon, 17 Dec 2018 15:30:28 +0100 Subject: server_name for localhost actually matches .localhost In-Reply-To: References: Message-ID: Hello, Gianluigi I suppose the response is coming from the default server: if the vhost servername of the request doesn't match any of the defined ones, the answer will come from the default server. If you add an alternate servername "test.localhost" where you defined "localhost" in the config file, you should see the same content that you see when you use "localhost" Kind regards, Oscar On Mon, Dec 17, 2018 at 8:44 AM Gianluigi Zanettini wrote: > > Ok, this seems related to the specific use of localhost. If I try it with > any other domain name, it works as expected. > > I'm still interested in an answer for my own learning, but it's definitely > lower priority now. > > > Il giorno lun 17 dic 2018 alle ore 01:10 Gianluigi Zanettini > ha scritto: > >> Hi guys, >> I'm testing my config on localhost with nginx/1.15.7 before deploying. >> Since I only want to handle specific hostnames, I setup a catch-default: >> >> server { >> >> listen 443 ssl default_server; >> ... 
>> return 400; >> } >> >> Then I setup my real config: >> >> >> server { >> >> server_name localhost; >> root /usr/share/nginx/html; >> listen 443 ssl http2; >> listen [::]:443 ssl http2; >> ... >> } >> >> >> >> My /etc/hosts: >> >> 127.0.0.1 localhost test.localhost bogus.com >> >> >> It works almost as expected: if I open https://127.0.0.1 or >> https://bogus.com I get my error page, as I expected, and if I open >> https://localhost I see my site. >> >> *The problem is*: if I open https://test.localhost , I also see my >> website! What? why? my server isn't defined for this hostname! >> >> What am I doing wrong? >> >> Thanks! >> >> O----------------------------------------------------O >> | Dr. Gianluigi Zanettini info at zane.it | >> | Tel: +39 338 8562977 Fax: +39 0532 9631162 | >> | http://TurboLab.it http://zane.it | >> O----------------------------------------------------O >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Dec 18 16:46:54 2018 From: nginx-forum at forum.nginx.org (traquila) Date: Tue, 18 Dec 2018 11:46:54 -0500 Subject: Multiple range request and proxy_force_ranges directive Message-ID: Hi everybody, I have an issue with multiple range requests in a proxy_pass context. When I want to support HTTP range requests, I usually enable the proxy_force_ranges directive in two cases: - When the origin server does not support HTTP range requests - When I want to download the full content from the origin (with proxy_set_header Range "";) and serve only the requested range. However, this does not work for multiple range requests, as the "single_range" flag is set by the proxy_force_ranges directive.
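For reference, the second case above corresponds to a configuration along these lines (just a sketch; the upstream address is hypothetical):

    location / {
        proxy_pass http://origin;

        # fetch the full body from the origin...
        proxy_set_header Range "";

        # ...but still honour the client's Range header when serving it
        proxy_force_ranges on;
    }

With this, nginx requests the complete content from the origin and serves only the byte range the client asked for.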
The flag comes from this old patch https://forum.nginx.org/read.php?29,248573,248573 When I change the code to remove the single_range flag in case of force_ranges, everything works fine; but I am not sure of the side effects.

 if (u->conf->force_ranges) {
     r->allow_ranges = 1;
-    r->single_range = 1;
-
-#if (NGX_HTTP_CACHE)
-    if (r->cached) {
-        r->single_range = 0;
-    }
-#endif
 }

Thank you, Traquila Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282412,282412#msg-282412 From nginx-forum at forum.nginx.org Tue Dec 18 16:58:49 2018 From: nginx-forum at forum.nginx.org (thehunmonkgroup) Date: Tue, 18 Dec 2018 11:58:49 -0500 Subject: Disable proxy buffering for websockets In-Reply-To: <20181217022640.GS99070@mdounin.ru> References: <20181217022640.GS99070@mdounin.ru> Message-ID: <86fcdddacc2844cc5c25906f4b496547.NginxMailingListEnglish@forum.nginx.org> I could deal with *some* inaccuracy, but the results are completely out of whack. Downloading 256KB of data via the websocket over a poor DSL connection happens near instantaneously from the websocket server's point of view, which to me indicates that Nginx is consuming all that data in a buffer instead of passing it along to the client without buffering. You mentioned that there's a proxy buffer within Nginx in the case of websockets, is there a setting to disable that? The 'proxy_buffering off;' setting I mentioned previously didn't seem to do it. Really hoping there's another setting I can take advantage of here...
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282399,282413#msg-282413 From mdounin at mdounin.ru Tue Dec 18 18:45:13 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Dec 2018 21:45:13 +0300 Subject: Disable proxy buffering for websockets In-Reply-To: <86fcdddacc2844cc5c25906f4b496547.NginxMailingListEnglish@forum.nginx.org> References: <20181217022640.GS99070@mdounin.ru> <86fcdddacc2844cc5c25906f4b496547.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181218184513.GW99070@mdounin.ru> Hello! On Tue, Dec 18, 2018 at 11:58:49AM -0500, thehunmonkgroup wrote: > I could deal with *some* inaccuracy, but the results are completely out of > whack. Downloading 256KB of data via the websocket over a poor DSL > connection happens near instantaneously from the websocket server's point of > view, which to me indicates that Nginx is consuming all that data in a > buffer instead of passing it along to the client without buffering. Well, 256KB is likely several times smaller than the socket buffers used, and you are going to see problems if you are testing with such small sizes without also tuning socket buffers. E.g., on Linux default socket buffer sizes are autoscaled depending on the connection speed, and can be up to several megabytes between nginx and the backend, as these are on a fast connection. > You mentioned that there's a proxy buffer within Nginx in the case of > websockets, is there a setting to disable that? The 'proxy_buffering off;' > setting I mentioned previously didn't seem to do it. No. To copy data from one socket to another you need a buffer. You can control the size of the buffer nginx uses internally via the proxy_buffer_size directive (see http://nginx.org/r/proxy_buffer_size). But the default size is pretty low - 4k - so this is unlikely to be the source of your problems, unless you've tuned it to a larger value yourself. Most likely, you have to tune socket buffers to be smaller to get more accurate results.
Socket buffers can be tuned using the net.ipv4.tcp_rmem and net.ipv4.tcp_wmem sysctls on Linux. Also, in nginx itself you can control socket buffers towards the client using the "sndbuf" parameter of the "listen" directive (http://nginx.org/r/listen), but this is unlikely to be enough in such a setup. Note well that measuring connection speed on the server side might not be a good idea, as this will inevitably lead to inaccurate results. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Dec 18 19:01:11 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Dec 2018 22:01:11 +0300 Subject: Multiple range request and proxy_force_ranges directive In-Reply-To: References: Message-ID: <20181218190110.GX99070@mdounin.ru> Hello! On Tue, Dec 18, 2018 at 11:46:54AM -0500, traquila wrote: > Hi everybody, > > I have an issue with multiple range requests in a proxy_pass context. > > When I want to support HTTP range requests, I usually enable the > proxy_force_ranges directive in two cases: > - When the origin server does not support HTTP range requests > - When I want to download the full content from the origin (with > proxy_set_header Range "";) and serve only the requested range. > > However, this does not work for multiple range requests, as the > "single_range" flag is set by the proxy_force_ranges directive. > The flag comes from this old patch > https://forum.nginx.org/read.php?29,248573,248573 > > When I change the code to remove the single_range flag in case of > force_ranges, everything works fine; but I am not sure of the side effects. The r->single_range flag is there for a reason. Multipart range requests are only supported if the whole response is in a single buffer.
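For illustration, a multipart range exchange looks schematically like this (boundary value arbitrary):

    GET /file.bin HTTP/1.1
    Range: bytes=0-99,200-299

    HTTP/1.1 206 Partial Content
    Content-Type: multipart/byteranges; boundary=SEP

Each part of the multipart body then carries its own Content-Range header.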
-- Maxim Dounin http://mdounin.ru/ From lists at lazygranch.com Thu Dec 20 09:42:49 2018 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 20 Dec 2018 01:42:49 -0800 Subject: Need logic to not check for bad user agent if xml file Message-ID: <20181220014249.2c91512d.lists@lazygranch.com> I have a map to check for bad user agents called badagent. I want to set up an RSS feed. The feedreaders can have funny agents, so I need to omit the bad agent check if the file is any xml type. This is rejected. if (($request_uri != [*.xml]) && ($badagent)) {return 444; } Suggestions? I can put the xml files in a separate location if that helps. From mdounin at mdounin.ru Thu Dec 20 13:23:23 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Dec 2018 16:23:23 +0300 Subject: Need logic to not check for bad user agent if xml file In-Reply-To: <20181220014249.2c91512d.lists@lazygranch.com> References: <20181220014249.2c91512d.lists@lazygranch.com> Message-ID: <20181220132323.GZ99070@mdounin.ru> Hello! On Thu, Dec 20, 2018 at 01:42:49AM -0800, lists at lazygranch.com wrote: > I have a map to check for bad user agents called badagent. I want to > set up an RSS feed. The feedreaders can have funny agents, so I need to > omit the bad agent check if the file is any xml type. > > This is rejected. > > if (($request_uri != [*.xml]) && ($badagent)) {return 444; } > > Suggestions? > > I can put the xml files in a separate location if that helps. Doing the $badagent check only in locations where you want to reject bots would be the simplest approach. That is, consider something like this: location / { if ($badagent) { return 403; } ... } location /rss/ { ... } -- Maxim Dounin http://mdounin.ru/ From vbart at nginx.com Thu Dec 20 19:04:02 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 20 Dec 2018 22:04:02 +0300 Subject: Unit 1.7 release Message-ID: <1997466.pah0Z9BfTt@vbart-workstation> Hi, I'm glad to announce a new release of NGINX Unit.
This is a bugfix release with a primary focus on the stabilization of the Node.js module. We have made great progress with it, and now Node.js support is in much better shape than before. Changes with Unit 1.7 20 Dec 2018 *) Change: now rpath is set in Ruby module only if the library was not found in default search paths; this makes it possible to meet packaging restrictions on some systems. *) Bugfix: "disable_functions" and "disable_classes" PHP options set via Control API did not work. *) Bugfix: Promises on request data in Node.js were not triggered. *) Bugfix: various compatibility issues with Node.js applications. *) Bugfix: a segmentation fault occurred in Node.js module if an application tried to read the request body after request.end() was called. *) Bugfix: a segmentation fault occurred in Node.js module if an application attempted to send a header twice. *) Bugfix: names of response header fields in Node.js module were erroneously treated as case-sensitive. *) Bugfix: uncaught exceptions in Node.js were not logged. *) Bugfix: global install of Node.js module from sources was broken on some systems; the bug had appeared in 1.6. *) Bugfix: traceback for exceptions during initialization of Python applications might not be logged. *) Bugfix: PHP module build failed if PHP interpreter was built with thread safety enabled. Most likely, this is the last release of Unit in 2018, so I would like to wish you a Happy New Year on behalf of the entire Unit team. 2018 was an exciting year in Unit development. Many important features have been introduced, including: - Advanced Process Management, which allows scaling application processes dynamically depending on the amount of load. Thanks go to Maxim Romanov who primarily worked on this feature. Documentation: https://unit.nginx.org/configuration/#process-management - Perl, Ruby, and Node.js application support. Thanks to Alexander Borisov who implemented these language modules.
- TLS support and the Certificates Storage API, which allows TLS certificates to be configured dynamically. Thanks to Igor Sysoev who collaborated with me on this feature. Documentation: https://unit.nginx.org/configuration/#ssl-tls-and-certificates - C API language modules were moved into a separate library; this helped a lot with Node.js integration and aids the upcoming Java support. Thanks again to Maxim Romanov for this work. - Essential access logging support. Documentation: https://unit.nginx.org/configuration/#access-log - Advanced settings for applications including environment variables, runtime arguments, PHP options, and php.ini path customization. I can't imagine releasing any of these features without the effort of our QA engineer, Andrey Zelenkov, who relentlessly improves test coverage of the Unit codebase, runs various fuzzing tests, and reports any suspicious behaviour to the developers. In addition, one of the most important achievements of the year was a tangible improvement of documentation quality. The unit.nginx.org website is up-to-date now and covers all the features introduced in the new and previous Unit releases. This duty was successfully carried out by our technical writer, Artem Konev. Besides, he continues refactoring the documentation and plans to introduce HowTos for various use cases and applications. If you have any particular suggestions concerning applications you'd like to configure with Unit, please create a feature request in our documentation issue tracker on GitHub: - https://github.com/nginx/unit-docs/issues Thanks to our system engineers, Andrei Belov and Konstantin Pavlov, who are toiling over packages in our own repositories and images on Docker Hub. Thanks to our product manager Nick Shadrin who helps us to envision our strategy and gives excellent talks at conferences around the world.
You can see him in the latest Unit demo session at NGINX Conf 2018: - https://www.youtube.com/watch?v=JQZKbIG3uro Of course, everything I've just mentioned wouldn't be possible without our vibrant community; our users who are eager to move their projects to Unit; everyone who reports bugs and suggests features, guiding us along the right path. We urge everybody to participate via our mailing list at - unit at nginx.org or on GitHub: - https://github.com/nginx/unit I gladly mention Hong Zhi Dao as one of the most active community members who not only reports bugs but also reads our code, asks pointed questions, and regularly sends patches with improvements. Thank you very much for your contribution. Special thanks go to the maintainers of Unit packages in various community repositories: Sergey A. Osokin (FreeBSD), Ralph Seichter (Gentoo), André Klitzing (Alpine Linux), and Julian Brost (Arch Linux). Sorry if I didn't mention anyone else who maintains Unit packages for other distributions; you can open an issue for your repository to be included in the Installation section at unit.nginx.org: - https://github.com/nginx/unit-docs/issues Unfortunately, we weren't able to achieve each and every one of our audacious goals this year. The development of some features is postponed until the upcoming year. Currently, there is ongoing work on WebSocket support, the Java module, request routing, and static file serving. We have already made good progress on the Java module. This work is underway in a separate GitHub public repository: - https://github.com/mar0x/unit , so everybody willing to run their Java applications on Unit can participate. Many other good things and announcements about Unit will surely happen in 2019. Thank you for staying with us, and all the best. wbr, Valentin V.
Bartenev From cyflhn at 163.com Fri Dec 21 08:48:19 2018 From: cyflhn at 163.com (yf chu) Date: Fri, 21 Dec 2018 16:48:19 +0800 (CST) Subject: Does the variable "$request_time" include ssl handshake time? Message-ID: <759061fb.11d0b.167cff379bd.Coremail.cyflhn@163.com> The definition of "$request_time" is that "request processing time in seconds with a milliseconds resolution; time elapsed since the first bytes were read from the client". But I want to know whether it includes the ssl handshake time? If not, is there any method to get the duration of the ssl handshake in Nginx? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Fri Dec 21 10:52:33 2018 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 21 Dec 2018 13:52:33 +0300 Subject: Does the variable "$request_time" include ssl handshake time? In-Reply-To: <759061fb.11d0b.167cff379bd.Coremail.cyflhn@163.com> References: <759061fb.11d0b.167cff379bd.Coremail.cyflhn@163.com> Message-ID: <9000F4A5-695A-4157-9B89-FD0D3FEC3BF6@nginx.com> > On 21 Dec 2018, at 11:48, yf chu wrote: > > The definition of "$request_time" is that "request processing time in seconds with a milliseconds resolution; time elapsed since the first bytes were read from the client". But I want to know whether it includes the ssl handshake time? No. > If not, is there any method to get the duration of the ssl handshake in Nginx? I'm not aware of one. -- Sergey Kandaurov From sca at andreasschulze.de Sun Dec 23 13:21:31 2018 From: sca at andreasschulze.de (A. Schulze) Date: Sun, 23 Dec 2018 14:21:31 +0100 Subject: migrate fastcgi_split_path_info to uwsgi Message-ID: <71de487a-1404-329b-9459-9e94849e9fdd@andreasschulze.de> cross-posting to horde & nginx lists... Hello, I plan to migrate a PHP application (horde) from php-fpm to uwsgi. nginx talks the "fastcgi" protocol to php-fpm now and will have to talk the "uwsgi" protocol to uwsgi later. Horde partly passes arguments as URL path elements.
An example comment is in https://github.com/horde/base/blob/master/services/ajax.php; the corresponding nginx config I currently use: location ~ /horde/services/ajax.php/ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php.sock; include /etc/nginx/fastcgi.conf; } the example URL used in ajax.php: http://example.com/horde/services/ajax.php/APP/ACTION[?OPTIONS] without fastcgi_split_path_info: SCRIPT_FILENAME /horde/services/ajax.php/APP/ACTION PATH_INFO is empty? with fastcgi_split_path_info: SCRIPT_FILENAME /horde/services/ajax.php PATH_INFO /APP/ACTION[?OPTIONS] I hope this is correct so far... To verify my setup I configured location /horde/services/ajax.php { fastcgi_split_path_info ^(/horde/services/ajax\.php)(.+)$; return 200 "REQUEST_URI: $request_uri, SCRIPT_FILENAME: $fastcgi_script_name, PATH_INFO: $fastcgi_path_info ARGS: $args\n"; } $ curl 'https://example.org/horde/services/ajax.php/APP/ACTION?OPTION=foobar' REQUEST_URI: /horde/services/ajax.php/APP/ACTION?OPTION=foobar, SCRIPT_FILENAME: /horde/services/ajax.php, PATH_INFO: /APP/ACTION ARGS: OPTION=foobar that _looks_ correct to me. Anyway, horde doesn't work; I get a 404 back: [pid: 1301|app: -1|req: -1/10] 2001:db8::2 () {58 vars in 1220 bytes} [Sun Dec 23 14:19:22 2018] POST /horde/services/ajax.php/imp/dynamicInit => generated 9 bytes in 0 msecs (HTTP/2.0 404) 2 headers in 71 bytes (0 switches on core 0) Unfortunately there is no "uwsgi_split_path_info" in nginx. That means to me that either it's simply not implemented, or the problem is solved in another way. I appreciate any hint on how to solve the "split" or at least debug what's going on. Andreas From mdounin at mdounin.ru Tue Dec 25 15:08:28 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Dec 2018 18:08:28 +0300 Subject: nginx-1.15.8 Message-ID: <20181225150828.GY99070@mdounin.ru> Changes with nginx 1.15.8 25 Dec 2018 *) Feature: the $upstream_bytes_sent variable. Thanks to Piotr Sikora.
*) Feature: new directives in vim syntax highlighting scripts. Thanks to Gena Makhomed. *) Bugfix: in the "proxy_cache_background_update" directive. *) Bugfix: in the "geo" directive when using unix domain listen sockets. *) Workaround: the "ignoring stale global SSL error ... bad length" alerts might appear in logs when using the "ssl_early_data" directive with OpenSSL. *) Bugfix: in nginx/Windows. *) Bugfix: in the ngx_http_autoindex_module on 32-bit platforms. -- Maxim Dounin http://nginx.org/ From xeioex at nginx.com Tue Dec 25 15:21:26 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 25 Dec 2018 18:21:26 +0300 Subject: njs-0.2.7 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript specifications. - Added support for ES6 rest parameters syntax. Thanks to Alexander Pyshchev. : > var add = function(prev, curr) { return prev + curr } : undefined : > function sum(...args) { return args.reduce(add) } : undefined : > sum(1,2,3) : 6 : > sum(1,2,3,4) : 10 - Added ES8 Object.values() and Object.entries() methods. You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.2.7 25 Dec 2018 Core: *) Feature: rest parameters syntax (destructuring is not supported). Thanks to Alexander Pyshchev. *) Feature: added Object.entries() method. *) Feature: added Object.values() method. *) Improvement: code generator refactored and simplified. *) Bugfix: fixed automatic semicolon insertion. *) Bugfix: fixed assignment expression from compound assignment. *) Bugfix: fixed comparison of Byte and UTF8 strings. *) Bugfix: fixed type of iteration variable in for-in with array values. 
*) Bugfix: fixed building on platforms without librt. *) Bugfix: miscellaneous additional bugs have been fixed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Dec 26 10:17:53 2018 From: nginx-forum at forum.nginx.org (bhaktaonline) Date: Wed, 26 Dec 2018 05:17:53 -0500 Subject: NGINX TLS Behavior Message-ID: Hello, I have a question on NGINX's behavior during TLS. I see that NGINX combines the HTTP header and data together into an SSL record. You can see this from the logs below: 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL buf copy: 244 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL buf copy: 16140 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL to write: 16384 While the header gets generated earlier, it's written out along with the data. Is there a way (I mean a configurable way) to tell NGINX to write just the headers, so that the header goes out in a single TLS record? Thank you for your time in looking at this. -bhakta Full logs, this is in response to a GET request of a 1mb file that I am trying to serve as part of this test: 2018/12/26 14:10:34 [debug] 13248#0: *1 content phase: 12 2018/12/26 14:10:34 [debug] 13248#0: *1 content phase: 13 2018/12/26 14:10:34 [debug] 13248#0: *1 ngx_http_static_handler: http filename: "/usr/local/nginx/html/protected/1mb.html" 2018/12/26 14:10:34 [debug] 13248#0: *1 add cleanup: 000055E9E8B9AFF0 2018/12/26 14:10:34 [debug] 13248#0: *1 http static fd: 11 2018/12/26 14:10:34 [debug] 13248#0: *1 http set discard body 2018/12/26 14:10:34 [debug] 13248#0: *1 HTTP/1.1 200 OK Server: nginx/1.15.5 Date: Wed, 26 Dec 2018 08:40:34 GMT Content-Type: text/html Content-Length: 1000000 Last-Modified: Tue, 25 Dec 2018 09:02:16 GMT Connection: keep-alive ETag: "5c21f218-f4240" Accept-Ranges: bytes 2018/12/26 14:10:34 [debug] 13248#0: *1 write new buf t:1 f:0 000055E9E8B9B1C8, pos 000055E9E8B9B1C8, size: 244 file: 0, size: 0 2018/12/26 14:10:34 [debug] 13248#0: *1 http write filter: l:0 f:0
s:244 2018/12/26 14:10:34 [debug] 13248#0: *1 http output filter "/1mb.html?" 2018/12/26 14:10:34 [debug] 13248#0: *1 http copy filter: "/1mb.html?" 2018/12/26 14:10:34 [debug] 13248#0: *1 malloc: 000055E9E8BD9110:32768 2018/12/26 14:10:34 [debug] 13248#0: *1 read: 11, 000055E9E8BD9110, 32768, 0 2018/12/26 14:10:34 [debug] 13248#0: *1 http postpone filter "/1mb.html?" 000055E9E8B9B3B8 2018/12/26 14:10:34 [debug] 13248#0: *1 write old buf t:1 f:0 000055E9E8B9B1C8, pos 000055E9E8B9B1C8, size: 244 file: 0, size: 0 2018/12/26 14:10:34 [debug] 13248#0: *1 write new buf t:1 f:0 000055E9E8BD9110, pos 000055E9E8BD9110, size: 32768 file: 0, size: 0 2018/12/26 14:10:34 [debug] 13248#0: *1 http write filter: l:0 f:1 s:33012 2018/12/26 14:10:34 [debug] 13248#0: *1 http write filter limit 0 2018/12/26 14:10:34 [debug] 13248#0: *1 posix_memalign: 000055E9E8B78950:512 @16 2018/12/26 14:10:34 [debug] 13248#0: *1 malloc: 000055E9E8BCD330:16384 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL buf copy: 244 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL buf copy: 16140 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL to write: 16384 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL_write: 16384 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL buf copy: 16384 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL to write: 16384 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL_write: 16384 My nginx.conf section related to https: server { listen 8081 ssl; sendfile off; tcp_nopush off; #ssl on; ssl_certificate /etc/ssl/certs/server.crt; ssl_certificate_key /etc/ssl/private/server.key; server_name server.com; ssl_prefer_server_ciphers on; ssl_ciphers AES128-GCM-SHA256; access_log off; error_log /var/log/nginx/nginx.server.https.error.log debug; location / { root /usr/local/nginx/html/protected; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282472,282472#msg-282472 From nginx-forum at forum.nginx.org Wed Dec 26 10:40:29 2018 From: nginx-forum at forum.nginx.org (krionz) Date: Wed, 26 Dec 2018 05:40:29 -0500 
Subject: Only receiving an empty response with 200 status code when using CORS on Nginx 1.14 Message-ID: <2f1665c9876bd58dfc22346ee8e37a00.NginxMailingListEnglish@forum.nginx.org> I'm trying to use a wide-open CORS configuration for Nginx. I found this configuration: https://enable-cors.org/server_nginx.html. Since I started using this configuration, Nginx only serves an empty response with a 200 status code. I'm using Nginx with PHP-FPM. I have already checked and there is nothing wrong with the code itself; I can say that because I have written some HTTP responses to text files. The text files contain the correct HTTP responses, but they are not sent to the client by Nginx. Here is the configuration: location / { if ($request_method = 'OPTIONS') { add_header 'Access-Control-Allow-Origin' '*' always; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always; # # Custom headers and headers various browsers *should* be OK with but aren't # add_header 'Access-Control-Allow-Headers' 'x-authorization,Filename,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,content-type,Range' always; # # Tell client that this pre-flight info is valid for 20 days # add_header 'Access-Control-Max-Age' 1728000 always; add_header 'Content-Type' 'text/plain; charset=utf-8' always; add_header 'Content-Length' 0 always; return 204; } if ($request_method = 'GET') { add_header 'Access-Control-Allow-Origin' '*' always; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always; add_header 'Access-Control-Allow-Headers' 'x-authorization,Filename,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,content-type,Range' always; add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always; } include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $realpath_root/app_dev.php$is_args$args; fastcgi_param DOCUMENT_ROOT $realpath_root; fastcgi_split_path_info ^(.+\.php)(/.*)$; fastcgi_pass php-fpm:9000; } I'm testing the configs just
for GET requests at the moment. I have tried to make HTTP requests with Google Chrome and the Insomnia REST client, and with both clients the same behaviour occurs. I would like to make it clear that the HTTP requests are getting into the code correctly and the correct HTTP responses are being computed, but the HTTP responses are not being sent to the client by Nginx. Thanks for your attention. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282474,282474#msg-282474 From pluknet at nginx.com Wed Dec 26 10:41:45 2018 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 26 Dec 2018 13:41:45 +0300 Subject: NGINX TLS Behavior In-Reply-To: References: Message-ID: > On 26 Dec 2018, at 13:17, bhaktaonline wrote: > > Hello, > > > I have a question on NGINX's behavior during TLS. > > I see that NGINX combines the HTTP header and data together into an SSL record. > You can see this from the logs below: > > > > 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL buf copy: 244 > 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL buf copy: 16140 > 2018/12/26 14:10:34 [debug] 13248#0: *1 SSL to write: 16384 > > > > > While the header gets generated earlier, it's written out along with the data. Is > there a way (I mean a configurable way) to tell NGINX to write just the > headers, so that the header goes out in a single TLS record? Yes, there's a way to send headers separately. See http://nginx.org/r/postpone_output for details. -- Sergey Kandaurov From nginx-forum at forum.nginx.org Wed Dec 26 11:56:46 2018 From: nginx-forum at forum.nginx.org (bhaktaonline) Date: Wed, 26 Dec 2018 06:56:46 -0500 Subject: NGINX TLS Behavior In-Reply-To: References: Message-ID: <958c1a3957cb60c3b48559e32a07a24b.NginxMailingListEnglish@forum.nginx.org> Thanks for the quick response, Sergey, I set: postpone_output 200; and killed/restarted the server. This should have postponed writing output until 200 bytes are available. The HTTP header is 244 bytes, so it should have triggered an output.
However, I still see one single TLS record which has both the header and data. Any suggestions? Logs: 2018/12/26 16:59:00 [debug] 14290#0: *1 content phase: 13 2018/12/26 16:59:00 [debug] 14290#0: *1 ngx_http_static_handler: http filename: "/usr/local/nginx/html/protected/1mb.html" 2018/12/26 16:59:00 [debug] 14290#0: *1 add cleanup: 0000557495474FD0 2018/12/26 16:59:00 [debug] 14290#0: *1 http static fd: 11 2018/12/26 16:59:00 [debug] 14290#0: *1 http set discard body 2018/12/26 16:59:00 [debug] 14290#0: *1 HTTP/1.1 200 OK Server: nginx/1.15.5 Date: Wed, 26 Dec 2018 11:29:00 GMT Content-Type: text/html Content-Length: 1000000 Last-Modified: Tue, 25 Dec 2018 09:02:16 GMT Connection: keep-alive ETag: "5c21f218-f4240" Accept-Ranges: bytes 2018/12/26 16:59:00 [debug] 14290#0: *1 write new buf t:1 f:0 00005574954751A8, pos 00005574954751A8, size: 244 file: 0, size: 0 2018/12/26 16:59:00 [debug] 14290#0: *1 http write filter: l:0 f:0 s:244 2018/12/26 16:59:00 [debug] 14290#0: *1 http write filter limit 0 2018/12/26 16:59:00 [debug] 14290#0: *1 posix_memalign: 0000557495452950:512 @16 2018/12/26 16:59:00 [debug] 14290#0: *1 malloc: 00005574954A7320:16384 2018/12/26 16:59:00 [debug] 14290#0: *1 SSL buf copy: 244 2018/12/26 16:59:00 [debug] 14290#0: *1 http write filter 0000000000000000 2018/12/26 16:59:00 [debug] 14290#0: *1 http output filter "/1mb.html?" 2018/12/26 16:59:00 [debug] 14290#0: *1 http copy filter: "/1mb.html?" 2018/12/26 16:59:00 [debug] 14290#0: *1 malloc: 00005574954B3100:32768 2018/12/26 16:59:00 [debug] 14290#0: *1 read: 11, 00005574954B3100, 32768, 0 2018/12/26 16:59:00 [debug] 14290#0: *1 http postpone filter "/1mb.html?"
0000557495475388 2018/12/26 16:59:00 [debug] 14290#0: *1 write new buf t:1 f:0 00005574954B3100, pos 00005574954B3100, size: 32768 file: 0, size: 0 2018/12/26 16:59:00 [debug] 14290#0: *1 http write filter: l:0 f:1 s:32768 2018/12/26 16:59:00 [debug] 14290#0: *1 http write filter limit 0 2018/12/26 16:59:00 [debug] 14290#0: *1 SSL buf copy: 16140 2018/12/26 16:59:00 [debug] 14290#0: *1 SSL to write: 16384 2018/12/26 16:59:00 [debug] 14290#0: *1 SSL_write: 16384 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282472,282476#msg-282476 From pluknet at nginx.com Wed Dec 26 12:23:52 2018 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 26 Dec 2018 15:23:52 +0300 Subject: NGINX TLS Behavior In-Reply-To: <958c1a3957cb60c3b48559e32a07a24b.NginxMailingListEnglish@forum.nginx.org> References: <958c1a3957cb60c3b48559e32a07a24b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <69546E4B-2847-46D2-8ACA-EA5EC42733B1@nginx.com> > On 26 Dec 2018, at 14:56, bhaktaonline wrote: > > Thanks for the quick response, Sergey, > > I set: > postpone_output 200; > > killed/restarted the server > > this should have postponed writing output until 200 bytes is available. The > HTTP header is 244 bytes and it should have triggered an output.. I however > still see on single TLS record which has both header and data: > > Any suggestions? Ok, that's due to SSL buffering in nginx that isn't configurable. You can turn it off though by recompiling with this patch. 
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -725,7 +725,7 @@ ngx_http_ssl_handshake(ngx_event_t *rev)
     sscf = ngx_http_get_module_srv_conf(hc->conf_ctx, ngx_http_ssl_module);

-    if (ngx_ssl_create_connection(&sscf->ssl, c, NGX_SSL_BUFFER)
+    if (ngx_ssl_create_connection(&sscf->ssl, c, 0)
         != NGX_OK)
     {
         ngx_http_close_connection(c);

-- Sergey Kandaurov

From nginx-forum at forum.nginx.org Wed Dec 26 12:44:42 2018 From: nginx-forum at forum.nginx.org (bhaktaonline) Date: Wed, 26 Dec 2018 07:44:42 -0500 Subject: NGINX TLS Behavior In-Reply-To: <69546E4B-2847-46D2-8ACA-EA5EC42733B1@nginx.com> References: <69546E4B-2847-46D2-8ACA-EA5EC42733B1@nginx.com> Message-ID: <23965b782777f2cca306067e05557463.NginxMailingListEnglish@forum.nginx.org> Awesome.. I can confirm that this works.. thanks -bhakta Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282472,282479#msg-282479 From kworthington at gmail.com Wed Dec 26 17:09:34 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 26 Dec 2018 12:09:34 -0500 Subject: [nginx-announce] nginx-1.15.8 In-Reply-To: <20181225150833.GZ99070@mdounin.ru> References: <20181225150833.GZ99070@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.15.8 for Windows https://kevinworthington.com/nginxwin1158 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org.
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Dec 25, 2018 at 10:08 AM Maxim Dounin wrote:

> Changes with nginx 1.15.8                                       25 Dec 2018
>
> *) Feature: the $upstream_bytes_sent variable.
>    Thanks to Piotr Sikora.
>
> *) Feature: new directives in vim syntax highlighting scripts.
>    Thanks to Gena Makhomed.
>
> *) Bugfix: in the "proxy_cache_background_update" directive.
>
> *) Bugfix: in the "geo" directive when using unix domain listen sockets.
>
> *) Workaround: the "ignoring stale global SSL error ... bad length"
>    alerts might appear in logs when using the "ssl_early_data" directive
>    with OpenSSL.
>
> *) Bugfix: in nginx/Windows.
>
> *) Bugfix: in the ngx_http_autoindex_module on 32-bit platforms.
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce

-------------- next part -------------- An HTML attachment was scrubbed... URL: From email1an at adam.com.au Wed Dec 26 23:23:21 2018 From: email1an at adam.com.au (Ian) Date: Thu, 27 Dec 2018 09:53:21 +1030 Subject: Darktable Win10 version 2.6.0 Wont Install Message-ID: <95c327cd-ee68-c09d-9803-1f5ce0aaf535@adam.com.au> Hi Guys, Just to let you know the new 2.6.0-win64 exe failed to install on my Dell notebook, (Inspiron 15 Gaming 7567) running Win10 Home Edition, (64bit 1803). I have attached the text file that the failed install created and hope it may help to correct the install error. Looks like a great app and worthy replacement to my no longer supported Adobe LR.
Regards, Ian -------------- next part -------------- this is darktable 2.6.0 reporting an exception: ------------------- Error occurred on Thursday, December 27, 2018 at 09:36:39. darktable.exe caused an Access Violation at location 00007FFA1EE7E3DE in module igdrcl64.dll Reading from location 0000000000000028. AddrPC Params 00007FFA1EE7E3DE 000000000061DDA8 0000000000000040 00007FFA1ECF31E0 igdrcl64.dll!clGetGLContextInfoKHR 00007FFA1ECF3265 000000000699C040 00007FFA1ECF31E0 00000000068F2510 igdrcl64.dll!GTPin_Init 00007FFA1EC77915 0000000000000001 000000000699C040 0000000000000000 igdrcl64.dll!GTPin_Init 00007FFA1EC78A14 00007FFA1EC78A00 0000000000000001 000000000699C040 igdrcl64.dll!GTPin_Init 00007FFA1EC900FC 00007FFA1F921F5C 00007FFA1EC49C45 0000000000000000 igdrcl64.dll!GTPin_Init 00007FFA1EC4A302 00000000068D3C40 00007FFA1EC49C30 000000000061E470 igdrcl64.dll!clEnqueueTask 00007FFA1F921F9A 00007FFA37224C08 0000000000000000 0000000000000000 IntelOpenCL64.dll!clEnqueueWriteBufferRect 00007FFA1F8EC5CB 00007FFA1F8CC140 00007FFA3723AE01 00007FFA1F8C0000 IntelOpenCL64.dll!clEnqueueWriteBufferRect 00007FFA371C684C 0000000000000000 00007FFA3723AE01 0000000000749D40 OpenCL.dll!clEnqueueWriteBufferRect 00007FFA371C93CE 000000000076765E 00007FFA00000064 0000000000000001 OpenCL.dll!clEnqueueWriteBufferRect 00007FFA371C9591 0000000000000000 00007FFA3723AEB0 0000000000000000 OpenCL.dll!clEnqueueWriteBufferRect 00007FFA371C9082 000000006382DA78 0000000000000000 00000000033B8610 OpenCL.dll!clEnqueueWriteBufferRect 00007FFA58BC986F 00000000048AB9D0 00000000638A6B20 000000000061F27C ntdll.dll!RtlRunOnceExecuteOnce 00007FFA55AF417A 0000000000000014 0000000000000014 00000000033B8AD0 KERNELBASE.dll!InitOnceExecuteOnce 00007FFA371C899C 00000000033BACF0 00000000048AB9D0 00000000048AE440 OpenCL.dll!clEnqueueWriteBufferRect 0000000063644E2A 0000000000000000 0000000000000046 0000000000000000 libdarktable.dll!dt_opencl_init [D:/build/darktable/src/common/opencl.c @ 601] 
00000000635E222E 00007FFA00000001 0000000003302200 00007FFA00000001 libdarktable.dll!dt_init [D:/build/darktable/src/common/darktable.c @ 889] 00000000004030B5 00000000019E0740 000000008AF84201 00000000033028C0 darktable.exe!main [D:/build/darktable/src/main.c @ 82] 0000000000401605 000000000000005A 0000000000000000 0000000000408610 darktable.exe!wmain [D:/build/darktable/src/win/main_wrapper.h @ 15] 00000000004013FE 0000000000000000 0000000000000000 0000000000000000 darktable.exe!__tmainCRTStartup [C:/repo/mingw-w64-crt-git/src/mingw-w64/mingw-w64-crt/crt/crtexe.c @ 334] 000000000040153B 0000000000000000 0000000000000000 0000000000000000 darktable.exe!mainCRTStartup [C:/repo/mingw-w64-crt-git/src/mingw-w64/mingw-w64-crt/crt/crtexe.c @ 223] 00007FFA58803034 0000000000000000 0000000000000000 0000000000000000 KERNEL32.DLL!BaseThreadInitThunk 00007FFA58C13691 0000000000000000 0000000000000000 0000000000000000 ntdll.dll!RtlUserThreadStart darktable.exe 2.6.0.0 ntdll.dll 10.0.17134.471 KERNEL32.DLL 10.0.17134.1 KERNELBASE.dll 10.0.17134.441 msvcrt.dll 7.0.17134.1 libintl-8.dll 0.19.8.0 ADVAPI32.dll 10.0.17134.471 sechost.dll 10.0.17134.319 RPCRT4.dll 10.0.17134.471 libglib-2.0-0.dll 2.58.1.0 ole32.dll 10.0.17134.407 combase.dll 10.0.17134.407 ucrtbase.dll 10.0.17134.319 bcryptPrimitives.dll 10.0.17134.1 GDI32.dll 10.0.17134.285 gdi32full.dll 10.0.17134.471 msvcp_win.dll 10.0.17134.137 USER32.dll 10.0.17134.376 win32u.dll 10.0.17134.1 SHELL32.dll 10.0.17134.441 cfgmgr32.dll 10.0.17134.1 shcore.dll 10.0.17134.112 windows.storage.dll 10.0.17134.471 shlwapi.dll 10.0.17134.1 kernel.appcore.dll 10.0.17134.112 profapi.dll 10.0.17134.1 powrprof.dll 10.0.17134.1 FLTLIB.DLL 10.0.17134.1 WS2_32.dll 10.0.17134.1 libdarktable.dll PSAPI.DLL 10.0.17134.1 libiconv-2.dll 1.15.0.0 libwinpthread-1.dll 1.0.0.0 libpcre-1.dll libgcc_s_seh-1.dll libstdc++-6.dll exchndl.dll 0.8.2.0 libcairo-2.dll libgdk_pixbuf-2.0-0.dll 2.38.0.0 libexiv2.dll libgdk-3-0.dll 3.24.2.0 IMM32.dll 10.0.17134.1 
SETUPAPI.dll 10.0.17134.1 libgmodule-2.0-0.dll 2.58.1.0 libgobject-2.0-0.dll 2.58.1.0 libgphoto2-6.dll libgphoto2_port-12.dll libGraphicsMagick-3.dll libIlmImf-2_3.dll libjpeg-8.dll libjson-glib-1.0-0.dll liblcms2-2.dll lua53.dll libpango-1.0-0.dll 1.43.0.0 libpangocairo-1.0-0.dll 1.43.0.0 libpng16-16.dll libpugixml.dll libsecret-1-0.dll libtiff-5.dll libsoup-2.4-1.dll libsqlite3-0.dll VERSION.dll 10.0.17134.1 MSIMG32.dll 10.0.17134.1 zlib1.dll libfontconfig-1.dll gdiplus.dll 10.0.17134.472 mgwhelp.dll 0.8.2.0 libpixman-1-0.dll dwmapi.dll 10.0.17134.1 WINMM.dll 10.0.17134.1 libfreetype-6.dll 2.9.1.0 libcairo-gobject-2.dll libexpat-1.dll libpangowin32-1.0-0.dll 1.43.0.0 libffi-6.dll libepoxy-0.dll libexif-12.dll libsystre-0.dll libltdl-7.dll libbz2-1.dll libIex-2_3.dll libHalf-2_3.dll libIlmThread-2_3.dll libthai-0.dll libImath-2_3.dll libpangoft2-1.0-0.dll 1.43.0.0 libpsl-5.dll liblzma-5.dll 5.2.4.0 libzstd.dll dbghelp.dll 10.0.17134.1 winmmbase.dll 10.0.17134.1 USP10.dll 10.0.17134.1 libgcrypt-20.dll 1.8.4.17417 libdatrie-1.dll libtre-5.dll libharfbuzz-0.dll DWrite.dll 10.0.17134.376 libidn2-0.dll dbgcore.DLL 10.0.17134.1 libgraphite2.dll libgomp-1.dll libgtk-3-0.dll 3.24.2.0 libgio-2.0-0.dll 2.58.1.0 libopenjp2-7.dll librsvg-2-2.dll libfribidi-0.dll libxml2-2.dll libunistring-2.dll 0.9.10.0 libgpg-error-0.dll 1.33.0.0 DNSAPI.dll 10.0.17134.441 NSI.dll 10.0.17134.1 IPHLPAPI.DLL 10.0.17134.1 comdlg32.dll 10.0.17134.1 COMCTL32.dll 6.10.17134.472 WINSPOOL.DRV 10.0.17134.319 bcrypt.dll 10.0.17134.112 PROPSYS.dll 7.0.17134.112 OLEAUT32.dll 10.0.17134.48 libatk-1.0-0.dll 2.30.0.0 libcroco-0.6-3.dll CRYPTSP.dll 10.0.17134.1 rsaenh.dll 10.0.17134.254 CRYPTBASE.dll 10.0.17134.1 uxtheme.dll 10.0.17134.1 clbcatq.dll 2001.12.10941.16384 mswsock.dll 10.0.17134.1 nimdnsNSP.dll 215.0.2.49152 nimdnsResponder.dll 215.0.2.49152 VCRUNTIME140.dll 14.12.25810.0 rasadhlp.dll 10.0.17134.1 fwpuclnt.dll 10.0.17134.1 MSCTF.dll 10.0.17134.376 DEVOBJ.dll 10.0.17134.1 WINTRUST.dll 
10.0.17134.81 MSASN1.dll 10.0.17134.1 CRYPT32.dll 10.0.17134.1 OpenCL.dll 2.2.1.0 nvopencl64.dll 25.21.14.1735 nvfatbinaryLoader.dll 25.21.14.1735 nvapi64.dll 25.21.14.1735 dxgi.dll 10.0.17134.112 IntelOpenCL64.dll 24.20.100.6194 intelocl64.dll 6.8.0.2 task_executor64.dll 6.8.0.2 OPENGL32.dll 10.0.17134.1 GLU32.dll 10.0.17134.1 cpu_device64.dll 6.8.0.2 igdrcl64.dll TextInputFramework.dll 10.0.17134.376 CoreUIComponents.dll 10.0.17134.376 CoreMessaging.dll 10.0.17134.471 ntmarta.dll 10.0.17134.1 wintypes.dll 10.0.17134.407 Windows 10.0.17134 DrMingw 0.8.2 From kiriyama.kentaro at tis.co.jp Thu Dec 27 12:26:45 2018 From: kiriyama.kentaro at tis.co.jp (=?iso-2022-jp?B?GyRCNk07MyEhN3JCQE86GyhC?=) Date: Thu, 27 Dec 2018 12:26:45 +0000 Subject: limit_req_zone & proxy_intercept_errors Message-ID: Hello, I'm using nginx as a reverse proxy server. The components are laid out as in the attached files.

>The Layout sheet in the attached file shows the layout of the system. *Platform is AWS.
>nginx.conf shows the conf. file in /etc/nginx/ on the nginx server.
>ALB.conf shows the conf. file in /etc/nginx/conf.d/

The proxy_pass target is the DNS name of an Application Load Balancer (AWS). Here are the details of what I would like to achieve.

1. Limiting the requests to the backend Apache server on the nginx server. The client will request the URL below. (URL is just a sample) https://dev.xxxxxxx.xxxx.co.jp/front/auth Looking at the developer tools in Google Chrome, the web site is built from further requests with multiple .css, .js, .png, .ico file-extension requests. In nginx.conf in the attached file, I have used "limit_req_zone" and would like to limit the requests passed through to the backend Apache server to 13 r/s. I tried the attached conf. file, but it didn't work well: the limit is also counting the requests with the file extensions I've mentioned above.

2.
When the backend Apache server responds with a 500 code, I would like to replace the status code with 200 and send it back to the client (requester). The reason why I would like to do this is that the CloudFront distribution at the very front of this web system has custom error page responses. If the backend Apache server returns the 500 response with a message like "authentication error" and the status code is not replaced on nginx, the client (requester) will get an error page from the designated error page in the S3 bucket, by the rule of the CloudFront custom error pages.

For No. 1, can anyone explain how to exclude the requests with the file extensions above when the client accesses the web page?

For No. 2, as I'm selecting proxy_intercept_errors, does the conf. which I've written in the attached file work?

Regards Ken -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 20181227212647.zip Type: application/octet-stream Size: 43632 bytes Desc: not available URL: From kiriyama.kentaro at tis.co.jp Thu Dec 27 12:29:04 2018 From: kiriyama.kentaro at tis.co.jp (=?iso-2022-jp?B?GyRCNk07MyEhN3JCQE86GyhC?=) Date: Thu, 27 Dec 2018 12:29:04 +0000 Subject: limit_req_zone & proxy_intercept_errors In-Reply-To: References: Message-ID: The attached file's password is the following. LE,=2Cs4 Regards Ken From: ?????? Sent: Thursday, December 27, 2018 9:27 PM To: nginx at nginx.org Cc: ????; ?????; ?????; ?????? Subject: limit_req_zone & proxy_intercept_errors -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Thu Dec 27 17:17:10 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 27 Dec 2018 17:17:10 +0000 Subject: migrate fastcgi_split_path_info to uwsgi In-Reply-To: <71de487a-1404-329b-9459-9e94849e9fdd@andreasschulze.de> References: <71de487a-1404-329b-9459-9e94849e9fdd@andreasschulze.de> Message-ID: <20181227171710.3wjakqbelieje7vj@daoine.org> On Sun, Dec 23, 2018 at 02:21:31PM +0100, A. Schulze wrote:

> cross-posting to horde & nginx lists...

Hi there,

> I plan to migrate a PHP Application (horde) from php-fpm to uwsgi.
> nginx talks "fastcgi protocol" to php-fpm now and will have to talk "uwsgi protocol" to uwsgi later.
>
> Horde partially uses arguments as URL elements.
> example comment in https://github.com/horde/base/blob/master/services/ajax.php
>
> the corresponding nginx config I currently use:
>
> location ~ /horde/services/ajax.php/ {
>     fastcgi_split_path_info ^(.+\.php)(/.+)$;
>     fastcgi_pass unix:/var/run/php.sock;
>     include /etc/nginx/fastcgi.conf;
> }
>
> the example URL used in ajax.php: http://example.com/horde/services/ajax.php/APP/ACTION[?OPTIONS]
>
> without fastcgi_split_path_info:
> SCRIPT_FILENAME /horde/services/ajax.php/APP/ACTION
> PATH_INFO is empty?
>
> with fastcgi_split_path_info:
> SCRIPT_FILENAME /horde/services/ajax.php
> PATH_INFO /APP/ACTION[?OPTIONS]

The nginx directive fastcgi_split_path_info just sets some internal-to-nginx variables. To send things like SCRIPT_FILENAME and PATH_INFO to the upstream fastcgi service, you will have some fastcgi_param directives that use those variables -- in this case, presumably they are in the "include" file.

> Unfortunately there is no "uwsgi_split_path_info" in nginx.
> That means to me
> - it's simply not implemented
> - the problem is solved in another way.
>
> I appreciate any hint on how to solve the "split" or at least debug what's going on.

You can continue to use fastcgi_split_path_info to set some variables.
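As a concrete illustration, the location quoted earlier in this thread might translate to uwsgi roughly as follows. This is only a sketch: the socket path and the exact parameter names are assumptions based on a typical uwsgi setup, not taken from the original configuration, and should be matched to whatever parameter names the upstream application actually expects.

```nginx
location ~ /horde/services/ajax.php/ {
    # fastcgi_split_path_info only sets the internal nginx variables
    # $fastcgi_script_name and $fastcgi_path_info; it can still be
    # used even though the upstream is spoken to via the uwsgi protocol.
    fastcgi_split_path_info ^(.+\.php)(/.+)$;

    # Parameters written directly rather than via an include file,
    # reusing the variables set by the split above.
    uwsgi_param QUERY_STRING    $query_string;
    uwsgi_param REQUEST_METHOD  $request_method;
    uwsgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    uwsgi_param PATH_INFO       $fastcgi_path_info;

    uwsgi_pass unix:/var/run/uwsgi.sock;   # assumed socket path
}
```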
For uwsgi, you will need some uwsgi_param directives. You may want to edit whatever default "include" file you use, or perhaps write the directives directly. You will want to know what parameter names your upstream uwsgi server actually uses, and make sure to set those to the values you need. Good luck with it, f -- Francis Daly francis at daoine.org

From yang.yu.list at gmail.com Sat Dec 29 00:17:35 2018 From: yang.yu.list at gmail.com (Yang Yu) Date: Fri, 28 Dec 2018 16:17:35 -0800 Subject: nginx reverse proxy using digest authentication to origin Message-ID: Hi, Is there a way to make nginx use digest authentication for the connection to the origin? For HTTP basic auth, `proxy_set_header Authorization` with a static string works. I can't find information on how to support other authentication schemes to the origin. Thanks. Yang

From nginx-forum at forum.nginx.org Mon Dec 31 15:01:54 2018 From: nginx-forum at forum.nginx.org (Sesshomurai) Date: Mon, 31 Dec 2018 10:01:54 -0500 Subject: NGINX not passing header to proxy Message-ID: <14a3386f0000650b03b7b5ee959dd036.NginxMailingListEnglish@forum.nginx.org> Hi, I am having a problem with NGINX not forwarding a request header to my proxy. Here is the location:

location /xyz {
    proxy_pass_request_headers on;
    proxy_pass https://someserver/;
}

I make the call passing the "userAccount" header and it never gets sent to the proxy, but if I declare it in the location, it does get passed. Other headers are passed to the proxy. Adding this works, but I need to pass the header from the client request:

proxy_set_header 'userAccount' 'someuser';

Any tips appreciated.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282518,282518#msg-282518

From francis at daoine.org Mon Dec 31 16:58:48 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 31 Dec 2018 16:58:48 +0000 Subject: NGINX not passing header to proxy In-Reply-To: <14a3386f0000650b03b7b5ee959dd036.NginxMailingListEnglish@forum.nginx.org> References: <14a3386f0000650b03b7b5ee959dd036.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181231165848.327m2zt3zwr7l26a@daoine.org> On Mon, Dec 31, 2018 at 10:01:54AM -0500, Sesshomurai wrote:

Hi there,

> I am having a problem with NGINX not forwarding a request header to my
> proxy.
>
> Here is the location:
>
> location /xyz {
>     proxy_pass_request_headers on;
>     proxy_pass https://someserver/;
> }
>
> I make the call passing "userAccount" header and it never gets sent to the
> proxy, but if I declare it in the location, it does get passed.
> Other headers are passed to proxy.

You seem to report that when you do

curl -H userAccount:abc http://nginx/xyz

you want nginx to make a request to https://someserver/ including the http header userAccount; and that nginx does make the request but does not include the header. Is that correct?

A simple test, using http and not https, seems to show it working as you want here. Does the same test work for you? If so, does using https make a difference to you?

==
# "main" server
server {
    listen 8090;

    location /xyz {
        proxy_pass http://127.0.0.1:8091/;
    }
}

# "upstream" server
server {
    listen 8091;

    location / {
        return 200 "request: $request; userAccount: $http_useraccount\n";
    }
}
==

$ curl -H userAccount:abc http://127.0.0.1:8090/xyz
request: GET / HTTP/1.0; userAccount: abc

f -- Francis Daly francis at daoine.org
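If the header really is being lost somewhere in the full setup, one explicit workaround can be sketched as below. This is an assumption, not a confirmed fix for the original problem (the server name is taken from the thread, and nginx normally forwards unrecognized client request headers to a proxied upstream unchanged, so this should not usually be necessary):

```nginx
location /xyz {
    proxy_pass https://someserver/;

    # $http_useraccount carries whatever value the client sent in its
    # "userAccount" request header; nginx exposes each request header
    # as $http_<name>, lowercased, with dashes mapped to underscores.
    proxy_set_header userAccount $http_useraccount;
}
```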