From RenePaulMages at ramix.org Sun Jun 3 09:12:54 2018 From: RenePaulMages at ramix.org (RPM) Date: Sun, 3 Jun 2018 11:12:54 +0200 Subject: error : conflicting server name Message-ID: <46433955-45b8-17b8-b944-185b8b966746@ramix.org>

Hello Nginx Community, Our website had been running perfectly under nginx until yesterday: https://www.nouvelledonne.fr The OS on the server is Debian Jessie (version 8.3.0). After the SSL certificate renewal (with Let's Encrypt) the following error appeared:

cat /var/log/nginx/error.log
2018/06/02 22:20:41 [warn] 16742#0: conflicting server name "www.nouvelledonne.fr" on 0.0.0.0:443, ignored
2018/06/02 22:20:41 [notice] 16742#0: signal process started

-- Thanks for your help Rene Paul Mages (ramix) GnuPG key : 0x9840A6F7 http://renemages.wordpress.com/debian http://nosoftwarepatents.wikidot.com http://twitter.com/RenePaulMages

From nginx-forum at forum.nginx.org Sun Jun 3 11:59:09 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Sun, 03 Jun 2018 07:59:09 -0400 Subject: TLS 1.3 not being selected. Message-ID:

Hi, I can't see what I'm doing wrong. When I visit https://www.cloudflare.com/ with my browser, TLS 1.3 is used. However, when I visit my website, TLS 1.2 is selected instead. My browser (Opera 53) has this in its command line: " --ssl-version-max=tls1.3 --tls13-variant=draft"

Nginx is compiled like this: nginx version: nginx/1.14.0 built with OpenSSL 1.1.1-pre7 (beta) 29 May 2018 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/usr/local/src/nginx/nginx-1.14.0/debian/modules/nginx-auth-pam --add-module=/usr/local/src/nginx/nginx-1.14.0/debian/modules/nginx-cache-purge --add-module=/usr/local/src/nginx/nginx-1.14.0/debian/modules/nginx-dav-ext-module --add-module=/usr/local/src/nginx/nginx-1.14.0/debian/modules/nginx-echo --add-module=/usr/local/src/nginx/nginx-1.14.0/debian/modules/ngx_http_substitutions_filter_module --add-module=/usr/local/src/ngx_brotli --with-openssl-opt=enable-tls1_3

testssl.sh does report TLS 1.3: ./testssl.sh -p www.ts-export.com ########################################################### testssl.sh 3.0beta from https://testssl.sh/dev/ (f426a3b 2018-05-23 15:09:03 -- ) This program is free software. Distribution and modification under GPLv2 permitted. USAGE w/o ANY WARRANTY. USE IT AT YOUR OWN RISK!
Please file bugs @ https://testssl.sh/bugs/ ########################################################### Using "OpenSSL 1.0.2-chacha (1.0.2i-dev)" [~183 ciphers] on NC-PH-0657-10:./bin/openssl.Linux.x86_64 (built: "Jun 22 19:32:29 2016", platform: "linux-x86_64")

Start 2018-06-02 21:16:10 -->> 209.188.18.190:443 (www.ts-export.com) <<-- rDNS (209.188.18.190): ts-export.com. Service detected: HTTP Testing protocols via sockets except NPN+ALPN SSLv2 not offered (OK) SSLv3 not offered (OK) TLS 1 offered TLS 1.1 offered TLS 1.2 offered (OK) TLS 1.3 offered (OK): draft 28, draft 27, draft 26 NPN/SPDY h2, http/1.1 (advertised) ALPN/HTTP2 h2, http/1.1 (offered) Done 2018-06-02 21:16:17 [ 9s] -->> 209.188.18.190:443 (www.ts-export.com) <<--

Pertinent part of my configuration:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:20m; ssl_session_timeout 10m; ssl_ciphers 'TLS13-AES-128-GCM-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-128-CCM-8-SHA256:TLS13-AES-128-CCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:CAMELLIA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!RSA:!MD5:!PSK:!aECDH'; ssl_ecdh_curve secp384r1; ssl_stapling on; ssl_stapling_verify on;

Any suggestion? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280017,280017#msg-280017

From sca at andreasschulze.de Sun Jun 3 14:33:16 2018 From: sca at andreasschulze.de (A. Schulze) Date: Sun, 3 Jun 2018 16:33:16 +0200 Subject: TLS 1.3 not being selected. In-Reply-To: References: Message-ID: <6a8500a2-9417-7bd5-e46c-f3706a99ca5d@andreasschulze.de>

On 03.06.2018 at 13:59, shiz wrote: > TLS 1.3 offered (OK): draft 28, draft 27, draft 26

There are different, incompatible versions (drafts) of TLS 1.3. Browser and server must implement the same draft version; otherwise the browser falls back to TLS 1.2. See https://wiki.openssl.org/index.php/TLS1.3 Andreas

From mdounin at mdounin.ru Mon Jun 4 11:42:26 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Jun 2018 14:42:26 +0300 Subject: error : conflicting server name In-Reply-To: <46433955-45b8-17b8-b944-185b8b966746@ramix.org> References: <46433955-45b8-17b8-b944-185b8b966746@ramix.org> Message-ID: <20180604114226.GU32137@mdounin.ru>

Hello! On Sun, Jun 03, 2018 at 11:12:54AM +0200, RPM wrote: > Our website had been running perfectly under nginx until yesterday: > https://www.nouvelledonne.fr > The OS on the server is Debian Jessie (version 8.3.0). > > After the SSL certificate renewal (with Let's Encrypt) the following > error appeared: > > cat /var/log/nginx/error.log > > 2018/06/02 22:20:41 [warn] 16742#0: conflicting server name > "www.nouvelledonne.fr" on 0.0.0.0:443, ignored > 2018/06/02 22:20:41 [notice] 16742#0: signal process started

You have server_name set to "www.nouvelledonne.fr" in two different server{} blocks listening on 0.0.0.0:443. Check your configs.

-- Maxim Dounin http://mdounin.ru/
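For illustration, the kind of duplication that triggers this warning looks roughly like the sketch below. The file layout and certificate paths are assumptions, not taken from the report; only the repeated listen/server_name pair matters:

    server {
        listen 443 ssl;
        server_name www.nouvelledonne.fr;
        ssl_certificate     /etc/letsencrypt/live/www.nouvelledonne.fr/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/www.nouvelledonne.fr/privkey.pem;
        ...
    }

    # A second block with the same server_name on the same listen socket,
    # e.g. accidentally duplicated during certificate renewal, is what
    # produces "conflicting server name ... ignored".
    server {
        listen 443 ssl;
        server_name www.nouvelledonne.fr;
        ...
    }

Deleting or merging one of the two blocks resolves the warning, which is what the original poster ends up doing below.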
From RenePaulMages at ramix.org Mon Jun 4 13:28:01 2018 From: RenePaulMages at ramix.org (RPM) Date: Mon, 4 Jun 2018 15:28:01 +0200 Subject: error : conflicting server name In-Reply-To: <20180604114226.GU32137@mdounin.ru> References: <46433955-45b8-17b8-b944-185b8b966746@ramix.org> <20180604114226.GU32137@mdounin.ru> Message-ID:

On 04/06/2018 13:42, Maxim Dounin wrote: > Hello! > > On Sun, Jun 03, 2018 at 11:12:54AM +0200, RPM wrote: > >> Our website had been running perfectly under nginx until yesterday: >> https://www.nouvelledonne.fr >> The OS on the server is Debian Jessie (version 8.3.0). >> >> After the SSL certificate renewal (with Let's Encrypt) the following >> error appeared: >> >> cat /var/log/nginx/error.log >> >> 2018/06/02 22:20:41 [warn] 16742#0: conflicting server name >> "www.nouvelledonne.fr" on 0.0.0.0:443, ignored >> 2018/06/02 22:20:41 [notice] 16742#0: signal process started > > You have server_name set to "www.nouvelledonne.fr" in two > different server{} blocks listening on 0.0.0.0:443. Check your > configs.

Thanks a lot, Maxim, for your help. HTTPS access to our web site came back after deleting one of the two server blocks (in which www.nouvelledonne.fr appears) in the following file: /etc/nginx/sites-available/www.nouvelledonne.fr

-- All the best Rene Paul Mages (ramix) GnuPG key : 0x9840A6F7 http://renemages.wordpress.com/debian http://nosoftwarepatents.wikidot.com http://twitter.com/RenePaulMages

From nginx-forum at forum.nginx.org Tue Jun 5 09:34:10 2018 From: nginx-forum at forum.nginx.org (prajos) Date: Tue, 05 Jun 2018 05:34:10 -0400 Subject: large_client_header_buffers: Custom error pages are not working Message-ID: <641bf50b2522e14cab2137a02b816478.NginxMailingListEnglish@forum.nginx.org>

Hi there, I'm using nginx version 1.12.0 as a reverse proxy to my application servers. I allow certain top-level checks, like header size and count, to be done at the nginx level.

The server block looks like the following:

server { listen 443 ssl default_server; .. large_client_header_buffers 32 512; .. location / { ... }

error_page 400 /400.json; location = /400.json { root /etc/nginx/errors-files/; allow all; internal; } }

Then I start testing nginx with curl, adding a header of size 600 bytes. nginx promptly stops the request and dumps the default error page instead of my custom error page:

400 Request Header Or Cookie Too Large

400 Bad Request
Request Header Or Cookie Too Large
nginx
How can I get a CUSTOM ERROR page working for this situation, instead of the default page? Thanks Cheers prajos Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280035,280035#msg-280035

From mdounin at mdounin.ru Tue Jun 5 11:44:27 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2018 14:44:27 +0300 Subject: large_client_header_buffers: Custom error pages are not working In-Reply-To: <641bf50b2522e14cab2137a02b816478.NginxMailingListEnglish@forum.nginx.org> References: <641bf50b2522e14cab2137a02b816478.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180605114426.GB32137@mdounin.ru>

Hello! On Tue, Jun 05, 2018 at 05:34:10AM -0400, prajos wrote: > Hi there, > I'm using nginx version 1.12.0 as a reverse proxy to my application > servers. > I allow certain top-level checks, like header size and count, to be done at > the nginx level. > > The server block looks like the following: > > server { > listen 443 ssl default_server; > .. > large_client_header_buffers 32 512; > .. > location / { > ... > } > > error_page 400 /400.json; > location = /400.json { > root /etc/nginx/errors-files/; > allow all; > internal; > } > > } > > Then I start testing nginx with curl, adding a header of size 600 > bytes. > nginx promptly stops the request and dumps the default error page instead of > my custom error page: > > 400 Request Header Or Cookie Too Large
>
> 400 Bad Request
> Request Header Or Cookie Too Large
> nginx
> > > > > How can I get a CUSTOM ERROR page for this situation working instead of the > default page. Try handling 494 errors instead. It's a custom code used to report "Request Header Too Large" errors, translated to 400 just before returning to client. It was introduced in nginx 0.9.4 to make it possible to define a custom error page for these particular errors separately from generic 400 errors. (It looks like it's not documented anywhere but in CHANGES though. This needs to be fixed.) -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Jun 5 13:14:16 2018 From: nginx-forum at forum.nginx.org (prajos) Date: Tue, 05 Jun 2018 09:14:16 -0400 Subject: large_client_header_buffers: Custom error pages are not working In-Reply-To: <20180605114426.GB32137@mdounin.ru> References: <20180605114426.GB32137@mdounin.ru> Message-ID: Thanks Maxim Dounin, The trick worked. I did something like the following: server { large_client_header_buffers 12 64; ... error_page 494 =400 /400.json; error_page 400 /400.json; location = /400.json { add_header Funky-Header1 'Funky Value' always; root /etc/nginx/error-files/; allow all; internal; } } & now I'm able to get my custom message and Header. Cheers Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280035,280044#msg-280044 From mdounin at mdounin.ru Tue Jun 5 14:01:40 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2018 17:01:40 +0300 Subject: nginx-1.15.0 Message-ID: <20180605140140.GG32137@mdounin.ru> Changes with nginx 1.15.0 05 Jun 2018 *) Change: the "ssl" directive is deprecated; the "ssl" parameter of the "listen" directive should be used instead. *) Change: now nginx detects missing SSL certificates during configuration testing when using the "ssl" parameter of the "listen" directive. *) Feature: now the stream module can handle multiple incoming UDP datagrams from a client within a single session. *) Bugfix: it was possible to specify an incorrect response code in the "proxy_cache_valid" directive. *) Bugfix: nginx could not be built by gcc 8.1. *) Bugfix: logging to syslog stopped on local IP address changes. *) Bugfix: nginx could not be built by clang with CUDA SDK installed; the bug had appeared in 1.13.8. *) Bugfix: "getsockopt(TCP_FASTOPEN) ... failed" messages might appear in logs during binary upgrade when using unix domain listen sockets on FreeBSD. *) Bugfix: nginx could not be built on Fedora 28 Linux. *) Bugfix: request processing rate might exceed configured rate when using the "limit_req" directive. *) Bugfix: in handling of client addresses when using unix domain listen sockets to work with datagrams on Linux. *) Bugfix: in memory allocation error handling. -- Maxim Dounin http://nginx.org/ From m16+nginx at monksofcool.net Tue Jun 5 14:29:02 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Tue, 5 Jun 2018 16:29:02 +0200 Subject: nginx-1.15.0 In-Reply-To: <20180605140140.GG32137@mdounin.ru> References: <20180605140140.GG32137@mdounin.ru> Message-ID: <024ff472-6c3c-86f8-4cf5-decce6891a5e@monksofcool.net> Hello nginx team, a while ago it was mentioned here that the June release was planned to contain the new feature of passing environment variables to individual apps by using dynamic configuration data. I don't see this mentioned in the release notes? 
-Ralph From maxim at nginx.com Tue Jun 5 14:34:19 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 5 Jun 2018 17:34:19 +0300 Subject: nginx-1.15.0 In-Reply-To: <024ff472-6c3c-86f8-4cf5-decce6891a5e@monksofcool.net> References: <20180605140140.GG32137@mdounin.ru> <024ff472-6c3c-86f8-4cf5-decce6891a5e@monksofcool.net> Message-ID: Hi Ralph, On 05/06/2018 17:29, Ralph Seichter wrote: > Hello nginx team, > > a while ago it was mentioned here that the June release was planned to > contain the new feature of passing environment variables to individual > apps by using dynamic configuration data. > > I don't see this mentioned in the release notes? > You are probably talking about nginx-unit project, right? -- Maxim Konovalov From mdounin at mdounin.ru Tue Jun 5 14:35:14 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 Jun 2018 17:35:14 +0300 Subject: nginx-1.15.0 In-Reply-To: <024ff472-6c3c-86f8-4cf5-decce6891a5e@monksofcool.net> References: <20180605140140.GG32137@mdounin.ru> <024ff472-6c3c-86f8-4cf5-decce6891a5e@monksofcool.net> Message-ID: <20180605143513.GK32137@mdounin.ru> Hello! On Tue, Jun 05, 2018 at 04:29:02PM +0200, Ralph Seichter wrote: > Hello nginx team, > > a while ago it was mentioned here that the June release was planned to > contain the new feature of passing environment variables to individual > apps by using dynamic configuration data. > > I don't see this mentioned in the release notes? Application server is called Unit and it is a different product. -- Maxim Dounin http://mdounin.ru/ From m16+nginx at monksofcool.net Tue Jun 5 14:39:47 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Tue, 5 Jun 2018 16:39:47 +0200 Subject: nginx-1.15.0 In-Reply-To: References: <20180605140140.GG32137@mdounin.ru> <024ff472-6c3c-86f8-4cf5-decce6891a5e@monksofcool.net> Message-ID: <04032295-6b07-5225-c586-78d2fdea914a@monksofcool.net> Hello Maxim. > You are probably talking about nginx-unit project, right? You are right, my bad. I am waiting anxiously for that nginx-unit feature, and I have mistaken the announcement message. -Ralph From maxim at nginx.com Tue Jun 5 14:44:21 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 5 Jun 2018 17:44:21 +0300 Subject: nginx-1.15.0 In-Reply-To: <04032295-6b07-5225-c586-78d2fdea914a@monksofcool.net> References: <20180605140140.GG32137@mdounin.ru> <024ff472-6c3c-86f8-4cf5-decce6891a5e@monksofcool.net> <04032295-6b07-5225-c586-78d2fdea914a@monksofcool.net> Message-ID: On 05/06/2018 17:39, Ralph Seichter wrote: > Hello Maxim. > >> You are probably talking about nginx-unit project, right? > > You are right, my bad. I am waiting anxiously for that nginx-unit > feature, and I have mistaken the announcement message. > No problem -- unit-1.2 release is anticipated this week. Maxim -- Maxim Konovalov From kworthington at gmail.com Tue Jun 5 15:09:02 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 5 Jun 2018 11:09:02 -0400 Subject: [nginx-announce] nginx-1.15.0 In-Reply-To: <20180605140145.GH32137@mdounin.ru> References: <20180605140145.GH32137@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.15.0 for Windows https://kevinworthington.com/nginxwin1150 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/

On Tue, Jun 5, 2018 at 10:01 AM, Maxim Dounin wrote: > Changes with nginx 1.15.0 05 Jun 2018 > > *) Change: the "ssl" directive is deprecated; the "ssl" parameter of the > "listen" directive should be used instead. > > *) Change: now nginx detects missing SSL certificates during > configuration testing when using the "ssl" parameter of the "listen" > directive. > > *) Feature: now the stream module can handle multiple incoming UDP > datagrams from a client within a single session. > > *) Bugfix: it was possible to specify an incorrect response code in the > "proxy_cache_valid" directive. > > *) Bugfix: nginx could not be built by gcc 8.1. > > *) Bugfix: logging to syslog stopped on local IP address changes. > > *) Bugfix: nginx could not be built by clang with CUDA SDK installed; > the bug had appeared in 1.13.8. > > *) Bugfix: "getsockopt(TCP_FASTOPEN) ... failed" messages might appear > in logs during binary upgrade when using unix domain listen sockets > on FreeBSD. > > *) Bugfix: nginx could not be built on Fedora 28 Linux. > > *) Bugfix: request processing rate might exceed configured rate when > using the "limit_req" directive. > > *) Bugfix: in handling of client addresses when using unix domain listen > sockets to work with datagrams on Linux. > > *) Bugfix: in memory allocation error handling. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From gk at leniwiec.biz Tue Jun 5 17:04:22 2018 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Tue, 5 Jun 2018 19:04:22 +0200 Subject: nginx-1.15.0 In-Reply-To: <20180605140140.GG32137@mdounin.ru> References: <20180605140140.GG32137@mdounin.ru> Message-ID: <6e97f26c-c78e-22fe-d965-019ac38a0229@leniwiec.biz>

On 05.06.2018 at 16:01, Maxim Dounin wrote: > Changes with nginx 1.15.0 05 Jun 2018 [snip] > *) Feature: now the stream module can handle multiple incoming UDP > datagrams from a client within a single session.

Does this mean that the performance of UDP proxying of (for example) OpenVPN should be greatly increased in this release? -- Grzegorz Kulewski gk at leniwiec.biz +48 663 92 88 95

From arut at nginx.com Tue Jun 5 17:23:20 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 5 Jun 2018 20:23:20 +0300 Subject: nginx-1.15.0 In-Reply-To: <6e97f26c-c78e-22fe-d965-019ac38a0229@leniwiec.biz> References: <20180605140140.GG32137@mdounin.ru> <6e97f26c-c78e-22fe-d965-019ac38a0229@leniwiec.biz> Message-ID: <20180605172320.GU40083@Romans-MacBook-Air.local>

Hello Grzegorz, On Tue, Jun 05, 2018 at 07:04:22PM +0200, Grzegorz Kulewski wrote: > On 05.06.2018 at 16:01, Maxim Dounin wrote: > > Changes with nginx 1.15.0 05 Jun 2018 > [snip] > > *) Feature: now the stream module can handle multiple incoming UDP > > datagrams from a client within a single session. > > Does this mean that the performance of UDP proxying of (for example) OpenVPN should be greatly increased in this release?

Above all, this means that UDP proxying now works for protocols which require multiple packets to be sent back and forth between client and server, as opposed to simple request-response DNS-like protocols. OpenVPN is likely one of the protocols which are expected to work now. And yes, even for simple request-response protocols the performance has increased significantly, because the same session and upstream connection are reused for multiple client packets.

-- Roman Arutyunyan
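To make that concrete, a minimal sketch of a UDP stream proxy for something like OpenVPN is shown below; the addresses, port and timeout are assumptions, and the sketch is untested:

    stream {
        server {
            listen 1194 udp;
            # since 1.15.0, multiple datagrams from the same client are
            # handled within a single session rather than one per packet
            proxy_pass 192.0.2.10:1194;
            proxy_timeout 1m;
        }
    }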
From pgnet.dev at gmail.com Wed Jun 6 22:05:23 2018 From: pgnet.dev at gmail.com (PGNet Dev) Date: Wed, 6 Jun 2018 15:05:23 -0700 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? Message-ID: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com>

For some new WordPress sites, I'll be deploying fastcgi_cache as reverse proxy / page cache, instead of the usual Varnish. Although there are a number of WP-module-based PURGE options, I prefer that it's handled by the web server. A commonly referenced approach is to use 'FRiCKLE/ngx_cache_purge', https://github.com/FRiCKLE/ngx_cache_purge/ with associated nginx conf additions, https://easyengine.io/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/ https://www.ryadel.com/en/nginx-purge-proxy-cache-delete-invalidate-linux-centos-7/ ngx_cache_purge module development appears to have gone stale; no commits since ~ 2014. What are your experiences with current use of that module, with the latest 1.15.x nginx releases? Is there a cleaner, nginx-native approach? Or another nginx purge module that's better maintained? Comments &/or pointers to any docs, etc. would be helpful.

From rpaprocki at fearnothingproductions.net Wed Jun 6 22:20:17 2018 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Wed, 6 Jun 2018 15:20:17 -0700 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> Message-ID:

Hi, On Wed, Jun 6, 2018 at 3:05 PM, PGNet Dev wrote: > For some new WordPress sites, I'll be deploying fastcgi_cache as reverse > proxy / page cache, instead of the usual Varnish. > > Although there are a number of WP-module-based PURGE options, I prefer > that it's handled by the web server. > > A commonly referenced approach is to use 'FRiCKLE/ngx_cache_purge', > > https://github.com/FRiCKLE/ngx_cache_purge/ > > with associated nginx conf additions, > > https://easyengine.io/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/ > https://www.ryadel.com/en/nginx-purge-proxy-cache-delete-invalidate-linux-centos-7/ > > ngx_cache_purge module development appears to have gone stale; no commits > since ~ 2014. > > What are your experiences with current use of that module, with the latest > 1.15.x nginx releases? > > Is there a cleaner, nginx-native approach? Or another nginx purge module > that's better maintained? > > Comments &/or pointers to any docs, etc. would be helpful.

My $0.02, coming from experience building out scalable WP clusters: stick to Varnish here. FRiCKLE's module is great, but it would be scary to put into production - have fun with that test/release cycle :p The overhead of putting Nginx in front of Varnish is fairly small in the grand scheme of things. What's your motivation to strictly use Nginx? There is official support for cache purging with the commercial version of Nginx: https://www.nginx.com/products/nginx/caching/.
I've seen moderate hardware running Nginx (for TLS offload + WAF) -> Varnish (cache + purge) -> Apache/mod_php do 50k r/s on a single node. One would hope this suffices; it's a stable and proven stack. Again, ngx_cache_purge is great, but any unsupported module in a prod environment is scary when you're not writing the code. ;) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgnet.dev at gmail.com Wed Jun 6 22:42:25 2018 From: pgnet.dev at gmail.com (PGNet Dev) Date: Wed, 6 Jun 2018 15:42:25 -0700 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> Message-ID: Hi > My $0.02 coming from experience building out scalable WP clusters is, > stick to Varnish here. Miscommunication on my part -- my aforementioned Varnish-in-front referred to site dev in general. To date, it's been in front of Symfony sites. Works like a champ there. Since you're apparently working with WP under real-world loads, do you perchance have a production-ready, V6-compatible VCL & nginx config you can share? or point to? > FRiCKLE's module is great, but it would be scary to put into production- > have fun with that test/release cycle :p Yep. Hence my question(s)! > The overhead of putting Nginx in front of Varnish is fairly small in the > grand scheme of things. What's your motivation to strictly use Nginx? This time 'round, it's not entirely 'my' motivation; came with the job's "prefer to haves". Based, in apparently large part, on the usual use of TheGoogle; these 2 in particular: https://deliciousbrains.com/page-caching-varnish-vs-nginx-fastcgi-cache-2018/ https://www.scalescale.com/tips/nginx/nginx-vs-varnish/ > There is official support for cache purging with the commercial version > of Nginx: https://www.nginx.com/products/nginx/caching/. Ah, so not (yet) in the FOSS product. I see it's proxy_cache, not fastcgi_cache, based ... > I've seen moderate hardware running Nginx (for TLS offload + WAF) -> > Varnish (cache + purge) -> Apache/mod_php do 50k r/s on a single node. > One would hope this suffices; it's a stable and proven stack. Again, > ngx_cache_purge is great, but any unsupported module in a prod > environment is scary when you're not writing the code. ;) Again, yep. Thx! From rpaprocki at fearnothingproductions.net Wed Jun 6 23:09:50 2018 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Wed, 6 Jun 2018 16:09:50 -0700 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> Message-ID: Hi, On Wed, Jun 6, 2018 at 3:42 PM, PGNet Dev wrote: > Hi > > My $0.02 coming from experience building out scalable WP clusters is, >> stick to Varnish here. >> > > Miscommunication on my part -- my aforementioned Varnish-in-front referred > to site dev in general. > > To date, it's been in front of Symfony sites. Works like a champ there. > > Since you're apparently working with WP under real-world loads, do you > perchance have a production-ready, V6-compatible VCL & nginx config you can > share? or point to? > Nothing off the top of my head/isn't NDA-protected ;) But basic configs will generally serve you well. Varnish and Nginx are mature, stable projects; basic proxy_pass design with Nginx + basic Varnish config and a PURGE method handler should suffice for most operations. 
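As a sketch of that basic design on the nginx side (hostnames, ports and certificate paths are assumptions; the matching Varnish VCL, including its PURGE ACL, is omitted):

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;

        location / {
            # Varnish does the caching and purging; nginx only offloads TLS
            proxy_pass http://127.0.0.1:6081;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

PURGE requests are then sent to Varnish and authorized there in VCL, so no third-party nginx module is involved.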
Beyond that, tune Nginx for buffer sizes and do a bit of kernel tweaking for windowing, if you need it.

> FRiCKLE's module is great, but it would be scary to put into production - >> have fun with that test/release cycle :p > > Yep. Hence my question(s)!

Right - my point is, it's not officially supported, and Nginx has no stable API/ABI. With every release you want to leverage you need to walk through your entire test/canary/B-G/whatever cycle. That's a question only you can answer, but asking about "what about X release" is fruitless because of a complete lack of ABI support. In six months it's an obsolete question, whose only two answers are "be the developer and watch the changelog" or "compile the module, test it, and pray to the deity of your choice that it doesn't explode".

> > The overhead of putting Nginx in front of Varnish is fairly small in the >> grand scheme of things. What's your motivation to strictly use Nginx? > > This time 'round, it's not entirely 'my' motivation; came with the job's > "prefer to haves". > > Based, in apparently large part, on the usual use of TheGoogle; these 2 in > particular: > > https://deliciousbrains.com/page-caching-varnish-vs-nginx-fastcgi-cache-2018/ > https://www.scalescale.com/tips/nginx/nginx-vs-varnish/

Stepping back, these articles compare Nginx vs. Varnish straight-up. There is considerable difference to take into account in examining a stack leveraging both. And of course, always always always take into strong account the context and limitations in which these articles were written. They do not care about your particular business limitations, context, financial/resource restrictions, or anything else that makes your situation unique. A large grain of salt is always important to hold here. In particular, the first article doesn't leverage keepalive (I maintain "ab" is a horrid tool in this day and age), uses a cloud service with the client living in who-knows-what geographic/network topology, and quite frankly was written by an author who does not focus on systems/operations. Tread wisely. The second article is two and a half years old, offers no data whatsoever, and touches on a number of irrelevant topics (SSL, h2). I'd steer clear of any opinion offered here. If I were you I would strongly question this "prefer to have" if the only question is manageable cache purging. :)

> There is official support for cache purging with the commercial version of > Nginx: https://www.nginx.com/products/nginx/caching/. > > Ah, so not (yet) in the FOSS product. I see it's proxy_cache, not > fastcgi_cache, based ... > I imagine that's a question for the sales folks, outside of this list :D -------------- next part -------------- An HTML attachment was scrubbed... URL:

From pgnet.dev at gmail.com Wed Jun 6 23:18:22 2018 From: pgnet.dev at gmail.com (PGNet Dev) Date: Wed, 6 Jun 2018 16:18:22 -0700 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> Message-ID: <170752c4-c3b0-df34-0893-23e333029a35@gmail.com>

On 6/6/18 4:09 PM, Robert Paprocki wrote: > Nginx has no stable API/ABI. With every release you want to leverage you need to walk > through your entire test/canary/B-G/whatever cycle. That's a question > only you can answer, but asking about "what about X release" is > fruitless because of a complete lack of ABI support.
In six month's it's > an obsolete question, whose only two answers are "be the developer and > watching the changelog" or "compile the module, test it, and pray to the > diety of your choice that it doesn't explode". That's an excellent point. Esp since I tend to keep production current with Nginx releases. TBH, tho, I've said such a prayer-or-three re: Varnish! > Stepping back, these articles compare Nginx vs. Varnish straight-up. > There is considerable difference to take into account in examining a > stack leverage both. > ... Much agreed. Apparently my reference to 'TheGoogle' refs wasn't snarky or dismissive enough! ;-) > If I were you I would strongly question this "prefer to have" if the > only question is manageable cache purging. :) Been done. Not convincingly enough, apparently. You can lead a horse ... It's a Nordstrom's(-of-long-ago) moment: "Customer's Right. Because they say so." Thx agn! From rpaprocki at fearnothingproductions.net Wed Jun 6 23:42:03 2018 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Wed, 6 Jun 2018 16:42:03 -0700 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: <170752c4-c3b0-df34-0893-23e333029a35@gmail.com> References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> <170752c4-c3b0-df34-0893-23e333029a35@gmail.com> Message-ID: <5280D70A-20EF-4EE7-AEA7-45B773D9772C@fearnothingproductions.net> Hi, > On Jun 6, 2018, at 16:18, PGNet Dev wrote: > >> On 6/6/18 4:09 PM, Robert Paprocki wrote: >> Nginx has no stable API/ABI. With every release you want to leverage you need to walk through your entire test/canary/B-G/whatever cycle. That's a question only you can answer, but asking about "what about X release" is fruitless because of a complete lack of ABI support. In six month's it's an obsolete question, whose only two answers are "be the developer and watching the changelog" or "compile the module, test it, and pray to the diety of your choice that it doesn't explode". > > That's an excellent point. Esp since I tend to keep production current with Nginx releases. > > TBH, tho, I've said such a prayer-or-three re: Varnish! Certainly ;) I'm unfamiliar with Varnish's lifecycle. Just pointing out what should be noted (frankly, with the last few years of releases, unless there's a specific feature or bug you need to overcome, upgrading nginx to "latest" doesn't offer much value. I would love to be proved wrong here though ;) ). > >> Stepping back, these articles compare Nginx vs. Varnish straight-up. There is considerable difference to take into account in examining a stack leverage both. > > ... > > Much agreed. Apparently my reference to 'TheGoogle' refs wasn't snarky or dismissive enough! ;-) > >> If I were you I would strongly question this "prefer to have" if the only question is manageable cache purging. :) > > Been done. Not convincingly enough, apparently. > You can lead a horse ... > It's a Nordstrom's(-of-long-ago) moment: "Customer's Right. Because they say so." > > Thx agn! I got you :) good luck with it! You have our sympathies ;) From vest.april4 at gmail.com Thu Jun 7 06:31:07 2018 From: vest.april4 at gmail.com (Jon Franklin) Date: Thu, 7 Jun 2018 14:31:07 +0800 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? 
In-Reply-To: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> Message-ID: On Thu, Jun 7, 2018 at 6:05 AM, PGNet Dev wrote: > For some new WordPress sites, I'll be deploying fastcgi_cache as reverse proxy / page cache, instead of usual Varnish. > > Although there are a number of WP-module-based PURGE options, I prefer that it's handled by the web server. > > A commonly referenced approach is to use the 'FRiCKLE/ngx_cache_purge', > > https://github.com/FRiCKLE/ngx_cache_purge/ > > with associated nginx conf additions, > > https://easyengine.io/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/ > https://www.ryadel.com/en/nginx-purge-proxy-cache-delete-invalidate-linux-centos-7/ > > ngx_cache_purge module development appears to have gone stale; no commits since ~ 2014. > > What are your experiences with current use of that module, with latest 1.15x nginx releases? > > Is there a cleaner, nginx-native approach? Or other nginx purge module that's better maintained? > > Comments &/or pointers to any docs, etc would be helpful. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx You can try this: https://github.com/nginx-modules/ngx_cache_purge From pgnet.dev at gmail.com Thu Jun 7 14:38:30 2018 From: pgnet.dev at gmail.com (PGNet Dev) Date: Thu, 7 Jun 2018 07:38:30 -0700 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> Message-ID: <5e9badee-e545-ad65-4927-60be6e76cbc6@gmail.com> On 6/6/18 11:31 PM, Jon Franklin wrote: > You can try this: > https://github.com/nginx-modules/ngx_cache_purge Thx! I'd aptly managed to not find/notice that fork. Does address the 'stale' development status. Still, leaves some of the concerns about nginx ABI, etc. mentioned earlier. I'll set up a test instance and take it all for a spin. OTOH, I've setup a Varnish instance in front of WP. As predicted, it's straightforward. And, the test WP site 'feels' a *lot* more responsive than using the FastCGI cache alternative. I've no quantitative benchmarks ... yet ... and I've not yet run all the 'Canary' tests I need to by any stretch. But it certainly looks promising. From vbart at nginx.com Thu Jun 7 16:07:12 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 07 Jun 2018 19:07:12 +0300 Subject: Unit 1.2 release Message-ID: <7247090.2KqsfHKdYK@vbart-workstation> Hello, I'm glad to announce a new release of NGINX Unit. Changes with Unit 1.2 07 Jun 2018 *) Feature: configuration of environment variables for application processes. *) Feature: customization of php.ini path. *) Feature: setting of individual PHP configuration options. *) Feature: configuration of execution arguments for Go applications. *) Bugfix: keep-alive connections might hang after reconfiguration. 
Here's an example of new configuration parameters of application objects: { "args-example": { "type": "go", "executable": "/path/to/compiled/go/binary", "arguments": ["arg1", "arg2", "arg3"] }, "opts-example": { "type": "php", "root": "/www/site", "script": "phpinfo.php", "options": { "file": "/path/to/php.ini", "admin": { "memory_limit": "256M", "variables_order": "EGPCS", "short_open_tag": "1" }, "user": { "display_errors": "0" } } }, "env-example": { "type": "python", "path": "/www/django", "module": "wsgi", "environment": { "DB_ENGINE": "django.db.backends.postgresql_psycopg2", "DB_NAME": "mydb", "DB_HOST": "127.0.0.1" } } } Please note that "environment" can be configured for any type of application. Binary Linux packages and Docker images are available here: - Packages: https://unit.nginx.org/installation/#precompiled-packages - Docker: https://hub.docker.com/r/nginx/unit/tags/ wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Thu Jun 7 16:09:01 2018 From: nginx-forum at forum.nginx.org (neuronetv) Date: Thu, 07 Jun 2018 12:09:01 -0400 Subject: increase video image size Message-ID: <26c0cc548561bbdff23afe8678422834.NginxMailingListEnglish@forum.nginx.org> I use ffmpeg to stream a live video from my home to a vps running nginx. The video size coming from source (home) is 320x180. Is there any way nginx can inflate the video image? Can it be done in the nginx.conf file? This is my nginx.conf file: ----------------------------------------- worker_processes 1; error_log logs/error.log debug; events { worker_connections 1024; } rtmp { server { listen 1935; allow play all; #creates our "live" full-resolution HLS videostream from our incoming encoder stream and tells where to put the HLS video manifest and video fragments application live { allow play all; live on; record all; record_path /video_recordings; record_unique on; hls on; hls_nested on; hls_path /HLS/live; hls_fragment 10s; } #creates our "mobile" lower-resolution HLS videostream from the ffmpeg-created stream and tells where to put the HLS video manifest and video fragments application mobile { allow play all; live on; hls on; hls_nested on; hls_path /HLS/mobile; hls_fragment 10s; } #allows you to play your recordings of your live streams using a URL like "rtmp://my-ip:1935/vod/filename.flv" application vod { play /video_recordings; } } } http { include mime.types; default_type application/octet-stream; server { listen 90; server_name 192.168.254.178; #creates the http-location for our full-resolution (desktop) HLS stream - "http://my-ip/live/my-stream-key/index.m3u8" location /live { types { application/vnd.apple.mpegurl m3u8; } alias /HLS/live; add_header Cache-Control no-cache; } #creates the http-location for our mobile-device HLS stream - "http://my-ip/mobile/my-stream-key/index.m3u8" location /mobile { types { application/vnd.apple.mpegurl m3u8; } alias /HLS/mobile; add_header Cache-Control no-cache; } #allows us to see how stats on viewers on our Nginx site using a URL like: "http://my-ip/stats" location /stats { stub_status; } #allows us to host some webpages which can show our videos: "http://my-ip/my-page.html" location / { root html; index index.html index.htm; } } } -------------------------------------- I got this nginx.conf file off the internet because it worked in streaming video to mobile phones. The videao stream in question is: http://198.91.92.112:90/mobile/index.m3u8. If I paste this url into google chrome it plays but it's small. 
Is there any way to modify this url so chrome plays a larger image? I know google chrome has a zoom function under settings but I'd like to do this with minimal fuss to the viewer. Thanks for any help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280079,280079#msg-280079 From arut at nginx.com Thu Jun 7 16:25:59 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 7 Jun 2018 19:25:59 +0300 Subject: increase video image size In-Reply-To: <26c0cc548561bbdff23afe8678422834.NginxMailingListEnglish@forum.nginx.org> References: <26c0cc548561bbdff23afe8678422834.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180607162559.GW40083@Romans-MacBook-Air.local> Hi, On Thu, Jun 07, 2018 at 12:09:01PM -0400, neuronetv wrote: > I use ffmpeg to stream a live video from my home to a vps running nginx. The > video size coming from source (home) is 320x180. Is there any way nginx can > inflate the video image? Can it be done in the nginx.conf file? You can set up exec_push with ffmpeg at your incoming application and republish your stream to another application with any change, including size change. Something like this should work: application /src { live on; exec_push ffmpeg -i rtmp://localhost/src/$name -c:a copy -c:v libx264 -s 640x480 -f flv rtmp://localhost/dst/$name; } application /dst { # proceed here } > This is my > nginx.conf file: > > ----------------------------------------- > worker_processes 1; > error_log logs/error.log debug; > events { > worker_connections 1024; > } > rtmp { > server { > listen 1935; > allow play all; > > #creates our "live" full-resolution HLS videostream from our incoming > encoder stream and tells where to put the HLS video manifest and video > fragments > application live { > allow play all; > live on; > record all; > record_path /video_recordings; > record_unique on; > hls on; > hls_nested on; > hls_path /HLS/live; > hls_fragment 10s; > > } > > #creates our "mobile" lower-resolution HLS videostream from the > ffmpeg-created stream and tells where to put the HLS video manifest and > video fragments > application mobile { > allow play all; > live on; > hls on; > hls_nested on; > hls_path /HLS/mobile; > hls_fragment 10s; > } > > #allows you to play your recordings of your live streams using a URL like > "rtmp://my-ip:1935/vod/filename.flv" > application vod { > play /video_recordings; > } > } > } > > > http { > include mime.types; > default_type application/octet-stream; > > server { > listen 90; > server_name 192.168.254.178; > > #creates the http-location for our full-resolution (desktop) HLS stream - > "http://my-ip/live/my-stream-key/index.m3u8" > location /live { > types { > application/vnd.apple.mpegurl m3u8; > } > alias /HLS/live; > add_header Cache-Control no-cache; > } > > #creates the http-location for our mobile-device HLS stream - > "http://my-ip/mobile/my-stream-key/index.m3u8" > location /mobile { > types { > application/vnd.apple.mpegurl m3u8; > } > alias /HLS/mobile; > add_header Cache-Control no-cache; > } > > #allows us to see how stats on viewers on our Nginx site using a URL like: > "http://my-ip/stats" > location /stats { > stub_status; > } > > #allows us to host some webpages which can show our videos: > "http://my-ip/my-page.html" > location / { > root html; > index index.html index.htm; > } > } > } > -------------------------------------- > > I got this nginx.conf file off the internet because it worked in streaming > video to mobile phones. The videao stream in question is: > http://198.91.92.112:90/mobile/index.m3u8. 
If I paste this url into google > chrome it plays but it's small. Is there any way to modify this url so > chrome plays a larger image? I know google chrome has a zoom function under > settings but I'd like to do this with minimal fuss to the viewer. Thanks for > any help. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280079,280097#msg-280097 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan

From r at roze.lv Thu Jun 7 16:27:16 2018 From: r at roze.lv (Reinis Rozitis) Date: Thu, 7 Jun 2018 19:27:16 +0300 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> Message-ID:

> For some new WordPress sites, I'll be deploying fastcgi_cache as reverse > proxy / page cache, instead of the usual Varnish. > > A commonly referenced approach is to use 'FRiCKLE/ngx_cache_purge', > > https://github.com/FRiCKLE/ngx_cache_purge/ > > ngx_cache_purge module development appears to have gone stale; no commits > since ~ 2014.

It works just fine; for current nginx versions you just need to apply this patch: https://github.com/FRiCKLE/ngx_cache_purge/commit/c7345057ad5429617fc0823e92e3fa8043840cef.diff (or maybe the forked repo already has this implemented).

There are some situations where nginx is "better" suited than Varnish. In my case, at one project we decided/had to switch from Varnish to nginx caching because Varnish (even when you are using disk-based (mmap/file) backend storage) has a memory overhead per cacheable object (roughly ~1 KB). While 1 KB doesn't sound like much, when you start to have millions of objects it adds up; in our case, even though we had several terabytes of fast SSDs, the actual bottleneck ended up being that there was not enough RAM - the instances were limited to 32 GB, so in general there couldn't be more than ~33 million cached objects. Nginx, on the other hand, on the same hardware deals with 800+ million (and increasing) objects without a problem.

p.s. there is also obviously the ssl thing with varnish vs nginx .. but that's another topic. rr
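For reference, the purge setup from the module's README looks roughly like this when paired with fastcgi_cache; the zone name, socket path and /purge prefix are the README's examples, not anything specific to this thread:

    http {
        fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;

        server {
            location ~ \.php$ {
                fastcgi_pass unix:/run/php-fpm.sock;
                include fastcgi_params;
                fastcgi_cache WORDPRESS;
                fastcgi_cache_key "$scheme$request_method$host$request_uri";
                fastcgi_cache_valid 200 60m;
            }

            location ~ /purge(/.*) {
                allow 127.0.0.1;   # restrict purging to trusted clients
                deny all;
                fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
            }
        }
    }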
but > thats another topic. No real "vs" or "thing" IME. nginx(ssl terminator) -> varnish -> nginx works quite nicely. There's also Varnish's terminator, Hitch, as an alternative, https://www.varnish-software.com/plus/ssl-tls-support/ https://github.com/varnish/hitch which I've been told works well; I haven't bothered since I've already got nginx in place on the backend -- adding a listener on the frontend is trivial. From nginx-forum at forum.nginx.org Thu Jun 7 16:58:18 2018 From: nginx-forum at forum.nginx.org (5lava) Date: Thu, 07 Jun 2018 12:58:18 -0400 Subject: Custom HTTP code in limit_except Message-ID: <796d9175587cfa2ca0acc407b42f19e7.NginxMailingListEnglish@forum.nginx.org> I'd like to find an elegant and efficient solution to redirect GET and HEAD requests using code 301, but requests with other methods ? using code 308. Intuitively I wrote this: location /foo { limit_except GET { return 301 /bar; } return 308 /bar; } But allowed context for "return" are "server", "location", and "if", so nginx won't start (error: "return" directive is not allowed here). Another approach would be using "if" e.g.: location /foo { if ( $request_method = GET ) { return 301 /bar; } if ( $request_method = HEAD ) { return 301 /bar; } return 308 /bar; } But this doesn't seem quite elegant (regex could make it look a bit nicer but less efficient). I'm wondering if anyone can suggest a better idea? And, if nginx developers are reading this, is "if ( $request_method = GET )" equivalent to "limit_except GET", performance-wise? Also, just wondering if there are some technical limitations that prevent making "return" work inside "limit_except" block? Currently only "deny" works in "limit_except" but it's only capable of returning 403. Thank you. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280084,280084#msg-280084 From vbart at nginx.com Thu Jun 7 16:59:03 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 07 Jun 2018 19:59:03 +0300 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> Message-ID: <3091205.QqL4anuWLm@vbart-workstation> On Wednesday 06 June 2018 15:42:25 PGNet Dev wrote: [..] > > There is official support for cache purging with the commercial version > > of Nginx: https://www.nginx.com/products/nginx/caching/. > > Ah, so not (yet) in the FOSS product. I see it's proxy_cache, not > fastcgi_cache, based ... > Like almost all official modules, it's independent from the protocol used. http://nginx.org/r/proxy_cache_purge http://nginx.org/r/fastcgi_cache_purge http://nginx.org/r/uwsgi_cache_purge http://nginx.org/r/scgi_cache_purge wbr, Valentin V. Bartenev From r at roze.lv Thu Jun 7 18:12:47 2018 From: r at roze.lv (Reinis Rozitis) Date: Thu, 7 Jun 2018 21:12:47 +0300 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: <0a06a554-5d14-30ff-6ad2-c7ef107b2f29@gmail.com> References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> <0a06a554-5d14-30ff-6ad2-c7ef107b2f29@gmail.com> Message-ID: <6061960C14E84371904142B0304C3CF0@Neiroze> > No real "vs" or "thing" IME. nginx(ssl terminator) -> varnish -> nginx > works quite nicely. > > There's also Varnish's terminator, Hitch, as an alternative, Sure in general there is no problem offloading varnish (done it with nginx / stud / haproxy / hitch / h2o .. etc and still running several setups). 
From r at roze.lv Thu Jun 7 18:12:47 2018 From: r at roze.lv (Reinis Rozitis) Date: Thu, 7 Jun 2018 21:12:47 +0300 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: <0a06a554-5d14-30ff-6ad2-c7ef107b2f29@gmail.com> References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> <0a06a554-5d14-30ff-6ad2-c7ef107b2f29@gmail.com> Message-ID: <6061960C14E84371904142B0304C3CF0@Neiroze>

> No real "vs" or "thing" IME. nginx (ssl terminator) -> varnish -> nginx > works quite nicely. > > There's also Varnish's terminator, Hitch, as an alternative,

Sure, in general there is no problem offloading Varnish (done it with nginx / stud / haproxy / hitch / h2o .. etc., and still running several such setups). But again, it depends on your needs and your willingness to deal with a larger software stack (that's why I said it's another topic), as you end up with 2+ moving parts (which have their own configuration / own resources / network buffers / sockets / timeouts etc.), though obviously there are things which one does better than the other (and vice versa). I just added it because you initially asked for comments on a "nginx-native" approach (if we can consider a third-party (in the non-commercial version) module as native) ;)

p.s. for some time Varnish has had http2 support .. maybe at some point in the future either openssl gets cleaned up/rewritten enough for them to link with it, or they find some good-enough alternative :) rr

From nginx-forum at forum.nginx.org Thu Jun 7 23:57:43 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 07 Jun 2018 19:57:43 -0400 Subject: rewrite question Message-ID: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org>

Hi, Recently, Google has started spidering my website and, in addition to normal pages, appends "&" to all urls, even the pages excluded by robots.txt e.g. page.php?page=aaa -> page.php?page=aaa& Any idea how to redirect/rewrite this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280093,280093#msg-280093

From nginx-forum at forum.nginx.org Fri Jun 8 00:01:50 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 07 Jun 2018 20:01:50 -0400 Subject: TLS 1.3 not being selected. In-Reply-To: <6a8500a2-9417-7bd5-e46c-f3706a99ca5d@andreasschulze.de> References: <6a8500a2-9417-7bd5-e46c-f3706a99ca5d@andreasschulze.de> Message-ID:

Ah! Thank you very much. Recompiled with the older OpenSSL 1.1.1 pre2, since current browsers implement draft 23 atm. It's working now. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280017,280094#msg-280094

From nginx-forum at forum.nginx.org Fri Jun 8 09:15:57 2018 From: nginx-forum at forum.nginx.org (neuronetv) Date: Fri, 08 Jun 2018 05:15:57 -0400 Subject: increase video image size In-Reply-To: <20180607162559.GW40083@Romans-MacBook-Air.local> References: <20180607162559.GW40083@Romans-MacBook-Air.local> Message-ID:

Roman Arutyunyan Wrote: ------------------------------------------------------- > Something like this should work: > > application /src { > live on; > exec_push ffmpeg -i rtmp://localhost/src/$name -c:a copy -c:v > libx264 > -s 640x480 -f flv rtmp://localhost/dst/$name; > } > > application /dst { > # proceed here > } >

thanks, does this go in my nginx.conf file? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280079,280097#msg-280097

From nginx-forum at forum.nginx.org Fri Jun 8 09:35:19 2018 From: nginx-forum at forum.nginx.org (prabhat) Date: Fri, 08 Jun 2018 05:35:19 -0400 Subject: Performance of h2 is better than h2c Message-ID: <142b57d5482dc26dd1780102b4e16d4d.NginxMailingListEnglish@forum.nginx.org>

I am taking performance data on nginx. The client I used is h2load. Requests per second using h2 are much higher than with h2c, but I think they should not be, as h2 has the overhead of TLS.
I have used these commands: ./h2load https://xx.xx.xx.xx:4070 -n500000 -c1000 -t50 --- h2 ./h2load http://xx.xx.xx.xx:4090 -n500000 -c1000 -t50 --- h2c and on the server side the config is: http2_max_concurrent_streams 600000; http2_max_requests 600000; http2_streams_index_size 524288;

h2c is getting 23008.09 req/sec and h2 96091.85 req/sec. The request is for the default page provided with the nginx installation. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280098,280098#msg-280098

From peter_booth at me.com Fri Jun 8 10:07:25 2018 From: peter_booth at me.com (Peter Booth) Date: Fri, 08 Jun 2018 06:07:25 -0400 Subject: Performance of h2 is better than h2c In-Reply-To: <142b57d5482dc26dd1780102b4e16d4d.NginxMailingListEnglish@forum.nginx.org> References: <142b57d5482dc26dd1780102b4e16d4d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5D200056-E412-4D7D-9ED2-DF5F438D8F0B@me.com>

Is your client running on a different host than your server?

> On 8 Jun 2018, at 5:35 AM, prabhat wrote: > > I am taking performance data on nginx. > The client I used is h2load. > > Requests per second using h2 are much higher than with h2c, but I think they > should not be, as h2 has the overhead of TLS. > I have used these commands: > ./h2load https://xx.xx.xx.xx:4070 -n500000 -c1000 -t50 --- h2 > ./h2load http://xx.xx.xx.xx:4090 -n500000 -c1000 -t50 --- h2c > > and on the server side the config is: > http2_max_concurrent_streams 600000; > http2_max_requests 600000; > http2_streams_index_size 524288; > > h2c is getting 23008.09 req/sec and h2 96091.85 req/sec. > > The request is for the default page provided with the nginx installation. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280098,280098#msg-280098 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Fri Jun 8 10:11:14 2018 From: nginx-forum at forum.nginx.org (prabhat) Date: Fri, 08 Jun 2018 06:11:14 -0400 Subject: Performance of h2 is better than h2c In-Reply-To: <5D200056-E412-4D7D-9ED2-DF5F438D8F0B@me.com> References: <5D200056-E412-4D7D-9ED2-DF5F438D8F0B@me.com> Message-ID:

Yes. Both are running on different machines. The OS used is Ubuntu 14.04. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280098,280100#msg-280100

From arut at nginx.com Fri Jun 8 10:28:05 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 8 Jun 2018 13:28:05 +0300 Subject: increase video image size In-Reply-To: References: <20180607162559.GW40083@Romans-MacBook-Air.local> Message-ID: <20180608102805.GX40083@Romans-MacBook-Air.local>

Hi, On Fri, Jun 08, 2018 at 05:15:57AM -0400, neuronetv wrote: > Roman Arutyunyan Wrote: > ------------------------------------------------------- > > Something like this should work: > > > > application /src { > > live on; > > exec_push ffmpeg -i rtmp://localhost/src/$name -c:a copy -c:v > > libx264 > > -s 640x480 -f flv rtmp://localhost/dst/$name; > > } > > > > application /dst { > > # proceed here > > } > > > > thanks, does this go in my nginx.conf file?

Yes, it does.

> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280079,280097#msg-280097 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx

-- Roman Arutyunyan
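Spelled out against the poster's earlier nginx.conf, the suggestion amounts to adding a second application inside the existing rtmp server block, along these lines. The src/dst names and the 640x480 size come from the earlier suggestion; the HLS directives are copied from the poster's own config, and the combination is untested:

    rtmp {
        server {
            listen 1935;

            application src {
                live on;
                # re-encode the incoming 320x180 stream at a larger frame size
                exec_push ffmpeg -i rtmp://localhost/src/$name -c:a copy
                          -c:v libx264 -s 640x480 -f flv rtmp://localhost/dst/$name;
            }

            application dst {
                live on;
                hls on;
                hls_nested on;
                hls_path /HLS/live;
                hls_fragment 10s;
            }
        }
    }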
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280079,280097#msg-280097 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From m16+nginx at monksofcool.net Fri Jun 8 12:09:40 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Fri, 8 Jun 2018 14:09:40 +0200 Subject: Unit 1.2 release In-Reply-To: <7247090.2KqsfHKdYK@vbart-workstation> References: <7247090.2KqsfHKdYK@vbart-workstation> Message-ID: On 07.06.18 18:07, Valentin V. Bartenev wrote: > Feature: configuration of environment variables for application > processes. My thanks to the Unit team, this new feature is going to save me a lot of headaches. -Ralph From r1ch+nginx at teamliquid.net Mon Jun 11 10:00:35 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 11 Jun 2018 12:00:35 +0200 Subject: rewrite question In-Reply-To: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org> References: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org> Message-ID: This is almost certainly not Google as they obey robots.txt. The &amp; to & conversion is another sign of a poor quality crawler. Check the RDNS and you will find it's probably some IP faking Google UA, I suggest blocking at network level. On Fri, Jun 8, 2018 at 1:57 AM shiz wrote: > Hi, > > Recently, Google has started spidering my website and in addition to normal > pages, appended "&" to all urls, even the pages excluded by robots.txt > > e.g. page.php?page=aaa -> page.php?page=aaa& > > Any idea how to redirect/rewrite this? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,280093,280093#msg-280093 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jun 11 12:53:49 2018 From: nginx-forum at forum.nginx.org (ayman) Date: Mon, 11 Jun 2018 08:53:49 -0400 Subject: Nginx crashing with image filter and cache enabled Message-ID: <969a1c1dddfcd01561b0d9e33d46df13.NginxMailingListEnglish@forum.nginx.org> Hi, When enabling the cache on the image filter, nginx workers crash and keep getting 500. I'm using Nginx 1.14.0 error log: 2018/06/11 12:30:49 [alert] 46105#0: worker process 46705 exited on signal 11 (core dumped) proxy_cache_path /opt/nginx/img-cache/resized levels=1:2 keys_zone=resizedimages:10m max_size=3G; location ~ ^/resize/(\d+)x(\d+)/(.*) { proxy_pass https://proxypass/$3; proxy_cache resizedimages; proxy_cache_key "$host$document_uri"; proxy_temp_path off; proxy_cache_valid 200 1d; proxy_cache_valid any 1m; proxy_cache_use_stale error timeout invalid_header updating; image_filter resize $1 $2; image_filter_jpeg_quality 90; image_filter_buffer 20M; image_filter_interlace on; } If I disable the cache it's working perfectly! Do you recommend to change anything in the config? What could be the issue? Thanks.
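(A hedged aside on debugging a crash like this: the details usually requested are the build configuration and a backtrace from the dumped core. A sketch, with illustrative binary and core paths rather than ones taken from this report:

/usr/local/nginx/sbin/nginx -V                    # build flags and library versions
gdb /usr/local/nginx/sbin/nginx /path/to/core     # open the dumped core
(gdb) bt full                                     # full backtrace of the crashed worker

)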
Ayman Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280115,280115#msg-280115 From nginx-forum at forum.nginx.org Mon Jun 11 13:42:12 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Mon, 11 Jun 2018 09:42:12 -0400 Subject: rewrite question In-Reply-To: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org> References: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org> Message-ID: <19c2f3fa45384a0d9536af6b3dd11a27.NginxMailingListEnglish@forum.nginx.org> I see another poster had written this, and deleted it afterwards. `This is almost certainly not Google as they obey robots.txt. The & to & conversion is another sign of a poor quality crawler. Check the RDNS and you will find it's probably some IP faking Google UA, I suggest blocking at network level.` My actual reply: 1 - It is Google 2 - They do not always use a friendly user agent. That is a fact. 3 - When they don't, they also don't follow robots.txt. So my problem remains. I don't want to block those IP ranges at iptables level because it's Google. So a rewrite or redirect - I'm not sure exactly which ATM - is badly needed. Depends on the URL. Here are the IP ranges, definitely Google. Referenced in https://github.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker/issues/175 And here is a copy of my original message. "Hi, I'm still faithful to your script. It does great things to my websites. Thanks for that. Not a bug properly speaking, just an observation you might like. Recently, within the last 1-2 months, I got a lot of strange impossible requests all with the same User-Agent, no referrer and HTTP/1.1. All came from Google. They do not respect robots.txt and sniff everywhere they're not supposed to. I thought you should be made aware of it. I know you whitelist Google IPs, but after inspection from other users, you might want to revisit those. User-agent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36" Ranges: 66.249.64.0/19 72.14.199.0/24 Examples of request: 72.14.199.18 - - [27/May/2018:14:12:01 -0700] "GET /page.php?page%3Dabout_himeji_forklifts& HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36" 72.14.199.4 - - [27/May/2018:14:12:24 -0700] "GET /page.php?page%3Dabout_himeji_forklifts& HTTP/1.1" 302 165 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36" In the meantime, I circumvented your whitelist by issuing manual range bans. After 6 weeks, no more of those strange requests, and bandwidth has dropped significantly since those 2 ranges were requesting quite a few hundred megabytes each day! Thanks again." Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280093,280117#msg-280117 From nginx-forum at forum.nginx.org Mon Jun 11 13:50:55 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Mon, 11 Jun 2018 09:50:55 -0400 Subject: rewrite question In-Reply-To: <19c2f3fa45384a0d9536af6b3dd11a27.NginxMailingListEnglish@forum.nginx.org> References: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org> <19c2f3fa45384a0d9536af6b3dd11a27.NginxMailingListEnglish@forum.nginx.org> Message-ID: 'The & to & conversion is another sign of a poor quality crawler.' I wasn't referring to any of them but to '&'. Important difference.
That also explains my failure to filter it from the parameters, since parameters contain an equal sign. E.g. ...&= something, or even &= & or just &, would also be easy to filter out. But that is not the problem I'm having here. It's different, hence my request for assistance to the nginx community. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280093,280118#msg-280118 From francis at daoine.org Mon Jun 11 15:05:35 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Jun 2018 16:05:35 +0100 Subject: rewrite question In-Reply-To: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org> References: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180611150535.GC3111@daoine.org> On Thu, Jun 07, 2018 at 07:57:43PM -0400, shiz wrote: Hi there, > Recently, Google has started spidering my website and in addition to normal > pages, appended "&" to all urls, even the pages excluded by robots.txt > > e.g. page.php?page=aaa -> page.php?page=aaa& > > Any idea how to redirect/rewrite this? Untested, but: if ($args ~ "&$") { return 400; } should handle all requests that end in the four characters you report. You may prefer a different response code. Good luck with it, f -- Francis Daly francis at daoine.org From r1ch+nginx at teamliquid.net Mon Jun 11 15:37:27 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 11 Jun 2018 17:37:27 +0200 Subject: rewrite question In-Reply-To: <20180611150535.GC3111@daoine.org> References: <6cc85325536ec6ead2cc3fe03062d8ef.NginxMailingListEnglish@forum.nginx.org> <20180611150535.GC3111@daoine.org> Message-ID: That IP resolves to rate-limited-proxy-72-14-199-18.google.com - this is not the Google search crawler, which is why it ignores your robots.txt. No one seems to know for sure what the rate-limited-proxy IPs are used for. They could represent random Chrome users using the Google data saving feature, hence the varying user-agents you will see. Either way, they are probably best not blocked, as they could represent many end user IPs. Maybe there is an X-Forwarded-For header you could look at. The Google search crawler will resolve to an IP like crawl-66-249-64-213.googlebot.com. On Mon, Jun 11, 2018 at 5:05 PM Francis Daly wrote: > On Thu, Jun 07, 2018 at 07:57:43PM -0400, shiz wrote: > > Hi there, > > > Recently, Google has started spidering my website and in addition to > normal > > pages, appended "&" to all urls, even the pages excluded by robots.txt > > > > e.g. page.php?page=aaa -> page.php?page=aaa& > > > > Any idea how to redirect/rewrite this? > > Untested, but: > > if ($args ~ "&$") { return 400; } > > should handle all requests that end in the four characters you report. > > You may prefer a different response code. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Tue Jun 12 07:03:28 2018 From: lagged at gmail.com (Andrei) Date: Tue, 12 Jun 2018 10:03:28 +0300 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives?
In-Reply-To: <6061960C14E84371904142B0304C3CF0@Neiroze> References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> <0a06a554-5d14-30ff-6ad2-c7ef107b2f29@gmail.com> <6061960C14E84371904142B0304C3CF0@Neiroze> Message-ID: I ran both Varnish (for caching) and Nginx (ssl offloading) for quite some time in production, but then switched to Nginx only. The main reasons being: - The sheer amount of added context switches (proxying was done locally on a cPanel box, seeing 20-30k reqs/sec during peak hours) - Issues with managing hacks/changes for spoofing the HTTPS env in Apache, while maintaining the option of regular updates (CloudLinux ended up adding this patch for me in its builds https://alex-at.net/blog/apache-mod_remoteip-mod_rpaf => https://www.cloudlinux.com/cloudlinux-os-blog/entry/beta-easyapache-4-updated-1-31 to make things easier, but it was already too late as I had already jumped to Nginx) - Having to manage two software versions, configs, auto config builders used by internal tools, etc - More added headaches with central logging - No projected TLS support in Varnish - Bare minimum H2 support in Varnish vs a more mature implementation in Nginx Since Nginx can pretty much do everything Varnish does, and more, I decided to avoid the headaches and just jump over to Nginx (even though I've been an avid Varnish fan since 2.1.5). As for a VCL replacement and purging in Nginx, I suggest reading up on Lua and checking out openresty if you want streamlined updates and don't want to manually compile/manage modules. To avoid overloading the filesystem with added I/O from purge requests/scans/etc, I wrote a simple Perl script that handles all the PURGE requests in order to have regex support and control over the removals (it basically validates ownership to purge on the related domain, queues removals, then has another thread for the cleanup). Hope this helps some :) On Thu, Jun 7, 2018 at 9:12 PM, Reinis Rozitis wrote: > No real "vs" or "thing" IME. nginx(ssl terminator) -> varnish -> nginx >> works quite nicely. >> >> There's also Varnish's terminator, Hitch, as an alternative, >> > > Sure in general there is no problem offloading varnish (done it with nginx > / stud / haproxy / hitch / h2o .. etc and still running several setups). > > But again depends on your needs and willingness to deal with larger > software stack (that's why I said it's another topic) as you end up with 2+ > moving parts (which have their own configuration / own resources / network > buffers / sockets / timeouts etc) but obviously there are things which one > does better than other (and vice versa). > > I just added it because you initially asked to comment on "nginx-native" > approach (if we can consider a third-party (in non-commercial version) > module as native) ;) > > > p.s. for some time varnish has http2 support .. maybe at some point in > future either openssl gets cleaned-up/rewritten enough for them to link > with it or they find some good-enough alternative :) > > rr > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue Jun 12 10:08:24 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Tue, 12 Jun 2018 06:08:24 -0400 Subject: rewrite question In-Reply-To: <20180611150535.GC3111@daoine.org> References: <20180611150535.GC3111@daoine.org> Message-ID: <4f2c33fbfeaf161a17683814c0f46d39.NginxMailingListEnglish@forum.nginx.org> 'if ($args ~ "&$") { return 400; }' Thanks a lot! Exactly what I needed :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,94128,280124#msg-280124 From nginx-forum at forum.nginx.org Tue Jun 12 12:09:18 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Tue, 12 Jun 2018 08:09:18 -0400 Subject: Secure Link Md5 with Primary and Secondary Secret Message-ID: There is a requirement for token authentication using two secret keys, i.e. a primary and a secondary secret, for a location block. If the token with the first secret gives a 405, the token generated with the second secret should allow the request. This is required for changing the secret key in production on the server, so that some users will be allowed with the old secret and some with the new secret in the meanwhile, till the secret is updated on all servers and clients. Something similar to below implementation https://cdnsun.com/knowledgebase/cdn-live/setting-a-token-authentication-protect-your-cdn-content Regards & Thanks , Anish Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280125,280125#msg-280125 From nginx-forum at forum.nginx.org Tue Jun 12 12:16:00 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Tue, 12 Jun 2018 08:16:00 -0400 Subject: Secure Link Md5 with Primary and Secondary Secret In-Reply-To: References: Message-ID: <643d0c50fb7bdd3d103ce08e6141a199.NginxMailingListEnglish@forum.nginx.org> Current Configuration secure_link $arg_token,$arg_expiry; secure_link_md5 "secret$arg_expiry"; if ($secure_link = "") {return 405;} if ($secure_link = "0"){return 410;} Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280125,280126#msg-280126 From francis at daoine.org Tue Jun 12 17:22:26 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 12 Jun 2018 18:22:26 +0100 Subject: Secure Link Md5 with Primary and Secondary Secret In-Reply-To: References: Message-ID: <20180612172226.GD3111@daoine.org> On Tue, Jun 12, 2018 at 08:09:18AM -0400, anish10dec wrote: Hi there, > There is a requirement for token authentication using two secret keys, i.e. a > primary and a secondary secret, for a location block. If this is the same scenario as in https://forum.nginx.org/read.php?2,275668 and in https://forum.nginx.org/read.php?2,278063 then I'm pretty sure that the answer is the same as those times. > If the token with the first secret gives a 405, the token generated with the second > secret should allow the request. There is a suggested untested config in an earlier response. Does it work for you? > This is required for changing the secret key in production on the server, so that > some users will be allowed with the old secret and some with the new secret in the > meanwhile, till the secret is updated on all servers and clients. If the client knows it, it's not a secret. f -- Francis Daly francis at daoine.org From karljohnson.it at gmail.com Tue Jun 12 19:48:19 2018 From: karljohnson.it at gmail.com (Karl Johnson) Date: Tue, 12 Jun 2018 15:48:19 -0400 Subject: Using variable in vhost Message-ID: Hello, I have a multi-user nginx setup that uses the same fpm config for all vhosts, but each vhost has its own user, so I had to set a variable in the vhost config to set the fastcgi_pass path in the included file.
This way the vhost config is always clean. I've read somewhere that variable in vhost is not recommended. What do you think of this setup? It's currently working pretty well so I was wondering. Thanks, Karl [root at web ~]# cat /etc/nginx/conf.d/vhosts/exemple.com.conf server { listen 80; server_name exemple.com; root /home/webtest/exemple.com/public_html; access_log /var/log/nginx/exemple.com-access_log main; error_log /var/log/nginx/exemple.com-error_log warn; set $fpmuser webtest; if ($bad_bot) { return 444; } include conf.d/custom/restrictions.conf; include conf.d/custom/pagespeed.conf; include conf.d/custom/fpm-wordpress-user.conf; } [root at web ~]# cat /etc/nginx/conf.d/custom/fpm-wordpress-user.conf location / { rewrite /wp-admin$ $scheme://$host$uri/ permanent; try_files $uri $uri/ /index.php?$args; } location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|woff)$ { expires 2w; log_not_found off; } location ~* \.(?:css|js)$ { expires 1w; add_header Pragma public; add_header Cache-Control "public"; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_buffers 8 256k; fastcgi_buffer_size 256k; fastcgi_send_timeout 300; fastcgi_read_timeout 300; include fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/run/php-fpm/$fpmuser.sock; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefferson at aoeu2code.com Wed Jun 13 06:10:11 2018 From: jefferson at aoeu2code.com (Jefferson Carpenter) Date: Wed, 13 Jun 2018 06:10:11 +0000 Subject: Support for ticket #557 Message-ID: <92448991-7ca8-8b0e-5250-9932d06c8d13@aoeu2code.com> Just want to show my support for allowing `autoindex` to include dotfiles (ticket #557). I am relatively new to nginx, and have been using it in increasingly large and complex capacities recently. Specifically, more than once I have now set up location blocks that basically enable directory browsing. These location blocks generally look like this: location ~ ^/git/?(.*)$ { root /home/aoeu/git-webserver; autoindex on; try_files /$1 /$1/ 404; } (where that location block takes requests to the /git/ path on my domain and allows it to be browsed as my local /home/aoeu/git-webserver directory - generally I am interested in turning a particular path on my domain into a file browse of a particular directory on my server). Problem with this being, the `autoindex on` directive skips over hidden (`.`) files when it generates directory listings, and cannot be configured not to. I'm still up in the air about how best to allow my sites to list and statically serve files. More than simply displaying hidden (`.`) files, I would like to be able to configure (maybe through a regular expression) specifically what files are to be hidden, but given `autoindex on` displaying all files (not hiding `.` files) this could probably be done effectively enough by modifying the regular expression that my location block matches paths against. That is all. If anyone has ideas on plugins that could help me create browsable directory listings *including* all dot files that would be great - I did see https://www.nginx.com/resources/wiki/modules/fancy_index/ but I don't think that supports my full use case of mapping a specific path on my domain onto a specific directory on my computer. 
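(An editorial aside on the path-to-directory mapping itself, separate from the dotfile question: a prefix location with alias expresses the same mapping without a regex capture. A sketch reusing the hypothetical paths from the post above; note it does not change the dotfile behaviour, autoindex still hides them:

location /git/ {
    alias /home/aoeu/git-webserver/;    # trailing slash on both the location and the alias matters
    autoindex on;
}

)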
I also saw some code under ticket #557 that would help me to recompile nginx so that `autoindex on` does not skip over dot files, and that's probably what I'll do as the most direct way to meet my wants and needs in lieu of any way to do it without compiling nginx locally. Jefferson From m16+nginx at monksofcool.net Wed Jun 13 09:01:09 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Wed, 13 Jun 2018 11:01:09 +0200 Subject: Should listen *:443 bind to IPv4 and IPv6 ? Message-ID: <3a1ac914-6b9d-66cf-7690-b0cda254dcd6@monksofcool.net> Hi folks, I wonder if I missed an announcement for a change in nginx behaviour or if some local issue is causing me problems. The configuration server { listen *:443 ssl default_server; } used to bind to both 0.0.0.0:443 and [::]:443, but since I updated to nginx 1.15.0 it only binds to IPv4 but no longer to IPv6. When I add a second listen directive server { listen *:443 ssl default_server; listen [::]:443 ssl default_server; } the server can be reached via both IPv6 and IPv4 again. Was this a deliberate change? -Ralph From mdounin at mdounin.ru Wed Jun 13 12:19:59 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jun 2018 15:19:59 +0300 Subject: Should listen *:443 bind to IPv4 and IPv6 ? In-Reply-To: <3a1ac914-6b9d-66cf-7690-b0cda254dcd6@monksofcool.net> References: <3a1ac914-6b9d-66cf-7690-b0cda254dcd6@monksofcool.net> Message-ID: <20180613121959.GX32137@mdounin.ru> Hello! On Wed, Jun 13, 2018 at 11:01:09AM +0200, Ralph Seichter wrote: > I wonder if I missed an announcement for a change in nginx behaviour > or if some local issue is causing me problems. The configuration > > server { > listen *:443 ssl default_server; > } > > used to bind to both 0.0.0.0:443 and [::]:443, but since I updated to > nginx 1.15.0 it only binds to IPv4 but no longer to IPv6. When I add > a second listen directive > > server { > listen *:443 ssl default_server; > listen [::]:443 ssl default_server; > } > > the server can be reached via both IPv6 and IPv4 again. Was this a > deliberate change? The "listen *:443" snippet always created only an IPv4 listening socket. Though I think I've seen some distributions patching nginx to create IPv6+IPv4 sockets instead. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Jun 13 12:26:26 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jun 2018 15:26:26 +0300 Subject: Nginx crashing with image filter and cache enabled In-Reply-To: <969a1c1dddfcd01561b0d9e33d46df13.NginxMailingListEnglish@forum.nginx.org> References: <969a1c1dddfcd01561b0d9e33d46df13.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180613122626.GY32137@mdounin.ru> Hello! On Mon, Jun 11, 2018 at 08:53:49AM -0400, ayman wrote: > When enabling the cache on the image filter, nginx workers crash and keep > getting 500. > > I'm using Nginx 1.14.0 > > error log: > 2018/06/11 12:30:49 [alert] 46105#0: worker process 46705 exited on signal > 11 (core dumped) > > proxy_cache_path /opt/nginx/img-cache/resized levels=1:2 > keys_zone=resizedimages:10m max_size=3G; > > location ~ ^/resize/(\d+)x(\d+)/(.*) { > proxy_pass https://proxypass/$3; > proxy_cache resizedimages; > proxy_cache_key "$host$document_uri"; > proxy_temp_path off; > proxy_cache_valid 200 1d; > proxy_cache_valid any 1m; > proxy_cache_use_stale error timeout invalid_header > updating; > > image_filter resize $1 $2; > image_filter_jpeg_quality 90; > image_filter_buffer 20M; > image_filter_interlace on; > > } > > If I disable the cache it's working perfectly!
> > Do you recommend to change anything in the config? What could be the issue? You may want to provide "nginx -V" output, a backtrace as obtained from the core dump, and details on the GD library used. -- Maxim Dounin http://mdounin.ru/ From pgnet.dev at gmail.com Wed Jun 13 13:56:37 2018 From: pgnet.dev at gmail.com (PGNet Dev) Date: Wed, 13 Jun 2018 06:56:37 -0700 Subject: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives? In-Reply-To: References: <1a9a0819-6396-a430-6e62-b48c1c546bca@gmail.com> <0a06a554-5d14-30ff-6ad2-c7ef107b2f29@gmail.com> <6061960C14E84371904142B0304C3CF0@Neiroze> Message-ID: <42f07cbc-f903-04ca-b308-7ae2dd71d64f@gmail.com> Hi On 6/12/18 12:03 AM, Andrei wrote: > - The sheer amount of added context switches (proxying was done locally on > a cPanel box, seeing 20-30k reqs/sec during peak hours) Not clear what you mean here > - Having to manage two software versions, configs, auto config builders > used by internal tools, etc Not a huge headache here. I can see this gets possibly annoying at scale with # of sites. > - More added headaches with central logging Having Varnish's detailed logging is a big plus, IME, for tracking down cache issues, specifically, and header issues in general. No issues with 'central' logging. > - No projected TLS support in Varnish Having a terminator out front hasn't been a problem, save for the additional config considerations. > - Bare minimum H2 support in Varnish vs a more mature implementation in > Nginx This one I'm somewhat aware of -- haven't yet convinced myself of if/where there's a really problematic bottleneck. > Since Nginx can pretty much do everything Varnish does, and more, Except for the richness of the VCL ... > I decided to avoid the headaches and just jump over to Nginx (even though > I've been an avid Varnish fan since 2.1.5). As for a VCL replacement and > purging in Nginx, I suggest reading up on Lua and checking out openresty > if you want streamlined updates and don't want to manually > compile/manage modules. To avoid overloading the filesystem with added > I/O from purge requests/scans/etc, I wrote a simple Perl script that > handles all the PURGE requests in order to have regex support and > control over the removals (it basically validates ownership to purge on > the related domain, queues removals, then has another thread for the > cleanup). My main problem so far is that WordPress appears to be generally Varnish-UNfriendly. Not core, but plugins. With Varnish, I'm having all SORTS of issues/artifacts cropping up. So far, (my) VCL pass exceptions haven't been sufficient. Without Varnish, there are far fewer 'surprises'. Then again, I'm not a huge WP fan to begin with; it's a pain to debug anything beyond standard server config issues. Caching in particular. OTOH, my sites with Nginx+Varnish with Symfony work without a hitch. My leaning is, for WP, Nginx only. For SF, Nginx+Varnish. And, TBH, avoiding WP if/when I can. > Hope this helps some :) It does, thx! From m16+nginx at monksofcool.net Wed Jun 13 15:10:51 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Wed, 13 Jun 2018 17:10:51 +0200 Subject: Should listen *:443 bind to IPv4 and IPv6 ? In-Reply-To: <20180613121959.GX32137@mdounin.ru> References: <3a1ac914-6b9d-66cf-7690-b0cda254dcd6@monksofcool.net> <20180613121959.GX32137@mdounin.ru> Message-ID: On 13.06.18 14:19, Maxim Dounin wrote: > The "listen *:443" snippet always created only an IPv4 listening socket. That's interesting.
Maybe Gentoo Linux did indeed add a custom patch to previous nginx versions. What is the shortest officially recommended way to bind nginx to port 443 for both IPv4 and IPv6? I should probably mention that my servers usually service multiple domains using TLS SNI. server { listen *:443 ssl; listen [::]:443; } works, but perhaps there is a method with just one listen statement? -Ralph From mdounin at mdounin.ru Wed Jun 13 15:58:31 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jun 2018 18:58:31 +0300 Subject: Should listen *:443 bind to IPv4 and IPv6 ? In-Reply-To: References: <3a1ac914-6b9d-66cf-7690-b0cda254dcd6@monksofcool.net> <20180613121959.GX32137@mdounin.ru> Message-ID: <20180613155831.GB32137@mdounin.ru> Hello! On Wed, Jun 13, 2018 at 05:10:51PM +0200, Ralph Seichter wrote: > On 13.06.18 14:19, Maxim Dounin wrote: > > > The "listen *:443" snippet always created only an IPv4 listening socket. > > That's interesting. Maybe Gentoo Linux did indeed add a custom patch to > previous nginx versions. > > What is the shortest officially recommended way to bind nginx to port > 443 for both IPv4 and IPv6? I should probably mention that my servers > usually service multiple domains using TLS SNI. > > server { > listen *:443 ssl; > listen [::]:443; > } > > works, but perhaps there is a method with just one listen statement? Using listen 443 ssl; listen [::]:443 ssl; should be good enough. While it is possible to use just one listen statement with an IPv6 address and "ipv6only=off", I would rather recommend to use an explicit configuration with two distinct listening sockets. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Jun 14 13:27:01 2018 From: nginx-forum at forum.nginx.org (Enrico) Date: Thu, 14 Jun 2018 09:27:01 -0400 Subject: Nginx redirection In-Reply-To: <20180523150046.GB8604@aleks-PC> References: <20180523150046.GB8604@aleks-PC> Message-ID: <6a324076a0be621b86872de5568f87ac.NginxMailingListEnglish@forum.nginx.org> Hi, Thanks for your help. Your solution is good, my server works! Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279813,280143#msg-280143 From kaushalshriyan at gmail.com Sat Jun 16 05:26:37 2018 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Sat, 16 Jun 2018 10:56:37 +0530 Subject: 413 Request Entity Too Large Message-ID: Hi, I am encountering 413 Request Entity Too Large in the browser. I have added upload_max_filesize = 20M. I have added client_max_body_size 20M; in nginx.conf and i am still facing the issue. nginx version is 1.12. Please let me know if you need any additional information. Any help will be highly appreciated. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Sat Jun 16 08:06:39 2018 From: al-nginx at none.at (Aleksandar Lazic) Date: Sat, 16 Jun 2018 10:06:39 +0200 Subject: 413 Request Entity Too Large In-Reply-To: References: Message-ID: <20180616080639.GC14440@aleks-PC> Hi. On 16/06/2018 10:56, Kaushal Shriyan wrote: >Hi, > >I am encountering 413 Request Entity Too Large in the browser. I have >added upload_max_filesize = 20M. I have added client_max_body_size 20M; >in nginx.conf and i am still facing the issue. nginx version is >1.12. Please let me know if you need any additional information. > >Any help will be highly appreciated. What's in the log file? There will be more information about the reason of the 413 as you can see in the source.
https://github.com/nginx/nginx/search?q=Request+Entity+Too+Large&unscoped_q=Request+Entity+Too+Large >Best Regards, > >Kaushal Best regards Aleks From alex at nixd.org Sat Jun 16 23:36:25 2018 From: alex at nixd.org (Alexander Morozov) Date: Sun, 17 Jun 2018 01:36:25 +0200 Subject: outbound UDP port 1 In-Reply-To: References: Message-ID: <0ecb26e15a56229fd551af62230c60fb@nixd.org> Hello. I was doing experiments with the sandboxing in FreeBSD and I executed nginx sandboxed (in sandbox for FreeBSD) and I noticed that sandbox blocked 2 outbound datagrams from nginx (uid:root) process. Jun 17 00:26:02 ** sandboxd[49377]: action: deny for pid[30392]nginx uid:0 procedure: network-outbound[90] network outbound remote udp/ip4:65.158.94.185:1 Jun 17 00:26:02 ** sandboxd[49377]: action: deny for pid[30392]nginx uid:0 procedure: network-outbound[90] network outbound remote udp/ip4:65.158.94.168:1 Jun 17 01:17:03 ** sandboxd[49377]: action: deny for pid[61454]nginx uid:0 procedure: network-outbound[90] network outbound remote udp/ip4:205.197.140.171:1 Jun 17 01:17:03 ** sandboxd[49377]: action: deny for pid[61454]nginx uid:0 procedure: network-outbound[90] network outbound remote udp/ip4:205.197.140.178:1 Jun 17 01:24:11 ** sandboxd[49377]: action: deny for pid[11326]nginx uid:0 procedure: network-outbound[90] network outbound remote udp/ip4:80.239.148.73:1 Jun 17 01:24:11 ** sandboxd[49377]: action: deny for pid[11326]nginx uid:0 procedure: network-outbound[90] network outbound remote udp/ip4:80.239.148.95:1 I can not find any information about this addresses except from whois. For which purpose outgoing UDP/1 is used? The nginx was built from ports with the following config: ===> The following configuration options are available for nginx-1.14.0_4,2: DEBUG=off: Build with debugging support DEBUGLOG=off: Enable debug log (--with-debug) DSO=on: Enable dynamic modules support FILE_AIO=on: Enable file aio IPV6=on: Enable IPv6 support THREADS=on: Enable threads support WWW=on: Enable html sample files ====> Modules that require MAIL module MAIL=off: Enable IMAP4/POP3/SMTP proxy module MAIL_IMAP=off: Enable IMAP4 proxy module MAIL_POP3=off: Enable POP3 proxy module MAIL_SMTP=off: Enable SMTP proxy module MAIL_SSL=off: Enable mail_ssl module ====> Modules that require HTTP module GOOGLE_PERFTOOLS=off: Enable google perftools module HTTP=on: Enable HTTP module HTTP_ADDITION=on: Enable http_addition module HTTP_AUTH_REQ=on: Enable http_auth_request module HTTP_CACHE=on: Enable http_cache module HTTP_DAV=on: Enable http_webdav module HTTP_FLV=off: Enable http_flv module HTTP_GEOIP=on: Enable http_geoip module HTTP_GUNZIP_FILTER=on: Enable http_gunzip_filter module HTTP_GZIP_STATIC=on: Enable http_gzip_static module HTTP_IMAGE_FILTER=off: Enable http_image_filter module HTTP_MP4=off: Enable http_mp4 module HTTP_PERL=off: Enable http_perl module HTTP_RANDOM_INDEX=off: Enable http_random_index module HTTP_REALIP=on: Enable http_realip module HTTP_REWRITE=on: Enable http_rewrite module HTTP_SECURE_LINK=on: Enable http_secure_link module HTTP_SLICE=on: Enable http_slice module HTTP_SSL=on: Enable http_ssl module HTTP_STATUS=on: Enable http_stub_status module HTTP_SUB=on: Enable http_sub module HTTP_XSLT=off: Enable http_xslt module HTTPV2=on: Enable HTTP/2 protocol support (SSL req.) STREAM=on: Enable stream module STREAM_SSL=on: Enable stream_ssl module (SSL req.) STREAM_SSL_PREREAD=on: Enable stream_ssl_preread module (SSL req.) 
AJP=off: 3rd party ajp module AWS_AUTH=off: 3rd party aws auth module BROTLI=off: 3rd party brotli module CACHE_PURGE=on: 3rd party cache_purge module CLOJURE=off: 3rd party clojure module CT=off: 3rd party cert_transparency module (SSL req.) DEVEL_KIT=on: 3rd party Nginx Development Kit module ARRAYVAR=off: 3rd party array_var module DRIZZLE=off: 3rd party drizzle module DYNAMIC_UPSTREAM=off: 3rd party dynamic_upstream module ECHO=off: 3rd party echo module ENCRYPTSESSION=off: 3rd party encrypted_session module FASTDFS=off: 3rd party fastdfs module -- Kind Regards, Alexander Morozov From nginx-forum at forum.nginx.org Sun Jun 17 16:46:35 2018 From: nginx-forum at forum.nginx.org (peanutgyz) Date: Sun, 17 Jun 2018 12:46:35 -0400 Subject: nginx sometimes closes connection to grpc upstream when keepalive is set Message-ID: When the grpc server responds with SETTINGS[0], WINDOW_UPDATE[0], PING[0], HEADERS[1], DATA[1], HEADERS[1] in one tcp packet, nginx wants to send a settings ack and calls the function ngx_http_grpc_send_settings_ack, which writes some data into ctx->out. When finalizing the upstream grpc request, because out != NULL, nginx can't keep the connection alive and closes it. I expect the connection to be persistent. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280159,280159#msg-280159 From nginx-forum at forum.nginx.org Mon Jun 18 14:27:51 2018 From: nginx-forum at forum.nginx.org (PiAil) Date: Mon, 18 Jun 2018 10:27:51 -0400 Subject: Nginx with Mbed TLS Message-ID: <771076948dfc7f9cc48cb78d1c89a8b8.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to implement Mbed TLS (and suppress all the OpenSSL parts at the same time) in Nginx, for learning reasons. I've found https://github.com/Yawning/nginx-polarssl which helped me a lot, even if both Nginx and Mbed TLS have changed a bit since that time. I have a problem: the handshake seems to complete, if the Wireshark logs are to be believed, but my naive hello world page isn't sent to the client (Mozilla, but I've also tested with the s_client feature in OpenSSL, and the result is the same). The last TLS message in the log is an Encrypted Application Data from the client (just after the Encrypted Handshake Message from the server), which I guess is the GET request, and after that nothing but the client waiting. I know I certainly do not provide enough information for someone to find the solution, but I'll give it a try... Other bugs that are not my priority for the moment: - With Chrome, the handshake stops just after the Certificate part ("Decode Error"). - To help me debug things, I added "master_process off;" in the configuration file, and without it, the server doesn't even answer the Client Hello. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280164,280164#msg-280164 From mdounin at mdounin.ru Mon Jun 18 15:20:37 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Jun 2018 18:20:37 +0300 Subject: nginx sometimes closes connection to grpc upstream when keepalive is set In-Reply-To: References: Message-ID: <20180618152037.GK32137@mdounin.ru> Hello! On Sun, Jun 17, 2018 at 12:46:35PM -0400, peanutgyz wrote: > When the grpc server responds with > SETTINGS[0], WINDOW_UPDATE[0], PING[0], HEADERS[1], DATA[1], HEADERS[1] in > one tcp packet, > > nginx wants to send a settings ack and calls the function > ngx_http_grpc_send_settings_ack, which writes some data into ctx->out. > > When finalizing the upstream grpc request, because out != NULL, nginx can't > keep the connection alive and closes it. > > I expect the connection to be persistent.
There are situations when nginx won't be able to keep a connection alive. These aren't likely to happen though, and not considered to be a problem. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Jun 18 16:07:54 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 Jun 2018 19:07:54 +0300 Subject: outbound UDP port 1 In-Reply-To: <0ecb26e15a56229fd551af62230c60fb@nixd.org> References: <0ecb26e15a56229fd551af62230c60fb@nixd.org> Message-ID: <20180618160754.GL32137@mdounin.ru> Hello! On Sun, Jun 17, 2018 at 01:36:25AM +0200, Alexander Morozov wrote: > Hello. > > I was doing experiments with the sandboxing in FreeBSD and I executed > nginx sandboxed (in sandbox for FreeBSD) and I noticed that sandbox > blocked 2 outbound datagrams from nginx (uid:root) process. > > Jun 17 00:26:02 ** sandboxd[49377]: action: deny for pid[30392]nginx > uid:0 procedure: network-outbound[90] network outbound remote > udp/ip4:65.158.94.185:1 > Jun 17 00:26:02 ** sandboxd[49377]: action: deny for pid[30392]nginx > uid:0 procedure: network-outbound[90] network outbound remote > udp/ip4:65.158.94.168:1 > Jun 17 01:17:03 ** sandboxd[49377]: action: deny for pid[61454]nginx > uid:0 procedure: network-outbound[90] network outbound remote > udp/ip4:205.197.140.171:1 > Jun 17 01:17:03 ** sandboxd[49377]: action: deny for pid[61454]nginx > uid:0 procedure: network-outbound[90] network outbound remote > udp/ip4:205.197.140.178:1 > Jun 17 01:24:11 ** sandboxd[49377]: action: deny for pid[11326]nginx > uid:0 procedure: network-outbound[90] network outbound remote > udp/ip4:80.239.148.73:1 > Jun 17 01:24:11 ** sandboxd[49377]: action: deny for pid[11326]nginx > uid:0 procedure: network-outbound[90] network outbound remote > udp/ip4:80.239.148.95:1 > > I can not find any information about this addresses except from whois. > For which purpose outgoing UDP/1 is used? It is not used by nginx unless you've explicitly configured it to do so. -- Maxim Dounin http://mdounin.ru/ From kaushalshriyan at gmail.com Mon Jun 18 16:33:26 2018 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Mon, 18 Jun 2018 22:03:26 +0530 Subject: 413 Request Entity Too Large In-Reply-To: <20180616080639.GC14440@aleks-PC> References: <20180616080639.GC14440@aleks-PC> Message-ID: On Sat, Jun 16, 2018 at 1:36 PM Aleksandar Lazic wrote: > Hi. > > On 16/06/2018 10:56, Kaushal Shriyan wrote: > >Hi, > > > >I am encountering 413 Request Entity Too Large in the browser. I have > >added upload_max_filesize = 20M. I have added client_max_body_size 20M; > >in nginx.conf and i am still facing the issue. nginx version is > >1.12. Please let me know if you need any additional information. > > > >Any help will be highly appreciated. > > What's in the log file? > > There will be more information about the reason of the 413 as you can > see in the source. > > > https://github.com/nginx/nginx/search?q=Request+Entity+Too+Large&unscoped_q=Request+Entity+Too+Large > > >Best Regards, > > > >Kaushal > > Hi Aleks, I have set the below settings. */etc/php.ini* max_input_time = 60 max_execution_time = 200 upload_max_size = 100M upload_max_filesize = 100M post_max_size = 100M */opt/nginx/conf/nginx.conf * client_max_body_size 100M; I am still encountering *413 Request Entity Too Large nginx/1.12.1 * Please comment. Best Regards, -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kaushalshriyan at gmail.com Tue Jun 19 02:29:43 2018 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Tue, 19 Jun 2018 07:59:43 +0530 Subject: 413 Request Entity Too Large In-Reply-To: References: <20180616080639.GC14440@aleks-PC> Message-ID: On Mon, Jun 18, 2018 at 10:03 PM Kaushal Shriyan wrote: > > > On Sat, Jun 16, 2018 at 1:36 PM Aleksandar Lazic wrote: > >> Hi. >> >> On 16/06/2018 10:56, Kaushal Shriyan wrote: >> >Hi, >> > >> >I am encountering 413 Request Entity Too Large in the browser. I have >> >added upload_max_filesize = 20M. I have added client_max_body_size 20M; >> >in nginx.conf and i am still facing the issue. nginx version is >> >1.12. Please let me know if you need any additional information. >> > >> >Any help will be highly appreciated. >> >> What's in the log file? >> >> There will be more information about the reason of the 413 as you can >> see in the source. >> >> >> https://github.com/nginx/nginx/search?q=Request+Entity+Too+Large&unscoped_q=Request+Entity+Too+Large >> >> >Best Regards, >> > >> >Kaushal >> >> > Hi Aleks, > > I have set the below settings. > > */etc/php.ini* > max_input_time = 60 > max_execution_time = 200 > upload_max_size = 100M > upload_max_filesize = 100M > post_max_size = 100M > > */opt/nginx/conf/nginx.conf * > client_max_body_size 100M; > > I am still encountering *413 Request Entity Too Large nginx/1.12.1 * > Please comment. > > Best Regards, > > Hi, I will appreciate if somebody can pitch in for help to my earlier post to this mailing list. Thanks in Advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Tue Jun 19 07:26:24 2018 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 19 Jun 2018 09:26:24 +0200 Subject: 413 Request Entity Too Large In-Reply-To: References: <20180616080639.GC14440@aleks-PC> Message-ID: <20180619072623.GC8112@aleks-PC> Hi Kaushal. On 18/06/2018 22:03, Kaushal Shriyan wrote: >On Sat, Jun 16, 2018 at 1:36 PM Aleksandar Lazic wrote: > >> Hi. >> >> On 16/06/2018 10:56, Kaushal Shriyan wrote: >> >Hi, >> > >> >I am encountering 413 Request Entity Too Large in the browser. I have >> >added upload_max_filesize = 20M. I have added client_max_body_size 20M; >> >in nginx.conf and i am still facing the issue. nginx version is >> >1.12. Please let me know if you need any additional information. >> > >> >Any help will be highly appreciated. >> >> What's in the log file? >> >> There will be more information about the reason of the 413 as you can >> see in the source. >> >> >> https://github.com/nginx/nginx/search?q=Request+Entity+Too+Large&unscoped_q=Request+Entity+Too+Large >> >> >Best Regards, >> > >> >Kaushal >> >> >Hi Aleks, > >I have set the below settings. > >*/etc/php.ini* >max_input_time = 60 >max_execution_time = 200 >upload_max_size = 100M >upload_max_filesize = 100M >post_max_size = 100M > >*/opt/nginx/conf/nginx.conf * >client_max_body_size 100M; > >I am still encountering *413 Request Entity Too Large nginx/1.12.1 * >Please comment. What's in the nginx error logs file? There should be more informations about the reason of the 413 as you can see in the source. 
https://github.com/nginx/nginx/search?q=Request+Entity+Too+Large&unscoped_q=Request+Entity+Too+Large >Best Regards, Best regards Aleks From nginx-forum at forum.nginx.org Tue Jun 19 19:34:13 2018 From: nginx-forum at forum.nginx.org (vchhabra@medallia.com) Date: Tue, 19 Jun 2018 15:34:13 -0400 Subject: allow traffic through with a certain header value Message-ID: Hi Nginx Forum, This is my first posting here. I'm trying to configure an application to only allow traffic if a certain header value matches exactly. I'm trying the "if" statement below in my nginx app config file, but it doesn't seem to quite work. It just gives a 403 for every request. If I change the != to = it allows all traffic through. "nginx -T" doesn't report any issues. Any suggestions on what else might be required? Thanks a lot. location / { if ($http_headerkey != "headervalue") { return 403; } allow .... allow .... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280181,280181#msg-280181 From nginx-forum at forum.nginx.org Wed Jun 20 07:16:40 2018 From: nginx-forum at forum.nginx.org (nov1ce) Date: Wed, 20 Jun 2018 03:16:40 -0400 Subject: Retaining upstream server Message-ID: Hello, 1.14.0-1 running on Debian Stretch: # dpkg -l | grep nginx ii nginx 1.14.0-1~stretch amd64 high performance web server I'm trying to load balance between two VMware View Connection servers (10.7.18.121 and 10.7.18.122) listening on 443/tcp, 4172/tcp and 4172/udp. The way the application works is: first, the connecting client hits 443/tcp where authentication takes place, then the client gets connected to 4172/tcp (or 4172/udp). I have no problems when the connection is handled by the same upstream server, such as: remote_client > nginx_vip > 10.7.18.121:443 > 10.7.18.121:4172 or remote_client > nginx_vip > 10.7.18.122:443 > 10.7.18.122:4172. However, I get application errors if 443/tcp is handled by one server and 4172/tcp/udp by another. Therefore, I was wondering whether it'd be possible to configure Nginx in such a way that the upstream server is retained through the whole session? I mean, if a client gets served by 10.7.18.121:443 Nginx will use the same upstream to deliver 4172/tcp/udp? I can probably switch to active-backup model, but I was hoping to benefit from the load distribution. Many thanks. stream { log_format basic '$time_iso8601 $remote_addr ' '$protocol $status $bytes_sent $bytes_received ' '$session_time $upstream_addr ' '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"'; access_log /var/log/nginx/stream_access.log basic; upstream test_horizon_4172_tcp { hash $remote_addr consistent; server 10.7.18.121:4172; server 10.7.18.122:4172; } upstream test_horizon_4172_udp { hash $remote_addr consistent; server 10.7.18.121:4172; server 10.7.18.122:4172; } upstream test_horizon_https { hash $remote_addr consistent; server 10.7.18.121:443; server 10.7.18.122:443; } server { listen 4172; proxy_pass test_horizon_4172_tcp; } server { listen 4172 udp; proxy_pass test_horizon_4172_udp; } server { listen 443; proxy_pass test_horizon_https; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280183,280183#msg-280183 From alex at samad.com.au Wed Jun 20 07:31:25 2018 From: alex at samad.com.au (Alex Samad) Date: Wed, 20 Jun 2018 17:31:25 +1000 Subject: Retaining upstream server In-Reply-To: References: Message-ID: Look at sticky sessions: a routing code in a cookie helps you decide where to send the packet.
So on the 443 side set the cookie, and on the udp side use the cookie in the header to route to the backend. On 20 June 2018 at 17:16, nov1ce wrote: > Hello, > > 1.14.0-1 running on Debian Stretch: > > # dpkg -l | grep nginx > ii nginx 1.14.0-1~stretch amd64 > > high performance web server > > I'm trying to load balance between two VMware View Connection servers > (10.7.18.121 and 10.7.18.122) listening on 443/tcp, 4172/tcp and 4172/udp. > The way the application works is: first, the connecting client hits 443/tcp > where authentication takes place, then the client gets connected to > 4172/tcp > (or 4172/udp). > > I have no problems when the connection is handled by the same upstream > server, such as: remote_client > nginx_vip > 10.7.18.121:443 > > 10.7.18.121:4172 or remote_client > nginx_vip > 10.7.18.122:443 > > 10.7.18.122:4172. However, I get application errors if 443/tcp is handled > by > one server and 4172/tcp/udp by another. > > Therefore, I was wondering whether it'd be possible to configure Nginx in > such a way that the upstream server is retained through the whole session? I > mean, if a client gets served by 10.7.18.121:443 Nginx will use the same > upstream to deliver 4172/tcp/udp? > > I can probably switch to active-backup model, but I was hoping to benefit > from the load distribution. > > Many thanks. > > stream { > > log_format basic '$time_iso8601 $remote_addr ' > '$protocol $status $bytes_sent $bytes_received ' > '$session_time $upstream_addr ' > '"$upstream_bytes_sent" "$upstream_bytes_received" > "$upstream_connect_time"'; > access_log /var/log/nginx/stream_access.log basic; > > upstream test_horizon_4172_tcp { > hash $remote_addr consistent; > server 10.7.18.121:4172; > server 10.7.18.122:4172; > } > > upstream test_horizon_4172_udp { > hash $remote_addr consistent; > server 10.7.18.121:4172; > server 10.7.18.122:4172; > } > > upstream test_horizon_https { > hash $remote_addr consistent; > server 10.7.18.121:443; > server 10.7.18.122:443; > } > > server { > listen 4172; > proxy_pass test_horizon_4172_tcp; > } > > server { > listen 4172 udp; > proxy_pass test_horizon_4172_udp; > } > > server { > listen 443; > proxy_pass test_horizon_https; > } > > } > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280183,280183#msg-280183 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jun 20 08:43:56 2018 From: nginx-forum at forum.nginx.org (nov1ce) Date: Wed, 20 Jun 2018 04:43:56 -0400 Subject: Retaining upstream server In-Reply-To: References: Message-ID: <4943bc46b2251a2234e8d8776740fac0.NginxMailingListEnglish@forum.nginx.org> Thank you. I guess the sticky module is only available in the commercial edition of Nginx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280183,280185#msg-280185 From nginx-forum at forum.nginx.org Wed Jun 20 09:48:24 2018 From: nginx-forum at forum.nginx.org (foxgab) Date: Wed, 20 Jun 2018 05:48:24 -0400 Subject: gzip doesn't work while backend response includes an Accept-Ranges header Message-ID: <9475a47ca22eb1910dc78e6d2057c560.NginxMailingListEnglish@forum.nginx.org> I configured gzip like below: http { gzip on; gzip_comp_level 5; gzip_http_version 1.0; gzip_proxied any; gzip_min_length 1k; gzip_types text/css text/plain text/javascript text/xml application/json application/javascript application/x-javascript; ...
upstream proxy_static_srv { ... } server { ... location /static { proxy_pass http://proxy_static_srv; } } } ----------------------------------------------- I found some .js files weren't compressed if an Accept-Ranges header appeared in the response; others were doing well. What's wrong? request: Host: i.xxxxx.com User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:52.0) Gecko/20100101 Firefox/52.0 Accept: */* Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate, br Cookie: _ga=GA1.2.313391719.1521699529; Connection: keep-alive Pragma: no-cache Cache-Control: no-cache response: Accept-Ranges: bytes Cache-Control: max-age=28800 Connection: keep-alive Content-Length: 1481234 Content-Type: application/javascript; charset=utf-8 Date: Wed, 20 Jun 2018 08:56:08 GMT Etag: "5b23c8c3-169a12" Expires: Wed, 20 Jun 2018 16:56:08 GMT Keep-Alive: timeout=60 Last-Modified: Fri, 15 Jun 2018 14:10:11 GMT Server: nginx The static servers are nginx too; why does this header sometimes appear in the response? -------------------------------------------------- [root at nginx-prd3-public-huiju-a1]# /usr/local/nginx/sbin/nginx -V nginx version: nginx/1.10.3 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC) built with OpenSSL 1.0.2h 3 May 2016 TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_v2_module --with-http_ssl_module --with-ipv6 --with-http_gzip_static_module --with-http_realip_module --with-http_flv_module --with-openssl=/usr/local/src/openssl-1.0.2h/ --with-pcre=/usr/local/src/pcre-8.39/ --with-pcre-jit --with-ld-opt=-ljemalloc --with-ld-opt=-Wl,-rpath,/usr/local/luajit/lib --add-module=/usr/local/src/ngx_devel_kit-0.2.19 --add-module=/usr/local/src/lua-nginx-module-0.10.2 --add-module=/usr/local/src/nginx-sticky-module-1.1-master --add-module=/usr/local/src/nginx_upstream_check_module-master --add-dynamic-module=/usr/local/src/headers-more-nginx-module-master/ --with-stream --with-http_realip_module --with-file-aio Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280186,280186#msg-280186 From nginx-forum at forum.nginx.org Wed Jun 20 11:38:18 2018 From: nginx-forum at forum.nginx.org (rihad) Date: Wed, 20 Jun 2018 07:38:18 -0400 Subject: massive deleted open files in proxy cache In-Reply-To: References: Message-ID: Have you been able to solve the issue?
We're having the same problem after > upgrading 1.12.2 to 1.14 > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272519,280189#msg-280189 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Wed Jun 20 15:13:56 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 20 Jun 2018 16:13:56 +0100 Subject: allow traffic through with a certain header value In-Reply-To: References: Message-ID: <20180620151356.GE3111@daoine.org> On Tue, Jun 19, 2018 at 03:34:13PM -0400, vchhabra at medallia.com wrote: Hi there, > I'm trying the "if" statement below in my > nginx app config file, but doesn't seem to quite work. It just gives a > 403 for every request. > location / { > if ($http_headerkey != "headervalue") { > return 403; } It seems to work for me: server { listen 8000; location / { if ($http_headerkey != "headervalue") { return 403; } return 404; } } And then: $ curl -I -H HeaderKey:headervalue http://127.0.0.1:8000/x HTTP/1.1 404 Not Found $ curl -I -H HeaderKey:somethingelse http://127.0.0.1:8000/x HTTP/1.1 403 Forbidden $ curl -I http://127.0.0.1:8000/x HTTP/1.1 403 Forbidden I get 403 if the key does not have the value, and I get the here-expected 404 if the key does have the value. What do you get if you try that test? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Jun 21 00:57:49 2018 From: nginx-forum at forum.nginx.org (peanutgyz) Date: Wed, 20 Jun 2018 20:57:49 -0400 Subject: nginx sometimes close connection to grpc upstream when keepalive is set In-Reply-To: <20180618152037.GK32137@mdounin.ru> References: <20180618152037.GK32137@mdounin.ru> Message-ID: <680dfbae2a1eaed605ecd4c216cd7225.NginxMailingListEnglish@forum.nginx.org> thanks for your help. how can i find why nginx can't keep connection in this case. why nginx can no process data and send a settings or ping ack to grpc server? grpc client will do like this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280159,280207#msg-280207 From nginx-forum at forum.nginx.org Thu Jun 21 01:15:55 2018 From: nginx-forum at forum.nginx.org (peanutgyz) Date: Wed, 20 Jun 2018 21:15:55 -0400 Subject: is there somthing like proxy_request_buffering in grpc module? Message-ID: like grpc_request_buffering? nginx listen on http2 and grpc_pass to grpc server, but grpc_next_stream cant work like expect. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280208,280208#msg-280208 From ambadiaravind at gmail.com Thu Jun 21 08:35:57 2018 From: ambadiaravind at gmail.com (aRaviNd) Date: Thu, 21 Jun 2018 14:05:57 +0530 Subject: TCP load balancing with domain name Message-ID: Hi All, I am trying to configure tcp load balancing with Nginx with below configuration. stream { server { listen 25; resolver 1.1.1.1; proxy_pass $host:25; } } If I try to connect mx1.abc.com i would like to expand my variable $host to mx1.abc.com and internally it will resolve to servers who handles mail for that mx record. Please let me know is there any nginx variable in stream which supports hostname i am connecting. Aravind M D -------------- next part -------------- An HTML attachment was scrubbed... 
From maxim at nginx.com  Thu Jun 21 08:39:37 2018
From: maxim at nginx.com (Maxim Konovalov)
Date: Thu, 21 Jun 2018 11:39:37 +0300
Subject: TCP load balancing with domain name
In-Reply-To: References: Message-ID: <05aaf113-4583-aa7e-bdc0-ca71d3da83cc@nginx.com>

Aravind,

On 21/06/2018 11:35, aRaviNd wrote:
> Hi All,
>
> I am trying to configure TCP load balancing with nginx using the configuration below.
>
>     stream {
>         server {
>             listen 25;
>             resolver 1.1.1.1;
>             proxy_pass $host:25;
>         }
>     }
>
> If I try to connect to mx1.abc.com, I would like my variable $host to expand to mx1.abc.com, and internally it will resolve to the servers which handle mail for that MX record.
>
> Please let me know whether there is any nginx variable in the stream module that carries the hostname I am connecting to.

This is simply not possible. TCP doesn't carry any sign of the original domain name that was used for connect(2) on the client side.

--
Maxim Konovalov

From nginx-forum at forum.nginx.org  Thu Jun 21 10:20:15 2018
From: nginx-forum at forum.nginx.org (ChatterjeeAtanu)
Date: Thu, 21 Jun 2018 06:20:15 -0400
Subject: How to redirect the call from nginx conf if proxy pass is successfull
Message-ID:

Hello Team,

We need your help. We are passing Docker private registry calls through the nginx conf using a reverse proxy. We need to trap when Docker manifest/blob downloads succeed or fail, and report that to a PCF service running elsewhere. Using error_page we tried to trap the call and proxy_pass it somewhere. How can we catch both the nginx success and failure codes, e.g. in some variable or with a condition we can trap on? If you have any ideas, please share them; it would be extremely helpful for our team.

Thanks
Atanu

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280220,280220#msg-280220

From cyang at 123mail.org  Thu Jun 21 19:47:50 2018
From: cyang at 123mail.org (cyang at 123mail.org)
Date: Thu, 21 Jun 2018 12:47:50 -0700
Subject: How to pass connection's real IP through Nginx smtp proxy to Postfix/postscreen backend?
Message-ID: <1529610470.1202832.1416158392.397D05A3@webmail.messagingengine.com>

I run Postfix 3.3.1 & Nginx 1.15.0. Both work great.

I'm beginning to experiment with putting the Postfix (and eventually other) server behind Nginx (v 1.15.0) set up as a mail (SMTP) proxy.

Without the proxy, Postfix logs show an inbound connection to my real IP:

    Jun 21 12:12:31 mailprox postfix/postscreen[55634]: CONNECT from [74.125.142.27]:43757 to [192.0.2.1]:25

The way nginx gets configured for smtp proxy, even if I'm *NOT* doing any auth, is to direct the connection to a "fake" auth_http destination:

    mail {
        ...
        auth_http 127.0.0.1:33001/dummy.php;
        ...
    }
    http {
        ...
        server {
            listen 127.0.0.1:33001;
            ...
            location ~ .php$ {
                add_header Auth-Server 127.0.0.1;
                add_header Auth-Port 33025;
                return 200;
            }
            ...
        }

Switching over, the proxy is set up to listen on the real IP

    [192.0.2.1]:25

and passes to Postfix's postscreen which, using the config above, is listening on

    [127.0.0.1]:33025

What I see in the Postfix log is

    Jun 21 12:10:12 mailprox postfix/postscreen[55329]: CONNECT from [127.0.0.1]:31460 to [127.0.0.1]:33025
    Jun 21 12:10:12 mailprox postfix/postscreen[55329]: WHITELISTED [127.0.0.1]:31460

Mail does get delivered, but postscreen is whitelisting the IP of the proxy, 127.0.0.1, and not using the real IP.

I need to somehow pass the Real-IP through to postscreen, and anything further downstream that'll need it.

For web server proxying I'd pass something like

    X-Forwarded-For

or

    X-Real-IP

to a downstream webserver listener.

What do I need for Postfix/Postscreen to correctly 'see' the Real IP? A header added to the nginx config? Some additional code in the auth_http? Something else?

Cheers!

Cy
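A later reply in this thread suggests the PROXY protocol. For concreteness, a sketch of what that could look like, assuming the stream module is put in front of postscreen instead of the mail module (as far as I know, the mail proxy in this nginx version has no directive for speaking PROXY protocol to its upstream, and using stream trades away SMTP-aware features such as auth_http; the Postfix parameter name is from the postscreen documentation):

    # nginx, stream module (proxy_protocol for stream exists since 1.9.2)
    stream {
        server {
            listen 192.0.2.1:25;
            proxy_pass 127.0.0.1:33025;
            proxy_protocol on;   # prepends "PROXY TCP4 <client-ip> ..." to the connection
        }
    }

    # Postfix main.cf: tell postscreen to expect and parse that PROXY line
    postscreen_upstream_proxy_protocol = haproxy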
From nginx-forum at forum.nginx.org  Thu Jun 21 20:37:32 2018
From: nginx-forum at forum.nginx.org (abatie)
Date: Thu, 21 Jun 2018 16:37:32 -0400
Subject: dual stack binding
Message-ID:

I have nginx binding to a variety of addresses for ssl and target selection reasons. Now I'm trying to add ipv6 support. Since I'm using specific listen addresses, I wouldn't expect to have a binding conflict; however I do, and I'm hoping someone can point me in the right direction:

    server {
        listen 207.55.17.79:25;
    ...
    server {
        listen [2607:f678::17:79]:25;
    ...

    [150] # service nginx restart
    Stopping nginx: [FAILED]
    Starting nginx: nginx: [emerg] bind() to [2607:f678::17:79]:25 failed (99: Cannot assign requested address)
    [FAILED]

The local mail server is only listening on localhost:

    tcp        0      0 ::1:25        :::*        LISTEN

Commenting out the smtp server config just moves the conflict to the next port in question...

nginx/1.7.6
CentOS release 6.9 (Final)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280234,280234#msg-280234

From gfrankliu at gmail.com  Thu Jun 21 20:58:47 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Thu, 21 Jun 2018 13:58:47 -0700
Subject: How to pass connection's real IP through Nginx smtp proxy to Postfix/postscreen backend?
In-Reply-To: <1529610470.1202832.1416158392.397D05A3@webmail.messagingengine.com> References: <1529610470.1202832.1416158392.397D05A3@webmail.messagingengine.com> Message-ID:

Try proxy protocol.

On Thu, Jun 21, 2018 at 12:47 PM, wrote:
> I run Postfix 3.3.1 & Nginx 1.15.0
>
> Both work great.
>
> [...]
>
> What do I need for Postfix/Postscreen to correctly 'see' the Real IP?
>
> A header added to the nginx config? Some additional code in the auth_http? Something else?
>
> Cheers!
> > Cy > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists-nginx at swsystem.co.uk Thu Jun 21 21:36:36 2018 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Thu, 21 Jun 2018 22:36:36 +0100 Subject: dual stack binding In-Reply-To: References: Message-ID: I've no problem with IPv6 on my server using specific v4 and v6 listen statements. Is the IP you're trying to use actually configured on an interface? Steve. On 21/06/2018 21:37, abatie wrote: > I have nginx binding to a variety of addresses for ssl and target selection > reasons. Now I'm trying to add ipv6 support. Since I'm using specific > listen addresses, I wouldn't expect to have a binding conflict, however I > am, and I'm hoping someone can point me in the right direction: > > server { > listen 207.55.17.79:25; > ... > server { > listen [2607:f678::17:79]:25; > ... > > [150] # service nginx restart > Stopping nginx: [FAILED] > Starting nginx: nginx: [emerg] bind() to [2607:f678::17:79]:25 failed (99: > Cannot assign requested address) > [FAILED] > > The local mail server is only listening on localhost: > > tcp 0 0 ::1:25 :::* > LISTEN > > Commenting out the smtp server config just moves the conflict to the next > port in question... > > nginx/1.7.6 > CentOS release 6.9 (Final) > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280234,280234#msg-280234 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at forum.nginx.org Thu Jun 21 21:43:34 2018 From: nginx-forum at forum.nginx.org (abatie) Date: Thu, 21 Jun 2018 17:43:34 -0400 Subject: dual stack binding In-Reply-To: References: Message-ID: <0df8452be7c5fe79bb89b1cb14ec7afb.NginxMailingListEnglish@forum.nginx.org> Yup: eth0 Link encap:Ethernet HWaddr 00:50:56:8C:62:77 inet addr:207.55.17.91 Bcast:207.55.19.255 Mask:255.255.252.0 inet6 addr: fe80::250:56ff:fe8c:6277/64 Scope:Link inet6 addr: 2607:f678::17:79/64 Scope:Global inet6 addr: 2607:f678::17:91/64 Scope:Global And what's odd is when I first tried it, it worked. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280234,280237#msg-280237 From nginx-forum at forum.nginx.org Thu Jun 21 21:45:50 2018 From: nginx-forum at forum.nginx.org (abatie) Date: Thu, 21 Jun 2018 17:45:50 -0400 Subject: dual stack binding In-Reply-To: <0df8452be7c5fe79bb89b1cb14ec7afb.NginxMailingListEnglish@forum.nginx.org> References: <0df8452be7c5fe79bb89b1cb14ec7afb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7e4bf0e54a24874ffb4c70a691918f97.NginxMailingListEnglish@forum.nginx.org> OK, that's odd: I commented out the ipv4 address and it still fails. It's not a conflict then, so something's odd in the network stack... 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280234,280238#msg-280238

From nginx-forum at forum.nginx.org  Thu Jun 21 22:12:36 2018
From: nginx-forum at forum.nginx.org (abatie)
Date: Thu, 21 Jun 2018 18:12:36 -0400
Subject: dual stack binding
In-Reply-To: <0df8452be7c5fe79bb89b1cb14ec7afb.NginxMailingListEnglish@forum.nginx.org> <7e4bf0e54a24874ffb4c70a691918f97.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4960a6f1a511ef55074f931358591bfb.NginxMailingListEnglish@forum.nginx.org>

I believe this is related to blocking neighbor discovery on the address for the purposes of doing direct server return load balancing, and not nginx related, thanks!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280234,280239#msg-280239

From gfrankliu at gmail.com  Thu Jun 21 22:50:09 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Thu, 21 Jun 2018 15:50:09 -0700
Subject: dual stack binding
In-Reply-To: <4960a6f1a511ef55074f931358591bfb.NginxMailingListEnglish@forum.nginx.org> Message-ID:

The issue is with this:

    [150] # service nginx restart
    Stopping nginx: [FAILED]

Since stopping FAILED, the IP/port is still in use. That's why the start failed with a "binding" error. You can try "service nginx stop" alone and check the error log to see why it failed to stop.

On Thu, Jun 21, 2018 at 3:12 PM, abatie wrote:
> I believe this is related to blocking neighbor discovery on the address for the purposes of doing direct server return load balancing, and not nginx related, thanks!
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280234,280239#msg-280239
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org  Fri Jun 22 07:42:57 2018
From: nginx-forum at forum.nginx.org (Szop)
Date: Fri, 22 Jun 2018 03:42:57 -0400
Subject: NGINX Proxy Cache Cache-Control
Message-ID: <914a290964f6ce89d343cd6b28cff0c1.NginxMailingListEnglish@forum.nginx.org>

Hello guys,

I'm having a hard time defining a proxy cache because my landing page doesn't generate any HTML which can be cached. Quite complicated to explain, so let me show you some logs and curl requests:

curl:

    curl -I https://....info/de
    HTTP/1.1 200 OK
    Server: nginx
    Date: Thu, 21 Jun 2018 11:56:15 GMT
    Content-Type: text/html;charset=UTF-8
    Content-Length: 135883
    Connection: keep-alive
    Keep-Alive: timeout=5
    X-Magnolia-Registration: Registered
    Access-Control-Allow-Origin: ...
    Access-Control-Allow-Methods: GET, OPTIONS, HEAD
    Access-Control-Allow-Headers: X-PINGOTHER, Origin, X-Requested-With, Content-Type, Accept
    Cache-Control: max-age=60, public
    Expires: Thu, 21 Jun 2018 11:57:15 GMT
    Last-Modified: Thu, 21 Jun 2018 11:55:46 GMT
    X-UPSTREAM: 10.6.198.11:8080
    ...
NGINX Access Logs:

    [22/Jun/2018:09:35:24 +0200] Cache: - 10.6.198.12:8080 0.022 304 865 IP ...-com.stage.....info /de
    [22/Jun/2018:09:35:26 +0200] Cache: HIT - - 200 1151 IP ...-com.stage.....info /.resources/img/favicon.ico

NGINX Locations:

    location ~* \.(?:bmp|css|gif|ico|jng|jpe?g|js(on)?|png|svgz?|tiff?|wbmp|webp)$ {
        # caching
        expires max;
        proxy_cache stage.....info_proxy-cache;
        proxy_cache_lock on;

        # custom lines
        proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_503 http_504;

        # proxy pass
        proxy_pass http://public.stage;
    }

    location ~* \.(?:html)$ {
        # caching
        expires 15s;
        proxy_cache stage.....info_proxy-cache;
        proxy_cache_lock on;

        # custom lines
        proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_503 http_504;

        # proxy pass
        proxy_pass http://public.stage;
    }

I'm able to cache all static assets with a proper file extension like .png, .css, etc., so this works as expected. My question is: is it possible to define the caching behaviour based on the Content-Type? My idea is to take a result like "Content-Type: text/html;charset=UTF-8" and then proxy-cache it if it is text/html. Does it make sense?

Cheers,
Szop

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280242,280242#msg-280242
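What Szop is asking for can be approximated with stock directives: decide at store time, from the upstream's Content-Type, whether a response may be cached. A sketch (the cache zone name is taken from the post; the map values are illustrative):

    # cache only text/html responses in a catch-all location;
    # $upstream_http_content_type is known once the backend has answered,
    # which is when proxy_no_cache is evaluated
    map $upstream_http_content_type $skip_cache {
        default        1;
        "~^text/html"  0;
    }

    location / {
        proxy_cache    stage.....info_proxy-cache;
        proxy_no_cache $skip_cache;   # 1 = do not store the response
        proxy_pass     http://public.stage;
    }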
From ftriboix at incise.co  Fri Jun 22 13:38:28 2018
From: ftriboix at incise.co (Fabrice Triboix)
Date: Fri, 22 Jun 2018 14:38:28 +0100
Subject: How to log the number of bytes sent over a websocket?
Message-ID: <703ea93e-1f97-d7f1-725c-95a83a59804a@incise.co>

Hi All,

I am using nginx as a websocket reverse-proxy (this is working fine BTW). I would like to log the number of bytes sent (and ideally also received) over a websocket. If I use `$body_bytes_sent` in `log_format`, the entry in the access_log is always 0. As far as I can tell, a lot of data went through the websocket, so clearly `$body_bytes_sent` does not include data sent over a websocket.

I tried to use `$bytes_sent`, but it's just one or two hundred bytes (no matter how much data is sent over the websocket), so that's clearly just the HTTP headers.

I went through the list of available nginx variables, but I couldn't find anything for me... Any idea?

Thanks a lot for any help!

-- Fabrice

From peter_booth at me.com  Fri Jun 22 16:51:17 2018
From: peter_booth at me.com (Peter Booth)
Date: Fri, 22 Jun 2018 12:51:17 -0400
Subject: NGINX Proxy Cache Cache-Control
In-Reply-To: <914a290964f6ce89d343cd6b28cff0c1.NginxMailingListEnglish@forum.nginx.org> References: <914a290964f6ce89d343cd6b28cff0c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5FB7FF9F-8216-428D-A77B-3430FD767540@me.com>

Your question raises so many other questions:

1. The static content - jpg, png, tiff, etc. It looks as though you are serving them from your backend and caching them. Are they also being built on demand dynamically? If not, then why cache them? Why not deploy them to nginx and serve them directly?
2. The text content - is this fragments of html that don't have names that end in html?

Sent from my iPhone

> On Jun 22, 2018, at 3:42 AM, Szop wrote:
>
> Hello guys,
>
> I'm having a hard time defining a proxy cache because my landing page doesn't generate any HTML which can be cached. Quite complicated to explain, so let me show you some logs and curl requests:
>
> [...]
>
> Cheers,
> Szop
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280242,280242#msg-280242
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From scott.oaks at oracle.com  Fri Jun 22 18:40:39 2018
From: scott.oaks at oracle.com (scott.oaks at oracle.com)
Date: Fri, 22 Jun 2018 14:40:39 -0400
Subject: Recovering from partial writes
Message-ID: <8ae61bfa-8edc-b397-4838-01a85f7e330e@oracle.com>

I have an nginx proxy through which clients pass a large POST payload to the upstream server. Sometimes, the upstream server is slow and so writing the POST data will fail with a writev() not ready (EAGAIN) error. But of course, that's a very common situation when dealing with non-blocking I/O, and I'd expect the rest of the data to be written when the socket is again ready for writing.

In fact, it seems like the basic structure of that is in place; when ngx_writev gets the EAGAIN, it passes that to calling functions, which modify the chain buffers. Yet somewhere along the line (seemingly in ngx_http_upstream_send_request_body) the partially-written buffer is freed, and although the socket later indicates that it is ready to write (and the ngx epoll module does detect that), there is no longer any data to write and so everything fails.
I realize this is not the dev mailing list so an answer to how that is programmed isn't necessarily what I'm after -- again, the partial write of data to a socket is such a common thing that I can't think I'm the first to encounter it and find a basic bug, so I assume that something else is going on. I have tried this with proxy_request_buffering off and on, and the failure is essentially the same. The http section of my conf looks like this: http { max_ranges 1; #map $http_accept $file_extension { # default ".html"; # "~*json" ".json"; #} map $http_upgrade $connection_upgrade { default upgrade; '' ""; } server_names_hash_bucket_size 512; server_names_hash_max_size 2048; variables_hash_bucket_size 512; variables_hash_max_size 2048; client_header_buffer_size 8k; large_client_header_buffers 4 16k; proxy_buffering off; proxy_request_buffering off; # Tried on, and various sizes #proxy_buffer_size 16k; #proxy_buffers 4 128k; #proxy_busy_buffers_size 256k; #proxy_headers_hash_bucket_size 256; client_max_body_size 0; ssl_session_cache shared:SSL:20m; ssl_session_timeout 60m; include /u01/data/config/nginx/mime.types; default_type application/octet-stream; log_format main '"$remote_addr" "-" "$remote_user" "[$time_local]" "$request" ' '"$status" "$body_bytes_sent" "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; log_format opcroutingtier '"$remote_addr" "-" "$remote_user" [$time_local] "$request" "$status" ' '"$body_bytes_sent" "$http_referer" "$http_user_agent" "$bytes_sent" "$request_length" "-" ' '"$host" "$http_x_forwarded_for" "$server_name" "$server_port" "$request_time" "$upstream_addr" ' '"$upstream_connect_time" "$upstream_header_time" "$upstream_response_time" "$upstream_status" "$ssl_cipher" "$ssl_protocol" ' '"-" "-" "-"'; access_log /u01/data/logs/nginx_logs/access_logs/access.log opcroutingtier; sendfile off; # also tried on keepalive_timeout 60s; keepalive_requests 2000000; open_file_cache max=2000 inactive=20s; open_file_cache_valid 60s; open_file_cache_min_uses 5; open_file_cache_errors off; gzip on; gzip_types text/plain text/css text/javascript text/xml application/x-javascript application/xml; gzip_min_length 500; gzip_comp_level 7; Everything works fine if the upstream reads data fast enough; it's only when nginx gets a partial write upstream that there is a problem. Am I missing something here? -Scott From peter_booth at me.com Fri Jun 22 20:18:37 2018 From: peter_booth at me.com (Peter Booth) Date: Fri, 22 Jun 2018 16:18:37 -0400 Subject: Recovering from partial writes In-Reply-To: <8ae61bfa-8edc-b397-4838-01a85f7e330e@oracle.com> References: <8ae61bfa-8edc-b397-4838-01a85f7e330e@oracle.com> Message-ID: <78DDDAE6-32A5-4596-85BA-DA999F8BBB65@me.com> How large is a large POST payload? Are the nginx and upstream systems physical hosts in same data center? What are approx best case / typical case / worst case latency for the post to upstream? Sent from my iPhone > On Jun 22, 2018, at 2:40 PM, scott.oaks at oracle.com wrote: > > I have an nginx proxy through which clients pass a large POST payload to the upstream server. Sometimes, the upstream server is slow and so writing the POST data will fail with a writev() not ready (EAGAIN) error. But of course, that's a very common situation when dealing with non-blocking I/O, and I'd expect the rest of the data to be written when the socket is again ready for writing. 
> > In fact, it seems like the basic structure of that is in place; when ngx_writev gets the EAGAIN, it passes that to calling functions, which modify the chain buffers. Yet somewhere along the line (seemingly in ngx_http_upstream_send_request_body) the partially-written buffer is freed, and although the socket later indicates that it is ready to write (and the ngx epoll module does detect that), there is no longer any data to write and so everything fails. > > I realize this is not the dev mailing list so an answer to how that is programmed isn't necessarily what I'm after -- again, the partial write of data to a socket is such a common thing that I can't think I'm the first to encounter it and find a basic bug, so I assume that something else is going on. I have tried this with proxy_request_buffering off and on, and the failure is essentially the same. The http section of my conf looks like this: > > http { > max_ranges 1; > #map $http_accept $file_extension { > # default ".html"; > # "~*json" ".json"; > #} > map $http_upgrade $connection_upgrade { > default upgrade; > '' ""; > } > server_names_hash_bucket_size 512; > server_names_hash_max_size 2048; > variables_hash_bucket_size 512; > variables_hash_max_size 2048; > client_header_buffer_size 8k; > large_client_header_buffers 4 16k; > proxy_buffering off; > proxy_request_buffering off; # Tried on, and various sizes > #proxy_buffer_size 16k; > #proxy_buffers 4 128k; > #proxy_busy_buffers_size 256k; > #proxy_headers_hash_bucket_size 256; > client_max_body_size 0; > ssl_session_cache shared:SSL:20m; > ssl_session_timeout 60m; > > include /u01/data/config/nginx/mime.types; > default_type application/octet-stream; > > log_format main '"$remote_addr" "-" "$remote_user" "[$time_local]" "$request" ' > '"$status" "$body_bytes_sent" "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > log_format opcroutingtier '"$remote_addr" "-" "$remote_user" [$time_local] "$request" "$status" ' > '"$body_bytes_sent" "$http_referer" "$http_user_agent" "$bytes_sent" "$request_length" "-" ' > '"$host" "$http_x_forwarded_for" "$server_name" "$server_port" "$request_time" "$upstream_addr" ' > '"$upstream_connect_time" "$upstream_header_time" "$upstream_response_time" "$upstream_status" "$ssl_cipher" "$ssl_protocol" ' > '"-" "-" "-"'; > > access_log /u01/data/logs/nginx_logs/access_logs/access.log opcroutingtier; > sendfile off; # also tried on > keepalive_timeout 60s; > keepalive_requests 2000000; > open_file_cache max=2000 inactive=20s; > open_file_cache_valid 60s; > open_file_cache_min_uses 5; > open_file_cache_errors off; > gzip on; > gzip_types text/plain text/css text/javascript text/xml application/x-javascript application/xml; > gzip_min_length 500; > gzip_comp_level 7; > > Everything works fine if the upstream reads data fast enough; it's only when nginx gets a partial write upstream that there is a problem. Am I missing something here? 
> > -Scott > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From scott.oaks at oracle.com Fri Jun 22 20:25:37 2018 From: scott.oaks at oracle.com (scott.oaks at oracle.com) Date: Fri, 22 Jun 2018 16:25:37 -0400 Subject: Recovering from partial writes In-Reply-To: <78DDDAE6-32A5-4596-85BA-DA999F8BBB65@me.com> References: <8ae61bfa-8edc-b397-4838-01a85f7e330e@oracle.com> <78DDDAE6-32A5-4596-85BA-DA999F8BBB65@me.com> Message-ID: The POST payload varies but can be as much as 20M nginx and the upstream are in the same data center now, but that isn't necessarily a requirement, and even in the data center speeds will vary depending on network congestion. Hence I cannot guarantee the worst-case latency. If the upstream java server does a 5 second GC, then there could be a long pause in its reading data. Your questions lead me to believe that you'd like to suggest things to make the writev() not do a partial write. That is helpful, but the real point isn't to find a config that happens to work so that the writev() never gets a partial write -- it is to make the partial writev scenario actually work. -Scott On 6/22/18 4:18 PM, Peter Booth wrote: > How large is a large POST payload? > Are the nginx and upstream systems physical hosts in same data center? > What are approx best case / typical case / worst case latency for the post to upstream? > > Sent from my iPhone > >> On Jun 22, 2018, at 2:40 PM, scott.oaks at oracle.com wrote: >> >> I have an nginx proxy through which clients pass a large POST payload to the upstream server. Sometimes, the upstream server is slow and so writing the POST data will fail with a writev() not ready (EAGAIN) error. But of course, that's a very common situation when dealing with non-blocking I/O, and I'd expect the rest of the data to be written when the socket is again ready for writing. >> >> In fact, it seems like the basic structure of that is in place; when ngx_writev gets the EAGAIN, it passes that to calling functions, which modify the chain buffers. Yet somewhere along the line (seemingly in ngx_http_upstream_send_request_body) the partially-written buffer is freed, and although the socket later indicates that it is ready to write (and the ngx epoll module does detect that), there is no longer any data to write and so everything fails. >> >> I realize this is not the dev mailing list so an answer to how that is programmed isn't necessarily what I'm after -- again, the partial write of data to a socket is such a common thing that I can't think I'm the first to encounter it and find a basic bug, so I assume that something else is going on. I have tried this with proxy_request_buffering off and on, and the failure is essentially the same. 
The http section of my conf looks like this: >> >> http { >> max_ranges 1; >> #map $http_accept $file_extension { >> # default ".html"; >> # "~*json" ".json"; >> #} >> map $http_upgrade $connection_upgrade { >> default upgrade; >> '' ""; >> } >> server_names_hash_bucket_size 512; >> server_names_hash_max_size 2048; >> variables_hash_bucket_size 512; >> variables_hash_max_size 2048; >> client_header_buffer_size 8k; >> large_client_header_buffers 4 16k; >> proxy_buffering off; >> proxy_request_buffering off; # Tried on, and various sizes >> #proxy_buffer_size 16k; >> #proxy_buffers 4 128k; >> #proxy_busy_buffers_size 256k; >> #proxy_headers_hash_bucket_size 256; >> client_max_body_size 0; >> ssl_session_cache shared:SSL:20m; >> ssl_session_timeout 60m; >> >> include /u01/data/config/nginx/mime.types; >> default_type application/octet-stream; >> >> log_format main '"$remote_addr" "-" "$remote_user" "[$time_local]" "$request" ' >> '"$status" "$body_bytes_sent" "$http_referer" ' >> '"$http_user_agent" "$http_x_forwarded_for"'; >> >> log_format opcroutingtier '"$remote_addr" "-" "$remote_user" [$time_local] "$request" "$status" ' >> '"$body_bytes_sent" "$http_referer" "$http_user_agent" "$bytes_sent" "$request_length" "-" ' >> '"$host" "$http_x_forwarded_for" "$server_name" "$server_port" "$request_time" "$upstream_addr" ' >> '"$upstream_connect_time" "$upstream_header_time" "$upstream_response_time" "$upstream_status" "$ssl_cipher" "$ssl_protocol" ' >> '"-" "-" "-"'; >> >> access_log /u01/data/logs/nginx_logs/access_logs/access.log opcroutingtier; >> sendfile off; # also tried on >> keepalive_timeout 60s; >> keepalive_requests 2000000; >> open_file_cache max=2000 inactive=20s; >> open_file_cache_valid 60s; >> open_file_cache_min_uses 5; >> open_file_cache_errors off; >> gzip on; >> gzip_types text/plain text/css text/javascript text/xml application/x-javascript application/xml; >> gzip_min_length 500; >> gzip_comp_level 7; >> >> Everything works fine if the upstream reads data fast enough; it's only when nginx gets a partial write upstream that there is a problem. Am I missing something here? >> >> -Scott >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=rd78Sm7B8x7Jg86BzKn7cn1a_8HKt26SFIE05r0bOD0&m=tOSPAWsYbWZKUrwdK1PErzJygcIkiJzSm3gAK6UYZRQ&s=1Ko-8EEz4a8Ukl0ELKr1jnR5-sZe5qTh_cWP19eVye4&e= > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=rd78Sm7B8x7Jg86BzKn7cn1a_8HKt26SFIE05r0bOD0&m=tOSPAWsYbWZKUrwdK1PErzJygcIkiJzSm3gAK6UYZRQ&s=1Ko-8EEz4a8Ukl0ELKr1jnR5-sZe5qTh_cWP19eVye4&e= From scott.oaks at oracle.com Fri Jun 22 20:30:36 2018 From: scott.oaks at oracle.com (scott.oaks at oracle.com) Date: Fri, 22 Jun 2018 16:30:36 -0400 Subject: Recovering from partial writes In-Reply-To: References: <8ae61bfa-8edc-b397-4838-01a85f7e330e@oracle.com> <78DDDAE6-32A5-4596-85BA-DA999F8BBB65@me.com> Message-ID: I should have added -- I know that there are 60 second (default) timeouts in place so if nginx cannot write upstream for 60 seconds it will abort the request. That's fine; it's the shorter scenarios I am worried about. 
In fact, what happens when nginx doesn't send the remaining data in my case is that the upstream server times out when it doesn't read data for 30 seconds, so there is a 30-second period where the socket buffers are clear on both sides (yet nginx doesn't continue to send up the data).

-Scott

On 6/22/18 4:25 PM, scott.oaks at oracle.com wrote:
> The POST payload varies but can be as much as 20M
>
> [...]

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Fri Jun 22 22:50:24 2018
From: nginx-forum at forum.nginx.org (ebondar)
Date: Fri, 22 Jun 2018 18:50:24 -0400
Subject: failover to the next upstream server if one of the servers is slow
Message-ID: <29700e8ca01c87744e51323362d17bb5.NginxMailingListEnglish@forum.nginx.org>

Hello, I'm trying to set up a failover configuration between two upstream servers, and it all works as expected.
But I want to cover the case where one of the upstream servers becomes very slow; I want to remove that server from rotation and move all requests to the second upstream server.

    upstream rubyfe {
        server qa-vmf01.int:443;
        server qa-vmf02.int:443;
    }

    server {
        listen 443 http2;
        server_name qa-www.example.com;
        gzip on;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
        proxy_buffering on;

        location / {
            proxy_read_timeout 1;
            proxy_pass https://rubyfe;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_next_upstream_timeout 3;
        }

When I put load on the first server

    # stress --cpu 80 --io 8 --vm 4 --vm-bytes 300M --timeout 180s
    # wrk -t2 -c20 -d30s https://qa-www.example.com

I caught the request from "wrk" in the log files:

    192.168.0.1 - - [22/Jun/2018:17:59:10 -0400] "GET / HTTP/1.1" 200 50393 "-" "-" "-" "192.168.0.2:443, 192.168.0.3:443 [ 1.001, 0.814 ]" "text/html; charset=utf-8" "-" "582bd020659715d66afafad533f7ac5d" "TLSv1.2/ECDHE-RSA-AES256-GCM-SHA384 "

but when I tried to fetch a result via "curl" at the same time, I saw:

    $ curl -IL https://qa-www.example.com/
    HTTP/1.1 502 Bad Gateway

Could you help me understand how to control the timeouts of requests to the upstream servers and, if one of the upstream servers becomes slow, force requests to switch to another server?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280259,280259#msg-280259

From nginx-forum at forum.nginx.org  Fri Jun 22 23:00:20 2018
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Fri, 22 Jun 2018 19:00:20 -0400
Subject: failover to the next upstream server if one of the servers is slow
In-Reply-To: <29700e8ca01c87744e51323362d17bb5.NginxMailingListEnglish@forum.nginx.org> References: <29700e8ca01c87744e51323362d17bb5.NginxMailingListEnglish@forum.nginx.org> Message-ID:

You may have to resort to Lua (openresty) and periodically perform, via a subrequest, a query which should indicate how fast an upstream is, and decide whether to take it offline (which can also be done with Lua).

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280259,280260#msg-280260

From nginx-forum at forum.nginx.org  Fri Jun 22 23:31:44 2018
From: nginx-forum at forum.nginx.org (ebondar)
Date: Fri, 22 Jun 2018 19:31:44 -0400
Subject: failover to the next upstream server if one of the servers is slow
In-Reply-To: References: <29700e8ca01c87744e51323362d17bb5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <38b8d7275f86d8bcf5b13da1c8fca789.NginxMailingListEnglish@forum.nginx.org>

Thanks itpp2012, I'll look at Lua.

If I understand correctly, we cannot specify a timeout for the session to the upstream server, trigger a timeout error, and force requests to move to another server?

proxy_read_timeout 1; -> proxy_next_upstream error timeout

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280259,280261#msg-280261
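For reference, how far the stock passive mechanisms go here (a sketch; the timeout values are illustrative, and this only reacts to requests that actually fail or time out -- active health checks need nginx Plus or the Lua route suggested above):

    upstream rubyfe {
        # after 3 failed attempts, skip the server for 30s (passive check)
        server qa-vmf01.int:443 max_fails=3 fail_timeout=30s;
        server qa-vmf02.int:443 max_fails=3 fail_timeout=30s;
    }

    location / {
        proxy_read_timeout          5s;  # a read stalled longer than 5s counts as a failure
        proxy_next_upstream         error timeout http_502 http_503 http_504;
        proxy_next_upstream_tries   2;   # retry the request once on the other server
        proxy_next_upstream_timeout 10s; # total time budget across all attempts
        proxy_pass https://rubyfe;
    }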
From nginx-forum at forum.nginx.org  Sat Jun 23 10:40:59 2018
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Sat, 23 Jun 2018 06:40:59 -0400
Subject: failover to the next upstream server if one of the servers is slow
In-Reply-To: <38b8d7275f86d8bcf5b13da1c8fca789.NginxMailingListEnglish@forum.nginx.org> References: <29700e8ca01c87744e51323362d17bb5.NginxMailingListEnglish@forum.nginx.org> <38b8d7275f86d8bcf5b13da1c8fca789.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8fbda00eaa49429690eb57863e67ef9c.NginxMailingListEnglish@forum.nginx.org>

I can't tell at the moment whether a timeout forces a node to become offline, but 60s is still a long time to wait (and decide) without actually knowing if a node is overloaded (it might just be busy, which does not always mean overloaded). There are tools, e.g. for edge routers, that poll a status page to decide whether to change routing or not. In your case you first need to determine what exactly the conditions are for a node to be slow and then design a way to detect this; then you can look for tooling to automate whatever you want to happen.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280259,280263#msg-280263

From mdounin at mdounin.ru  Sat Jun 23 13:37:27 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 23 Jun 2018 16:37:27 +0300
Subject: is there something like proxy_request_buffering in grpc module?
In-Reply-To: References: Message-ID: <20180623133727.GS32137@mdounin.ru>

Hello!

On Wed, Jun 20, 2018 at 09:15:55PM -0400, peanutgyz wrote:

> like grpc_request_buffering?

No. Both request and response buffering are always off in gRPC proxying, as there are streaming RPCs which cannot be used with buffering.

--
Maxim Dounin
http://mdounin.ru/

From pluknet at nginx.com  Mon Jun 25 12:04:58 2018
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Mon, 25 Jun 2018 15:04:58 +0300
Subject: How to log the number of bytes sent over a websocket?
In-Reply-To: <703ea93e-1f97-d7f1-725c-95a83a59804a@incise.co> References: <703ea93e-1f97-d7f1-725c-95a83a59804a@incise.co> Message-ID:

> On 22 Jun 2018, at 16:38, Fabrice Triboix wrote:
>
> Hi All,
>
> I am using nginx as a websocket reverse-proxy (this is working fine BTW). I would like to log the number of bytes sent (and ideally also received) over a websocket. If I use `$body_bytes_sent` in `log_format`, the entry in the access_log is always 0. As far as I can tell, a lot of data went through the websocket, so clearly `$body_bytes_sent` does not include data sent over a websocket.
>
> [..]

Make sure to run a recent enough version of nginx, at least 1.7.11.

--
Sergey Kandaurov

From ftriboix at incise.co  Mon Jun 25 12:32:52 2018
From: ftriboix at incise.co (Fabrice Triboix)
Date: Mon, 25 Jun 2018 13:32:52 +0100
Subject: How to log the number of bytes sent over a websocket?
In-Reply-To: References: <703ea93e-1f97-d7f1-725c-95a83a59804a@incise.co> Message-ID:

Thanks a lot for the suggestion. Mine is 1.4.6, which is quite old indeed. I will try again with the latest version.

On 25/06/18 13:04, Sergey Kandaurov wrote:
> [...]
> Make sure to run a recent enough version of nginx, at least 1.7.11.

From nginx-forum at forum.nginx.org  Tue Jun 26 04:48:27 2018
From: nginx-forum at forum.nginx.org (Szop)
Date: Tue, 26 Jun 2018 00:48:27 -0400
Subject: NGINX Proxy Cache Cache-Control
In-Reply-To: <5FB7FF9F-8216-428D-A77B-3430FD767540@me.com> References: <5FB7FF9F-8216-428D-A77B-3430FD767540@me.com> Message-ID: <9afdd82ab920bcc2e4c649bae0719507.NginxMailingListEnglish@forum.nginx.org>

Thanks for the fast reply.

> 1. The static content - jpg, png, tiff, etc. It looks as though you are serving them from your backend and caching them.
> Are they also being built on demand dynamically? If not, then why cache them? Why not deploy them to nginx and serve them directly?

There is a huge part of the static content that is stored on a shared filesystem and delivered from the backend. Unfortunately I have no chance to change this for now.

> 2. The text content - is this fragments of html that don't have names that end in html?

Yes, it seems so.

Cheers,
Szop

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280242,280275#msg-280275

From ftriboix at incise.co  Tue Jun 26 09:51:37 2018
From: ftriboix at incise.co (Fabrice Triboix)
Date: Tue, 26 Jun 2018 10:51:37 +0100
Subject: How to log the number of bytes sent over a websocket?
In-Reply-To: References: <703ea93e-1f97-d7f1-725c-95a83a59804a@incise.co> Message-ID: <30bb26a4-e4b8-54a2-9e36-d2081c77af43@incise.co>

That worked indeed! I used nginx-1.14.0, and it does work.

Thanks a lot!

On 25/06/18 13:04, Sergey Kandaurov wrote:
> [...]
> Make sure to run a recent enough version of nginx, at least 1.7.11.

From djczaski at gmail.com  Tue Jun 26 14:25:07 2018
From: djczaski at gmail.com (Danomi Czaski)
Date: Tue, 26 Jun 2018 10:25:07 -0400
Subject: changing secure_link_secret
Message-ID:

I would like to create a secure link and be able to easily change the secret as desired.

Short of rewriting the config file and restarting nginx, is there an easy way to do this? Ideally I'd like nginx to read the secret from a file.

From rpaprocki at fearnothingproductions.net  Tue Jun 26 14:55:58 2018
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Tue, 26 Jun 2018 07:55:58 -0700
Subject: changing secure_link_secret
In-Reply-To: References: Message-ID: <18F6C43A-4F5C-4844-B278-1CB587A7EF5E@fearnothingproductions.net>

You could either write a custom nginx module to read your file/env variable and provide it as an nginx variable, or you could use Lua/OpenResty to read/write the secret value (the latter is safer but more expensive).

Sent from my iPhone

> On Jun 26, 2018, at 07:25, Danomi Czaski wrote:
>
> I would like to create a secure link and be able to easily change the secret as desired.
>
> Short of rewriting the config file and restarting nginx, is there an easy way to do this? Ideally I'd like nginx to read the secret from a file.
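A config-only middle ground, if a reload (rather than a full restart) is acceptable: keep the secret in a small included file and use the secure_link_md5 variant, which accepts variables (secure_link_secret does not). A sketch -- the file path, location, and secret value are hypothetical:

    # /etc/nginx/link_secret.conf -- regenerate this file, then run: nginx -s reload
    geo $link_secret {
        default "hypothetical-secret";   # geo with a constant default acts as a "set" at http level
    }

    # in the main config, http context:
    include /etc/nginx/link_secret.conf;

    location /downloads/ {
        # client computes: base64url(md5("<expires><uri> <secret>"))
        secure_link     $arg_md5,$arg_expires;
        secure_link_md5 "$secure_link_expires$uri $link_secret";
        if ($secure_link = "")  { return 403; }
        if ($secure_link = "0") { return 410; }
    }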
From lists at viaduct-productions.com  Tue Jun 26 20:56:55 2018
From: lists at viaduct-productions.com (VP Lists)
Date: Tue, 26 Jun 2018 16:56:55 -0400
Subject: File Upload Permissions Issues
Message-ID:

Hi folks. I'm having a problem uploading any files of any significant size to a test site on my workstation.

    2018/06/26 16:50:20 [crit] 36196#0: *1099 open() "/usr/local/var/run/nginx/client_body_temp/0000000018" failed (13: Permission denied), client: 127.0.0.1, server: pass1.local, request: "POST /upload HTTP/1.1", host: "pass1.local:8080", referrer: "http://pass1.local:8080/upload"
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http finalize request: 500, "/upload?" a:1, c:1
    2018/06/26 16:50:20 [debug] 36196#0: *1099 event timer del: 16: 1530046280299
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http special response: 500, "/upload?"
    2018/06/26 16:50:20 [debug] 36196#0: *1099 HTTP/1.1 500 Internal Server Error
    Server: nginx/1.15.0
    Date: Tue, 26 Jun 2018 20:50:20 GMT
    Content-Type: text/html
    Content-Length: 595
    Connection: close
    2018/06/26 16:50:20 [debug] 36196#0: *1099 write new buf t:1 f:0 00007FACB10021A0, pos 00007FACB10021A0, size: 162 file: 0, size: 0
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http write filter: l:0 f:0 s:162
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http output filter "/upload?"
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http copy filter: "/upload?"
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http postpone filter "/upload?" 00007FACB10023C0
    2018/06/26 16:50:20 [debug] 36196#0: *1099 write old buf t:1 f:0 00007FACB10021A0, pos 00007FACB10021A0, size: 162 file: 0, size: 0
    2018/06/26 16:50:20 [debug] 36196#0: *1099 write new buf t:0 f:0 0000000000000000, pos 000000010A332120, size: 140 file: 0, size: 0
    2018/06/26 16:50:20 [debug] 36196#0: *1099 write new buf t:0 f:0 0000000000000000, pos 000000010A330F20, size: 53 file: 0, size: 0
    2018/06/26 16:50:20 [debug] 36196#0: *1099 write new buf t:0 f:0 0000000000000000, pos 000000010A330FD0, size: 402 file: 0, size: 0
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http write filter: l:1 f:0 s:757
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http write filter limit 0
    2018/06/26 16:50:20 [debug] 36196#0: *1099 writev: 757 of 757
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http write filter 0000000000000000
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http copy filter: 0 "/upload?"
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http finalize request: 0, "/upload?" a:1, c:1
    2018/06/26 16:50:20 [debug] 36196#0: *1099 event timer add: 16: 5000:1530046225299
    2018/06/26 16:50:20 [debug] 36196#0: *1099 http lingering close handler
    2018/06/26 16:50:20 [debug] 36196#0: *1099 recv: eof:0, avail:73728, err:0

My nginx.conf has no set "user", and here are the permissions set on the temp file upload folder for nginx:

    $ ll /usr/local/var/run/nginx/
    drwxr-xr-x  7 rich    admin   238B Dec  8  2016 .
    drwxr-xr-x  4 rich    admin   136B Jun 19 15:19 ..
    drwx------  2 nobody  admin    68B Dec  8  2016 client_body_temp

I have 4 workers owned by nobody:admin, and nginx is run as default, as root:admin.

Now this topic of permissions and "what user should run nginx" has come up before. Some say run as root, others say not. It's my workstation, so it doesn't really matter. It's my dev box. The issue comes down to production.

Is there one way all of this should be run without the worried security devs out there losing it? Since I'm here at another security issue with who runs what, maybe it's a good time to get a consensus on how all this should be set up.
Cheers _____________ Rich in Toronto @ VP From lists at viaduct-productions.com Tue Jun 26 21:09:43 2018 From: lists at viaduct-productions.com (VP Lists) Date: Tue, 26 Jun 2018 17:09:43 -0400 Subject: 413 Request Entity Too Large In-Reply-To: <20180619072623.GC8112@aleks-PC> References: <20180616080639.GC14440@aleks-PC> <20180619072623.GC8112@aleks-PC> Message-ID: <38452040-C828-4EC7-990E-54B7A6FA2C62@viaduct-productions.com> I am guessing it?s the permissions issue on the incoming temp folder. I just posted the same on the list, not published yet. 2018/06/26 16:50:20 [crit] 36196#0: *1099 open() "/usr/local/var/run/nginx/client_body_temp/0000000018" failed (13: Permission denied), client: 127.0.0.1, server: pass1.local, request: "POST /upload HTTP/1.1", host: "pass1.local:8080", referrer: "http://pass1.local:8080/upload" 2018/06/26 16:50:20 [debug] 36196#0: *1099 http finalize request: 500, "/upload?" a:1, c:1 2018/06/26 16:50:20 [debug] 36196#0: *1099 event timer del: 16: 1530046280299 2018/06/26 16:50:20 [debug] 36196#0: *1099 http special response: 500, "/upload?" 2018/06/26 16:50:20 [debug] 36196#0: *1099 HTTP/1.1 500 Internal Server Error Kaushal, your nginx is running as whom? Both your user and your workers. Second, what are the permissions on the following: $ ll /usr/local/var/run/nginx/ drwxr-xr-x 7 rich admin 238B Dec 8 2016 . drwxr-xr-x 4 rich admin 136B Jun 19 15:19 .. drwx------ 2 nobody admin 68B Dec 8 2016 client_body_temp > On Jun 19, 2018, at 3:26 AM, Aleksandar Lazic wrote: > > What's in the nginx error logs file? > > There should be more informations about the reason of the 413 as you can > see in the source. > > https://github.com/nginx/nginx/search?q=Request+Entity+Too+Large&unscoped_q=Request+Entity+Too+Large > >> Best Regards, > > Best regards > Aleks Cheers _____________ Rich in Toronto @ VP From mdounin at mdounin.ru Wed Jun 27 02:51:48 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 Jun 2018 05:51:48 +0300 Subject: File Upload Permissions Issues In-Reply-To: References: Message-ID: <20180627025148.GA35731@mdounin.ru> Hello! On Tue, Jun 26, 2018 at 04:56:55PM -0400, VP Lists wrote: > I?m having a problem uploading any files of any significant size to a test site on my workstation. > > 2018/06/26 16:50:20 [crit] 36196#0: *1099 open() "/usr/local/var/run/nginx/client_body_temp/0000000018" failed (13: Permission denied), client: 127.0.0.1, server: pass1.local, request: "POST /upload HTTP/1.1", host: "pass1.local:8080", referrer: "http://pass1.local:8080/upload" The error message speaks for itself: nginx has no permissions to write temporary files to the directory it was configured to write temporary files to. You have to fix this. [...] > My nginx.conf has no set ?user? This means that nginx will use the default user for worker processes as long as it is started as root. Usually this is nobody:nogroup, or whatever is set via configure arguments (see "nginx -V"). > and here are the permissions set on the temp file upload folder for nginx: > > $ ll /usr/local/var/run/nginx/ > drwxr-xr-x 7 rich admin 238B Dec 8 2016 . > drwxr-xr-x 4 rich admin 136B Jun 19 15:19 .. > drwx------ 2 nobody admin 68B Dec 8 2016 client_body_temp You have to check all path compontents. That is, check that nginx has at least "x" on "/", "/usr", "/usr/local", "/usr/local/var", "/usr/local/var/run". Additionally, if you have SELinux or equivalent enabled, you should check it as well. > I have 4 workers owned by nobody:admin, and nginx is run as > default, as root:admin. 
> > Now this topic of permissions and ?what user should run nginx? > has come up before. Some say run as root, others say not. It?s > my workstation, so it doesn?t really matter. It?s my dev box. > The issue comes down to production. > > Is there one way all of this should be run without the worried > security devs out there from losing it? Since I?m here at > another security issue with who runs what, maybe it?s a good > time to get a consensus on how all this should be set up. You should never run nginx worker processes as root unless you understand what you are doing and possible consequences. On the other hand, nginx master process can't do many required things - like binding to port 80 - without being root. As such, you have to run nginx itself (that is, nginx master process) as root. -- Maxim Dounin http://mdounin.ru/ From lists at viaduct-productions.com Wed Jun 27 04:56:09 2018 From: lists at viaduct-productions.com (VP Lists) Date: Wed, 27 Jun 2018 00:56:09 -0400 Subject: File Upload Permissions Issues In-Reply-To: <20180627025148.GA35731@mdounin.ru> References: <20180627025148.GA35731@mdounin.ru> Message-ID: > On Jun 26, 2018, at 10:51 PM, Maxim Dounin wrote: > > Hello! Hello there. Thanks for the reply. > On Tue, Jun 26, 2018 at 04:56:55PM -0400, VP Lists wrote: > >> I?m having a problem uploading any files of any significant size to a test site on my workstation. >> >> 2018/06/26 16:50:20 [crit] 36196#0: *1099 open() "/usr/local/var/run/nginx/client_body_temp/0000000018" failed (13: Permission denied), client: 127.0.0.1, server: pass1.local, request: "POST /upload HTTP/1.1", host: "pass1.local:8080", referrer: "http://pass1.local:8080/upload" > > The error message speaks for itself: nginx has no permissions to > write temporary files to the directory it was configured to write > temporary files to. You have to fix this. > > [...] > >> My nginx.conf has no set ?user? > > This means that nginx will use the default user for worker > processes as long as it is started as root. Usually this is > nobody:nogroup, or whatever is set via configure arguments (see > "nginx -V?). That command didn?t lend itself to anything. No user mentioned anywhere in that result. It?s running as root, with the workers being run as nobody. >> and here are the permissions set on the temp file upload folder for nginx: >> >> $ ll /usr/local/var/run/nginx/ >> drwxr-xr-x 7 rich admin 238B Dec 8 2016 . >> drwxr-xr-x 4 rich admin 136B Jun 19 15:19 .. >> drwx------ 2 nobody admin 68B Dec 8 2016 client_body_temp > > You have to check all path compontents. That is, check that nginx > has at least "x" on "/", "/usr", "/usr/local", "/usr/local/var", > "/usr/local/var/run?. 
OK, here's where things get interesting:

On MacOS El Capitan: --http-client-body-temp-path=/usr/local/var/run/nginx/client_body_temp

/usr                                        drwxr-xr-x@ 13 root    wheel   442B May 26  2017 usr
/usr/local                                  drwxr-xr-x  28 rich    admin   952B Mar 30 16:12 local
/usr/local/var                              drwx------  36 rich    admin   1.2K May  7 21:01 var
/usr/local/var/run                          drwxr-xr-x   4 rich    admin   136B Jun 19 15:19 run
/usr/local/var/run/nginx                    drwxr-xr-x   7 rich    admin   238B Dec  8  2016 nginx
/usr/local/var/run/nginx/client_body_temp   drwx------   2 nobody  admin    68B Dec  8  2016 client_body_temp

On FreeBSD 11.1-RELEASE: --http-client-body-temp-path=/var/tmp/nginx/client_body_temp

/var                              drwxr-xr-x  25 root  wheel  25 May  7 08:48 var
/var/tmp                          drwxrwxrwt   4 root  wheel   4 Jul  7  2017 tmp
/var/tmp/nginx                    drwxr-xr-x   7 root  wheel   7 Jul 13  2017 nginx
/var/tmp/nginx/client_body_temp   drwx------   2 www   wheel   2 Jul  7  2017 client_body_temp

On two different boxes, two different OSes, showing variable eXecution permissions within the path.  Not only that, but in both instances the client_body_temp permissions show "drwx------", and for two different owner:group combinations.

Why would nginx allow this to happen?  Is it not thinkable that nginx would require a clear path to the directory responsible for receiving file uploads, or that the maintainers would treat this as a criterion for installation?  I find this quite odd.

Running around patching up path permissions to installed directories, specific to nginx, is truly strange.

> Additionally, if you have SELinux or equivalent enabled, you
> should check it as well.
> 
>> I have 4 workers owned by nobody:admin, and nginx is run as
>> default, as root:admin.
>> 
>> Now this topic of permissions and "what user should run nginx"
>> has come up before.  Some say run as root, others say not.  It's
>> my workstation, so it doesn't really matter.  It's my dev box.
>> The issue comes down to production.
>> 
>> Is there one way all of this should be run without the worried
>> security devs out there losing it?  Since I'm here at
>> another security issue with who runs what, maybe it's a good
>> time to get a consensus on how all this should be set up.
> 
> You should never run nginx worker processes as root unless you
> understand what you are doing and possible consequences.

I don't.  I don't even set the nginx master to run as root, but it does.

nginx.conf:

# user root admin;   # commented out

> On the other hand, nginx master process can't do many required
> things - like binding to port 80 - without being root.  As such,
> you have to run nginx itself (that is, nginx master process) as
> root.

This last paragraph has people doing backflips to interject.  OSX has chosen to run as root, and I'm running vhosts on port 80.  FreeBSD also has the master user commented out as "root", but working on port 8080.  I doubt I asked it to work on port 80 so far, given the setup I have.

In any case, I fixed it.  /usr/local/var was a bit closed up.  Not friendly.

Cheers
_____________
Rich in Toronto @ VP


From mdounin at mdounin.ru  Wed Jun 27 06:02:53 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 27 Jun 2018 09:02:53 +0300
Subject: File Upload Permissions Issues
In-Reply-To: 
References: <20180627025148.GA35731@mdounin.ru>
Message-ID: <20180627060252.GG35731@mdounin.ru>

Hello!

On Wed, Jun 27, 2018 at 12:56:09AM -0400, VP Lists wrote:

[...]
> OK, here's where things get interesting:
> 
> On MacOS El Capitan: --http-client-body-temp-path=/usr/local/var/run/nginx/client_body_temp
> 
> /usr            drwxr-xr-x@ 13 root  wheel   442B May 26  2017 usr
> /usr/local      drwxr-xr-x  28 rich  admin   952B Mar 30 16:12 local
> /usr/local/var  drwx------  36 rich  admin   1.2K May  7 21:01 var

Clearly "nobody" has no rights to work with anything in
/usr/local/var.  That's what causes the error you've faced.  You
have to fix it for things to work.

[...]

> On two different boxes, two different OSes, showing variable
> eXecution permissions within the path.  Not only that, but in
> both instances the client_body_temp permissions show "drwx------",
> and for two different owner:group combinations.
> 
> Why would nginx allow this to happen?  Is it not thinkable that
> nginx would require a clear path to the directory responsible
> for receiving file uploads, or that the maintainers would treat
> this as a criterion for installation?  I find this quite odd.
> 
> Running around patching up path permissions to installed
> directories, specific to nginx, is truly strange.

Permissions on FreeBSD are perfectly fine.

Permissions on macOS are broken due to incorrect permissions on
/usr/local/var, and it's clearly not nginx's business to do anything
with permissions on the system-wide directory.

If you think the packaging system you've used to install things into
/usr/local/ could be better at maintaining correct permissions on
various folders under /usr/local/ - you may want to contact the
authors of the packaging system.

[...]

-- 
Maxim Dounin
http://mdounin.ru/


From nginx-forum at forum.nginx.org  Wed Jun 27 09:57:40 2018
From: nginx-forum at forum.nginx.org (duda)
Date: Wed, 27 Jun 2018 05:57:40 -0400
Subject: Wait for backend
Message-ID: 

Hi

I have one backend:

upstream backend_1 {
    server 127.0.0.1:35510;
}

and server:

server {
    ...
    location / {
        proxy_pass http://backend_1;
    }
}

Sometimes I have to restart my backend and it is unavailable for 3-5 sec (port unreachable).

How can I tell nginx to "wait" for the backend and not respond with 502?  It should hold the connection for 5 seconds and respond 502 to the client only if the backend is still unreachable after that interval.

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280293,280293#msg-280293


From atif.ali at gmail.com  Wed Jun 27 12:01:48 2018
From: atif.ali at gmail.com (aT)
Date: Wed, 27 Jun 2018 16:01:48 +0400
Subject: Wait for backend
In-Reply-To: 
References: 
Message-ID: 

Look into:

*proxy_read_timeout* *time*;
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout

*proxy_connect_timeout* *time*;
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout

On Wed, Jun 27, 2018 at 1:57 PM duda wrote:

> Hi
>
> I have one backend:
>
> upstream backend_1 {
>     server 127.0.0.1:35510;
> }
>
> and server:
>
> server {
>     ...
>     location / {
>         proxy_pass http://backend_1;
>     }
> }
>
> Sometimes I have to restart my backend and it is unavailable for 3-5 sec
> (port unreachable).
>
> How can I tell nginx to "wait" for the backend and not respond with 502?
> It should hold the connection for 5 seconds and respond 502 to the client
> only if the backend is still unreachable after that interval.
>
> Thanks
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,280293,280293#msg-280293

-- 
Syed Atif Ali
Desk: 971 4 4493131


From mailinglist at unix-solution.de  Wed Jun 27 13:08:50 2018
From: mailinglist at unix-solution.de (basti)
Date: Wed, 27 Jun 2018 15:08:50 +0200
Subject: Combining Basic Authentication with Access Restriction by IP Address and auth_basic off
Message-ID: <161f9dc4-7948-b72a-e6be-1feb03aafbb1@unix-solution.de>

Hello,
I have a config like:

server {

    ...
    # combine basic auth and ip whitelisting
    # https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/
    satisfy any;
    allow ;
    deny all;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/nx4/.htpasswd;

    location /.well-known/acme-challenge/ {
        auth_basic off;
        default_type "text/plain";
        alias /var/lib/dehydrated/acme-challenges/;
    }
}

But it does not seem to work.
Access from the allowed IP is fine; all others get:

2018/06/27 14:54:12 [error] 1333#1333: *11176 access forbidden by rule,
client: ...

nginx -v
nginx version: nginx/1.10.3

Can anyone confirm this?

Best regards


From lists at viaduct-productions.com  Wed Jun 27 13:42:04 2018
From: lists at viaduct-productions.com (VP Lists)
Date: Wed, 27 Jun 2018 09:42:04 -0400
Subject: File Upload Permissions Issues
In-Reply-To: <20180627060252.GG35731@mdounin.ru>
References: <20180627025148.GA35731@mdounin.ru>
 <20180627060252.GG35731@mdounin.ru>
Message-ID: <6FC6599A-69C1-48BB-8286-810F3AA7703F@viaduct-productions.com>

> On Jun 27, 2018, at 2:02 AM, Maxim Dounin wrote:
> 
> Hello!

Hello again!

> On Wed, Jun 27, 2018 at 12:56:09AM -0400, VP Lists wrote:
> 
> [...]
> 
>> OK, here's where things get interesting:
>> 
>> On MacOS El Capitan: --http-client-body-temp-path=/usr/local/var/run/nginx/client_body_temp
>> 
>> /usr            drwxr-xr-x@ 13 root  wheel   442B May 26  2017 usr
>> /usr/local      drwxr-xr-x  28 rich  admin   952B Mar 30 16:12 local
>> /usr/local/var  drwx------  36 rich  admin   1.2K May  7 21:01 var
> 
> Clearly "nobody" has no rights to work with anything in
> /usr/local/var.  That's what causes the error you've faced.  You
> have to fix it for things to work.

I changed it to 755.  Works now.

> [...]
> 
>> On two different boxes, two different OSes, showing variable
>> eXecution permissions within the path.  Not only that, but in
>> both instances the client_body_temp permissions show "drwx------",
>> and for two different owner:group combinations.
>> 
>> Why would nginx allow this to happen?  Is it not thinkable that
>> nginx would require a clear path to the directory responsible
>> for receiving file uploads, or that the maintainers would treat
>> this as a criterion for installation?  I find this quite odd.
>> 
>> Running around patching up path permissions to installed
>> directories, specific to nginx, is truly strange.
> 
> Permissions on FreeBSD are perfectly fine.
> 
> Permissions on macOS are broken due to incorrect permissions on
> /usr/local/var, and it's clearly not nginx's business to do anything
> with permissions on the system-wide directory.
> 
> If you think the packaging system you've used to install things into
> /usr/local/ could be better at maintaining correct permissions on
> various folders under /usr/local/ - you may want to contact the
> authors of the packaging system.
With regards to system-wide directories and nginx choosing this as the target directory, I'm not versed in who has what right to do what during installation, with regards to chmod, chown, chgrp, etc.  The same goes for package managers vs. nginx.  I was just under the assumption there would be a collective agreement that a sticky permission (t) would apply, so that installation could work out of the box.  But I guess not, so the /usr/local/var permissions, even though it is a system-wide directory, needed to be changed.

Cheers
_____________
Rich in Toronto @ VP


From ru at nginx.com  Wed Jun 27 13:57:36 2018
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 27 Jun 2018 16:57:36 +0300
Subject: Combining Basic Authentication with Access Restriction by IP Address and auth_basic off
In-Reply-To: <161f9dc4-7948-b72a-e6be-1feb03aafbb1@unix-solution.de>
References: <161f9dc4-7948-b72a-e6be-1feb03aafbb1@unix-solution.de>
Message-ID: <20180627135736.GF62373@lo0.su>

On Wed, Jun 27, 2018 at 03:08:50PM +0200, basti wrote:
> Hello,
> I have a config like:
> 
> server {
> 
>     ...
>     # combine basic auth and ip whitelisting
>     # https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/
>     satisfy any;
>     allow ;
>     deny all;
> 
>     auth_basic "Restricted";
>     auth_basic_user_file /etc/nginx/nx4/.htpasswd;
> 
>     location /.well-known/acme-challenge/ {
>         auth_basic off;
>         default_type "text/plain";
>         alias /var/lib/dehydrated/acme-challenges/;
>     }
> }
> 
> But it does not seem to work.
> Access from the allowed IP is fine; all others get:
> 
> 2018/06/27 14:54:12 [error] 1333#1333: *11176 access forbidden by rule,
> client: ...
> 
> nginx -v
> nginx version: nginx/1.10.3
> 
> Can anyone confirm this?

Since you have switched auth_basic off, the only enabled authentication
left is by client address, and your inherited configuration says it's
denied for everything except .  Put "allow all" into the
"location /.well-known/acme-challenge/" to have it working for all.


From mailinglist at unix-solution.de  Wed Jun 27 14:01:19 2018
From: mailinglist at unix-solution.de (basti)
Date: Wed, 27 Jun 2018 16:01:19 +0200
Subject: Combining Basic Authentication with Access Restriction by IP Address and auth_basic off
In-Reply-To: <20180627135736.GF62373@lo0.su>
References: <161f9dc4-7948-b72a-e6be-1feb03aafbb1@unix-solution.de>
 <20180627135736.GF62373@lo0.su>
Message-ID: <765236f6-a3df-1873-2d4e-298aa242156c@unix-solution.de>

On 27.06.2018 15:57, Ruslan Ermilov wrote:
> Since you have switched auth_basic off, the only enabled authentication
> left is by client address, and your inherited configuration says it's
> denied for everything except .  Put "allow all" into the
> "location /.well-known/acme-challenge/" to have it working for all.

Thanks for the hints.

Best Regards,


From djczaski at gmail.com  Thu Jun 28 00:25:38 2018
From: djczaski at gmail.com (Danomi Czaski)
Date: Wed, 27 Jun 2018 20:25:38 -0400
Subject: 400 bad request with ssl_verify_client optional
Message-ID: 

I get 400 bad request when client certs are used early even though I
have ssl_verify_client optional.

nginx: [info] 9612#0: *338 client SSL certificate verify error:
(9:certificate is not yet valid) while reading client request headers,

Is there any way to ignore the time check?
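For context, the configuration being described would look roughly like this (a minimal sketch; the file paths are illustrative, not taken from the report):

server {
    listen 443 ssl;

    ssl_certificate        /path/to/server.crt;      # illustrative paths
    ssl_certificate_key    /path/to/server.key;
    ssl_client_certificate /path/to/client-ca.crt;

    # "optional" asks for a client certificate but accepts clients that
    # send none; a certificate that is sent and fails verification
    # (here: not yet valid) still makes nginx reject the request with 400.
    ssl_verify_client optional;
}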
From pluknet at nginx.com  Thu Jun 28 10:06:17 2018
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 28 Jun 2018 13:06:17 +0300
Subject: 400 bad request with ssl_verify_client optional
In-Reply-To: 
References: 
Message-ID: <154F1538-9433-4577-A7AE-ADACA7E761B2@nginx.com>

> On 28 Jun 2018, at 03:25, Danomi Czaski wrote:
> 
> I get 400 bad request when client certs are used early even though I
> have ssl_verify_client optional.
> 
> nginx: [info] 9612#0: *338 client SSL certificate verify error:
> (9:certificate is not yet valid) while reading client request headers,
> 
> Is there any way to ignore the time check?

No way, but you may want to try "optional_no_ca" if it's also not trusted.

-- 
Sergey Kandaurov


From nginx-forum at forum.nginx.org  Thu Jun 28 10:28:04 2018
From: nginx-forum at forum.nginx.org (woodprogrammer)
Date: Thu, 28 Jun 2018 06:28:04 -0400
Subject: Nginx Serve different Proxy Pass
Message-ID: <4b2455ffd9b4d9171aafe29241517275.NginxMailingListEnglish@forum.nginx.org>

Hi everyone,

I have three different Jenkins machines serving behind NGINX.  My sample nginx file is shown below; I have three different files, one for each machine.

server {
    listen 80;
    server_name "";
    access_log off;

    location /USERNAME {
        proxy_pass http://USERNAME_MACHINE_ID:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}

When the username changes, I want to reach the corresponding Jenkins machine.  How can I solve this problem?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280316,280316#msg-280316


From nginx-forum at forum.nginx.org  Thu Jun 28 12:14:42 2018
From: nginx-forum at forum.nginx.org (duda)
Date: Thu, 28 Jun 2018 08:14:42 -0400
Subject: Wait for backend
In-Reply-To: 
References: 
Message-ID: 

What does not work: nginx immediately returns "502 Bad Gateway" to the client.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280293,280319#msg-280319


From nginx-forum at forum.nginx.org  Thu Jun 28 12:15:10 2018
From: nginx-forum at forum.nginx.org (duda)
Date: Thu, 28 Jun 2018 08:15:10 -0400
Subject: Wait for backend
In-Reply-To: 
References: 
Message-ID: 

*That is

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280293,280320#msg-280320


From mdounin at mdounin.ru  Thu Jun 28 13:27:25 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 28 Jun 2018 16:27:25 +0300
Subject: 400 bad request with ssl_verify_client optional
In-Reply-To: 
References: 
Message-ID: <20180628132725.GK35731@mdounin.ru>

Hello!

On Wed, Jun 27, 2018 at 08:25:38PM -0400, Danomi Czaski wrote:

> I get 400 bad request when client certs are used early even though I
> have ssl_verify_client optional.
> 
> nginx: [info] 9612#0: *338 client SSL certificate verify error:
> (9:certificate is not yet valid) while reading client request headers,
> 
> Is there any way to ignore the time check?
The only thing you can do if a client presents an invalid
certificate is to handle this via the error_page directive, see
here:

http://nginx.org/en/docs/http/ngx_http_ssl_module.html#errors

In the error_page you can even recover to normal request handling,
but I wouldn't recommend doing this, as this can easily result in
security problems if a configuration uses $ssl_client_* variables
without checking $ssl_client_verify first.

-- 
Maxim Dounin
http://mdounin.ru/


From michael.friscia at yale.edu  Thu Jun 28 13:27:58 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Thu, 28 Jun 2018 13:27:58 +0000
Subject: Cache question
Message-ID: 

I'm working through use cases for cache in a presentation and had a thought jump into my head.  We have a server block where most things are cached, but a few locations are set not to use the cache.  But the thought is that even though we don't want to use a local cache and always fetch from upstream, is it possible to still keep a cache copy that could then be served if the upstream host sends anything other than a 200 response?

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu


From nginx-forum at forum.nginx.org  Thu Jun 28 15:19:38 2018
From: nginx-forum at forum.nginx.org (donald.williams.0018)
Date: Thu, 28 Jun 2018 11:19:38 -0400
Subject: nginx + grpc-web
Message-ID: <3f5b3d971ac9b2c08af4c3afd4601f4c.NginxMailingListEnglish@forum.nginx.org>

I am trying to use nginx + grpc + the grpc-web client JS library
(https://github.com/grpc/grpc-web).

Nginx-1 is compiled using the following setup:

nginx version: nginx/1.15.0
built by gcc 7.2.0 (Ubuntu 7.2.0-8ubuntu3.1~16.04.york0)
built with OpenSSL 1.0.2g 1 Mar 2016
TLS SNI support enabled
configure arguments: --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module

Nginx-2, which comes with the grpc-web javascript library on the docker image, is compiled using the following setup:

nginx version: nginx/1.11.13
built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9)
built with OpenSSL 1.0.2h 3 May 2016
TLS SNI support enabled
configure arguments: --with-http_ssl_module --with-http_v2_module --with-cc-opt='-I /usr/local/include -I /github/grpc-web -I /github/grpc-web/third_party/grpc/third_party/protobuf/include -I /github/grpc-web/third_party/grpc/third_party/protobuf/src -I /github/grpc-web/third_party/grpc/include -I /github/grpc-web/third_party/grpc' --with-ld-opt='-L/github/grpc-web/third_party/grpc/third_party/protobuf/src/.libs -L/github/grpc-web/third_party/grpc/libs/opt -lgrpc++ -lgrpc -lprotobuf -lpthread -ldl -lrt -lstdc++ -lm' --with-openssl=/github/grpc-web/third_party/openssl --add-module=/github/grpc-web/net/grpc/gateway/nginx

The grpc service is running on port 50051.  I want to use the grpc-web client JS library to call the grpc service from a webpage, and I use the same following nginx.conf for Nginx-1 and Nginx-2.
master_process off;
daemon off;
worker_processes 1;
pid nginx.pid;
error_log stderr debug;

events {
    worker_connections 1024;
}

http {
    access_log off;
    client_max_body_size 0;
    client_body_temp_path client_body_temp;
    proxy_temp_path proxy_temp;
    proxy_request_buffering off;

    server {
        listen 8080;
        server_name localhost;

        location ~ \.(html|js)$ {
            root html;
        }

        location /helloworld.Greeter {
            grpc_pass localhost:50051;
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
                add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Transfer-Encoding,Custom-Header-1,X-Accept-Content-Transfer-Encoding,X-Accept-Response-Streaming,X-User-Agent';
                add_header 'Access-Control-Expose-Headers' 'Content-Transfer-Encoding';
            }
        }
    }
}

If I use Nginx-2, the web JS client can connect to the service.

For Nginx-1, the web JS client cannot connect to the service.  Nginx returns the following error:

[error] 26125#26125: *1 upstream rejected request with error 2 while reading response header from upstream, client: 192.168.50.101, server: localhost, request: "POST /helloworld.Greeter/SayHello HTTP/1.1", upstream: "grpc://127.0.0.1:50051", host: "localhost:8080", referrer: "http://localhost:8080/hello.html"

From the chrome console, I received the following error:

POST http://localhost:8080/helloworld.Greeter/SayHello 502 (Bad Gateway)
goog.net.XhrIo.send @ compiled.js:395
module$contents$grpc$web$GrpcWebClientBase_GrpcWebClientBase.rpcCall @ compiled.js:438
proto.helloworld.GreeterClient.sayHello @ compiled.js:631
echo @ hello.html:49
send @ hello.html:66
(anonymous) @ hello.html:78
dispatch @ jquery.min.js:3
q.handle @ jquery.min.js:3

Uncaught Error: Unknown base64 encoding at char: <
    at c (compiled.js:429)
    at Object.goog.crypt.base64.decodeStringInternal_ (compiled.js:429)
    at Object.goog.crypt.base64.decodeStringToUint8Array (compiled.js:428)
    at goog.net.XhrIo. (compiled.js:432)
    at goog.net.XhrIo.goog.events.EventTarget.fireListeners (compiled.js:279)
    at Function.goog.events.EventTarget.dispatchEventInternal_ (compiled.js:281)
    at goog.net.XhrIo.goog.events.EventTarget.dispatchEvent (compiled.js:276)
    at goog.net.XhrIo.onReadyStateChangeHelper_ (compiled.js:401)
    at goog.net.XhrIo.onReadyStateChangeEntryPoint_ (compiled.js:400)
    at goog.net.XhrIo.onReadyStateChange_ (compiled.js:400)

It seems that Nginx-1 might have some issue with the encoding or the translation of http1 to http2.  Do you have any suggestion of what the issue is?

Thanks a lot!
Don

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280325,280325#msg-280325


From peter_booth at me.com  Thu Jun 28 16:46:44 2018
From: peter_booth at me.com (Peter Booth)
Date: Thu, 28 Jun 2018 12:46:44 -0400
Subject: Cache question
In-Reply-To: 
References: 
Message-ID: 

Sure is.  Look at:

stale-if-error
stale-while-revalidate
proxy_cache_use_stale
proxy_cache_lock
etc.

Can you describe the use case a bit more?  Why don't you want to cache this particular content?  Is it that it's dynamic and a fresher version is always preferable, but the stale one is good enough in the event of an error?  Or is there more to it than that?

Sometimes people build sites that are "more dynamic" than they need to be because they didn't consider a static site that gets periodically regenerated.
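As an illustration, a serve-stale setup can look roughly like this (a minimal sketch, untested; the cache zone, timings and upstream name are illustrative, not from your setup):

# in the http{} block
proxy_cache_path /var/cache/nginx keys_zone=feeds:10m inactive=24h;

server {
    ...
    location /feeds/ {
        proxy_cache feeds;
        proxy_cache_valid 200 10s;     # fresh entries expire quickly
        # keep serving the last good copy while a new one is being
        # fetched, and whenever the upstream times out or answers 5xx
        proxy_cache_use_stale updating error timeout
                              http_500 http_502 http_503 http_504;
        proxy_cache_lock on;           # one upstream fetch per key
        proxy_pass http://backend;
    }
}

With something along these lines, a 503 from the upstream during a release would be masked by the last good cached response, which is the stale-if-error idea above expressed with nginx directives.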
Peter

> On 28 Jun 2018, at 9:27 AM, Friscia, Michael wrote:
> 
> I'm working through use cases for cache in a presentation and had a thought jump into my head.  We have a server block where most things are cached, but a few locations are set not to use the cache.  But the thought is that even though we don't want to use a local cache and always fetch from upstream, is it possible to still keep a cache copy that could then be served if the upstream host sends anything other than a 200 response?
> 
> ___________________________________________
> Michael Friscia
> Office of Communications
> Yale School of Medicine
> (203) 737-7932 - office
> (203) 931-5381 - mobile
> http://web.yale.edu


From michael.friscia at yale.edu  Thu Jun 28 17:19:08 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Thu, 28 Jun 2018 17:19:08 +0000
Subject: Cache question
In-Reply-To: 
References: 
Message-ID: 

Yes, the content is dynamic: basically a set of JSON RESTful applications we call feeds, a few of which we do not cache for specific reasons, but most we do.  The use case is simple: if we have to release new code, these feeds are down and returning a 503 error, but if we had a cache that would serve stale during that time then, in theory, our feeds would never go down.

As for the dynamic site thing, I completely agree.  We have very dynamic content that we still cache.  Much of our site is built using taxonomy-driven feeds, but even with our search box the standard searches are all cached, since we know the most common medical terminologies that will be queried.  But for the things we can't cache, it would be pretty bad.

Thanks for the commands to look at, this will be really helpful.

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

From: nginx on behalf of Peter Booth
Reply-To: "nginx at nginx.org"
Date: Thursday, June 28, 2018 at 12:46 PM
To: Wiktor Kwapisiewicz via nginx
Subject: Re: Cache question

Sure is.  Look at:

stale-if-error
stale-while-revalidate
proxy_cache_use_stale
proxy_cache_lock
etc.

Can you describe the use case a bit more?  Why don't you want to cache this particular content?  Is it that it's dynamic and a fresher version is always preferable, but the stale one is good enough in the event of an error?  Or is there more to it than that?

Sometimes people build sites that are "more dynamic" than they need to be because they didn't consider a static site that gets periodically regenerated.

Peter

On 28 Jun 2018, at 9:27 AM, Friscia, Michael wrote:

I'm working through use cases for cache in a presentation and had a thought jump into my head.  We have a server block where most things are cached, but a few locations are set not to use the cache.  But the thought is that even though we don't want to use a local cache and always fetch from upstream, is it possible to still keep a cache copy that could then be served if the upstream host sends anything other than a 200 response?
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu


From mdounin at mdounin.ru  Fri Jun 29 15:12:33 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 29 Jun 2018 18:12:33 +0300
Subject: nginx + grpc-web
In-Reply-To: <3f5b3d971ac9b2c08af4c3afd4601f4c.NginxMailingListEnglish@forum.nginx.org>
References: <3f5b3d971ac9b2c08af4c3afd4601f4c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180629151233.GL35731@mdounin.ru>

Hello!

On Thu, Jun 28, 2018 at 11:19:38AM -0400, donald.williams.0018 wrote:

> I am trying to use nginx + grpc + the grpc-web client JS library
> (https://github.com/grpc/grpc-web).
> 
> Nginx-1 is compiled using the following setup:
> 
> nginx version: nginx/1.15.0
> built by gcc 7.2.0 (Ubuntu 7.2.0-8ubuntu3.1~16.04.york0)
> built with OpenSSL 1.0.2g 1 Mar 2016
> TLS SNI support enabled
> configure arguments: --with-threads --with-file-aio
> --with-http_ssl_module --with-http_v2_module

[...]

> If I use Nginx-2, the web JS client can connect to the service.
> 
> For Nginx-1, the web JS client cannot connect to the service.  Nginx returns
> the following error:
> 
> [error] 26125#26125: *1 upstream rejected request with error 2 while
> reading response header from upstream, client: 192.168.50.101, server:
> localhost, request: "POST /helloworld.Greeter/SayHello HTTP/1.1", upstream:
> "grpc://127.0.0.1:50051", host: "localhost:8080", referrer:
> "http://localhost:8080/hello.html"

[...]

> It seems that Nginx-1 might have some issue with the encoding or the
> translation of http1 to http2.  Do you have any suggestion of what
> the issue is?

gRPC and gRPC-Web are different protocols.  gRPC-Web clients cannot
connect to gRPC services without special translation from gRPC-Web
to gRPC, and this is not something the gRPC proxy module in nginx
does.

-- 
Maxim Dounin
http://mdounin.ru/


From nginx-forum at forum.nginx.org  Fri Jun 29 23:48:08 2018
From: nginx-forum at forum.nginx.org (donald.williams.0018)
Date: Fri, 29 Jun 2018 19:48:08 -0400
Subject: nginx + grpc-web
In-Reply-To: <20180629151233.GL35731@mdounin.ru>
References: <20180629151233.GL35731@mdounin.ru>
Message-ID: 

Thanks a lot for your explanation Maxim!

Cheers,
Don

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280325,280345#msg-280345


From nginx-forum at forum.nginx.org  Sat Jun 30 18:54:19 2018
From: nginx-forum at forum.nginx.org (shiz)
Date: Sat, 30 Jun 2018 14:54:19 -0400
Subject: OPTIONS request failing when issued from CDN
Message-ID: 

I can make the request easily from localhost:

curl -i -X OPTIONS http://www.server.com/css/reset.css

->

xxx.xxx.xxx.190 - - [30/Jun/2018:11:33:53 -0700] "OPTIONS /css/reset.css HTTP/1.1" 200 0 "-" "curl/7.38.0"

HTTP/1.1 200 OK
Server: nginx
Date: Sat, 30 Jun 2018 18:47:49 GMT
Content-Type: text/css
Content-Length: 0
Connection: keep-alive
Expect-CT: enforce; max-age=3600
Strict-Transport-Security: max-age=0

However, when the CDN does it, it fails with 405.  I'd like to return a 200, and I'd prefer not to restrict the CDN on css/js/images at all.
I already have this in my config:

if ($request_method = OPTIONS ) {
    return 200;
}

However, the results are not what I expect:

```
185.180.15.73 - - [30/Jun/2018:11:27:31 -0700] "OPTIONS /css/reset.css HTTP/1.1" 405 568 "http://www.server.com/goonet.php?tsp=https://www.goo-net.com/cgi-bin/fsearch/goo_used_search.cgi%3Fcategory%3DUSD%26phrase%3D%25E3%2583%2580%25E3%2583%25B3%25E3%2583%2597%26query%3D%25E3%2583%2580%25E3%2583%25B3%25E3%2583%2597%26page%3D20#" "Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 5 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko; googleweblight) Chrome/38.0.1025.166 Mobile Safari/535.19"
```

I tried the `return 200` only today.  Prior to that, every solution proposed on the internet resulted in all the CSS being unavailable, without exception.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280352,280352#msg-280352
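One pattern that is sometimes suggested for this situation is to answer OPTIONS explicitly inside the location that serves the assets, rather than at server level (a minimal sketch, untested against the setup above; the location regex and the Allow list are illustrative):

location ~* \.(css|js|png|jpe?g|gif)$ {
    # answer OPTIONS directly instead of letting the static file
    # handler reject the method with 405
    if ($request_method = OPTIONS) {
        add_header Allow "GET, HEAD, OPTIONS" always;
        return 200;
    }
    # existing static file handling (root, expires, ...) goes here
}

Since curl from localhost already gets a 200, it may also be worth confirming which server block the CDN's requests actually land in; a request matched by a different virtual host would never reach an `if` placed elsewhere.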