From semackenzie at gmail.com Sun Nov 1 05:00:58 2015 From: semackenzie at gmail.com (Scott E. MacKenzie) Date: Sun, 1 Nov 2015 13:00:58 +0800 Subject: Set up version control of local files with the help of nginx server In-Reply-To: <20151031084037.GQ3095@daoine.org> References: <20151031084037.GQ3095@daoine.org> Message-ID: Suggest looking at gitlab: https://about.gitlab.com/downloads/ It works well with Nginx and meets most needs for version control. If you stick with the standard bundle and plugins then your upgrade management for gitlab will remain clean and simple. IMHO On 31 Oct 2015 16:40, "Francis Daly" wrote: > On Tue, Oct 20, 2015 at 08:41:38PM +0200, Tie Cheng wrote: > > Hi there, > > > I have bought a dedicated server from DigitalOcean, and configured nginx. > > > > I want to version control my local files, and I am wondering if I could > > commit and update them from time to time to this server. > > nginx is a web server. > > version control is independent of a web server. > > So you should be able to pick whatever version control system you want, > and if it provides a web interface, reverse proxy to that through nginx. > > But the web interface may not provide all the version control > facilities. And if it uses more than GET and POST (DAV parts, for > example), it may be more tricky. > > > It seems that the setup is not straightforward (some posts say that we > have > > to use Apache for svn). I don't mind if it is svn or git, I just want to > > follow an easy approach. > > I would suggest: first pick the version control system that you want to > use. > > Then search for "that and nginx" to see if there is a recipe that someone > else says works. > > > Could any one help? > > I could say "use fossil in scgi mode"; but if you have reasons to prefer > svn, then that wouldn't be helpful. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiaokai.wang at live.com Mon Nov 2 06:14:17 2015 From: xiaokai.wang at live.com (Xiaokai Wang) Date: Mon, 2 Nov 2015 06:14:17 +0000 Subject: =?UTF-8?Q?different_centos_version_+_different_nginx_version_make_differen?= =?UTF-8?Q?t_performance=E2=80=8F?= Message-ID: hi all, I find a performace decreasing when I update centos5.4 to centos7.1 and the same time nginx-1.2.7 to nginx-1.8.0. Exacted information as below picture: centos5.4 + nginx-1.2.7 centos7.1 + nginx-1.8.0 From the pictures, we can see that updated environment cpu-load average is almostly doubled previous environment. Of course the sysctl.conf and nginx-conf are not changed after updating environment. It confused me and I dont know why. Anybody meet the problem? Please give me a help, thanks. -----Regards,Xiaokai -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiaokai.wang at live.com Mon Nov 2 07:48:46 2015 From: xiaokai.wang at live.com (Xiaokai Wang) Date: Mon, 2 Nov 2015 07:48:46 +0000 Subject: different centos version + different nginx version make different performance Message-ID: hi all, Sorry to bother you again, I find pictures cannt show clearly, so copy statistics below. thanks again. I find a performace decreasing when I update centos5.4 to centos7.1 and the same time nginx-1.2.7 to nginx-1.8.0. 
Exacted statistics as below:

Tasks: 177 total, 3 running, 173 sleeping, 0 stopped, 1 zombie
Cpu0 : 3.7%us, 3.3%sy, 0.0%ni, 92.4%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu1 : 4.7%us, 3.7%sy, 0.0%ni, 91.0%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu2 : 3.6%us, 3.0%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu3 : 4.0%us, 3.0%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu4 : 3.7%us, 3.0%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu5 : 3.3%us, 2.6%sy, 0.0%ni, 93.4%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu6 : 3.3%us, 3.3%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu7 : 4.0%us, 2.7%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu8 : 4.0%us, 2.7%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu9 : 4.0%us, 3.7%sy, 0.0%ni, 91.3%id, 0.7%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu10 : 2.7%us, 2.0%sy, 0.0%ni, 82.4%id, 0.0%wa, 0.7%hi, 12.3%si, 0.0%st
Cpu11 : 2.3%us, 2.3%sy, 0.0%ni, 64.7%id, 0.0%wa, 1.0%hi, 29.7%si, 0.0%st
Mem: 12273740k total, 9946612k used, 2327128k free, 5475768k buffers
Swap: 8385920k total, 0k used, 8385920k free, 2030764k cached

 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
26770 nobody 15 0 91076 32m 1196 S 9.0 0.3 997:15.34 proxy-nginx
26769 nobody 15 0 91936 33m 1196 S 7.7 0.3 1043:52 proxy-nginx
26778 nobody 15 0 91560 33m 1196 R 7.7 0.3 921:28.59 proxy-nginx
26772 nobody 15 0 92164 33m 1196 S 7.3 0.3 920:47.31 proxy-nginx
26773 nobody 15 0 91808 33m 1200 S 7.3 0.3 930:41.83 proxy-nginx
26776 nobody 15 0 92332 33m 1196 S 7.3 0.3 934:57.61 proxy-nginx
26777 nobody 15 0 92504 33m 1196 S 7.3 0.3 926:08.37 proxy-nginx
26771 nobody 16 0 92176 33m 1196 S 7.0 0.3 955:29.21 proxy-nginx
26775 nobody 15 0 91688 33m 1200 R 7.0 0.3 939:46.60 proxy-nginx
26774 nobody 15 0 92044 33m 1196 S 6.3 0.3 842:52.49 proxy-nginx
26780 nobody 15 0 90772 32m 1200 S 6.0 0.3 696:50.17 proxy-nginx
26779 nobody 15 0 90380 31m 1196 S 5.7 0.3 832:16.64 proxy-nginx

centos5.4 + nginx-1.2.7

Tasks: 276 total, 6 running, 268 sleeping, 2 stopped, 0 zombie
%Cpu0 : 8.2 us, 8.9 sy, 0.0 ni, 73.4 id, 0.3 wa, 0.0 hi, 9.2 si, 0.0 st
%Cpu1 : 7.9 us, 7.6 sy, 0.0 ni, 75.2 id, 0.0 wa, 0.0 hi, 9.3 si, 0.0 st
%Cpu2 : 6.7 us, 8.1 sy, 0.0 ni, 77.4 id, 0.0 wa, 0.0 hi, 7.8 si, 0.0 st
%Cpu3 : 7.4 us, 8.0 sy, 0.0 ni, 79.6 id, 0.0 wa, 0.0 hi, 5.0 si, 0.0 st
%Cpu4 : 6.4 us, 7.1 sy, 0.0 ni, 78.8 id, 0.0 wa, 0.0 hi, 7.8 si, 0.0 st
%Cpu5 : 7.0 us, 7.3 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 7.3 si, 0.0 st
%Cpu6 : 27.3 us, 8.7 sy, 0.0 ni, 59.3 id, 0.0 wa, 0.0 hi, 4.7 si, 0.0 st
%Cpu7 : 8.3 us, 8.0 sy, 0.0 ni, 75.1 id, 0.0 wa, 0.0 hi, 8.7 si, 0.0 st
%Cpu8 : 6.9 us, 8.6 sy, 0.0 ni, 78.7 id, 0.0 wa, 0.0 hi, 5.8 si, 0.0 st
%Cpu9 : 15.4 us, 8.6 sy, 0.0 ni, 69.2 id, 0.0 wa, 0.0 hi, 6.8 si, 0.0 st
%Cpu10 : 23.9 us, 8.8 sy, 0.0 ni, 64.0 id, 0.0 wa, 0.0 hi, 3.4 si, 0.0 st
%Cpu11 : 40.5 us, 9.1 sy, 0.0 ni, 41.2 id, 0.0 wa, 0.0 hi, 9.1 si, 0.0 st
KiB Mem : 12126740 total, 159380 free, 607028 used, 11360332 buff/cache
KiB Swap: 8388604 total, 8388604 free, 0 used.
10895304 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6876 root 20 0 4604 864 496 R 85.4 0.0 0:52.28 gzip12806 nobody 20 0 61508 35056 1272 R 18.3 0.3 282:22.99 nginx12810 nobody 20 0 61044 34596 1272 S 17.6 0.3 292:06.85 nginx12799 nobody 20 0 60768 34240 1272 R 16.9 0.3 275:36.83 nginx12801 nobody 20 0 60764 34204 1272 S 15.6 0.3 277:38.95 nginx12802 nobody 20 0 61468 34964 1272 S 15.3 0.3 226:20.06 nginx12800 nobody 20 0 60716 34268 1272 R 14.6 0.3 276:13.54 nginx12805 nobody 20 0 61424 34976 1272 S 14.6 0.3 241:46.36 nginx12807 nobody 20 0 61128 34680 1272 S 14.6 0.3 245:35.70 nginx12808 nobody 20 0 60640 34096 1272 R 14.6 0.3 255:17.38 nginx12809 nobody 20 0 60716 34268 1272 S 14.6 0.3 244:02.67 nginx12803 nobody 20 0 61396 34908 1272 S 14.3 0.3 284:18.39 nginx12804 nobody 20 0 60568 34120 1272 S 14.0 0.3 282:03.64 nginx centos7.1 + nginx-1.8.0 From the statistics, we can see that updated environment cpu-load average is almostly doubled previous environment. Of course the sysctl.conf and nginx-conf are not changed after updating environment. It confused me and I dont know why. Anybody meet the problem? Please give me a help, thanks. -----Regards,Xiaokai -------------- next part -------------- An HTML attachment was scrubbed... URL: From ar at xlrs.de Mon Nov 2 08:24:50 2015 From: ar at xlrs.de (Axel Rosenski) Date: Mon, 02 Nov 2015 09:24:50 +0100 Subject: Set up version control of local files with the help of nginx server In-Reply-To: References: <20151031084037.GQ3095@daoine.org> Message-ID: <1663195.xoWfhnJf0c@pollux> Hey, Am Sunday 01 November 2015, 13:00:58 schrieb Scott E. MacKenzie: > Suggest looking at gitlab: > https://about.gitlab.com/downloads/ > > It works well with Nginx and meets most needs for version control. If you > stick with the standard bundle and plugins then your upgrade management for > gitlab will remain clean and simple. I can confirm this! You can use Nginx directly as described in most howto or as reverse proxy if you decide to use virtual machines/containers for gitlab. Regards, Axel From nginx-forum at nginx.us Mon Nov 2 15:24:27 2015 From: nginx-forum at nginx.us (de_nginx_noob) Date: Mon, 02 Nov 2015 10:24:27 -0500 Subject: Custom Module Directive is Duplicate? Error in conf? In-Reply-To: <20151023151245.GN48365@mdounin.ru> References: <20151023151245.GN48365@mdounin.ru> Message-ID: <13fb8dcd60193936bdf08c7c388fab8a.NginxMailingListEnglish@forum.nginx.org> I understand now. Thank You! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262425,262566#msg-262566 From nginx-forum at nginx.us Mon Nov 2 22:37:00 2015 From: nginx-forum at nginx.us (jstangroome) Date: Mon, 02 Nov 2015 17:37:00 -0500 Subject: log_subrequest and auth_request Message-ID: <60cb40367697c6d7419de24f889e059c.NginxMailingListEnglish@forum.nginx.org> Hi, I have set `log_subrequest on;` at the http level and I am using to `auth_request` to a location that does a `proxy_pass` but I am not seeing the details of the auth subrequest in the access.log. Should this work? Conf: > log_subrequest on; > server{ > location / { > auth_request /authorize; > # ... 
> } > location /authorize { > proxy_pass http://authserver/doauth; > } > } Regards, Jason Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262567,262567#msg-262567 From steve at greengecko.co.nz Tue Nov 3 01:29:08 2015 From: steve at greengecko.co.nz (steve) Date: Tue, 3 Nov 2015 14:29:08 +1300 Subject: different centos version + different nginx version make different performance In-Reply-To: References: Message-ID: <56380DE4.3090904@greengecko.co.nz> Hi, On 11/02/2015 08:48 PM, Xiaokai Wang wrote: > hi all, > > Sorry to bother you again, I find pictures cannt show clearly, so > copy statistics below. thanks again. > > I find a performace decreasing when I update centos5.4 to centos7.1 > and the same time nginx-1.2.7 to nginx-1.8.0. > > Exacted statistics as below: > > Tasks: 177 total, 3 running, 173 sleeping, 0 stopped, 1 zombie > Cpu0 : 3.7%us, 3.3%sy, 0.0%ni, 92.4%id, 0.0%wa, 0.0%hi, 0.7%si, > 0.0%st > Cpu1 : 4.7%us, 3.7%sy, 0.0%ni, 91.0%id, 0.0%wa, 0.0%hi, 0.7%si, > 0.0%st > Cpu2 : 3.6%us, 3.0%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.7%si, > 0.0%st > Cpu3 : 4.0%us, 3.0%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.3%si, > 0.0%st > Cpu4 : 3.7%us, 3.0%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, > 0.0%st > Cpu5 : 3.3%us, 2.6%sy, 0.0%ni, 93.4%id, 0.0%wa, 0.0%hi, 0.7%si, > 0.0%st > Cpu6 : 3.3%us, 3.3%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, > 0.0%st > Cpu7 : 4.0%us, 2.7%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.7%si, > 0.0%st > Cpu8 : 4.0%us, 2.7%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, > 0.0%st > Cpu9 : 4.0%us, 3.7%sy, 0.0%ni, 91.3%id, 0.7%wa, 0.0%hi, 0.3%si, > 0.0%st > Cpu10 : 2.7%us, 2.0%sy, 0.0%ni, 82.4%id, 0.0%wa, 0.7%hi, 12.3%si, > 0.0%st > Cpu11 : 2.3%us, 2.3%sy, 0.0%ni, 64.7%id, 0.0%wa, 1.0%hi, 29.7%si, > 0.0%st > Mem: 12273740k total, 9946612k used, 2327128k free, 5475768k buffers > Swap: 8385920k total, 0k used, 8385920k free, 2030764k cached > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > 26770 nobody 15 0 91076 32m 1196 S 9.0 0.3 997:15.34 proxy-nginx > 26769 nobody 15 0 91936 33m 1196 S 7.7 0.3 1043:52 proxy-nginx > 26778 nobody 15 0 91560 33m 1196 R 7.7 0.3 921:28.59 proxy-nginx > 26772 nobody 15 0 92164 33m 1196 S 7.3 0.3 920:47.31 proxy-nginx > 26773 nobody 15 0 91808 33m 1200 S 7.3 0.3 930:41.83 proxy-nginx > 26776 nobody 15 0 92332 33m 1196 S 7.3 0.3 934:57.61 proxy-nginx > 26777 nobody 15 0 92504 33m 1196 S 7.3 0.3 926:08.37 proxy-nginx > 26771 nobody 16 0 92176 33m 1196 S 7.0 0.3 955:29.21 proxy-nginx > 26775 nobody 15 0 91688 33m 1200 R 7.0 0.3 939:46.60 proxy-nginx > 26774 nobody 15 0 92044 33m 1196 S 6.3 0.3 842:52.49 proxy-nginx > 26780 nobody 15 0 90772 32m 1200 S 6.0 0.3 696:50.17 proxy-nginx > 26779 nobody 15 0 90380 31m 1196 S 5.7 0.3 832:16.64 proxy-nginx > centos5.4 + nginx-1.2.7 > > > Tasks: 276 total, 6 running, 268 sleeping, 2 stopped, 0 zombie > %Cpu0 : 8.2 us, 8.9 sy, 0.0 ni, 73.4 id, 0.3 wa, 0.0 hi, 9.2 > si, 0.0 st > %Cpu1 : 7.9 us, 7.6 sy, 0.0 ni, 75.2 id, 0.0 wa, 0.0 hi, 9.3 > si, 0.0 st > %Cpu2 : 6.7 us, 8.1 sy, 0.0 ni, 77.4 id, 0.0 wa, 0.0 hi, 7.8 > si, 0.0 st > %Cpu3 : 7.4 us, 8.0 sy, 0.0 ni, 79.6 id, 0.0 wa, 0.0 hi, 5.0 > si, 0.0 st > %Cpu4 : 6.4 us, 7.1 sy, 0.0 ni, 78.8 id, 0.0 wa, 0.0 hi, 7.8 > si, 0.0 st > %Cpu5 : 7.0 us, 7.3 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 7.3 > si, 0.0 st > %Cpu6 : 27.3 us, 8.7 sy, 0.0 ni, 59.3 id, 0.0 wa, 0.0 hi, 4.7 > si, 0.0 st > %Cpu7 : 8.3 us, 8.0 sy, 0.0 ni, 75.1 id, 0.0 wa, 0.0 hi, 8.7 > si, 0.0 st > %Cpu8 : 6.9 us, 8.6 sy, 0.0 ni, 78.7 id, 0.0 wa, 0.0 hi, 5.8 > si, 0.0 st > %Cpu9 : 15.4 us, 8.6 
sy, 0.0 ni, 69.2 id, 0.0 wa, 0.0 hi, 6.8 > si, 0.0 st > %Cpu10 : 23.9 us, 8.8 sy, 0.0 ni, 64.0 id, 0.0 wa, 0.0 hi, 3.4 > si, 0.0 st > %Cpu11 : 40.5 us, 9.1 sy, 0.0 ni, 41.2 id, 0.0 wa, 0.0 hi, 9.1 > si, 0.0 st > KiB Mem : 12126740 total, 159380 free, 607028 used, 11360332 buff/cache > KiB Swap: 8388604 total, 8388604 free, 0 used. 10895304 avail Mem > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > 6876 root 20 0 4604 864 496 R 85.4 0.0 0:52.28 gzip > 12806 nobody 20 0 61508 35056 1272 R 18.3 0.3 282:22.99 nginx > 12810 nobody 20 0 61044 34596 1272 S 17.6 0.3 292:06.85 nginx > 12799 nobody 20 0 60768 34240 1272 R 16.9 0.3 275:36.83 nginx > 12801 nobody 20 0 60764 34204 1272 S 15.6 0.3 277:38.95 nginx > 12802 nobody 20 0 61468 34964 1272 S 15.3 0.3 226:20.06 nginx > 12800 nobody 20 0 60716 34268 1272 R 14.6 0.3 276:13.54 nginx > 12805 nobody 20 0 61424 34976 1272 S 14.6 0.3 241:46.36 nginx > 12807 nobody 20 0 61128 34680 1272 S 14.6 0.3 245:35.70 nginx > 12808 nobody 20 0 60640 34096 1272 R 14.6 0.3 255:17.38 nginx > 12809 nobody 20 0 60716 34268 1272 S 14.6 0.3 244:02.67 nginx > 12803 nobody 20 0 61396 34908 1272 S 14.3 0.3 284:18.39 nginx > 12804 nobody 20 0 60568 34120 1272 S 14.0 0.3 282:03.64 nginx > centos7.1 + nginx-1.8.0 > > > From the statistics, we can see that updated environment > cpu-load average is almostly doubled previous environment. > > Of course the sysctl.conf and nginx-conf are not changed after > updating environment. It confused me and I dont know why. > > Anybody meet the problem? Please give me a help, thanks. > > > > ----- > Regards, > Xiaokai > It's not really a fair check is it? New server is running 50% more processes, and most importantly is using 600MB instead of 10GB memory. I'd start off looking at where the memory is used on the old server. I'd just report CPU usage stats, rather than per cpu. nginx doesn't really use much cpu at all, but server-side programming languages ( PHP especially from experience ) do. It would probably be worth installing a monitoring package such as munin or cacti to get a better picture of what's going on. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... URL: From benji.taylor at distilnetworks.com Tue Nov 3 05:40:08 2015 From: benji.taylor at distilnetworks.com (Benji Taylor) Date: Mon, 2 Nov 2015 21:40:08 -0800 Subject: Problem with On The Fly Upgrade Message-ID: I am currently experiencing an issue using the on the fly upgrade feature of nginx. I am attempting the upgrade going from 1.4.1 to 1.8.1 on Ubuntu 12.04. Here is the process I am utilizing: Here is initial look at my running nginx: /etc/nginx# ps aux | grep nginx root 18779 0.5 0.6 416156 26756 ? Ss 05:24 0:00 nginx: master process /usr/sbin/nginx www-data 18780 0.0 0.8 424648 35528 ? S 05:24 0:00 nginx: worker process www-data 18781 0.0 0.8 424648 35284 ? S 05:24 0:00 nginx: worker process www-data 18782 0.0 0.8 424648 35284 ? S 05:24 0:00 nginx: worker process www-data 18783 0.0 0.8 424648 35284 ? S 05:24 0:00 nginx: worker process www-data 18784 0.0 0.6 416156 27084 ? S 05:24 0:00 nginx: cache manager process www-data 18785 0.0 0.6 416156 27084 ? 
S 05:24 0:00 nginx: cache loader process root 18861 0.0 0.0 9384 940 pts/2 S+ 05:25 0:00 grep --color=auto nginx First command in the process is run: :/etc/nginx# sudo kill -s USR2 18779 result: /etc/nginx# ps aux | grep nginx root 18779 0.0 0.6 416156 26756 ? Ss 05:24 0:00 nginx: master process /usr/sbin/nginx www-data 18780 0.0 0.8 424648 35528 ? S 05:24 0:00 nginx: worker process www-data 18781 0.0 0.8 424648 35284 ? S 05:24 0:00 nginx: worker process www-data 18782 0.0 0.8 424648 35284 ? S 05:24 0:00 nginx: worker process www-data 18783 0.0 0.8 424648 35284 ? S 05:24 0:00 nginx: worker process www-data 18784 0.0 0.6 416156 27084 ? S 05:24 0:00 nginx: cache manager process root 23762 8.0 0.8 416156 32908 ? S 05:27 0:00 nginx: master process /usr/sbin/nginx www-data 23763 6.0 0.8 424648 35280 ? S 05:27 0:00 nginx: worker process www-data 23764 7.0 0.8 424648 35280 ? S 05:27 0:00 nginx: worker process www-data 23765 4.0 0.8 424648 35280 ? S 05:27 0:00 nginx: worker process www-data 23766 4.0 0.8 424648 35280 ? S 05:27 0:00 nginx: worker process www-data 23767 0.0 0.6 416156 27084 ? S 05:27 0:00 nginx: cache manager process www-data 23768 0.0 0.6 416156 27084 ? S 05:27 0:00 nginx: cache loader process root 23815 0.0 0.0 9384 944 pts/2 S+ 05:27 0:00 grep --color=auto nginx Second Command is run: /etc/nginx# sudo kill -s WINCH 18779 Result: /etc/nginx# ps aux | grep nginx root 18779 0.0 0.6 416156 26756 ? Ss 05:24 0:00 nginx: master process /usr/sbin/nginx root 23762 0.1 0.8 416156 32908 ? S 05:27 0:00 nginx: master process /usr/sbin/nginx www-data 23763 0.0 0.8 424648 35524 ? S 05:27 0:00 nginx: worker process www-data 23764 0.0 0.8 424648 35280 ? S 05:27 0:00 nginx: worker process www-data 23765 0.0 0.8 424648 35280 ? S 05:27 0:00 nginx: worker process www-data 23766 0.0 0.8 424648 35280 ? S 05:27 0:00 nginx: worker process www-data 23767 0.0 0.6 416156 27084 ? S 05:27 0:00 nginx: cache manager process root 26172 0.0 0.0 9384 944 pts/2 S+ 05:28 0:00 grep --color=auto nginx Everything up through this point looks like it works as intended. However the last step when shutting down the old master process. I am getting unexpected behavior: /etc/nginx# sudo kill -s QUIT 18779 Result: /etc/nginx# ps aux | grep nginx root 31133 0.0 0.0 9384 944 pts/2 S+ 05:31 0:00 grep --color=auto nginx The QUIT signal seems to be killing both the old master pid as well as the new master and all of the new master's workers. I also do a pstree and it looks like the old master is the parent for the new master and its workers. Am I doing anything wrong in regards to the on the fly upgrade process which is causing this behavior and or could something else on the box cause this sort of thing to occur. Benji -------------- next part -------------- An HTML attachment was scrubbed... URL: From spol.pl at gmail.com Tue Nov 3 05:53:31 2015 From: spol.pl at gmail.com (SpolFirst SpolMiddle SpolLast) Date: Mon, 2 Nov 2015 21:53:31 -0800 Subject: resolver does not seem to work with proxy_pass Message-ID: I have setup nginx (1.9.5) as a proxy for few tomcat servers through proxy_pass directive. The proxy_pass points to a domain name (aws route53). The proxying works okay, but domain name is resolved only during nginx startup/reload. I have setup the resolver to force resolving domain name every few seconds, but it does not work. Here is my config. Any thoughts on why this does not work? 
server { listen 443; server_name localhost; ssl on; resolver 10.0.0.2 valid=10s; ssl_certificate /etc/nginx/certs/ssl-bundle3.crt; ssl_certificate_key /etc/nginx/certs/chewie.key; ssl_dhparam /etc/nginx/certs/dhparam.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_session_cache shared:SSL:20m; ssl_session_timeout 180m; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"; location / { resolver_timeout 30s; resolver 10.0.0.2 valid=10s; set $target "http://abctest.hello.world:80"; proxy_pass $target; proxy_cache_bypass true; proxy_no_cache true; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_ssl_session_reuse on; error_log /var/log/nginx/error.log debug; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 3 08:57:17 2015 From: nginx-forum at nginx.us (Synchro) Date: Tue, 03 Nov 2015 03:57:17 -0500 Subject: No ALPN, only NPN with http2 Message-ID: <3f69ffa8807a860ce4e05fef7eaef0b3.NginxMailingListEnglish@forum.nginx.org> I'm attempting to deploy http2 with nginx 1.9.6 using teward's Ubuntu packages (https://launchpad.net/~nginx/+archive/ubuntu/development). I've got openssl 1.0.2d on both client and server and I'm testing with Chrome Canary and Firefox 41.0.2. The SSL config has a Qualys A+ rating and works perfectly with the previous SPDY config in nginx 1.9.4. The only config I changed in the nginx upgrade is 'spdy' to 'http2' in the listen directive. I can see that Firefox is negotiating and reporting a successful h2 connection, but Chrome is not. Testing with openssl shows me that it's using NPN but not ALPN, so I assume that Chrome Canary has already dropped NPN support and is thus unable to negotiate h2. With NPN: `echo | openssl s_client -nextprotoneg h2 -connect www.synchromedia.co.uk:443` Next protocol: (1) h2 No ALPN negotiated With ALPN: `echo | openssl s_client -alpn h2 -connect www.synchromedia.co.uk:443` No ALPN negotiated Why would this be happening? Do I need to do something else to enable ALPN in nginx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262573,262573#msg-262573 From luky-37 at hotmail.com Tue Nov 3 09:04:18 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 3 Nov 2015 10:04:18 +0100 Subject: No ALPN, only NPN with http2 In-Reply-To: <3f69ffa8807a860ce4e05fef7eaef0b3.NginxMailingListEnglish@forum.nginx.org> References: <3f69ffa8807a860ce4e05fef7eaef0b3.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I'm attempting to deploy http2 with nginx 1.9.6 using teward's Ubuntu > packages (https://launchpad.net/~nginx/+archive/ubuntu/development). I've > got openssl 1.0.2d on both client and server and I'm testing with Chrome > Canary and Firefox 41.0.2. Post "nginx -V" output. From nginx-forum at nginx.us Tue Nov 3 09:36:03 2015 From: nginx-forum at nginx.us (Synchro) Date: Tue, 03 Nov 2015 04:36:03 -0500 Subject: No ALPN, only NPN with http2 In-Reply-To: References: Message-ID: <37f44f5358199fd8af868073fd2bb582.NginxMailingListEnglish@forum.nginx.org> Ah, this is probably the problem: built with OpenSSL 1.0.1f 6 Jan 2014 I'll open a ticket on the package repo to see if it can be built with a later version. 
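A minimal sketch of building nginx against an OpenSSL 1.0.2 source tree so that ALPN becomes available (the point Lukas makes below); the version numbers, download URLs and paths here are illustrative assumptions, not the exact packages used in this thread:

    # Unpack an OpenSSL 1.0.2 source tree next to the nginx sources (version assumed)
    wget https://www.openssl.org/source/openssl-1.0.2d.tar.gz
    tar xzf openssl-1.0.2d.tar.gz

    wget http://nginx.org/download/nginx-1.9.6.tar.gz
    tar xzf nginx-1.9.6.tar.gz && cd nginx-1.9.6

    # --with-openssl makes the nginx build compile and link this OpenSSL itself,
    # so the system 1.0.1 library (NPN only, no ALPN) is never used
    ./configure --with-http_ssl_module --with-http_v2_module \
                --with-openssl=../openssl-1.0.2d
    make && sudo make install

    # Afterwards "nginx -V" should report OpenSSL 1.0.2d, and
    # "openssl s_client -alpn h2 -connect example.com:443" should negotiate h2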
Oddly, I can't find any mention of ALPN support in the openssl release notes at all: https://www.openssl.org/news/changelog.html This is the complete output: nginx version: nginx/1.9.6 built with OpenSSL 1.0.1f 6 Jan 2014 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_secure_link_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --with-stream --with-stream_ssl_module --with-threads --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/headers-more-nginx-module --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-auth-pam --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-cache-purge --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-dav-ext-module --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-development-kit --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-echo --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/ngx-fancyindex --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-http-push --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-lua --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-upload-progress --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/nginx-upstream-fair --add-module=/build/nginx-lbqvlX/nginx-1.9.6/debian/modules/ngx_http_substitutions_filter_module Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262573,262575#msg-262575 From luky-37 at hotmail.com Tue Nov 3 09:48:13 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 3 Nov 2015 10:48:13 +0100 Subject: No ALPN, only NPN with http2 In-Reply-To: <37f44f5358199fd8af868073fd2bb582.NginxMailingListEnglish@forum.nginx.org> References: , <37f44f5358199fd8af868073fd2bb582.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Ah, this is probably the problem: > > built with OpenSSL 1.0.1f 6 Jan 2014 It is. ALPN is supported only in the 1.0.2 branch. Lukas From steve at greengecko.co.nz Tue Nov 3 19:05:58 2015 From: steve at greengecko.co.nz (steve) Date: Wed, 4 Nov 2015 08:05:58 +1300 Subject: http2 problem?? Message-ID: <56390596.5080209@greengecko.co.nz> Hi folks, I'm having a problem with the configuration of my site... basically, I use a default server config to redirect traffic to my www. site under https, but the http: redirection doesn't work. 
Here's the redirect server config: server { listen 101.0.108.116:80 default http2; listen 127.0.1.1:80 http2; listen [2401:fc00:0:106::6]:80 default http2; listen 101.0.108.116:443 ssl default http2; listen [2401:fc00:0:106::6]:443 ssl default http2; ssl_certificate /etc/nginx/ssl/wildcard.greengecko.co.nz.crt; ssl_certificate_key /etc/nginx/ssl/wildcard.greengecko.co.nz.key; return 301 https://www.greengecko.co.nz$request_uri; } and the return from interrogation of the https: site is fine: $ curl --insecure -I https://greengecko.co.nz HTTP/1.1 301 Moved Permanently Server: nginx/1.9.6 Date: Tue, 03 Nov 2015 19:02:58 GMT Content-Type: text/html Content-Length: 184 Connection: keep-alive Location: https://www.greengecko.co.nz/ But with the http: site $ curl -I http://greengecko.co.nz ??????? the return string, when dumped, looks like this: $ od -c /tmp/a 0000000 \0 \0 022 004 \0 \0 \0 \0 \0 \0 003 \0 \0 \0 200 \0 0000020 004 177 377 377 377 \0 005 \0 377 377 377 \0 \0 004 \b \0 0000040 \0 \0 \0 \0 177 377 \0 \0 \0 \0 \b \a \0 \0 \0 \0 0000060 \0 \0 \0 \0 \0 \0 \0 \0 001 0000071 This happened both with 1.9.5, and the current 1.9.6. Can anyone shed any light onto this? Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From r1ch+nginx at teamliquid.net Tue Nov 3 19:22:27 2015 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 3 Nov 2015 20:22:27 +0100 Subject: http2 problem?? In-Reply-To: <56390596.5080209@greengecko.co.nz> References: <56390596.5080209@greengecko.co.nz> Message-ID: You've set port 80 to listen with http2, but you're not passing --http2 to curl so you're getting back an unexpected binary http2 response. Due to lack of ALPN I suggest you don't use http2 on port 80. On Tue, Nov 3, 2015 at 8:05 PM, steve wrote: > Hi folks, > > I'm having a problem with the configuration of my site... basically, I use > a default server config to redirect traffic to my www. site under https, > but the http: redirection doesn't work. > > Here's the redirect server config: > > server { > listen 101.0.108.116:80 default http2; > listen 127.0.1.1:80 http2; > listen [2401:fc00:0:106::6]:80 default http2; > listen 101.0.108.116:443 ssl default http2; > listen [2401:fc00:0:106::6]:443 ssl default http2; > > ssl_certificate /etc/nginx/ssl/wildcard.greengecko.co.nz.crt; > ssl_certificate_key /etc/nginx/ssl/wildcard.greengecko.co.nz.key; > > return 301 https://www.greengecko.co.nz$request_uri; > } > > and the return from interrogation of the https: site is fine: > > $ curl --insecure -I https://greengecko.co.nz > HTTP/1.1 301 Moved Permanently > Server: nginx/1.9.6 > Date: Tue, 03 Nov 2015 19:02:58 GMT > Content-Type: text/html > Content-Length: 184 > Connection: keep-alive > Location: https://www.greengecko.co.nz/ > > But with the http: site > > $ curl -I http://greengecko.co.nz ? ?????? > the return string, when dumped, looks like this: > > $ od -c /tmp/a > 0000000 \0 \0 022 004 \0 \0 \0 \0 \0 \0 003 \0 \0 \0 200 \0 > 0000020 004 177 377 377 377 \0 005 \0 377 377 377 \0 \0 004 \b \0 > 0000040 \0 \0 \0 \0 177 377 \0 \0 \0 \0 \b \a \0 \0 \0 \0 > 0000060 \0 \0 \0 \0 \0 \0 \0 \0 001 > 0000071 > > This happened both with 1.9.5, and the current 1.9.6. > > Can anyone shed any light onto this? 
> > Cheers, > > Steve > > > > > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 3 19:33:35 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 03 Nov 2015 14:33:35 -0500 Subject: http2 problem?? In-Reply-To: <56390596.5080209@greengecko.co.nz> References: <56390596.5080209@greengecko.co.nz> Message-ID: <3bebbfa17f2ff2d3fff273343cca8d1e.NginxMailingListEnglish@forum.nginx.org> See http://mailman.nginx.org/pipermail/nginx/2015-September/048680.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262577,262579#msg-262579 From steve at greengecko.co.nz Tue Nov 3 19:51:08 2015 From: steve at greengecko.co.nz (steve) Date: Wed, 4 Nov 2015 08:51:08 +1300 Subject: http2 problem?? In-Reply-To: References: <56390596.5080209@greengecko.co.nz> Message-ID: <5639102C.90009@greengecko.co.nz> Many thanks. On 11/04/2015 08:22 AM, Richard Stanway wrote: > You've set port 80 to listen with http2, but you're not passing > --http2 to curl so you're getting back an unexpected binary http2 > response. Due to lack of ALPN I suggest you don't use http2 on port 80. > > On Tue, Nov 3, 2015 at 8:05 PM, steve > wrote: > > Hi folks, > > I'm having a problem with the configuration of my site... > basically, I use a default server config to redirect traffic to my > www. site under https, but the http: redirection doesn't work. > > Here's the redirect server config: > > server { > listen 101.0.108.116:80 default > http2; > listen 127.0.1.1:80 http2; > listen [2401:fc00:0:106::6]:80 default http2; > listen 101.0.108.116:443 ssl > default http2; > listen [2401:fc00:0:106::6]:443 ssl default http2; > > ssl_certificate /etc/nginx/ssl/wildcard.greengecko.co.nz.crt; > ssl_certificate_key > /etc/nginx/ssl/wildcard.greengecko.co.nz.key; > > return 301 https://www.greengecko.co.nz$request_uri; > } > > and the return from interrogation of the https: site is fine: > > $ curl --insecure -I https://greengecko.co.nz > HTTP/1.1 301 Moved Permanently > Server: nginx/1.9.6 > Date: Tue, 03 Nov 2015 19:02:58 GMT > Content-Type: text/html > Content-Length: 184 > Connection: keep-alive > Location: https://www.greengecko.co.nz/ > > But with the http: site > > $ curl -I http://greengecko.co.nz ? ?????? > the return string, when dumped, looks like this: > > $ od -c /tmp/a > 0000000 \0 \0 022 004 \0 \0 \0 \0 \0 \0 003 \0 \0 \0 200 \0 > 0000020 004 177 377 377 377 \0 005 \0 377 377 377 \0 \0 004 \b \0 > 0000040 \0 \0 \0 \0 177 377 \0 \0 \0 \0 \b \a \0 \0 \0 \0 > 0000060 \0 \0 \0 \0 \0 \0 \0 \0 001 > 0000071 > > This happened both with 1.9.5, and the current 1.9.6. > > Can anyone shed any light onto this? > > Cheers, > > Steve > > > > > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I've stripped http2 off port 80 and redirects are now working fine. 
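For anyone finding this thread later, a sketch of the working layout with http2 left only on the TLS listeners; the addresses and certificate paths are copied from the config quoted above, so treat this as an illustration rather than the exact final file:

    server {
        # plain HTTP: no "http2" here, so HTTP/1.x clients such as curl
        # get the expected 301 instead of a binary HTTP/2 frame
        listen 101.0.108.116:80 default;
        listen 127.0.1.1:80;
        listen [2401:fc00:0:106::6]:80 default;

        # HTTPS: h2 is negotiated over TLS (NPN/ALPN), so http2 is fine here
        listen 101.0.108.116:443 ssl default http2;
        listen [2401:fc00:0:106::6]:443 ssl default http2;

        ssl_certificate /etc/nginx/ssl/wildcard.greengecko.co.nz.crt;
        ssl_certificate_key /etc/nginx/ssl/wildcard.greengecko.co.nz.key;

        return 301 https://www.greengecko.co.nz$request_uri;
    }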
-- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Tue Nov 3 19:52:20 2015 From: steve at greengecko.co.nz (steve) Date: Wed, 4 Nov 2015 08:52:20 +1300 Subject: http2 problem?? In-Reply-To: <3bebbfa17f2ff2d3fff273343cca8d1e.NginxMailingListEnglish@forum.nginx.org> References: <56390596.5080209@greengecko.co.nz> <3bebbfa17f2ff2d3fff273343cca8d1e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56391074.1090608@greengecko.co.nz> On 11/04/2015 08:33 AM, itpp2012 wrote: > See http://mailman.nginx.org/pipermail/nginx/2015-September/048680.html > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262577,262579#msg-262579 > > Thanks. I've taken the easier way out and removed http2 from http traffic and all is well. -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Tue Nov 3 20:27:01 2015 From: nginx-forum at nginx.us (kodeninja) Date: Tue, 03 Nov 2015 15:27:01 -0500 Subject: nginx rewrite url without causing redirect and proxy to backend app Message-ID: <75e67b40d4113bc34ce2ce5cd19c567b.NginxMailingListEnglish@forum.nginx.org> Howdy, everyone! I'm trying to figure out if there's a way to achieve this with nginx: . I didn't get response on the SO question yet, so trying my luck here :). Please let me know if this is possible. Thanks a lot! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262583,262583#msg-262583 From gfrankliu at gmail.com Tue Nov 3 21:23:25 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 3 Nov 2015 13:23:25 -0800 Subject: ignore bad conf file Message-ID: Hi all, Is there a way to configure nginx to ignore bad conf files? My master nginx.conf has a include elsewhere/*.conf towards the end. Other people and programs can drop new configs into "elsewhere" directory. nginx reloads and all is great. Sometimes if one guy drops a conf file with a typo or syntax error, nginx will refuse to reload all other configs afterwards. Is it possible to configure nginx to skip the bad file and move on to next *.conf? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Nov 4 02:12:42 2015 From: nginx-forum at nginx.us (wangyiran125) Date: Tue, 03 Nov 2015 21:12:42 -0500 Subject: nginx install error Message-ID: <4b6aa6efad63fbb0f9c15a258661b28d.NginxMailingListEnglish@forum.nginx.org> i want to install nginx plus .and i try these steps step by step Setup instruction for RHEL6/CentOS6/Oracle Linux 6 Create /etc/ssl/nginx/ directory Move CA.crt, nginx-repo.key and nginx-repo.crt files to /etc/ssl/nginx/ directory Copy nginx-plus-6.repo file to /etc/yum.repos.d/ directory Run yum install nginx-plus to install nginx-plus package. Please keep in mind that old nginx package (if any) will be replaced. In order to upgrade from the previous version of nginx-plus, run yum upgrade nginx-plus but when i type yum install nginx-plus,it report Repo nginx-plus forced skip_if_unavailable=True due to: /etc/ssl/nginx/nginx-repo.crt Repo nginx-plus forced skip_if_unavailable=True due to: /etc/ssl/nginx/nginx-repo.key https://plus-pkgs.nginx.com/centos/6/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "NSS: client certificate not found: /etc/ssl/nginx/nginx-repo.crt" Trying other mirror. 
Setting up Install Process No package nginx-plus available. Error: Nothing to do how to solve it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262587,262587#msg-262587 From zipper1790 at gmail.com Wed Nov 4 03:03:46 2015 From: zipper1790 at gmail.com (Erick Ocrospoma) Date: Tue, 3 Nov 2015 22:03:46 -0500 Subject: nginx install error In-Reply-To: <4b6aa6efad63fbb0f9c15a258661b28d.NginxMailingListEnglish@forum.nginx.org> References: <4b6aa6efad63fbb0f9c15a258661b28d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Seems like a gpg repo issue. Sorry, how did you get that repo? On 3 November 2015 at 21:12, wangyiran125 wrote: > i want to install nginx plus .and i try these steps step by step > Setup instruction for RHEL6/CentOS6/Oracle Linux 6 > Create /etc/ssl/nginx/ directory > Move CA.crt, nginx-repo.key and nginx-repo.crt files to /etc/ssl/nginx/ > directory > Copy nginx-plus-6.repo file to /etc/yum.repos.d/ directory > Run yum install nginx-plus to install nginx-plus package. Please keep in > mind that old nginx package (if any) will be replaced. > In order to upgrade from the previous version of nginx-plus, run yum > upgrade > nginx-plus > > but when i type yum install nginx-plus,it report > Repo nginx-plus forced skip_if_unavailable=True due to: > /etc/ssl/nginx/nginx-repo.crt > Repo nginx-plus forced skip_if_unavailable=True due to: > /etc/ssl/nginx/nginx-repo.key > https://plus-pkgs.nginx.com/centos/6/x86_64/repodata/repomd.xml: [Errno > 14] > PYCURL ERROR 22 - "NSS: client certificate not found: > /etc/ssl/nginx/nginx-repo.crt" > Trying other mirror. > Setting up Install Process > No package nginx-plus available. > Error: Nothing to do > > how to solve it? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,262587,262587#msg-262587 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ~ Happy install ! Erick. --- IRC : zerick Blog : http://zerick.me About : http://about.me/zerick Linux User ID : 549567 -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Wed Nov 4 03:43:46 2015 From: me at myconan.net (nanaya) Date: Wed, 04 Nov 2015 12:43:46 +0900 Subject: nginx rewrite url without causing redirect and proxy to backend app In-Reply-To: <75e67b40d4113bc34ce2ce5cd19c567b.NginxMailingListEnglish@forum.nginx.org> References: <75e67b40d4113bc34ce2ce5cd19c567b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1446608626.1210323.428530529.7E8E6898@webmail.messagingengine.com> On Wed, Nov 4, 2015, at 05:27 AM, kodeninja wrote: > Howdy, everyone! > > I'm trying to figure out if there's a way to achieve this with nginx: > . > I didn't get response on the SO question yet, so trying my luck here :). > > Please let me know if this is possible. 
A request URI is passed to the server as follows: If the proxy_pass directive is specified with a URI, then when a request is passed to the server, the part of a normalized request URI matching the location is replaced by a URI specified in the directive: location /name/ { proxy_pass http://127.0.0.1/remote/; } http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass Also in case of rewrite you probably want `break` since `last` restart location matching process From xiaokai.wang at live.com Wed Nov 4 03:50:45 2015 From: xiaokai.wang at live.com (Xiaokai Wang) Date: Wed, 4 Nov 2015 03:50:45 +0000 Subject: different centos version + different nginx version make different performance In-Reply-To: <56380DE4.3090904@greengecko.co.nz> References: , <56380DE4.3090904@greengecko.co.nz> Message-ID: hi steve, Thanks your reply, it's so nice to me! Sorry to return you back lately. Yes, it' hard for a totally fair scenario, maybe we can concentrate on main factor? The environment nginx is used as a proxy, and as we all know it's hardly consumed memory. The nginx is main task, the others can be leaved out and qps is 10000/s, all requests are light. The load average is different, because from centos 5.4 to centos 7.1 the load average computed algorithm is changed, the new version reflect how many core fully used. Confused me is that us% is less than sy%, is that strange? %cpu double the old %cpu. nginx-1.8.0 has a much longer time, anybody use that on other version system? the statistics? nobody use nginx-1.8.0 on centos 7.1? I am much fresh on these problems, Please give me your advice, thanks. -----Regards,Xiaokai Subject: Re: different centos version + different nginx version make different performance To: nginx at nginx.org From: steve at greengecko.co.nz Date: Tue, 3 Nov 2015 14:29:08 +1300 Hi, On 11/02/2015 08:48 PM, Xiaokai Wang wrote: hi all, Sorry to bother you again, I find pictures cannt show clearly, so copy statistics below. thanks again. I find a performace decreasing when I update centos5.4 to centos7.1 and the same time nginx-1.2.7 to nginx-1.8.0. 
Exacted statistics as below: Tasks: 177 total, 3 running, 173 sleeping, 0 stopped, 1 zombie Cpu0 : 3.7%us, 3.3%sy, 0.0%ni, 92.4%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st Cpu1 : 4.7%us, 3.7%sy, 0.0%ni, 91.0%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st Cpu2 : 3.6%us, 3.0%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st Cpu3 : 4.0%us, 3.0%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st Cpu4 : 3.7%us, 3.0%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st Cpu5 : 3.3%us, 2.6%sy, 0.0%ni, 93.4%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st Cpu6 : 3.3%us, 3.3%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st Cpu7 : 4.0%us, 2.7%sy, 0.0%ni, 92.7%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st Cpu8 : 4.0%us, 2.7%sy, 0.0%ni, 93.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st Cpu9 : 4.0%us, 3.7%sy, 0.0%ni, 91.3%id, 0.7%wa, 0.0%hi, 0.3%si, 0.0%st Cpu10 : 2.7%us, 2.0%sy, 0.0%ni, 82.4%id, 0.0%wa, 0.7%hi, 12.3%si, 0.0%st Cpu11 : 2.3%us, 2.3%sy, 0.0%ni, 64.7%id, 0.0%wa, 1.0%hi, 29.7%si, 0.0%st Mem: 12273740k total, 9946612k used, 2327128k free, 5475768k buffers Swap: 8385920k total, 0k used, 8385920k free, 2030764k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 26770 nobody 15 0 91076 32m 1196 S 9.0 0.3 997:15.34 proxy-nginx 26769 nobody 15 0 91936 33m 1196 S 7.7 0.3 1043:52 proxy-nginx 26778 nobody 15 0 91560 33m 1196 R 7.7 0.3 921:28.59 proxy-nginx 26772 nobody 15 0 92164 33m 1196 S 7.3 0.3 920:47.31 proxy-nginx 26773 nobody 15 0 91808 33m 1200 S 7.3 0.3 930:41.83 proxy-nginx 26776 nobody 15 0 92332 33m 1196 S 7.3 0.3 934:57.61 proxy-nginx 26777 nobody 15 0 92504 33m 1196 S 7.3 0.3 926:08.37 proxy-nginx 26771 nobody 16 0 92176 33m 1196 S 7.0 0.3 955:29.21 proxy-nginx 26775 nobody 15 0 91688 33m 1200 R 7.0 0.3 939:46.60 proxy-nginx 26774 nobody 15 0 92044 33m 1196 S 6.3 0.3 842:52.49 proxy-nginx 26780 nobody 15 0 90772 32m 1200 S 6.0 0.3 696:50.17 proxy-nginx 26779 nobody 15 0 90380 31m 1196 S 5.7 0.3 832:16.64 proxy-nginx centos5.4 + nginx-1.2.7 Tasks: 276 total, 6 running, 268 sleeping, 2 stopped, 0 zombie %Cpu0 : 8.2 us, 8.9 sy, 0.0 ni, 73.4 id, 0.3 wa, 0.0 hi, 9.2 si, 0.0 st %Cpu1 : 7.9 us, 7.6 sy, 0.0 ni, 75.2 id, 0.0 wa, 0.0 hi, 9.3 si, 0.0 st %Cpu2 : 6.7 us, 8.1 sy, 0.0 ni, 77.4 id, 0.0 wa, 0.0 hi, 7.8 si, 0.0 st %Cpu3 : 7.4 us, 8.0 sy, 0.0 ni, 79.6 id, 0.0 wa, 0.0 hi, 5.0 si, 0.0 st %Cpu4 : 6.4 us, 7.1 sy, 0.0 ni, 78.8 id, 0.0 wa, 0.0 hi, 7.8 si, 0.0 st %Cpu5 : 7.0 us, 7.3 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 7.3 si, 0.0 st %Cpu6 : 27.3 us, 8.7 sy, 0.0 ni, 59.3 id, 0.0 wa, 0.0 hi, 4.7 si, 0.0 st %Cpu7 : 8.3 us, 8.0 sy, 0.0 ni, 75.1 id, 0.0 wa, 0.0 hi, 8.7 si, 0.0 st %Cpu8 : 6.9 us, 8.6 sy, 0.0 ni, 78.7 id, 0.0 wa, 0.0 hi, 5.8 si, 0.0 st %Cpu9 : 15.4 us, 8.6 sy, 0.0 ni, 69.2 id, 0.0 wa, 0.0 hi, 6.8 si, 0.0 st %Cpu10 : 23.9 us, 8.8 sy, 0.0 ni, 64.0 id, 0.0 wa, 0.0 hi, 3.4 si, 0.0 st %Cpu11 : 40.5 us, 9.1 sy, 0.0 ni, 41.2 id, 0.0 wa, 0.0 hi, 9.1 si, 0.0 st KiB Mem : 12126740 total, 159380 free, 607028 used, 11360332 buff/cache KiB Swap: 8388604 total, 8388604 free, 0 used. 
10895304 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6876 root 20 0 4604 864 496 R 85.4 0.0 0:52.28 gzip 12806 nobody 20 0 61508 35056 1272 R 18.3 0.3 282:22.99 nginx 12810 nobody 20 0 61044 34596 1272 S 17.6 0.3 292:06.85 nginx 12799 nobody 20 0 60768 34240 1272 R 16.9 0.3 275:36.83 nginx 12801 nobody 20 0 60764 34204 1272 S 15.6 0.3 277:38.95 nginx 12802 nobody 20 0 61468 34964 1272 S 15.3 0.3 226:20.06 nginx 12800 nobody 20 0 60716 34268 1272 R 14.6 0.3 276:13.54 nginx 12805 nobody 20 0 61424 34976 1272 S 14.6 0.3 241:46.36 nginx 12807 nobody 20 0 61128 34680 1272 S 14.6 0.3 245:35.70 nginx 12808 nobody 20 0 60640 34096 1272 R 14.6 0.3 255:17.38 nginx 12809 nobody 20 0 60716 34268 1272 S 14.6 0.3 244:02.67 nginx 12803 nobody 20 0 61396 34908 1272 S 14.3 0.3 284:18.39 nginx 12804 nobody 20 0 60568 34120 1272 S 14.0 0.3 282:03.64 nginx centos7.1 + nginx-1.8.0 From the statistics, we can see that updated environment cpu-load average is almostly doubled previous environment. Of course the sysctl.conf and nginx-conf are not changed after updating environment. It confused me and I dont know why. Anybody meet the problem? Please give me a help, thanks. ----- Regards, Xiaokai It's not really a fair check is it? New server is running 50% more processes, and most importantly is using 600MB instead of 10GB memory. I'd start off looking at where the memory is used on the old server. I'd just report CPU usage stats, rather than per cpu. nginx doesn't really use much cpu at all, but server-side programming languages ( PHP especially from experience ) do. It would probably be worth installing a monitoring package such as munin or cacti to get a better picture of what's going on. hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Nov 4 05:38:27 2015 From: nginx-forum at nginx.us (wangyiran125) Date: Wed, 04 Nov 2015 00:38:27 -0500 Subject: nginx install error In-Reply-To: References: Message-ID: <47a5095604b1e4129dcb9581395a2c9f.NginxMailingListEnglish@forum.nginx.org> i download nginx-plus-6.repo from https://cs.nginx.com/repo_setup Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262587,262591#msg-262591 From nginx-forum at nginx.us Wed Nov 4 10:08:49 2015 From: nginx-forum at nginx.us (hristoc) Date: Wed, 04 Nov 2015 05:08:49 -0500 Subject: Problem with http2 huge load average Message-ID: Hello, Im using nginx: nginx version: nginx/1.9.6 built with OpenSSL 1.0.1p 9 Jul 2015 TLS SNI support enabled configure arguments: --prefix=/usr --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx --user=nobody --group=nogroup --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-ipv6 --with-threads --with-select_module --with-poll_module --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_v2_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_geoip_module --with-http_auth_request_module --with-http_perl_module --with-perl_modules_path= --http-client-body-temp-path=/var/tmp/nginx_client_body_temp --http-proxy-temp-path=/var/tmp/nginx_proxy_temp --http-fastcgi-temp-path=/dev/shm --with-http_realip_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module on Slackware 14- current : Linux 123 4.1.12 #2 SMP Wed Oct 28 14:19:14 CDT 2015 x86_64 Intel(R) Xeon(R) CPU E5420 @ 2.50GHz GenuineIntel GNU/Linux I use it wtih php: PHP 5.6.15 (cli) (built: Nov 2 2015 09:35:45) as php-fpm with 32 threads. When I start nginx with http2 enabled on port 443: server { listen 443 ssl http2; , after a minute my load average jump to 32.50. When I switch off http2 load averege after a few seconds drop to 0.30. This probably is a bug but any solution? I can give more info if is needed. Regards, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262594,262594#msg-262594 From ek at nginx.com Wed Nov 4 12:13:06 2015 From: ek at nginx.com (Ekaterina Kukushkina) Date: Wed, 4 Nov 2015 15:13:06 +0300 Subject: nginx install error In-Reply-To: <4b6aa6efad63fbb0f9c15a258661b28d.NginxMailingListEnglish@forum.nginx.org> References: <4b6aa6efad63fbb0f9c15a258661b28d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <65A7BDB3-69BD-4A5C-8268-81D39EBFE2D1@nginx.com> Hello wangyiran125, I suggest that you should ask our technical support team. The support team email you may find in the e-mail with Nginx Plus activation link. > On 04 Nov 2015, at 05:12, wangyiran125 wrote: > > i want to install nginx plus .and i try these steps step by step > Setup instruction for RHEL6/CentOS6/Oracle Linux 6 > Create /etc/ssl/nginx/ directory > Move CA.crt, nginx-repo.key and nginx-repo.crt files to /etc/ssl/nginx/ > directory > Copy nginx-plus-6.repo file to /etc/yum.repos.d/ directory > Run yum install nginx-plus to install nginx-plus package. Please keep in > mind that old nginx package (if any) will be replaced. 
> In order to upgrade from the previous version of nginx-plus, run yum upgrade > nginx-plus > > but when i type yum install nginx-plus,it report > Repo nginx-plus forced skip_if_unavailable=True due to: > /etc/ssl/nginx/nginx-repo.crt > Repo nginx-plus forced skip_if_unavailable=True due to: > /etc/ssl/nginx/nginx-repo.key > https://plus-pkgs.nginx.com/centos/6/x86_64/repodata/repomd.xml: [Errno 14] > PYCURL ERROR 22 - "NSS: client certificate not found: > /etc/ssl/nginx/nginx-repo.crt" > Trying other mirror. > Setting up Install Process > No package nginx-plus available. > Error: Nothing to do > > how to solve it? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262587,262587#msg-262587 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ekaterina Kukushkina Support Engineer | NGINX, Inc. From luky-37 at hotmail.com Wed Nov 4 16:03:06 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 4 Nov 2015 17:03:06 +0100 Subject: ignore bad conf file In-Reply-To: References: Message-ID: > Hi all,? >? > Is there a way to configure nginx to ignore bad conf files? No, that would lead to inconsistencies all over the place. > My master nginx.conf has a include elsewhere/*.conf towards the end.? > Other people and programs can drop new configs into "elsewhere"? > directory. nginx reloads and all is great. Sometimes if one guy drops a? > conf file with a typo or syntax error, nginx will refuse to reload all? > other configs afterwards. Nginx is supposed to roll back in this case [1], are your reloading (HUP) or restarting? In any case, testing the config first is what you should do. Lukas [1] http://nginx.org/en/docs/control.html#reconfiguration From shannon at nginx.com Thu Nov 5 00:13:15 2015 From: shannon at nginx.com (Shannon Burns) Date: Wed, 4 Nov 2015 16:13:15 -0800 Subject: Feedback Request from NGINX on the Future of App Dev & Deployment Message-ID: Hello Everyone, Shannon here, from the developer relations team at NGINX, Inc. We're working on a comprehensive study about where application development and deployment are headed in the future. We?d love your input, and we think you might find the results quite interesting. The goal is to learn from your knowledge and experience, synthesize the results of the study and share the full report of what we've found with everyone in our community. Completing the survey gives you first access to the report once it is finished. Interested in participating? Take the survey here . It should take roughly 20 minutes to complete and will close on November 30, 2015. I'm very excited to see what we can collectively learn from this experiment. Thanks in advance for participating, and I look forward to sharing the results with you. Yours in code, Shannon Burns Developer Advocate NGINX, Inc -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Nov 5 02:05:01 2015 From: nginx-forum at nginx.us (kodeninja) Date: Wed, 04 Nov 2015 21:05:01 -0500 Subject: nginx rewrite url without causing redirect and proxy to backend app In-Reply-To: <1446608626.1210323.428530529.7E8E6898@webmail.messagingengine.com> References: <1446608626.1210323.428530529.7E8E6898@webmail.messagingengine.com> Message-ID: nanaya Wrote: ------------------------------------------------------- > Also in case of rewrite you probably want `break` since `last` restart > location matching process > That did the trick :)! 
The `break` was the part I was not doing right. Thanks @nanaya!! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262583,262608#msg-262608 From zxcvbn4038 at gmail.com Thu Nov 5 05:55:36 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 5 Nov 2015 00:55:36 -0500 Subject: Tuning upstream keepalive parameter Message-ID: So I'm looking for some advice on determining an appropriate number for the keepalive parameter within an upstream stanza. They server processes ~3000 requests per second, and haproxy is the single upstream destination. Dividing by the request rate by the number of processors (workers) I'm thinking that maybe 256 is a good starting number for the max keepalives. Is that realistic? Or should I be looking at a fraction of that number? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Thu Nov 5 06:33:05 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 4 Nov 2015 22:33:05 -0800 Subject: ignore bad conf file In-Reply-To: References: Message-ID: Thanks Lukas! I tried configtest but with 100k files in conf.d, it takes 3 minutes to finish, during which time there may be another file dropped in conf.d and trigger another configtest. This sometimes causes several config test running at the same time. A reload on the other hand is much quicker and returns almost immediately. I understand this is just a sighup and nginx internally may still take as much longer to finish reload? If there is only one file change, will reload cause nginx to reread all the 100k files and process them ? Frank On Wednesday, November 4, 2015, Lukas Tribus wrote: > > Hi all, > > > > Is there a way to configure nginx to ignore bad conf files? > > No, that would lead to inconsistencies all over the place. > > > > > My master nginx.conf has a include elsewhere/*.conf towards the end. > > Other people and programs can drop new configs into "elsewhere" > > directory. nginx reloads and all is great. Sometimes if one guy drops a > > conf file with a typo or syntax error, nginx will refuse to reload all > > other configs afterwards. > > Nginx is supposed to roll back in this case [1], are your reloading (HUP) > or restarting? > > > In any case, testing the config first is what you should do. > > > > Lukas > > [1] http://nginx.org/en/docs/control.html#reconfiguration > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Thu Nov 5 08:25:03 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 5 Nov 2015 09:25:03 +0100 Subject: ignore bad conf file In-Reply-To: References: , , Message-ID: > Thanks Lukas! I tried configtest but with 100k files in conf.d, it > takes 3 minutes to finish, during which time there may be another file > dropped in conf.d and trigger another configtest. This sometimes causes > several config test running at the same time. > A reload on the other hand is much quicker and returns almost > immediately. I understand this is just a sighup and nginx internally > may still take as much longer to finish reload? If there is only one > file change, will reload cause nginx to reread all the 100k files and > process them? Yes. nginx will not "change" its configuration, it will start a new process with the new configuration. You can find more details about this in the link I posted. 
From vbart at nginx.com Thu Nov 5 11:30:00 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 05 Nov 2015 14:30 +0300 Subject: Problem with http2 huge load average In-Reply-To: References: Message-ID: <8130666.u1Z0vuBXyS@vbart-workstation> On Wednesday 04 November 2015 05:08:49 hristoc wrote: > Hello, > > Im using nginx: > > nginx version: nginx/1.9.6 > built with OpenSSL 1.0.1p 9 Jul 2015 > TLS SNI support enabled > configure arguments: --prefix=/usr --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid > --lock-path=/var/lock/nginx --user=nobody --group=nogroup > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --with-ipv6 --with-threads > --with-select_module --with-poll_module --with-http_ssl_module > --with-http_realip_module --with-http_addition_module > --with-http_xslt_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module > --with-http_v2_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-http_geoip_module > --with-http_auth_request_module --with-http_perl_module > --with-perl_modules_path= > --http-client-body-temp-path=/var/tmp/nginx_client_body_temp > --http-proxy-temp-path=/var/tmp/nginx_proxy_temp > --http-fastcgi-temp-path=/dev/shm --with-http_realip_module > --without-mail_pop3_module --without-mail_imap_module > --without-mail_smtp_module > > > on Slackware 14- current : > Linux 123 4.1.12 #2 SMP Wed Oct 28 14:19:14 CDT 2015 x86_64 Intel(R) Xeon(R) > CPU E5420 @ 2.50GHz GenuineIntel GNU/Linux > > > I use it wtih php: PHP 5.6.15 (cli) (built: Nov 2 2015 09:35:45) as php-fpm > with 32 threads. > > When I start nginx with http2 enabled on port 443: > server { > listen 443 ssl http2; > > , after a minute my load average jump to 32.50. When I switch off http2 load > averege after a few seconds drop to 0.30. > > This probably is a bug but any solution? I can give more info if is needed. > Is it nginx who eats your cpu, or php? wbr, Valentin V. Bartenev From mdounin at mdounin.ru Thu Nov 5 13:19:09 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Nov 2015 16:19:09 +0300 Subject: Tuning upstream keepalive parameter In-Reply-To: References: Message-ID: <20151105131909.GB74233@mdounin.ru> Hello! On Thu, Nov 05, 2015 at 12:55:36AM -0500, CJ Ess wrote: > So I'm looking for some advice on determining an appropriate number for the > keepalive parameter within an upstream stanza. > > They server processes ~3000 requests per second, and haproxy is the single > upstream destination. Dividing by the request rate by the number of > processors (workers) I'm thinking that maybe 256 is a good starting number > for the max keepalives. > > Is that realistic? Or should I be looking at a fraction of that number? Number of requests per second processed is mostly irrelevant. Two important numbers are: - How many connections your upstream servers can handle. It's a good idea to don't exhaust all available connections with keepalive ones. - How many connections are used under normal load and/or during load spikes. That is, how many simultaneous requests are executed on upstream servers. It make sense to keep comparable number of connections alive to handle load fluctuations. 
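For reference, the directives involved look roughly like this; the upstream name, address and the value 64 are placeholders for illustration, not a recommendation for this particular setup. Note that proxy_http_version 1.1 and an emptied Connection header are required for keepalive connections to an upstream:

    upstream haproxy_backend {
        server 127.0.0.1:8080;   # example address of the single haproxy
        keepalive 64;            # idle connections cached per worker process
    }

    server {
        listen 80;

        location / {
            proxy_pass http://haproxy_backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }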
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Nov 5 15:11:09 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Nov 2015 18:11:09 +0300 Subject: resolver does not seem to work with proxy_pass In-Reply-To: References: Message-ID: <20151105151109.GE74233@mdounin.ru> Hello! On Mon, Nov 02, 2015 at 09:53:31PM -0800, SpolFirst SpolMiddle SpolLast wrote: > I have setup nginx (1.9.5) as a proxy for few tomcat servers through > proxy_pass directive. The proxy_pass points to a domain name (aws route53). > The proxying works okay, but domain name is resolved only during nginx > startup/reload. I have setup the resolver to force resolving domain name > every few seconds, but it does not work. Here is my config. Any thoughts > on why this does not work? [...] > resolver 10.0.0.2 valid=10s; > set $target "http://abctest.hello.world:80"; > proxy_pass $target; One possible reason is that "abctest.hello.world" is explicitly defined as an upstream{} block. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Nov 5 17:08:16 2015 From: nginx-forum at nginx.us (hristoc) Date: Thu, 05 Nov 2015 12:08:16 -0500 Subject: Problem with http2 huge load average In-Reply-To: <8130666.u1Z0vuBXyS@vbart-workstation> References: <8130666.u1Z0vuBXyS@vbart-workstation> Message-ID: <0b7a1b6ac059c786b0b2033df1761bf7.NginxMailingListEnglish@forum.nginx.org> No, php eat my cpu but when I enable http/2.0 support. If http2 mode is off everything is normal, I switch back to nginx 1.9.5 and worked fine with http2. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262594,262620#msg-262620 From vbart at nginx.com Thu Nov 5 17:12:22 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 05 Nov 2015 20:12:22 +0300 Subject: Problem with http2 huge load average In-Reply-To: <0b7a1b6ac059c786b0b2033df1761bf7.NginxMailingListEnglish@forum.nginx.org> References: <8130666.u1Z0vuBXyS@vbart-workstation> <0b7a1b6ac059c786b0b2033df1761bf7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9277006.97W4c2fba6@vbart-workstation> On Thursday 05 November 2015 12:08:16 hristoc wrote: > No, php eat my cpu but when I enable http/2.0 support. If http2 mode is off > everything is normal, I switch back to nginx 1.9.5 and worked fine with > http2. > You should check your php code. Probably, it does something strange when it sees different HTTP protocol version in the environment variables. Please note that nginx does nothing with php and scripts. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Thu Nov 5 17:33:13 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Nov 2015 20:33:13 +0300 Subject: log_subrequest and auth_request In-Reply-To: <60cb40367697c6d7419de24f889e059c.NginxMailingListEnglish@forum.nginx.org> References: <60cb40367697c6d7419de24f889e059c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151105173312.GH74233@mdounin.ru> Hello! On Mon, Nov 02, 2015 at 05:37:00PM -0500, jstangroome wrote: > Hi, > > I have set `log_subrequest on;` at the http level and I am using to > `auth_request` to a location that does a `proxy_pass` but I am not seeing > the details of the auth subrequest in the access.log. Should this work? > > Conf: > > > log_subrequest on; > > server{ > > location / { > > auth_request /authorize; > > # ... > > } > > location /authorize { > > proxy_pass http://authserver/doauth; > > } > > } Works fine here. 
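For illustration, a log format that records $uri alongside $request makes the auth subrequest show up as its own distinguishable line; the format name and log path below are invented:

    log_format subreq '$remote_addr [$time_local] "$request" uri=$uri $status';

    log_subrequest on;
    access_log /var/log/nginx/access.log subreq;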
Note though, that with default logging format (with $request, see http://nginx.org/r/log_format) is mostly impossible to distinguish a subrequest from the main request, as $request will be identical for both. You have to log $uri to see the difference. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Nov 6 01:00:50 2015 From: nginx-forum at nginx.us (vmbeliz) Date: Thu, 05 Nov 2015 20:00:50 -0500 Subject: SEF / Friendly URL - 404 Page not found Message-ID: <0f251e61b168ade58f21b83b029235a4.NginxMailingListEnglish@forum.nginx.org> Hi guys, Please i need help. i have a server with Centos 6.5 + Nginx + PHP-FPM. I use a website with joomla. I configured to use Friendly url (SEF) in joomla, and worked, but if i enter something that do not exist like www.example.com/asdasdad, so the server redirect me back to index (www.example.com); but should show me the joomla error page. If i disable the SEF, and test, so i can see the joomla error page. What i need to do ? I'll send my vhost.conf. ##################### VHOST.CONF: server { server_name example.com; return 301 $scheme://www.example.com$request_uri; } server { listen 80; server_name www.example.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; root /usr/share/nginx/html; location / { index index.html index.htm index.php; try_files $uri $uri/ /index.php?$args; autoindex on; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name; fastcgi_connect_timeout 120; fastcgi_send_timeout 500; fastcgi_read_timeout 500; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_param PHP_VALUE " post_max_size=50M upload_max_filesize=50M "; } } ############################### Thank you all! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262625,262625#msg-262625 From r1ch+nginx at teamliquid.net Fri Nov 6 01:18:20 2015 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 6 Nov 2015 02:18:20 +0100 Subject: SEF / Friendly URL - 404 Page not found In-Reply-To: <0f251e61b168ade58f21b83b029235a4.NginxMailingListEnglish@forum.nginx.org> References: <0f251e61b168ade58f21b83b029235a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: All your requests for non-static content are being routed through the joomla index.php, so this is an option you'll have to look for in your joomla configuration. On Fri, Nov 6, 2015 at 2:00 AM, vmbeliz wrote: > Hi guys, > Please i need help. > i have a server with Centos 6.5 + Nginx + PHP-FPM. > I use a website with joomla. > I configured to use Friendly url (SEF) in joomla, and worked, but if i > enter > something that do not exist like www.example.com/asdasdad, so the server > redirect me back to index (www.example.com); but should show me the joomla > error page. If i disable the SEF, and test, so i can see the joomla error > page. What i need to do ? > > I'll send my vhost.conf. 
> > ##################### > VHOST.CONF: > > server { > server_name example.com; > return 301 $scheme://www.example.com$request_uri; > } > > server { > listen 80; > server_name www.example.com; > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > root /usr/share/nginx/html; > > location / { > index index.html index.htm index.php; > try_files $uri $uri/ /index.php?$args; > autoindex on; > } > > location ~ \.php$ { > try_files $uri =404; > include /etc/nginx/fastcgi_params; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /usr/share/nginx/html$fastcgi_script_name; > fastcgi_connect_timeout 120; > fastcgi_send_timeout 500; > fastcgi_read_timeout 500; > fastcgi_buffer_size 128k; > fastcgi_buffers 4 256k; > fastcgi_busy_buffers_size 256k; > fastcgi_temp_file_write_size 256k; > > fastcgi_param PHP_VALUE " > post_max_size=50M > upload_max_filesize=50M > "; > } > } > ############################### > > Thank you all! > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,262625,262625#msg-262625 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Fri Nov 6 03:33:46 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 5 Nov 2015 22:33:46 -0500 Subject: Tuning upstream keepalive parameter In-Reply-To: <20151105131909.GB74233@mdounin.ru> References: <20151105131909.GB74233@mdounin.ru> Message-ID: So in this case NGINX is terminating SSL, and the upstream is HAPROXY running the same server. HAProxy applies routing rules and distributes requests to a pool of ~500 servers in round robin fashion. In this case we're most interested in keeping the connections between NGINX to HAPROXY alive. The NGINX server receives 3000 requests per second around the clock, roughly 250 requests per worker per second. How many are active simultaneously over the course of a second I can only guess - for the sake of argument we could guess that if each request cook 250ms to complete, then maybe 64 would be active simultaneously at any point. On Thu, Nov 5, 2015 at 8:19 AM, Maxim Dounin wrote: > Hello! > > On Thu, Nov 05, 2015 at 12:55:36AM -0500, CJ Ess wrote: > > > So I'm looking for some advice on determining an appropriate number for > the > > keepalive parameter within an upstream stanza. > > > > They server processes ~3000 requests per second, and haproxy is the > single > > upstream destination. Dividing by the request rate by the number of > > processors (workers) I'm thinking that maybe 256 is a good starting > number > > for the max keepalives. > > > > Is that realistic? Or should I be looking at a fraction of that number? > > Number of requests per second processed is mostly irrelevant. Two > important numbers are: > > - How many connections your upstream servers can handle. It's a good > idea to don't exhaust all available connections with keepalive > ones. > > - How many connections are used under normal load and/or during > load spikes. That is, how many simultaneous requests are > executed on upstream servers. It make sense to keep comparable > number of connections alive to handle load fluctuations. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Nov 6 03:52:24 2015 From: nginx-forum at nginx.us (huakaibird) Date: Thu, 05 Nov 2015 22:52:24 -0500 Subject: Chrome/Firefox lost some cookie Message-ID: <84c2f9873813fb84f1f2f19dc74c283b.NginxMailingListEnglish@forum.nginx.org> Hi, I have one strange issue: I use nginx 1.8.0 on CentOS 7.1, found that some cookie will lost on chrome/firefox, but IE works, and one of my nginx works these nginx's configurations are same This is cookie copy from chrome, it lost _zm_sid cookie, response has the cookie (which is secure and http only) Request Cookies 500 PRUM_EPISODES s=1446776564844&r=https%3A//10.10.101.31/signin N/A N/A N/A 61 __ar_v4 DV47DVTZ4NHJNM5BVKACVM%3A20151106%3A23%7CMNBMU5UBV5A6DJOSTXTI32%3A20151106%3A23%7CFYTZRQUEVVGS7EWCIOE64A%3A20151106%3A23 N/A N/A N/A 130 __qca P0-1693026123-1446769914724 N/A N/A N/A 35 __zlcmid XZf57JMul1Jhtw N/A N/A N/A 25 _bizo_bzid cf545ade-f8f3-4815-9837-cbdd1f802497 N/A N/A N/A 49 _bizo_cksm 527A3CEFBCE90A8A N/A N/A N/A 29 _bizo_np_stats 14%3D530%2C N/A N/A N/A 28 _ga GA1.1.1717735130.1446768974 N/A N/A N/A 33 _zm_bu https%3A%2F%2F10.10.101.31%2Fmeeting N/A N/A N/A 45 cred 9B4823D72A1F7FD05F58E2A1A0F20221 N/A N/A N/A 39 visitor_id84442 25513943 N/A N/A N/A 26 Response Cookies 95 _zm_sid PinjHTI6Th-jv1c_pPO9Ug / 2015-11-06T04:22:41.000Z 95 ? ? My nginx configuration: ssl_session_cache shared:SSL:10m; ssl_session_timeout 30m; upstream backend { server x.x.x.x; } server { listen 80; listen 443 ssl; location / { proxy_pass http://backend; } keepalive_timeout 70; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_buffers 128 16k; client_body_buffer_size 2048k; underscores_in_headers on; ssl_certificate ssl/chained.crt; #ssl_certificate ssl/4582cfef411bb.crt; ssl_certificate_key ssl/xxx20140410.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; #ssl_ciphers HIGH:!aNULL:!MD5; ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; ssl_prefer_server_ciphers on; ssl_dhparam ssl/dhparams.pem; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262628,262628#msg-262628 From nginx-forum at nginx.us Fri Nov 6 06:33:18 2015 From: nginx-forum at nginx.us (hristoc) Date: Fri, 06 Nov 2015 01:33:18 -0500 Subject: Problem with http2 huge load average In-Reply-To: <9277006.97W4c2fba6@vbart-workstation> References: <9277006.97W4c2fba6@vbart-workstation> Message-ID: <8cf16a15c824546ff37205996e904cfb.NginxMailingListEnglish@forum.nginx.org> Yes I know, but it's very strange situation that can't 
understand. Code is the same, nothing is changed. Only nginx version and http2 support on host. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262594,262629#msg-262629 From luky-37 at hotmail.com Fri Nov 6 08:21:51 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 6 Nov 2015 09:21:51 +0100 Subject: Problem with http2 huge load average In-Reply-To: <8cf16a15c824546ff37205996e904cfb.NginxMailingListEnglish@forum.nginx.org> References: <9277006.97W4c2fba6@vbart-workstation>, <8cf16a15c824546ff37205996e904cfb.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Yes I know, > but it's very strange situation that can't understand. Code is the same, > nothing is changed. Only nginx version and http2 support on host. nginx 1.9.5 has a bug (#800): $server_protocol is empty on HTTP2. This is fixed in nginx 1.9.6, so with 1.9.6 PHP/FCGI for the first time sees HTTP/2.0 as SERVER_PROTOCOL. You can probably reproduce the problem in 1.9.5 if set it manually: fastcgi_param ?SERVER_PROTOCOL ? ?HTTP/2.0; Or workaround it in 1.9.6 if reset: fastcgi_param ?SERVER_PROTOCOL ? ?HTTP/1.1; In the end, you have to fix your PHP code. From nginx-forum at nginx.us Fri Nov 6 18:00:04 2015 From: nginx-forum at nginx.us (xfeep) Date: Fri, 06 Nov 2015 13:00:04 -0500 Subject: [ANN] An Example for a custom module integrating java by nginx-clojure module Message-ID: An Example for a custom module integrating java by nginx-clojure module. This simple example shows how a nginx C module integrates existing Java libraries by using nginx-clojure. Example Link: https://github.com/nginx-clojure/nginx-clojure/tree/master/example-projects/c-module-integration-example Original Discussion in google group https://groups.google.com/forum/#!topic/nginx-clojure/Unc9HN8r9JE Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262635,262635#msg-262635 From anoopalias01 at gmail.com Sat Nov 7 11:42:11 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 7 Nov 2015 17:12:11 +0530 Subject: NAXSI directive for fixing this error Message-ID: Hi, I had an issue with nginx compiled in with the NAXSI waf . ##################### nginx: [emerg] could not build the wlr_url_hash, you should increase wlr_url_hash_bucket_size: 512 nginx: [emerg] $URL hashtable init failed in /etc/nginx/nginx.conf:87 nginx: [emerg] WhiteList Hash building failed in /etc/nginx/nginx.conf:87 nginx: configuration file /etc/nginx/nginx.conf test failed ####################### Cant seem to find the directive to increase this wlr_url_hash_bucket_size . tried wlr_url_hash_bucket_size 1024; but it says invalid directive . The naxsi docs doesnt seem to include this . Thanks in advance. -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Nov 8 00:42:54 2015 From: nginx-forum at nginx.us (jstangroome) Date: Sat, 07 Nov 2015 19:42:54 -0500 Subject: log_subrequest and auth_request In-Reply-To: <20151105173312.GH74233@mdounin.ru> References: <20151105173312.GH74233@mdounin.ru> Message-ID: Thanks, adding the `$uri` fields to the log helps. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262567,262648#msg-262648 From zxcvbn4038 at gmail.com Sun Nov 8 01:28:29 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Sat, 7 Nov 2015 20:28:29 -0500 Subject: listen deferred option Message-ID: Just curious - if I am using the deferred listen option on Linux my understanding is that nginx will not be woken up until data arrives for the connection. 
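For reference, the option in question is just an extra flag on the listen directive; the sketch below is illustrative, not this server's actual configuration:

    server {
        listen 80 deferred;       # uses TCP_DEFER_ACCEPT on Linux
        return 200 "ok\n";        # placeholder so the sketch is complete
    }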
If someone is trying to DDOS me by opening as many connections as possible (has happened before) how does that situation play out with deferred accepts? Currently I am not using the deferred option and I have timeouts set so that if complete request headers aren't received in a few seconds then the connection is closed, however with deffered accepts I don't believe nginx would be able to do that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From 416626442 at qq.com Sun Nov 8 06:47:03 2015 From: 416626442 at qq.com (=?utf-8?B?Li4uwrLCusK5wrM=?=) Date: Sun, 8 Nov 2015 14:47:03 +0800 Subject: set_real_ip_from --how to load IP list file ? Message-ID: Hi, I'm using CDN with my site. There are many IPs on network. set_real_ip_from only can add IP in one line. I want to add IP list to file and load it . How to set ? PS: Apache Module mod_remoteip have RemoteIPTrustedProxyList for this. But I just want to use nginx. http://httpd.apache.org/docs/2.4/mod/mod_remoteip.html RemoteIPTrustedProxyList Directive Description:Declare client intranet IP addresses trusted to present the RemoteIPHeader value Syntax:RemoteIPTrustedProxyList filename Context:server config, virtual host Status:Base Module:mod_remoteip The RemoteIPTrustedProxyList directive specifies a file parsed at startup, and builds a list of addresses (or address blocks) to trust as presenting a valid RemoteIPHeader value of the useragent IP. The '#' hash character designates a comment line, otherwise each whitespace or newline separated entry is processed identically to the RemoteIPTrustedProxy directive. Trusted (Load Balancer) Example RemoteIPHeader X-Forwarded-For RemoteIPTrustedProxyList conf/trusted-proxies.lst conf/trusted-proxies.lst contents # Identified external proxies; 192.0.2.16/28 #wap phone group of proxies proxy.isp.example.com #some well known ISP -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Nov 8 07:36:57 2015 From: nginx-forum at nginx.us (ifeltsweet) Date: Sun, 08 Nov 2015 02:36:57 -0500 Subject: proxy_cache with proxy_intercept_errors bug Message-ID: <42576c7602e9dc60229f49d48fad2603.NginxMailingListEnglish@forum.nginx.org> Hi, I have a very simple proxy config: http { proxy_cache_path /var/www/cache levels=1:2 keys_zone=s3-images-cache:50m inactive=1M max_size=1000m; proxy_temp_path /var/www/cache/tmp; } server { listen 80; server_name images.example.com; location / { proxy_cache s3-images-cache; proxy_cache_key $scheme$proxy_host$uri$is_args$args; proxy_cache_bypass $http_purge_cache; proxy_cache_valid any 1y; proxy_pass http://images-example.s3.amazonaws.com; add_header X-Cache $upstream_cache_status; proxy_intercept_errors on; error_page 404 = @no_image; } location @no_image { return 403; } } 1. Let's request /image.jpg. 2. Request is sent to proxy for /image.jpg (does not exists yet). 3. Backend responds with 404. 4. "proxy_intercept_errors on" kicks in and "error_page 404 = @no_image" is called. 5. Nginx returns 403. 6. Do another request for same image and see that "X-Cache: HIT" is set. We are clearly hitting cache. But, if we check /var/www/cache/ folder at this time we will see that there is no cache item created for this request. So does it mean Nginx keeps the cache for it in memory and forgot to write to file? 7. Let's upload /image.jpg to backend. 8. Now do "PURGE-CACHE: 1" request to that image. We see that now we get the image instead of 403. Good. 
If we check /var/www/cache/ we will see that the cache file is now finally created for this request. Also good. 9. Now here is the problem: lets request /image.jpg again. 10. Nginx returns 403 with "X-Cache: HIT". Why? So it's hitting the cache but returning something else not what is in /var/www/cache folder?? How? My only explanation to this is it seems that Nginx is caching the response in memory and doesn't write to file when we are hitting error with our custom error_page in the proxied responses. Furthermore when using proxy_cache_bypass it does not overwrite in-memory cache, so that subsequent requests to the same item will be using old cache which is stored in memory and not the new one created in the cache folder. Could someone please let me know if I am doing something wrong or this really is a bug. Spent last 3 days fighting this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262651,262651#msg-262651 From adam at jooadam.hu Sun Nov 8 15:16:14 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Sun, 8 Nov 2015 16:16:14 +0100 Subject: Certificate Transparency Message-ID: Hi, Do we know if there?s any plan to support the signed certificate timestamp TLS extension in Nginx? (There?s apparently a third party module that implements the functionality: https://github.com/grahamedgecombe/nginx-ct) The TLS extension is the only method to implement Certificate Transparency without the assistance of the CA, and starting with January 1 2015 Chrome refuses to display the green bar for EV certificates without Certificate Transparency. StartSSL is one CA that currently does not support other methods, which means a lot of sites suffers from this. ? From r at roze.lv Mon Nov 9 02:03:55 2015 From: r at roze.lv (Reinis Rozitis) Date: Mon, 9 Nov 2015 04:03:55 +0200 Subject: set_real_ip_from --how to load IP list file ? In-Reply-To: References: Message-ID: > I'm using CDN with my site. > There are many IPs on network. > set_real_ip_from only can add IP in one line. set_real_ip_from supports subnet masks, so you can more or less achieve the same as in apache. If you want an external file you can use include ( http://nginx.org/en/docs/ngx_core_module.html#include ) just in the file prefix each line with set_real_ip_from. p.s. some CDNs even provide copy/paste configurations ( like https://support.cloudflare.com/hc/en-us/articles/200170706-How-do-I-restore-original-visitor-IP-with-Nginx ) rr From zxcvbn4038 at gmail.com Mon Nov 9 03:38:06 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Sun, 8 Nov 2015 22:38:06 -0500 Subject: set_real_ip_from --how to load IP list file ? In-Reply-To: References: Message-ID: He has a point - if your using multiple CDNs you can have many dozens of addresses for the real_ip module - it would be nice to be able to source them from a file. Also last I checked the real_ip module did a linear search through all the addresses configured, its not an issue yet but at some point it would be worth changing that to some sort of tree structure. On Sun, Nov 8, 2015 at 9:03 PM, Reinis Rozitis wrote: > I'm using CDN with my site. >> There are many IPs on network. >> set_real_ip_from only can add IP in one line. >> > > set_real_ip_from supports subnet masks, so you can more or less achieve > the same as in apache. > If you want an external file you can use include ( > http://nginx.org/en/docs/ngx_core_module.html#include ) just in the file > prefix each line with set_real_ip_from. > > p.s. 
some CDNs even provide copy/paste configurations ( like > https://support.cloudflare.com/hc/en-us/articles/200170706-How-do-I-restore-original-visitor-IP-with-Nginx > ) > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Nov 9 10:41:18 2015 From: nginx-forum at nginx.us (cubicdaiya) Date: Mon, 09 Nov 2015 05:41:18 -0500 Subject: NAXSI directive for fixing this error In-Reply-To: References: Message-ID: Hello. 2015-11-07 20:42 GMT+09:00 Anoop Alias : > ##################### > nginx: [emerg] could not build the wlr_url_hash, you should increase > wlr_url_hash_bucket_size: 512 > nginx: [emerg] $URL hashtable init failed in /etc/nginx/nginx.conf:87 > nginx: [emerg] WhiteList Hash building failed in /etc/nginx/nginx.conf:87 > nginx: configuration file /etc/nginx/nginx.conf test failed > ####################### > > Cant seem to find the directive to increase this wlr_url_hash_bucket_size . Maybe wlr_url_hash_bucket_size is not a directive but the size of a hash table named wlr_url_hash and implemented on naxsi. refs -> http://hg.nginx.org/nginx/file/tip/src/core/ngx_hash.c#l273 As far as seen the source code, the bucket size of wlr_url_hash is hard-coded as 512. https://github.com/nbs-system/naxsi/blob/master/naxsi_src/naxsi_utils.c#L569-#L573 https://github.com/nbs-system/naxsi/blob/master/naxsi_src/naxsi_utils.c#L591-#L598 Why don't you try increase this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262644,262659#msg-262659 From mdounin at mdounin.ru Mon Nov 9 12:53:09 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Nov 2015 15:53:09 +0300 Subject: listen deferred option In-Reply-To: References: Message-ID: <20151109125309.GP74233@mdounin.ru> Hello! On Sat, Nov 07, 2015 at 08:28:29PM -0500, CJ Ess wrote: > Just curious - if I am using the deferred listen option on Linux my > understanding is that nginx will not be woken up until data arrives for the > connection. If someone is trying to DDOS me by opening as many connections > as possible (has happened before) how does that situation play out with > deferred accepts? > > Currently I am not using the deferred option and I have timeouts set so > that if complete request headers aren't received in a few seconds then the > connection is closed, however with deffered accepts I don't believe nginx > would be able to do that. When using deferred accept, nginx instructs the kernel to defer connections for just 1 second. After this time, the kernel will pass connections to nginx for normal processing. If there are too many connections waiting in deferred accept (more than a socket backlog), syncookies will be used by the kernel if enabled. Note that this works slightly differently with old kernels (before 2.6.32), and in previous nginx versions (before 1.5.10). Some additional information can be found in these commit logs: http://hg.nginx.org/nginx/rev/fdb67cfc957d http://hg.nginx.org/nginx/rev/05a56ebb084a -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 9 13:06:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Nov 2015 16:06:03 +0300 Subject: set_real_ip_from --how to load IP list file ? In-Reply-To: References: Message-ID: <20151109130603.GQ74233@mdounin.ru> Hello! On Sun, Nov 08, 2015 at 02:47:03PM +0800, ...???? wrote: > Hi, > I'm using CDN with my site. 
> There are many IPs on network. > set_real_ip_from only can add IP in one line. > I want to add IP list to file and load it . > How to set ? The "set_real_ip_from" directive understands not only IPs, but also networks in CIDR notation. That is, if you want to enable real ip handling for a network, you can write something like: set_real_ip_from 192.168.1.0/24; And this is recommended compared to listing individual addresses, as it's much faster to check. There is no support for loading multiple addresses from a file. If you really want to load a list from a file, you can use the "include" directive, see http://nginx.org/r/include. You'll have to convert your list to be valid nginx configuration with "set_real_ip_from" and ";" (something can be easily done with an awk or perl one-liner). -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 9 13:21:34 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Nov 2015 16:21:34 +0300 Subject: Certificate Transparency In-Reply-To: References: Message-ID: <20151109132134.GR74233@mdounin.ru> Hello! On Sun, Nov 08, 2015 at 04:16:14PM +0100, Jo? ?d?m wrote: > Do we know if there?s any plan to support the signed certificate > timestamp TLS extension in Nginx? (There?s apparently a third party > module that implements the functionality: > https://github.com/grahamedgecombe/nginx-ct) No plans. > The TLS extension is the only method to implement Certificate > Transparency without the assistance of the CA, and starting with > January 1 2015 Chrome refuses to display the green bar for EV > certificates without Certificate Transparency. > > StartSSL is one CA that currently does not support other methods, > which means a lot of sites suffers from this. There are at lease some CAs that provide CT support without a need to submit a certificate to log servers yourself and use the signed_certificate_timestamp extension. Given that's all about EV certs, switching to a different CA is a solution to consider if a particular CA doesn't support CT. -- Maxim Dounin http://nginx.org/ From adam at jooadam.hu Mon Nov 9 16:15:37 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Mon, 9 Nov 2015 17:15:37 +0100 Subject: Certificate Transparency In-Reply-To: <20151109132134.GR74233@mdounin.ru> References: <20151109132134.GR74233@mdounin.ru> Message-ID: Hi Maxim, > Given that's all about EV > certs, switching to a different CA is a solution to consider if a > particular CA doesn't support CT. My understanding is that it is something that would benefit the entire ecosystem, not just EV sites, but I understand if this has no priority. I guess CAs will catch up sooner or later. Thanks for the answer. ? From zxcvbn4038 at gmail.com Tue Nov 10 06:08:50 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 10 Nov 2015 01:08:50 -0500 Subject: listen deferred option In-Reply-To: <20151109125309.GP74233@mdounin.ru> References: <20151109125309.GP74233@mdounin.ru> Message-ID: Good info, thank you! On Mon, Nov 9, 2015 at 7:53 AM, Maxim Dounin wrote: > Hello! > > On Sat, Nov 07, 2015 at 08:28:29PM -0500, CJ Ess wrote: > > > Just curious - if I am using the deferred listen option on Linux my > > understanding is that nginx will not be woken up until data arrives for > the > > connection. If someone is trying to DDOS me by opening as many > connections > > as possible (has happened before) how does that situation play out with > > deferred accepts? 
> > > > Currently I am not using the deferred option and I have timeouts set so > > that if complete request headers aren't received in a few seconds then > the > > connection is closed, however with deffered accepts I don't believe nginx > > would be able to do that. > > When using deferred accept, nginx instructs the kernel to defer > connections for just 1 second. After this time, the kernel will > pass connections to nginx for normal processing. > > If there are too many connections waiting in deferred accept (more > than a socket backlog), syncookies will be used by the kernel if > enabled. > > Note that this works slightly differently with old kernels (before > 2.6.32), and in previous nginx versions (before 1.5.10). Some > additional information can be found in these commit logs: > > http://hg.nginx.org/nginx/rev/fdb67cfc957d > http://hg.nginx.org/nginx/rev/05a56ebb084a > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Nov 10 08:54:00 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 10 Nov 2015 09:54:00 +0100 Subject: Let's Encrypt TLS project: seeking nginx configuration module help Message-ID: Hello, You might have heard about the Let's Encrypt project, delivering Domain-Validated TLS certificates for free. General public availability is planned to next week, Novembre 16th. People who have registered to the beta program might have unlock domains they own and got their certificates already, as I did. ?They ?are using an automated system to deliver certificates which is Python-based and open-source. They are currently struggling with their nginx module, allowing a certificate to be automatically installed on nginx. Experts are called for to help on a list of issues on GitHub . ?If a kind soul wished to give a hand on the matter, the project would be more than welcoming. :o)? ?Have a nice day (and soon nice free DV TLS certificates!), --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From tovmeod at gmail.com Tue Nov 10 09:08:44 2015 From: tovmeod at gmail.com (Avraham Serour) Date: Tue, 10 Nov 2015 11:08:44 +0200 Subject: syslog not properly tagged Message-ID: Hi, I have an ubuntu machine and installed nginx stable using the ppa (1.9.3) In my conf I'm sending the logs to syslog: access_log syslog:server=unix:/dev/log,tag=lenginx_access le_json; error_log syslog:server=unix:/dev/log,tag=nginx,severity=error; then I'm using rsyslog to ship my logs to my logstash server. My problem is that it seems nginx does't properly tag the messages, I should be able to filter nginx messages in my rsyslog conf using: if $programname == 'nginx' then { but it seems $programname is my hostname, the tag is added to the message body This creates two problems: now I need to workaround to filter nginx messages and my message body format is messed up, my beautifully json format is now not a valid json and I need to further manipulate it. I was able to work around this for the access logs, my filter is now: if $msg contains 'lenginx_access' then { and I am using the substring to remove the prefix But I wasn't able to accomplish this for the error logs, it seems I can't use a custom format for the error logs So any way of custom formatting my error logs to output json? 
How can I tell nginx to properly tag the messages? btw, upon registering to this mailing list I got a confirmation email with my password, really?? Avraham -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Tue Nov 10 09:23:33 2015 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 10 Nov 2015 12:23:33 +0300 Subject: syslog not properly tagged In-Reply-To: References: Message-ID: <20151110092332.GA22635@vlpc.nginx.com> On Tue, Nov 10, 2015 at 11:08:44AM +0200, Avraham Serour wrote: > Hi, > > I have an ubuntu machine and installed nginx stable using the ppa (1.9.3) > > In my conf I'm sending the logs to syslog: > > access_log syslog:server=unix:/dev/log,tag=lenginx_access le_json; > error_log syslog:server=unix:/dev/log,tag=nginx,severity=error; > > then I'm using rsyslog to ship my logs to my logstash server. > > My problem is that it seems nginx does't properly tag the messages, I > should be able to filter nginx messages in my rsyslog conf using: > > if $programname == 'nginx' then { > > but it seems $programname is my hostname, the tag is added to the message > body This happens because nginx uses remote syslog message format, which includes hostname. To use it with local syslog daemon you have two options: a) tell your syslog daemon that there is a hostname in a message coming from nginx b) tell nginx to not send hostname, using the 'nohostname' option, added recently in 1.9.7 (http://nginx.org/en/docs/syslog.html) > > This creates two problems: now I need to workaround to filter nginx > messages and my message body format is messed up, my beautifully json > format is now not a valid json and I need to further manipulate it. > > I was able to work around this for the access logs, my filter is now: > if $msg contains 'lenginx_access' then { > and I am using the substring to remove the prefix > > But I wasn't able to accomplish this for the error logs, it seems I can't > use a custom format for the error logs > > So any way of custom formatting my error logs to output json? > How can I tell nginx to properly tag the messages? > > btw, upon registering to this mailing list I got a confirmation email with > my password, really?? > > Avraham From tovmeod at gmail.com Tue Nov 10 09:43:33 2015 From: tovmeod at gmail.com (Avraham Serour) Date: Tue, 10 Nov 2015 11:43:33 +0200 Subject: syslog not properly tagged In-Reply-To: <20151110092332.GA22635@vlpc.nginx.com> References: <20151110092332.GA22635@vlpc.nginx.com> Message-ID: Well nohostname seems to be what I need, but 1.9.7 is even newer than mainline (currently 1.9.6), my manager won't let me deploy anything but stable on production So unless 1.9.7 gets tagged as stable soon it seems I will need a workaorund Thanks Avraham On Tue, Nov 10, 2015 at 11:23 AM, Vladimir Homutov wrote: > On Tue, Nov 10, 2015 at 11:08:44AM +0200, Avraham Serour wrote: > > Hi, > > > > I have an ubuntu machine and installed nginx stable using the ppa (1.9.3) > > > > In my conf I'm sending the logs to syslog: > > > > access_log syslog:server=unix:/dev/log,tag=lenginx_access le_json; > > error_log syslog:server=unix:/dev/log,tag=nginx,severity=error; > > > > then I'm using rsyslog to ship my logs to my logstash server. 
> > > > My problem is that it seems nginx does't properly tag the messages, I > > should be able to filter nginx messages in my rsyslog conf using: > > > > if $programname == 'nginx' then { > > > > but it seems $programname is my hostname, the tag is added to the message > > body > > This happens because nginx uses remote syslog message format, which > includes hostname. To use it with local syslog daemon you have two > options: > > a) tell your syslog daemon that there is a hostname in a message coming > from nginx > > b) tell nginx to not send hostname, using the 'nohostname' option, added > recently in 1.9.7 (http://nginx.org/en/docs/syslog.html) > > > > > This creates two problems: now I need to workaround to filter nginx > > messages and my message body format is messed up, my beautifully json > > format is now not a valid json and I need to further manipulate it. > > > > I was able to work around this for the access logs, my filter is now: > > if $msg contains 'lenginx_access' then { > > and I am using the substring to remove the prefix > > > > But I wasn't able to accomplish this for the error logs, it seems I can't > > use a custom format for the error logs > > > > So any way of custom formatting my error logs to output json? > > How can I tell nginx to properly tag the messages? > > > > btw, upon registering to this mailing list I got a confirmation email > with > > my password, really?? > > > > Avraham > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Nov 10 17:47:54 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 10 Nov 2015 18:47:54 +0100 Subject: syslog not properly tagged In-Reply-To: References: <20151110092332.GA22635@vlpc.nginx.com> Message-ID: Vladimir already provided a plan B in his a) point. :oP ?? --- *B. R.* On Tue, Nov 10, 2015 at 10:43 AM, Avraham Serour wrote: > Well nohostname seems to be what I need, but 1.9.7 is even newer than > mainline (currently 1.9.6), my manager won't let me deploy anything but > stable on production > So unless 1.9.7 gets tagged as stable soon it seems I will need a > workaorund > > Thanks > Avraham > > On Tue, Nov 10, 2015 at 11:23 AM, Vladimir Homutov wrote: > >> On Tue, Nov 10, 2015 at 11:08:44AM +0200, Avraham Serour wrote: >> > Hi, >> > >> > I have an ubuntu machine and installed nginx stable using the ppa >> (1.9.3) >> > >> > In my conf I'm sending the logs to syslog: >> > >> > access_log syslog:server=unix:/dev/log,tag=lenginx_access le_json; >> > error_log syslog:server=unix:/dev/log,tag=nginx,severity=error; >> > >> > then I'm using rsyslog to ship my logs to my logstash server. >> > >> > My problem is that it seems nginx does't properly tag the messages, I >> > should be able to filter nginx messages in my rsyslog conf using: >> > >> > if $programname == 'nginx' then { >> > >> > but it seems $programname is my hostname, the tag is added to the >> message >> > body >> >> This happens because nginx uses remote syslog message format, which >> includes hostname. 
To use it with local syslog daemon you have two >> options: >> >> a) tell your syslog daemon that there is a hostname in a message coming >> from nginx >> >> b) tell nginx to not send hostname, using the 'nohostname' option, added >> recently in 1.9.7 (http://nginx.org/en/docs/syslog.html) >> >> > >> > This creates two problems: now I need to workaround to filter nginx >> > messages and my message body format is messed up, my beautifully json >> > format is now not a valid json and I need to further manipulate it. >> > >> > I was able to work around this for the access logs, my filter is now: >> > if $msg contains 'lenginx_access' then { >> > and I am using the substring to remove the prefix >> > >> > But I wasn't able to accomplish this for the error logs, it seems I >> can't >> > use a custom format for the error logs >> > >> > So any way of custom formatting my error logs to output json? >> > How can I tell nginx to properly tag the messages? >> > >> > btw, upon registering to this mailing list I got a confirmation email >> with >> > my password, really?? >> > >> > Avraham >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Nov 11 09:36:56 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 11 Nov 2015 14:36:56 +0500 Subject: Redirect request based on source $scheme !! Message-ID: Hi, Is there a way we can serve $scheme (HTTP/HTTPS) based on source request ? Such as : if https://ad.domain.com --> sends request to http://ourdomain.com (as it'll fail due to cross $scheme conflict) So http://ourdomain.com will check that the request invoked using https $scheme and it'll redirect http://ourdomain.com to https://ourdomain.com for that particular ad.domain.com. --------------------------------------------------------- Is that possible guys ? Thanks in Advance ! Regards. Shahzaib Need to send me private email? I use Virtru . -------------- next part -------------- An HTML attachment was scrubbed... URL: From tovmeod at gmail.com Wed Nov 11 10:10:39 2015 From: tovmeod at gmail.com (Avraham Serour) Date: Wed, 11 Nov 2015 12:10:39 +0200 Subject: Redirect request based on source $scheme !! In-Reply-To: References: Message-ID: you can create separate server blocks for each domain On Wed, Nov 11, 2015 at 11:36 AM, shahzaib shahzaib wrote: > Hi, > > Is there a way we can serve $scheme (HTTP/HTTPS) based on source > request ? Such as : > > if https://ad.domain.com --> sends request to http://ourdomain.com (as > it'll fail due to cross $scheme conflict) > > So http://ourdomain.com will check that the request invoked using https > $scheme and it'll redirect http://ourdomain.com to https://ourdomain.com > for that particular ad.domain.com. > > --------------------------------------------------------- > > Is that possible guys ? > > Thanks in Advance ! > > Regards. > Shahzaib > > > Need to send me private email? I use Virtru > . > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tovmeod at gmail.com Wed Nov 11 10:15:25 2015 From: tovmeod at gmail.com (Avraham Serour) Date: Wed, 11 Nov 2015 12:15:25 +0200 Subject: syslog not properly tagged In-Reply-To: References: <20151110092332.GA22635@vlpc.nginx.com> Message-ID: well the problem is not only with formatting, formatting is just and inconvenience that I managed to work around already, my main problem is to catch nginx logs only. my rsyslog config will parse every syslog message, everyone that writes to syslog will send messages, I only need the ones coming from nginx, actually I even need to tell apart the error from access since they have diferent formatting On Tue, Nov 10, 2015 at 7:47 PM, B.R. wrote: > Vladimir already provided a plan B in his a) point. :oP > ?? > --- > *B. R.* > > On Tue, Nov 10, 2015 at 10:43 AM, Avraham Serour > wrote: > >> Well nohostname seems to be what I need, but 1.9.7 is even newer than >> mainline (currently 1.9.6), my manager won't let me deploy anything but >> stable on production >> So unless 1.9.7 gets tagged as stable soon it seems I will need a >> workaorund >> >> Thanks >> Avraham >> >> On Tue, Nov 10, 2015 at 11:23 AM, Vladimir Homutov wrote: >> >>> On Tue, Nov 10, 2015 at 11:08:44AM +0200, Avraham Serour wrote: >>> > Hi, >>> > >>> > I have an ubuntu machine and installed nginx stable using the ppa >>> (1.9.3) >>> > >>> > In my conf I'm sending the logs to syslog: >>> > >>> > access_log syslog:server=unix:/dev/log,tag=lenginx_access le_json; >>> > error_log syslog:server=unix:/dev/log,tag=nginx,severity=error; >>> > >>> > then I'm using rsyslog to ship my logs to my logstash server. >>> > >>> > My problem is that it seems nginx does't properly tag the messages, I >>> > should be able to filter nginx messages in my rsyslog conf using: >>> > >>> > if $programname == 'nginx' then { >>> > >>> > but it seems $programname is my hostname, the tag is added to the >>> message >>> > body >>> >>> This happens because nginx uses remote syslog message format, which >>> includes hostname. To use it with local syslog daemon you have two >>> options: >>> >>> a) tell your syslog daemon that there is a hostname in a message coming >>> from nginx >>> >>> b) tell nginx to not send hostname, using the 'nohostname' option, added >>> recently in 1.9.7 (http://nginx.org/en/docs/syslog.html) >>> >>> > >>> > This creates two problems: now I need to workaround to filter nginx >>> > messages and my message body format is messed up, my beautifully json >>> > format is now not a valid json and I need to further manipulate it. >>> > >>> > I was able to work around this for the access logs, my filter is now: >>> > if $msg contains 'lenginx_access' then { >>> > and I am using the substring to remove the prefix >>> > >>> > But I wasn't able to accomplish this for the error logs, it seems I >>> can't >>> > use a custom format for the error logs >>> > >>> > So any way of custom formatting my error logs to output json? >>> > How can I tell nginx to properly tag the messages? >>> > >>> > btw, upon registering to this mailing list I got a confirmation email >>> with >>> > my password, really?? 
>>> > >>> > Avraham >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Nov 11 10:25:11 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 11 Nov 2015 15:25:11 +0500 Subject: Redirect request based on source $scheme !! In-Reply-To: References: Message-ID: >>you can create separate server blocks for each domain I think issue will still persist. Say https://ad.domain.com makes static call to http://ourdomain.com , it'll end up with conflicted scheme i.e https -> http. We can't force http to https as well because it'll break static calls from http -> http. Actually we've video sharing website from where people embed http/https links to there websites. Now the problem is, some of the HTTPS websites have embedded HTTP URL links from our website instead of HTTPS due to which the code is unable to execute on their HTTPS website because it is making call from https -> http which is wrong. The number of these malformed links are huge and there's no way that those users can manually correct the embedded links by editing http to https and vice versa). So we're thinking to have some condition in place that if the request for HTTP embedded link comes from any HTTPS domain , nginx will detect that source $scheme and redirect that request to HTTPS. On Wed, Nov 11, 2015 at 3:10 PM, Avraham Serour wrote: > you can create separate server blocks for each domain > > On Wed, Nov 11, 2015 at 11:36 AM, shahzaib shahzaib > wrote: > >> Hi, >> >> Is there a way we can serve $scheme (HTTP/HTTPS) based on source >> request ? Such as : >> >> if https://ad.domain.com --> sends request to http://ourdomain.com (as >> it'll fail due to cross $scheme conflict) >> >> So http://ourdomain.com will check that the request invoked using https >> $scheme and it'll redirect http://ourdomain.com to https://ourdomain.com >> for that particular ad.domain.com. >> >> --------------------------------------------------------- >> >> Is that possible guys ? >> >> Thanks in Advance ! >> >> Regards. >> Shahzaib >> >> >> Need to send me private email? I use Virtru >> . >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Nov 11 11:01:05 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 11 Nov 2015 16:01:05 +0500 Subject: Redirect request based on source $scheme !! In-Reply-To: References: Message-ID: One point is worth mentioning, we don't own ad.domain.com its a 3rd party website. All we can control is ourdomain.com. On Wed, Nov 11, 2015 at 3:25 PM, shahzaib shahzaib wrote: > >>you can create separate server blocks for each domain > I think issue will still persist. 
Say https://ad.domain.com makes static > call to http://ourdomain.com , it'll end up with conflicted scheme i.e > https -> http. We can't force http to https as well because it'll break > static calls from http -> http. > > Actually we've video sharing website from where people embed http/https > links to there websites. Now the problem is, some of the HTTPS websites > have embedded HTTP URL links from our website instead of HTTPS due to which > the code is unable to execute on their HTTPS website because it is making > call from https -> http which is wrong. The number of these malformed links > are huge and there's no way that those users can manually correct > the embedded links by editing http to https and vice versa). > > So we're thinking to have some condition in place that if the request for > HTTP embedded link comes from any HTTPS domain , nginx will detect that > source $scheme and redirect that request to HTTPS. > > On Wed, Nov 11, 2015 at 3:10 PM, Avraham Serour wrote: > >> you can create separate server blocks for each domain >> >> On Wed, Nov 11, 2015 at 11:36 AM, shahzaib shahzaib < >> shahzaib.cb at gmail.com> wrote: >> >>> Hi, >>> >>> Is there a way we can serve $scheme (HTTP/HTTPS) based on source >>> request ? Such as : >>> >>> if https://ad.domain.com --> sends request to http://ourdomain.com (as >>> it'll fail due to cross $scheme conflict) >>> >>> So http://ourdomain.com will check that the request invoked using https >>> $scheme and it'll redirect http://ourdomain.com to https://ourdomain.com >>> for that particular ad.domain.com. >>> >>> --------------------------------------------------------- >>> >>> Is that possible guys ? >>> >>> Thanks in Advance ! >>> >>> Regards. >>> Shahzaib >>> >>> >>> Need to send me private email? I use Virtru >>> . >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tovmeod at gmail.com Wed Nov 11 11:02:46 2015 From: tovmeod at gmail.com (Avraham Serour) Date: Wed, 11 Nov 2015 13:02:46 +0200 Subject: Redirect request based on source $scheme !! In-Reply-To: References: Message-ID: if you don't own the domain then you won't ever receive the request and you can't do nothing about it On Wed, Nov 11, 2015 at 1:01 PM, shahzaib shahzaib wrote: > One point is worth mentioning, we don't own ad.domain.com its a 3rd party > website. All we can control is ourdomain.com. > > On Wed, Nov 11, 2015 at 3:25 PM, shahzaib shahzaib > wrote: > >> >>you can create separate server blocks for each domain >> I think issue will still persist. Say https://ad.domain.com makes static >> call to http://ourdomain.com , it'll end up with conflicted scheme i.e >> https -> http. We can't force http to https as well because it'll break >> static calls from http -> http. >> >> Actually we've video sharing website from where people embed http/https >> links to there websites. Now the problem is, some of the HTTPS websites >> have embedded HTTP URL links from our website instead of HTTPS due to which >> the code is unable to execute on their HTTPS website because it is making >> call from https -> http which is wrong. 
The number of these malformed links >> are huge and there's no way that those users can manually correct >> the embedded links by editing http to https and vice versa). >> >> So we're thinking to have some condition in place that if the request for >> HTTP embedded link comes from any HTTPS domain , nginx will detect that >> source $scheme and redirect that request to HTTPS. >> >> On Wed, Nov 11, 2015 at 3:10 PM, Avraham Serour >> wrote: >> >>> you can create separate server blocks for each domain >>> >>> On Wed, Nov 11, 2015 at 11:36 AM, shahzaib shahzaib < >>> shahzaib.cb at gmail.com> wrote: >>> >>>> Hi, >>>> >>>> Is there a way we can serve $scheme (HTTP/HTTPS) based on source >>>> request ? Such as : >>>> >>>> if https://ad.domain.com --> sends request to http://ourdomain.com (as >>>> it'll fail due to cross $scheme conflict) >>>> >>>> So http://ourdomain.com will check that the request invoked using >>>> https $scheme and it'll redirect http://ourdomain.com to >>>> https://ourdomain.com for that particular ad.domain.com. >>>> >>>> --------------------------------------------------------- >>>> >>>> Is that possible guys ? >>>> >>>> Thanks in Advance ! >>>> >>>> Regards. >>>> Shahzaib >>>> >>>> >>>> Need to send me private email? I use Virtru >>>> . >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Nov 11 11:03:32 2015 From: nginx-forum at nginx.us (locojohn) Date: Wed, 11 Nov 2015 06:03:32 -0500 Subject: Certificate Transparency In-Reply-To: References: Message-ID: <0d0efa009bca356bb7b3e9f2c482eaad.NginxMailingListEnglish@forum.nginx.org> Jo? ?d?m Wrote: ------------------------------------------------------- > The TLS extension is the only method to implement Certificate > Transparency without the assistance of the CA, and starting with > January 1 2015 Chrome refuses to display the green bar for EV > certificates without Certificate Transparency. > > StartSSL is one CA that currently does not support other methods, > which means a lot of sites suffers from this. Interesting, we have installed multi-domain EV certificates from StartSSL for our company and we use Nginx, and EV green bar works in all modern and even not so modern browsers: https://www.ahlers.com I presume Certificate Transparency is not required then? Best regards, Andrejs Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262652,262726#msg-262726 From rob.stradling at comodo.com Wed Nov 11 11:11:40 2015 From: rob.stradling at comodo.com (Rob Stradling) Date: Wed, 11 Nov 2015 11:11:40 +0000 Subject: Certificate Transparency In-Reply-To: <0d0efa009bca356bb7b3e9f2c482eaad.NginxMailingListEnglish@forum.nginx.org> References: <0d0efa009bca356bb7b3e9f2c482eaad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5643226C.2020408@comodo.com> On 11/11/15 11:03, locojohn wrote: > Jo? 
?d?m Wrote: > ------------------------------------------------------- > >> The TLS extension is the only method to implement Certificate >> Transparency without the assistance of the CA, and starting with >> January 1 2015 Chrome refuses to display the green bar for EV >> certificates without Certificate Transparency. >> >> StartSSL is one CA that currently does not support other methods, >> which means a lot of sites suffers from this. > > Interesting, we have installed multi-domain EV certificates from StartSSL > for our company and we use Nginx, and EV green bar works in all modern and > even not so modern browsers: > > https://www.ahlers.com In Chrome 46, I see "https:" in green but I don't see the "EV green bar" that shows the Subject Organization Name. That's because... > I presume Certificate Transparency is not required then? ...CT _is_ required if you want to see the EV green bar in recent versions of Chrome. > Best regards, > Andrejs -- Rob Stradling Senior Research & Development Scientist COMODO - Creating Trust Online From shahzaib.cb at gmail.com Wed Nov 11 11:20:39 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 11 Nov 2015 16:20:39 +0500 Subject: Redirect request based on source $scheme !! In-Reply-To: References: Message-ID: >>if you don't own the domain then you won't ever receive the request and you can't do nothing about it We don't own ad.domain.com but that domain sends http/https request to our domain 'ourdomain.com' .We just need to find out the $scheme they use to send requests such as : Is request coming from http://ad.domain.com or is it coming from https://ad.domain.com ? On Wed, Nov 11, 2015 at 4:02 PM, Avraham Serour wrote: > if you don't own the domain then you won't ever receive the request and > you can't do nothing about it > > On Wed, Nov 11, 2015 at 1:01 PM, shahzaib shahzaib > wrote: > >> One point is worth mentioning, we don't own ad.domain.com its a 3rd >> party website. All we can control is ourdomain.com. >> >> On Wed, Nov 11, 2015 at 3:25 PM, shahzaib shahzaib > > wrote: >> >>> >>you can create separate server blocks for each domain >>> I think issue will still persist. Say https://ad.domain.com makes >>> static call to http://ourdomain.com , it'll end up with conflicted >>> scheme i.e https -> http. We can't force http to https as well because >>> it'll break static calls from http -> http. >>> >>> Actually we've video sharing website from where people embed http/https >>> links to there websites. Now the problem is, some of the HTTPS websites >>> have embedded HTTP URL links from our website instead of HTTPS due to which >>> the code is unable to execute on their HTTPS website because it is making >>> call from https -> http which is wrong. The number of these malformed links >>> are huge and there's no way that those users can manually correct >>> the embedded links by editing http to https and vice versa). >>> >>> So we're thinking to have some condition in place that if the request >>> for HTTP embedded link comes from any HTTPS domain , nginx will detect that >>> source $scheme and redirect that request to HTTPS. >>> >>> On Wed, Nov 11, 2015 at 3:10 PM, Avraham Serour >>> wrote: >>> >>>> you can create separate server blocks for each domain >>>> >>>> On Wed, Nov 11, 2015 at 11:36 AM, shahzaib shahzaib < >>>> shahzaib.cb at gmail.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> Is there a way we can serve $scheme (HTTP/HTTPS) based on source >>>>> request ? 
Such as : >>>>> >>>>> if https://ad.domain.com --> sends request to http://ourdomain.com >>>>> (as it'll fail due to cross $scheme conflict) >>>>> >>>>> So http://ourdomain.com will check that the request invoked using >>>>> https $scheme and it'll redirect http://ourdomain.com to >>>>> https://ourdomain.com for that particular ad.domain.com. >>>>> >>>>> --------------------------------------------------------- >>>>> >>>>> Is that possible guys ? >>>>> >>>>> Thanks in Advance ! >>>>> >>>>> Regards. >>>>> Shahzaib >>>>> >>>>> >>>>> Need to send me private email? I use Virtru >>>>> . >>>>> >>>>> >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Nov 11 11:23:02 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 11 Nov 2015 06:23:02 -0500 Subject: Redirect request based on source $scheme !! In-Reply-To: References: Message-ID: <9796c75e5b1d560838af1625654fdd21.NginxMailingListEnglish@forum.nginx.org> shahzaib1232 Wrote: ------------------------------------------------------- > So we're thinking to have some condition in place that if the request > for > HTTP embedded link comes from any HTTPS domain , nginx will detect > that > source $scheme and redirect that request to HTTPS. The problem is you only have $request_uri (which can be anything) which should contain whats being asked for, but there is no way to detect (map) where it's coming from. You could do something with Lua, do a co-socket call to its origin on https, if that works save it in a cache, the next time you test for the cache value. Or get the calling party to add a header and map that back to https. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262719,262730#msg-262730 From nginx-forum at nginx.us Wed Nov 11 12:07:23 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 11 Nov 2015 07:07:23 -0500 Subject: script to prime nginx's OCSP cache Message-ID: <100943570bb32101ecd485dd30a1c1a9.NginxMailingListEnglish@forum.nginx.org> #!/bin/ksh -e # # The purpose of this script is to prime the OCSP cache of nginx. # # Ideally, nginx would prime its worker processes ahead of any client request. # There are two events that ought to trigger this behaviour: # the server start-up, and each time a cache expires. # # In reality, nginx stands still until a client hits a worker process, # then the specific worker process primes its own cache only. # # Therefore, this script can only prime those worker processes that respond: # if the script hapens to hit the same worker processes, # the remaining ones will still need to be primed. To solve this problem, # a stripped version of the script may run as a midnight cron job. 
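#
# NOTE: the helper "./read_ocsp.sh" called below is not included in this post.
# As an assumption, it is treated here as a small wrapper that asks the server
# for its stapled OCSP response, for example something along the lines of:
#
#   echo QUIT | openssl s_client -connect "$1:443" -servername "$1" -status 2>&1
#
# whose output still contains the line "OCSP response: no response sent" until
# the worker process that answered has an OCSP response cached and stapled.
#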
# fqdn="$1"; if [[ "$fqdn" == "" ]]; then echo "usage: $0 FQDN"; exit 0; fi clearLastLine() { tput cuu 1 && tput el; } echo "Priming nginx's OCSP cache:"; echo ""; _iterations="20"; for (( COUNTER=1; COUNTER<=$_iterations; COUNTER++ )); do clearLastLine; echo -n "iteration $COUNTER of $_iterations: "; fail=true; while $fail; do response="$( ./read_ocsp.sh $fqdn 2>&1 | tail -1 )"; if [[ "$response" =~ "OCSP response: no response sent" || "$response" == "" ]]; then echo -n "."; sleep 6; # wait for the OCSP update else echo "OK"; sleep 3; fail=false; fi done done Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262731,262731#msg-262731 From luky-37 at hotmail.com Wed Nov 11 12:49:32 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 11 Nov 2015 13:49:32 +0100 Subject: Redirect request based on source $scheme !! In-Reply-To: <9796c75e5b1d560838af1625654fdd21.NginxMailingListEnglish@forum.nginx.org> References: , <9796c75e5b1d560838af1625654fdd21.NginxMailingListEnglish@forum.nginx.org> Message-ID: > shahzaib1232 Wrote: > ------------------------------------------------------- >> So we're thinking to have some condition in place that if the request >> for >> HTTP embedded link comes from any HTTPS domain , nginx will detect >> that >> source $scheme and redirect that request to HTTPS. What you need is to look at is the scheme indicated in the Referer header. However be advised that there are certain combinations where the Referer is blank (a HTTP frame embedded in a HTTPS site). Lukas From mikydevel at yahoo.fr Wed Nov 11 13:14:31 2015 From: mikydevel at yahoo.fr (Mik J) Date: Wed, 11 Nov 2015 13:14:31 +0000 (UTC) Subject: Best practice for URL rewriting with php parameter References: <801517974.5093373.1447247671420.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <801517974.5093373.1447247671420.JavaMail.yahoo@mail.yahoo.com> Hello, I have checked many ways to implement what I want (including if is evil) and I've been able to reach what I wanted to do (something simple) I want that a user who accessesnginx.org/informationwill be redirected in the background tonginx/index.php?x=informationSo that my index.php page is dymanic I did like this in my virtual host configurationlocation /information { try_files information /index.php?x=information; } I would like to know if:a) This is the best practice to do what I would like to do ?b) If "location /information" is an exact match only. Apparently no because nginx.org/informationxxxsomethingxxxweird also matches Any other advices are appreciated. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Nov 11 13:32:58 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Nov 2015 13:32:58 +0000 Subject: Redirect request based on source $scheme !! In-Reply-To: References: Message-ID: <20151111133258.GA3351@daoine.org> On Wed, Nov 11, 2015 at 03:25:11PM +0500, shahzaib shahzaib wrote: Hi there, > Actually we've video sharing website from where people embed http/https > links to there websites. Now the problem is, some of the HTTPS websites > have embedded HTTP URL links from our website instead of HTTPS due to which > the code is unable to execute on their HTTPS website because it is making > call from https -> http which is wrong. Before you put too much time into building the solution, can you do a quick test to confirm that it can work? As in: * on a https site, include a link to http on your server to one particular url that you control. 
* in your config, redirect that one url to something https on your site * for that https request, return the response that you want When you do that -- does it work? As in: do you know that the client (browser) that you care about, will access your http url and accept the https redirection and then make use of the code that you return over that https link? Because if that does not work, then it does not matter what else you do. > So we're thinking to have some condition in place that if the request for > HTTP embedded link comes from any HTTPS domain , nginx will detect that > source $scheme and redirect that request to HTTPS. You cannot reliably detect where the link came from. If you are willing to accept unreliably detecting where the link came from, knowing that some innocent cases and some malicious cases will be handled wrongly, then you can just examine $http_referer. If it starts with "https://", then probably the link was on a https site. If it starts with "http://", then probably the link was on a http site. If it is blank, then probably the link was on a https site and it is accessing your http site. Each "probably" is because the Referer header is set to whatever the browser wants. Some browsers lie. Some browsers omit it always. Some browsers set it to a non-default value because that's what the user configured it to do. Other possibilities exist. f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 11 13:41:01 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Nov 2015 13:41:01 +0000 Subject: syslog not properly tagged In-Reply-To: References: <20151110092332.GA22635@vlpc.nginx.com> Message-ID: <20151111134101.GB3351@daoine.org> On Wed, Nov 11, 2015 at 12:15:25PM +0200, Avraham Serour wrote: > well the problem is not only with formatting, formatting is just and > inconvenience that I managed to work around already, my main problem is to > catch nginx logs only. If nginx is the only thing that writes to this syslog service using the remote syslog format, then nginx is the only thing that will have your hostname in that part of the line, no? That should be straightforward to extract. > my rsyslog config will parse every syslog message, everyone that writes to > syslog will send messages, I only need the ones coming from nginx, actually > I even need to tell apart the error from access since they have diferent > formatting Can you tell rsyslog that if $programname == your hostname, this line is in remote format and should be re-parsed on that basis? Then you might find nginx and the tags where you expect them to be. > >>> > access_log syslog:server=unix:/dev/log,tag=lenginx_access le_json; > >>> > error_log syslog:server=unix:/dev/log,tag=nginx,severity=error; > >>> > > >>> > then I'm using rsyslog to ship my logs to my logstash server. Given that the nginx you use only uses remote-syslog format, is it worth you avoiding rsyslog and letting nginx write to the logstash server directly? (There may be good reasons why not to do that.) Good luck with it, f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Wed Nov 11 13:40:57 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 11 Nov 2015 14:40:57 +0100 Subject: syslog not properly tagged In-Reply-To: References: <20151110092332.GA22635@vlpc.nginx.com> Message-ID: syslog has facilities to allow you sending messages from different sources to different destinations. That being rsyslog-related, I suggest you read some 101 books on this topic. 
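For the nginx side of it, once a version with the 'nohostname' parameter is in use, the only change needed is on the log directives themselves -- a minimal sketch that simply adds the flag to the directives quoted further down (tag and format names unchanged):

    access_log syslog:server=unix:/dev/log,nohostname,tag=lenginx_access le_json;
    error_log  syslog:server=unix:/dev/log,nohostname,tag=nginx,severity=error;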
It seems all the help about nginx you could grab from this ML has already been provided. --- *B. R.* On Wed, Nov 11, 2015 at 11:15 AM, Avraham Serour wrote: > well the problem is not only with formatting, formatting is just and > inconvenience that I managed to work around already, my main problem is to > catch nginx logs only. > my rsyslog config will parse every syslog message, everyone that writes to > syslog will send messages, I only need the ones coming from nginx, actually > I even need to tell apart the error from access since they have diferent > formatting > > On Tue, Nov 10, 2015 at 7:47 PM, B.R. wrote: > >> Vladimir already provided a plan B in his a) point. :oP >> ?? >> --- >> *B. R.* >> >> On Tue, Nov 10, 2015 at 10:43 AM, Avraham Serour >> wrote: >> >>> Well nohostname seems to be what I need, but 1.9.7 is even newer than >>> mainline (currently 1.9.6), my manager won't let me deploy anything but >>> stable on production >>> So unless 1.9.7 gets tagged as stable soon it seems I will need a >>> workaorund >>> >>> Thanks >>> Avraham >>> >>> On Tue, Nov 10, 2015 at 11:23 AM, Vladimir Homutov wrote: >>> >>>> On Tue, Nov 10, 2015 at 11:08:44AM +0200, Avraham Serour wrote: >>>> > Hi, >>>> > >>>> > I have an ubuntu machine and installed nginx stable using the ppa >>>> (1.9.3) >>>> > >>>> > In my conf I'm sending the logs to syslog: >>>> > >>>> > access_log syslog:server=unix:/dev/log,tag=lenginx_access le_json; >>>> > error_log syslog:server=unix:/dev/log,tag=nginx,severity=error; >>>> > >>>> > then I'm using rsyslog to ship my logs to my logstash server. >>>> > >>>> > My problem is that it seems nginx does't properly tag the messages, I >>>> > should be able to filter nginx messages in my rsyslog conf using: >>>> > >>>> > if $programname == 'nginx' then { >>>> > >>>> > but it seems $programname is my hostname, the tag is added to the >>>> message >>>> > body >>>> >>>> This happens because nginx uses remote syslog message format, which >>>> includes hostname. To use it with local syslog daemon you have two >>>> options: >>>> >>>> a) tell your syslog daemon that there is a hostname in a message coming >>>> from nginx >>>> >>>> b) tell nginx to not send hostname, using the 'nohostname' option, added >>>> recently in 1.9.7 (http://nginx.org/en/docs/syslog.html) >>>> >>>> > >>>> > This creates two problems: now I need to workaround to filter nginx >>>> > messages and my message body format is messed up, my beautifully json >>>> > format is now not a valid json and I need to further manipulate it. >>>> > >>>> > I was able to work around this for the access logs, my filter is now: >>>> > if $msg contains 'lenginx_access' then { >>>> > and I am using the substring to remove the prefix >>>> > >>>> > But I wasn't able to accomplish this for the error logs, it seems I >>>> can't >>>> > use a custom format for the error logs >>>> > >>>> > So any way of custom formatting my error logs to output json? >>>> > How can I tell nginx to properly tag the messages? >>>> > >>>> > btw, upon registering to this mailing list I got a confirmation email >>>> with >>>> > my password, really?? 
>>>> > >>>> > Avraham >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Nov 11 13:55:00 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Nov 2015 13:55:00 +0000 Subject: Best practice for URL rewriting with php parameter In-Reply-To: <801517974.5093373.1447247671420.JavaMail.yahoo@mail.yahoo.com> References: <801517974.5093373.1447247671420.JavaMail.yahoo.ref@mail.yahoo.com> <801517974.5093373.1447247671420.JavaMail.yahoo@mail.yahoo.com> Message-ID: <20151111135500.GC3351@daoine.org> On Wed, Nov 11, 2015 at 01:14:31PM +0000, Mik J wrote: Hi there, > I have checked many ways to implement what I want (including if is evil) and I've been able to reach what I wanted to do (something simple) > I want that a user who accessesnginx.org/informationwill be redirected in the background tonginx/index.php?x=informationSo that my index.php page is dymanic What does "redirected in the background" mean? I suspect it refers to an nginx internal rewrite, rather than to a http redirect. > I did like this in my virtual host configurationlocation /information { try_files information /index.php?x=information; } > I would like to know if:a) This is the best practice to do what I would like to do ? I'd say "no". I'm not fully sure what it is that you want to do, but I suspect that "rewrite" (http://nginx.org/r/rewrite) may be what you want; unless you will describe how /index.php is intended to be handled -- in which case just using (e.g.) fastcgi_pass with some suitable fastcgi_param directives might be even better. > b) If "location /information" is an exact match only. Apparently no because nginx.org/informationxxxsomethingxxxweird also matches It is not an exact match. http://nginx.org/r/location f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Wed Nov 11 14:02:03 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 11 Nov 2015 15:02:03 +0100 Subject: Certificate Transparency In-Reply-To: <5643226C.2020408@comodo.com> References: <0d0efa009bca356bb7b3e9f2c482eaad.NginxMailingListEnglish@forum.nginx.org> <5643226C.2020408@comodo.com> Message-ID: It is sad Chrome kind of forces website owners to have Certificate Transparency available while the whole things is still categorized as 'Experimental' by the IETF to this day: https://tools.ietf.org/html/rfc6962 ... but that is another debate. If you wanna serve CT certificates from a non-CT-compliant CA, you will need to serve it through as TLS extension, ie using a server module. In the end, it sounds logical that CA implement this mechanism on their side, through OCSP. For now, this RFC future is uncertain and the technical oddities this mechanism oddities it implies (double issuance , for example) might make CAs relunctant to rush, and it is perfectly understandable. 
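For reference, when a CA does deliver SCTs inside its OCSP responses, the only change needed on the nginx side is to have OCSP stapling enabled, so the stapled response (SCTs included) reaches the client during the handshake. A minimal sketch, with placeholder certificate paths and resolver address:

    server {
        listen 443 ssl;
        ssl_certificate         /etc/nginx/ssl/example.crt;   # placeholder
        ssl_certificate_key     /etc/nginx/ssl/example.key;   # placeholder

        # fetch the CA's OCSP response and staple it into the handshake
        ssl_stapling            on;
        ssl_stapling_verify     on;
        ssl_trusted_certificate /etc/nginx/ssl/chain.pem;     # placeholder
        resolver                127.0.0.1 valid=300s;         # placeholder
    }

Serving SCTs through the TLS extension itself, by contrast, currently needs a third-party module.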
If you support Chrome's vision and Google's wish to force the way of this RFC, go for a compliant CA or use a custom module. --- *B. R.* On Wed, Nov 11, 2015 at 12:11 PM, Rob Stradling wrote: > On 11/11/15 11:03, locojohn wrote: > >> Jo? ?d?m Wrote: >> ------------------------------------------------------- >> >> The TLS extension is the only method to implement Certificate >>> Transparency without the assistance of the CA, and starting with >>> January 1 2015 Chrome refuses to display the green bar for EV >>> certificates without Certificate Transparency. >>> >>> StartSSL is one CA that currently does not support other methods, >>> which means a lot of sites suffers from this. >>> >> >> Interesting, we have installed multi-domain EV certificates from StartSSL >> for our company and we use Nginx, and EV green bar works in all modern and >> even not so modern browsers: >> >> https://www.ahlers.com >> > > In Chrome 46, I see "https:" in green but I don't see the "EV green bar" > that shows the Subject Organization Name. That's because... > > I presume Certificate Transparency is not required then? >> > > ...CT _is_ required if you want to see the EV green bar in recent versions > of Chrome. > > Best regards, >> Andrejs >> > > -- > Rob Stradling > Senior Research & Development Scientist > COMODO - Creating Trust Online > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Nov 11 14:13:52 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 11 Nov 2015 15:13:52 +0100 Subject: Redirect request based on source $scheme !! In-Reply-To: <20151111133258.GA3351@daoine.org> References: <20151111133258.GA3351@daoine.org> Message-ID: Maybe I am seing oversimplified things here, but it seems to me the question does not require all the effort everyone amazingly puts in here. If you want to catch HTTP requests separate from HTTPS requests, make 2 server blocks, as Avraham suggested. You can then redirect all requests coming to the HTTP block to the same URI with the HTTPS scheme (301, 302, 303, 307... have it your way!). There is not constraint regarding HTTP pages loading HTTPS content (while reverse is true). I am making the assumption the request to a HTTP resource in the HTTPS page will be made, here. If the browser prevents them because they are seen as insecure, there little to nothing you can do about it. Stupid webpage service cannot be fixed on the remote side. Of course, the client browser embedding the content will need to be clever enough to follow redirects on included resources, which is I think the case of any standard use-case. ?Am I missing something there?? --- *B. R.* On Wed, Nov 11, 2015 at 2:32 PM, Francis Daly wrote: > On Wed, Nov 11, 2015 at 03:25:11PM +0500, shahzaib shahzaib wrote: > > Hi there, > > > Actually we've video sharing website from where people embed http/https > > links to there websites. Now the problem is, some of the HTTPS websites > > have embedded HTTP URL links from our website instead of HTTPS due to > which > > the code is unable to execute on their HTTPS website because it is making > > call from https -> http which is wrong. > > Before you put too much time into building the solution, can you do a > quick test to confirm that it can work? > > As in: > > * on a https site, include a link to http on your server to one particular > url that you control. 
> * in your config, redirect that one url to something https on your site > * for that https request, return the response that you want > > When you do that -- does it work? > > As in: do you know that the client (browser) that you care about, will > access your http url and accept the https redirection and then make use > of the code that you return over that https link? > > Because if that does not work, then it does not matter what else you do. > > > So we're thinking to have some condition in place that if the request for > > HTTP embedded link comes from any HTTPS domain , nginx will detect that > > source $scheme and redirect that request to HTTPS. > > You cannot reliably detect where the link came from. > > If you are willing to accept unreliably detecting where the link came > from, knowing that some innocent cases and some malicious cases will be > handled wrongly, then you can just examine $http_referer. > > If it starts with "https://", then probably the link was on a https site. > > If it starts with "http://", then probably the link was on a http site. > > If it is blank, then probably the link was on a https site and it is > accessing your http site. > > Each "probably" is because the Referer header is set to whatever the > browser wants. Some browsers lie. Some browsers omit it always. Some > browsers set it to a non-default value because that's what the user > configured it to do. Other possibilities exist. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.stradling at comodo.com Wed Nov 11 14:26:18 2015 From: rob.stradling at comodo.com (Rob Stradling) Date: Wed, 11 Nov 2015 14:26:18 +0000 Subject: Certificate Transparency In-Reply-To: References: <0d0efa009bca356bb7b3e9f2c482eaad.NginxMailingListEnglish@forum.nginx.org> <5643226C.2020408@comodo.com> Message-ID: <5643500A.5070808@comodo.com> On 11/11/15 14:02, B.R. wrote: > It is sad Chrome kind of forces website owners to have Certificate > Transparency available while the whole things is still categorized as > 'Experimental' by the IETF to this day: > https://tools.ietf.org/html/rfc6962 > > ... but that is another debate. If you wanna serve CT certificates from > a non-CT-compliant CA, you will need to serve it through as TLS > extension, ie using a server module. > > In the end, it sounds logical that CA implement this mechanism on their > side, through OCSP. Indeed it does (and I'm very glad I pushed for this feature to be included in RFC6962 :-) ). If you have a cert from Comodo, we can embed SCTs in OCSP Responses for you today. Just ask. :-) (IIRC, DigiCert can do this too. I don't know about any other CAs). > For now, this RFC future is uncertain and the technical oddities this > mechanism oddities it implies (double issuance > , > for example) might make CAs relunctant to rush, and it is perfectly > understandable. Google have consistently said that they intend to require CT for all (EV and non-EV) TLS server certificates eventually. Given that Google are "going to require that as of June 1st, 2016, all certificates issued by Symantec itself will be required to support Certificate Transparency" [1], it seems that "eventually" might not be that far away. BTW, note that over at the IETF we're working on the next version of CT [2]. 
[1] https://googleonlinesecurity.blogspot.co.uk/2015/10/sustaining-digital-certificate-security.html [2] https://datatracker.ietf.org/doc/draft-ietf-trans-rfc6962-bis/ > If you support Chrome's vision and Google's wish to force the way of > this RFC, go for a compliant CA or use a custom module. > --- > *B. R.* > > On Wed, Nov 11, 2015 at 12:11 PM, Rob Stradling > > wrote: > > On 11/11/15 11:03, locojohn wrote: > > Jo? ?d?m Wrote: > ------------------------------------------------------- > > The TLS extension is the only method to implement Certificate > Transparency without the assistance of the CA, and starting with > January 1 2015 Chrome refuses to display the green bar for EV > certificates without Certificate Transparency. > > StartSSL is one CA that currently does not support other > methods, > which means a lot of sites suffers from this. > > > Interesting, we have installed multi-domain EV certificates from > StartSSL > for our company and we use Nginx, and EV green bar works in all > modern and > even not so modern browsers: > > https://www.ahlers.com > > > In Chrome 46, I see "https:" in green but I don't see the "EV green > bar" that shows the Subject Organization Name. That's because... > > I presume Certificate Transparency is not required then? > > > ...CT _is_ required if you want to see the EV green bar in recent > versions of Chrome. > > Best regards, > Andrejs > > > -- > Rob Stradling > Senior Research & Development Scientist > COMODO - Creating Trust Online > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Rob Stradling Senior Research & Development Scientist COMODO - Creating Trust Online Office Tel: +44.(0)1274.730505 Office Fax: +44.(0)1274.730909 www.comodo.com COMODO CA Limited, Registered in England No. 04058690 Registered Office: 3rd Floor, 26 Office Village, Exchange Quay, Trafford Road, Salford, Manchester M5 3EQ This e-mail and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the sender by replying to the e-mail containing this attachment. Replies to this email may be monitored by COMODO for operational or business reasons. Whilst every endeavour is taken to ensure that e-mails are free from viruses, no liability can be accepted and the recipient is requested to use their own virus checking software. From mikydevel at yahoo.fr Wed Nov 11 14:29:37 2015 From: mikydevel at yahoo.fr (Mik J) Date: Wed, 11 Nov 2015 14:29:37 +0000 (UTC) Subject: Best practice for URL rewriting with php parameter In-Reply-To: <20151111135500.GC3351@daoine.org> References: <20151111135500.GC3351@daoine.org> Message-ID: <1780845743.5120146.1447252177531.JavaMail.yahoo@mail.yahoo.com> Hi there, > I have checked many ways to implement what I want (including if is evil) and I've been able to reach what I wanted to do (something simple) > I want that a user who accessesnginx.org/informationwill be redirected in the background tonginx/index.php?x=informationSo that my index.php page is dymanic What does "redirected in the background" mean?M => I just meant that the user won't see the php parameters. He just sees a simple url with text only.Nginx passes the parameter to php, and not the user (through GET). 
That's what I meant by "in the background" I suspect it refers to an nginx internal rewrite, rather than to a http redirect.M => Maybe I didn't used the correct words. > I did like this in my virtual host configurationlocation /information { try_files information /index.php?x=information; } > I would like to know if:a) This is the best practice to do what I would like to do ? I'd say "no". I'm not fully sure what it is that you want to do, but I suspect that "rewrite" (http://nginx.org/r/rewrite) may be what you want; unless you will describe how /index.php is intended to be handled -- in which case just using (e.g.) fastcgi_pass with some suitable fastcgi_param directives might be even better. M => My index.php looks like this > b) If "location /information" is an exact match only. Apparently no because nginx.org/informationxxxsomethingxxxweird also matches It is not an exact match. http://nginx.org/r/location M => Thank you I added = in order to have an exact match -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Nov 11 15:07:28 2015 From: nginx-forum at nginx.us (173279834462) Date: Wed, 11 Nov 2015 10:07:28 -0500 Subject: Let's Encrypt TLS project: seeking nginx configuration module help In-Reply-To: References: Message-ID: <358250352dcdfdbf9a122f7646c49ee5.NginxMailingListEnglish@forum.nginx.org> > They are currently struggling with their nginx module, > allowing a certificate to be automatically installed on nginx. Would you really use that script? 1. It requires python. --- I do not have python on my server, and I have no intention to install it. You can kick and scream, but that will not change my decision. If nginx will demand python to run, I will drop nginx for something I trust more. 2. Assuming *you* have python on board, and something breaks. Say, the script is hacked and you server installs something else than your intended certificate. Are you ready to pay for the damages? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262697,262746#msg-262746 From nginx-forum at nginx.us Wed Nov 11 15:21:00 2015 From: nginx-forum at nginx.us (de_nginx_noob) Date: Wed, 11 Nov 2015 10:21:00 -0500 Subject: Error after ./configure -> No rule to make target "src/os/unix/ngx_gcc_atomic_x86.h" In-Reply-To: <20151030180019.GQ69417@mdounin.ru> References: <20151030180019.GQ69417@mdounin.ru> Message-ID: The weird thing is that the code I've been testing has been compiling fine for the past week or so and I haven't changed anything in the module config file. What would I have screwed up in the module config file? 
ngx_addon_name=ngx_http_netacuity_module
HTTP_MODULES="$HTTP_MODULES ngx_http_netacuity_module"
NGX_ADDON_SRCS="$NGX_ADDON_SRCS /root/Documents/netacuity/ngx_http_netacuity_module.c"

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262539,262747#msg-262747

From francis at daoine.org  Wed Nov 11 15:47:09 2015
From: francis at daoine.org (Francis Daly)
Date: Wed, 11 Nov 2015 15:47:09 +0000
Subject: Best practice for URL rewriting with php parameter
In-Reply-To: <1780845743.5120146.1447252177531.JavaMail.yahoo@mail.yahoo.com>
References: <20151111135500.GC3351@daoine.org> <1780845743.5120146.1447252177531.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <20151111154709.GD3351@daoine.org>

On Wed, Nov 11, 2015 at 02:29:37PM +0000, Mik J wrote:

Hi there,

> > I want that a user who accesses nginx.org/information will be redirected in the background to nginx/index.php?x=information so that my index.php page is dynamic
> > What does "redirected in the background" mean? M => I just meant that the user won't see the php parameters. He just sees a simple url with text only. Nginx passes the parameter to php, and not the user (through GET). That's what I meant by "in the background"

That's clear, thanks. In nginx terms, it's an internal rewrite.

> > I did like this in my virtual host configuration: location /information { try_files information /index.php?x=information; }
> > I would like to know if: a) This is the best practice to do what I would like to do ?
> > I'd say "no".
> > I'm not fully sure what it is that you want to do, but I suspect that
> "rewrite" (http://nginx.org/r/rewrite) may be what you want; unless
> you will describe how /index.php is intended to be handled -- in which
> case just using (e.g.) fastcgi_pass with some suitable fastcgi_param
> directives might be even better.
> M => My index.php looks like this <?php if ($_GET['x'] == 'information') { echo "This is the information Page"; }
> if ($_GET['x'] == 'contact') { echo "This is the contact Page"; }
> ?>

Your nginx will have some way to cause your index.php to be processed.
Maybe it is proxy_pass to a php-enabled web server; maybe it is
fastcgi_pass to a fastcgi server, maybe it is something else.

If you do something like

  location = /information { rewrite ^ /index.php?x=information; }

  location = /index.php {
    fastcgi_pass ...;
    fastcgi_param SCRIPT_FILENAME $document_root$uri;
    fastcgi_param QUERY_STRING $query_string;
  }

then you could instead omit the rewrite, and just do something like

  location = /information {
    fastcgi_pass ...;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_param QUERY_STRING x=information;
  }

directly.

f
--
Francis Daly        francis at daoine.org

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru  Wed Nov 11 15:50:55 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 11 Nov 2015 18:50:55 +0300
Subject: Error after ./configure -> No rule to make target "src/os/unix/ngx_gcc_atomic_x86.h"
In-Reply-To: 
References: <20151030180019.GQ69417@mdounin.ru>
Message-ID: <20151111155055.GK74233@mdounin.ru>

Hello!

On Wed, Nov 11, 2015 at 10:21:00AM -0500, de_nginx_noob wrote:

> The weird thing is that the code I've been testing has been compiling fine
> for the past week or so and I haven't changed anything in the module config
> file. What would I have screwed up in the module config file?
>
> ngx_addon_name=ngx_http_netacuity_module
> HTTP_MODULES="$HTTP_MODULES ngx_http_netacuity_module"
> NGX_ADDON_SRCS="$NGX_ADDON_SRCS
> /root/Documents/netacuity/ngx_http_netacuity_module.c"

Looks fine.
The only suggestion then is that src/os/unix/ngx_gcc_atomic_x86.h was removed from your nginx sources accidentally. Try getting a fresh checkout. -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Wed Nov 11 16:58:48 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 11 Nov 2015 21:58:48 +0500 Subject: Redirect request based on source $scheme !! In-Reply-To: References: <20151111133258.GA3351@daoine.org> Message-ID: >>If you want to catch HTTP requests separate from HTTPS requests, make 2 server blocks, as Avraham suggested. You can then redirect all requests coming to the HTTP block to the same URI with the HTTPS scheme (301, 302, 303, 307... have it your way!). There is not constraint regarding HTTP pages loading HTTPS content (while reverse is true). B.R yes, we've decided to go with this strategy as well. I think that should do the job, we just overlooked things and made it more complex. Thanks for help and suggestion guys, i'll update about the results. :) Regards. Shahzaib On Wed, Nov 11, 2015 at 7:13 PM, B.R. wrote: > Maybe I am seing oversimplified things here, but it seems to me the > question does not require all the effort everyone amazingly puts in here. > > If you want to catch HTTP requests separate from HTTPS requests, make 2 > server blocks, as Avraham suggested. > You can then redirect all requests coming to the HTTP block to the same > URI with the HTTPS scheme (301, 302, 303, 307... have it your way!). There > is not constraint regarding HTTP pages loading HTTPS content (while reverse > is true). > I am making the assumption the request to a HTTP resource in the HTTPS > page will be made, here. If the browser prevents them because they are seen > as insecure, there little to nothing you can do about it. Stupid webpage > service cannot be fixed on the remote side. > > Of course, the client browser embedding the content will need to be clever > enough to follow redirects on included resources, which is I think the case > of any standard use-case. > > ?Am I missing something there?? > --- > *B. R.* > > On Wed, Nov 11, 2015 at 2:32 PM, Francis Daly wrote: > >> On Wed, Nov 11, 2015 at 03:25:11PM +0500, shahzaib shahzaib wrote: >> >> Hi there, >> >> > Actually we've video sharing website from where people embed http/https >> > links to there websites. Now the problem is, some of the HTTPS websites >> > have embedded HTTP URL links from our website instead of HTTPS due to >> which >> > the code is unable to execute on their HTTPS website because it is >> making >> > call from https -> http which is wrong. >> >> Before you put too much time into building the solution, can you do a >> quick test to confirm that it can work? >> >> As in: >> >> * on a https site, include a link to http on your server to one particular >> url that you control. >> * in your config, redirect that one url to something https on your site >> * for that https request, return the response that you want >> >> When you do that -- does it work? >> >> As in: do you know that the client (browser) that you care about, will >> access your http url and accept the https redirection and then make use >> of the code that you return over that https link? >> >> Because if that does not work, then it does not matter what else you do. >> >> > So we're thinking to have some condition in place that if the request >> for >> > HTTP embedded link comes from any HTTPS domain , nginx will detect that >> > source $scheme and redirect that request to HTTPS. 
>> >> You cannot reliably detect where the link came from. >> >> If you are willing to accept unreliably detecting where the link came >> from, knowing that some innocent cases and some malicious cases will be >> handled wrongly, then you can just examine $http_referer. >> >> If it starts with "https://", then probably the link was on a https site. >> >> If it starts with "http://", then probably the link was on a http site. >> >> If it is blank, then probably the link was on a https site and it is >> accessing your http site. >> >> Each "probably" is because the Referer header is set to whatever the >> browser wants. Some browsers lie. Some browsers omit it always. Some >> browsers set it to a non-default value because that's what the user >> configured it to do. Other possibilities exist. >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Nov 11 17:28:01 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 11 Nov 2015 18:28:01 +0100 Subject: Let's Encrypt TLS project: seeking nginx configuration module help In-Reply-To: <358250352dcdfdbf9a122f7646c49ee5.NginxMailingListEnglish@forum.nginx.org> References: <358250352dcdfdbf9a122f7646c49ee5.NginxMailingListEnglish@forum.nginx.org> Message-ID: This script has nothing to do with nginx, it is the one used for certificate request generation to Let's Encrypt. They intend to use certificate with shorter and shorter lifespans as the process is considered more and more robust, to fight against compromised certificates in the wild. Thus, automating the process is a good idea as manual installation could become a huge burden, not to mention manual request/generation. For the moment, the only way with nginx is to only request certificates and install them yourself manually afterwards. It could be automated somehow, depending on the Let's Encrypt script outcome, and then moving certificate files around + issuing nginx reload. It might be a compromise to avoid generated certificates go live without your own proper validation. I suggest you complain about the use of python on the Let's Encrypt board directly. I was merely trying to bring the attention of nginx experts on this topic, as a thorough understanding of nginx' way of working is in my eyes necessary. Your complaint has little impact/use here. For the security concern you are talking about, the fact the script is open-source and provided to the eyes of the whole world allows you to carefully review its code before using it, as one should do it. Open-source works only if you validate libraries you use (or if you take the risk the community does it for you, with no complaint from your side then). --- *B. R.* On Wed, Nov 11, 2015 at 4:07 PM, 173279834462 wrote: > > They are currently struggling with their nginx module, > > allowing a certificate to be automatically installed on nginx. > > Would you really use that script? > > 1. It requires python. --- I do not have python on my server, > and I have no intention to install it. You can kick and scream, > but that will not change my decision. If nginx will demand > python to run, I will drop nginx for something I trust more. > > 2. 
Assuming *you* have python on board, and something > breaks. Say, the script is hacked and you server installs > something else than your intended certificate. Are you > ready to pay for the damages? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,262697,262746#msg-262746 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Wed Nov 11 17:37:29 2015 From: lists at ruby-forum.com (Spencer Fu) Date: Wed, 11 Nov 2015 18:37:29 +0100 Subject: Nginx returning 414 even when large_client_header_buffers is set In-Reply-To: <5bc29339f7da23e67aa16c78851e44c2.NginxMailingListEnglish@forum.nginx.org> References: <8b19edbb6b2d137c94eae907213f378f.NginxMailingListEnglish@forum.nginx.org> <20120410200956.GB13466@mdounin.ru> <20120411072023.GC13466@mdounin.ru> <8bd0fd9b781422036139358eae2ea463.NginxMailingListEnglish@forum.nginx.org> <5bc29339f7da23e67aa16c78851e44c2.NginxMailingListEnglish@forum.nginx.org> Message-ID: I changed the request from a get to a post in order to get it working for my site. -- Posted via http://www.ruby-forum.com/. From mikydevel at yahoo.fr Wed Nov 11 18:50:54 2015 From: mikydevel at yahoo.fr (Mik J) Date: Wed, 11 Nov 2015 18:50:54 +0000 (UTC) Subject: Best practice for URL rewriting with php parameter In-Reply-To: <20151111154709.GD3351@daoine.org> References: <20151111154709.GD3351@daoine.org> Message-ID: <1692777902.5313613.1447267854515.JavaMail.yahoo@mail.yahoo.com> Hello Francis, First thank you for your answers I tried both methods but none of them worked. I'm going to look at it more in details (and display the php logs because I just had a blank page). Also I would like to know why the solution you're offering is a "best practice" ?At first it seems a bit heavy because I'll have a paragraph like that for every x variable (x=information, x=something). Considering also that the users URL might not look like the x variable.nginx.org/info_1234 is an internal rewrite of index.php?x=information I tried two other solutions that workedrewrite /info_1234 /index.php?x=information; and location = /info_1234 { try_files info_1234 /index.php?x=information; } I would like to understand why your solution is better than these, why is it a best practice ? Thank you Le Mercredi 11 novembre 2015 16h47, Francis Daly a ?crit : On Wed, Nov 11, 2015 at 02:29:37PM +0000, Mik J wrote: Hi there, > > I want that a user who accessesnginx.org/informationwill be redirected in the background tonginx/index.php?x=informationSo that my index.php page is dymanic > > What does "redirected in the background" mean?M => I just meant that the user won't see the php parameters. He just sees a simple url with text only.Nginx passes the parameter to php, and not the user (through GET). That's what I meant by "in the background" That's clear, thanks. In nginx terms, it's an internal rewrite. > > I did like this in my virtual host configurationlocation /information { try_files information /index.php?x=information; } > > I would like to know if:a) This is the best practice to do what I would like to do ? > > I'd say "no". > > I'm not fully sure what it is that you want to do, but I suspect that > "rewrite" (http://nginx.org/r/rewrite) may be what you want; unless > you will describe how /index.php is intended to be handled -- in which > case just using (e.g.) 
fastcgi_pass with some suitable fastcgi_param > directives might be even better. > M => My index.php looks like this if ($_GET['x']) == 'information') { echo "This is the information Page"; } > if ($_GET['x']) == 'contact') { echo "This is the contact Page"; } > ?> Your nginx will have some way to cause your index.php to be processed. Maybe it is proxy_pass to a php-enabled web server; maybe it is fastcgi_pass to a fastcgi server, maybe it is something else. If you do something like ? location = /information { rewrite ^ /index.php?x=information; } ? location = /index.php { ? ? fastcgi_pass ...; ? ? fastcgi_param SCRIPT_FILENAME $document_root$uri; ? ? fastcgi_param QUERY_STRING $query_string; ? } then you could instead omit the rewrite, and just do something like ? location = /information { ? ? fastcgi_pass ...; ? ? fastcgi_param SCRIPT_FILENAME $document_root/index.php; ? ? fastcgi_param QUERY_STRING x=information; ? } directly. ??? f -- Francis Daly? ? ? ? francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Nov 11 19:38:27 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Nov 2015 19:38:27 +0000 Subject: Best practice for URL rewriting with php parameter In-Reply-To: <1692777902.5313613.1447267854515.JavaMail.yahoo@mail.yahoo.com> References: <20151111154709.GD3351@daoine.org> <1692777902.5313613.1447267854515.JavaMail.yahoo@mail.yahoo.com> Message-ID: <20151111193827.GE3351@daoine.org> On Wed, Nov 11, 2015 at 06:50:54PM +0000, Mik J wrote: Hi there, > I tried both methods but none of them worked. I'm going to look at it more in details (and display the php logs because I just had a blank page). First configure things so that an explicit request for /index.php?x=information gives you the response that you want. Keep it simple, do one thing at a time. > Also I would like to know why the solution you're offering is a "best practice" ?At first it seems a bit heavy because I'll have a paragraph like that for every x variable (x=information, x=something). The original question did not have multiple x variables. A different question probably gets a different answer. Using "rewrite" instead of "try_files" should have exactly the same number of location{} blocks. And it avoids a presumed-unnecessary filesystem check -- what should happen if the file /usr/local/nginx/htmlinformation exists on your filesystem? Using "fastcgi_pass" and friends instead of "rewrite" should also have exactly the same number of location{} blocks. And it means that nginx can skip one step of processing -- if the rewrite was going to go to a location which does exactly this anyway. It does mean that there are more words involved; but if the number of location{} block is going to be big, you are probably going to generate them anyway. > Considering also that the users URL might not look like the x variable.nginx.org/info_1234 is an internal rewrite of index.php?x=information I don't see the difference this makes, in the three cases in question. In each case you have the request, and a different string as the x= value. If you choose to use variables in a common location, or something like that, then you will still need the mapping or request to x= value. If you knew that the mapping is always "remove the first /", then you wouldn't need a separate table. But you don't have that now, either. 
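To illustrate the "variables in a common location" variant -- a sketch only, not a drop-in config: one map{} table at http{} level holds the request-to-x= mapping (the /info_1234 pair is taken from your mail), and a single location does the fastcgi work; the backend address is a placeholder assumption:

    # http{} level: the one table you would have to maintain
    map $uri $x_param {
        default       "";
        /information  information;
        /info_1234    information;
        /contact      contact;
    }

    # inside the server{} block: one location for all of them
    location / {
        fastcgi_pass  unix:/run/php.sock;                       # placeholder backend
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_param QUERY_STRING    x=$x_param;
    }

You still maintain the table; it is just one table instead of one location{} block per url.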
> I tried two other solutions that workedrewrite /info_1234 /index.php?x=information; > and > location = /info_1234 { try_files info_1234 /index.php?x=information; } > I would like to understand why your solution is better than these, why is it a best practice ? It means that nginx has less work to do to process the request. Sometime that is not the thing that you want to optimise for. f -- Francis Daly francis at daoine.org From al-nginx at none.at Wed Nov 11 20:10:44 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 11 Nov 2015 21:10:44 +0100 Subject: Fwd: openshift-nginx docker image running as non-root In-Reply-To: <703386880.7557572.1447266218497.JavaMail.zimbra@redhat.com> References: <1206832728.7518114.1447262029592.JavaMail.zimbra@redhat.com> <703386880.7557572.1447266218497.JavaMail.zimbra@redhat.com> Message-ID: <05e647760123aeff9b81815b952676ec@none.at> Dear Scott. I think this is not a devel question so I answer primarly to nginx list. Am 11-11-2015 19:23, schrieb Scott Creeley: > ----- Forwarded Message ----- > From: "Scott Creeley" > To: nginx-devel at nginx.org > Sent: Wednesday, November 11, 2015 12:13:49 PM > Subject: openshift-nginx docker image running as non-root > > Hi, > Been playing around with the > https://github.com/nginxinc/openshift-nginx dockerfile and trying to > find a way to run run nginx as non-root with openshift/k8/docker. Not > having much luck, if I pass in a user or specify a user in the > nginx.con or Dockerfile or via openshift/k8 runAsUser I always get > some form permission errors. Is there a way to do this or am I > wasting my time messing with this? > > nginx: [alert] could not open error log file: open() > "/var/log/nginx/error.log" failed (13: Permission denied) > 2015/11/10 14:40:40 [warn] 1#1: the "user" directive makes sense only > if the master process runs with super-user privileges, ignored in > /etc/nginx/nginx.conf:2 > 2015/11/10 14:40:40 [emerg] 1#1: mkdir() > "/var/cache/nginx/client_temp" failed (13: Permission denied) We had the same problem. tl;dr Add this to the dockerfile. RUN .... && chmod -R 777 /var/log/nginx /var/cache/nginx/ \ && chmod 644 /etc/nginx/* Longer explanation. Openshift v3 uses a randomly User inside the container. This makes the user and group setting in the most Dockerfile and app not very helpfully. You can take a look into the node-js example container oc exec nodejs-example-1-qerx1 -it bash ###### bash-4.2$ ps aafxu USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND 1000100+ 19 0.0 0.0 11740 1840 ? Ss 14:58 0:00 bash 1000100+ 34 0.0 0.0 19764 1204 ? R+ 14:58 0:00 \_ ps aafxu 1000100+ 1 0.0 0.0 863264 26216 ? Ssl Nov09 0:00 npm 1000100+ 17 0.0 0.0 701120 25892 ? Sl Nov09 0:00 node server.js ####### The reason why the most of the programs have this user & group stuff is a security reason. Due to the fact that almost all Containers in Openshift v3 runs under a dedicated user (e.g.: 1000100+) you don't need and not allowed to change to a dedicated user. Please take a look into this docs. 
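A rough nginx.conf sketch of the same idea, as an alternative to chmod'ing the default directories: point everything nginx wants to write at locations any UID can write to, and listen on an unprivileged port. The paths below are illustrative, not the ones from the openshift-nginx image, and depending on the build the uwsgi/scgi temp paths may need the same treatment:

    error_log  /dev/stderr warn;
    pid        /tmp/nginx.pid;

    events { }

    http {
        access_log            /dev/stdout;
        client_body_temp_path /tmp/client_temp;
        proxy_temp_path       /tmp/proxy_temp;
        fastcgi_temp_path     /tmp/fastcgi_temp;

        server {
            listen 8080;                      # unprivileged port
            root   /usr/share/nginx/html;
        }
    }

There is then no 'user' directive at all, so the warning about it goes away as well.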
Due to the fact that I don't know if you use Openshift Enterprise (OSE) or Openshift origin I post the doc links from the origin ;-) https://docs.openshift.org/latest/architecture/index.html https://docs.openshift.org/latest/creating_images/guidelines.html https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile https://docs.openshift.org/latest/using_images/docker_images/index.html https://docs.openshift.org/latest/architecture/core_concepts/pods_and_services.html https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints Please give you some time to learn the Openshift ecosystem it's not like a 'docker run ...' on any machine ;-) BR Aleks From nginx-forum at nginx.us Wed Nov 11 22:06:21 2015 From: nginx-forum at nginx.us (rnovo1983) Date: Wed, 11 Nov 2015 17:06:21 -0500 Subject: No default.conf file was generated Message-ID: <586376559850cb6c47374ac327feb35d.NginxMailingListEnglish@forum.nginx.org> After the installation of nginx/1.6.3 on CentOS 7, no default.conf file was created under etc/nginx/conf.d Can anybody explain me why?.. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262761,262761#msg-262761 From reallfqq-nginx at yahoo.fr Wed Nov 11 21:51:11 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 11 Nov 2015 22:51:11 +0100 Subject: Nginx returning 414 even when large_client_header_buffers is set In-Reply-To: References: <8b19edbb6b2d137c94eae907213f378f.NginxMailingListEnglish@forum.nginx.org> <20120410200956.GB13466@mdounin.ru> <20120411072023.GC13466@mdounin.ru> <8bd0fd9b781422036139358eae2ea463.NginxMailingListEnglish@forum.nginx.org> <5bc29339f7da23e67aa16c78851e44c2.NginxMailingListEnglish@forum.nginx.org> Message-ID: I suggest you try setting a larger buffer size with large_client_header_buffers . The docs are really clear about the fact a request line cannot exceed the size of a single buffer (*not* the number * size value). By changing from the GET to the POST method, are you sure you did not change anything else about the headers sent? Trying to make your tests (replaying both the KO & OK requests) with a command-line client displaying all those headers to you might prove useful. --- *B. R.* On Wed, Nov 11, 2015 at 6:37 PM, Spencer Fu wrote: > I changed the request from a get to a post in order to get it working > for my site. > > -- > Posted via http://www.ruby-forum.com/. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Nov 11 23:59:46 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Nov 2015 02:59:46 +0300 Subject: No default.conf file was generated In-Reply-To: <586376559850cb6c47374ac327feb35d.NginxMailingListEnglish@forum.nginx.org> References: <586376559850cb6c47374ac327feb35d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151111235945.GL74233@mdounin.ru> Hello! On Wed, Nov 11, 2015 at 05:06:21PM -0500, rnovo1983 wrote: > After the installation of nginx/1.6.3 on CentOS 7, no default.conf file was > created under etc/nginx/conf.d > > Can anybody explain me why?.. The only configuration file used by nginx itself is nginx.conf. Everything else are included files, and they are provided by packages you use, "for convenience" (quotes because in most cases these additional files and directories cause confusion instead). 
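For example, a packaged nginx.conf typically pulls those extra files in with lines like the following; the exact paths are an illustration only and differ between packages:

    http {
        include       /etc/nginx/mime.types;
        ...
        include       /etc/nginx/conf.d/*.conf;   # a packaged default.conf, if any, would be picked up here
    }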
Depending on a particular package you've installed they may or may not be provided. If in doubt, try looking into nginx.conf. It contains all the configuration, and if there are any included files - you'll see the "include" directives there to include them. Detailed documentation on the "include" directive can be found at http://nginx.org/r/include. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Nov 12 05:37:36 2015 From: nginx-forum at nginx.us (DrDinosaur) Date: Thu, 12 Nov 2015 00:37:36 -0500 Subject: HTTP/2 Issues Message-ID: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> Hi, I was having a few issues with HTTP/2 requests. Some files and images aren't loading on Chrome. I filed a bug report and documented the issue extensively here: https://code.google.com/p/chromium/issues/detail?id=553282 It might just be a Chrome issue, but I want to see if there is also a bug in Nginx or perhaps something I am doing wrong. Here is the output from nginx -V: https://pastebin.com/Xez5hW0k Any information will be helpful. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262764,262764#msg-262764 From ml-nginx at zu-con.org Thu Nov 12 06:57:19 2015 From: ml-nginx at zu-con.org (Matthias Rieber) Date: Thu, 12 Nov 2015 07:57:19 +0100 (CET) Subject: HTTP/2 Issues In-Reply-To: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> References: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Thu, 12 Nov 2015, DrDinosaur wrote: > Hi, > > I was having a few issues with HTTP/2 requests. Some files and images aren't > loading on Chrome. I filed a bug report and documented the issue extensively > here: https://code.google.com/p/chromium/issues/detail?id=553282 I had similar a problem with a website which loads lots of css/js files. Many of them failed with the spd compression error. It appears that it depends on the characteristics of internet connection, but I couldn't proof that. Increasing ssl_buffer_size to 32k solved the issue for me. I usually set ssl_buffer_size to 4k. After serveral tries it appears that lower values cause more problems with HTTP/2. Even with the default (16k) some requests fail with the compression error. Matthias From nginx-forum at nginx.us Thu Nov 12 07:16:18 2015 From: nginx-forum at nginx.us (DrDinosaur) Date: Thu, 12 Nov 2015 02:16:18 -0500 Subject: HTTP/2 Issues In-Reply-To: References: Message-ID: Hi, I set the value to 32k and even 64k, but I still had the same issues. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262764,262766#msg-262766 From Marcin.Janowski at assecobs.pl Thu Nov 12 10:32:05 2015 From: Marcin.Janowski at assecobs.pl (Janowski Marcin) Date: Thu, 12 Nov 2015 10:32:05 +0000 Subject: Wordpress on subdirs Message-ID: <566c03d8338344b9b1014bae710250de@LUBSXMB06.abs.assecobs.pl> Hello, I have few wordpress instalations on one vhost: location /pl { try_files $uri $uri/ /pl/index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /pl/wp-admin$ $scheme://$host$uri/index.php permanent; location /en { try_files $uri $uri/ /en/index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /en/wp-admin$ $scheme://$host$uri/index.php permanent; location /dev { try_files $uri $uri/ /en/index.php?$args; } # Add trailing slash to */wp-admin requests. 
rewrite /dev/wp-admin$ $scheme://$host$uri/index.php permanent; set $user_login wiki; include /etc/nginx/templates.d/wordpress-subdirectory.conf; File /etc/nginx/templates.d/wordpress-subdirectory.conf has: location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac). # Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban) location ~ /\. { deny all; } # Deny access to any files with a .php extension in the uploads directory # Works in sub-directory installs and also in multisite network # Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban) location ~* /(?:uploads|files)/.*\.php$ { deny all; } # Directives to send expires headers and turn off 404 error logging. location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { access_log off; log_not_found off; expires max; } location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; if (!-f $document_root$fastcgi_script_name) { return 404; } if ( $wordpress_norun_subdir ) { return 403; } include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_pass unix:/var/run/$user_login.php-fpm.socket; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REMOTE_USER $remote_user; } In location ~ [^/]\.php(/|$) I have: if ( $wordpress_norun_subdir ) { return 403; } . $wordpress_norun_subdir is map: map $uri $wordpress_norun { default 1; /index.php 0; /wp-login.php 0; /wp-blog-header.php 0; /wp-cron.php 0; /wp-includes/js/tinymce/wp-mce-help.php 0; ... /xmlrpc.php 0; /wp-load.php 0; /wp-settings.php 0; /wp-admin/about.php 0; /wp-admin/admin-ajax.php 0; /wp-admin/admin-footer.php 0; /wp-admin/admin-functions.php 0; /wp-admin/admin-header.php 0; /wp-admin/admin.php 0; /wp-admin/admin-post.php 0; ... } This map works fine when I run wordpress on root directory, but if I have wordpress in subdir it doesn't. I can change paths in map to: ~/.*/index.php, but this can run files: /index.php, /wp-admin/index.php, /any_hacker_stuff/index.php. Of course, I don't want allow run this last file ;) I thinks I can change location /en to /en(.*) and set $wordpress_path $1; and change $uri to $wordpress_path, but on location /en(.*) wordpress friendly URL don't works. -- Pozdrawiam, Marcin Janowski Specjalista ds. System?w IT Centrum Przetwarzania Danych - Lublin Dzia? Rozwi?za? Systemowych T: + 48 81 535 30 00, w. 366 e-mail: marcin.janowski at assecobs.pl ________________________________ Powy?sza korespondencja przeznaczona jest wy??cznie dla osoby lub podmiotu, do kt?rego jest adresowana i mo?e zawiera? informacje o charakterze poufnym lub zastrze?onym. Nieuprawnione wykorzystanie informacji zawartych w wiadomo?ci e-mail przez osob? lub podmiot nie b?d?cy jej adresatem jest zabronione odpowiednimi przepisami prawa. Odbiorca korespondencji, kt?ry otrzyma? j? omy?kowo, proszony jest o niezw?oczne zawiadomienie nadawcy drog? elektroniczn? lub telefonicznie i usuni?cie tej tre?ci z poczty elektronicznej. Dzi?kujemy. Asseco Business Solutions S.A. ________________________________ We? pod uwag? ochron? ?rodowiska, zanim wydrukujesz ten e-mail. 
________________________________ This information is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Unauthorized use of this information by person or entity other than the intended recipient is prohibited by law. If you received this by mistake, please immediately contact the sender by e-mail or by telephone and delete this information from any computer. Thank you. Asseco Business Solutions S.A. ________________________________ Please consider your environmental responsibility before printing this e-mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Nov 12 14:49:45 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 12 Nov 2015 17:49:45 +0300 Subject: HTTP/2 Issues In-Reply-To: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> References: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1732208.mKeDbfnAVV@vbart-workstation> On Thursday 12 November 2015 00:37:36 DrDinosaur wrote: > Hi, > > I was having a few issues with HTTP/2 requests. Some files and images aren't > loading on Chrome. I filed a bug report and documented the issue extensively > here: https://code.google.com/p/chromium/issues/detail?id=553282 > > It might just be a Chrome issue, but I want to see if there is also a bug in > Nginx or perhaps something I am doing wrong. > > Here is the output from nginx -V: https://pastebin.com/Xez5hW0k > > Any information will be helpful. Thanks. > [..] It looks like the trigger of the problem is too long response headers. You have "X-XSS-Protection" header on each response with ~1k value, and a number of such responses in sequence breaks Chrome. wbr, Valentin V. Bartenev From vbart at nginx.com Thu Nov 12 16:17:37 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 12 Nov 2015 19:17:37 +0300 Subject: HTTP/2 Issues In-Reply-To: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> References: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3376773.WxB08lekR9@vbart-workstation> On Thursday 12 November 2015 00:37:36 DrDinosaur wrote: > Hi, > > I was having a few issues with HTTP/2 requests. Some files and images aren't > loading on Chrome. I filed a bug report and documented the issue extensively > here: https://code.google.com/p/chromium/issues/detail?id=553282 > > It might just be a Chrome issue, but I want to see if there is also a bug in > Nginx or perhaps something I am doing wrong. > > Here is the output from nginx -V: https://pastebin.com/Xez5hW0k > > Any information will be helpful. Thanks. > [..] Thank you for the report. I've found an issue in the HTTP/2 module. Could you try the following patch? 
diff -r 47e1a02be058 -r 7e7f773ac055 src/http/v2/ngx_http_v2_filter_module.c
--- a/src/http/v2/ngx_http_v2_filter_module.c  Thu Nov 12 18:11:16 2015 +0300
+++ b/src/http/v2/ngx_http_v2_filter_module.c  Thu Nov 12 19:13:50 2015 +0300
@@ -1054,13 +1054,27 @@
 static ngx_int_t
 ngx_http_v2_headers_frame_handler(ngx_http_v2_connection_t *h2c,
     ngx_http_v2_out_frame_t *frame)
 {
-    ngx_buf_t             *buf;
+    ngx_chain_t           *cl;
     ngx_http_v2_stream_t  *stream;

-    buf = frame->first->buf;
+    cl = frame->first;

-    if (buf->pos != buf->last) {
-        return NGX_AGAIN;
+    for ( ;; ) {
+        if (cl->buf->pos != cl->buf->last) {
+            frame->first = cl;
+
+            ngx_log_debug2(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
+                           "http2:%ui HEADERS frame %p was sent partially",
+                           stream->node->id, frame);
+
+            return NGX_AGAIN;
+        }
+
+        if (cl == frame->last) {
+            break;
+        }
+
+        cl = cl->next;
     }

     stream = frame->stream;

From vbart at nginx.com Thu Nov 12 19:13:03 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 12 Nov 2015 22:13:03 +0300
Subject: HTTP/2 Issues
In-Reply-To: <3376773.WxB08lekR9@vbart-workstation>
References: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> <3376773.WxB08lekR9@vbart-workstation>
Message-ID: <6379024.Lmm6BIYKZg@vbart-workstation>

On Thursday 12 November 2015 19:17:37 Valentin V. Bartenev wrote:
> On Thursday 12 November 2015 00:37:36 DrDinosaur wrote:
> > Hi,
> >
> > I was having a few issues with HTTP/2 requests. Some files and images aren't
> > loading on Chrome. I filed a bug report and documented the issue extensively
> > here: https://code.google.com/p/chromium/issues/detail?id=553282
> >
> > It might just be a Chrome issue, but I want to see if there is also a bug in
> > Nginx or perhaps something I am doing wrong.
> >
> > Here is the output from nginx -V: https://pastebin.com/Xez5hW0k
> >
> > Any information will be helpful. Thanks.
> >
> [..]
>
> Thank you for the report. I've found an issue in the HTTP/2 module.
> Could you try the following patch?
[..]

Sorry, the previous patch breaks building with debug, here's a better one:

diff -r cccef1e939f3 -r 44855eed0a35 src/http/v2/ngx_http_v2_filter_module.c
--- a/src/http/v2/ngx_http_v2_filter_module.c  Thu Nov 12 20:45:07 2015 +0300
+++ b/src/http/v2/ngx_http_v2_filter_module.c  Thu Nov 12 21:50:06 2015 +0300
@@ -1054,17 +1054,30 @@
 static ngx_int_t
 ngx_http_v2_headers_frame_handler(ngx_http_v2_connection_t *h2c,
     ngx_http_v2_out_frame_t *frame)
 {
-    ngx_buf_t             *buf;
+    ngx_chain_t           *cl;
     ngx_http_v2_stream_t  *stream;

-    buf = frame->first->buf;
+    stream = frame->stream;
+    cl = frame->first;

-    if (buf->pos != buf->last) {
-        return NGX_AGAIN;
+    for ( ;; ) {
+        if (cl->buf->pos != cl->buf->last) {
+            frame->first = cl;
+
+            ngx_log_debug2(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
+                           "http2:%ui HEADERS frame %p was sent partially",
+                           stream->node->id, frame);
+
+            return NGX_AGAIN;
+        }
+
+        if (cl == frame->last) {
+            break;
+        }
+
+        cl = cl->next;
     }

-    stream = frame->stream;
-
     ngx_log_debug2(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
                    "http2:%ui HEADERS frame %p was sent",
                    stream->node->id, frame);

From ml-nginx at zu-con.org Thu Nov 12 21:48:29 2015
From: ml-nginx at zu-con.org (Matthias Rieber)
Date: Thu, 12 Nov 2015 22:48:29 +0100 (CET)
Subject: HTTP/2 Issues
In-Reply-To: <6379024.Lmm6BIYKZg@vbart-workstation>
References: <677d05bb62d67b93d20afa5b6fbaa439.NginxMailingListEnglish@forum.nginx.org> <3376773.WxB08lekR9@vbart-workstation> <6379024.Lmm6BIYKZg@vbart-workstation>
Message-ID:

Hi!

On Thu, 12 Nov 2015, Valentin V.
Bartenev wrote: > On Thursday 12 November 2015 19:17:37 Valentin V. Bartenev wrote: > > On Thursday 12 November 2015 00:37:36 DrDinosaur wrote: > > > Hi, > > > > > > I was having a few issues with HTTP/2 requests. Some files and images aren't > > > loading on Chrome. I filed a bug report and documented the issue extensively > > > here: https://code.google.com/p/chromium/issues/detail?id=553282 > > > > > > It might just be a Chrome issue, but I want to see if there is also a bug in > > > Nginx or perhaps something I am doing wrong. > > > > > > Here is the output from nginx -V: https://pastebin.com/Xez5hW0k > > > > > > Any information will be helpful. Thanks. > > > > > [..] > > > > Thank you for the report. I've found an issue in the HTTP/2 module. > > Could you try the following patch? > [..] It fixes my problem, too. I can lower ssl_bufffer_size to 16k and 4k again without any problems. Matthias From mikydevel at yahoo.fr Thu Nov 12 23:04:50 2015 From: mikydevel at yahoo.fr (Mik J) Date: Thu, 12 Nov 2015 23:04:50 +0000 (UTC) Subject: Best practice for URL rewriting with php parameter In-Reply-To: <20151111193827.GE3351@daoine.org> References: <20151111193827.GE3351@daoine.org> Message-ID: <366902393.6464888.1447369490496.JavaMail.yahoo@mail.yahoo.com> Hello Francis, I tried again your solution from yesterday and didn't manage to make it work location = /information { rewrite ^ /index.php?x=information; } ? location = /index.php { ??? fastcgi_pass 127.0.0.1:9000;??? fastcgi_param SCRIPT_FILENAME $document_root$uri; ??? fastcgi_param QUERY_STRING $query_string; ? } and ? location = /information { ??? fastcgi_pass 127.0.0.1:9000;??? fastcgi_param SCRIPT_FILENAME $document_root/index.php; ??? fastcgi_param QUERY_STRING x=information; ? } You asked me if it's a proxy_pass or a fast_cgi pass, I guess I want a fastcgi_pass. Nginx passes the php work to php-fpm on the same server. I know that the blocks are properly matched because whenI misconfigure the fastcgi_pass I have the bad gateway message.When it's configured as above it displays a completely blank page. I have php-fpm configured with error_log = log/php-fpm.loglog_level = noticephp_flag[display_errors] = on php_admin_value[error_log] = /var/log/fpm-php.www.log php_admin_flag[log_errors] = on I don't know how to debug this blank page thing However when the x variable can take multiple values x=information or x=something, the second solution you wrote seems to be long. If I had to implement as you suggested, I'd have location = /information { rewrite ^ /index.php?x=information; }location = /something { rewrite ^ /index.php?x=something; }? location = /index.php { ??? fastcgi_pass 127.0.0.1:9000;??? fastcgi_param SCRIPT_FILENAME $document_root$uri; ??? fastcgi_param QUERY_STRING $query_string; ? } or I don't know what it could be for the second solution It's much longer thanlocation = /information { rewrite ^ /index.php?x=information; }location = /something { rewrite ^ /index.php?x=something; }Actually, if I only write this it works Thank you Le Mercredi 11 novembre 2015 20h45, Francis Daly a ?crit : On Wed, Nov 11, 2015 at 06:50:54PM +0000, Mik J wrote: Hi there, > I tried both methods but none of them worked. I'm going to look at it more in details (and display the php logs because I just had a blank page). First configure things so that an explicit request for /index.php?x=information gives you the response that you want. Keep it simple, do one thing at a time. 
> Also I would like to know why the solution you're offering is a "best practice" ?At first it seems a bit heavy because I'll have a paragraph like that for every x variable (x=information, x=something). The original question did not have multiple x variables. A different question probably gets a different answer. Using "rewrite" instead of "try_files" should have exactly the same number of location{} blocks. And it avoids a presumed-unnecessary filesystem check -- what should happen if the file /usr/local/nginx/htmlinformation exists on your filesystem? Using "fastcgi_pass" and friends instead of "rewrite" should also have exactly the same number of location{} blocks. And it means that nginx can skip one step of processing -- if the rewrite was going to go to a location which does exactly this anyway. It does mean that there are more words involved; but if the number of location{} block is going to be big, you are probably going to generate them anyway. > Considering also that the users URL might not look like the x variable.nginx.org/info_1234 is an internal rewrite of index.php?x=information I don't see the difference this makes, in the three cases in question. In each case you have the request, and a different string as the x= value. If you choose to use variables in a common location, or something like that, then you will still need the mapping or request to x= value. If you knew that the mapping is always "remove the first /", then you wouldn't need a separate table. But you don't have that now, either. > I tried two other solutions that workedrewrite /info_1234 /index.php?x=information; > and > location = /info_1234 { try_files info_1234 /index.php?x=information; } > I would like to understand why your solution is better than these, why is it a best practice ? It means that nginx has less work to do to process the request. Sometime that is not the thing that you want to optimise for. ??? f -- Francis Daly? ? ? ? francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Nov 13 00:10:52 2015 From: nginx-forum at nginx.us (George) Date: Thu, 12 Nov 2015 19:10:52 -0500 Subject: Let's Encrypt TLS project: seeking nginx configuration module help In-Reply-To: References: Message-ID: Folks might also want to look into letsencrypt client's webroot authentication plugin see - http://letsencrypt.readthedocs.org/en/latest/using.html#plugins - https://community.letsencrypt.org/t/letsencrypt-webroot-authentication-tested-on-beta-invited-whitelisted-domain/227612 - https://community.letsencrypt.org/t/using-the-webroot-domain-verification-method/144510 With webroot authentication there's a clearer separation in that letsencrypt client doesn't actually touch nginx configuration itself. Instead it just validates the domain(s) when you pass the public web root path of your domain(s) to the letsencrypt client. So you can script and do the actual nginx web server end configuration whichever way you want it setup and just point to the letsencrypt client obtained ssl certificate related files. I think part of the problem is letsencrypt was only developed on Ubuntu, but there's a variety of ways that Nginx config files, structure can be setup across various OS platforms Ubuntu, Debian, RHEL, CentOS, Fedora and even within the same OS platform. 
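As an illustration of that separation, the nginx side only needs to expose the directory the client writes its challenge files into, and then point at whatever certificate files the client produced. Every path below is an assumption (it depends on how the client is invoked), not something taken from the posts:

    server {
        listen 80;
        server_name example.com;

        # the webroot plugin writes its http-01 challenge files under this directory
        location /.well-known/acme-challenge/ {
            root /var/www/letsencrypt;
        }
    }

    server {
        listen 443 ssl;
        server_name example.com;

        # files produced by the letsencrypt client
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    }

with the client run along the lines of "letsencrypt certonly --webroot -w /var/www/letsencrypt -d example.com" (check the client documentation for the exact flags).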
My personal vision of the letsencrypt nginx module would be to take advantage of letsencrypt webroot plugin for domain validation side and have custom code for setting up nginx ssl/http/2 vhost side to pointing to letsencrypt ssl certificates obtained. Unfortunately, the server vhost side has a variety of ways that can be configured. Hope the info helps. Unfortunately, I am no python coder. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262697,262780#msg-262780 From Django Fri Nov 13 08:14:26 2015 From: Django (Django) Date: Fri, 13 Nov 2015 09:14:26 +0100 Subject: Let's Encrypt TLS project: seeking nginx configuration module help In-Reply-To: References: Message-ID: <56459BE2.6040308@nausch.org> HI! Am 10.11.2015 um 09:54 schrieb B.R.: > You might have heard about the Let's Encrypt > project, delivering Domain-Validated TLS certificates for free. May you tell me, how letsencrypt.org will handle TLSA records for DANE? cul8r Django -- "Bonnie & Clyde der Postmaster-Szene!" approved by Postfix-God http://wetterstation-pliening.info http://dokuwiki.nausch.org http://wiki.piratenpartei.de/Benutzer:Django From adam at jooadam.hu Fri Nov 13 14:37:28 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Fri, 13 Nov 2015 15:37:28 +0100 Subject: Selection of secure virtual servers Message-ID: Hi, I would like to terminate TLS connections arriving at the default server, only serving requests with the correct host header, relying on SNI. The configuration is as follows: server { listen 80; listen 443 ssl; return 444; } server { listen 80; listen 443 ssl; server_name example.com; ssl_certificate_key private-key; ssl_certificate certificate; } The above, however results in all connections failing, including the ones to example.com. nginx -V is: nginx version: nginx/1.6.2 (Ubuntu) TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module Can anyone help with this one? Thanks, ?d?m From nginx-forum at nginx.us Fri Nov 13 16:00:00 2015 From: nginx-forum at nginx.us (DrDinosaur) Date: Fri, 13 Nov 2015 11:00:00 -0500 Subject: HTTP/2 Issues In-Reply-To: <6379024.Lmm6BIYKZg@vbart-workstation> References: <6379024.Lmm6BIYKZg@vbart-workstation> Message-ID: How do I apply the patch? root at server:~/nginx-1.9.6# patch -p1 < http2.patch patching file src/http/v2/ngx_http_v2_filter_module.c patch: **** malformed patch at line 5: ngx_http_v2_headers_frame_handler(ngx_http_v2_connection_t *h2c, I tried the second one and get that error. Thanks. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262764,262798#msg-262798 From vbart at nginx.com Fri Nov 13 16:22:21 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 13 Nov 2015 19:22:21 +0300 Subject: HTTP/2 Issues In-Reply-To: References: <6379024.Lmm6BIYKZg@vbart-workstation> Message-ID: <2452125.mrpeyKPZkZ@vbart-workstation> On Friday 13 November 2015 11:00:00 DrDinosaur wrote: > How do I apply the patch? > > root at server:~/nginx-1.9.6# patch -p1 < http2.patch > patching file src/http/v2/ngx_http_v2_filter_module.c > patch: **** malformed patch at line 5: > ngx_http_v2_headers_frame_handler(ngx_http_v2_connection_t *h2c, > > I tried the second one and get that error. > > Thanks. > Please, try to copy the patch from the mailing list archive: http://mailman.nginx.org/pipermail/nginx/2015-November/049172.html The forum you're using is just an awkward interface to the mailing list[1], it breaks formatting. [1] http://mailman.nginx.org/mailman/listinfo/nginx wbr, Valentin V. Bartenev From ahutchings at nginx.com Fri Nov 13 16:22:54 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Fri, 13 Nov 2015 16:22:54 +0000 Subject: HTTP/2 Issues In-Reply-To: References: <6379024.Lmm6BIYKZg@vbart-workstation> Message-ID: <56460E5E.3070407@nginx.com> Hi, The whitespace gets stripped in the forums so you can't copy/paste it from there. Please try getting it from the mailing list copy instead: http://mailman.nginx.org/pipermail/nginx/2015-November/049172.html Kind Regards Andrew On 13/11/15 16:00, DrDinosaur wrote: > How do I apply the patch? > > root at server:~/nginx-1.9.6# patch -p1 < http2.patch > patching file src/http/v2/ngx_http_v2_filter_module.c > patch: **** malformed patch at line 5: > ngx_http_v2_headers_frame_handler(ngx_http_v2_connection_t *h2c, > > I tried the second one and get that error. > > Thanks. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262764,262798#msg-262798 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at nginx.us Fri Nov 13 16:47:10 2015 From: nginx-forum at nginx.us (DrDinosaur) Date: Fri, 13 Nov 2015 11:47:10 -0500 Subject: HTTP/2 Issues In-Reply-To: <6379024.Lmm6BIYKZg@vbart-workstation> References: <6379024.Lmm6BIYKZg@vbart-workstation> Message-ID: <9bf4f7b12da929e18f974144d0fa2b67.NginxMailingListEnglish@forum.nginx.org> Yay, this seems to have fixed it. Thank you very much for the quick help and solution. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262764,262805#msg-262805 From vbart at nginx.com Fri Nov 13 17:17:03 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 13 Nov 2015 20:17:03 +0300 Subject: HTTP/2 Issues In-Reply-To: <9bf4f7b12da929e18f974144d0fa2b67.NginxMailingListEnglish@forum.nginx.org> References: <6379024.Lmm6BIYKZg@vbart-workstation> <9bf4f7b12da929e18f974144d0fa2b67.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5011427.eWQYLQiKPM@vbart-workstation> On Friday 13 November 2015 11:47:10 DrDinosaur wrote: > Yay, this seems to have fixed it. Thank you very much for the quick help and > solution. > Thanks. The fix has been committed [1], and will be released with nginx 1.9.7. [1] http://hg.nginx.org/nginx/rev/f72d3129cd35 wbr, Valentin V. 
Bartenev From nlopez at gmail.com Fri Nov 13 21:04:32 2015 From: nlopez at gmail.com (Nick Lopez) Date: Fri, 13 Nov 2015 16:04:32 -0500 Subject: Selection of secure virtual servers In-Reply-To: References: Message-ID: Try this more explicit configuration for your default SNI server: server { listen 80 default_server; listen 443 ssl default_server; # server_name _; return 444; } See here for more info on "server_name _;" and the default_server selector for the listen directive, including an example similar to your config: http://nginx.org/en/docs/http/server_names.html On Fri, Nov 13, 2015 at 9:37 AM, Jo? ?d?m wrote: > Hi, > > I would like to terminate TLS connections arriving at the default > server, only serving requests with the correct host header, relying on > SNI. > > The configuration is as follows: > > server { > listen 80; > listen 443 ssl; > > return 444; > } > > server { > listen 80; > listen 443 ssl; > > server_name example.com; > > ssl_certificate_key private-key; > ssl_certificate certificate; > } > > The above, however results in all connections failing, including the > ones to example.com. > > nginx -V is: > > nginx version: nginx/1.6.2 (Ubuntu) > TLS SNI support enabled > configure arguments: --with-cc-opt='-g -O2 -fPIE > -fstack-protector-strong -Wformat -Werror=format-security > -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE > -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx > --conf-path=/etc/nginx/nginx.conf > --http-log-path=/var/log/nginx/access.log > --error-log-path=/var/log/nginx/error.log > --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug > --with-pcre-jit --with-ipv6 --with-http_ssl_module > --with-http_stub_status_module --with-http_realip_module > --with-http_auth_request_module --with-http_addition_module > --with-http_dav_module --with-http_geoip_module > --with-http_gzip_static_module --with-http_image_filter_module > --with-http_spdy_module --with-http_sub_module --with-http_xslt_module > --with-mail --with-mail_ssl_module > > Can anyone help with this one? > > > Thanks, > ?d?m > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at jooadam.hu Fri Nov 13 21:32:10 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Fri, 13 Nov 2015 22:32:10 +0100 Subject: Selection of secure virtual servers In-Reply-To: References: Message-ID: Hi Nick, I have already tried those, same results. ? From 2305958068 at qq.com Sat Nov 14 03:58:28 2015 From: 2305958068 at qq.com (=?gb18030?B?zOy608qvIC0g0Lu98MX0?=) Date: Sat, 14 Nov 2015 11:58:28 +0800 Subject: how to forward a url on behalf of host server Message-ID: I like to use nginx as my web server. My clients can go to my web side by typing "www.mywebside.com?realserver=www.nginx.com" so my clients can see the same contents as original from www.nginx.com but url is www.mywebside.com. All communications between my clients and www.nginx.com must go through www.mywebside.com. I also like to modify some contents in on all pages from www.nginx.com. Is this possible? Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Sun Nov 15 11:39:42 2015 From: nginx-forum at nginx.us (yuval.carmel@capriza.com) Date: Sun, 15 Nov 2015 06:39:42 -0500 Subject: sub_filter and proxy_pass Message-ID: Hi, I'm using Nginx 1.8 and trying to add a sub_filter but It fails to work (I don't see the substitution happenning). location ~* \.(appcache|manifest)$ { expires -1; rewrite ^(/v[0-9]+/.*)$ /app_ver$1 break; rewrite ^(?!/v[0-9]+/)(.*)$ /app$1 break; proxy_set_header Accept-Encoding ""; sub_filter 'CACHE' 'CACHE22'; sub_filter_once on; proxy_pass http://static.dev.capriza.com.s3.amazonaws.com:80 ; } You can see below that the sub_module is there: root#2>&1 nginx -V | tr -- - '\n' | grep _module http_ssl_module http_realip_module http_addition_module http_sub_module http_dav_module http_flv_module http_mp4_module http_gunzip_module http_gzip_static_module http_random_index_module http_secure_link_module http_stub_status_module http_auth_request_module mail_ssl_module http_spdy_module Any idea what may be wrong here? Tx, Yuval Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262829,262829#msg-262829 From francis at daoine.org Sun Nov 15 12:26:26 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 15 Nov 2015 12:26:26 +0000 Subject: Best practice for URL rewriting with php parameter In-Reply-To: <366902393.6464888.1447369490496.JavaMail.yahoo@mail.yahoo.com> References: <20151111193827.GE3351@daoine.org> <366902393.6464888.1447369490496.JavaMail.yahoo@mail.yahoo.com> Message-ID: <20151115122626.GF3351@daoine.org> On Thu, Nov 12, 2015 at 11:04:50PM +0000, Mik J wrote: Hi there, > I tried again your solution from yesterday and didn't manage to make it work If you have one solution that works well enough for you, use it. It is usually not worth spending more time preparing something than will be saved by using it. > location = /information { rewrite ^ /index.php?x=information; } That should work as a straight drop-in replacement for your initial try_files configuration. If that does not work, then there may be something to investigate. > ? location = /index.php { > ??? fastcgi_pass 127.0.0.1:9000;??? fastcgi_param SCRIPT_FILENAME $document_root$uri; > ??? fastcgi_param QUERY_STRING $query_string; > ? } That is based on me guessing what your actual configuration has. Somewhere in your configuration, you have a location{} that handles the request for /index.php Adjust this location to match that one; or just use that one directly. > ? location = /information { > ??? fastcgi_pass 127.0.0.1:9000;??? fastcgi_param SCRIPT_FILENAME $document_root/index.php; > ??? fastcgi_param QUERY_STRING x=information; > ? } This one does the fastcgi_pass directly, avoiding the rewrite or the internal redirection. Again, the actual configuration required depends on the part of your config that you have not shown. Adjust it to match your local requirements, if you want to investigate further. > However when the x variable can take multiple values x=information or x=something, the second solution you wrote seems to be long. If I had to implement as you suggested, I'd have > location = /information { rewrite ^ /index.php?x=information; }location = /something { rewrite ^ /index.php?x=something; }? location = /index.php { > ??? fastcgi_pass 127.0.0.1:9000;??? fastcgi_param SCRIPT_FILENAME $document_root$uri; > ??? fastcgi_param QUERY_STRING $query_string; > ? } I'm afraid that I don't understand this part of your response. 
It seems like you are saying that you are happy to currently write location = /one { # one try_files directive } location = /two { # one try_files directive } but you think that writing location = /one { # one rewrite directive } location = /two { # one rewrite directive } is too long? I suspect that I have misunderstood you. Alternatively, you think that writing location = /one { # three or four fastcgi-related directives } location = /two { # three or four fastcgi-related directives } is too long? In that case, you must balance "more words for you to put into the config file" with "less work for nginx to do to process the request"; and you must decide where on that scale you want this server to fit. I think that nginx best practice leans towards "less work for nginx to do to process the request". I think that that is because person-configuration-writing time has zero cost, and machine-request-processing time has nonzero cost. (person-configuration-reading and person-configuration-modifying times also matter; I suspect that they point at "less work for nginx..." too. But it's your system; you choose how you want to manage it.) Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Nov 15 12:31:19 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 15 Nov 2015 12:31:19 +0000 Subject: sub_filter and proxy_pass In-Reply-To: References: Message-ID: <20151115123119.GG3351@daoine.org> On Sun, Nov 15, 2015 at 06:39:42AM -0500, yuval.carmel at capriza.com wrote: Hi there, > I'm using Nginx 1.8 and trying to add a sub_filter but It fails to work (I > don't see the substitution happenning). The usual suspects here are "gzip" and "content type". Your "proxy_set_header Accept-Encoding "";" should avoid the response from upstream being compressed. Can you see that that is the case? What do the upstream response headers say the Content-Type is? Is that listed in your effective sub_filter_types directive? http://nginx.org/r/sub_filter_types f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Nov 15 12:36:51 2015 From: nginx-forum at nginx.us (yuval.carmel@capriza.com) Date: Sun, 15 Nov 2015 07:36:51 -0500 Subject: sub_filter and proxy_pass In-Reply-To: <20151115123119.GG3351@daoine.org> References: <20151115123119.GG3351@daoine.org> Message-ID: <9e6774088fc592cb5c2c3b4cc512630d.NginxMailingListEnglish@forum.nginx.org> Issue was the Content-Type. Tx! (used sub_filter_types to fix it) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262829,262833#msg-262833 From francis at daoine.org Sun Nov 15 12:51:56 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 15 Nov 2015 12:51:56 +0000 Subject: Selection of secure virtual servers In-Reply-To: References: Message-ID: <20151115125156.GH3351@daoine.org> On Fri, Nov 13, 2015 at 03:37:28PM +0100, Jo? ?d?m wrote: Hi there, > I would like to terminate TLS connections arriving at the default > server, only serving requests with the correct host header, relying on > SNI. SSL is fiddly. The selection of which https server{} to use is not as straightforward as the selection of which http server{} to use. If you have one ssl server that you care about, and you do not know that everything involved works fully with SNI, the "simple" (but inelegant) approach might be to just have a single server{} block with ssl on for this ip:port, and use if ($host != "example.com") { return 444; } there. 
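Spelled out, that single-server variant might look like the following; the certificate paths are placeholders and the rest is only a sketch of the idea above:

    server {
        listen 80 default_server;
        listen 443 ssl default_server;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # refuse anything that does not ask for the one name this server is meant to serve
        if ($host != "example.com") {
            return 444;
        }

        # ... the normal example.com configuration follows ...
    }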
If you do know that everything works with SNI, you should have most of your ssl configuration at http{} level, or identical in the relevant server{} blocks, and only the different certificates configured per server. (You will probably want a certificate for your scratch/throw-away default server -- I have not tested.) You will also want to ensure that ssl_protocols excludes anything that does not allow SNI -- see "SSL is fiddly". http://nginx.org/en/docs/http/configuring_https_servers.html Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Nov 16 03:52:15 2015 From: nginx-forum at nginx.us (vps4) Date: Sun, 15 Nov 2015 22:52:15 -0500 Subject: proxy_pass hide headers Message-ID: <168f3738f7e656984e2be2897cd37a62.NginxMailingListEnglish@forum.nginx.org> backend has many X-* headers how can i hide them all with simple way? thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262837,262837#msg-262837 From nginx-forum at nginx.us Mon Nov 16 07:04:48 2015 From: nginx-forum at nginx.us (vps4) Date: Mon, 16 Nov 2015 02:04:48 -0500 Subject: proxy_pass use proxy Message-ID: <39ef10c97f3c3dc71f18437959083897.NginxMailingListEnglish@forum.nginx.org> how can proxy_pass use socks5 proxy like 127.0.0.1:7070 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262839,262839#msg-262839 From ek at nginx.com Mon Nov 16 07:36:22 2015 From: ek at nginx.com (Ekaterina Kukushkina) Date: Mon, 16 Nov 2015 10:36:22 +0300 Subject: proxy_pass hide headers In-Reply-To: <168f3738f7e656984e2be2897cd37a62.NginxMailingListEnglish@forum.nginx.org> References: <168f3738f7e656984e2be2897cd37a62.NginxMailingListEnglish@forum.nginx.org> Message-ID: <338C04C4-6138-4F54-85E6-9ED9F7D15B78@nginx.com> Hi, You can use the more_clear_headers directive provided with the HeadersMore module. The documentation can be found here: https://github.com/openresty/headers-more-nginx-module#more_clear_headers > On 16 Nov 2015, at 06:52, vps4 wrote: > > backend has many X-* headers > how can i hide them all with simple way? > thanks > -- Ekaterina From reallfqq-nginx at yahoo.fr Mon Nov 16 11:13:36 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 16 Nov 2015 12:13:36 +0100 Subject: proxy_pass use proxy In-Reply-To: <39ef10c97f3c3dc71f18437959083897.NginxMailingListEnglish@forum.nginx.org> References: <39ef10c97f3c3dc71f18437959083897.NginxMailingListEnglish@forum.nginx.org> Message-ID: Using the same format as the question for the answer: socks is OSI-layer #5 HTTP is OSI-layer #7 nginx works at HTTP level --- *B. R.* On Mon, Nov 16, 2015 at 8:04 AM, vps4 wrote: > how can proxy_pass use socks5 proxy like 127.0.0.1:7070 > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,262839,262839#msg-262839 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 16 13:51:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Nov 2015 16:51:29 +0300 Subject: Selection of secure virtual servers In-Reply-To: <20151115125156.GH3351@daoine.org> References: <20151115125156.GH3351@daoine.org> Message-ID: <20151116135129.GS74233@mdounin.ru> Hello! On Sun, Nov 15, 2015 at 12:51:56PM +0000, Francis Daly wrote: > On Fri, Nov 13, 2015 at 03:37:28PM +0100, Jo? 
?d?m wrote: > > Hi there, > > > I would like to terminate TLS connections arriving at the default > > server, only serving requests with the correct host header, relying on > > SNI. > > SSL is fiddly. > > The selection of which https server{} to use is not as straightforward > as the selection of which http server{} to use. > > If you have one ssl server that you care about, and you do not know that > everything involved works fully with SNI, the "simple" (but inelegant) > approach might be to just have a single server{} block with ssl on for > this ip:port, and use > > if ($host != "example.com") { return 444; } > > there. There is no need to do this. With nginx server{} blocks are selected twice: by SNI, and then by HTTP Host header. This allows to happily use server{} blocks even when not using SNI. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 16 13:55:47 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Nov 2015 16:55:47 +0300 Subject: Selection of secure virtual servers In-Reply-To: References: Message-ID: <20151116135546.GT74233@mdounin.ru> Hello! On Fri, Nov 13, 2015 at 03:37:28PM +0100, Jo? ?d?m wrote: > Hi, > > I would like to terminate TLS connections arriving at the default > server, only serving requests with the correct host header, relying on > SNI. > > The configuration is as follows: > > server { > listen 80; > listen 443 ssl; > > return 444; > } > > server { > listen 80; > listen 443 ssl; > > server_name example.com; > > ssl_certificate_key private-key; > ssl_certificate certificate; > } > > The above, however results in all connections failing, including the > ones to example.com. The problem is that there is no certificate defined in the default server{} block. You should be able to find nginx complaints about this in the error log. Solution is to specify a certificate in the default server. Use a dummy one if you don't need a real one. -- Maxim Dounin http://nginx.org/ From admin at leboutique.com Mon Nov 16 15:34:10 2015 From: admin at leboutique.com (Artem Tomyuk) Date: Mon, 16 Nov 2015 17:34:10 +0200 Subject: Is it possible to use 2 variables (one from regexp, the other one from map definition)? Message-ID: Hi all. The mission is to conditionally serve *.webp. First of all i have map in my http section. map $http_accept $webp_suffix { default ""; "~*webp" ".webp"; } The purpose of this map is to check if the user agent supports the webp. The second thing we need somehow combine it in try_file..... There is an example from config. location ~ /resize/([-_0-9a-z]+)/([0-9a-z]+)/([0-9a-z]+)/([-_0-9a-z]+)\.jpg$ { add_header Vary Accept; And there i want to do something like: try_files /resize/$1/$2/$3/$4$webp_suffix /resize/$1/$2/$3/$4.jpg =404 But it is not working. The only thing is working is try_files $uri$webp_suffix $uri =404. But this combination is checking for image_name.jpg.webp. but i need to look for image_name.webp. The question is how to concat 2 variables (one from regexp, the other one from map definition) in try section. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 16 15:49:45 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Nov 2015 18:49:45 +0300 Subject: Is it possible to use 2 variables (one from regexp, the other one from map definition)? In-Reply-To: References: Message-ID: <20151116154945.GB74233@mdounin.ru> Hello! On Mon, Nov 16, 2015 at 05:34:10PM +0200, Artem Tomyuk wrote: > Hi all. 
>
> The mission is to conditionally serve *.webp.
>
> First of all i have map in my http section.
>
> map $http_accept $webp_suffix {
>     default "";
>     "~*webp" ".webp";
> }
>
> The purpose of this map is to check if the user agent supports the webp.
>
> The second thing we need somehow combine it in try_file.....
> There is an example from config.
>
> location ~
> /resize/([-_0-9a-z]+)/([0-9a-z]+)/([0-9a-z]+)/([-_0-9a-z]+)\.jpg$ {
>
> add_header Vary Accept;
>
> And there i want to do something like:
>
> try_files /resize/$1/$2/$3/$4$webp_suffix /resize/$1/$2/$3/$4.jpg =404
>
> But it is not working.

This is because $n variables ($1, $2, ...) are derived from the last regular expression executed. And $webp_suffix executes a regular expression, which screws up things.

Solution would be to use named captures instead, like this:

    location ~ ^/resize/(?<foo>[-_0-9a-z]+).jpg$ {
        try_files /resize/$foo$webp_suffix /resize/$foo.jpg =404;
        ...
    }

--
Maxim Dounin
http://nginx.org/

From edigarov at qarea.com Mon Nov 16 16:58:43 2015
From: edigarov at qarea.com (Gregory Edigarov)
Date: Mon, 16 Nov 2015 18:58:43 +0200
Subject: some problems with http:// -> https:// & phpMyAdmin
Message-ID: <564A0B43.2040807@qarea.com>

server {
    listen 80;
    server_name site.com www.site.com;
    location / {
        return 301 https://site.com$request_uri;
    }
}

server {
    listen 443;
    server_name site.com;

    [certificate setup skipped]

    # phpMyAdmin
    location /pma {
        proxy_set_header Host $http_host;
        proxy_set_header X-REQUEST_URI $request_uri;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8080; #Apache listens here
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 32k;
        proxy_buffers 8 16k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }

    # other stuff skipped, as it works correctly
}

When I test this config using:
curl -vvvik https://site.com/pma/index.php?token=a5255c4748a3ccbe87b566ba04cae84c -X POST -d 'pma_username=user&pma_password=somepass'
among other things I get:
.... skip .....
Location: https://site.com:80/pma/index.php?lang=en&token=25286554842f53e1bb0cfa4390caf154&phpMyAdmin=k6plh7hnvfg6aphfkqbg17p3or70474r

Why do I get port 80 in Location:, and how can I eliminate it?

From adam at jooadam.hu Mon Nov 16 18:05:55 2015
From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=)
Date: Mon, 16 Nov 2015 19:05:55 +0100
Subject: Selection of secure virtual servers
In-Reply-To: <20151116135546.GT74233@mdounin.ru>
References: <20151116135546.GT74233@mdounin.ru>
Message-ID:

Francis, Maxim,

Thank you for the answers.

Ádám

From nginx-forum at nginx.us Mon Nov 16 18:45:15 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 16 Nov 2015 13:45:15 -0500
Subject: some problems with http:// -> https:// & phpMyAdmin
In-Reply-To: <564A0B43.2040807@qarea.com>
References: <564A0B43.2040807@qarea.com>
Message-ID:

It looks like the backend has not been told yet that it is running on 443.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262860,262863#msg-262863

From nginx-forum at nginx.us Tue Nov 17 08:07:33 2015
From: nginx-forum at nginx.us (rgrraj)
Date: Tue, 17 Nov 2015 03:07:33 -0500
Subject: Absolute rather than relative times in expires directives
In-Reply-To:
References:
Message-ID: <8ee3603b3dcd722112fb19212f03e491.NginxMailingListEnglish@forum.nginx.org>

Hi all,

The topic was the same one I was looking for.
But we have specific idea of setting up the expire value. We need expires to be at every 2h hours at the same time to be on every 24hours, ie: midnight. Can you help me with how to configure the same with 2h as well as for midnight night. reason is that cdn purge happens at every midnight so if an end use access the page at might 10.50pm if we set only expires 2h, it will espire only for 00:50 which infact needed to be overrided by 24h hours / midnight in such cases. Can any body help us with a solution / idea. Thanks Govind Posted at Nginx Forum: https://forum.nginx.org/read.php?2,115406,262867#msg-262867 From mdounin at mdounin.ru Tue Nov 17 15:13:05 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Nov 2015 18:13:05 +0300 Subject: nginx-1.9.7 Message-ID: <20151117151305.GI74233@mdounin.ru> Changes with nginx 1.9.7 17 Nov 2015 *) Feature: the "nohostname" parameter of logging to syslog. *) Feature: the "proxy_cache_convert_head" directive. *) Feature: the $realip_remote_addr in the ngx_http_realip_module. *) Bugfix: the "expires" directive might not work when using variables. *) Bugfix: a segmentation fault might occur in a worker process when using HTTP/2; the bug had appeared in 1.9.6. *) Bugfix: if nginx was built with the ngx_http_v2_module it was possible to use the HTTP/2 protocol even if the "http2" parameter of the "listen" directive was not specified. *) Bugfix: in the ngx_http_v2_module. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Nov 17 16:24:52 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 17 Nov 2015 11:24:52 -0500 Subject: [nginx-announce] nginx-1.9.7 In-Reply-To: <20151117151310.GJ74233@mdounin.ru> References: <20151117151310.GJ74233@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.7 for Windows http://tiny.cc/nginxwin197 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Nov 17, 2015 at 10:13 AM, Maxim Dounin wrote: > Changes with nginx 1.9.7 17 Nov > 2015 > > *) Feature: the "nohostname" parameter of logging to syslog. > > *) Feature: the "proxy_cache_convert_head" directive. > > *) Feature: the $realip_remote_addr in the ngx_http_realip_module. > > *) Bugfix: the "expires" directive might not work when using variables. > > *) Bugfix: a segmentation fault might occur in a worker process when > using HTTP/2; the bug had appeared in 1.9.6. > > *) Bugfix: if nginx was built with the ngx_http_v2_module it was > possible to use the HTTP/2 protocol even if the "http2" parameter of > the "listen" directive was not specified. > > *) Bugfix: in the ngx_http_v2_module. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter.a.portante at gmail.com Tue Nov 17 18:17:23 2015 From: peter.a.portante at gmail.com (Peter Portante) Date: Tue, 17 Nov 2015 13:17:23 -0500 Subject: nginx-1.9.7 In-Reply-To: <20151117151305.GI74233@mdounin.ru> References: <20151117151305.GI74233@mdounin.ru> Message-ID: Great news! Installed and running on our site, nohostname works like a champ now. I now see "nginx" as the syslog "programname" in my queries. Thank you for such a quick turn-around! -peter On Tue, Nov 17, 2015 at 10:13 AM, Maxim Dounin wrote: > Changes with nginx 1.9.7 17 Nov > 2015 > > *) Feature: the "nohostname" parameter of logging to syslog. > > *) Feature: the "proxy_cache_convert_head" directive. > > *) Feature: the $realip_remote_addr in the ngx_http_realip_module. > > *) Bugfix: the "expires" directive might not work when using variables. > > *) Bugfix: a segmentation fault might occur in a worker process when > using HTTP/2; the bug had appeared in 1.9.6. > > *) Bugfix: if nginx was built with the ngx_http_v2_module it was > possible to use the HTTP/2 protocol even if the "http2" parameter of > the "listen" directive was not specified. > > *) Bugfix: in the ngx_http_v2_module. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Tue Nov 17 18:20:33 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 17 Nov 2015 10:20:33 -0800 Subject: nginx-1.9.7 In-Reply-To: <20151117151305.GI74233@mdounin.ru> References: <20151117151305.GI74233@mdounin.ru> Message-ID: Congrats on the new release and thanks for adding $realip_remote_addr new feature! How do I get the original client port? Do we need another new feature like $realip_remote_port? Frank On Tue, Nov 17, 2015 at 7:13 AM, Maxim Dounin wrote: > Changes with nginx 1.9.7 17 Nov > 2015 > > *) Feature: the "nohostname" parameter of logging to syslog. > > *) Feature: the "proxy_cache_convert_head" directive. > > *) Feature: the $realip_remote_addr in the ngx_http_realip_module. > > *) Bugfix: the "expires" directive might not work when using variables. > > *) Bugfix: a segmentation fault might occur in a worker process when > using HTTP/2; the bug had appeared in 1.9.6. > > *) Bugfix: if nginx was built with the ngx_http_v2_module it was > possible to use the HTTP/2 protocol even if the "http2" parameter of > the "listen" directive was not specified. > > *) Bugfix: in the ngx_http_v2_module. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 17 19:23:31 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Nov 2015 22:23:31 +0300 Subject: nginx-1.9.7 In-Reply-To: References: <20151117151305.GI74233@mdounin.ru> Message-ID: <20151117192331.GS74233@mdounin.ru> Hello! On Tue, Nov 17, 2015 at 10:20:33AM -0800, Frank Liu wrote: > How do I get the original client port? Do we need another new feature like > $realip_remote_port? No way as of now. If there is a good use case, we may consider adding something like $realip_remote_port. 
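A workaround that is possible today, assuming the thing in front is also nginx: have it pass the client's source port along in a header, next to the usual X-Real-IP, and read that header on the backend. The X-Client-Port name is made up for this sketch.

    # on the front proxy
    proxy_set_header X-Real-IP     $remote_addr;
    proxy_set_header X-Client-Port $remote_port;

    # on the backend, for example in a log format
    log_format ports '$remote_addr (peer $realip_remote_addr) client port $http_x_client_port';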
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Nov 17 20:13:46 2015 From: nginx-forum at nginx.us (lakarjail) Date: Tue, 17 Nov 2015 15:13:46 -0500 Subject: Nginx failing to ask for PEM SSL key password Message-ID: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> == CONTEXT == nginx version: nginx/1.6.2 Linux - 2.6.32-042stab111.11 #1 SMP Tue Sep 1 18:19:12 MSK 2015 x86_64 GNU/Linux While starting/restarting nginx with "service nginx start", no password is asked on the terminal and nginx fails to start. By checking journalctl, I receive the following error : --- nov. 17 ... systemd[1]: Failed to reset devices.list on /system.slice/nginx.service: No nov. 17 ... nginx[1441]: Enter PEM pass phrase: nov. 17 ... nginx[1441]: nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/mykeycert") failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting password error:0906A nov. 17 ... nginx[1441]: nginx: configuration file /etc/nginx/nginx.conf test failed nov. 17 ... systemd[1]: nginx.service: control process exited, code=exited status=1 nov. 17 ... systemd[1]: Failed to start A high performance web server and a revers --- Log files says that a PEM pass phrase has been asked, but that is not the case, nothing can be read from the terminal. Please note that : - nginx server starts correctly in command line (#nginx ), not using service. SSL configuration (like file locations and permissions seems therefore correct). Password is -that way- asked on terminal. - when doing the same SSL configuration with Apache2, the password is well required when starting/restarting Apache2 server with "service apache2 start". == Problem and Question == 1) I am not about to remove password of a cert key, since it's usually a bad security practise (considering the server get compromised, the cert will have to be revoked, etc.). On top of that, as explained, I never had problems on Apache2 using a password protected key Cert file. When I run Apache service, password is well asked. I can not consider the solution of removing the password, when other solutions work properly. I also checked ssl_password_file proposal. Storing the password in that way would set the security system as if no password was set on the key cert file. Therefore, I can't -as well- follow that solution. 2) What I fail to understand, if it is a bug, or a feature is the following : Nginx, when run as command line asks me for my cert key password and runs correctly. Why this behaviour can't be applied on a service ? The command: --- # nginx --- Asks for a password, runs webserver Nginx correctly. However : --- # service nginx start --- doesn't, password is not asked on terminal, producing the journalctl above mentionned. Why this difference of response ? Why an Apache2-like (that works in both situation) mechanism can't be introduced with Nginx ? Thank you in advance for your answer. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262900,262900#msg-262900 From r1ch+nginx at teamliquid.net Tue Nov 17 21:25:31 2015 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 17 Nov 2015 22:25:31 +0100 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> References: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> Message-ID: Running nginx directly works fine because nginx can see and use your terminal. 
(Re)starting nginx through systemd does not, because systemd doesn't provide a terminal (nor would your input reach it). See https://trac.nginx.org/nginx/ticket/433 On Tue, Nov 17, 2015 at 9:13 PM, lakarjail wrote: > > The command: > --- > # nginx > --- > Asks for a password, runs webserver Nginx correctly. However : > --- > # service nginx start > --- > doesn't, password is not asked on terminal, producing the journalctl above > mentionned. Why this difference of response ? Why an Apache2-like (that > works in both situation) mechanism can't be introduced with Nginx ? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at jooadam.hu Tue Nov 17 21:41:28 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Tue, 17 Nov 2015 22:41:28 +0100 Subject: Processing of proxied requests Message-ID: Hi, Can someone please tell me how much processing Nginx does on incoming requests before proxying? If I would use Nginx merely for TLS termination, how different would be the TCP stream arriving to the backend compared to the original TLS payload? Thank you, ?d?m From stephane at wirtel.be Tue Nov 17 22:48:46 2015 From: stephane at wirtel.be (Stephane Wirtel) Date: Tue, 17 Nov 2015 23:48:46 +0100 Subject: Forward request after operation with worker? Message-ID: <20151117224846.GA5619@sg1> Hi, With a request, is it possible to redirect to a running worker and if this one is not running, just enable it. I explain, I would like to implement a reverse proxy with Lua and OpenResty and Redis. Redis will store a mapping between the hostname and a tuple (ip of the worker:port). But the workers can be down, because unused in time. I was thinking to keep the request, execute a "light thread" in Lua (with a timeout of 1s). The light thread will active the worker. If the timeout is reached, we return an error, else we send the request to the worker. 1. Is it possible ? 2. I would like to know how will you make that, I don't know Lua, just used it in the past just for small scripts with imapfilter or a PoC with OpenResty. Thank you, Stephane -- St?phane Wirtel - http://wirtel.be - @matrixise From nginx-forum at nginx.us Wed Nov 18 09:34:20 2015 From: nginx-forum at nginx.us (lakarjail) Date: Wed, 18 Nov 2015 04:34:20 -0500 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: References: Message-ID: <74a85cfc2d72ec0ed205d34075613140.NginxMailingListEnglish@forum.nginx.org> I see your point there. Thank you for the link. It made me wondering why "SSLPassPhraseDialog" from Apache was not as well added on Nginx. Indeed, I am looking for a solution that wouldn't decrease the global security of my system. I can not consider leaving the password of a PEM key in cleartext like "ssl_password_file" solution proposed by Nginx, nor to remove the password of the key cert file for obvious and same reasons. What solution do I have then, solution that would be clean enough in terms of security, and to ensure that next nginx updates won't cause problems? Richard Stanway Wrote: ------------------------------------------------------- > Running nginx directly works fine because nginx can see and use your > terminal. (Re)starting nginx through systemd does not, because systemd > doesn't provide a terminal (nor would your input reach it). 
> > See https://trac.nginx.org/nginx/ticket/433 > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262900,262911#msg-262911 From nginx-forum at nginx.us Wed Nov 18 10:29:55 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Nov 2015 05:29:55 -0500 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <74a85cfc2d72ec0ed205d34075613140.NginxMailingListEnglish@forum.nginx.org> References: <74a85cfc2d72ec0ed205d34075613140.NginxMailingListEnglish@forum.nginx.org> Message-ID: Assuming the cert files are not kept open, you could store them in a protected vault with the password in them, place them (copy from vault) where nginx wants them, close vault, start nginx and overwrite/remove the files. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262900,262912#msg-262912 From nginx-forum at nginx.us Wed Nov 18 11:22:40 2015 From: nginx-forum at nginx.us (lakarjail) Date: Wed, 18 Nov 2015 06:22:40 -0500 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: References: <74a85cfc2d72ec0ed205d34075613140.NginxMailingListEnglish@forum.nginx.org> Message-ID: <775695974ac116c18a71d73f635a84e6.NginxMailingListEnglish@forum.nginx.org> Thank you for your answer. Could you please describe technically the "protected vault" for Debian you have in mind as a solution? If I understand you well, there is no simple solution in debian as we can have with Apache2 and its mod_ssl function 'SSLPassPhraseDialog'? That is quite surprising from Nginx/Debian support. itpp2012 Wrote: ------------------------------------------------------- > Assuming the cert files are not kept open, you could store them in a > protected vault with the password in them, place them (copy from > vault) where nginx wants them, close vault, start nginx and > overwrite/remove the files. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262900,262913#msg-262913 From nginx-forum at nginx.us Wed Nov 18 12:09:26 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Nov 2015 07:09:26 -0500 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <775695974ac116c18a71d73f635a84e6.NginxMailingListEnglish@forum.nginx.org> References: <74a85cfc2d72ec0ed205d34075613140.NginxMailingListEnglish@forum.nginx.org> <775695974ac116c18a71d73f635a84e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <63ccf0ba1cf16d3e1685e34c44e36055.NginxMailingListEnglish@forum.nginx.org> lakarjail Wrote: ------------------------------------------------------- > Thank you for your answer. > Could you please describe technically the "protected vault" for Debian > you have in mind as a solution? https://wiki.debian.org/TransparentEncryptionForHomeFolder Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262900,262914#msg-262914 From adam at jooadam.hu Wed Nov 18 12:29:48 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Wed, 18 Nov 2015 13:29:48 +0100 Subject: Processing of proxied requests In-Reply-To: References: Message-ID: > Can someone please tell me how much processing Nginx does on incoming > requests before proxying? If I would use Nginx merely for TLS > termination, how different would be the TCP stream arriving to the > backend compared to the original TLS payload? Nevermind, I wasn?t aware of the new streaming functionality. ? 
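The streaming functionality mentioned above can terminate TLS and hand the decrypted bytes to a backend unaltered. A minimal sketch, assuming a plain-TCP backend on 127.0.0.1:8000 and placeholder certificate paths:

    stream {
        upstream backend {
            server 127.0.0.1:8000;                            # assumed backend address
        }

        server {
            listen 443 ssl;
            ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
            ssl_certificate_key /etc/nginx/ssl/example.key;
            proxy_pass backend;                               # decrypted stream is relayed as-is
        }
    }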
From adam at jooadam.hu Wed Nov 18 12:38:26 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Wed, 18 Nov 2015 13:38:26 +0100 Subject: Missing SSL directives in ngx_stream_ssl_module Message-ID: Hi, There are 10 directives missing from ngx_stream_ssl_module compared to ngx_http_ssl_module: * ssl_buffer_size * ssl_client_certificate * ssl_crl * ssl_stapling * ssl_stapling_file * ssl_stapling_responder * ssl_stapling_verify * ssl_trusted_certificate * ssl_verify_client * ssl_verify_depth Is there a reason why these are not included? Can we expect them being added? Thanks, ?d?m From maxim at nginx.com Wed Nov 18 12:45:11 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 18 Nov 2015 15:45:11 +0300 Subject: Missing SSL directives in ngx_stream_ssl_module In-Reply-To: References: Message-ID: <564C72D7.5060207@nginx.com> On 11/18/15 3:38 PM, Jo? ?d?m wrote: > Hi, > > There are 10 directives missing from ngx_stream_ssl_module compared to > ngx_http_ssl_module: > > * ssl_buffer_size > * ssl_client_certificate > * ssl_crl > * ssl_stapling > * ssl_stapling_file > * ssl_stapling_responder > * ssl_stapling_verify > * ssl_trusted_certificate > * ssl_verify_client > * ssl_verify_depth > > Is there a reason why these are not included? Can we expect them being added? > The directives above cover several very different areas and use-cases. Any specific reasons why you asking? -- Maxim Konovalov From adam at jooadam.hu Wed Nov 18 12:57:13 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Wed, 18 Nov 2015 13:57:13 +0100 Subject: Missing SSL directives in ngx_stream_ssl_module In-Reply-To: <564C72D7.5060207@nginx.com> References: <564C72D7.5060207@nginx.com> Message-ID: Hi Maxim, > The directives above cover several very different areas and > use-cases. Any specific reasons why you asking? In this instance I would like to use Nginx for TLS termination only and receiving the underlying traffic unaltered, but I would like to provide the same functionality to browsers I could otherwise. I was investigating other solutions and first decided to use Stunnel, but then it turned out OCSP stapling is not yet implemented in Stunnel. So I was happy to discover the new streaming functionality in Nginx, only to realize it also misses functionality. ?d?m From francis at daoine.org Wed Nov 18 13:19:43 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Nov 2015 13:19:43 +0000 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <74a85cfc2d72ec0ed205d34075613140.NginxMailingListEnglish@forum.nginx.org> References: <74a85cfc2d72ec0ed205d34075613140.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151118131943.GI3351@daoine.org> On Wed, Nov 18, 2015 at 04:34:20AM -0500, lakarjail wrote: Hi there, > It made me wondering why > "SSLPassPhraseDialog" from Apache was not as well added on Nginx. I'm a bit unclear on this -- what extra security do you think that Apache's "SSLPassPhraseDialog" gives you? See below for my rationale. > Indeed, I am looking for a solution that wouldn't decrease the global > security of my system. I can not consider leaving the password of a PEM key > in cleartext like "ssl_password_file" solution proposed by Nginx, nor to > remove the password of the key cert file for obvious and same reasons. So - the file system permissions to block a random person from reading the un-password-protected key file are unsuitable. Ok. 
And the file system permissions to block a random person from running "cat passwordfile" and seeing "mypassword" are unsuitable. Ok. But the file system permissions to block a random person from running "./passwordscript" and seeing "mypassword" are not unsuitable? How does that work? ("cat passwordscript" or "strings passwordscript" might show the same thing; but the same user that would need to read passwordfile, would need to execute passwordscript. Unless I'm missing something.) > What solution do I have then, solution that would be clean enough in terms > of security, and to ensure that next nginx updates won't cause problems? I think that if you insist on manually typing the password each time nginx starts, you should make sure to manually start nginx, and be there to type the password. If your "service" command does not provide the facility to do that, use a different (or no) service command. Or - build your own system which does the equivalent of ./passwordscript > passwordfile service nginx start echo random > passwordfile at the appropriate times. I don't see how your system security is enhanced, if you do anything other than manually type in the password each time it is needed. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 18 13:28:04 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Nov 2015 13:28:04 +0000 Subject: Selection of secure virtual servers In-Reply-To: <20151116135129.GS74233@mdounin.ru> References: <20151115125156.GH3351@daoine.org> <20151116135129.GS74233@mdounin.ru> Message-ID: <20151118132804.GJ3351@daoine.org> On Mon, Nov 16, 2015 at 04:51:29PM +0300, Maxim Dounin wrote: > On Sun, Nov 15, 2015 at 12:51:56PM +0000, Francis Daly wrote: > > On Fri, Nov 13, 2015 at 03:37:28PM +0100, Jo? ?d?m wrote: Hi there, > > > I would like to terminate TLS connections arriving at the default > > > server, only serving requests with the correct host header, relying on > > > SNI. > > If you have one ssl server that you care about, and you do not know that > > everything involved works fully with SNI, the "simple" (but inelegant) > > approach might be to just have a single server{} block with ssl on for > > this ip:port, and use > > > > if ($host != "example.com") { return 444; } > > > > there. > > There is no need to do this. With nginx server{} blocks > are selected twice: by SNI, and then by HTTP Host header. This > allows to happily use server{} blocks even when not using SNI. Thanks for the correction. I guess I should get more practice with secure web sites :-) My thinking was: if the client did not do SNI, then it would get the certificate from the default server{}, and would choose not to continue the connection as that certificate probably would not include the preferred server name. But the initial requirement assumed that only SNI clients matter; and I guess that the default certificate could easily include the "real" server name anyway, to avoid that edge case. So I was wrong on that thinking too. 
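Putting those two points together, such a setup might look like the following minimal sketch; example.com and the certificate paths are placeholders, and this is only an illustration of the discussion, not a recipe from either poster:

    server {
        listen 443 ssl default_server;
        ssl_certificate     /etc/nginx/ssl/default.crt;       # placeholder
        ssl_certificate_key /etc/nginx/ssl/default.key;
        return 444;                                           # close requests for any other name
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
        # real site configuration goes here
    }

Requests whose SNI name or Host header match example.com are handled by the second server; everything else falls into the default server and is dropped with 444.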
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 18 13:45:10 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Nov 2015 13:45:10 +0000 Subject: Absolute rather than relative times in expires directives In-Reply-To: <8ee3603b3dcd722112fb19212f03e491.NginxMailingListEnglish@forum.nginx.org> References: <8ee3603b3dcd722112fb19212f03e491.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151118134510.GK3351@daoine.org> On Tue, Nov 17, 2015 at 03:07:33AM -0500, rgrraj wrote: Hi there, > The topic was the same one I was looking for. But we have specific idea of > setting up the expire value. We need expires to be at every 2h hours at the > same time to be on every 24hours, ie: midnight. Can you help me with how to > configure the same with 2h as well as for midnight night. http://nginx.org/r/expires shows how you can set the value, and shows that it accepts a variable. http://nginx.org/r/map shows how you can set one variable from the value of another. http://nginx.org/en/docs/http/ngx_http_core_module.html#variables shows the list of "always there" variables; two of them refer to "local time". So putting them all together: map $time_iso8601 $expires { default "2h"; ~T22 "@00h00"; ~T23 "@00h00"; } expires $expires; (in the right parts of the config file) might do some of what you want. Probably you will have to adjust some of those numbers to match "local midnight" or "UTC midnight"; and possibly you will want to adjust the regex to allow for the local timezone offset -- see the $time_iso8601 value. But that should be a start. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Nov 18 14:31:36 2015 From: nginx-forum at nginx.us (lakarjail) Date: Wed, 18 Nov 2015 09:31:36 -0500 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <20151118131943.GI3351@daoine.org> References: <20151118131943.GI3351@daoine.org> Message-ID: Thank you for your answer. I agree with you on all points concerning if it would or not improve the security. Francis Daly Wrote: ------------------------------------------------------- > On Wed, Nov 18, 2015 at 04:34:20AM -0500, lakarjail wrote: > I don't see how your system security is enhanced, if you do anything > other than manually type in the password each time it is needed. That is exactly what I am looking for, I am not looking for another solution. I wish I could launch Nginx as a service and "manually" type in the password. However the password requirement phase is not displayed using nginx debian service, though it is displayed with Apache service and its ssl_mod thanks to the method I was previously mentioning. a) I was just wondering (trying to understand understand) if there was any reason regarding why it does't work, and, in case was not implemented/made it available on purpose, why this option was chosen not to be implemented. b) I.e., in what way using the same kind of Apache SSLPassPhraseDialog (that force you to enter passphrase by hand, not storing any password on the local machine) would set the global certificate security level at same level than storing it in a file on the local machine (whatever permissions are set on this file). 
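For what it is worth, ssl_password_file does not have to point at a regular file holding the passphrase in clear text: a named pipe can be used instead, so the passphrase still has to be typed (written into the pipe) at every start. A minimal, untested sketch with placeholder paths:

    # created once before starting nginx, for example with:  mkfifo /etc/nginx/ssl/keypass.fifo
    ssl_certificate     /etc/nginx/ssl/site.crt;      # placeholder
    ssl_certificate_key /etc/nginx/ssl/site.key;      # placeholder
    ssl_password_file   /etc/nginx/ssl/keypass.fifo;  # nginx waits at startup until the passphrase is written into the pipe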
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262900,262923#msg-262923 From nginx-forum at nginx.us Wed Nov 18 14:44:42 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Nov 2015 09:44:42 -0500 Subject: nginx-1.9.7 In-Reply-To: <20151117192331.GS74233@mdounin.ru> References: <20151117192331.GS74233@mdounin.ru> Message-ID: <4b810e8d4be46c7493a08fc8e2556bd1.NginxMailingListEnglish@forum.nginx.org> A simple 'hack', compiles and works, untested if port is coming from correct pool. http://pastebin.com/ZarS2nQd Raw code http://pastebin.com/raw.php?i=ZarS2nQd Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262876,262924#msg-262924 From francis at daoine.org Wed Nov 18 15:40:51 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Nov 2015 15:40:51 +0000 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: References: <20151118131943.GI3351@daoine.org> Message-ID: <20151118154051.GL3351@daoine.org> On Wed, Nov 18, 2015 at 09:31:36AM -0500, lakarjail wrote: > Francis Daly Wrote: > ------------------------------------------------------- > > On Wed, Nov 18, 2015 at 04:34:20AM -0500, lakarjail wrote: Hi there, I think I fail at reading comprehension :-( > > I don't see how your system security is enhanced, if you do anything > > other than manually type in the password each time it is needed. > > That is exactly what I am looking for, I am not looking for another > solution. I wish I could launch Nginx as a service and "manually" type in > the password. > > However the password requirement phase is not displayed using nginx debian > service, though it is displayed with Apache service and its ssl_mod thanks > to the method I was previously mentioning. I had missed that: * when you type "service apache2 start", you are challenged to enter your passphrase. Combining that with: * when you type "service nginx start", you are not challenged to enter your passphrase then probably the useful thing to investigate is: what does "service apache2" do different from "service nginx"? Check the files that your "service" command runs in each case. If you copy the apache ones and change the names to nginx-test, do things work any better? > a) I was just wondering (trying to understand understand) if there was any > reason regarding why it does't work, and, in case was not implemented/made > it available on purpose, why this option was chosen not to be implemented. Right now, it is not clear to me what option is missing. Apache SSLPassPhraseDialog defaults to "builtin", which is the same as what nginx uses, I believe. If you can show the service or configuration difference that allows apache work while nginx fails, then it will be a good starting point. > b) I.e., in what way using the same kind of Apache SSLPassPhraseDialog (that > force you to enter passphrase by hand, not storing any password on the local > machine) would set the global certificate security level at same level than > storing it in a file on the local machine (whatever permissions are set on > this file). If you are entering your apache passphrase by hand, then you avoid storing it on the local machine. "SSLPassPhraseDialog" is, as I understand it, more usually used when you are *not* entering the passphrase by hand. My mistake. 
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 18 18:51:35 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Nov 2015 18:51:35 +0000 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <20151118154051.GL3351@daoine.org> References: <20151118131943.GI3351@daoine.org> <20151118154051.GL3351@daoine.org> Message-ID: <20151118185135.GM3351@daoine.org> On Wed, Nov 18, 2015 at 03:40:51PM +0000, Francis Daly wrote: > On Wed, Nov 18, 2015 at 09:31:36AM -0500, lakarjail wrote: > > Francis Daly Wrote: > > ------------------------------------------------------- > > > On Wed, Nov 18, 2015 at 04:34:20AM -0500, lakarjail wrote: Hi there, > > However the password requirement phase is not displayed using nginx debian > > service, though it is displayed with Apache service and its ssl_mod thanks > > to the method I was previously mentioning. > > a) I was just wondering (trying to understand understand) if there was any > > reason regarding why it does't work, and, in case was not implemented/made > > it available on purpose, why this option was chosen not to be implemented. > Apache SSLPassPhraseDialog defaults to "builtin", which is the same as > what nginx uses, I believe. A bit more googling suggests that perhaps your apache configuration uses SSLPassPhraseDialog configured to exec the tool systemd-ask-password, which is the thing that you type the passphrase in to. If so: stock nginx does not support that. There are three options I see that you could try. * don't use stock nginx. This could be "don't use nginx at all", or "use a patched version which does let you exec() to find the passphrase". * don't use systemd to launch nginx Any "service" launcher is used because it brings some benefits. I think that the main ones are: it runs as root, so you don't have to; it auto-starts the service on boot or on demand; it auto-re-starts the service if it exits uncleanly. There presumably are more benefits too, which can be enumerated and considered. Since you have to be there to type the password, numbers 2 and 3 do not apply. And if you were happy to go this route, number 1 might be worked around by other means such as sudo -- the details could be worked out if you wanted this. * write or use a wrapper script for nginx, which systemd can use I do not know if this exists already. If it does, hurray. Basically, the script would ask you for the password (or passwords, in sequence?) and then feed them to nginx when requested. I do not know if the architecture of systemd and nginx makes this possible -- someone to whom it is important would arrange that the testing happens. Good luck with it, f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Wed Nov 18 20:18:51 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 18 Nov 2015 21:18:51 +0100 Subject: Weird location choice Message-ID: WIth the following configuration: server { listen 80; listen [::]:80; location / { location ~* "^/[[:alnum:]]+$" { default_type text/plain; return 200 "KO"; } } location ~* "^/test" { default_type text/plain; return 200 "OK"; } } ?I noticed that calling example.org/test returns KO.? The location docs say the longest prefix match ('/') is remembered then regex are checked. Since the 'test' regex is on the same level, you would expect higher precedence for it compared to the embedded 'alnum' one, which is one level deeper. 
If secondary-level regex locations have the same priority as others, you are basically doomed trying to prioritize regex locations between each others using prefix locations at an upper-level. ?Where am I wrong?? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Nov 18 20:43:28 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Nov 2015 23:43:28 +0300 Subject: Weird location choice In-Reply-To: References: Message-ID: <20151118204328.GY74233@mdounin.ru> Hello! On Wed, Nov 18, 2015 at 09:18:51PM +0100, B.R. wrote: > WIth the following configuration: > server { > listen 80; > listen [::]:80; > > location / { > location ~* "^/[[:alnum:]]+$" { > default_type text/plain; > return 200 "KO"; > } > } > > location ~* "^/test" { > default_type text/plain; > return 200 "OK"; > } > } > > ?I noticed that calling example.org/test returns KO.? > > The location docs > say the > longest prefix match ('/') is remembered then regex are checked. Since the > 'test' regex is on the same level, you would expect higher precedence for > it compared to the embedded 'alnum' one, which is one level deeper. When using nested locations within the location "/" (the longest prefix location found at top level) nginx will repeat location search: it will search for a longest prefix location (won't find anything as there are no prefix locations within "/"), and then will search for regex locations (will find one with "alnum"). As a regex location was found, searching stops. > If secondary-level regex locations have the same priority as others, you > are basically doomed trying to prioritize regex locations between each > others using prefix locations at an upper-level. Things are not that bad, as only locations within the longest prefix location are considered. And, actually, when using regex locations you are doomed anyway. Here is Igor's talk about configuration scalability, in particular he talks about regex locations and why it's better to avoid them (I believe you've been there, actually): http://www.youtube.com/watch?v=YWRYbLKsS0I -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Wed Nov 18 21:24:04 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 18 Nov 2015 22:24:04 +0100 Subject: Weird location choice In-Reply-To: <20151118204328.GY74233@mdounin.ru> References: <20151118204328.GY74233@mdounin.ru> Message-ID: Thanks Maxim, Well, regex location for this particular exemple is indeed useless, but might prove very useful when URI description is not trivial. Too bad they are that flawed. I remember this talk very well indeed and think about it almost daily when dealing with nginx configuration. I had hopes for regex locations... ;o) --- *B. R.* On Wed, Nov 18, 2015 at 9:43 PM, Maxim Dounin wrote: > Hello! > > On Wed, Nov 18, 2015 at 09:18:51PM +0100, B.R. wrote: > > > WIth the following configuration: > > server { > > listen 80; > > listen [::]:80; > > > > location / { > > location ~* "^/[[:alnum:]]+$" { > > default_type text/plain; > > return 200 "KO"; > > } > > } > > > > location ~* "^/test" { > > default_type text/plain; > > return 200 "OK"; > > } > > } > > > > ?I noticed that calling example.org/test returns KO.? > > > > The location docs > > say > the > > longest prefix match ('/') is remembered then regex are checked. Since > the > > 'test' regex is on the same level, you would expect higher precedence for > > it compared to the embedded 'alnum' one, which is one level deeper. 
> > When using nested locations within the location "/" (the longest > prefix location found at top level) nginx will repeat location > search: it will search for a longest prefix location (won't find > anything as there are no prefix locations within "/"), and then > will search for regex locations (will find one with "alnum"). As > a regex location was found, searching stops. > > > If secondary-level regex locations have the same priority as others, you > > are basically doomed trying to prioritize regex locations between each > > others using prefix locations at an upper-level. > > Things are not that bad, as only locations within the longest > prefix location are considered. And, actually, when using regex > locations you are doomed anyway. > > Here is Igor's talk about configuration scalability, in particular > he talks about regex locations and why it's better to avoid them > (I believe you've been there, actually): > > http://www.youtube.com/watch?v=YWRYbLKsS0I > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Wed Nov 18 22:02:38 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 18 Nov 2015 23:02:38 +0100 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> References: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi. Am 17-11-2015 21:13, schrieb lakarjail: [snipp] > Please note that : > > - nginx server starts correctly in command line (#nginx ), not using > service. SSL configuration (like file locations and permissions seems > therefore correct). Password is -that way- asked on terminal. > - when doing the same SSL configuration with Apache2, the password > is > well required when starting/restarting Apache2 server with "service > apache2 > start". > > == Problem and Question == > > > 1) I am not about to remove password of a cert key, since it's usually > a > bad security practise (considering the server get compromised, the cert > will > have to be revoked, etc.). > On top of that, as explained, I never had problems on Apache2 using a > password protected key Cert file. When I run Apache service, password > is > well asked. I can not consider the solution of removing the password, > when > other solutions work properly. > I also checked ssl_password_file proposal. Storing the password in that > way > would set the security system as if no password was set on the key cert > file. Therefore, I can't -as well- follow that solution. > > 2) What I fail to understand, if it is a bug, or a feature is the > following > : Nginx, when run as command line asks me for my cert key password and > runs > correctly. Why this behaviour can't be applied on a service ? > The command: > --- > # nginx > --- > Asks for a password, runs webserver Nginx correctly. However : > --- > # service nginx start > --- > doesn't, password is not asked on terminal, producing the journalctl > above > mentionned. Why this difference of response ? Why an Apache2-like (that > works in both situation) mechanism can't be introduced with Nginx ? Do you know this directive? 
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_password_file Br Aleks From nginx-forum at nginx.us Thu Nov 19 02:40:45 2015 From: nginx-forum at nginx.us (semseoymas) Date: Wed, 18 Nov 2015 21:40:45 -0500 Subject: Nginx cache 1 KEY into multiple cache files (cache not running) Message-ID: <944ebb54b8740335136b2abbda5aa579.NginxMailingListEnglish@forum.nginx.org> Hello all! First, the specs: nginx version: nginx/1.8.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --with-http_flv_module --with-ipv6 --with-http_mp4_module --with-pcre=/usr/local/src/publicnginx/pcre-8.35 --sbin-path=/usr/local/sbin --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-http_realip_module --with-http_ssl_module --http-client-body-temp-path=/tmp/nginx_client --http-proxy-temp-path=/tmp/nginx_proxy --http-fastcgi-temp-path=/tmp/nginx_fastcgi --with-http_stub_status_module --add-module=/usr/local/src/publicnginx/ngx_cache_purge --with-threads (everything as usual, but --with-threads) The problem here: if people asks nginx for the same request_uri, it will create multiple files!! this way, the cache is not running ok... You can see with this terminal output: root at hyperserver [/var/nginx.cache/xxxxx]# find -type f -exec grep -a "KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c" {} \; KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c root at hyperserver [/var/nginx.cache/xxxxx]# find -type f -exec grep -la "KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c" {} \; ./7/b2/11b565d1eec91c3b1c45b95b26d8fb27 ./7/f6/724563e7ef37e878a929ba2b112b8f67 ./a/4d/4a55e3ebe2d00f8fe3dad638b5fbc4da ./e/1d/5cbad6ee61ad0025139302e63ae171de ./3/30/262bab653c221f922694982aef6e2303 ./c/85/2163f57c0724f7b753884658ac98385c ./2/66/7c2ddb49631a53a4bdc8f599f8cc9662 root at hyperserver [/var/nginx.cache/xxxxx]# echo -n "/wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c" | md5sum 2163f57c0724f7b753884658ac98385c - root at hyperserver [/var/nginx.cache/xxxxx]# find -type f -exec grep -la "KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2015/11/Amir.jpg&h=50&w=50&a=c" {} \; ./7/b2/11b565d1eec91c3b1c45b95b26d8fb27 ./7/f6/724563e7ef37e878a929ba2b112b8f67 ./a/4d/4a55e3ebe2d00f8fe3dad638b5fbc4da ./e/1d/5cbad6ee61ad0025139302e63ae171de ./3/30/262bab653c221f922694982aef6e2303 ./3/17/5869a50ae737e3985a0052634f44c173 ./c/85/2163f57c0724f7b753884658ac98385c ./2/66/7c2ddb49631a53a4bdc8f599f8cc9662 As you see, for the same KEY, nginx is creating multiple files, one of them with the common "md5sum" path/name, the rest could not understand what calculation is done to name them.... The config is usual also: proxy_cache_path /var/nginx.cache/xxxxx levels=1:2 keys_zone=xxxxx:3m max_size=4G inactive=90d; and... 
location ^~ /wp-content/themes/sahifa/timthumb.php { expires 90d; proxy_pass http://sharedip; include proxy.inc; proxy_cache xxxxx; proxy_cache_key $cache_uri; proxy_cache_valid 200 90d; proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_504 http_404; } my proxy server (apache) is all time proccessing the same php codes... Somebody could give me a clue about what is happening here??? What could I do? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262943,262943#msg-262943 From nginx-forum at nginx.us Thu Nov 19 04:31:18 2015 From: nginx-forum at nginx.us (semseoymas) Date: Wed, 18 Nov 2015 23:31:18 -0500 Subject: Nginx cache 1 KEY into multiple cache files (cache not running) In-Reply-To: <944ebb54b8740335136b2abbda5aa579.NginxMailingListEnglish@forum.nginx.org> References: <944ebb54b8740335136b2abbda5aa579.NginxMailingListEnglish@forum.nginx.org> Message-ID: <31c8419d2265745286113b6dc2c694e0.NginxMailingListEnglish@forum.nginx.org> Some more interesting data: sometimes it HIT cache, sometimes not..... depending of referer???? I do not know how... I cannot imagine why. (consecutive log lines filtering by uri.... I did not remove any file or cache, even not restarted nginx at all) 107.167.108.187 - - [19/Nov/2015:05:22:22 +0100] "GET /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c HTTP/1.1" 200 2082 "http://xxxxxx.com/2014/09/significado-del-nombre-daniela/" "Opera/9.80 (X11; Linux zvav; U; es) Presto/2.12.423 Version/12.16" Exec: "0.020" Conn: "6" Mobile: "-" Cache: MISS (0.020 sec) - Upstream: "200" 107.167.108.187 - - [19/Nov/2015:05:22:23 +0100] "GET /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c HTTP/1.1" 200 2082 "http://xxxxxx.com/2014/09/significado-del-nombre-daniela/" "Opera/9.80 (X11; Linux zvav; U; es) Presto/2.12.423 Version/12.16" Exec: "0.000" Conn: "4" Mobile: "-" Cache: HIT (- sec) - Upstream: "-" 107.167.108.195 - - [19/Nov/2015:05:23:11 +0100] "GET /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c HTTP/1.1" 200 2082 "http://xxxxxx.com/2014/09/significado-del-nombre-daniela/" "Opera/9.80 (X11; Linux zvav; U; es) Presto/2.12.423 Version/12.16" Exec: "0.037" Conn: "8" Mobile: "-" Cache: HIT (- sec) - Upstream: "-" 37.132.144.110 - - [19/Nov/2015:05:24:28 +0100] "GET /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c HTTP/1.1" 200 2082 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0" Exec: "0.024" Conn: "1" Mobile: "-" Cache: MISS (0.023 sec) - Upstream: "200" 37.132.144.110 - - [19/Nov/2015:05:24:31 +0100] "GET /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0" Exec: "0.000" Conn: "2" Mobile: "-" Cache: HIT (- sec) - Upstream: "-" 107.167.108.195 - - [19/Nov/2015:05:24:45 +0100] "GET /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c HTTP/1.1" 200 2082 "http://xxxxxx.com/2014/09/significado-del-nombre-daniela/" "Opera/9.80 (Android; Opera Mini/11.0.1912/37.7126; U; es) Presto/2.12.423 Version/12.16" Exec: "0.038" Conn: "6" Mobile: "-" Cache: MISS (0.038 sec) - Upstream: "200" and the files: root at hyperserver [/var/nginx.cache/xxxx]# find -type f -exec grep -a "KEY: 
/wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c" {} \; KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c root at hyperserver [/var/nginx.cache/xxxx]# find -type f -exec grep -la "KEY: /wp-content/themes/sahifa/timthumb.php?src=/wp-content/uploads/2014/06/Clotilde.jpg&h=50&w=50&a=c" {} \; ./7/d2/39815042547bf336bfe0c8de84e17d27 ./6/73/c1463df2aaa737d2616ba5016c64f736 ./2/9c/edda94a8edf68a32eb83011e918a59c2 As you see, multiple files for the same KEY (request_uri)... sometimes MISS, sometimes HIT... I cannot imagine why. Please, could someone help me or give a clue? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262943,262945#msg-262945 From gfrankliu at gmail.com Thu Nov 19 06:30:20 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 18 Nov 2015 22:30:20 -0800 Subject: nginx-1.9.7 In-Reply-To: <4b810e8d4be46c7493a08fc8e2556bd1.NginxMailingListEnglish@forum.nginx.org> References: <20151117192331.GS74233@mdounin.ru> <4b810e8d4be46c7493a08fc8e2556bd1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Great, thanks! On Wednesday, November 18, 2015, itpp2012 wrote: > A simple 'hack', compiles and works, untested if port is coming from > correct > pool. > > http://pastebin.com/ZarS2nQd > Raw code http://pastebin.com/raw.php?i=ZarS2nQd > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,262876,262924#msg-262924 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Thu Nov 19 06:36:07 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 18 Nov 2015 22:36:07 -0800 Subject: Timestamp in log Message-ID: I understand nginx writes the log when request completes, but is time_local (or time_iso8601, msec) representing the time that the request was received or when the request completes and log written? I know Apache and AWS ELB both log the request received time, and want to see nginx works the same. Thanks Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Nov 19 10:56:37 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 19 Nov 2015 11:56:37 +0100 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: References: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> Message-ID: Aleks: Have you even read the 1st message from lakarjail? ?(s)he said he had a look at it. It seems (s)he only wants? interactive solutions with the password being written nowhere. Although the reasoning appearing strange to me (someone needs to be there in case of unexpected reload/restart, otherwise, as long as it is stored and extracted automatically, whatever storage solutions being chosen, it ends up all the same to me), (s)he seems to be knowing what (s)he wants. --- *B. R.* On Wed, Nov 18, 2015 at 11:02 PM, Aleksandar Lazic wrote: > Hi. > > Am 17-11-2015 21:13, schrieb lakarjail: > > [snipp] > > > Please note that : >> >> - nginx server starts correctly in command line (#nginx ), not using >> service. 
SSL configuration (like file locations and permissions seems >> therefore correct). Password is -that way- asked on terminal. >> - when doing the same SSL configuration with Apache2, the password is >> well required when starting/restarting Apache2 server with "service >> apache2 >> start". >> >> == Problem and Question == >> >> >> 1) I am not about to remove password of a cert key, since it's usually a >> bad security practise (considering the server get compromised, the cert >> will >> have to be revoked, etc.). >> On top of that, as explained, I never had problems on Apache2 using a >> password protected key Cert file. When I run Apache service, password is >> well asked. I can not consider the solution of removing the password, when >> other solutions work properly. >> I also checked ssl_password_file proposal. Storing the password in that >> way >> would set the security system as if no password was set on the key cert >> file. Therefore, I can't -as well- follow that solution. >> >> 2) What I fail to understand, if it is a bug, or a feature is the >> following >> : Nginx, when run as command line asks me for my cert key password and >> runs >> correctly. Why this behaviour can't be applied on a service ? >> The command: >> --- >> # nginx >> --- >> Asks for a password, runs webserver Nginx correctly. However : >> --- >> # service nginx start >> --- >> doesn't, password is not asked on terminal, producing the journalctl above >> mentionned. Why this difference of response ? Why an Apache2-like (that >> works in both situation) mechanism can't be introduced with Nginx ? >> > > Do you know this directive? > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_password_file > > Br Aleks > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 19 13:10:36 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Nov 2015 16:10:36 +0300 Subject: Timestamp in log In-Reply-To: References: Message-ID: <20151119131036.GB74233@mdounin.ru> Hello! On Wed, Nov 18, 2015 at 10:36:07PM -0800, Frank Liu wrote: > I understand nginx writes the log when request completes, but is time_local > (or time_iso8601, msec) representing the time that the request was received > or when the request completes and log written? I know Apache and AWS ELB > both log the request received time, and want to see nginx works the same. Both are current time, i.e., the time of the log write. This is explicitly documented here at http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format: : $msec : time in seconds with a milliseconds resolution at the time of the : log write If you want to obtain time of the request start, subtract $request_time. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Nov 19 13:26:02 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Nov 2015 16:26:02 +0300 Subject: Nginx cache 1 KEY into multiple cache files (cache not running) In-Reply-To: <944ebb54b8740335136b2abbda5aa579.NginxMailingListEnglish@forum.nginx.org> References: <944ebb54b8740335136b2abbda5aa579.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151119132602.GD74233@mdounin.ru> Hello! On Wed, Nov 18, 2015 at 09:40:45PM -0500, semseoymas wrote: > First, the specs: > nginx version: nginx/1.8.0 [...] 
> The problem here: if people asks nginx for the same request_uri, it will > create multiple files!! this way, the cache is not running ok... Multiple cache files for the same key can be created if a backend response uses the Vary mechanism to allow multiple resource variants. It is supported and taken into account when caching since nginx 1.7.7, http://nginx.org/en/CHANGES: *) Change: now nginx takes into account the "Vary" header line in a backend response while caching. If responses are really the same, consider removing Vary from backend responses. If it not possible for some reason, you can use proxy_ignore_headers to stop nginx from handling Vary in responses, e.g.: proxy_ignore_headers Vary; Some additional details can be found in the documentation here: http://nginx.org/r/proxy_ignore_headers -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Thu Nov 19 13:40:45 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 19 Nov 2015 16:40:45 +0300 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: References: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2104498.X7JYKKuxLQ@vbart-workstation> On Thursday 19 November 2015 11:56:37 B.R. wrote: > Aleks: Have you even read the 1st message from lakarjail? > > ?(s)he said he had a look at it. It seems (s)he only wants? interactive > solutions with the password being written nowhere. > Although the reasoning appearing strange to me (someone needs to be there > in case of unexpected reload/restart, otherwise, as long as it is stored > and extracted automatically, whatever storage solutions being chosen, it > ends up all the same to me), (s)he seems to be knowing what (s)he wants. [..] "named pipe can also be used instead of a file" - doesn't that help to make interactive solution? wbr, Valentin V. Bartenev From guilherme.e at gmail.com Thu Nov 19 14:12:18 2015 From: guilherme.e at gmail.com (Guilherme) Date: Thu, 19 Nov 2015 12:12:18 -0200 Subject: Nginx cache 1 KEY into multiple cache files (cache not running) In-Reply-To: <20151119132602.GD74233@mdounin.ru> References: <944ebb54b8740335136b2abbda5aa579.NginxMailingListEnglish@forum.nginx.org> <20151119132602.GD74233@mdounin.ru> Message-ID: Maxim, In these cases, when Vary is present in response headers and generate multiple cache files for the same key, how nginx determines cache file names for variants? Tks, Guilherme On Thu, Nov 19, 2015 at 11:26 AM, Maxim Dounin wrote: > Hello! > > On Wed, Nov 18, 2015 at 09:40:45PM -0500, semseoymas wrote: > > > First, the specs: > > nginx version: nginx/1.8.0 > > [...] > > > The problem here: if people asks nginx for the same request_uri, it will > > create multiple files!! this way, the cache is not running ok... > > Multiple cache files for the same key can be created if a backend > response uses the Vary mechanism to allow multiple resource > variants. It is supported and taken into account when caching > since nginx 1.7.7, http://nginx.org/en/CHANGES: > > *) Change: now nginx takes into account the "Vary" header line in a > backend response while caching. > > If responses are really the same, consider removing Vary from > backend responses. 
If it not possible for some reason, you can > use proxy_ignore_headers to stop nginx from handling Vary in > responses, e.g.: > > proxy_ignore_headers Vary; > > Some additional details can be found in the documentation here: > > http://nginx.org/r/proxy_ignore_headers > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Thu Nov 19 14:45:58 2015 From: me at myconan.net (nanaya) Date: Thu, 19 Nov 2015 23:45:58 +0900 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <2104498.X7JYKKuxLQ@vbart-workstation> References: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> <2104498.X7JYKKuxLQ@vbart-workstation> Message-ID: <1447944358.2241930.444358025.731460D5@webmail.messagingengine.com> Hi On Thu, Nov 19, 2015, at 10:40 PM, Valentin V. Bartenev wrote: > "named pipe can also be used instead of a file" - doesn't that help to > make > interactive solution? > Considering the admin must be ready when the "service" starts, I'm wondering the benefit of using systemd or other service management. Typing "nginx" to start it isn't all that hard (shorter than typing "service nginx start", even) and the service can't be automatically restarted if it's dead anyway. From mdounin at mdounin.ru Thu Nov 19 15:59:34 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Nov 2015 18:59:34 +0300 Subject: Nginx cache 1 KEY into multiple cache files (cache not running) In-Reply-To: References: <944ebb54b8740335136b2abbda5aa579.NginxMailingListEnglish@forum.nginx.org> <20151119132602.GD74233@mdounin.ru> Message-ID: <20151119155934.GG74233@mdounin.ru> Hello! On Thu, Nov 19, 2015 at 12:12:18PM -0200, Guilherme wrote: > In these cases, when Vary is present in response headers and generate > multiple cache files for the same key, how nginx determines cache file > names for variants? Secondary keys are calculated as an MD5 hash of the main key and headers of a request listed in Vary. Details can be found in the source code, see the ngx_http_file_cache_vary() function: http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_file_cache.c#l1043 -- Maxim Dounin http://nginx.org/ From al-nginx at none.at Thu Nov 19 19:14:46 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 19 Nov 2015 20:14:46 +0100 Subject: Nginx failing to ask for PEM SSL key password In-Reply-To: <2104498.X7JYKKuxLQ@vbart-workstation> References: <3fcea552522a370fcff86a291b4fe4af.NginxMailingListEnglish@forum.nginx.org> <2104498.X7JYKKuxLQ@vbart-workstation> Message-ID: Hi. Am 19-11-2015 14:40, schrieb Valentin V. Bartenev: > > On Thursday 19 November 2015 11:56:37 B.R. wrote: >> Aleks: Have you even read the 1st message from lakarjail? Well a little bit. >> ?(s)he said he had a look at it. It seems (s)he only wants? >> interactive >> solutions with the password being written nowhere. >> Although the reasoning appearing strange to me (someone needs to be >> there >> in case of unexpected reload/restart, otherwise, as long as it is >> stored >> and extracted automatically, whatever storage solutions being chosen, >> it >> ends up all the same to me), (s)he seems to be knowing what (s)he >> wants. > [..] > > "named pipe can also be used instead of a file" - doesn't that help to > make > interactive solution? 
Thumbs up ;-)
https://startpage.com/do/search?q=named+pipe+can+also+be+used+instead+of+a+file

Just an idea:
- create a pipe
- point ssl_password_file to this pipe

Until there is something to read from the pipe, the process will wait for data. There are countless ways to write something into this pipe, imho. This idea is untested but maybe worth testing ;-)

BR Aleks

From nginx-forum at nginx.us  Thu Nov 19 23:46:47 2015
From: nginx-forum at nginx.us (semseoymas)
Date: Thu, 19 Nov 2015 18:46:47 -0500
Subject: Nginx cache 1 KEY into multiple cache files (cache not running)
In-Reply-To: <20151119132602.GD74233@mdounin.ru>
References: <20151119132602.GD74233@mdounin.ru>
Message-ID:

Maxim Dounin Wrote:
-------------------------------------------------------
> Multiple cache files for the same key can be created if a backend
> response uses the Vary mechanism to allow multiple resource
> variants. It is supported and taken into account when caching
> since nginx 1.7.7, http://nginx.org/en/CHANGES:

THANKS THANKS so much! The solution I went with:

- use mod_deflate only for the Apache instance listening on 443 (it is a cPanel host)
- deactivate mod_deflate for Apache on port 8081 (now there is no Vary at all)
- nginx uses Apache on 8081 as the proxy_cache backend
- now nginx is the one serving cached elements with gzip if required

For me, this is much better than Apache processing the same PHP code twice and nginx storing the result twice (or more) due to Vary. Thanks again.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,262943,262976#msg-262976

From reallfqq-nginx at yahoo.fr  Fri Nov 20 18:01:45 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Fri, 20 Nov 2015 19:01:45 +0100
Subject: Precisions about $upstream_response_time
Message-ID:

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_response_time

Does this represent the time from the end of forwarding the request to the upstream until it starts to answer (kind of a TTFB)? Or does it run until the whole response has been received, excluding the time taken to forward it to the client? Or maybe something else?

Thanks,
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru  Fri Nov 20 18:27:48 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 20 Nov 2015 21:27:48 +0300
Subject: Precisions about $upstream_response_time
In-Reply-To:
References:
Message-ID: <20151120182748.GO74233@mdounin.ru>

Hello!

On Fri, Nov 20, 2015 at 07:01:45PM +0100, B.R. wrote:

> Does this represent the time from the end of forwarding the request to the
> upstream until it starts to answer (kind of a TTFB)? Or does it run until
> the whole response has been received, excluding the time taken to forward
> it to the client? Or maybe something else?

It's the time taken to get the full response from an upstream server, not just the time to first byte. It doesn't take into account anything related to sending the response to the client, though it should be kept in mind that in some cases a client can delay reading a response from an upstream server, see http://nginx.org/r/proxy_buffering.

If you want something similar to TTFB, take a look at $upstream_header_time.
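A minimal log_format sketch that records the timings discussed here side by side; the format name and log path are placeholders:

    # $upstream_header_time   - until the upstream response header arrives (close to TTFB)
    # $upstream_response_time - until the full upstream response has been received
    log_format upstream_timing '$remote_addr [$time_local] "$request" $status '
                               'rt=$request_time uht=$upstream_header_time urt=$upstream_response_time';

    access_log /var/log/nginx/timing.log upstream_timing;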
-- Maxim Dounin http://nginx.org/ From sylvain.bertrand at gmail.com Sun Nov 22 04:05:25 2015 From: sylvain.bertrand at gmail.com (Sylvain BERTRAND) Date: Sun, 22 Nov 2015 15:05:25 +1100 Subject: 403 forbidden with lynx www browser Message-ID: <20151122040524.GD1496@privacy> Hi, On many nginx powered web sites, I must not send the user agent or I'll get a 403 forbidden. -- Sylvain From dewanggaba at xtremenitro.org Sun Nov 22 12:43:00 2015 From: dewanggaba at xtremenitro.org (Dewangga) Date: Sun, 22 Nov 2015 19:43:00 +0700 Subject: fastcgi cache on dynamic pages Message-ID: <5651B854.3070802@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hello! I'm struggling with fastcgi_cache, the cache will be valid and got 'HIT' status if I'm writing simple PHP code. Like `phpinfo` or printing `date`. My fastcgi parameter are looks like this http://fpaste.org/293267/19588614/raw/ I send the request using HEAD : $ HEAD domain.tld/artikel/berita/wakil-gubernur-yogyakarta-paku-alam-ix-mangkat 200 OK Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Connection: close Date: Sun, 22 Nov 2015 12:39:19 GMT Pragma: no-cache Server: nginx Vary: Accept-Encoding Content-Type: text/html; charset=UTF-8 Expires: Thu, 19 Nov 1981 08:52:00 GMT Client-Date: Sun, 22 Nov 2015 12:39:19 GMT Client-Peer: 103.52.3.18:80 Client-Response-Num: 1 Set-Cookie: PHPSESSID=8eae558pdl8o0p78k6s9n8nkv2; path=/ X-Backend-Cache: MISS X-Response: 1448195959.467 I'm curious, the framework (Laravel 5.1) set header `Cache-Control` to not cache every request. If yes, can nginx override the header? I've set it, but it comes double. If try using `fastcgi_ignore_headers` and `fastcgi_hide_header`, but still no luck. The response from nginx still MISS. Any hints and helps are appreciated. :) Regards, Dewangga -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (MingW32) iQEcBAEBAgAGBQJWUbhTAAoJEF1+odKB6YIxKrEH/jcuSey9sUCbgSfVtejDDt+V qfa187JDyzoG6xjIy2WsgCCvuViBAQA0UB0exKJECnO/Mk3Y2/N2n0ZyImpzLta9 l27oAtkNGMR/fb4Qw3VBAUasxbJ+ZmAaabyc801LYXMwKJ43dSGd2/H0TTQaZM1p JRMTLI0GQXn/TnHL/v0wTbKOQgYx4xvzrn3UxzmLV5SUT2A0bMOfz152SAPRuFG2 akvsDUZB3GetKjRPbMwlW2qc6MGW0olzdQdqnGbAUKAAXd2XtS/XTDU67H3QMgCt 0pXFtLR7VaRcFQb4v9LjxFnrfUp4KSZ/nnjkDv+hXAUh9lc3mzOh0YH0qrto120= =N6Un -----END PGP SIGNATURE----- From francis at daoine.org Sun Nov 22 17:25:11 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 22 Nov 2015 17:25:11 +0000 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151122040524.GD1496@privacy> References: <20151122040524.GD1496@privacy> Message-ID: <20151122172511.GO3351@daoine.org> On Sun, Nov 22, 2015 at 03:05:25PM +1100, Sylvain BERTRAND wrote: Hi there, > On many nginx powered web sites, I must not send the user agent or I'll get a > 403 forbidden. The administrators of those web sites do not want you as a customer. Probably they have configured their servers to deny any request that includes "libwww" in the User-Agent. You could contact them and ask them to allow your preferred user-agent; you could adjust your user-agent string so that it does not match their presumed block list until they change it; or you could not visit those sites and encourage others not to visit them too. If you tell them why you are not visiting their sites, maybe they will change their configuration to welcome you. But unless the nginx.org (or possibly nginx.com) web sites are among the sites that block you, there's probably not much that readers of this list can do about it. 
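For what it's worth, such a block is usually just a few lines of site-level
configuration, along these lines (purely illustrative; nothing of the sort
ships with stock nginx):

    # inside a server block, added by the site operator
    if ($http_user_agent ~* "libwww|lynx") {
        return 403;
    }

Lynx advertises itself with a "libwww-FM" component in its User-Agent, so it
is easily caught by patterns like that even when the operator only meant to
stop crawlers.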
f
-- 
Francis Daly        francis at daoine.org

From sylvain.bertrand at gmail.com  Sun Nov 22 23:23:20 2015
From: sylvain.bertrand at gmail.com (Sylvain BERTRAND)
Date: Mon, 23 Nov 2015 10:23:20 +1100
Subject: 403 forbidden with lynx www browser
In-Reply-To: <20151122172511.GO3351@daoine.org>
References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org>
Message-ID: <20151122232320.GB1382@privacy>

On Sun, Nov 22, 2015 at 05:25:11PM +0000, Francis Daly wrote:
> On Sun, Nov 22, 2015 at 03:05:25PM +1100, Sylvain BERTRAND wrote:
> > On many nginx powered web sites, I must not send the user agent or I'll get a
> > 403 forbidden.
>
> The administrators of those web sites do not want you as a customer.
>
> Probably they have configured their servers to deny any request that
> includes "libwww" in the User-Agent.

I'm about to stop sending the user agent for good, on all www sites
(or I will send a custom one).

On Sun, Nov 22, 2015 at 05:25:11PM +0000, Francis Daly wrote:
> You could contact them and ask them to allow your preferred user-agent;
> you could adjust your user-agent string so that it does not match their
> presumed block list until they change it; or you could not visit those
> sites and encourage others not to visit them too.
>
> If you tell them why you are not visiting their sites, maybe they will
> change their configuration to welcome you.
>
> But unless the nginx.org (or possibly nginx.com) web sites are among
> the sites that block you, there's probably not much that readers of this
> list can do about it.

That's why I'm posting here: *only nginx* www sites block lynx. Something
is not right there: a default aggressive blocking policy from nginx?

-- 
Sylvain

From shuxinyang.oss at gmail.com  Mon Nov 23 00:21:36 2015
From: shuxinyang.oss at gmail.com (Shuxin Yang)
Date: Sun, 22 Nov 2015 16:21:36 -0800
Subject: bug of discarding request body
Message-ID: <56525C10.7010300@gmail.com>

Hi there:

I have run into a bug which I believe is in ngx_http_discard_request_body()
(discard_body() for short). The bug is reproducible with the 1.9.7 release.

discard_body() discards the request body by reading it. However, if the
body is not ready yet (i.e. ngx_http_read_discarded_request_body() returns
NGX_AGAIN), it still returns NGX_OK.

ngx_http_special_response_handler() (special_handler() for short) calls
discard_body(). If discard_body() returns NGX_OK, it does *NOT* disable the
keepalive connection; in the meantime, it sends out the special response
before the request body has been completely discarded. This causes the
problem!

This is my setup to expose the bug:
====================================
My OS is Ubuntu 14.

a). the "backend" server has two locations: one returns 200, the other returns 403
b). the reverse proxy connects to the backend with:
   b1) keepalive
   b2) unbuffered uploading, so we can send an incomplete request to the
       backend, to mimic the situation where the body is not sent at all or
       is somehow delayed
   b3) send the incomplete POST request R1 to the '/bad1' location via
       "cat post_bad1.txt | nc -q -1 127.0.0.1 8080"
   b4) (must be quick) send another request R2 via
       "curl -X GET http://localhost:8080/good"
       You may end up seeing a 400 response.

Observation/Analysis:
======================
If you strace the backend server, you will see it first sends the 403
response to the proxy, then calls recvfrom() trying to get the body of R1.

recvfrom() does not get the body of R1; instead it gets the leading part of
R2 and discards it.
The subsequent call to recvfrom() gets the trailing part of R2. Nginx thinks
it is the start of R2, and gives a 400 response.

Proposed Fix:
============
1. Make sure discard_body() completely discards the body before it sends the
   header. The disadvantage is that it may waste a lot of resources in the
   read-then-discard process.

2. If discard_body() does not complete, return NGX_AGAIN instead of NGX_OK,
   whereby the special handler disables keepalive, making sure the boundaries
   of requests stay clean. The disadvantage is that it compromises the
   performance of the keepalive connections between backend and proxy.

Please shed some light; many thanks in advance!

Shuxin

====================================================
Following is my setup:

1. proxy conf:

upstream backend {
    server 127.0.0.1:8081;
    keepalive 32;
}

server {
    listen 8080;
    proxy_request_buffering off;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}

o. backend:

server {
    listen 8081;
    proxy_request_buffering off; # does not matter
    location /good {
        content_by_lua 'ngx.say("lol")';
    }
    location /bad1 {
        return 403;
    }
}

o. incomplete request to the bad location:

cat post_bad1.txt | nc -q -1 127.0.0.1 8080

The post_bad1.txt is attached to this mail.

-------------- next part --------------
POST /bad1 HTTP/1.1
User-Agent: curl/7.35.0
Host: localhost:8088
Accept: */*
Content-Length: 8
Content-Type: application/x-www-form-urlencoded

From mdounin at mdounin.ru  Mon Nov 23 01:24:52 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 23 Nov 2015 04:24:52 +0300
Subject: bug of discarding request body
In-Reply-To: <56525C10.7010300@gmail.com>
References: <56525C10.7010300@gmail.com>
Message-ID: <20151123012452.GR74233@mdounin.ru>

Hello!

On Sun, Nov 22, 2015 at 04:21:36PM -0800, Shuxin Yang wrote:

> I have run into a bug which I believe is in
> ngx_http_discard_request_body()
> (discard_body() for short). The bug is reproducible with the 1.9.7
> release.
>
> discard_body() discards the request body by reading it. However,
> if the body is not ready yet (i.e.
> ngx_http_read_discarded_request_body()
> returns NGX_AGAIN), it still returns NGX_OK.
>
> ngx_http_special_response_handler() (special_handler() for short) calls
> discard_body(). If discard_body() returns NGX_OK, it does *NOT* disable the
> keepalive connection; in the meantime, it sends out the special response
> before the request body has been completely discarded. This causes the
> problem!

[...]

> Observation/Analysis:
> ======================
> If you strace the backend server, you will see it first sends the 403
> response to the proxy, then calls recvfrom() trying to get the body of R1.
>
> recvfrom() does not get the body of R1; instead it gets the leading
> part of R2 and discards it.
>
> The subsequent call to recvfrom() gets the trailing part of R2. Nginx thinks
> it is the start of R2, and gives a 400 response.

There is no problem in returning a response before reading the 
whole body.

It looks like you've run into the old bug in the proxy module, 
which doesn't handle such responses in keepalive connections 
properly, see additional details here:

https://trac.nginx.org/nginx/ticket/669

Correct fix would be to stop nginx from re-using upstream 
connections where a request wasn't completely sent.

-- 
Maxim Dounin
http://nginx.org/

From shahzaib.cb at gmail.com  Mon Nov 23 08:17:41 2015
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Mon, 23 Nov 2015 13:17:41 +0500
Subject: 400 Error on % !!
Message-ID: Hi, We've encountered with 400 Bad request error on nginx reverse proxy in front of apache. Here is the attached link : http://prntscr.com/95wlsl If we remove '%' from the URL, it works fine. What could be the issue ? Regards. Shahzaib Need to send me private email? I use Virtru . -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Mon Nov 23 08:20:20 2015 From: me at myconan.net (nanaya) Date: Mon, 23 Nov 2015 17:20:20 +0900 Subject: 400 Error on % !! In-Reply-To: References: Message-ID: <1448266820.1728419.447312209.4187C094@webmail.messagingengine.com> On Mon, Nov 23, 2015, at 05:17 PM, shahzaib shahzaib wrote: > Hi, > > We've encountered with 400 Bad request error on nginx reverse proxy in > front of apache. Here is the attached link : > > http://prntscr.com/95wlsl > > If we remove '%' from the URL, it works fine. What could be the issue ? > `%-` isn't a valid percent-encoding (and thus a bad request). Try encoding the url properly. From shahzaib.cb at gmail.com Mon Nov 23 08:40:43 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 23 Nov 2015 13:40:43 +0500 Subject: 400 Error on % !! In-Reply-To: <1448266820.1728419.447312209.4187C094@webmail.messagingengine.com> References: <1448266820.1728419.447312209.4187C094@webmail.messagingengine.com> Message-ID: Hi, Thanks for the reply, now we've tons of these URLs which are not properly encoded. Can we redirect '%' request to the same URL by excluding '%' ? Such as http://domain.com/video/100%-working to http://domain.com/video/100-working Regards. Shahzaib On Mon, Nov 23, 2015 at 1:20 PM, nanaya wrote: > > > On Mon, Nov 23, 2015, at 05:17 PM, shahzaib shahzaib wrote: > > Hi, > > > > We've encountered with 400 Bad request error on nginx reverse proxy in > > front of apache. Here is the attached link : > > > > http://prntscr.com/95wlsl > > > > If we remove '%' from the URL, it works fine. What could be the issue ? > > > > `%-` isn't a valid percent-encoding (and thus a bad request). Try > encoding the url properly. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon Nov 23 14:23:05 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 23 Nov 2015 22:23:05 +0800 Subject: [ANN] OpenResty 1.9.3.2 released Message-ID: Hi guys, I am glad to announce the new formal release, 1.9.3.2, of the OpenResty bundle: https://openresty.org/#Download The first highlight of this release is the new *_by_lua_block {} directives added in the ngx_http_lua module. For example, instead of writing content_by_lua ' ngx.say("Hello, OpenResty\'s world!\\n") '; We can just write content_by_lua_block { ngx.say("Hello, OpenResty's world!\n") } No ugly special character escaping is needed in the Lua source any more in the latter form :) The second highlight of this release is the new (experimental) support for Windows using the MinGW gcc toolchain. You can download the pre-built Win32 binaries from this zip package: https://openresty.org/download/ngx_openresty-1.9.3.2-win32.zip For detailed usage on Windows, please check out the following document: https://github.com/openresty/ngx_openresty/blob/master/doc/README-win32.md If you want to build on Windows yourself, then this document contains detailed instructions as well. 
We will release a corresponding Win32 binary package for every release of the OpenResty source package from now on. The Windows build is still experimental and should be used for development only. We have plans to migrate to the Microsoft Visual Studio compiler toolchain and to resolve existing limitations on Windows in the near future. Changes since the last (formal) release, 1.9.3.1: * feature: added support for compiling on Windows using the MinGW gcc toolchain to the build system. See the document for more details: https://github.com/openresty/ngx_openresty/blob/master/doc/README-win32.md * upgraded the ngx_lua module to 0.9.19. * feature: implemented "*_by_lua_block {}" directives for all the existing *_by_lua directives so that we no longer have to escape special characters while inlining Lua source inside the "nginx.conf" file. * feature: now we support LuaJIT 2 on Windows (in the form of "lua51.dll"). * feature: initial fixes when being used with the new "ngx_http_v2" module since nginx 1.9.5. thanks itpp16 for the patches. * bugfix: fixed errors and warnings with C compilers without variadic macro support. * bugfix: subrequest response status codes between the range 100 .. 299 (inclusive) might get lost in the return values of ngx.location.capture*() calls. thanks Igor Clark for the report. * bugfix: we might return the wrong shm zone in the public C API function "ngx_http_lua_find_zone()". thanks qlee001 for the report. * bugfix: the user specified "./configure"'s "--with-cc-opt" and "--with-ld-opt" might override the "LUAJIT_INC"/"LUAJIT_LIB" and "LUA_INC"/"LUA_LIB" environment settings. thanks Julian Gonggrijp for the report. * bugfix: setting builtin request header "Upgrade" via ngx.req.set_header and etc might not take effect with some builtin nginx modules. * bugfix: setting builtin request headers "Depth", "Destination", "Overwrite", and "Date" via ngx.req.set_header() and etc might not take effect at least with ngx_http_dav_module. thanks Igor Clark for the report. * bugfix: fixed typos due to copy&paste mistakes in some error messages. * bugfix: fixed one "-Wmaybe-uninitialized" warning when compiling with "gcc -Os". * bugfix: use of shared dicts resulted in (unwanted) registrations of shared dict metatables on *all* the lightuserdata in the Lua space. thanks helloyi for the report and patch. * bugfix: if a 3rd-party module calls "ngx_http_conf_get_module_srv_conf" to fetch its current "srv_conf" construct in its "merge_srv_conf" callback, then use of init_worker_by_lua might lead to segmentation faults (the same also applied to merge_loc_conf). thanks chiyouhen for the report and patch. * bugfix: the "if_unmodified_since" "shortcut" field in "ngx_http_headers_in_t" was first added in nginx 0.9.2. * bugfix: ngx.req.clear_header/ngx.req.set_header: we did not update the shortcut fields in "ngx_http_headers_in_t" added since nginx 1.3.3 which may confuse other nginx modules accessing them. * bugfix: setting "Content-Type" response values including "; charset=xxx" via the ngx.header API might bypass the MIME type checks in other nginx modules like ngx_gzip. thanks Andreas Fischer for the report. * bugfix: typo fixes in some debug logging messages. thanks doujiang for the patch. * optimize: fixed the hash-table initial sizes of the cosocket metatables. thanks ops-dev-cn for the patch. * tests: removed the useless "use lib" directives from the Perl test files. thanks Markus Linnala for the report. * doc: various typo fixes from Lance Li. 
* doc: ngx.exit was not disabled within the header_filter_by_lua* context. * doc: a code example misses a "return". thanks YuanSheng Wang for the patch. * doc: ngx.var: documented the values for undefined and uninitialized nginx variables. thanks Sean Johnson for asking. * doc: typo fix from Tatsuya Hoshino. * upgraded the ngx_lua_upstream module to 0.04. * feature: "upstream.get_servers(server_name)" now returns the server name (if any) as well, which can be the domain name if the user puts it in "nginx.conf". thanks Hung Nguyen for the request. * upgraded the ngx_headers_more module to 0.28. * bugfix: fixed errors and warnings with C compilers without variadic macro support. * bugfix: setting (builtin) request headers "Upgrade", "Depth", "Destination", "Overwrite", and "Date" might not take effect in standard nginx modules like ngx_http_proxy and ngx_http_dav. * bugfix: when the response header "Content-Type" contains parameters like "; charset=utf-8", the "-t MIME-List" options did not work as expected at all. thanks Joseph Bartels for the report. * bugfix: clearing input headers "If-Unmodified-Since", "If-Match", and "If-None-Match" did not clear the builtin "shortcut" fields in "ngx_http_headers_in_t" which might confuse other nginx modules like "ngx_http_not_modified_filter_module". The first header gets "shortcuts" fields since nginx 0.9.2 while the latter two since nginx 1.3.3. * upgraded the ngx_iconv module to 0.13. * bugfix: HTTP 0.9 requests would turn "iconv_filter" into a bad unrecoverable state leading to "iconv body filter skiped" error upon every subsequent request. thanks numberlife for the report. also introduced some coding style fixes. * bugfix: lowered the error log level for HTTP 0.9 requests from "error" to "warn" to prevent malicious clients from flooding the error logs. * upgraded lua-resty-redis to 0.21. * bugfix: the "attempt to call local new_tab (a table value)" error might happen when LuaJIT 2.0 was used and a local Lua module named "table.new" was visible. thanks Michael Pirogov for the report. * doc: fixed code examples to check redis pipelined requests' return values more strictly. some commands (like hkeys and smembers) may return empty tables, which may result in "nil res[1]" values. thanks Dejiang Zhu for the patch. * upgraded the lua-resty-core library to 0.1.2. * change: updated the implementation to reflect recent changes in shared dictionary zones of the ngx_lua module. now we require the ngx_lua module 0.9.17+. * upgraded the lua-cjson library to 2.1.0.3. * feature: now we allow up to 16 decimal places in JSON number encoding via "cjson.encode_number_precision()". thanks lordnynex for the patch. * bugfix: fixed the warning "inline function ?fpconv_init? declared but never defined" from gcc. * bugfix: Makefile: removed the slash ("/") after "$(DESTDIR)" so as to support relative path values in make variable "LUA_LIB_DIR". * upgraded resty-cli to 0.04. * feature: now the "resty" command-line utility looks for an nginx under the directory of itself as well (for Win32 OpenResty). * bugfix: worked around a bug regarding temp directory cleanup in msys perl 5.8.8 (and possibly other versions of msys perl as well). * bugfix: ensure we append an appropriate executable file extension when testing the existence of executables on exotic systems like Win32. * upgraded the lua-rds-parser library to 0.06. * bugfix: fixed the "u_char" C data type for MinGW gcc which lacks it. * bugfix: Makefile: added an explicit ".c -> .o" rule to help MinGW make. 
* bugfix: Makefile: removed the slash ("/") after "$(DESTDIR)" so as to support relative path values in make variable "LUA_LIB_DIR". * upgraded the lua-redis-parser library to 0.12. * bugfix: Makefile: added an explicit ".c -> .o" rule to help MinGW make. * bugfix: Makefile: removed the slash ("/") after "$(DESTDIR)" so as to support relative path values in make variable "LUA_LIB_DIR". * upgraded the ngx_rds_csv module to 0.07. * bugfix: fixed compilation errors with MinGW gcc on Win32. * bugfix: fixed errors and warnings with C compilers without variadic macro support. * upgraded LuaJIT to v2.1-20151028: https://github.com/openresty/luajit2/tags * imported Mike Pall's latest changes: * limit number of arguments given to "io.lines()" and "fp:lines()". * ARM64: fix "__call" metamethod handling for tail calls. * FFI: Do not propagate qualifiers into subtypes of complex. * feature: parse binary number literals ("0bxxx"). * fix NYICF error message. * properly handle OOM in "trace_save()". * ARM64: add support for saving bytecode as object files. * ARM64: fix ELF bytecode saving. * feature: parse Unicode string escape "\u{XX...}". * FFI: add "ssize_t" declaration. * fix unsinking check. * feature: add "collectgarbage("isrunning")". * flush symbol tables in "jit.dump" on trace flush. The HTML version of the change log with lots of helpful hyper-links can be browsed here: https://openresty.org/#ChangeLog1009003 OpenResty (aka. ngx_openresty) is a full-fledged web platform by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: https://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From shuxinyang.oss at gmail.com Mon Nov 23 17:26:46 2015 From: shuxinyang.oss at gmail.com (Shuxin Yang) Date: Mon, 23 Nov 2015 09:26:46 -0800 Subject: bug of discarding request body In-Reply-To: <20151123012452.GR74233@mdounin.ru> References: <56525C10.7010300@gmail.com> <20151123012452.GR74233@mdounin.ru> Message-ID: <56534C56.9000107@gmail.com> Hi, Maxim: Thank you very much for the comment, and sorry for my long previous email. I guess you might misunderstand my previous email. Basically what I try to say is that the *OLD* bug (ticket/669 as you mentioned) is seen on the *PRISTINE* *NEW* 1.9.7 release. The attached script can reproduce the problem by simply invoke "./a.sh" The a.sh donwload the 1.9.7 release, build it *without* any 3rd party module. The problem is triggered by turning off unbuffered-uploading in proxy server. IIRC, when people report ticket/669, unbuffered-uploading was not available. I guess we ngx_http_discard_request_body() should return NGX_AGAIN instead NGX_OK if discarding body is in progress. Also see the following interleaving response. Best Regards Shuxin On 11/22/2015 05:24 PM, Maxim Dounin wrote: > There is no problem in returning a response before reading the > whole body. Ditto! > > It looks like you've run into the old bug in the proxy module, > which doesn't handle such responses in keepalive connections > properly, see additional details here: > > https://trac.nginx.org/nginx/ticket/669 > > Correct fix would be to stop nginx from re-using upstream > connections where a request wasn't completely sent. 
> We cannot tell if it is complete or not if ngx_http_discard_request_body() always returns NGX_OK -------------- next part -------------- A non-text attachment was scrubbed... Name: repro_400.tar.gz Type: application/gzip Size: 1152 bytes Desc: not available URL: From nginx-forum at nginx.us Mon Nov 23 17:48:32 2015 From: nginx-forum at nginx.us (lmauldinpe15) Date: Mon, 23 Nov 2015 12:48:32 -0500 Subject: Complex url rewriting Message-ID: I have a single Nginx installation and I am using PHP-FPM to serve multiple PHP applications in sub directories. Example: /var/www/ (this is 'root') /var/www/a/foo/index.php /var/www/a/bar/index.php /var/www/b/bar/index.php I want to setup url rewriting so that any request to http://xxx/a/foo/index.php/users/login gets redirected to http://xxx/a/foo/index.php and similarly http://xxx/a/bar/index.php/users/login gets redirected to http://xxx/a/bar/index.php I may have a large number of applications in the sub directories so I don't want to setup individual location blocks for each application. Can I accomplish this with a global rewrite rule? Please let me know if you need more information. Luke Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263024,263024#msg-263024 From mdounin at mdounin.ru Mon Nov 23 17:53:37 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Nov 2015 20:53:37 +0300 Subject: bug of discarding request body In-Reply-To: <56534C56.9000107@gmail.com> References: <56525C10.7010300@gmail.com> <20151123012452.GR74233@mdounin.ru> <56534C56.9000107@gmail.com> Message-ID: <20151123175337.GY74233@mdounin.ru> Hello! On Mon, Nov 23, 2015 at 09:26:46AM -0800, Shuxin Yang wrote: > Hi, Maxim: > > Thank you very much for the comment, and sorry for my long previous > email. > > I guess you might misunderstand my previous email. Basically what I try > to say > is that the *OLD* bug (ticket/669 as you mentioned) is seen on the > *PRISTINE* *NEW* > 1.9.7 release. The attached script can reproduce the problem by simply > invoke > "./a.sh" > > The a.sh donwload the 1.9.7 release, build it *without* any 3rd party > module. > > The problem is triggered by turning off unbuffered-uploading in proxy > server. > IIRC, when people report ticket/669, unbuffered-uploading was not available. That's an old yet still unfixed bug. Using unbuffered upload is just an easy way to trigger it. > I guess we ngx_http_discard_request_body() should return NGX_AGAIN > instead NGX_OK if discarding body is in progress. No, it shouldn't. [...] > >Correct fix would be to stop nginx from re-using upstream > >connections where a request wasn't completely sent. > > > We cannot tell if it is complete or not if ngx_http_discard_request_body() > always > returns NGX_OK We can. In frontend nginx, the one configured with upstream keepalive in your test. Note well that the problem persists when not nginx but some other server is used as a backend. And changing ngx_http_discard_request_body() behaviour will be useless. 
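Until such a fix is in place, the combination from the test case above is the
one to watch for; restated with the risky pieces annotated (dropping the
upstream keepalive sidesteps the connection re-use, and buffered uploads only
make the race harder to hit, as noted above):

    upstream backend {
        server 127.0.0.1:8081;
        keepalive 32;                  # upstream connections are kept and re-used
    }

    server {
        listen 8080;
        proxy_request_buffering off;   # a request may be forwarded before its body is complete
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://backend;
        }
    }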
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Nov 23 18:19:22 2015 From: nginx-forum at nginx.us (lmauldinpe15) Date: Mon, 23 Nov 2015 13:19:22 -0500 Subject: Complex url rewriting In-Reply-To: References: Message-ID: <502d3bef1dc272c5bf2b83d899157ccc.NginxMailingListEnglish@forum.nginx.org> Another note, on some of the application sub directories, I need to emulate this rule from .htaccess: RewriteRule ^(.*)$ %{DOCUMENT_ROOT}/index.php/$1 [QSA,L] Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263024,263026#msg-263026 From reallfqq-nginx at yahoo.fr Mon Nov 23 19:26:15 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 23 Nov 2015 20:26:15 +0100 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151122232320.GB1382@privacy> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> Message-ID: ?Hello,? On Mon, Nov 23, 2015 at 12:23 AM, Sylvain BERTRAND < sylvain.bertrand at gmail.com> wrote: > That's why I'm posting here: *Only nginx* www sites does block lynx. > Something > is not right there: a default aggressive blocking policy from nginx? > ?There is a difference between 'only websites I visit which happen to use nginx' and 'every nginx websites' Stock nginx does nothing else but serving data to clients in the most simple way and RFC-compliant.? As Francis pointed out, it is most probably a deliberate configuration from those websites you visit which may consider your user-agent string looking too much like crawlers/bots. As Francis also told you, if nginx.org or nginx.com websites are accessible to you without harm, then the nginx product is not the source of your trouble, and you might be willing to contact the adminitrators of the websites you visit to complain and talk about a solution. You might also change your user-agent string to look like more 'official' ones, unless you wish to defend the use of lynx, which might be a meaningful quest. Best of luck, --- *B. R.* ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.l.nelson at bankofamerica.com Mon Nov 23 19:31:15 2015 From: erik.l.nelson at bankofamerica.com (Nelson, Erik - 2) Date: Mon, 23 Nov 2015 19:31:15 +0000 Subject: 403 forbidden with lynx www browser In-Reply-To: References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> Message-ID: <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> B.R. Monday, November 23, 2015 2:26 PM On Mon, Nov 23, 2015 at 12:23 AM, Sylvain BERTRAND wrote: >>That's why I'm posting here: *Only nginx* www sites does block lynx. Something >>is not right there: a default aggressive blocking policy from nginx? >?There is a difference between 'only websites I visit which happen to use nginx' and 'every nginx websites' That may be true, but it's missing the point. This has come up before on the ML. The point is that he has observed something systemic. Maybe it's a default configuration that's tighter (as he suggested), maybe it's that admins who use nginx just hate lynx, or maybe it's something else. Whatever it is, there's *something* about nginx that's different. ---------------------------------------------------------------------- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. 
If you are not the intended recipient, please delete this message. From reallfqq-nginx at yahoo.fr Mon Nov 23 19:34:41 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 23 Nov 2015 20:34:41 +0100 Subject: 400 Error on % !! In-Reply-To: References: <1448266820.1728419.447312209.4187C094@webmail.messagingengine.com> Message-ID: The '%' character has a meaning in the HTTP URI grammar, indicating that the following bytes are hexadecimal value representing a unicode character (see https://tools.ietf.org/html/rfc3986#section-2.1). If you try to implement what you suggest, you will basically corrupt URIs from someone tring to access your website with Unicode-encoded strings, which might be perfectly valid. ?The only solution I find viable is that you remove percent character from your URIs which are not representing Unicode characters, for example by encoding '%'.? The percent-encoded version of '%' is '%25'. --- *B. R.* On Mon, Nov 23, 2015 at 9:40 AM, shahzaib shahzaib wrote: > Hi, > > Thanks for the reply, now we've tons of these URLs which are not > properly encoded. Can we redirect '%' request to the same URL by excluding > '%' ? Such as > > http://domain.com/video/100%-working > > to > > http://domain.com/video/100-working > > Regards. > Shahzaib > > On Mon, Nov 23, 2015 at 1:20 PM, nanaya wrote: > >> >> >> On Mon, Nov 23, 2015, at 05:17 PM, shahzaib shahzaib wrote: >> > Hi, >> > >> > We've encountered with 400 Bad request error on nginx reverse proxy >> in >> > front of apache. Here is the attached link : >> > >> > http://prntscr.com/95wlsl >> > >> > If we remove '%' from the URL, it works fine. What could be the issue ? >> > >> >> `%-` isn't a valid percent-encoding (and thus a bad request). Try >> encoding the url properly. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Nov 23 19:37:23 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 23 Nov 2015 19:37:23 +0000 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151122232320.GB1382@privacy> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> Message-ID: <20151123193723.GP3351@daoine.org> On Mon, Nov 23, 2015 at 10:23:20AM +1100, Sylvain BERTRAND wrote: > On Sun, Nov 22, 2015 at 05:25:11PM +0000, Francis Daly wrote: Hi there, > > Probably they have configured their servers to deny any request that > > includes "libwww" in the User-Agent. > > I'm about to block sending the user agent for good and all www sites. > (or I will send a custom one). Possibly something like "lynx, but with a non-default user agent string" will get their attention. (Probably not, though.) > > But unless the nginx.org (or possibly nginx.com) web sites are among > > the sites that block you, there's probably not much that readers of this > > list can do about it. > > That's why I'm posting here: *Only nginx* www sites does block lynx. Something > is not right there: a default aggressive blocking policy from nginx? There is no default policy in stock nginx, or in any distributed nginx that I'm aware of. 
My suspicion is that these sites that run nginx also run some back-end framework, and that that framework may have some suggested configuration for nginx which blocks access based on user-agent matching. If there is any hint in the response headers what the full server-side suite is, that may be a good place to start looking for a more general future solution. Cheers, f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Mon Nov 23 19:43:35 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 23 Nov 2015 20:43:35 +0100 Subject: Complex url rewriting In-Reply-To: <502d3bef1dc272c5bf2b83d899157ccc.NginxMailingListEnglish@forum.nginx.org> References: <502d3bef1dc272c5bf2b83d899157ccc.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, You do not necessarily need to *redirect* *per se*, but you wish content to be served by your index.php files. Would location /index.php { location ~* (?:.*/index.php)(.*) { fastcgi_param SCRIPT_FILENAME $document_root$1; fastcgi_pass ; } } do the job? (untested) --- *B. R.* On Mon, Nov 23, 2015 at 7:19 PM, lmauldinpe15 wrote: > Another note, on some of the application sub directories, I need to emulate > this rule from .htaccess: RewriteRule ^(.*)$ %{DOCUMENT_ROOT}/index.php/$1 > [QSA,L] > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,263024,263026#msg-263026 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Nov 23 19:48:05 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 23 Nov 2015 20:48:05 +0100 Subject: 403 forbidden with lynx www browser In-Reply-To: <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> Message-ID: You missed the parts where Francis and I suggested about testing against nginx.org and/or nginx.com websites. Is those reply correctly, the nginx product is definitely not the source of the trouble. I could also provide you with domains I serve with nginx, which do not use any sort of user-agent filtering, through private channels, if necessary. ?Stuff like common rulesets or IDS tend to 'auto-configure' products with rules (which then might be shared) to the problem would still not come from nginx itself.? --- *B. R.* On Mon, Nov 23, 2015 at 8:31 PM, Nelson, Erik - 2 < erik.l.nelson at bankofamerica.com> wrote: > B.R. Monday, November 23, 2015 2:26 PM > On Mon, Nov 23, 2015 at 12:23 AM, Sylvain BERTRAND < > sylvain.bertrand at gmail.com> wrote: > >>That's why I'm posting here: *Only nginx* www sites does block lynx. > Something > >>is not right there: a default aggressive blocking policy from nginx? > > >?There is a difference between 'only websites I visit which happen to use > nginx' and 'every nginx websites' > > That may be true, but it's missing the point. This has come up before on > the ML. The point is that he has observed something systemic. Maybe it's > a default configuration that's tighter (as he suggested), maybe it's that > admins who use nginx just hate lynx, or maybe it's something else. > > Whatever it is, there's *something* about nginx that's different. 
> > ---------------------------------------------------------------------- > This message, and any attachments, is for the intended recipient(s) only, > may contain information that is privileged, confidential and/or proprietary > and subject to important terms and conditions available at > http://www.bankofamerica.com/emaildisclaimer. If you are not the > intended recipient, please delete this message. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Nov 23 20:14:55 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 23 Nov 2015 20:14:55 +0000 Subject: 400 Error on % !! In-Reply-To: References: <1448266820.1728419.447312209.4187C094@webmail.messagingengine.com> Message-ID: <20151123201455.GQ3351@daoine.org> On Mon, Nov 23, 2015 at 08:34:41PM +0100, B.R. wrote: > On Mon, Nov 23, 2015 at 9:40 AM, shahzaib shahzaib > wrote: Hi there, [mostly addressed to the original poster] > > Thanks for the reply, now we've tons of these URLs which are not > > properly encoded. > The only solution I find viable is that you remove percent character from > your URIs which are not representing Unicode characters, for example by > encoding '%'. The percent-encoded version of '%' is '%25'. The right answer is to fix those tons of urls at the place they are written. If whoever wrote them had written "domain.org" instead of "domain.com", you'd have to fix them (or take control of domain.org). This is broadly similar to that. Either strip the % in the cases where you know it was written unencoded; or encode it to %25 in the same cases. > > Can we redirect '%' request to the same URL by excluding > > '%' ? Such as > > > > http://domain.com/video/100%-working > > > > to > > > > http://domain.com/video/100-working In theory, you could have a front-end web service which accepted all requests, and for the specific ones that are clearly broken like this, could redirect to the fixed version; otherwise is could pass the request through to the back-end (current) web server. (Or it could pass through the fixed version; but that feels like it would be even more complicated.) It could only work if you know the broken urls, though -- the url /video/100-working is exactly equivalent to the url /v%69deo/100%2dworkin%67, so you would not want to %-strip that one. And if you have one request for /video/50%good and one for /video/50%bad -- is the second one encoded or not? % is followed by two hexadecimal characters, which should mean "it is correctly encoded". If you wanted nginx to be this front-end web service, I think that you would need code-level changes in your version to get it to accept the broken input. It is not "just" configuration that would achieve it. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Nov 23 20:34:33 2015 From: nginx-forum at nginx.us (lmauldinpe15) Date: Mon, 23 Nov 2015 15:34:33 -0500 Subject: Complex url rewriting In-Reply-To: References: Message-ID: Did you mean to use nested location blocks? I tried it but it didn't work. 
Here is the relevant part of my configuration file: # define web root root /var/www/html/public; index index.php index.html; location /index.php { location ~* (?:.*/index.php)(.*) { fastcgi_param SCRIPT_FILENAME $document_root$1; fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; } } location / { try_files $uri $uri/ =404; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; try_files $uri =404; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263024,263035#msg-263035 From al-nginx at none.at Mon Nov 23 20:50:33 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 23 Nov 2015 21:50:33 +0100 Subject: Complex url rewriting In-Reply-To: References: Message-ID: Hi. Am 23-11-2015 18:48, schrieb lmauldinpe15: > I have a single Nginx installation and I am using PHP-FPM to serve > multiple > PHP applications in sub directories. Example: > > /var/www/ (this is 'root') > /var/www/a/foo/index.php > /var/www/a/bar/index.php > /var/www/b/bar/index.php > > I want to setup url rewriting so that any request to > http://xxx/a/foo/index.php/users/login gets redirected to > http://xxx/a/foo/index.php and similarly > http://xxx/a/bar/index.php/users/login gets redirected to > http://xxx/a/bar/index.php > > I may have a large number of applications in the sub directories so I > don't > want to setup individual location blocks for each application. Can I > accomplish this with a global rewrite rule? Please let me know if you > need > more information. How about to use the map module and some lines of scripting http://nginx.org/en/docs/http/ngx_http_map_module.html and use the right command for the location block. http://nginx.org/en/docs/http/ngx_http_core_module.html#alias http://nginx.org/en/docs/http/ngx_http_core_module.html#root Pick a variable of your choice ;-) http://nginx.org/en/docs/varindex.html I would suggest http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_uri but that's my opinion. Do you a favor and use the debug log for debugging ;-) http://nginx.org/en/docs/debugging_log.html BR Aleks From shuxinyang.oss at gmail.com Mon Nov 23 21:16:27 2015 From: shuxinyang.oss at gmail.com (Shuxin Yang) Date: Mon, 23 Nov 2015 13:16:27 -0800 Subject: bug of discarding request body In-Reply-To: <20151123175337.GY74233@mdounin.ru> References: <56525C10.7010300@gmail.com> <20151123012452.GR74233@mdounin.ru> <56534C56.9000107@gmail.com> <20151123175337.GY74233@mdounin.ru> Message-ID: <5653822B.5000104@gmail.com> Hi, Maxim: Thank you so much for your insightful comment! Unbuffered-uploading not just to make things easier to reproduce the problem. It is trivial. So to speak. It is translating to say it is rather dangerous to use the unbuffered-uploading along with keepalive connections, as the combination make the proxy server paper thin to penetrate. Thanks Shuxin On 11/23/2015 09:53 AM, Maxim Dounin wrote: > Hello! > > On Mon, Nov 23, 2015 at 09:26:46AM -0800, Shuxin Yang wrote: > >> Hi, Maxim: >> >> Thank you very much for the comment, and sorry for my long previous >> email. >> >> I guess you might misunderstand my previous email. Basically what I try >> to say >> is that the *OLD* bug (ticket/669 as you mentioned) is seen on the >> *PRISTINE* *NEW* >> 1.9.7 release. The attached script can reproduce the problem by simply >> invoke >> "./a.sh" >> >> The a.sh donwload the 1.9.7 release, build it *without* any 3rd party >> module. 
>> >> The problem is triggered by turning off unbuffered-uploading in proxy >> server. >> IIRC, when people report ticket/669, unbuffered-uploading was not available. > That's an old yet still unfixed bug. Using unbuffered upload is > just an easy way to trigger it. > >> I guess we ngx_http_discard_request_body() should return NGX_AGAIN >> instead NGX_OK if discarding body is in progress. > No, it shouldn't. > > [...] > >>> Correct fix would be to stop nginx from re-using upstream >>> connections where a request wasn't completely sent. >>> >> We cannot tell if it is complete or not if ngx_http_discard_request_body() >> always >> returns NGX_OK > We can. In frontend nginx, the one configured with upstream > keepalive in your test. > > Note well that the problem persists when not nginx but some other > server is used as a backend. And changing > ngx_http_discard_request_body() behaviour will be useless. > From sylvain.bertrand at gmail.com Mon Nov 23 23:00:01 2015 From: sylvain.bertrand at gmail.com (Sylvain BERTRAND) Date: Tue, 24 Nov 2015 10:00:01 +1100 Subject: 403 forbidden with lynx www browser In-Reply-To: <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> Message-ID: <20151123230001.GA1095@privacy> On Mon, Nov 23, 2015 at 07:31:15PM +0000, Nelson, Erik - 2 wrote: > B.R. Monday, November 23, 2015 2:26 PM > On Mon, Nov 23, 2015 at 12:23 AM, Sylvain BERTRAND wrote: > >>That's why I'm posting here: *Only nginx* www sites does block lynx. Something > >>is not right there: a default aggressive blocking policy from nginx? > > >?There is a difference between 'only websites I visit which happen to > >use nginx' and 'every nginx websites' > > That may be true, but it's missing the point. This has come up before on the > ML. The point is that he has observed something systemic. Maybe it's a > default configuration that's tighter (as he suggested), maybe it's that > admins who use nginx just hate lynx, or maybe it's something else. > > Whatever it is, there's *something* about nginx that's different. You got it right. The last web site... a friend sent me the link (:p), had to remove the user-agent from http headers. http://www.rawstory.com/2015/11/hacker-collective-anonymous-claims-isis-has-plans-for-more-attacks-on-sunday/ -- Sylvain From reallfqq-nginx at yahoo.fr Mon Nov 23 23:10:56 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 24 Nov 2015 00:10:56 +0100 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151123230001.GA1095@privacy> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> <20151123230001.GA1095@privacy> Message-ID: *There is none so deaf than those who will not hear.* :o| ?Well? if you two understand each other, find where nginx handles some user-agents differently than others. I am sure the developers would be more than glad to learn about it. Everyone is, actually. --- *B. R.* On Tue, Nov 24, 2015 at 12:00 AM, Sylvain BERTRAND < sylvain.bertrand at gmail.com> wrote: > On Mon, Nov 23, 2015 at 07:31:15PM +0000, Nelson, Erik - 2 wrote: > > B.R. 
Monday, November 23, 2015 2:26 PM > > On Mon, Nov 23, 2015 at 12:23 AM, Sylvain BERTRAND < > sylvain.bertrand at gmail.com> wrote: > > >>That's why I'm posting here: *Only nginx* www sites does block lynx. > Something > > >>is not right there: a default aggressive blocking policy from nginx? > > > > >?There is a difference between 'only websites I visit which happen to > > >use nginx' and 'every nginx websites' > > > > That may be true, but it's missing the point. This has come up before > on the > > ML. The point is that he has observed something systemic. Maybe it's a > > default configuration that's tighter (as he suggested), maybe it's that > > admins who use nginx just hate lynx, or maybe it's something else. > > > > Whatever it is, there's *something* about nginx that's different. > > You got it right. > > The last web site... a friend sent me the link (:p), had to remove the > user-agent from http headers. > > http://www.rawstory.com/2015/11/hacker-collective-anonymous-claims-isis-has-plans-for-more-attacks-on-sunday/ > > -- > Sylvain > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix.x.gao at gmail.com Tue Nov 24 09:08:45 2015 From: phoenix.x.gao at gmail.com (=?UTF-8?B?6auY57+U?=) Date: Tue, 24 Nov 2015 17:08:45 +0800 Subject: Missing methods in ngx_http_dav_module Message-ID: - PROPFIND - PROPPATCH - LOCK - UNLOCK Above methods are not implemented in ngx_http_dav_module module, It was said they are in TODOs but three years past... http://mailman.nginx.org/pipermail/nginx/2012-June/034197.html Will they be implemented? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 24 12:36:25 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Nov 2015 15:36:25 +0300 Subject: Missing methods in ngx_http_dav_module In-Reply-To: References: Message-ID: <20151124123624.GE74233@mdounin.ru> Hello! On Tue, Nov 24, 2015 at 05:08:45PM +0800, ?? wrote: > - PROPFIND > - PROPPATCH > - LOCK > - UNLOCK > > > Above methods are not implemented in ngx_http_dav_module module, > > It was said they are in TODOs but three years past... > http://mailman.nginx.org/pipermail/nginx/2012-June/034197.html > Will they be implemented? Things hasn't changed much: it's still in TODO, and no ETA. Full DAV support isn't something requested often, and hence it's low priority. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Nov 24 12:57:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Nov 2015 15:57:21 +0300 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151123230001.GA1095@privacy> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> <20151123230001.GA1095@privacy> Message-ID: <20151124125721.GF74233@mdounin.ru> Hello! On Tue, Nov 24, 2015 at 10:00:01AM +1100, Sylvain BERTRAND wrote: > On Mon, Nov 23, 2015 at 07:31:15PM +0000, Nelson, Erik - 2 wrote: > > B.R. Monday, November 23, 2015 2:26 PM > > On Mon, Nov 23, 2015 at 12:23 AM, Sylvain BERTRAND wrote: > > >>That's why I'm posting here: *Only nginx* www sites does block lynx. Something > > >>is not right there: a default aggressive blocking policy from nginx? 
> > > > >?There is a difference between 'only websites I visit which happen to > > >use nginx' and 'every nginx websites' > > > > That may be true, but it's missing the point. This has come up before on the > > ML. The point is that he has observed something systemic. Maybe it's a > > default configuration that's tighter (as he suggested), maybe it's that > > admins who use nginx just hate lynx, or maybe it's something else. > > > > Whatever it is, there's *something* about nginx that's different. > > You got it right. > > The last web site... a friend sent me the link (:p), had to remove the user-agent from http headers. > http://www.rawstory.com/2015/11/hacker-collective-anonymous-claims-isis-has-plans-for-more-attacks-on-sunday/ The site in question uses CloudFlare, a big CDN and DDoS protection system known to use nginx. It's not really a surprise that a DDoS protection service uses an aggresive blocking policy not very friendly to various minor browsers. Nothing here that can be fixed by nginx though, and writing to this list is likely useless (well, some people from CloudFlare are likely to read this list, but I doubt they'll notice this thread, and anyway it's a wrong way to contact them). -- Maxim Dounin http://nginx.org/ From sylvain.bertrand at gmail.com Tue Nov 24 13:07:29 2015 From: sylvain.bertrand at gmail.com (Sylvain BERTRAND) Date: Wed, 25 Nov 2015 00:07:29 +1100 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151124125721.GF74233@mdounin.ru> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> <20151123230001.GA1095@privacy> <20151124125721.GF74233@mdounin.ru> Message-ID: <20151124130729.GA2616@privacy> On Tue, Nov 24, 2015 at 03:57:21PM +0300, Maxim Dounin wrote: > Hello! > > On Tue, Nov 24, 2015 at 10:00:01AM +1100, Sylvain BERTRAND wrote: > > > On Mon, Nov 23, 2015 at 07:31:15PM +0000, Nelson, Erik - 2 wrote: > > > B.R. Monday, November 23, 2015 2:26 PM > > > On Mon, Nov 23, 2015 at 12:23 AM, Sylvain BERTRAND wrote: > > > >>That's why I'm posting here: *Only nginx* www sites does block lynx. Something > > > >>is not right there: a default aggressive blocking policy from nginx? > > > > > > >?There is a difference between 'only websites I visit which happen to > > > >use nginx' and 'every nginx websites' > > > > > > That may be true, but it's missing the point. This has come up before on the > > > ML. The point is that he has observed something systemic. Maybe it's a > > > default configuration that's tighter (as he suggested), maybe it's that > > > admins who use nginx just hate lynx, or maybe it's something else. > > > > > > Whatever it is, there's *something* about nginx that's different. > > > > You got it right. > > > > The last web site... a friend sent me the link (:p), had to remove the user-agent from http headers. > > http://www.rawstory.com/2015/11/hacker-collective-anonymous-claims-isis-has-plans-for-more-attacks-on-sunday/ > > The site in question uses CloudFlare, a big CDN and DDoS > protection system known to use nginx. It's not really a surprise > that a DDoS protection service uses an aggresive blocking policy > not very friendly to various minor browsers. Nothing here that > can be fixed by nginx though, and writing to this list is likely > useless (well, some people from CloudFlare are likely to read this > list, but I doubt they'll notice this thread, and anyway it's a > wrong way to contact them). 
Then, if I understand well, it's a mistake by CloudFlare, which happens to use
only nginx. That makes sense, and it is quite unfair for nginx to be the name
displayed while CloudFlare's blocking is being applied.

That's really misleading.

-- 
Sylvain

From black.fledermaus at arcor.de  Tue Nov 24 13:54:43 2015
From: black.fledermaus at arcor.de (basti)
Date: Tue, 24 Nov 2015 14:54:43 +0100
Subject: nginx 1.9.6 /etc/nginx/html
Message-ID: <56546C23.7080104@arcor.de>

Hello,
I have installed nginx 1.9.6 on debian jessie.
As far as I can see there is no sites-enabled and sites-available anymore. Is
that right?

Another strange thing is that nginx looks in "/etc/nginx/html" for
files, but this directory does not exist. For example:

2015/11/24 14:24:51 [error] 14763#14763: *70 open()
"/etc/nginx/html/robots.txt" failed (2: No such file or directory)

But I can't find an entry for "/etc/nginx/html" in any config file
placed in /etc/nginx.
Is it "hard-coded" in the source?

Regards,
Basti

From black.fledermaus at arcor.de  Tue Nov 24 14:17:19 2015
From: black.fledermaus at arcor.de (basti)
Date: Tue, 24 Nov 2015 15:17:19 +0100
Subject: [Solved] nginx 1.9.6 /etc/nginx/html
In-Reply-To: <56546C23.7080104@arcor.de>
References: <56546C23.7080104@arcor.de>
Message-ID: <5654716F.9000404@arcor.de>

Upgrading to nginx 1.9.7 fixed the error with /etc/nginx/html.

Am 24.11.2015 um 14:54 schrieb basti:
> Hello,
> I have installed nginx 1.9.6 on debian jessie.
> As far as I can see there is no sites-enabled and sites-available anymore. Is
> that right?
>
> Another strange thing is that nginx looks in "/etc/nginx/html" for
> files, but this directory does not exist. For example:
>
> 2015/11/24 14:24:51 [error] 14763#14763: *70 open()
> "/etc/nginx/html/robots.txt" failed (2: No such file or directory)
>
> But I can't find an entry for "/etc/nginx/html" in any config file
> placed in /etc/nginx.
> Is it "hard-coded" in the source?
>
> Regards,
> Basti
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From reallfqq-nginx at yahoo.fr  Tue Nov 24 17:41:39 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 24 Nov 2015 18:41:39 +0100
Subject: 403 forbidden with lynx www browser
In-Reply-To: <20151124130729.GA2616@privacy>
References: <20151122040524.GD1496@privacy>
 <20151122172511.GO3351@daoine.org>
 <20151122232320.GB1382@privacy>
 <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com>
 <20151123230001.GA1095@privacy>
 <20151124125721.GF74233@mdounin.ru>
 <20151124130729.GA2616@privacy>
Message-ID: 

On Tue, Nov 24, 2015 at 2:07 PM, Sylvain BERTRAND <
sylvain.bertrand at gmail.com> wrote:

> Then, if I understand well, it's a mistake by CloudFlare, which happens to
> use only nginx.
>
> That makes sense, and it is quite unfair for nginx to be the name displayed
> while CloudFlare's blocking is being applied.
>
> That's really misleading.
>

The 'Server' header from the response is 'cloudflare-nginx', not 'nginx',
btw, as all the CloudFlare-hosted websites show.
That means it is a custom webserver based, even remotely, on nginx. I find
it quite useful, rather than misleading: it is definitely not stock nginx.
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Tue Nov 24 22:21:28 2015 From: nginx-forum at nginx.us (lmauldinpe15) Date: Tue, 24 Nov 2015 17:21:28 -0500 Subject: Complex url rewriting In-Reply-To: References: Message-ID: <69d8b99265d1ea16ad17c041b2bc3fc9.NginxMailingListEnglish@forum.nginx.org> If I followed your post correctly, you wanted me to make a map of locations. However, I want the users to be able to add a new directory (ex: /var/www/c/foo) and have a url like 'http://xxx/c/foo/index.php/users/login' automatically served by /var/www/c/foo/index.php without having to change Nginx configuration. Is this possible? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263024,263062#msg-263062 From nginx-forum at nginx.us Tue Nov 24 22:53:22 2015 From: nginx-forum at nginx.us (lmauldinpe15) Date: Tue, 24 Nov 2015 17:53:22 -0500 Subject: Complex url rewriting In-Reply-To: <69d8b99265d1ea16ad17c041b2bc3fc9.NginxMailingListEnglish@forum.nginx.org> References: <69d8b99265d1ea16ad17c041b2bc3fc9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8b790f7a00b5d0ccd539441b6a89a851.NginxMailingListEnglish@forum.nginx.org> I think this seems to be working for now. Does anyone see a problem with it: # define web root root /var/www/html/public; index index.php default.php index.html; location / { try_files $uri $uri/ =404; } location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; # Save the $fastcgi_path_info before try_files clear it set $path_info $fastcgi_path_info; fastcgi_param PATH_INFO $path_info; try_files $fastcgi_script_name =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263024,263065#msg-263065 From sylvain.bertrand at gmail.com Tue Nov 24 23:43:20 2015 From: sylvain.bertrand at gmail.com (Sylvain BERTRAND) Date: Wed, 25 Nov 2015 10:43:20 +1100 Subject: 403 forbidden with lynx www browser In-Reply-To: References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> <20151123230001.GA1095@privacy> <20151124125721.GF74233@mdounin.ru> <20151124130729.GA2616@privacy> Message-ID: <20151124234320.GA1136@privacy> On Tue, Nov 24, 2015 at 06:41:39PM +0100, B.R. wrote: > On Tue, Nov 24, 2015 at 2:07 PM, Sylvain BERTRAND < > sylvain.bertrand at gmail.com> wrote: > > > Then, if I understand well, it's a CloudFlare mistake, who happens to use > > nginx only. > > > > That makes sense, and is quite unfair for nginx to be displayed while > > applying > > CloudFlare blocking. > > > > That's really mis-leading. > > > > The 'Server' header from the response is 'cloudflare-nginx', not 'nginx', > btw, as all the CloudFlare-hosted websites show. > That means it is a custom webserver based, even remotely, on nginx. I find > it quite useful, rather than misleading: it is definitely not stock nginx. The server header gives more information but what's displayed in the xhtml body of the 403 response is "nginx" only which is what will be displayed in the user www browser, and that's mis-leading and unfair for nginx. Maybe somebody should contact CloudFlare to make them modify their response bodies from stock nginx. 
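As an aside, overriding the stock 403 body on any nginx-based frontend is a one-directive change. A minimal, purely illustrative sketch — the path and file name below are placeholders, not anything CloudFlare or rawstory.com is known to use:

    error_page 403 /errors/403.html;

    location = /errors/403.html {
        internal;
        root /usr/share/nginx/custom;   # serves /usr/share/nginx/custom/errors/403.html
    }

Whether a blocked visitor sees "nginx" in the body, or the operator's own branding, is therefore entirely up to whoever runs the frontend.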
-- Sylvain From bra at fsn.hu Wed Nov 25 10:21:35 2015 From: bra at fsn.hu (Nagy, Attila) Date: Wed, 25 Nov 2015 11:21:35 +0100 Subject: http_dav_module fsync Message-ID: <56558BAF.7090606@fsn.hu> Hi, I want to make sure that when nginx's DAV module responds with an OK for a PUT (or COPY) request, the file is committed to disk. I see the following solutions: 1. set the filesystem to sync (for example zfs sync=always) 2. LD_PRELOAD a library into nginx which does an fsync before the close() call 3. introduce a "dav_fsync on" configuration option 4. add a query parameter (or a header) which can be used to signal that the client wants the file to be fsync()-ed. #1 is the worst performance-wise; #2 is easy, but seems a little bit hackish. A quick glance at the source code tells me that implementing #3 could be as easy as calling fsync() at the end of ngx_http_dav_put_handler(), like this:
    r->headers_out.status = status;
    r->header_only = 1;
    fsync(r->request_body->temp_file->file.fd);
    ngx_http_finalize_request(r, ngx_http_send_header(r));
    return;
Of course this blocks, which may badly hurt nginx performance. What is your opinion on this? How should this be handled? From reallfqq-nginx at yahoo.fr Wed Nov 25 17:38:54 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 25 Nov 2015 18:38:54 +0100 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151124234320.GA1136@privacy> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> <20151123230001.GA1095@privacy> <20151124125721.GF74233@mdounin.ru> <20151124130729.GA2616@privacy> <20151124234320.GA1136@privacy> Message-ID: On Wed, Nov 25, 2015 at 12:43 AM, Sylvain BERTRAND < sylvain.bertrand at gmail.com> wrote: > Maybe somebody > should contact CloudFlare to make them modify their response bodies from > stock > nginx. Do you qualify as somebody? :o) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Wed Nov 25 18:29:53 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 25 Nov 2015 19:29:53 +0100 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151124234320.GA1136@privacy> References: <20151122040524.GD1496@privacy> <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> <20151123230001.GA1095@privacy> <20151124125721.GF74233@mdounin.ru> <20151124130729.GA2616@privacy> <20151124234320.GA1136@privacy> Message-ID: > The server header gives more information but what's displayed in the xhtml body > of the 403 response is "nginx" only which is what will be displayed in the user > www browser, and that's mis-leading and unfair for nginx. Maybe somebody > should contact CloudFlare to make them modify their response bodies from stock > nginx. When cloudflare blocks a user-agent, it's because the cloudflare customer configured it to do so. Contact the customer, not cloudflare. Also, when cloudflare emits a 403 error, the error page is *ALWAYS* cloudflare branded. Also see: https://support.cloudflare.com/hc/en-us/articles/200169226-Why-am-I-getting-a-403-error- That is however not the case with rawstory.com. Therefore: - rawstory.com uses cloudflare, but - cloudflare does not block lynx in this instance - the backend (rawstory.com) nginx server blocks the user-agent There is no one to blame other than the original host. 
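For context, the kind of origin-side blocking described above is usually only a few lines of configuration. A minimal, purely hypothetical sketch — nothing is known about rawstory.com's actual setup, and the User-Agent pattern is only illustrative:

    # somewhere inside the origin's server{} block
    if ($http_user_agent ~* "lynx|links|w3m") {
        return 403;
    }

A bare "return 403" like this, with no custom error_page configured, is exactly what produces the plain "403 Forbidden / nginx" page that a lynx user ends up seeing.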
Lukas From nginx-forum at nginx.us Wed Nov 25 20:31:19 2015 From: nginx-forum at nginx.us (no.1) Date: Wed, 25 Nov 2015 15:31:19 -0500 Subject: Reverse proxy to QNAP does not work Message-ID: I've been trying to reach my QNAP NAS from internet via reverse proxy on Raspberry Pi. Beside the stand alone QNAP a owncloud installation should run as well on the RPi. The plan was to use subdirectories to access both (https://example.com/nas and https://example.com/owncloud). Subdomains are not possible. Even if I leave out the owncloud installation it won't work to access the NAS via subdirectory. It works with ?direct access? and proxy_pass: upstream php-handler { server unix:/var/run/php5-fpm.sock; } server { listen 80; server_name example.com; return 301 https://$server_name$request_uri; } server { listen 443 ssl; server_name example.com; ssl_certificate /etc/nginx/ssl/owncloud.crt; ssl_certificate_key /etc/nginx/ssl/owncloud.key; location / { proxy_pass http://qnap:8080; proxy_set_header X-Real-IP $remote_addr; } } But as soon as I try that a rewrite it ends up shortening the url to https://example.com/cgi-bin/login.html?1448481759 and an 404 error instead of https://example.com/nas/cgi-bin (with the logon windows from the QNAP which is show locally if I use https://qnap:8080). location block look like: location /nas/ { rewrite /nas/(.*) /$1 break; proxy_set_header Accept-Encoding ""; proxy_pass http://qnap:8080; proxy_set_header X-Real-IP $remote_addr; proxy_redirect https://qnap:8080/ /nas/; sub_filter '"/' '"/nas/'; sub_filter_once off; } Response header on this page says gzip for content-encoding and content-type text/html (which I've read in another post is often relevant using sub_filter directives). If I add a location block before it looks fine, the logon window appears but I can't logon. ?Password or username is wrong?. location / { if ($http_referer ~ /nas) { rewrite ^(.*) /nas$1 permanent; } return 404; } I try to understand nginx and the directives behind, but it?s really hard to find ?my way?. So at the moment it?s more trial and error and I hope someone can help. Kind regs no.1 _________ my configuration: - QNAP with latest firmware 4.2.0 (2015/10/23) - (Standard) Raspbian (Debian Jessie) on a RPi Model B - nginx 1.6.2 with --with-http_sub_module and --add-module=/build/nginx-Kaumns/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module - PHP 5.6 mit php-fpm Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263081,263081#msg-263081 From nginx-forum at nginx.us Wed Nov 25 22:27:09 2015 From: nginx-forum at nginx.us (chobit) Date: Wed, 25 Nov 2015 17:27:09 -0500 Subject: SMTP Forward Nginx Proxy Message-ID: Almost all of our customers send e-mail through our private SMTP servers, but we have one customer who chooses to use a third-party SMTP provider. The third-party SMTP service requires whitelisting of any sending IP addresses which is normal. Unfortunately the components in our infrastructure which send e-mail are part of an autoscaling group so the IP addresses can vary. To solve for this problem I would like to setup an nginx configuration which accepts SMTP connections to it and then proxies them to another IP address (the third-party SMTP service) so the requests to the mail server always appear to the third-party SMTP service as if they came from the same server. Is it possible to solve this issue with ngingx smtp proxy? How should i forwarded smtp in case with third-party SMTP service? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263084,263084#msg-263084 From sylvain.bertrand at gmail.com Thu Nov 26 01:17:23 2015 From: sylvain.bertrand at gmail.com (Sylvain BERTRAND) Date: Thu, 26 Nov 2015 12:17:23 +1100 Subject: 403 forbidden with lynx www browser In-Reply-To: References: <20151122172511.GO3351@daoine.org> <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> <20151123230001.GA1095@privacy> <20151124125721.GF74233@mdounin.ru> <20151124130729.GA2616@privacy> <20151124234320.GA1136@privacy> Message-ID: <20151126011723.GA1108@privacy> On Wed, Nov 25, 2015 at 07:29:53PM +0100, Lukas Tribus wrote: > https://support.cloudflare.com/hc/en-us/articles/200169226-Why-am-I-getting-a-403-error- > > > That is however not the case with rawstory.com. > > > Therefor: > - rawstory.com uses cloudflare, but > - cloudflare does not block lynx in this instance > - the backend (rawstory.com) nginx server blocks the user-agent > > > There is no one to blame other than the original host. Then, back to square one. Usually, when lynx is 403 forbidden, as far as I can remember, it was nginx. If I read well the document with the link provided above, it means that the "browser integrity" from nginx mod_security seems to have a setting blocking alternative light browsers like lynx. If it's actually the case, nginx mod_security is concluding that lynx is a "compromised browser" way too easily. -- Sylvain From agentzh at gmail.com Thu Nov 26 04:14:40 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 26 Nov 2015 12:14:40 +0800 Subject: Forward request after operation with worker? In-Reply-To: <20151117224846.GA5619@sg1> References: <20151117224846.GA5619@sg1> Message-ID: Hello! On Wed, Nov 18, 2015 at 6:48 AM, Stephane Wirtel wrote: > With a request, is it possible to redirect to a running worker and if > this one is not running, just enable it. > > I explain, I would like to implement a reverse proxy with Lua and > OpenResty and Redis. > > Redis will store a mapping between the hostname and a tuple (ip of the > worker:port). > > But the workers can be down, because unused in time. > > I was thinking to keep the request, execute a "light thread" in Lua > (with a timeout of 1s). The light thread will active the worker. > If the timeout is reached, we return an error, else we send the request > to the worker. > > 1. Is it possible ? > 2. I would like to know how will you make that, I don't know Lua, just used > it in the past just for small scripts with imapfilter or a PoC with > OpenResty. Sorry, I don't really understand what you're trying to achieve. Are you trying to make the nginx workers talk to each other? Maybe you can use the lua_shared_dict to share your data among them? BTW, it's recommended to post to the openresty-en mailing list for OpenResty specific questions. Please see https://openresty.org/#Community Thanks! Best regards, -agentzh From nginx-forum at nginx.us Thu Nov 26 04:58:19 2015 From: nginx-forum at nginx.us (DankMemes) Date: Wed, 25 Nov 2015 23:58:19 -0500 Subject: How are client certificate expired CRLs handled? Message-ID: <8abc7a6cc7611332ef86544ff8e612f1.NginxMailingListEnglish@forum.nginx.org> If any of the concatenated CRLs in the file provided to ssl_crl have expired (root or intermediate), what is the Nginx behavior (assuming ssl_verify_client is on)? Does it result in failing verification of the client certificate (chain), or does it just log a warning, or nothing happens? 
If it does fail verification, how can I detect that specific problems and still perform the rest of verification (valid certificate which itself has not expired and chain of trust can be established to the verification depth) (the CA I'm using to generate the CRLs is on the same server, so it's not a problem if it's actually expired -- though a warning message would be nice as a reminder to the admin). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263087,263087#msg-263087 From lists at ruby-forum.com Thu Nov 26 06:27:44 2015 From: lists at ruby-forum.com (Simon Walker) Date: Thu, 26 Nov 2015 07:27:44 +0100 Subject: Nginx Magento multistore configuration In-Reply-To: References: Message-ID: Thanks for this immediate help. -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Thu Nov 26 13:28:35 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Nov 2015 16:28:35 +0300 Subject: How are client certificate expired CRLs handled? In-Reply-To: <8abc7a6cc7611332ef86544ff8e612f1.NginxMailingListEnglish@forum.nginx.org> References: <8abc7a6cc7611332ef86544ff8e612f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151126132835.GU74233@mdounin.ru> Hello! On Wed, Nov 25, 2015 at 11:58:19PM -0500, DankMemes wrote: > If any of the concatenated CRLs in the file provided to ssl_crl have expired > (root or intermediate), what is the Nginx behavior (assuming > ssl_verify_client is on)? Does it result in failing verification of the > client certificate (chain), or does it just log a warning, or nothing > happens? If it does fail verification, how can I detect that specific > problems and still perform the rest of verification (valid certificate which > itself has not expired and chain of trust can be established to the > verification depth) (the CA I'm using to generate the CRLs is on the same > server, so it's not a problem if it's actually expired -- though a warning > message would be nice as a reminder to the admin). CRLs are just loaded by nginx from a file specified by the ssl_crl directive, and no additional checks are made. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Nov 26 13:51:25 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Nov 2015 16:51:25 +0300 Subject: SMTP Forward Nginx Proxy In-Reply-To: References: Message-ID: <20151126135125.GW74233@mdounin.ru> Hello! On Wed, Nov 25, 2015 at 05:27:09PM -0500, chobit wrote: > Almost all of our customers send e-mail through our private SMTP servers, > but we have one customer who chooses to use a third-party SMTP provider. The > third-party SMTP service requires whitelisting of any sending IP addresses > which is normal. > Unfortunately the components in our infrastructure which send e-mail are > part of an autoscaling group so the IP addresses can vary. > To solve for this problem I would like to setup an nginx configuration which > accepts SMTP connections to it and then proxies them to another IP address > (the third-party SMTP service) so the requests to the mail server always > appear to the third-party SMTP service as if they came from the same > server. > > Is it possible to solve this issue with ngingx smtp proxy? > How should i forwarded smtp in case with third-party SMTP service? You can configure nginx mail proxy to forward all connections to a particular SMTP server. To do so, return server IP addresss from your auth_http service, see http://nginx.org/r/auth_http. 
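As an illustration of that approach, a minimal sketch with the auth_http endpoint served by nginx itself — not a drop-in configuration: the 8008 port and the 198.51.100.10 provider address are placeholders, and it assumes nginx is built with the mail proxy modules:

    mail {
        auth_http 127.0.0.1:8008/auth;

        server {
            listen    25;
            protocol  smtp;
            smtp_auth none;        # clients are accepted without authentication
        }
    }

    http {
        server {
            listen 127.0.0.1:8008;

            location = /auth {
                # hand every session to the same upstream SMTP server
                add_header Auth-Status OK;
                add_header Auth-Server 198.51.100.10;
                add_header Auth-Port   25;
                return 200;
            }
        }
    }

With a static responder like this, the whitelisted third-party provider sees every connection arriving from the proxy's single IP address, which is the point of the exercise.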
Note though, that nginx is not designed to talk to 3rd party servers, it is designed to proxy to your own backends. In particular, in case of SMTP this results in the fact that no auth commands are sent to backend servers - they are expected to be configured to accept mail without authentification. Depending on a particular 3rd party service you are trying to use, this may or may not be an option. For your particular task, it may be easier to configure raw TCP proxy to a particular 3rd party SMTP server (e.g., using nginx stream proxy, http://nginx.org/en/docs/stream/ngx_stream_core_module.html) or a full-featured STMP server with a smarthost configured. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Nov 26 14:14:07 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 26 Nov 2015 09:14:07 -0500 Subject: SMTP Forward Nginx Proxy In-Reply-To: <20151126135125.GW74233@mdounin.ru> References: <20151126135125.GW74233@mdounin.ru> Message-ID: <6ea737005aee36e78c72ac6c78952867.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > For your particular task, it may be easier to configure raw TCP > proxy to a particular 3rd party SMTP server (e.g., using nginx > stream proxy, > http://nginx.org/en/docs/stream/ngx_stream_core_module.html) > or a full-featured STMP server with a smarthost configured. Simple example 2 interfaces balanced backend limited to a vlan routed gateway: stream { error_log logs/stream_error_smtp.log; upstream backendsmtp { server 192.168.28.21:25; server 192.168.28.22:25; server 192.168.28.23:25; } server { listen 25; proxy_connect_timeout 30s; proxy_timeout 30s; proxy_pass backendsmtp; allow 192.168.29.1; deny all; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263084,263099#msg-263099 From nginx-forum at nginx.us Fri Nov 27 02:03:19 2015 From: nginx-forum at nginx.us (vps4) Date: Thu, 26 Nov 2015 21:03:19 -0500 Subject: proxy_store skip not 200 Message-ID: <82edfe163fd08516ddca46b155943eae.NginxMailingListEnglish@forum.nginx.org> i setup proxy_store works fine, but has some problem when the backend response 404 or other result, proxy_store still save them for example: backend 1.jpg response 404 and html result, proxy_store will store it in disk how can i skip that results not 200 and verify by mime etc... thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263110,263110#msg-263110 From sylvain.bertrand at gmail.com Fri Nov 27 03:33:36 2015 From: sylvain.bertrand at gmail.com (Sylvain BERTRAND) Date: Fri, 27 Nov 2015 14:33:36 +1100 Subject: 403 forbidden with lynx www browser In-Reply-To: <20151126011723.GA1108@privacy> References: <20151122232320.GB1382@privacy> <9FB8528595F3BE4E9D4AAB664B7A500D16926671@smtp_mail.bankofamerica.com> <20151123230001.GA1095@privacy> <20151124125721.GF74233@mdounin.ru> <20151124130729.GA2616@privacy> <20151124234320.GA1136@privacy> <20151126011723.GA1108@privacy> Message-ID: <20151127033335.GE1154@privacy> On Thu, Nov 26, 2015 at 12:17:23PM +1100, Sylvain BERTRAND wrote: > On Wed, Nov 25, 2015 at 07:29:53PM +0100, Lukas Tribus wrote: > > https://support.cloudflare.com/hc/en-us/articles/200169226-Why-am-I-getting-a-403-error- > > > > > > That is however not the case with rawstory.com. > > > > > > Therefor: > > - rawstory.com uses cloudflare, but > > - cloudflare does not block lynx in this instance > > - the backend (rawstory.com) nginx server blocks the user-agent > > > > > > There is no one to blame other than the original host. 
> > Then, back to square one. > > Usually, when lynx is 403 forbidden, as far as I can remember, it was nginx. If I > read well the document with the link provided above, it means that the "browser > integrity" from nginx mod_security seems to have a setting blocking alternative > light browsers like lynx. If it's actually the case, nginx mod_security > is concluding that lynx is a "compromised browser" way too easily. ... and another one: https://www.whatismyip.com/ip-whois-lookup/ -- Sylvain From nginx-forum at nginx.us Fri Nov 27 10:03:43 2015 From: nginx-forum at nginx.us (frederico) Date: Fri, 27 Nov 2015 05:03:43 -0500 Subject: Nginx Reverse proxy + RD Gateway Auth Problem In-Reply-To: References: <20141017125529.GC35211@mdounin.ru> Message-ID: Hi timbo, I am also trying to connect to a Remote Desktop Gateway through nginx. Did you get it work? Regards, Fred Posted at Nginx Forum: https://forum.nginx.org/read.php?2,254095,263116#msg-263116 From nginx-forum at nginx.us Fri Nov 27 11:06:32 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Nov 2015 06:06:32 -0500 Subject: Nginx Reverse proxy + RD Gateway Auth Problem In-Reply-To: References: <20141017125529.GC35211@mdounin.ru> Message-ID: <6fecef054759e9d9d01d938805bfc724.NginxMailingListEnglish@forum.nginx.org> frederico Wrote: ------------------------------------------------------- > Hi timbo, > > I am also trying to connect to a Remote Desktop Gateway through nginx. > Did you get it work? Have you tried this using stream {} ? which works fine for vpn and other streaming services. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,254095,263117#msg-263117 From nginx-forum at nginx.us Fri Nov 27 11:30:57 2015 From: nginx-forum at nginx.us (frederico) Date: Fri, 27 Nov 2015 06:30:57 -0500 Subject: Nginx Reverse proxy + RD Gateway Auth Problem In-Reply-To: <6fecef054759e9d9d01d938805bfc724.NginxMailingListEnglish@forum.nginx.org> References: <20141017125529.GC35211@mdounin.ru> <6fecef054759e9d9d01d938805bfc724.NginxMailingListEnglish@forum.nginx.org> Message-ID: <30a75ca5fa9ed66b7b9182e2b9faacd1.NginxMailingListEnglish@forum.nginx.org> Hi, sorry, but I don't understand what you mean with stream {}, my nginx config for the RD Gateway is the following: server {listen *:6080; listen *:6443 ssl; server_name ~^rdg..*$; include ssl_rdg.conf; location / {proxy_pass https://s2012-rdg; include proxy_defaults.conf;}} Should I replace proxy_pass to stream? Regards, Fred Posted at Nginx Forum: https://forum.nginx.org/read.php?2,254095,263118#msg-263118 From nginx-forum at nginx.us Fri Nov 27 13:28:26 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Nov 2015 08:28:26 -0500 Subject: Nginx Reverse proxy + RD Gateway Auth Problem In-Reply-To: <30a75ca5fa9ed66b7b9182e2b9faacd1.NginxMailingListEnglish@forum.nginx.org> References: <20141017125529.GC35211@mdounin.ru> <6fecef054759e9d9d01d938805bfc724.NginxMailingListEnglish@forum.nginx.org> <30a75ca5fa9ed66b7b9182e2b9faacd1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2afea5e7fd097102d9748590f45f3fa8.NginxMailingListEnglish@forum.nginx.org> frederico Wrote: ------------------------------------------------------- > Should I replace proxy_pass to stream? 
If your config is a http {} one then yes, simple example https://forum.nginx.org/read.php?2,263084,263099#msg-263099 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,254095,263119#msg-263119 From mdounin at mdounin.ru Fri Nov 27 13:54:57 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Nov 2015 16:54:57 +0300 Subject: proxy_store skip not 200 In-Reply-To: <82edfe163fd08516ddca46b155943eae.NginxMailingListEnglish@forum.nginx.org> References: <82edfe163fd08516ddca46b155943eae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20151127135457.GZ74233@mdounin.ru> Hello! On Thu, Nov 26, 2015 at 09:03:19PM -0500, vps4 wrote: > i setup proxy_store works fine, but has some problem > when the backend response 404 or other result, proxy_store still save them > for example: > > backend 1.jpg response 404 and html result, proxy_store will store it in > disk > how can i skip that results not 200 and verify by mime etc... The proxy_store mechanism only stores responses with status code 200. If you see it storing 404s - responses you see are likely returned with incorrect status code, that is, they are 200 in fact. An obvious way way to fix this is to fix the backend to properly return status code. Additional response verification before it's store isn't available with proxy_store. Some verification can be done when using proxy_cache, using the proxy_no_cache directive (http://nginx.org/r/proxy_no_cache) and appropriate mapping of upstream response headers. -- Maxim Dounin http://nginx.org/ From ngw at nofeed.org Fri Nov 27 15:54:29 2015 From: ngw at nofeed.org (Nicholas Wieland) Date: Fri, 27 Nov 2015 16:54:29 +0100 Subject: nginx SSL_do_handshake() failed Message-ID: <15B2CB39-BC59-48CC-84C6-3E6BD491FDBF@nofeed.org> it's the first time I configure an SSL certificate on my development machine (I'm no sysadmin - I need SSL to work with facebook). I decided to go with ngingx proxying a ruby sinatra application, nothing fancy. This is the error I get when Facebook tries to connect to my HTTP server. AFAIK nginx is the culprit here: 2015/11/26 15:42:03 [info] 42872#0: *3 SSL_do_handshake() failed (SSL: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:SSL alert number 48) while SSL handshaking, client: 31.13.113.70, server: 0.0.0.0:4567 This is what I did: Downloaded the cert (a .key, a .crt and a .csr) from RapidSSL Downloaded the trusted cert from RapidSSL (https://knowledge.rapidssl.com/library/VERISIGN/ALL_OTHER/RapidSSL%20Intermediate/RapidSSL_CA_bundle.pem) and saved locally under /etc/ssl/cert/ Installed locally nginx and configured like this: https://gist.github.com/ngw/f97adc4194b08ea355c8 Restarted both nginx and puma respectively on port 4567 and 8080 Went to https://sandbox.thing.it, the app responded as expected, the connection was encrypted and the certificate appears to be the correct one. Went to Facebook and attempted to register a new page subscription (https://developers.facebook.com/docs/graph-api/webhooks/v2.2). Had the error reported on the top (SSL_do_handshake() failed) when Facebook attempted to validate my callback url Any suggestion? Thanks for your time, ngw From mdounin at mdounin.ru Fri Nov 27 16:14:02 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Nov 2015 19:14:02 +0300 Subject: nginx SSL_do_handshake() failed In-Reply-To: <15B2CB39-BC59-48CC-84C6-3E6BD491FDBF@nofeed.org> References: <15B2CB39-BC59-48CC-84C6-3E6BD491FDBF@nofeed.org> Message-ID: <20151127161402.GC74233@mdounin.ru> Hello! 
On Fri, Nov 27, 2015 at 04:54:29PM +0100, Nicholas Wieland wrote: > it's the first time I configure an SSL certificate on my development machine (I'm no sysadmin - I need SSL to work with facebook). I decided to go with ngingx proxying a ruby sinatra application, nothing fancy. > > This is the error I get when Facebook tries to connect to my HTTP server. AFAIK nginx is the culprit here: > > 2015/11/26 15:42:03 [info] 42872#0: *3 SSL_do_handshake() failed (SSL: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:SSL alert number 48) while SSL handshaking, client: 31.13.113.70, server: 0.0.0.0:4567 > > This is what I did: > > Downloaded the cert (a .key, a .crt and a .csr) from RapidSSL > Downloaded the trusted cert from RapidSSL (https://knowledge.rapidssl.com/library/VERISIGN/ALL_OTHER/RapidSSL%20Intermediate/RapidSSL_CA_bundle.pem) and saved locally under /etc/ssl/cert/ > Installed locally nginx and configured like this: https://gist.github.com/ngw/f97adc4194b08ea355c8 > Restarted both nginx and puma respectively on port 4567 and 8080 > > Went to https://sandbox.thing.it, the app responded as expected, the connection was encrypted and the certificate appears to be the correct one. > > Went to Facebook and attempted to register a new page subscription (https://developers.facebook.com/docs/graph-api/webhooks/v2.2). Had the error reported on the top (SSL_do_handshake() failed) when Facebook attempted to validate my callback url > > Any suggestion? Make sure to properly configure certificate chains, see http://nginx.org/en/docs/http/configuring_https_servers.html#chains for details. Note well that if you have no experience with SSL configuration, it's a good idea to avoid configuring anything but ssl_certificate and ssl_certificate_key (and ssl_session_cache for performance reasons). That is, remove (or comment out) all other ssl_* directives in your configuration (including ssl_stapling, ssl_stapling_verify, ssl_prefer_server_ciphers, ssl_protocols, ssl_ciphers) unless you'll get it working. You can re-add these directives later if needed. The error you are seeing is likely unrelated, but it's generally better approach anyway. -- Maxim Dounin http://nginx.org/ From ngw at nofeed.org Fri Nov 27 16:41:23 2015 From: ngw at nofeed.org (Nicholas Wieland) Date: Fri, 27 Nov 2015 17:41:23 +0100 Subject: nginx SSL_do_handshake() failed In-Reply-To: <20151127161402.GC74233@mdounin.ru> References: <15B2CB39-BC59-48CC-84C6-3E6BD491FDBF@nofeed.org> <20151127161402.GC74233@mdounin.ru> Message-ID: > On 27 Nov 2015, at 17:14, Maxim Dounin wrote: > > Hello! > > On Fri, Nov 27, 2015 at 04:54:29PM +0100, Nicholas Wieland wrote: > >> it's the first time I configure an SSL certificate on my development machine (I'm no sysadmin - I need SSL to work with facebook). I decided to go with ngingx proxying a ruby sinatra application, nothing fancy. >> >> This is the error I get when Facebook tries to connect to my HTTP server. 
AFAIK nginx is the culprit here: >> >> 2015/11/26 15:42:03 [info] 42872#0: *3 SSL_do_handshake() failed (SSL: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:SSL alert number 48) while SSL handshaking, client: 31.13.113.70, server: 0.0.0.0:4567 >> >> This is what I did: >> >> Downloaded the cert (a .key, a .crt and a .csr) from RapidSSL >> Downloaded the trusted cert from RapidSSL (https://knowledge.rapidssl.com/library/VERISIGN/ALL_OTHER/RapidSSL%20Intermediate/RapidSSL_CA_bundle.pem) and saved locally under /etc/ssl/cert/ >> Installed locally nginx and configured like this: https://gist.github.com/ngw/f97adc4194b08ea355c8 >> Restarted both nginx and puma respectively on port 4567 and 8080 >> >> Went to https://sandbox.thing.it, the app responded as expected, the connection was encrypted and the certificate appears to be the correct one. >> >> Went to Facebook and attempted to register a new page subscription (https://developers.facebook.com/docs/graph-api/webhooks/v2.2). Had the error reported on the top (SSL_do_handshake() failed) when Facebook attempted to validate my callback url >> >> Any suggestion? > > Make sure to properly configure certificate chains, see > http://nginx.org/en/docs/http/configuring_https_servers.html#chains > for details. I?m not entirely sure I understand why I need a certificate chain. The .crt file is what the provider sent me, that?s what I use. Should I ?chain? the .crt file the provider sent me with the RapidSSL bundle? This is for testing and development, I don?t really care about performances, a slow solution is perfectly fine ngw -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Nov 27 17:16:19 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Nov 2015 20:16:19 +0300 Subject: nginx SSL_do_handshake() failed In-Reply-To: References: <15B2CB39-BC59-48CC-84C6-3E6BD491FDBF@nofeed.org> <20151127161402.GC74233@mdounin.ru> Message-ID: <20151127171619.GD74233@mdounin.ru> Hello! On Fri, Nov 27, 2015 at 05:41:23PM +0100, Nicholas Wieland wrote: > > On 27 Nov 2015, at 17:14, Maxim Dounin wrote: [...] > > Make sure to properly configure certificate chains, see > > http://nginx.org/en/docs/http/configuring_https_servers.html#chains > > for details. > > I?m not entirely sure I understand why I need a certificate > chain. The .crt file is what the provider sent me, that?s what I > use. Should I ?chain? the .crt file the provider sent me with > the RapidSSL bundle? This is for testing and development, I > don?t really care about performances, a slow solution is > perfectly fine Certificate chains are needed, because a typical certificate is issued by an intermediate CA, while browsers know only about root CAs. And a web server must supply intermediate CA certificate to a browser (or other client) for the browser to be able to verify that the certificate provided by the web server should be trusted. The link quoted explains how to properly put certs into a certificate file for things to work, and how to validate that the result is correct. Normally it's as easy as just concatenating your server's certificate and the bundle provided by your CA. But things may vary depending on CA - some CAs may provide incorrect bundles, or certs in a wrong order within the bundle, or there may be more than one bundle and you'll have to choose the right one. 
That is, it's a good idea to understand what you are doing and verify that the resulting chain returned by your server contains all needed certs in the correct order (see "openssl s_client ..." part of the link). -- Maxim Dounin http://nginx.org/ From mayamatakeshi at gmail.com Sat Nov 28 00:17:53 2015 From: mayamatakeshi at gmail.com (mayamatakeshi) Date: Sat, 28 Nov 2015 09:17:53 +0900 Subject: Is it possible to get headers from X-Accel-Redirect reply Message-ID: Hello, my upstream server replies with this: HTTP/1.1 200 OK. Server: test. Date: Sat, 28 Nov 2015 00:05:55 GMT. Content-Type: application/json. Content-Length: 0. Connection: close. X-ExtraInfo: domain_name=test1.com ;domain_id=1000;user_name=user1;user_id=10001001. X-Accel-Redirect: /internal_redirect/192.168.2.153:7777. Access-Control-Allow-Origin: https://192.168.2.153:445. Access-Control-Allow-Credentials: true. which I handle with this: location ~ ^/internal_redirect/(.*) { internal; proxy_pass http://$1$is_args$args; } However, I want to pass X-ExtraInfo to the target upstream server so i tried: proxy_set_header X-ExtraInfo $sent_http_x_extrainfo; and even: proxy_set_header X-ExtraInfo $http_x_extrainfo; but none of them worked. So, is there a way to get headers from X-Accel-Redirect replies? Obs: I can workaround this passing the ExtraInfo as arguments in the target URL but I want try to rely on the current interface of the target upstream server. Regards, Takeshi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mayamatakeshi at gmail.com Sat Nov 28 01:10:19 2015 From: mayamatakeshi at gmail.com (mayamatakeshi) Date: Sat, 28 Nov 2015 10:10:19 +0900 Subject: Is it possible to get headers from X-Accel-Redirect reply In-Reply-To: References: Message-ID: On Sat, Nov 28, 2015 at 9:17 AM, mayamatakeshi wrote: > Hello, my upstream server replies with this: > > HTTP/1.1 200 OK. > Server: test. > Date: Sat, 28 Nov 2015 00:05:55 GMT. > Content-Type: application/json. > Content-Length: 0. > Connection: close. > X-ExtraInfo: domain_name=test1.com > ;domain_id=1000;user_name=user1;user_id=10001001. > X-Accel-Redirect: /internal_redirect/192.168.2.153:7777. > Access-Control-Allow-Origin: https://192.168.2.153:445. > Access-Control-Allow-Credentials: true. > > which I handle with this: > > location ~ ^/internal_redirect/(.*) { > internal; > proxy_pass http://$1$is_args$args; > } > > However, I want to pass X-ExtraInfo to the target upstream server so i > tried: > proxy_set_header X-ExtraInfo > $sent_http_x_extrainfo; > and even: > proxy_set_header X-ExtraInfo > $http_x_extrainfo; > but none of them worked. > So, is there a way to get headers from X-Accel-Redirect replies? > > Obs: I can workaround this passing the ExtraInfo as arguments in the > target URL but I want try to rely on the current interface of the target > upstream server. > > Hello, I have found the solution in the openresty google group: https://groups.google.com/d/msg/openresty-en/vPxQFntAfCc/VcCv1bhLeLsJ Regards, Takeshi -------------- next part -------------- An HTML attachment was scrubbed... URL: From icyou at qq.com Sat Nov 28 07:02:33 2015 From: icyou at qq.com (=?ISO-8859-1?B?U2hp?=) Date: Sat, 28 Nov 2015 15:02:33 +0800 Subject: 502 errors and request_time Message-ID: Hi: I've found 2 kinds of 502 errors in my server: connection reset by peer, it happens as request_time reach to 10s. connection timeout, it happens as request_time reach to 3s. Server is: centos , kernel(2.6.32) php5.2 php-fpm nginx 1.6.3. 
access logs outputs: [28/Nov/2015:14:41:08 +0800] "GET /ben2.php HTTP/1.1" 502 172 "-" "Apache-HttpClient/4.2.6 (java 1.5)" "-" 3.000 [28/Nov/2015:14:41:11 +0800] "GET /ben2.php HTTP/1.1" 502 172 "-" "Apache-HttpClient/4.2.6 (java 1.5)" "-" 10.000 error logs: 2015/11/28 14:41:11 [error] 12981#0: *798323 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: xx.xx.xx.xx, server: xx.xx.xx, request: "GET /ben2.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xx.xx.xx" 2015/11/28 14:41:08 [error] 12981#0: *798215 connect() failed (110: Connection timed out) while connecting to upstream, client: xx.xx.xx.xx, server: xx.xx.xx, request: "GET /ben2.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xx.xx.xx" I've tried "request_terminate_timeout, fastcgi_connect_timeout, fastcgi_read/write_timeout", but no help. What can I do next step ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From dewanggaba at xtremenitro.org Sat Nov 28 10:10:42 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Sat, 28 Nov 2015 17:10:42 +0700 Subject: purger directive not available Message-ID: <56597DA2.4010404@xtremenitro.org> Hello! I am using nginx 1.8.0 on Cent OS 7, tried to enable purger directive, mentioned on http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path But got error like this : nginx[46931]: nginx: [emerg] invalid parameter "purge=on" in /etc/nginx/conf.d/proxy.conf:5 My complete directive are like this : proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=mycache:10m inactive=60m use_temp_path=on purger=on purger_files=100 max_size=20g; proxy_temp_path /var/cache/nginx/proxy_temp 1 2; proxy_cache_key "$scheme$proxy_host$uri$is_args$args"; Any hints ? From wandenberg at gmail.com Sat Nov 28 10:26:54 2015 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Sat, 28 Nov 2015 08:26:54 -0200 Subject: purger directive not available In-Reply-To: <56597DA2.4010404@xtremenitro.org> References: <56597DA2.4010404@xtremenitro.org> Message-ID: This directive is available only on paid version Additionally, the following parameters are available as part of our commercial subscription : purger=on|off On Sat, Nov 28, 2015 at 8:10 AM, Dewangga Bachrul Alam < dewanggaba at xtremenitro.org> wrote: > Hello! > > I am using nginx 1.8.0 on Cent OS 7, tried to enable purger directive, > mentioned on > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path > > But got error like this : > nginx[46931]: nginx: [emerg] invalid parameter "purge=on" in > /etc/nginx/conf.d/proxy.conf:5 > > My complete directive are like this : > proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 > keys_zone=mycache:10m inactive=60m use_temp_path=on purger=on > purger_files=100 max_size=20g; > proxy_temp_path /var/cache/nginx/proxy_temp 1 2; > proxy_cache_key "$scheme$proxy_host$uri$is_args$args"; > > Any hints ? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dewanggaba at xtremenitro.org Sat Nov 28 10:48:58 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Sat, 28 Nov 2015 17:48:58 +0700 Subject: purger directive not available In-Reply-To: References: <56597DA2.4010404@xtremenitro.org> Message-ID: <5659869A.4020101@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! On 11/28/2015 05:26 PM, Wandenberg Peixoto wrote: > This directive is available only on paid version > > Additionally, the following parameters are available as part of our > commercial subscription : > > |purger|=|on|||off| My bad, didn't read the whole pages. :) Thank you, bro :) > > > On Sat, Nov 28, 2015 at 8:10 AM, Dewangga Bachrul Alam > > > wrote: > > Hello! > > I am using nginx 1.8.0 on Cent OS 7, tried to enable purger > directive, mentioned on > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_p ath > > But got error like this : nginx[46931]: nginx: [emerg] invalid > parameter "purge=on" in /etc/nginx/conf.d/proxy.conf:5 > > My complete directive are like this : proxy_cache_path > /var/cache/nginx/proxy_cache levels=1:2 keys_zone=mycache:10m > inactive=60m use_temp_path=on purger=on purger_files=100 > max_size=20g; proxy_temp_path /var/cache/nginx/proxy_temp 1 2; > proxy_cache_key "$scheme$proxy_host$uri$is_args$args"; > > Any hints ? > > _______________________________________________ nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJWWYaYAAoJEOV/0iCgKM1w1LwP/1g5RyvyITOKszO4flS1jY+q pzNsV6NfrotmIDQkAYyBux7qonXyaRPUszojfmj0ztkgDBehPXSZm0BZkfSnc5N0 CQKa2iyVNtqW+ogq8wgGLc5fDyKYcIR6O/YABCmZ9QG5KPPcpCDuIxj9CGV0MOHB 9bD6M04zcyFPWGdYJeXCz3prFS/ZMaMUenq8QioagoPm/z+2cdTCvqtNjsdHoAll j+UogeWtkUJroGFkacGPJmoh8qPZ7688u/Iz09+Lqx6CVfqhKlibGon8L4pbpI/K 65Z5A6X4qH5Pnf7b4PiEpkMXmXzozpJctNz5j7Eq6TZwxNgdW1lyEmq1yLpkkrI6 fhBoQGbGqy4ZaTOyIY7SVDvBAmV9iXZkkFhRKbcnp+NBId5++EK+mAj/MAs2iaqy vR87NarlOweHblSdd3aAyq43lw9tiuXREmOAOB1zYAVLfaBAYVOTrx96YodVFn4D rMqxjtVxbcmHPqOM2fpFm/hEfg76uYfeOToEIvgtvifMv5jLdIknxfVl4Z8KqDwG YOWB/jc+FJlGG1+qgTL3ose+mGRPTnwdlTqtybcdki2PKuwKdlNYRClW8voBngev 138DSkLVhDXtEnMERJzT4znEoFOZOTt7CnLwv3+DXXdlF6stlwSlE2xvI103QORb 3YLJdSIssv9iUVMPcawD =IiK8 -----END PGP SIGNATURE----- From adam at jooadam.hu Sat Nov 28 17:18:54 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Sat, 28 Nov 2015 18:18:54 +0100 Subject: Basic auth is slow Message-ID: Hi, I just noticed that enabling basic authentication adds between 100 and 150 ms to my otherwise 30-40 ms page load time. Is this known behaviour? Is this somehow inherent or a design / implementation mistake? ? From ytlec2014 at gmail.com Sun Nov 29 08:04:50 2015 From: ytlec2014 at gmail.com (Smart Goldman) Date: Sun, 29 Nov 2015 17:04:50 +0900 Subject: PHP and CGI on UserDir Message-ID: Hello. I am new here. I try to enable PHP and CGI(Perl) on UserDir (/home/user/public_html) with nginx. But on my Chrome, PHP script is downloaded and CGI script shows me "404 Not Found" page. Here's my configurations. What is wrong with my configurations? 
OS: Linux 3.10.0 / CentOS 7 64bit nginx version: 1.8.0 ---------------------------------------------- /etc/nginx/conf.d/default.conf: server { listen 80; server_name localhost; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /var/www/html; index index.html index.htm; } location ~ ^/~(.+?)(/.*)?$ { alias /home/$1/public_html$2; index index.html index.htm; autoindex on; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} location ~ (^~)*\.php$ { root /var/www/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~ (^~)*\.pl|cgi$ { root /var/www/html; fastcgi_pass 127.0.0.1:8999; fastcgi_index index.cgi; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~ .*~.*\.php$ { alias /home/$1/public_html$2; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~ .*~.*\.pl|cgi$ { alias /home/$1/public_html$2; fastcgi_pass 127.0.0.1:8999; fastcgi_index index.cgi; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } ---------------------------------------------- /etc/nginx/nginx.conf: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Sun Nov 29 08:28:51 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sun, 29 Nov 2015 13:58:51 +0530 Subject: PHP and CGI on UserDir In-Reply-To: References: Message-ID: What does the nginx error log ( /var/log/nginx/error.log) say when you access a php page? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ytlec2014 at gmail.com Sun Nov 29 09:12:24 2015 From: ytlec2014 at gmail.com (Smart Goldman) Date: Sun, 29 Nov 2015 18:12:24 +0900 Subject: PHP and CGI on UserDir In-Reply-To: References: Message-ID: Sorry I forgot error log. 
CGI outputs 2015/11/29 04:01:07 [error] 2618#0: *6 open() "/home/user/public_html/index.cgi" failed (2: No such file or directory), client: 119.105.136.26, server: localhost, request: "GET /~user/index.cgi HTTP/1.1", host: "host.domain.com" PHP outputs nothing to error.log. access.log says: 119.105.136.26 - - [29/Nov/2015:04:01:07 -0500] "GET /~user/index.cgi HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36" 119.105.136.26 - - [29/Nov/2015:04:08:23 -0500] "GET /~user/index.php HTTP/1.1" 200 20 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko" 2015-11-29 17:28 GMT+09:00 Anoop Alias : > What does the nginx error log ( /var/log/nginx/error.log) say when you > access a php page? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Sun Nov 29 09:41:33 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 29 Nov 2015 10:41:33 +0100 Subject: PHP and CGI on UserDir In-Reply-To: References: Message-ID: <1ce1e947290cde83c8b25d3ca5299369@none.at> Hi Smart Goldman. Am 29-11-2015 09:04, schrieb Smart Goldman: > Hello. I am new here. > > I try to enable PHP and CGI(Perl) on UserDir (/home/user/public_html) > with nginx. > But on my Chrome, PHP script is downloaded and CGI script shows me "404 > Not Found" page. > Here's my configurations. What is wrong with my configurations? Try to use nested locations. http://nginx.org/en/docs/http/ngx_http_core_module.html#location > OS: Linux 3.10.0 / CentOS 7 64bit > nginx version: 1.8.0 > > ---------------------------------------------- > /etc/nginx/conf.d/default.conf: > server { > listen 80; > server_name localhost; > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > > #charset koi8-r; > #access_log /var/log/nginx/log/host.access.log main; > > location / { > root /var/www/html; > index index.html index.htm; > } > > location ~ ^/~(.+?)(/.*)?$ { > alias /home/$1/public_html$2; > index index.html index.htm; > autoindex on; include my_php_config.conf; include my_cgi_config.conf; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /var/www/html; > } > > # proxy the PHP scripts to Apache listening on 127.0.0.1:80 [1] > # > #location ~ \.php$ { > # proxy_pass http://127.0.0.1; > #} > > # pass the PHP scripts to FastCGI server listening on > 127.0.0.1:9000 > [2] > # > #location ~ \.php$ { > # root html; > # fastcgi_pass 127.0.0.1:9000 [2]; > # fastcgi_index index.php; > # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; > # include fastcgi_params; > #} > > location ~ (^~)*\.php$ { > root /var/www/html; > fastcgi_pass 127.0.0.1:9000 [2]; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include /etc/nginx/fastcgi_params; > } > location ~ (^~)*\.pl|cgi$ { > root /var/www/html; > fastcgi_pass 127.0.0.1:8999 [3]; > fastcgi_index index.cgi; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include /etc/nginx/fastcgi_params; > } This block into "my_php_config.conf" > location ~ .*~.*\.php$ { > alias /home/$1/public_html$2; > fastcgi_pass 127.0.0.1:9000 [2]; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > 
$document_root$fastcgi_script_name; > include /etc/nginx/fastcgi_params; > } END This block into "my_cgi_config.conf" > location ~ .*~.*\.pl|cgi$ { > alias /home/$1/public_html$2; > fastcgi_pass 127.0.0.1:8999 [3]; > fastcgi_index index.cgi; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include /etc/nginx/fastcgi_params; > } END > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one BR Aleks From francis at daoine.org Sun Nov 29 10:51:17 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 29 Nov 2015 10:51:17 +0000 Subject: Reverse proxy to QNAP does not work In-Reply-To: References: Message-ID: <20151129105117.GU3351@daoine.org> On Wed, Nov 25, 2015 at 03:31:19PM -0500, no.1 wrote: Hi there, I don't have a qnap system to test with, so I do not know how the application responds. > The plan was to use subdirectories to access both > (https://example.com/nas and https://example.com/owncloud). Subdomains are > not possible. In general, you can only easily reverse proxy things at different levels in the /url hierarchy, if the upstream application is written to allow it. (Basically, this means that local links in the application should not start with / and should not start with http:// or https://.) There are exceptions. If these nas and owncloud applications fit the "easy" pattern, then it should be straightforward. If not, then you can choose to battle it; or you can choose to change your "no subdomain" rule. Or you can see if you can reconfigure the nas and owncloud applications to be at different points in the hierarchy. > location / { > proxy_pass http://qnap:8080; > proxy_set_header X-Real-IP $remote_addr; > } See http://nginx.org/r/proxy_pass for what request is passed to qnap, when a request comes in to nginx. > But as soon as I try that a rewrite Why do you use a rewrite? I would expect that something like location ^~ /nas/ { proxy_pass http://qnap:8080/; proxy_set_header X-Real-IP $remote_addr; } (note the extra / on the proxy_pass line) should make the correct initial request to qnap. There may well be responses that lead to further requests that do not do what you want; but that's usually what happens when reverse proxying, and they can be addressed when the details are available. If it does not Just Work, the next easiest thing (I believe) is to try to configure the qnap service so that it believes that its base url is /nas/ and not /, on the local server. Good luck with it, f -- Francis Daly francis at daoine.org From ytlec2014 at gmail.com Sun Nov 29 11:02:31 2015 From: ytlec2014 at gmail.com (Smart Goldman) Date: Sun, 29 Nov 2015 20:02:31 +0900 Subject: PHP and CGI on UserDir In-Reply-To: <1ce1e947290cde83c8b25d3ca5299369@none.at> References: <1ce1e947290cde83c8b25d3ca5299369@none.at> Message-ID: Hi, thank you for great help, Aleksandar Lazic. I tried it. PHP script shows me "File not found." and outputs the following log: 2015/11/29 05:50:15 [error] 5048#0: *6 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 119.105.136.26, server: localhost, request: "GET /~user/index.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "host.domain.com" - I do not know how to fix it... CGI script shows me "Error: No such CGI app - /home//public_html/~user/index.cgi may not exist or is not executable by this process." and outputs nothing to error.log. - /home//public_html/~user/... I think this path is wrong and I tried to fix this path but I could not. 
/home/user/public_html/ should be correct path.. 2015-11-29 18:41 GMT+09:00 Aleksandar Lazic : > Hi Smart Goldman. > > Am 29-11-2015 09:04, schrieb Smart Goldman: > >> Hello. I am new here. >> >> I try to enable PHP and CGI(Perl) on UserDir (/home/user/public_html) >> with nginx. >> But on my Chrome, PHP script is downloaded and CGI script shows me "404 >> Not Found" page. >> Here's my configurations. What is wrong with my configurations? >> > > Try to use nested locations. > > http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > OS: Linux 3.10.0 / CentOS 7 64bit >> nginx version: 1.8.0 >> >> ---------------------------------------------- >> /etc/nginx/conf.d/default.conf: >> server { >> listen 80; >> server_name localhost; >> access_log /var/log/nginx/access.log; >> error_log /var/log/nginx/error.log; >> >> #charset koi8-r; >> #access_log /var/log/nginx/log/host.access.log main; >> >> location / { >> root /var/www/html; >> index index.html index.htm; >> } >> >> location ~ ^/~(.+?)(/.*)?$ { >> alias /home/$1/public_html$2; >> index index.html index.htm; >> autoindex on; >> > > include my_php_config.conf; > > include my_cgi_config.conf; > > > } >> >> #error_page 404 /404.html; >> >> # redirect server error pages to the static page /50x.html >> # >> error_page 500 502 503 504 /50x.html; >> location = /50x.html { >> root /var/www/html; >> } >> >> # proxy the PHP scripts to Apache listening on 127.0.0.1:80 [1] >> # >> #location ~ \.php$ { >> # proxy_pass http://127.0.0.1; >> #} >> >> # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 >> [2] >> # >> #location ~ \.php$ { >> # root html; >> # fastcgi_pass 127.0.0.1:9000 [2]; >> # fastcgi_index index.php; >> # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; >> # include fastcgi_params; >> #} >> >> location ~ (^~)*\.php$ { >> root /var/www/html; >> fastcgi_pass 127.0.0.1:9000 [2]; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME >> $document_root$fastcgi_script_name; >> include /etc/nginx/fastcgi_params; >> } >> location ~ (^~)*\.pl|cgi$ { >> root /var/www/html; >> fastcgi_pass 127.0.0.1:8999 [3]; >> fastcgi_index index.cgi; >> fastcgi_param SCRIPT_FILENAME >> $document_root$fastcgi_script_name; >> include /etc/nginx/fastcgi_params; >> } >> > > This block into "my_php_config.conf" > > location ~ .*~.*\.php$ { >> alias /home/$1/public_html$2; >> fastcgi_pass 127.0.0.1:9000 [2]; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME >> $document_root$fastcgi_script_name; >> include /etc/nginx/fastcgi_params; >> } >> > END > > This block into "my_cgi_config.conf" > > location ~ .*~.*\.pl|cgi$ { >> alias /home/$1/public_html$2; >> fastcgi_pass 127.0.0.1:8999 [3]; >> fastcgi_index index.cgi; >> fastcgi_param SCRIPT_FILENAME >> $document_root$fastcgi_script_name; >> include /etc/nginx/fastcgi_params; >> } >> > > END > > # deny access to .htaccess files, if Apache's document root >> # concurs with nginx's one >> > > BR Aleks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Nov 29 11:10:19 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 29 Nov 2015 11:10:19 +0000 Subject: PHP and CGI on UserDir In-Reply-To: References: Message-ID: <20151129111019.GV3351@daoine.org> On Sun, Nov 29, 2015 at 05:04:50PM +0900, Smart Goldman wrote: Hi there, > I try to enable PHP and CGI(Perl) on UserDir (/home/user/public_html) with > nginx. > But on my Chrome, PHP script is downloaded and CGI script shows me "404 Not > Found" page. 
From francis at daoine.org  Sun Nov 29 11:10:19 2015
From: francis at daoine.org (Francis Daly)
Date: Sun, 29 Nov 2015 11:10:19 +0000
Subject: PHP and CGI on UserDir
In-Reply-To: 
References: 
Message-ID: <20151129111019.GV3351@daoine.org>

On Sun, Nov 29, 2015 at 05:04:50PM +0900, Smart Goldman wrote:

Hi there,

> I try to enable PHP and CGI(Perl) on UserDir (/home/user/public_html) with
> nginx.
> But on my Chrome, PHP script is downloaded and CGI script shows me "404 Not
> Found" page.

In nginx, one request is handled in one location. http://nginx.org/r/location
describes how the one location is chosen for a particular request.

You have:

> location / {
> location ~ ^/~(.+?)(/.*)?$ {
> location = /50x.html {
> location ~ (^~)*\.php$ {
> location ~ (^~)*\.pl|cgi$ {
> location ~ .*~.*\.php$ {
> location ~ .*~.*\.pl|cgi$ {

According to the description, the requests /~user/index.cgi and
/~user/index.php are both handled in the second location there, which says:

> location ~ ^/~(.+?)(/.*)?$ {
>     alias /home/$1/public_html$2;
>     index index.html index.htm;
>     autoindex on;

which says "serve the file /home/user/public_html/index.cgi (or index.php)
from the filesystem, with no further processing". And that is what you
see -- one file does not exist, so you get 404; the other file does exist,
so you get it.

To make things work, you will need to arrange your location{} blocks so
that the one that you want nginx to use to process a request is the one
that nginx does choose to process a request. And then make sure that you
know what mapping you want nginx to use: *this* request should be handled
by processing *this* file through *that* fastcgi server (or whatever is
appropriate).

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org
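To make that mapping concrete, here is one possible arrangement as a minimal, untested sketch. It keeps the userdir location from the original config, moves the PHP and CGI handling into nested locations so they only apply to /~user/... requests, and builds SCRIPT_FILENAME from named captures instead of $document_root$fastcgi_script_name (which does not resolve to the aliased path). The backend addresses 127.0.0.1:9000 (php-fpm) and 127.0.0.1:8999 (a Perl FastCGI wrapper) are taken from the thread; everything else is an assumption.

    location ~ ^/~(?<ud_user>.+?)(?<ud_path>/.*)?$ {
        alias /home/$ud_user/public_html$ud_path;
        index index.html index.htm;
        autoindex on;

        # named captures are used because the numbered $1/$2 captures are
        # re-populated when a nested regex location matches
        location ~ \.php$ {
            fastcgi_pass  127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /home/$ud_user/public_html$ud_path;
            include       /etc/nginx/fastcgi_params;
        }

        location ~ \.(pl|cgi)$ {
            fastcgi_pass  127.0.0.1:8999;
            fastcgi_index index.cgi;
            fastcgi_param SCRIPT_FILENAME /home/$ud_user/public_html$ud_path;
            include       /etc/nginx/fastcgi_params;
        }
    }

Note also that the original pattern \.pl|cgi$ groups as (\.pl)|(cgi$); \.(pl|cgi)$ is probably what was intended.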
From nginx-forum at nginx.us  Sun Nov 29 12:28:38 2015
From: nginx-forum at nginx.us (meirhazon)
Date: Sun, 29 Nov 2015 07:28:38 -0500
Subject: SCTP
Message-ID: <0b2ac1cd3d75a13cdc0e5c3e058c7db0.NginxMailingListEnglish@forum.nginx.org>

Hello,

Can I load balance SCTP traffic by using nginx?

Thanks so much,
Meir

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263148,263148#msg-263148

From nginx-forum at nginx.us  Sun Nov 29 13:09:41 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Sun, 29 Nov 2015 08:09:41 -0500
Subject: SCTP
In-Reply-To: <0b2ac1cd3d75a13cdc0e5c3e058c7db0.NginxMailingListEnglish@forum.nginx.org>
References: <0b2ac1cd3d75a13cdc0e5c3e058c7db0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

meirhazon Wrote:
-------------------------------------------------------
> Hello,
>
> Can I load balance SCTP traffic by using nginx?

Stream {} http://nginx.org/en/docs/stream/ngx_stream_core_module.html should be capable.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263148,263149#msg-263149

From cai.bo at h3c.com  Mon Nov 30 00:59:40 2015
From: cai.bo at h3c.com (Caibo)
Date: Mon, 30 Nov 2015 00:59:40 +0000
Subject: Hello
Message-ID: <201511300859061956920@h3c.com>

Anyone here?

________________________________
caibo 11642 (RD)
-------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from H3C, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Mon Nov 30 13:12:10 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 30 Nov 2015 16:12:10 +0300
Subject: Basic auth is slow
In-Reply-To: 
References: 
Message-ID: <20151130131210.GF74233@mdounin.ru>

Hello!

On Sat, Nov 28, 2015 at 06:18:54PM +0100, Joó Ádám wrote:

> Hi,
>
> I just noticed that enabling basic authentication adds between 100 and
> 150 ms to my otherwise 30-40 ms page load time. Is this known
> behaviour? Is this somehow inherent or a design / implementation
> mistake?

Basic authentication checks the user password on each request. Depending
on the password hash used for a particular user in the user file, this may
take significant time, as password hashes are designed to be CPU-intensive
to prevent password recovery attacks. Some additional information can be
found here:

https://en.wikipedia.org/wiki/Crypt_(C)

Depending on your particular setup and possible risks, you may consider
using something less CPU-intensive as your password hash function if a
hash calculation takes 100ms. All crypt(3) schemes supported by your
system are understood by nginx, as well as some additional schemes for
portability and debugging. See here for more details:

http://nginx.org/r/auth_basic_user_file

-- 
Maxim Dounin
http://nginx.org/
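As background to the point above, a minimal sketch of where the hash scheme enters the picture. The paths, the user names and the rounds value are placeholder assumptions, and the hash bodies are not real hashes.

    # nginx side: nothing here controls the cost, it only points at the file
    location /protected/ {
        auth_basic           "restricted";
        auth_basic_user_file /etc/nginx/htpasswd;
    }

    # /etc/nginx/htpasswd: one "user:hash" pair per line; the prefix of the
    # hash selects the scheme, and the scheme sets the per-request CPU cost.
    #
    # Apache MD5 variant (apr1), cheap to verify; an entry like this can be
    # generated with "openssl passwd -apr1":
    #     alice:$apr1$<salt>$<hash>
    #
    # SHA-512 crypt with a high rounds= value (where the system crypt(3)
    # supports it) is deliberately slow, and every request pays that price:
    #     bob:$6$rounds=100000$<salt>$<hash>

Switching a user from the second form to the first is what "something less CPU-intensive" amounts to in practice, at the cost of a weaker hash.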
From nginx-forum at nginx.us  Mon Nov 30 13:52:30 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 30 Nov 2015 08:52:30 -0500
Subject: nginx and oracle weblogic server
In-Reply-To: 
References: 
Message-ID: <6fee993a93365197b7da64b405034036.NginxMailingListEnglish@forum.nginx.org>

Garcia Wrote:
-------------------------------------------------------
> Hi,
> Can Nginx work with oracle weblogic server properly?

As a proxy nginx can easily handle weblogic servers; interfacing (API) nginx with Oracle can be done with Lua.

> Does Oracle have support for Nginx?

It depends what kind of support you are looking for; ask Oracle to start with.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263159,263163#msg-263163

From mdounin at mdounin.ru  Mon Nov 30 15:00:48 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 30 Nov 2015 18:00:48 +0300
Subject: 502 errors and request_time
In-Reply-To: 
References: 
Message-ID: <20151130150048.GI74233@mdounin.ru>

Hello!

On Sat, Nov 28, 2015 at 03:02:33PM +0800, Shi wrote:

> Hi:
>
> I've found 2 kinds of 502 errors in my server:
>
> connection reset by peer, it happens as request_time reaches 10s.
> connection timeout, it happens as request_time reaches 3s.
>
> Server is:
>
> centos, kernel(2.6.32)
> php5.2 php-fpm
> nginx 1.6.3
>
> access log outputs:
> [28/Nov/2015:14:41:08 +0800] "GET /ben2.php HTTP/1.1" 502 172 "-" "Apache-HttpClient/4.2.6 (java 1.5)" "-" 3.000
> [28/Nov/2015:14:41:11 +0800] "GET /ben2.php HTTP/1.1" 502 172 "-" "Apache-HttpClient/4.2.6 (java 1.5)" "-" 10.000
>
> error logs:
> 2015/11/28 14:41:11 [error] 12981#0: *798323 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: xx.xx.xx.xx, server: xx.xx.xx, request: "GET /ben2.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xx.xx.xx"
> 2015/11/28 14:41:08 [error] 12981#0: *798215 connect() failed (110: Connection timed out) while connecting to upstream, client: xx.xx.xx.xx, server: xx.xx.xx, request: "GET /ben2.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xx.xx.xx"
>
> I've tried "request_terminate_timeout, fastcgi_connect_timeout, fastcgi_read/write_timeout", but no help.
>
> What can I do next step?

The "connection reset by peer" means that the connection was terminated by
your backend, not by nginx. There isn't much you can do on the nginx side.
Consider checking the php-fpm logs instead; maybe you are hitting some
limit like the execution time limit.

The "connection timed out" means that nginx wasn't able to connect to the
backend in time. fastcgi_connect_timeout is something you can tune, though
the default is 60s, and it should be big enough for normal use. Again,
consider looking into your backend to find out why connecting takes so
long - likely it's overloaded and can't process connection requests in time.

-- 
Maxim Dounin
http://nginx.org/
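For context, these are the nginx-side knobs referred to above, shown as a sketch with illustrative values rather than recommendations; the matching limits on the php-fpm side (pm.max_children, request_terminate_timeout, listen.backlog in the pool configuration) are usually the more important ones in this situation.

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        include      fastcgi_params;

        fastcgi_connect_timeout 5s;   # time allowed for establishing the connection to php-fpm
        fastcgi_send_timeout    30s;  # timeout between two successive writes to php-fpm
        fastcgi_read_timeout    30s;  # timeout between two successive reads from php-fpm
    }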
From al-nginx at none.at  Mon Nov 30 16:47:16 2015
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 30 Nov 2015 17:47:16 +0100
Subject: PHP and CGI on UserDir
In-Reply-To: 
References: <1ce1e947290cde83c8b25d3ca5299369@none.at>
Message-ID: <96697b73d09200e820ba5d685f4aaba6@none.at>

Hi.

Am 29-11-2015 12:02, schrieb Smart Goldman:

> Hi, thank you for great help, Aleksandar Lazic.
> I tried it.

How does your config look now?

> PHP script shows me "File not found." and outputs the following log:
> 2015/11/29 05:50:15 [error] 5048#0: *6 FastCGI sent in stderr: "Primary
> script unknown" while reading response header from upstream, client:
> 119.105.136.26, server: localhost, request: "GET /~user/index.php
> HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000 [2]", host:
> "host.domain.com [4]"
>
> - I do not know how to fix it...
>
> CGI script shows me "Error: No such CGI app -
> /home//public_html/~user/index.cgi may not exist or is not executable by
> this process." and outputs nothing to error.log.
>
> - /home//public_html/~user/... I think this path is wrong and I tried to
> fix this path but I could not. /home/user/public_html/ should be correct
> path..

Please run the debug log to see more:

http://nginx.org/en/docs/debugging_log.html

Since I don't know whether you use the CentOS packages or the nginx.org packages, I suggest installing the packages from

http://nginx.org/en/linux_packages.html#mainline

together with nginx-debug, and running the debug instance with the settings suggested in

http://nginx.org/en/docs/debugging_log.html

BR Aleks

> 2015-11-29 18:41 GMT+09:00 Aleksandar Lazic :
>
>> Hi Smart Goldman.
>>
>> Am 29-11-2015 09:04, schrieb Smart Goldman:
>>
>>> Hello. I am new here.
>>>
>>> I try to enable PHP and CGI(Perl) on UserDir (/home/user/public_html)
>>> with nginx.
>>> But on my Chrome, PHP script is downloaded and CGI script shows me "404
>>> Not Found" page.
>>> Here's my configurations. What is wrong with my configurations?
>>
>> Try to use nested locations.
>>
>> http://nginx.org/en/docs/http/ngx_http_core_module.html#location
>>
>>> OS: Linux 3.10.0 / CentOS 7 64bit
>>> nginx version: 1.8.0
>>>
>>> ----------------------------------------------
>>> /etc/nginx/conf.d/default.conf:
>>> server {
>>>     listen 80;
>>>     server_name localhost;
>>>     access_log /var/log/nginx/access.log;
>>>     error_log /var/log/nginx/error.log;
>>>
>>>     #charset koi8-r;
>>>     #access_log /var/log/nginx/log/host.access.log main;
>>>
>>>     location / {
>>>         root /var/www/html;
>>>         index index.html index.htm;
>>>     }
>>>
>>>     location ~ ^/~(.+?)(/.*)?$ {
>>>         alias /home/$1/public_html$2;
>>>         index index.html index.htm;
>>>         autoindex on;
>>
>> include my_php_config.conf;
>> include my_cgi_config.conf;
>>
>>>     }
>>>
>>>     #error_page 404 /404.html;
>>>
>>>     # redirect server error pages to the static page /50x.html
>>>     #
>>>     error_page 500 502 503 504 /50x.html;
>>>     location = /50x.html {
>>>         root /var/www/html;
>>>     }
>>>
>>>     # proxy the PHP scripts to Apache listening on 127.0.0.1:80 [1]
>>>     #
>>>     #location ~ \.php$ {
>>>     #    proxy_pass http://127.0.0.1;
>>>     #}
>>>
>>>     # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 [2]
>>>     #
>>>     #location ~ \.php$ {
>>>     #    root html;
>>>     #    fastcgi_pass 127.0.0.1:9000 [2];
>>>     #    fastcgi_index index.php;
>>>     #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
>>>     #    include fastcgi_params;
>>>     #}
>>>
>>>     location ~ (^~)*\.php$ {
>>>         root /var/www/html;
>>>         fastcgi_pass 127.0.0.1:9000 [2];
>>>         fastcgi_index index.php;
>>>         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>>>         include /etc/nginx/fastcgi_params;
>>>     }
>>>     location ~ (^~)*\.pl|cgi$ {
>>>         root /var/www/html;
>>>         fastcgi_pass 127.0.0.1:8999 [3];
>>>         fastcgi_index index.cgi;
>>>         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>>>         include /etc/nginx/fastcgi_params;
>>>     }
>>
>> This block into "my_php_config.conf"
>>
>>> location ~ .*~.*\.php$ {
>>>     alias /home/$1/public_html$2;
>>>     fastcgi_pass 127.0.0.1:9000 [2];
>>>     fastcgi_index index.php;
>>>     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>>>     include /etc/nginx/fastcgi_params;
>>> }
>>
>> END
>>
>> This block into "my_cgi_config.conf"
>>
>>> location ~ .*~.*\.pl|cgi$ {
>>>     alias /home/$1/public_html$2;
>>>     fastcgi_pass 127.0.0.1:8999 [3];
>>>     fastcgi_index index.cgi;
>>>     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>>>     include /etc/nginx/fastcgi_params;
>>> }
>>
>> END
>>
>>> # deny access to .htaccess files, if Apache's document root
>>> # concurs with nginx's one
>>
>> BR Aleks
>
>
> Links:
> ------
> [1] http://127.0.0.1:80
> [2] http://127.0.0.1:9000
> [3] http://127.0.0.1:8999
> [4] http://host.domain.com

From adam at jooadam.hu  Mon Nov 30 22:03:05 2015
From: adam at jooadam.hu (Joó Ádám)
Date: Mon, 30 Nov 2015 23:03:05 +0100
Subject: Basic auth is slow
In-Reply-To: <20151130131210.GF74233@mdounin.ru>
References: <20151130131210.GF74233@mdounin.ru>
Message-ID: 

Wow, I just realized how stupid my question was. I wasn't considering the high iteration count I myself selected for hashing.

Thanks, Maxim!
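Following up on the debug-log suggestion earlier in this thread, a minimal sketch of what running the debug instance looks like in practice; the log path is a placeholder.

    # 1. check whether the running binary was built with debug logging:
    #        nginx -V 2>&1 | grep -o with-debug
    #    (the nginx-debug binary from the nginx.org packages has it)
    #
    # 2. raise the log level for the server block being tested; the rest of
    #    the existing server configuration stays as it is:
    server {
        listen      80;
        server_name localhost;
        error_log   /var/log/nginx/userdir-debug.log debug;
    }

The debug log then records, for each request, which location was selected and which parameters (including SCRIPT_FILENAME) were sent to the FastCGI backend, which is usually enough to explain a "Primary script unknown" error.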