From nginx-forum at nginx.us Sat Aug 1 01:57:05 2015 From: nginx-forum at nginx.us (xfeep) Date: Fri, 31 Jul 2015 21:57:05 -0400 Subject: Memory leaks when used as a shared library Message-ID: Hi, I want to build nginx into a shared library for this feature of nginx-clojure: https://github.com/nginx-clojure/nginx-clojure/issues/86 . But i found that there's memory leaks after stop server but without exit the process. Then I tried valgrind to check a simple test without any 3rd party module. $ nginx -V nginx version: nginx/1.8.0 built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) configure arguments: $ test.conf =================start of test.conf============================== daemon off; master_process off; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type text/html; access_log off; sendfile on; keepalive_timeout 65; server { listen 8081; server_name localhost; location /hello { alias /nginx-clojure-embed/test/work-dir; } } } =================end of test.conf============================== The result of valgrind is here: $ valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --leak-check=yes objs/nginx -c work-dir/test.conf -p work-dir ==10039== Memcheck, a memory error detector ==10039== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al. ==10039== Using Valgrind-3.11.0.SVN and LibVEX; rerun with -h for copyright info ==10039== Command: objs/nginx -c work-dir/test.conf -p work-dir ==10039== -==10039== ==10039== HEAP SUMMARY: ==10039== in use at exit: 441,239 bytes in 148 blocks ==10039== total heap usage: 197 allocs, 49 frees, 613,884 bytes allocated ==10039== ==10039== 48 bytes in 1 blocks are still reachable in loss record 1 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB98: ngx_alloc (ngx_alloc.c:22) ==10039== by 0x403E9B: ngx_save_argv (nginx.c:817) ==10039== by 0x40434D: main (nginx.c:316) ==10039== ==10039== 128 bytes in 1 blocks are possibly lost in loss record 2 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB98: ngx_alloc (ngx_alloc.c:22) ==10039== by 0x40CAD2: ngx_crc32_table_init (ngx_crc32.c:117) ==10039== by 0x40452C: main (nginx.c:332) ==10039== ==10039== 151 bytes in 5 blocks are still reachable in loss record 3 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB98: ngx_alloc (ngx_alloc.c:22) ==10039== by 0x403EE7: ngx_save_argv (nginx.c:825) ==10039== by 0x40434D: main (nginx.c:316) ==10039== ==10039== 2,160 bytes in 1 blocks are still reachable in loss record 4 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CACD: ngx_strerror_init (ngx_errno.c:60) ==10039== by 0x403F74: main (nginx.c:211) ==10039== ==10039== 3,030 bytes in 135 blocks are still reachable in loss record 5 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB0E: ngx_strerror_init (ngx_errno.c:69) ==10039== by 0x403F74: main (nginx.c:211) ==10039== ==10039== 3,594 bytes in 1 blocks are still reachable in loss record 6 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB98: ngx_alloc (ngx_alloc.c:22) ==10039== by 0x41F260: ngx_init_setproctitle (ngx_setproctitle.c:47) ==10039== by 0x41F3E4: ngx_os_init (ngx_posix_init.c:43) ==10039== by 0x4048F4: main (nginx.c:324) ==10039== ==10039== 6,144 bytes in 1 blocks are still reachable in loss record 7 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB98: ngx_alloc 
(ngx_alloc.c:22) ==10039== by 0x421832: ngx_epoll_init (ngx_epoll_module.c:348) ==10039== by 0x41A239: ngx_event_process_init (ngx_event.c:626) ==10039== by 0x42132C: ngx_single_process_cycle (ngx_process_cycle.c:298) ==10039== by 0x4048A7: main (nginx.c:416) ==10039== ==10039== 106,496 bytes in 1 blocks are indirectly lost in loss record 8 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB98: ngx_alloc (ngx_alloc.c:22) ==10039== by 0x41A424: ngx_event_process_init (ngx_event.c:689) ==10039== by 0x42132C: ngx_single_process_cycle (ngx_process_cycle.c:298) ==10039== by 0x4048A7: main (nginx.c:416) ==10039== ==10039== 106,496 bytes in 1 blocks are indirectly lost in loss record 9 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB98: ngx_alloc (ngx_alloc.c:22) ==10039== by 0x41A479: ngx_event_process_init (ngx_event.c:701) ==10039== by 0x42132C: ngx_single_process_cycle (ngx_process_cycle.c:298) ==10039== by 0x4048A7: main (nginx.c:416) ==10039== ==10039== 425,984 (212,992 direct, 212,992 indirect) bytes in 1 blocks are definitely lost in loss record 10 of 10 ==10039== at 0x4C2ABFD: malloc (vg_replace_malloc.c:299) ==10039== by 0x41CB98: ngx_alloc (ngx_alloc.c:22) ==10039== by 0x41A3F5: ngx_event_process_init (ngx_event.c:682) ==10039== by 0x42132C: ngx_single_process_cycle (ngx_process_cycle.c:298) ==10039== by 0x4048A7: main (nginx.c:416) ==10039== ==10039== LEAK SUMMARY: ==10039== definitely lost: 212,992 bytes in 1 blocks ==10039== indirectly lost: 212,992 bytes in 2 blocks ==10039== possibly lost: 128 bytes in 1 blocks ==10039== still reachable: 15,127 bytes in 144 blocks ==10039== suppressed: 0 bytes in 0 blocks ==10039== ==10039== For counts of detected and suppressed errors, rerun with: -v ==10039== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0) There are a lot of memory leaks. e.g. ngx_epoll_done will never be called so event_list has no chance to be freed. Thanks. Xfeep Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260692,260692#msg-260692 From nginx-forum at nginx.us Sat Aug 1 02:32:20 2015 From: nginx-forum at nginx.us (xfeep) Date: Fri, 31 Jul 2015 22:32:20 -0400 Subject: Memory leaks when used as a shared library In-Reply-To: References: Message-ID: <5a9fec201e4e35b94f5b90b94ac676f7.NginxMailingListEnglish@forum.nginx.org> BTW, after start nginx by valgrind it won't print memory leaks report until we stop nginx by $ objs/nginx -c work-dir/test.conf -p work-dir Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260692,260693#msg-260693 From nginx-forum at nginx.us Sat Aug 1 23:32:42 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Sat, 01 Aug 2015 19:32:42 -0400 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <20150623214846.GH23844@daoine.org> References: <20150623214846.GH23844@daoine.org> Message-ID: <12cc067daf225e82ea0896dee770129d.NginxMailingListEnglish@forum.nginx.org> Hi Francis, I tried possible options suggested proxy_pass, fastcgi_pass...and was unsuccessful This query was posted in another request http://forum.nginx.org/read.php?2,260231,260232#msg-260232 and you are smart to redirect me back to this earlier request. I would like to detail more on my question pls., I have plenty of 5 different services deployed on different 5 servers. I have NGINX deployed on one server (1) where application 1 is deployed. Other services are on different servers (2,3,4,5). Now, whenever there is an web request...via NGINX... 
the configuration is able to read the static data of service 1...as it is deployed on the same server where NGINX is installed. Issue arises, when trying to access services deployed on other servers (2,3,4,5). request is reached via upstream servers configured. Unfortunately, the static content (2,3,4,5) servers are not getting fetched/loaded. Rather, it is trying to find the data on NGINX installed machine only. error.log: 2015/08/02 01:11:09 [error] 30243#0: *72 open() "/var/gvp/Nginx/nginx-1.8.0/html/pvp/img/rvi.jpg" failed (2: No such file or directory) nginx.conf details below server { listen 80; server_name ser-01; location /pvp/{ proxy_pass http://ser02; } location /gen/ { proxy_pass http://ser03; } location /config/ { proxy_pass http://ser04; } location /stat/ { proxy_pass http://ser05; } } upstream ser01{ ip_hash; server 10.177.54.92:9092; server 10.177.54.93:9092; } upstream ser02 { ip_hash; server 10.176.54.92:9091; server 10.176.54.93:9091; } upstream ser03 { ip_hash; server 10.175.54.94:8181; server 10.175.54.95:8181; } upstream ser04 { ip_hash; server 10.174.54.74:8080; server 10.174.54.76:8080; } Now, please assist me with an pointing to an code snippet/examples which show server my above requirement. Your answer will decide my consideration of Nginx implementation in my project. Awaiting for your updates. Best regards, Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259786,260695#msg-260695 From nginx-forum at nginx.us Sat Aug 1 23:33:39 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Sat, 01 Aug 2015 19:33:39 -0400 Subject: remote server static content is not getting loaded In-Reply-To: <20150712214422.GU23844@daoine.org> References: <20150712214422.GU23844@daoine.org> Message-ID: <69e7a64ed18835a1ac2adc7861a845ef.NginxMailingListEnglish@forum.nginx.org> Hi Francis., I have updated my old request http://forum.nginx.org/read.php?2,259786,260695#msg-260695 Awaiting for your response pls. Best regards, Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260231,260696#msg-260696 From mdounin at mdounin.ru Sun Aug 2 02:55:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 2 Aug 2015 05:55:29 +0300 Subject: Memory leaks when used as a shared library In-Reply-To: References: Message-ID: <20150802025529.GI19190@mdounin.ru> Hello! On Fri, Jul 31, 2015 at 09:57:05PM -0400, xfeep wrote: > I want to build nginx into a shared library for this feature of > nginx-clojure: https://github.com/nginx-clojure/nginx-clojure/issues/86 . > But i found that there's memory leaks after stop server but without exit the > process. Then I tried valgrind to check a simple test without any 3rd party > module. Well, the answer is simple: nginx is not designed to be a shared library. If you want to convert it to be one, it's you who are responsible for cleaning up various global allocations. -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Sun Aug 2 03:14:25 2015 From: lists at ruby-forum.com (Lawrence Pan) Date: Sun, 02 Aug 2015 05:14:25 +0200 Subject: Improve server performance Message-ID: <230b50a648b828821766f5337c47ce52@ruby-forum.com> Hi, I am hosting a couple of wordpress sites on a very light VPS with 1vCPU, 512Mb Ram and 10Gb SSD. The performance is not very impressive. Are caching, using gzip, etc., good means to improve the performance of my VPS despite the fact that it only has one vCPU and 512 Mb of ram. Thank you -- Posted via http://www.ruby-forum.com/. 
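For reference, the replies below suggest opcode caching plus fastcgi_cache for this kind of low-memory WordPress setup. A rough illustration of the nginx side of that suggestion follows; the zone name, cache path, socket path and timings are placeholders, not a tested configuration:

    # goes in the http{} block; all names and paths below are assumptions
    fastcgi_cache_path /var/cache/nginx/wordpress levels=1:2 keys_zone=WPCACHE:16m inactive=60m;

    server {
        listen 80;
        server_name example.com;
        root /var/www/wordpress;
        index index.php;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/run/php5-fpm.sock;        # adjust to the local PHP-FPM socket
            fastcgi_cache WPCACHE;                        # serve repeat requests from the cache
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 301 302 10m;          # cache successful responses briefly
        }
    }

A cache-invalidation plugin on the WordPress side, as mentioned in the reply below, would then purge entries when content changes.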
From anoopalias01 at gmail.com Sun Aug 2 04:10:11 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sun, 2 Aug 2015 09:40:11 +0530 Subject: Improve server performance In-Reply-To: <230b50a648b828821766f5337c47ce52@ruby-forum.com> References: <230b50a648b828821766f5337c47ce52@ruby-forum.com> Message-ID: caching - will improve performance . Look for caching at the opcode level and fastcgi_cache along with the associated wordpress plugin to invalidate cache gzip - will be taxing again on your single vCPU -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Sun Aug 2 04:45:00 2015 From: lists at ruby-forum.com (Lawrence Pan) Date: Sun, 02 Aug 2015 06:45:00 +0200 Subject: Improve server performance In-Reply-To: References: <230b50a648b828821766f5337c47ce52@ruby-forum.com> Message-ID: <26baabb15028822fe0c8908410c2ab6c@ruby-forum.com> Anoop Alias wrote in post #1177127: > caching - will improve performance . Look for caching at the opcode > level > and fastcgi_cache along with the associated wordpress plugin to > invalidate > cache > > gzip - will be taxing again on your single vCPU Thank you very much. So what should I do is to enable cache and disable gzip right? -- Posted via http://www.ruby-forum.com/. From francis at daoine.org Sun Aug 2 10:05:30 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 2 Aug 2015 11:05:30 +0100 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <12cc067daf225e82ea0896dee770129d.NginxMailingListEnglish@forum.nginx.org> References: <20150623214846.GH23844@daoine.org> <12cc067daf225e82ea0896dee770129d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150802100530.GD23844@daoine.org> On Sat, Aug 01, 2015 at 07:32:42PM -0400, smsmaddy1981 wrote: Hi there, > Issue arises, when trying to access services deployed on other servers > (2,3,4,5). request is reached via upstream servers configured. > Unfortunately, the static content (2,3,4,5) servers are not getting > fetched/loaded. Rather, it is trying to find the data on NGINX installed > machine only. I don't see answers to my previous questions here. So I'll try again, with just one question: When you request http://ser-01/content/img.jpg, on which of your servers 1-5 is the file img.jpg located? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Aug 2 11:16:44 2015 From: nginx-forum at nginx.us (tunist) Date: Sun, 02 Aug 2015 07:16:44 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <3e13103f2fbe0508b4c57caebad87e19.NginxMailingListEnglish@forum.nginx.org> References: <3e13103f2fbe0508b4c57caebad87e19.NginxMailingListEnglish@forum.nginx.org> Message-ID: oh, so the solution here was to add: add_header Accept-Ranges bytes; to the site's config file. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260702#msg-260702 From lucas at slcoding.com Sun Aug 2 11:20:09 2015 From: lucas at slcoding.com (Lucas Rolff) Date: Sun, 02 Aug 2015 13:20:09 +0200 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: References: <3e13103f2fbe0508b4c57caebad87e19.NginxMailingListEnglish@forum.nginx.org> Message-ID: <55BDFCE9.80706@slcoding.com> Be aware it doesn't work either in Chrome on mac :-) > tunist > 2 Aug 2015 13:16 > oh, so the solution here was to add: add_header Accept-Ranges bytes; > to the site's config file. 
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260615,260702#msg-260702 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > tunist > 29 Jul 2015 13:14 > greetings! > > i am seeing an unexplained malfunction here with nginx when serving > videos. > flv and mp4 files have different symptoms. mp4 streams correctly when > i view > the file in firefox 39 in fedora 22, but in windows 7 (firefox 39) the > file > cannot be 'seeked' and must be played linearly. > after speaking with the coders of video.js (the player i use), it was > determined that nginx is not returning byte range data appropriately > (or at > all) - so seeking would not work. however, this does not explain why > firefox > 39 in fedora works perfectly and does not provide a solution as to how to > get nginx to serve correctly. > > the only advice i have seen is to change the value of the 'max_ranges' > directive - but doing that has made no difference. i have left it as > 'unset' > - which i understand to mean 'unlimited'. > > an example video from the server is here: > src="https://www.ureka.org/file/play/17924/censored%20on%20google%202.mp4" > > any tips welcomed! thanks > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260615,260615#msg-260615 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Aug 2 14:31:34 2015 From: nginx-forum at nginx.us (xfeep) Date: Sun, 02 Aug 2015 10:31:34 -0400 Subject: Memory leaks when used as a shared library In-Reply-To: <20150802025529.GI19190@mdounin.ru> References: <20150802025529.GI19190@mdounin.ru> Message-ID: Hi, Maxim, Thanks for your reply! ------------------------------------------------------- > Well, the answer is simple: nginx is not designed to be a shared > library. If you want to convert it to be one, it's you who are > responsible for cleaning up various global allocations. > You're right. But for module developers who use valgrind there are maybe some confusion, the built-in core modules also have "memory leaks" , despite the fact that those are not real leaks because OS will free them after nginx master/worker processes exit. Regards. Xfeep Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260692,260704#msg-260704 From nginx-forum at nginx.us Sun Aug 2 17:59:48 2015 From: nginx-forum at nginx.us (tunist) Date: Sun, 02 Aug 2015 13:59:48 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <55BDFCE9.80706@slcoding.com> References: <55BDFCE9.80706@slcoding.com> Message-ID: i thought i had solved this by adding the header for accept-ranges - since when i did that i could then seek in firefox on windows 7. however, after testing further i found that the results are inconsistent and ultimately it is still somewhat broken. i have added more details to this to a new question i have asked at serverfault: http://serverfault.com/questions/710304/why-is-partial-content-not-being-served-in-nginx-mp4 any input is still welcome! 
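For reference, one configuration often suggested for MP4 range/seek problems (not verified against the setup described above) is to serve the files through nginx's mp4 module, which answers byte-range requests and start/seek queries itself. A minimal sketch, assuming nginx was built with --with-http_mp4_module:

    location ~ \.mp4$ {
        mp4;                          # ngx_http_mp4_module handles byte ranges and ?start= seeking
        mp4_buffer_size     1m;       # initial buffer for reading the moov atom
        mp4_max_buffer_size 5m;       # upper bound if the metadata is larger
    }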
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260705#msg-260705 From nginx-forum at nginx.us Sun Aug 2 21:44:27 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Sun, 02 Aug 2015 17:44:27 -0400 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <20150802100530.GD23844@daoine.org> References: <20150802100530.GD23844@daoine.org> Message-ID: Hi Francis, Yes, the file/s are located. Verified with all servers. Best regards, Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259786,260708#msg-260708 From francis at daoine.org Sun Aug 2 22:30:43 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 2 Aug 2015 23:30:43 +0100 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: References: <20150802100530.GD23844@daoine.org> Message-ID: <20150802223043.GE23844@daoine.org> On Sun, Aug 02, 2015 at 05:44:27PM -0400, smsmaddy1981 wrote: Hi there, > Yes, the file/s are located. Verified with all servers. You make a http request for /content/img.jpg. The file /usr/local/nginx/html/content/img.jpg is on one of your five servers. You seem to report that nginx fails to serve it from the filesystem of ser01, because it is actually on one of the other servers. Which one of your servers contains the file /usr/local/nginx/html/content/img.jpg? Your answer should be one of: * ser02 * ser03 * ser04 * ser05 * none of them, because I never make a request for /content/img.jpg f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Aug 2 22:36:13 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Sun, 02 Aug 2015 18:36:13 -0400 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <20150802223043.GE23844@daoine.org> References: <20150802223043.GE23844@daoine.org> Message-ID: Answer below pls: Nginx is on "ser01" file is on "ser02" //Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259786,260710#msg-260710 From francis at daoine.org Sun Aug 2 22:46:28 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 2 Aug 2015 23:46:28 +0100 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: References: <20150802223043.GE23844@daoine.org> Message-ID: <20150802224628.GF23844@daoine.org> On Sun, Aug 02, 2015 at 06:36:13PM -0400, smsmaddy1981 wrote: > Answer below pls: > > Nginx is on "ser01" > file is on "ser02" How do you know that /content/img.jpg should be served from ser02, and not from ser01 or ser03? What is special about /content/img.jpg that says "this is on ser02"? (You can configure nginx to know what that special thing is, so that things will work as you want.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Aug 2 22:59:03 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Sun, 02 Aug 2015 18:59:03 -0400 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <20150802224628.GF23844@daoine.org> References: <20150802224628.GF23844@daoine.org> Message-ID: <29ff1d8af029b0dee58f673475e3bb42.NginxMailingListEnglish@forum.nginx.org> "/content/img.jpg" is sepcific to an application and which is deployed on "ser02". Whenenver an web request is made accessing the application (deployed on ser02)...through upstream configuration...the service responds reaching ser02...but static files (/content/img.jpg) are not rendered. The access_log shows the file is trying to look on ser01 (Nginx is installed) (You can configure nginx to know what that special thing is, so that things will work as you want.) How pls.??? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259786,260712#msg-260712 From francis at daoine.org Sun Aug 2 23:06:25 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 3 Aug 2015 00:06:25 +0100 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <29ff1d8af029b0dee58f673475e3bb42.NginxMailingListEnglish@forum.nginx.org> References: <20150802224628.GF23844@daoine.org> <29ff1d8af029b0dee58f673475e3bb42.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150802230625.GG23844@daoine.org> On Sun, Aug 02, 2015 at 06:59:03PM -0400, smsmaddy1981 wrote: > "/content/img.jpg" is sepcific to an application and which is deployed on > "ser02". I'm going to guess that you mean "every request that starts with /content/ should be handled by ser02". In that case, your configuration should be like location ^~ /content/ { proxy_pass http://ser02; } What happens when you use that? (What request do you make?; what response do you get?) > Whenenver an web request is made accessing the application (deployed on > ser02)...through upstream configuration...the service responds reaching > ser02...but static files (/content/img.jpg) are not rendered. The access_log > shows the file is trying to look on ser01 (Nginx is installed) If you still see a problem, can you list the exact request that does work as you want, and the exact request that does not work as you want? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Aug 2 23:30:05 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Sun, 02 Aug 2015 19:30:05 -0400 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <20150802230625.GG23844@daoine.org> References: <20150802230625.GG23844@daoine.org> Message-ID: <8e46e6d992af55aee1c41fbef6a814cd.NginxMailingListEnglish@forum.nginx.org> Hi Francis Thanks for your quick support, I will revert with my observations. Regards. Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259786,260714#msg-260714 From nginx-forum at nginx.us Mon Aug 3 03:54:45 2015 From: nginx-forum at nginx.us (goga) Date: Sun, 02 Aug 2015 23:54:45 -0400 Subject: Nginx proxy server Message-ID: Hi everyone. I need to create a mirror site. And I use the server Nginx. I was able to configure the server. And everything works fine. But I have a problem when trying to login on my site. If a password is good I redirect to the original site (www.example.com). If the password is bad I stay on my website. And I do not know how to fix it. I need professional help. I am using these configurations: location / { resolver 8.8.8.8 ; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Location $host; proxy_pass http://www.example.com; proxy_redirect example.com mirorr.com; proxy_cookie_domain example.com mirorr.com; proxy_cookie_path / /mirorr/; proxy_set_header Accept-Encoding ""; proxy_set_header Referer www.example.com; proxy_set_header Cookie $http_cookie; } I think that I return value Location "www.example.com" to "response header". But how do I change it, I do not know. Or I'm wrong. 
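On the Location header question above: proxy_redirect rewrites the Location (and Refresh) headers the upstream sends back, but the first argument is matched against the start of the header value, so the scheme and the www prefix may matter. An untested sketch, reusing the placeholder host names from the configuration above:

    location / {
        resolver 8.8.8.8;
        proxy_pass http://www.example.com;
        # rewrite redirects issued by the upstream so the browser stays on the mirror
        proxy_redirect http://www.example.com/ http://mirorr.com/;
        proxy_redirect http://example.com/     http://mirorr.com/;
        # keep the session cookie valid for the mirror host
        proxy_cookie_domain example.com mirorr.com;
    }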
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260716,260716#msg-260716 From nginx-forum at nginx.us Mon Aug 3 04:12:54 2015 From: nginx-forum at nginx.us (Eberx) Date: Mon, 03 Aug 2015 00:12:54 -0400 Subject: Nginx domain work without be into sites-enabled Message-ID: <796cabcfe52c0173ac2c70c7aa6e2a27.NginxMailingListEnglish@forum.nginx.org> Hello, I have many subdomains which is created into /etc/nginx/sites-enabled. For example 1. www.example.com 2. music.example.com 3. video.example.com are already enabled. and working fine. But I tried to remove music.example.com from public access. Then I did music.example.com from sites-enabled folder but I can access to music.example.com. Site is still there. How do I fix this ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260717,260717#msg-260717 From francis at daoine.org Mon Aug 3 10:35:22 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 3 Aug 2015 11:35:22 +0100 Subject: Nginx domain work without be into sites-enabled In-Reply-To: <796cabcfe52c0173ac2c70c7aa6e2a27.NginxMailingListEnglish@forum.nginx.org> References: <796cabcfe52c0173ac2c70c7aa6e2a27.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150803103522.GH23844@daoine.org> On Mon, Aug 03, 2015 at 12:12:54AM -0400, Eberx wrote: Hi there, > I have many subdomains which is created into /etc/nginx/sites-enabled. For > example Your nginx.conf presumably has "include /etc/nginx/sites-enabled/*;" so that these files are read when nginx starts. > 1. www.example.com > 2. music.example.com > 3. video.example.com > > are already enabled. and working fine. But I tried to remove > music.example.com from public access. Then I did music.example.com from > sites-enabled folder but I can access to music.example.com. Site is still > there. Did you successfully restart or reload nginx after removing the file? What response do you get for "curl -i http://music.example.com/"? What response do you want instead for "curl -i http://music.example.com/"? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Aug 3 11:41:15 2015 From: nginx-forum at nginx.us (Eberx) Date: Mon, 03 Aug 2015 07:41:15 -0400 Subject: Nginx domain work without be into sites-enabled In-Reply-To: <20150803103522.GH23844@daoine.org> References: <20150803103522.GH23844@daoine.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Mon, Aug 03, 2015 at 12:12:54AM -0400, Eberx wrote: > > Hi there, > > > I have many subdomains which is created into > /etc/nginx/sites-enabled. For > > example > > Your nginx.conf presumably has "include /etc/nginx/sites-enabled/*;" > so that these files are read when nginx starts. I added bottom of my nginx.conf file include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; > > > 1. www.example.com > > 2. music.example.com > > 3. video.example.com > > > > are already enabled. and working fine. But I tried to remove > > music.example.com from public access. Then I did music.example.com > from > > sites-enabled folder but I can access to music.example.com. Site is > still > > there. > > Did you successfully restart or reload nginx after removing the file? Yes > > What response do you get for "curl -i http://music.example.com/"? 
I get response header Server: nginx Date: Mon, 03 Aug 2015 11:39:04 GMT Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding Set-Cookie: example=lda48dtiqiaqg46paqulh9ov56; path=/; domain=.example.com Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Below this header our website html displayed. > > What response do you want instead for "curl -i > http://music.example.com/"? > I just want to go default vhost config. > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260717,260724#msg-260724 From nginx-forum at nginx.us Mon Aug 3 12:44:52 2015 From: nginx-forum at nginx.us (mynidiravichandra) Date: Mon, 03 Aug 2015 08:44:52 -0400 Subject: Problem with NGINX on MIPS processor Message-ID: <906940711eca64c94ae8e396cf993a86.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying to run nginx on a MIPS processor. In my NGINX server configuration, i have used only worker process. When i try connecting with openssl s_client application, i am able to connect only once. After this, the first connection is closed and if i try connecting again, server doesn't accept any connection. I have tried strace on the nginx server process and observed that worker process in busy in writing some data to the closed(first) connection. PS: i have tried the same experiment on x86 and didn't face any such issue. Appreciate any help/suggestions regarding this issue. Thanks Ravichandra Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260725,260725#msg-260725 From francis at daoine.org Mon Aug 3 15:15:41 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 3 Aug 2015 16:15:41 +0100 Subject: Nginx domain work without be into sites-enabled In-Reply-To: References: <20150803103522.GH23844@daoine.org> Message-ID: <20150803151541.GI23844@daoine.org> On Mon, Aug 03, 2015 at 07:41:15AM -0400, Eberx wrote: > Francis Daly Wrote: Hi there, > > What response do you get for "curl -i http://music.example.com/"? > > I get response header > > Server: nginx > Date: Mon, 03 Aug 2015 11:39:04 GMT > Content-Type: text/html; charset=utf-8 > Transfer-Encoding: chunked > Connection: keep-alive > Vary: Accept-Encoding > Set-Cookie: example=lda48dtiqiaqg46paqulh9ov56; path=/; domain=.example.com > Expires: Thu, 19 Nov 1981 08:52:00 GMT > Cache-Control: no-store, no-cache, must-revalidate, post-check=0, > pre-check=0 > Pragma: no-cache > > Below this header our website html displayed. Is "our website html" from music.example.com or from www.example.com or from something else? > > What response do you want instead for "curl -i > > http://music.example.com/"? > > I just want to go default vhost config. What is your default vhost config? If you want specific help, you will need to give specific details. How nginx chooses which server{} block to use is described at http://nginx.org/en/docs/http/request_processing.html If you can create a config file where your nginx does not do what that page describes, then you will have found an interesting bug. So: in your nginx.conf, plus any file that is "include"d in there, look at all of your server{} blocks, and in them, look at all of your "listen" and "server_name" directives. 
Then follow what that web page says, and identify which one server{} block nginx should use to handle your request for http://music.example.com/. Is that the server{} block that you want nginx to use? If not, make whatever changes are needed so that it is. Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon Aug 3 18:02:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 3 Aug 2015 21:02:06 +0300 Subject: Problem with NGINX on MIPS processor In-Reply-To: <906940711eca64c94ae8e396cf993a86.NginxMailingListEnglish@forum.nginx.org> References: <906940711eca64c94ae8e396cf993a86.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150803180205.GQ19190@mdounin.ru> Hello! On Mon, Aug 03, 2015 at 08:44:52AM -0400, mynidiravichandra wrote: > Hi, > I am trying to run nginx on a MIPS processor. In my NGINX server > configuration, i have used only worker process. When i try connecting with > openssl s_client application, i am able to connect only once. After this, > the first connection is closed and if i try connecting again, server doesn't > accept any connection. I have tried strace on the nginx server process and > observed that worker process in busy in writing some data to the > closed(first) connection. > > PS: i have tried the same experiment on x86 and didn't face any such issue. > > Appreciate any help/suggestions regarding this issue. First of all I would recommend to try plain HTTP, without SSL. This may be some SSL related problem, either in nginx or in OpenSSL itself. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Aug 4 04:03:28 2015 From: nginx-forum at nginx.us (Eberx) Date: Tue, 04 Aug 2015 00:03:28 -0400 Subject: Nginx domain work without be into sites-enabled In-Reply-To: <20150803151541.GI23844@daoine.org> References: <20150803151541.GI23844@daoine.org> Message-ID: Hello, Thank you for advice. It is my fault. I did not define any default_server in my config. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260717,260734#msg-260734 From hagaia at qwilt.com Tue Aug 4 06:47:09 2015 From: hagaia at qwilt.com (Hagai Avrahami) Date: Tue, 4 Aug 2015 09:47:09 +0300 Subject: Nginx may recieve data from Upstream Server even a request has not sent Message-ID: Hi, I am using Nginx as reverse Proxy, during testing I have noticed a situation Nginx receives response from upstream server even no request has been sent yet from Nginx side. The scenario is that the "Upstream Server" started to transfer data just after the connection has established without waiting to request from Nginx, Nginx receives all the response and does not send the Proxied request. I have looked in the code and realized Nginx allows to receive response even without request *or* maybe the AND should be OR (* (!u->request_sent || ngx_http_upstream_test_connect(c) != NGX_OK))* static void ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u) { ssize_t n; ngx_int_t rc; ngx_connection_t *c; .... * if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) { ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR); return; }* Please Advise Thanks Hagai Avrahami -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Aug 4 11:57:37 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Tue, 04 Aug 2015 07:57:37 -0400 Subject: Nginx sending notification Message-ID: <74bce6ed94d654c37e7e5c262754e38c.NginxMailingListEnglish@forum.nginx.org> Hi, I want to send a notification from Nginx to another remote server when the health of any configured upstream server changes. Is there any module available for this. I was able to get that working through "salt" , but I need something better, Any suggestions?? Regards, Santos Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260738,260738#msg-260738 From nginx-forum at nginx.us Tue Aug 4 12:37:34 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 04 Aug 2015 08:37:34 -0400 Subject: Nginx sending notification In-Reply-To: <74bce6ed94d654c37e7e5c262754e38c.NginxMailingListEnglish@forum.nginx.org> References: <74bce6ed94d654c37e7e5c262754e38c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <79016e49f1498f21857d4b6501c6e27c.NginxMailingListEnglish@forum.nginx.org> What about Lua? I could image something with Lua->fastcgi sending a GET to another server via simple tcp with status information the other nginx server can act upon. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260738,260739#msg-260739 From lordnynex at gmail.com Tue Aug 4 19:35:23 2015 From: lordnynex at gmail.com (Lord Nynex) Date: Tue, 4 Aug 2015 12:35:23 -0700 Subject: Memory leaks when used as a shared library In-Reply-To: References: <20150802025529.GI19190@mdounin.ru> Message-ID: Have you looked at https://github.com/openresty/no-pool-nginx ? On Sun, Aug 2, 2015 at 7:31 AM, xfeep wrote: > Hi, Maxim, > > Thanks for your reply! > ------------------------------------------------------- > > Well, the answer is simple: nginx is not designed to be a shared > > library. If you want to convert it to be one, it's you who are > > responsible for cleaning up various global allocations. > > > > You're right. > But for module developers who use valgrind there are maybe some confusion, > the built-in core modules also have "memory leaks" , despite the fact that > those are not real leaks because OS will free them after nginx > master/worker processes exit. > > Regards. > Xfeep > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260692,260704#msg-260704 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 4 21:57:42 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Tue, 04 Aug 2015 17:57:42 -0400 Subject: HTTP 505 error message supported by NGINX? Message-ID: <3887123f1c7a7ddf659872ca57c7b360.NginxMailingListEnglish@forum.nginx.org> Hi, does NGINX support the generation of the error message for HTTP error code 505? For example, I see "401 Authorization Required" when running nginx 1.6.2 but I don't see anything for 505. NGINX would return "505 OK" in the HTTP response. Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260754,260754#msg-260754 From reallfqq-nginx at yahoo.fr Tue Aug 4 22:05:05 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 5 Aug 2015 00:05:05 +0200 Subject: HTTP 505 error message supported by NGINX? 
In-Reply-To: <3887123f1c7a7ddf659872ca57c7b360.NginxMailingListEnglish@forum.nginx.org> References: <3887123f1c7a7ddf659872ca57c7b360.NginxMailingListEnglish@forum.nginx.org> Message-ID: There is no such HTTP 505 "OK". It means "HTTP Version Not Supported". What were you testing? How did you test it? What result did you expect? What result did you get? --- *B. R.* On Tue, Aug 4, 2015 at 11:57 PM, nginxuser100 wrote: > Hi, does NGINX support the generation of the error message for HTTP error > code 505? For example, I see > > "401 Authorization Required" when running nginx 1.6.2 > but I don't see anything for 505. NGINX would return "505 OK" in the HTTP > response. > > Thank you. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260754,260754#msg-260754 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 4 22:19:03 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Tue, 04 Aug 2015 18:19:03 -0400 Subject: HTTP 505 error message supported by NGINX? In-Reply-To: References: Message-ID: <6c89467179b3d11ea15355c7ef3d3b5d.NginxMailingListEnglish@forum.nginx.org> I have my FCGI server send "HTTP/1.1 505 Version Not Supported\r\nStatus: 505 Version Not Supported\r\n\r\n". In nginx.conf, I have: fastcgi_intercept_errors on; error_page 505 /errpage; location /errpage { try_files /version_not_supported.html =505; } If version_not_supported.html is not found, I expected nginx to display "505 HTTP Version Not Supported" on the browser page, and to return "Status Code: 505 HTTP Version Not Supported" in the HTTP response. Instead, I got a blank page, and the HTTP response shows a Status code of "505 OK". Note NGINX displays the correct error message and Status code for other error codes such as 400, 401, 403, 404, 500, etc. Just for 505, NGINX would not return the proper error message. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260754,260756#msg-260756 From tfransosi at gmail.com Tue Aug 4 22:37:41 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Tue, 4 Aug 2015 19:37:41 -0300 Subject: Nginx proxy server In-Reply-To: References: Message-ID: On Mon, Aug 3, 2015 at 12:54 AM, goga wrote: > Hi everyone. > I need to create a mirror site. And I use the server Nginx. > I was able to configure the server. And everything works fine. > But I have a problem when trying to login on my site. If a password is good > I redirect to the original site (www.example.com). If the password is bad I > stay on my website. And I do not know how to fix it. Isn't this something you can/should handle in the language side? Btw, which language are you using? PHP, Perl, Python, Ruby? -- Thiago Farina From francis at daoine.org Tue Aug 4 22:49:54 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Aug 2015 23:49:54 +0100 Subject: HTTP 505 error message supported by NGINX? In-Reply-To: <6c89467179b3d11ea15355c7ef3d3b5d.NginxMailingListEnglish@forum.nginx.org> References: <6c89467179b3d11ea15355c7ef3d3b5d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150804224954.GJ23844@daoine.org> On Tue, Aug 04, 2015 at 06:19:03PM -0400, nginxuser100 wrote: Hi there, > I have my FCGI server send "HTTP/1.1 505 Version Not Supported\r\nStatus: > 505 Version Not Supported\r\n\r\n". 
> In nginx.conf, I have: > > fastcgi_intercept_errors on; > error_page 505 /errpage; > location /errpage { > try_files /version_not_supported.html =505; > } > > If version_not_supported.html is not found, I expected nginx to display "505 > HTTP Version Not Supported" on the browser page, and to return "Status Code: > 505 HTTP Version Not Supported" in the HTTP response. Instead, I got a blank > page, and the HTTP response shows a Status code of "505 OK". I think that you can control the HTTP status code that nginx returns, and you can control the body content returned with the response; but you cannot control the "reason phrase" that nginx puts on the status line. So nginx can be told to return (for example) "HTTP/1.1 505 " with the body content that you choose. If that is good enough for what you want, you won't need to go beyond nginx.conf to achieve it. (To do that, you could replace "=505" with "@my505", and then in "location @my505" do "return 505 my_body_content". I'm sure that other ways of achieving the same thing exist too.) What the browser chooses to do with the 505 response is its business -- perhaps it will show its own "505" page instead of what nginx sent; or perhaps it will show exactly what nginx sent. > Note NGINX displays the correct error message and Status code for other > error codes such as 400, 401, 403, 404, 500, etc. Just for 505, NGINX would > not return the proper error message. I think that nginx returns its hard-coded "reason phrase" for each of those status codes. It happens not to have one for 505, so it returns the empty one. Or do you have a config when you can control the reason phrase for those status codes? Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Aug 4 23:25:23 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Tue, 04 Aug 2015 19:25:23 -0400 Subject: HTTP 505 error message supported by NGINX? In-Reply-To: <20150804224954.GJ23844@daoine.org> References: <20150804224954.GJ23844@daoine.org> Message-ID: <7388c398b4efb4442a426f7f1d181963.NginxMailingListEnglish@forum.nginx.org> Thank you Francis. The body content did the trick ... not as aesthetically pleasing to the eyes as the NGINX's "hard-coded reason phrase", but it is better than a blank page. I did not understand what you meant by a config to control the reason phrase. Thanks again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260754,260761#msg-260761 From reallfqq-nginx at yahoo.fr Tue Aug 4 23:32:38 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 5 Aug 2015 01:32:38 +0200 Subject: HTTP 505 error message supported by NGINX? In-Reply-To: <7388c398b4efb4442a426f7f1d181963.NginxMailingListEnglish@forum.nginx.org> References: <20150804224954.GJ23844@daoine.org> <7388c398b4efb4442a426f7f1d181963.NginxMailingListEnglish@forum.nginx.org> Message-ID: It seems that by default nginx does not handle that HTTP code. I just tried the following: HEAD / HTTP/30.00 User-Agent: lollipop Host: [EDITED] Accept: */* HTTP/1.1 200 OK Server: nginx Date: Tue, 04 Aug 2015 23:27:43 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Strict-Transport-Security: max-age=15984000 ?So if you overload the status code with 505, the reason won't change and still be 'OK'.? ?That would explain why you saw '505 OK' which is an heresy...? --- *B. R.* On Wed, Aug 5, 2015 at 1:25 AM, nginxuser100 wrote: > Thank you Francis. The body content did the trick ... 
not as aesthetically > pleasing to the eyes as the NGINX's "hard-coded reason phrase", but it is > better than a blank page. > I did not understand what you meant by a config to control the reason > phrase. > Thanks again. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260754,260761#msg-260761 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 4 23:50:33 2015 From: nginx-forum at nginx.us (xfeep) Date: Tue, 04 Aug 2015 19:50:33 -0400 Subject: Memory leaks when used as a shared library In-Reply-To: References: Message-ID: <2a3025363a79ce647baa2ba5df14f6dd.NginxMailingListEnglish@forum.nginx.org> Hi, Lord, Thank you! Lord Nynex Wrote: ------------------------------------------------------- > Have you looked at https://github.com/openresty/no-pool-nginx ? > But the issue in my case is not related to nginx's pool mechanism. It is caused by some build-in modules which won't release it allocated memory at all. e.g. ngx_event_core_module.init_process does some allocating but there 's no ngx_event_core_module.exit_process at all, so the memory it allocated will only be released by operation system process manager only when the worker process exits. it is plain that this will be reported as memory leak by valgrind. Just as Maxim said, I plan to write a new process_cycle for my shared library and record all unreleased memory and do release them at the end of process_cycle. Regards. Xfeep Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260692,260765#msg-260765 From nginx-forum at nginx.us Wed Aug 5 05:29:33 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Wed, 05 Aug 2015 01:29:33 -0400 Subject: HTTP 505 error message supported by NGINX? In-Reply-To: References: Message-ID: Thank you B.R. I wonder why 505 was not supported. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260754,260768#msg-260768 From francis at daoine.org Wed Aug 5 07:17:24 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Aug 2015 08:17:24 +0100 Subject: HTTP 505 error message supported by NGINX? In-Reply-To: <7388c398b4efb4442a426f7f1d181963.NginxMailingListEnglish@forum.nginx.org> References: <20150804224954.GJ23844@daoine.org> <7388c398b4efb4442a426f7f1d181963.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150805071724.GK23844@daoine.org> On Tue, Aug 04, 2015 at 07:25:23PM -0400, nginxuser100 wrote: Hi there, > Thank you Francis. The body content did the trick ... not as aesthetically > pleasing to the eyes as the NGINX's "hard-coded reason phrase", but it is > better than a blank page. Good that you have an acceptable answer. > I did not understand what you meant by a config to control the reason > phrase. I think that nginx does not have a way to let you control the reason phrase. I was not sure from your paragraph whether you had a system where you got nginx to return a different reason phrase, for (e.g.) fastcgi returning 404. If you had, I'd be interested to see the config. But it sounds like you don't, so that's ok. 
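Put together, the approach described earlier in this thread might look roughly like this; the @my505 name and the body text are only illustrative:

    fastcgi_intercept_errors on;
    error_page 505 /errpage;

    location /errpage {
        # fall back to a named location if the static error page is missing
        try_files /version_not_supported.html @my505;
    }

    location @my505 {
        # nginx has no built-in reason phrase for 505, so send an explanatory body instead
        return 505 "505 HTTP Version Not Supported\n";
    }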
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Aug 5 07:20:03 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Aug 2015 08:20:03 +0100 Subject: Nginx domain work without be into sites-enabled In-Reply-To: References: <20150803151541.GI23844@daoine.org> Message-ID: <20150805072003.GL23844@daoine.org> On Tue, Aug 04, 2015 at 12:03:28AM -0400, Eberx wrote: Hi there, > Thank you for advice. > > It is my fault. I did not define any default_server in my config. Good that you found a solution, and thanks for letting the list know what the fix is. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Aug 5 14:18:49 2015 From: nginx-forum at nginx.us (goga) Date: Wed, 05 Aug 2015 10:18:49 -0400 Subject: Nginx proxy server In-Reply-To: References: Message-ID: Thiago Farina Wrote: ------------------------------------------------------- > On Mon, Aug 3, 2015 at 12:54 AM, goga wrote: > Isn't this something you can/should handle in the language side? Btw, > which language are you using? PHP, Perl, Python, Ruby? > > -- > Thiago Farina > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx None. I only use the configuration file in /etc/nginx/sites-avalible/*.conf Only settings virtualhosts. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260716,260789#msg-260789 From nitinmlvya at gmail.com Wed Aug 5 14:25:36 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Wed, 05 Aug 2015 14:25:36 +0000 Subject: Configure core Python scripts into Nginx Message-ID: Hi, I want to execute python scripts into Nginx server. I don't want to any frameworks for that. Core python script, I need to use. Any help and step to follow . To do that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Wed Aug 5 15:22:28 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 5 Aug 2015 23:22:28 +0800 Subject: Configure core Python scripts into Nginx In-Reply-To: References: Message-ID: Hello! On Wed, Aug 5, 2015 at 10:25 PM, Nitin Solanki wrote: > I want to execute python scripts into Nginx server. I don't want to > any frameworks for that. Core python script, I need to use. > Any help and step to follow . To do that. > Because you're using NGINX, I'd assume you're after performance. I suggest you have a look at our ngx_lua module which supports nonblocking Lua scripting in the nginx core: https://github.com/openresty/lua-nginx-module Lua is a simple language and is pythanic in some ways :) If you insist in running Python in your apps, then you should try those fastcgi or uwsgi options instead. Just my 2 cents. Regards, -agentzh From nitinmlvya at gmail.com Thu Aug 6 06:51:27 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Thu, 06 Aug 2015 06:51:27 +0000 Subject: Configure core Python scripts into Nginx In-Reply-To: References: Message-ID: Hi Zhang, Which should I use fastcgi or uwsgi. I tried uwsgi but not succeed. Can you help to sort out my problem. Shall you please send me steps to configure python with Nginx. On Wed, Aug 5, 2015 at 8:52 PM Yichun Zhang (agentzh) wrote: > Hello! > > On Wed, Aug 5, 2015 at 10:25 PM, Nitin Solanki wrote: > > I want to execute python scripts into Nginx server. I don't > want to > > any frameworks for that. Core python script, I need to use. > > Any help and step to follow . To do that. 
> > > > Because you're using NGINX, I'd assume you're after performance. I > suggest you have a look at our ngx_lua module which supports > nonblocking Lua scripting in the nginx core: > > https://github.com/openresty/lua-nginx-module > > Lua is a simple language and is pythanic in some ways :) > > If you insist in running Python in your apps, then you should try > those fastcgi or uwsgi options instead. > > Just my 2 cents. > > Regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Aug 6 08:10:29 2015 From: nginx-forum at nginx.us (mynidiravichandra) Date: Thu, 06 Aug 2015 04:10:29 -0400 Subject: Problem with NGINX on MIPS processor In-Reply-To: <20150803180205.GQ19190@mdounin.ru> References: <20150803180205.GQ19190@mdounin.ru> Message-ID: <3aab0d1398c774c35d253b35d24f4378.NginxMailingListEnglish@forum.nginx.org> Thanks, Maxim. It was a problem in SSL code. Ravichandra Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260725,260814#msg-260814 From nitinmlvya at gmail.com Thu Aug 6 11:25:43 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Thu, 06 Aug 2015 11:25:43 +0000 Subject: Execute python files with Nginx Message-ID: Hello, How can we execute python files with Nginx? Anybody can help me to give steps by steps to follow. In Apache, it is very easy to execute python files but in python, I am trying from last 2 days and nothing happening. Any Idea how to use "uwsgi" ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Aug 6 11:49:16 2015 From: nginx-forum at nginx.us (Alt) Date: Thu, 06 Aug 2015 07:49:16 -0400 Subject: Execute python files with Nginx In-Reply-To: References: Message-ID: Hello, I've never used python with nginx, but there are some examples on how to configure everything here: http://wiki.nginx.org/Configuration#Python_via_FastCGI Best Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260817,260818#msg-260818 From nitinmlvya at gmail.com Thu Aug 6 11:53:18 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Thu, 06 Aug 2015 11:53:18 +0000 Subject: Execute python files with Nginx In-Reply-To: References: Message-ID: I tried that and getting issues. Unable to configure. I am not getting those steps. Any help you can do by explaining in steps... On Thu, Aug 6, 2015 at 5:19 PM Alt wrote: > Hello, > > I've never used python with nginx, but there are some examples on how to > configure everything here: > http://wiki.nginx.org/Configuration#Python_via_FastCGI > > Best Regards > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260817,260818#msg-260818 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Aug 6 13:16:38 2015 From: nginx-forum at nginx.us (tunist) Date: Thu, 06 Aug 2015 09:16:38 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <3e13103f2fbe0508b4c57caebad87e19.NginxMailingListEnglish@forum.nginx.org> References: <3e13103f2fbe0508b4c57caebad87e19.NginxMailingListEnglish@forum.nginx.org> Message-ID: <460c40b28555a4225385f6f4e65510b5.NginxMailingListEnglish@forum.nginx.org> i have now created question on this topic on serverfault and on stackexchange's video site - but have not found the solution to the problem. i have looked into all the answers i have received so far and none have made any difference to the fact that my server is not reliably serving video. all my MP4s are now processed to relocate the moov atom. anyone got any additional thoughts? is there a github issue i can open or some other bug tracker for nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260820#msg-260820 From agentzh at gmail.com Thu Aug 6 13:49:55 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 6 Aug 2015 21:49:55 +0800 Subject: Configure core Python scripts into Nginx In-Reply-To: References: Message-ID: Hello! On Thu, Aug 6, 2015 at 2:51 PM, Nitin Solanki wrote: > Which should I use fastcgi or uwsgi. It's generally believed that uwsgi is better. > I tried uwsgi but not > succeed. Can you help to sort out my problem. Shall you please send me steps > to configure python with Nginx. > As the maintainer of the ngx_lua module I never code any Python for web apps (for obvious reasons) :) I only use Python occasionally when I absolutely have to (like writing gdb-based debugging tools). Having said that, you'd better provide as much details about your problems as possible here such that other more knowledgeable people might have a chance to help you out. Well, just a suggestion. Regards, -agentzh From rgacote at appropriatesolutions.com Thu Aug 6 17:37:36 2015 From: rgacote at appropriatesolutions.com (Ray Cote) Date: Thu, 6 Aug 2015 13:37:36 -0400 Subject: Configure core Python scripts into Nginx In-Reply-To: References: Message-ID: We use gUnicorn for our nginx/Django deployments. Lots of good guidance on the gUnicorn site: http://gunicorn-docs.readthedocs.org/en/latest/deploy.html nginx is their deployment of choice... -Ray On Thu, Aug 6, 2015 at 9:49 AM, Yichun Zhang (agentzh) wrote: > Hello! > > On Thu, Aug 6, 2015 at 2:51 PM, Nitin Solanki wrote: > > Which should I use fastcgi or uwsgi. > > It's generally believed that uwsgi is better. > > > I tried uwsgi but not > > succeed. Can you help to sort out my problem. Shall you please send me > steps > > to configure python with Nginx. > > > > As the maintainer of the ngx_lua module I never code any Python for > web apps (for obvious reasons) :) I only use Python occasionally when > I absolutely have to (like writing gdb-based debugging tools). > > Having said that, you'd better provide as much details about your > problems as possible here such that other more knowledgeable people > might have a chance to help you out. Well, just a suggestion. > > Regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Raymond Cote, President voice: +1.603.924.6079 email: rgacote at AppropriateSolutions.com skype: ray.cote -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Aug 6 17:53:57 2015 From: nginx-forum at nginx.us (mex) Date: Thu, 06 Aug 2015 13:53:57 -0400 Subject: Configure core Python scripts into Nginx In-Reply-To: References: Message-ID: <08fca2c19e4569a3e588091cce03c6f1.NginxMailingListEnglish@forum.nginx.org> Ray Cote Wrote: ------------------------------------------------------- > We use gUnicorn for our nginx/Django deployments. > Lots of good guidance on the gUnicorn site: > http://gunicorn-docs.readthedocs.org/en/latest/deploy.html > nginx is their deployment of choice... > -Ray > gunicorn (+nginx for static content, caching, ssl-offload and waf-features) is what we use here too on a couple of installations; its rock solid and easy to use. cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260791,260824#msg-260824 From nitinmlvya at gmail.com Thu Aug 6 18:07:23 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Thu, 06 Aug 2015 18:07:23 +0000 Subject: Configure core Python scripts into Nginx In-Reply-To: <08fca2c19e4569a3e588091cce03c6f1.NginxMailingListEnglish@forum.nginx.org> References: <08fca2c19e4569a3e588091cce03c6f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi all, I am not using any python framework. Generally, I use core python and making a call from php to python using AJAX. I found different ways to configure python with Nginx but they all have python framework integration. Any configuration, you have to follow for core python scripts Please help me. On Thu, Aug 6, 2015 at 11:24 PM mex wrote: > Ray Cote Wrote: > ------------------------------------------------------- > > We use gUnicorn for our nginx/Django deployments. > > Lots of good guidance on the gUnicorn site: > > http://gunicorn-docs.readthedocs.org/en/latest/deploy.html > > nginx is their deployment of choice... > > -Ray > > > > gunicorn (+nginx for static content, caching, ssl-offload and waf-features) > is what we use here > too on a couple of installations; its rock solid and easy to use. > > > > cheers, > > mex > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260791,260824#msg-260824 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Aug 6 18:13:45 2015 From: nginx-forum at nginx.us (mex) Date: Thu, 06 Aug 2015 14:13:45 -0400 Subject: Configure core Python scripts into Nginx In-Reply-To: References: Message-ID: <2350c2748f099c9defd8d91fd7a4d5d6.NginxMailingListEnglish@forum.nginx.org> if you ask for something like mod_cgi from the apache-world, there is nothing like this; the following article might help to define requirements and find a solution: > https://www.digitalocean.com/community/tutorials/a-comparison-of-web-servers-for-python-based-web-applications Nitin Solanki Wrote: ------------------------------------------------------- > Hi all, I am not using any python framework. Generally, I use core > python > and making a call from php to python using AJAX. I found different > ways to > configure python with Nginx but they all have python framework > integration. > Any configuration, you have to follow for core python scripts Please > help > me. 
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260791,260826#msg-260826 From nitinmlvya at gmail.com Thu Aug 6 18:16:34 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Thu, 06 Aug 2015 18:16:34 +0000 Subject: Configure core Python scripts into Nginx In-Reply-To: <2350c2748f099c9defd8d91fd7a4d5d6.NginxMailingListEnglish@forum.nginx.org> References: <2350c2748f099c9defd8d91fd7a4d5d6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, Which one is better to configure with Nginx? Please can you suggest me with steps. I explored more but didn't getting anything. It will be very helpful for me. If you guide me in steps. Thanks.. On Thu, Aug 6, 2015 at 11:44 PM mex wrote: > if you ask for something like mod_cgi from the apache-world, there is > nothing like > this; the following article might help to define requirements and find a > solution: > > > > > https://www.digitalocean.com/community/tutorials/a-comparison-of-web-servers-for-python-based-web-applications > > > > Nitin Solanki Wrote: > ------------------------------------------------------- > > Hi all, I am not using any python framework. Generally, I use core > > python > > and making a call from php to python using AJAX. I found different > > ways to > > configure python with Nginx but they all have python framework > > integration. > > Any configuration, you have to follow for core python scripts Please > > help > > me. > > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260791,260826#msg-260826 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Aug 6 19:04:59 2015 From: nginx-forum at nginx.us (sudharshanr) Date: Thu, 06 Aug 2015 15:04:59 -0400 Subject: Rerouting based on content-type of request Message-ID: <24076472f324e6ead3f24dd20ba79a91.NginxMailingListEnglish@forum.nginx.org> I'm having a reverse-proxied Nginx server. I wanted to know if it is possible to redirect the request in Nginx based on the content-type of the request? Right now, I'm checking the URL for a keyword, and based on that, I redirect the request. So it is something like this: location ~*/(keyword){ proxy_pass http://127.0.0.1:6565; } But now I have another URL having the same keyword. However, the content-type of the request will be 'application/json', and I need to forward that request to port 8080. Is it possible to differentiate between these two urls? I'm using Nginx 1.6.2. Thanks. 
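One possible approach to splitting on the request Content-Type is the map module; this is only a sketch, the $backend_addr variable name is invented, and the map block has to sit at http{} level:

    # route by the request's Content-Type header
    map $http_content_type $backend_addr {
        default              127.0.0.1:6565;
        ~*application/json   127.0.0.1:8080;
    }

    server {
        ...
        location ~* /(keyword) {
            # proxy_pass with a variable; a plain IP:port needs no resolver
            proxy_pass http://$backend_addr;
        }
    }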
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260830,260830#msg-260830 From nginx-forum at nginx.us Thu Aug 6 19:47:14 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 06 Aug 2015 15:47:14 -0400 Subject: Rerouting based on content-type of request In-Reply-To: <24076472f324e6ead3f24dd20ba79a91.NginxMailingListEnglish@forum.nginx.org> References: <24076472f324e6ead3f24dd20ba79a91.NginxMailingListEnglish@forum.nginx.org> Message-ID: <022e3067bb246c46cc76f09c71853758.NginxMailingListEnglish@forum.nginx.org> Close enough: http://forum.nginx.org/read.php?2,239473,239473#msg-239473 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260830,260831#msg-260831 From mdounin at mdounin.ru Thu Aug 6 19:57:59 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Aug 2015 22:57:59 +0300 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <460c40b28555a4225385f6f4e65510b5.NginxMailingListEnglish@forum.nginx.org> References: <3e13103f2fbe0508b4c57caebad87e19.NginxMailingListEnglish@forum.nginx.org> <460c40b28555a4225385f6f4e65510b5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150806195759.GD15954@mdounin.ru> Hello! On Thu, Aug 06, 2015 at 09:16:38AM -0400, tunist wrote: > i have now created question on this topic on serverfault and on > stackexchange's video site - but have not found the solution to the problem. > i have looked into all the answers i have received so far and none have made > any difference to the fact that my server is not reliably serving video. all > my MP4s are now processed to relocate the moov atom. > anyone got any additional thoughts? is there a github issue i can open or > some other bug tracker for nginx? Bugs in nginx are to be reported via trac.nginx.org. But please note that for now nothing in this thread and/or in your serverfault question indicate there is a bug in nginx. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Aug 6 20:12:35 2015 From: nginx-forum at nginx.us (tunist) Date: Thu, 06 Aug 2015 16:12:35 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <20150806195759.GD15954@mdounin.ru> References: <20150806195759.GD15954@mdounin.ru> Message-ID: <2c29128b52816746652195cc89f1add8.NginxMailingListEnglish@forum.nginx.org> i mentioned the possibility of a bug since i have already exhausted all options presented to me via numerous channels of research and support. i will post my server/site's full config to serverfault shorly. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260833#msg-260833 From nginx-forum at nginx.us Thu Aug 6 23:26:34 2015 From: nginx-forum at nginx.us (tunist) Date: Thu, 06 Aug 2015 19:26:34 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <20150806195759.GD15954@mdounin.ru> References: <20150806195759.GD15954@mdounin.ru> Message-ID: <59bfca2b17464846052625c658d8abd7.NginxMailingListEnglish@forum.nginx.org> i have now updated the serverfault question to include the nginx config files. 
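One way to narrow down whether nginx itself or the application mishandles seeking is to serve a copy of one video straight from disk, bypassing the application entirely. A minimal test location, assuming nginx was built --with-http_mp4_module and using placeholder paths:

    location /mp4test/ {
        root /usr/local/nginx/html;   # drop a test .mp4 under html/mp4test/
        mp4;                          # nginx answers Range and ?start= requests itself
        mp4_buffer_size     1m;
        mp4_max_buffer_size 5m;
    }

If seeking works there but not through the application, the problem lies in the application's handling of Range requests rather than in nginx.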
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260835#msg-260835 From francis at daoine.org Fri Aug 7 00:18:48 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Aug 2015 01:18:48 +0100 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <59bfca2b17464846052625c658d8abd7.NginxMailingListEnglish@forum.nginx.org> References: <20150806195759.GD15954@mdounin.ru> <59bfca2b17464846052625c658d8abd7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150807001848.GP23844@daoine.org> On Thu, Aug 06, 2015 at 07:26:34PM -0400, tunist wrote: Hi there, > i have now updated the serverfault question to include the nginx config > files. It looks like the question at http://serverfault.com/questions/710304/ has been changed so most current comments now look broken. That's one downside of not using something like a mailing list where the original context can be seen, even after the event. Can I suggest you use a test server {} block, and strip that section of your nginx.conf down to only what is essential to show the problem that you are reporting? (And post the smaller config here, so that people will be able to copy-paste it into their test systems and reproduce exactly what you are reporting.) Right now, it looks to me as if your config says that a request for /file.mp4 will be handled in "location / {}", which will just serve the file /usr/local/nginx/html/file.mp4. Is that what you expect? If so, you can remove all of the other location {} blocks and have an effectively identical config for this request, which will be much easier to analyse without getting distracted. Also, you seem to be testing with "curl -I -r", and being surprised at a HTTP 200 response. nginx returns HTTP 200 to HEAD requests for files. Use something like "curl -i -r 5-10 http...file.mp4 | cat -v" to see whether it sends a HTTP 206 with suitable headers. If it doesn't, then you may have found an nginx problem. (Right now, your example url gives me HTTP 302 and indicates that PHP is involved somehow. It's hard to analyse things if they change silently.) Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 7 01:12:40 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Aug 2015 02:12:40 +0100 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <20150807001848.GP23844@daoine.org> References: <20150806195759.GD15954@mdounin.ru> <59bfca2b17464846052625c658d8abd7.NginxMailingListEnglish@forum.nginx.org> <20150807001848.GP23844@daoine.org> Message-ID: <20150807011240.GQ23844@daoine.org> On Fri, Aug 07, 2015 at 01:18:48AM +0100, Francis Daly wrote: > On Thu, Aug 06, 2015 at 07:26:34PM -0400, tunist wrote: Hi there, > Also, you seem to be testing with "curl -I -r", and being surprised at > a HTTP 200 response. nginx returns HTTP 200 to HEAD requests for files. That last sentence is incorrect. It can return 200 and it can return 206; it depends on the actual request made. For example, "Range:" and "Content-Range:" request headers can lead to different responses. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Aug 7 01:44:18 2015 From: nginx-forum at nginx.us (mojiz) Date: Thu, 06 Aug 2015 21:44:18 -0400 Subject: connection timeout on download server Message-ID: Hi I have an nginx setup on multiple download servers. 
I've setup a monitor server to check my servers at interval and when the servers seem to be at their peak I get connection timeout from my servers. I've set limit_conn 64 for each remote_ip to make sure one user can't hog the server. today when I was receiving the timeout warnings, I checked the stub status of my server and it seems the server connection is nowhere near it's limits. This is the stub status page: Server 1: Active connections: 764 server accepts handled requests 326451 326451 346395 Reading: 0 Writing: 604 Waiting: 160 Server 2: Active connections: 426 server accepts handled requests 538407 538407 576918 Reading: 0 Writing: 418 Waiting: 8 However my config files are way too high: worker_processes 1; pid /run/nginx.pid; worker_rlimit_nofile 90000; events { worker_connections 64000; multi_accept on; } http{ #aio on; turned off for linux upload_progress uploads 5m; sendfile off; output_buffers 1 1m; directio 512; tcp_nopush on; tcp_nodelay on; types_hash_max_size 2048; server_tokens off; server_names_hash_bucket_size 128; variables_hash_max_size 1024; } server{ .... # limit perip limit_rate_after 1m; limit_rate 96k; limit_conn perip 64; } nginx version: nginx/1.6.2 Linux s17 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt9-3~deb8u1 (2015-04-24) x86_64 GNU/Linux Am I missing something? Is there another connection limitation I should know of? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260839,260839#msg-260839 From nginx-forum at nginx.us Fri Aug 7 05:39:05 2015 From: nginx-forum at nginx.us (arlin) Date: Fri, 07 Aug 2015 01:39:05 -0400 Subject: try_files and proxypass Message-ID: Hi, I came across such configs and I am curios to know what @app_$dc; matches in location configs? try_files $uri @app_$dc; location @app_prod { proxy_pass http://app_prod;} location @app_qadc { proxy_pass http://app_qadc;} Thanks ar Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260841,260841#msg-260841 From francis at daoine.org Fri Aug 7 07:48:24 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Aug 2015 08:48:24 +0100 Subject: try_files and proxypass In-Reply-To: References: Message-ID: <20150807074824.GR23844@daoine.org> On Fri, Aug 07, 2015 at 01:39:05AM -0400, arlin wrote: Hi there, > I came across such configs and I am curios to know what @app_$dc; matches > in location configs? > > try_files $uri @app_$dc; Presumably somewhere else in the config, $dc is set to either prod or qadc. try_files expands variables -- for example $uri -- before testing. > location @app_prod { proxy_pass http://app_prod;} > location @app_qadc { proxy_pass http://app_qadc;} It will use whatever location @app_$dc expands to on this request. f -- Francis Daly francis at daoine.org From mfioretti at nexaima.net Fri Aug 7 10:48:48 2015 From: mfioretti at nexaima.net (M. Fioretti) Date: Fri, 07 Aug 2015 12:48:48 +0200 Subject: nginx configuration for SemanticScuttle??? Message-ID: Greetings, I have several Web applications running on a Centos 6.6 server. I am migrating them from Apache to Nginx. Apache is still running, so I'm running nginx on port 81 for now. The general setup is OK. I ALREAADY have Drupal 7 and Wordpress sites served this way. The ONLY application that does not work is SemanticScuttle (SC). The closest I've got to make it "run" is with the configuration below, turning clean urls off in the SC config file, and using a rewrite rule created by winginx, when you give it the .htaccess distributed with SC. 
with that configuration, all these URLS work: http://bookmarks.example.com:81/ http://bookmarks.example.com:81/ http://bookmarks.example.com:81/index/?page=N (N=2, 3, etc) http://bookmarks.example.com:81/populartags but all the others, don't, meaning that e.g. http://bookmarks.example.com:81/tags.php/linux redirect to http://bookmarks.example.com:81/populartags (which must be some SC fallback URL, I guess...) What next? What is happening? I know it's something stupid, but right now I could really use some kind pointer to whatever it is that I am missing, or on how to debug more effectively. TIA, Marco here is the configuration of nginx for this virtual host: server { server_name bookmarks.example.com; listen 81; root /var/www/ntml/scuttle/www/; index index.php; location / { rewrite ^/([^/.]+)/?(.*)$ /$1.php?$2 break; include fastcgi_params; fastcgi_pass unix:/tmp/phpfpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_hide_header X-Powered-By; fastcgi_index index.php; } } From nginx-forum at nginx.us Fri Aug 7 11:12:21 2015 From: nginx-forum at nginx.us (bugster86) Date: Fri, 07 Aug 2015 07:12:21 -0400 Subject: can't cache javascript file from proxy Message-ID: <6eb536bf07cb435034f0920893c7a93c.NginxMailingListEnglish@forum.nginx.org> Hi, I have a problem with cache javascript file from a proxy glassfish4.0 Server. My nginx.conf is this: ___________________________ user nginx nginx; error_log /var/log/nginx/error.log warn; pid /usr/local/nginx/logs/nginx_1.8.0.pid; worker_processes auto; worker_rlimit_nofile 1024; worker_priority -5; worker_cpu_affinity 01 10; events { worker_connections 128; multi_accept on; use epoll; #debug_connection 127.0.0.1; debug_connection 10.77.37.133; } ________________________________ And this is my specific site conf file http { include /etc/nginx/mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; keepalive_timeout 65; reset_timedout_connection on; log_not_found off; proxy_cache_path /tmp/nginx/cache keys_zone=cache1:10m; proxy_cache_valid 200 302 8h; server { listen 81; server_name r60; root /var/nginx/first/html; set $cache_key "$request_uri"; location / { index index.html index.htm; } error_page 500 502 503 504 /50x.html; error_page 404 =200 /50x.html; error_page 403 =200 /50x.html; location = /50x.html { root html; } location /site { location ~* \.(js)$ { proxy_cache cache1; proxy_cache_key $cache_key; proxy_ignore_headers "Cache-Control"; #proxy_ignore_headers "Pragma"; proxy_hide_header Cache-Control; proxy_hide_header Pragma; add_header X-Proxy-Cache $upstream_cache_status; proxy_pass http://10.77.38.166:8080; } proxy_pass http://10.77.38.166:8080; } } } ______________________________________________________ I have no problem in configuration sintax (nginx -t --> OK) With a browser, I open a web page http://r60:81/site and I see all the content of the page. This is OK but I don't view anything in /tmp/nginx/cache (no cache files ) My web request download 3 javascript file. I expect it to be cached. Why proxy_ignore_headers "Cache-Control" doesn't work properly? 
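Worth noting: Cache-Control is not the only response header that prevents nginx from caching; Expires and Set-Cookie (and, depending on version, Vary) have the same effect. A sketch of the same js location with all of them ignored in a single directive -- whether the GlassFish backend actually sends these headers is an assumption that needs checking against the real responses:

    location ~* \.(js)$ {
        proxy_cache      cache1;
        proxy_cache_key  $cache_key;
        # list every header to ignore in one directive
        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_pass http://10.77.38.166:8080;
    }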
Thanks in advace Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260847,260847#msg-260847 From nginx-forum at nginx.us Fri Aug 7 11:12:27 2015 From: nginx-forum at nginx.us (tunist) Date: Fri, 07 Aug 2015 07:12:27 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <20150807001848.GP23844@daoine.org> References: <20150807001848.GP23844@daoine.org> Message-ID: <676236b217148b837579efb967ea28e4.NginxMailingListEnglish@forum.nginx.org> yes, i removed some of the original question since it was shown to not be relevant to the issue. i appreciate that maybe it would be best to write 'EDIT:' when i edit there, but then the question might become huge.. so in the interest of balance, i just cut some parts out. "Right now, it looks to me as if your config says that a request for /file.mp4 will be handled in "location / {}", which will just serve the file /usr/local/nginx/html/file.mp4." why do you think that? there is a PHP application being served via this config that handles the routing and serving of files, including mp4s. the video files are accessible at urls that are handled via a page handler programatically. the PHP file that serves the media files outputs the appropriate headers and feeds the file to the browser as a stream. "Also, you seem to be testing with "curl -I -r", and being surprised at a HTTP 200 response. nginx returns HTTP 200 to HEAD requests for files." i have been testing with this format, as recommended here by a contributor to video.js on github (https://github.com/videojs/video.js/issues/2385): curl -sL -w "%{http_code} %{size_download} %{url_effective}\\n" 'https://www.ureka.org/file/download/17365/censored%20on%20google.mp4' -H 'Accept-Encoding: identity;q=1, *;q=0' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.134 Safari/537.36' -H 'Range: bytes=0-' -H 'Accept: */*' -H 'Referer: https://www.ureka.org/file/view/17365/me-being-covertly-censored-on-google' -H 'Cookie: Elgg=htbujjg4khfj7qr1s9nbban653; _pk_id.1.e8a2=30087d194778be65.1437418823.1.1437418823.1437418823.; _pk_ses.1.e8a2=*' -H 'Connection: keep-alive' -H 'Cache-Control: max-age=0' --compressed -o /dev/null i ran the curl command that you provided here and saw a continual stream of unreadable characters in the terminal. i am not experienced with curl in the terminal to the extent i can discern which would be the appropriate flags to use here. i have removed the video that i posted as a test now. however, this one will remain available: https://www.ureka.org/file/play/7369/nasa%20mars%20anomalies%202010.mp4 thanks for assisting! 
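Independent of any player, a quick way to see whether the server honours Range requests is to ask for a small range and look only at the status line and range-related headers. A sketch (the URL is just an example):

    curl -sv -o /dev/null -H 'Range: bytes=0-99' \
        'https://example.org/file/play/1234/video.mp4' 2>&1 \
        | grep -iE 'HTTP/|content-range|accept-ranges|content-length'

A "206 Partial Content" with a matching Content-Range means partial content is supported; a "200 OK" with the full Content-Length means the whole file is sent regardless of the requested range.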
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260848#msg-260848 From nginx-forum at nginx.us Fri Aug 7 12:20:54 2015 From: nginx-forum at nginx.us (tunist) Date: Fri, 07 Aug 2015 08:20:54 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <20150807011240.GQ23844@daoine.org> References: <20150807011240.GQ23844@daoine.org> Message-ID: oh, so it seems likely that i need to add extra logic into the PHP page that handles the video stream to support range requests - as detailed here: http://codesamplez.com/programming/php-html5-video-streaming-tutorial Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260853#msg-260853 From nginx-forum at nginx.us Fri Aug 7 12:21:07 2015 From: nginx-forum at nginx.us (tunist) Date: Fri, 07 Aug 2015 08:21:07 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <676236b217148b837579efb967ea28e4.NginxMailingListEnglish@forum.nginx.org> References: <20150807001848.GP23844@daoine.org> <676236b217148b837579efb967ea28e4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <301c5fd226f36922aa311de04151b309.NginxMailingListEnglish@forum.nginx.org> oh, so it seems likely that i need to add extra logic into the PHP page that handles the video stream to support range requests - as detailed here: http://codesamplez.com/programming/php-html5-video-streaming-tutorial Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260852#msg-260852 From nginx-forum at nginx.us Fri Aug 7 12:26:54 2015 From: nginx-forum at nginx.us (tunist) Date: Fri, 07 Aug 2015 08:26:54 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <301c5fd226f36922aa311de04151b309.NginxMailingListEnglish@forum.nginx.org> References: <20150807001848.GP23844@daoine.org> <676236b217148b837579efb967ea28e4.NginxMailingListEnglish@forum.nginx.org> <301c5fd226f36922aa311de04151b309.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3d6f64f248f8373f729574063a0c0009.NginxMailingListEnglish@forum.nginx.org> aha! yes, the cause of the problem was the lack of range handling in the PHP page i am using for streaming the files. i forgot that that is a requirement of the process! i have added the videostream class to the page and so far the streaming is working well in my tests :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260855#msg-260855 From francis at daoine.org Fri Aug 7 12:39:48 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Aug 2015 13:39:48 +0100 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <676236b217148b837579efb967ea28e4.NginxMailingListEnglish@forum.nginx.org> References: <20150807001848.GP23844@daoine.org> <676236b217148b837579efb967ea28e4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150807123948.GS23844@daoine.org> On Fri, Aug 07, 2015 at 07:12:27AM -0400, tunist wrote: Hi there, > so in the interest of balance, i just cut some parts out. Yes, that's usually a sensible thing to do. It just makes reading the content a week later, a bit odd. > "Right now, it looks to me as if your config says that a request for > /file.mp4 will be handled in "location / {}", which will just serve the > file /usr/local/nginx/html/file.mp4." > > why do you think that? 
Given the request, and the location{} blocks in the config, "location /{}" looked like the best-match one to me. That contains "try_files $uri " and then a fallback. If you are not directly using nginx to serve the file from the filesystem; or using the nginx "mp4" directive to handle the file from the filesystem; it will probably be useful to explain what exactly you are doing, in order to allow others reproduce the problem you are reporting. > there is a PHP application being served via this config that handles the > routing and serving of files, including mp4s. To me, that says that it is not nginx that is sending the video, but your php script. So your php script is the place to look for any unexpected behaviour. If you can start with "I make *this* request and I expect to get *this* response", then you will have a much easier time testing and seeing where things fail. On your test system, if you throw away all of the php stuff and just serve the file directly; or serve the file using the mp4 module does everything work as it should? If so, that points at where the problem probably is. > "Also, you seem to be testing with "curl -I -r", and being surprised at > a HTTP 200 response. nginx returns HTTP 200 to HEAD requests for files." > > i have been testing with this format, as recommended here by a contributor > to video.js on github (https://github.com/videojs/video.js/issues/2385): That looks like it is a "please send me the whole file" request (Range: bytes=0-), so I'd expect to get the whole file. When I try a request like that against an nginx just serving the file, I get back HTTP/1.1 206 Partial Content Content-Length: 4013 Content-Range: bytes 0-4012/4013 amongst the rest of the response. > i ran the curl command that you provided here and saw a continual stream of > unreadable characters in the terminal. i am not experienced with curl in the > terminal to the extent i can discern which would be the appropriate flags to > use here. I suggest "-v" for verbose, "-H" with the one or two headers that you actually care about, and then end the shell line with "2>&1 | cat -v | less", so that you can see a printable representation of the (binary) body content. All you really want to see are the headers, though. > i have removed the video that i posted as a test now. however, this one will > remain available: > https://www.ureka.org/file/play/7369/nasa%20mars%20anomalies%202010.mp4 I think that the problem is in the non-nginx complexity that has been added. Put that file in /usr/local/nginx/html/, and access is without using php. If that works, then you can choose whether to keep it that way, or to reintroduce php. > thanks for assisting! Good luck with it. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Aug 7 14:21:08 2015 From: nginx-forum at nginx.us (Alt) Date: Fri, 07 Aug 2015 10:21:08 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <3d6f64f248f8373f729574063a0c0009.NginxMailingListEnglish@forum.nginx.org> References: <20150807001848.GP23844@daoine.org> <676236b217148b837579efb967ea28e4.NginxMailingListEnglish@forum.nginx.org> <301c5fd226f36922aa311de04151b309.NginxMailingListEnglish@forum.nginx.org> <3d6f64f248f8373f729574063a0c0009.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, If I understand well, you are streaming video files from PHP? Here PHP will kill your performance and you really should avoid that and stream directly from nginx. 
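If the application still has to decide who may fetch which file, a common middle ground is X-Accel-Redirect: the PHP script only does the permission check and returns headers, and nginx sends the bytes from an internal location. A rough sketch -- the location name and storage path here are invented:

    # PHP side, after the permission check:
    #   header('X-Accel-Redirect: /protected/' . $filename);
    #   header('Content-Type: video/mp4');
    #   exit;

    location /protected/ {
        internal;             # cannot be requested by clients directly
        alias /var/media/;    # invented storage path
        mp4;                  # nginx handles Range/seeking from here on
    }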
Best Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260861#msg-260861 From mdounin at mdounin.ru Fri Aug 7 18:17:55 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 7 Aug 2015 21:17:55 +0300 Subject: can't cache javascript file from proxy In-Reply-To: <6eb536bf07cb435034f0920893c7a93c.NginxMailingListEnglish@forum.nginx.org> References: <6eb536bf07cb435034f0920893c7a93c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150807181754.GE67578@mdounin.ru> Hello! On Fri, Aug 07, 2015 at 07:12:21AM -0400, bugster86 wrote: > Hi, > > I have a problem with cache javascript file from a proxy glassfish4.0 > Server. [...] > proxy_ignore_headers "Cache-Control"; > #proxy_ignore_headers "Pragma"; [...] > With a browser, I open a web page http://r60:81/site and I see all the > content of the page. > This is OK but I don't view anything in /tmp/nginx/cache (no cache files > ) > > My web request download 3 javascript file. > I expect it to be cached. > > Why proxy_ignore_headers "Cache-Control" doesn't work properly? There are other headers in addition to Cache-Control which may prevent caching of a response by nginx. Most notably, Expires and Set-Cookie. Check if the response contains them to see if it's the case. See http://nginx.org/r/proxy_ignore_headers for a full list. -- Maxim Dounin http://nginx.org/ From Michael.Power at ELOTOUCH.com Fri Aug 7 18:30:36 2015 From: Michael.Power at ELOTOUCH.com (Michael Power) Date: Fri, 7 Aug 2015 18:30:36 +0000 Subject: Zeroconf for proxy to upstream servers Message-ID: <2E15C631-E0B1-4432-B324-8FF7FC4F7A74@elotouch.com> Does Nginx support any sort of zeroconf in its proxying to upstream servers? I would like to make the backend publish themselves via something like zeroconf. Nginx should route traffic to them when they publish themselves as online, and nginx should remove them from this list when they publish themselves as offline. Does that currently exist in nginx? Michael Power -------------- next part -------------- An HTML attachment was scrubbed... URL: From shannon at nginx.com Fri Aug 7 19:20:31 2015 From: shannon at nginx.com (Shannon Burns) Date: Fri, 7 Aug 2015 12:20:31 -0700 Subject: Execute python files with Nginx In-Reply-To: References: Message-ID: <1F17B2DC-40C9-4DB3-957E-AA9EA23C0622@nginx.com> Hi Nitin, Would you mind providing a bit more information? > On Aug 6, 2015, at 4:53 AM, Nitin Solanki wrote: > > I tried that and getting issues. Unable to configure. I am not getting those steps. Any help you can do by explaining in steps? What issues are you running into? Can you copy and paste any errors you?re receiving? Can you provide the configuration file you are using? What is the behavior you are expecting and what is the behavior you?re seeing? > > On Thu, Aug 6, 2015 at 5:19 PM Alt > wrote: > Hello, > > I've never used python with nginx, but there are some examples on how to > configure everything here: > http://wiki.nginx.org/Configuration#Python_via_FastCGI > > Best Regards > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260817,260818#msg-260818 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nitinmlvya at gmail.com Fri Aug 7 20:12:58 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Fri, 07 Aug 2015 20:12:58 +0000 Subject: Execute python files with Nginx In-Reply-To: <1F17B2DC-40C9-4DB3-957E-AA9EA23C0622@nginx.com> References: <1F17B2DC-40C9-4DB3-957E-AA9EA23C0622@nginx.com> Message-ID: Hi, Right now, I am not office.. I am from India.. Now, it is 1.42 am. It will be great. If you provide step by step from beginning. Is it possible to mail. Thanks. On Sat, Aug 8, 2015 at 12:50 AM Shannon Burns wrote: > Hi Nitin, > > Would you mind providing a bit more information? > > On Aug 6, 2015, at 4:53 AM, Nitin Solanki wrote: > > I tried that and getting issues. Unable to configure. I am not getting > those steps. Any help you can do by explaining in steps? > > > What issues are you running into? Can you copy and paste any errors you?re > receiving? > > Can you provide the configuration file you are using? > > What is the behavior you are expecting and what is the behavior you?re > seeing? > > > On Thu, Aug 6, 2015 at 5:19 PM Alt wrote: > >> Hello, >> >> I've never used python with nginx, but there are some examples on how to >> configure everything here: >> http://wiki.nginx.org/Configuration#Python_via_FastCGI >> >> Best Regards >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,260817,260818#msg-260818 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Aug 8 11:35:31 2015 From: nginx-forum at nginx.us (bugster86) Date: Sat, 08 Aug 2015 07:35:31 -0400 Subject: can't cache javascript file from proxy In-Reply-To: <6eb536bf07cb435034f0920893c7a93c.NginxMailingListEnglish@forum.nginx.org> References: <6eb536bf07cb435034f0920893c7a93c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <30a77575fafb0e1b9caadff2ba9e3341.NginxMailingListEnglish@forum.nginx.org> Thank you Maxim. I added these 2 directives in my configuration file: proxy_ignore_headers "Expires"; proxy_ignore_headers "Set-Cookie"; And now my cache folder has some file !!! [root at puppet cache]# ls 0741765da3f800afaeeca0c3a6139db8 0cbfc19bd25e83d6ffa73641383bab4a a5d938d8984466ebe2e9894c65ef49f1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260847,260880#msg-260880 From nginx-forum at nginx.us Sat Aug 8 12:43:06 2015 From: nginx-forum at nginx.us (lloydzhou) Date: Sat, 08 Aug 2015 08:43:06 -0400 Subject: is possible to using same upstream in "mail", "http" and "stream" block? Message-ID: <951e31be354f716b8b13abe014937f43.NginxMailingListEnglish@forum.nginx.org> is possible to using same upstream* module in "mail", "http" and "stream" block? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260881,260881#msg-260881 From nginx-forum at nginx.us Sat Aug 8 13:16:53 2015 From: nginx-forum at nginx.us (mf1) Date: Sat, 08 Aug 2015 09:16:53 -0400 Subject: SOLVED: nginx configuration for SemanticScuttle??? 
In-Reply-To: References: Message-ID: After an IRC session in which I assisted Semantic Scuttle developer Christian Weiske to investigate this issue, he provided the following nginx configuration. Christian says it works on Debian 8, nginx 1.6.2. I can confirm it also works on CentOS 6.6, nginx 1.0.15; the only thing I had to change was the path to the unix socket. Many thanks to Christian for his patience, and for SC in general of course! Bye, Marco (from http://p.cweiske.de/216 )

server {
    listen 80 default_server;
    root /var/www/sc/www;
    index index.php;
    server_name _;

    location ~ \.php(/.*)?$ {
        # regex to split $uri to $fastcgi_script_name and $fastcgi_path
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # Check that the PHP script exists before passing it
        try_files $fastcgi_script_name =404;
        # Bypass the fact that try_files resets $fastcgi_path_info
        # see: http://trac.nginx.org/nginx/ticket/321
        set $path_info $fastcgi_path_info;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_index index.php;
        include fastcgi.conf;
        #include snippets/fastcgi-php.conf;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260845,260882#msg-260882 From nginx-forum at nginx.us Sat Aug 8 13:25:11 2015 From: nginx-forum at nginx.us (lloydzhou) Date: Sat, 08 Aug 2015 09:25:11 -0400 Subject: is possible to using same upstream in "mail", "http" and "stream" block? In-Reply-To: <951e31be354f716b8b13abe014937f43.NginxMailingListEnglish@forum.nginx.org> References: <951e31be354f716b8b13abe014937f43.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8e43b03ded812d3ffdca864c28157299.NginxMailingListEnglish@forum.nginx.org> I mean: can the same upstream and upstream_hash modules (and so on) be shared? "http" and "mail" are just different protocols, so each could use its own protocol parser to handle the request... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260881,260883#msg-260883 From reallfqq-nginx at yahoo.fr Sat Aug 8 15:05:26 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 8 Aug 2015 17:05:26 +0200 Subject: ssl_password_file on nginx 1.8.0 Message-ID: Hello, I cannot manage to load a certificate protected with a password on nginx 1.8.0: [emerg] 2331#0: SSL_CTX_use_PrivateKey_file("/etc/ssl/private/domain.key") failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting password error:0906A068:PEM routines:PEM_do_header:bad password read error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) The file configured with ssl_password_file is plaintext, restricted to read rights for the root user only (I even tried root user + root group). Shall it be otherwise? Have I missed something? I intended to avoid deciphering my private keys by using this new capability of nginx. I also noted the following, though I don't know if it might be related to my trouble: http://mailman.nginx.org/pipermail/nginx-devel/2014-October/006104.html $ sudo nginx -v nginx version: nginx/1.8.0 $ openssl version OpenSSL 1.0.1k 8 Jan 2015 --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Aug 8 16:38:19 2015 From: nginx-forum at nginx.us (tunist) Date: Sat, 08 Aug 2015 12:38:19 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: <20150807123948.GS23844@daoine.org> References: <20150807123948.GS23844@daoine.org> Message-ID: thanks again for assisting.
i have resolved this now since i realised that the php file i use to stream the files did not support partial content ;) i thought that nginx had some kind of built in 'magical' support for that which meant that my application didn't need to handle it. however, that is not the case and once i added in a PHP class to handle the partial requests, the files now seek correctly. i am seeing some zero buffer errors in the nginx log for large files, which i haven't debugged / resolved yet - but the major issue is resolved now. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260885#msg-260885 From nginx-forum at nginx.us Sat Aug 8 23:31:32 2015 From: nginx-forum at nginx.us (tunist) Date: Sat, 08 Aug 2015 19:31:32 -0400 Subject: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) In-Reply-To: References: <20150807001848.GP23844@daoine.org> <676236b217148b837579efb967ea28e4.NginxMailingListEnglish@forum.nginx.org> <301c5fd226f36922aa311de04151b309.NginxMailingListEnglish@forum.nginx.org> <3d6f64f248f8373f729574063a0c0009.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5dbdb5b2e4aeca2c5490b25c096fe669.NginxMailingListEnglish@forum.nginx.org> i did explore that possibility for several days, but did not achieve success. part of the problem is that the videos have dynamic privacy settings applied to them and so PHP is used to decide which videos the present user can view and which ones they cannot view. i looked at using various directives in nginx to bypass this but never found a way to do it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260615,260886#msg-260886 From al-nginx at none.at Sun Aug 9 10:12:01 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 09 Aug 2015 12:12:01 +0200 Subject: Zeroconf for proxy to upstream servers In-Reply-To: <2E15C631-E0B1-4432-B324-8FF7FC4F7A74@elotouch.com> References: <2E15C631-E0B1-4432-B324-8FF7FC4F7A74@elotouch.com> Message-ID: <4be087ca101d7d992d18f5b361a50660@none.at> Hi Michael Am 07-08-2015 20:30, schrieb Michael Power: > Does Nginx support any sort of zeroconf in its proxying to upstream > servers? I would like to make the backend publish themselves via > something like zeroconf. Nginx should route traffic to them when they > publish themselves as online, and nginx should remove them from this > list when they publish themselves as offline. > > Does that currently exist in nginx? Yes. http://nginx.org/en/docs/http/ngx_http_upstream_conf_module.html > Michael Power > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rpaprocki at fearnothingproductions.net Sun Aug 9 19:38:20 2015 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sun, 9 Aug 2015 12:38:20 -0700 Subject: Zeroconf for proxy to upstream servers In-Reply-To: <4be087ca101d7d992d18f5b361a50660@none.at> References: <2E15C631-E0B1-4432-B324-8FF7FC4F7A74@elotouch.com> <4be087ca101d7d992d18f5b361a50660@none.at> Message-ID: You could also look at lua-resty-upstream-healthcheck ( https://github.com/openresty/lua-resty-upstream-healthcheck) as an alternative. It's not native Nginx per se, but it's integrated with OpenResty. On Sun, Aug 9, 2015 at 3:12 AM, Aleksandar Lazic wrote: > Hi Michael > > Am 07-08-2015 20:30, schrieb Michael Power: > >> Does Nginx support any sort of zeroconf in its proxying to upstream >> servers? I would like to make the backend publish themselves via >> something like zeroconf. 
Nginx should route traffic to them when they >> publish themselves as online, and nginx should remove them from this >> list when they publish themselves as offline. >> >> Does that currently exist in nginx? >> > > Yes. > > http://nginx.org/en/docs/http/ngx_http_upstream_conf_module.html > > Michael Power >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Aug 9 20:30:45 2015 From: nginx-forum at nginx.us (thms007) Date: Sun, 09 Aug 2015 16:30:45 -0400 Subject: NGinx as a reverse proxy and http_sub_folder module (and httpsubs) Message-ID: <7c6b7799f469a423f6fb8b71145ab851.NginxMailingListEnglish@forum.nginx.org> I am trying to reverse proxy my website and modify the content. To do so, I compiled nginx with sub_filter --with-http-sub_module as well as httpsubs . It now accepts the sub_filter directive, but it does not work somehow.(http://wiki.nginx.org/HttpSubsModule) Here is my configuration: server { listen 8080; server_name www.xxx.com; access_log /var/log/nginx/www.xxx.access.log main; error_log /var/log/nginx/www.xxx.error.log; root /usr/share/nginx/html; index index.html index.htm; ## send request back to apache1 ## location / { sub_filter '<title>test'; sub_filter_once on; proxy_pass http://www.google.fr; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } I also tried with the subs_filter directives, none of them work. Can anyone help me ? Regards, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260892,260892#msg-260892 From richard.jennings at switchconcepts.com Mon Aug 10 08:18:41 2015 From: richard.jennings at switchconcepts.com (Richard Jennings) Date: Mon, 10 Aug 2015 08:18:41 +0000 Subject: Proxy non persistent client connections to persistent upstream connections Message-ID: I would like to be able to proxy non persistent client http connections to persistent upstream connections on a Linux system, both to reduce the number of connections and the upstream latency. I have experimented with keepalive and proxy_pass. To observe performance I used log_format upstreamlog '[$time_local] $status $connection $connection_requests $request_time'; and $ lsof -Pni What I think I have found is that some of the time persistent upstream connections are reused, but very often they are not. I believe the occasions where upstream connections are reused are possible because by chance the client connection has been reused. Is there a way to make this work? 
Regards Example configuration log_format upstreamlog '[$time_local] $status $connection $connection_requests $request_time'; upstream google { server google.com:80; keepalive 10; } upstream yahoo { server yahoo.com:80 keepalive 10; } server { listen 8080; server_name 10.0.0.1; proxy_connect_timeout 500ms; proxy_send_timeout 500ms; proxy_read_timeout 500ms; send_timeout 500ms; location /status { stub_status on; } location /google { proxy_pass http://www.google.com; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Host google.com; access_log /var/log/google.log upstreamlog; } location /yahoo { proxy_pass http://www.yahoo.com; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Host yahoo.com; access_log /var/log/yahoo.log upstreamlog; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Aug 10 11:00:09 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 10 Aug 2015 14:00:09 +0300 Subject: ssl_password_file on nginx 1.8.0 In-Reply-To: References: Message-ID: <1480651.aUtfUGjm3h@vbart-workstation> On Saturday 08 August 2015 17:05:26 B.R. wrote: > Hello, > > I cannot manage to load a certificate protected wit ha password on nginx > 1.8.0: > [emerg] 2331#0: SSL_CTX_use_PrivateKey_file("/etc/ssl/private/domain.key") > failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting > password error:0906A068:PEM routines:PEM_do_header:bad password read > error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) > > The file configured with ssl_password_file is plaintext, restricted to read > rights for root user only (even tried root user + root group). > Shall it be otherwise? Have I missed something? > > ?I intended to avoid deciphering my private keys using this new capability > of nginx. > > I also noted that, dunno if it might be related to my trouble: ? > http://mailman.nginx.org/pipermail/nginx-devel/2014-October/006104.html > > $ sudo nginx -v > nginx version: nginx/1.8.0 > $ openssl version > OpenSSL 1.0.1k 8 Jan 2015 Check your password file with hex editor. wbr, Valentin V. Bartenev From zxcvbn4038 at gmail.com Mon Aug 10 22:13:53 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Mon, 10 Aug 2015 18:13:53 -0400 Subject: Odd server_name directive behavior Message-ID: I have an nginx 1.9.0 deploy and I noticed a working config where the name given to the server_name directive doesn't match the names in the Host headers or the certificate DNs. It looks like a mistake, but it works, and I don't know why! Is it possible that if there is only one server stanza that nginx doesn't bother comparing the name and just processes the request via the only stanza defined? -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Aug 10 22:21:31 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 11 Aug 2015 00:21:31 +0200 Subject: ssl_password_file on nginx 1.8.0 In-Reply-To: <1480651.aUtfUGjm3h@vbart-workstation> References: <1480651.aUtfUGjm3h@vbart-workstation> Message-ID: At first I thought the 0x0a character could be a problem, though highly improbable... then I realized that one of the server blocks using that certificate had no ssl_password_file configured. Shameful mistake created a dummy error. Sorry for bothering! Thanks for help. --- *B. R.* On Mon, Aug 10, 2015 at 1:00 PM, Valentin V. Bartenev wrote: > On Saturday 08 August 2015 17:05:26 B.R. 
wrote: > > Hello, > > > > I cannot manage to load a certificate protected wit ha password on nginx > > 1.8.0: > > [emerg] 2331#0: > SSL_CTX_use_PrivateKey_file("/etc/ssl/private/domain.key") > > failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems > getting > > password error:0906A068:PEM routines:PEM_do_header:bad password read > > error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) > > > > The file configured with ssl_password_file is plaintext, restricted to > read > > rights for root user only (even tried root user + root group). > > Shall it be otherwise? Have I missed something? > > > > ?I intended to avoid deciphering my private keys using this new > capability > > of nginx. > > > > I also noted that, dunno if it might be related to my trouble: ? > > http://mailman.nginx.org/pipermail/nginx-devel/2014-October/006104.html > > > > $ sudo nginx -v > > nginx version: nginx/1.8.0 > > $ openssl version > > OpenSSL 1.0.1k 8 Jan 2015 > > Check your password file with hex editor. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From arujohn at cisco.com Tue Aug 11 03:35:19 2015 From: arujohn at cisco.com (Arun John (arujohn)) Date: Tue, 11 Aug 2015 03:35:19 +0000 Subject: Nginx & Digest authentication Message-ID: I have a NGINX that sits in front of my application. I have digest authentication enabled for the application. I?d like a set up where when a user connects to NGINX using Digest, NGINX simply proxies this request to my application where the actual authentication happens. The authentication logic is already available in the application and I just want the NGINX to forward the headers correctly to the app. Could someone help me on how to accomplish this? Regards, Arun -------------- next part -------------- An HTML attachment was scrubbed... URL: From igal at lucee.org Tue Aug 11 06:24:00 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Mon, 10 Aug 2015 23:24:00 -0700 Subject: proxy_set_header concatenated value Message-ID: <55C99500.1060603@lucee.org> hi, I want to pass the following header to the backend server: name: X-Original-URL value: $scheme://$host$request_uri but the concatenation of the values for the value above do not seem to work. how can I do that? thanks, -- Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 11 06:53:21 2015 From: nginx-forum at nginx.us (StSch) Date: Tue, 11 Aug 2015 02:53:21 -0400 Subject: Redirect from HTTP to HTTPS does not work Message-ID: <50535fd5dd70bfa13e4474b66ae83da4.NginxMailingListEnglish@forum.nginx.org> This is my configuration: server { listen 80; server_name shell.*; return 301 https://$server_name$request_uri; } server { listen 443 ssl; server_name shell.*; location / { proxy_pass http://192.168.0.16:4200; } } I am using a Dynamic DNS Service to access NGINX running on my Raspberry Pi. The configuration perfectly works for https://shell.raspi.dyndns.com but not for http://shell.raspi.dyndns.com (message: The requested URL could not be retrieved. The system returned: Operation timed out.). Any idea what the problem could be and how to fix it? 
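One detail worth checking in the port-80 block: with "server_name shell.*;", the variable $server_name expands to the literal text "shell.*", so the redirect target becomes https://shell.*/... which no browser can follow. Using $host (the name the client actually asked for) avoids that:

    server {
        listen 80;
        server_name shell.*;
        return 301 https://$host$request_uri;
    }

A timeout, as opposed to a wrong redirect, usually means the connection on port 80 never reaches nginx at all -- for example if the router only forwards port 443 -- so it is also worth testing plain "curl -i http://..." against the box; that part is a guess, not something visible from the configuration.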
Regards, Steffen Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260913,260913#msg-260913 From vbart at nginx.com Tue Aug 11 07:27:57 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 11 Aug 2015 10:27:57 +0300 Subject: Odd server_name directive behavior In-Reply-To: References: Message-ID: <1721798.pyTpFQ4JhM@vbart-laptop> On Monday 10 August 2015 18:13:53 CJ Ess wrote: > I have an nginx 1.9.0 deploy and I noticed a working config where the name > given to the server_name directive doesn't match the names in the Host > headers or the certificate DNs. It looks like a mistake, but it works, and > I don't know why! Is it possible that if there is only one server stanza > that nginx doesn't bother comparing the name and just processes the request > via the only stanza defined? http://nginx.org/en/docs/http/server_names.html http://nginx.org/en/docs/http/request_processing.html wbr, Valentin V. Bartenev From francis at daoine.org Tue Aug 11 07:35:55 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Aug 2015 08:35:55 +0100 Subject: Redirect from HTTP to HTTPS does not work In-Reply-To: <50535fd5dd70bfa13e4474b66ae83da4.NginxMailingListEnglish@forum.nginx.org> References: <50535fd5dd70bfa13e4474b66ae83da4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150811073555.GT23844@daoine.org> On Tue, Aug 11, 2015 at 02:53:21AM -0400, StSch wrote: Hi there, > This is my configuration: > > server { > listen 80; > server_name shell.*; > return 301 https://$server_name$request_uri; > } When you request "http://shell.example.com/uri", this will redirect to "https://shell.*/uri", which is probably not what you want. Use (for example) $host instead of $server_name. Or (possibly better) use shell.raspi.dyndns.com instead of $server_name. > Any idea what the problem could be and how to fix it? Use "curl -i" to make the http request. See the response. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Aug 11 07:42:50 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Aug 2015 08:42:50 +0100 Subject: proxy_set_header concatenated value In-Reply-To: <55C99500.1060603@lucee.org> References: <55C99500.1060603@lucee.org> Message-ID: <20150811074250.GU23844@daoine.org> On Mon, Aug 10, 2015 at 11:24:00PM -0700, Igal @ Lucee.org wrote: Hi there, > I want to pass the following header to the backend server: > > name: X-Original-URL > value: $scheme://$host$request_uri > > but the concatenation of the values for the value above do not seem to work. Why do you think it does not work? > how can I do that? proxy_set_header X-Original-URL $scheme://$host$request_uri; works for me. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Aug 11 12:34:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Aug 2015 15:34:06 +0300 Subject: Proxy non persistent client connections to persistent upstream connections In-Reply-To: References: Message-ID: <20150811123406.GM67578@mdounin.ru> Hello! On Mon, Aug 10, 2015 at 08:18:41AM +0000, Richard Jennings wrote: > I would like to be able to proxy non persistent client http connections to > persistent upstream connections on a Linux system, both to reduce the > number of connections and the upstream latency. > > I have experimented with keepalive and proxy_pass. 
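Two checks that usually narrow this kind of problem down (a guess, since the full configuration is not shown): look at which certificate nginx actually returns for the name, and make sure no other server block -- for example a distribution default site with a snakeoil certificate, or another default_server on 443 -- is answering the connection, since nginx falls back to the default (or first) server for that port when the requested name matches no server_name:

    # which certificate is really handed out for this name?
    openssl s_client -connect my_fqdn:443 -servername my_fqdn </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates

    # any other 443 listeners / ssl_certificate definitions lying around?
    grep -rnE 'listen.*443|ssl_certificate' /etc/nginx/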
> > To observe performance I used log_format upstreamlog '[$time_local] $status > $connection $connection_requests $request_time'; and $ lsof -Pni > > What I think I have found is that some of the time persistent upstream > connections are reused, but very often they are not. I believe the > occasions where upstream connections are reused are possible because by > chance the client connection has been reused. > > Is there a way to make this work? In nginx, client connections and upstream connections are not related to each other and can be kept alive separately. That is, what you are trying to do is how it works by design. It doesn't work because of an error in your config: you proxy to > proxy_pass http://www.google.com; and > proxy_pass http://www.yahoo.com; but your upstream blocks are called "google" and "yahoo" respectively. You should use proxy_pass http://google; and proxy_pass http://yahoo; instead (or rename "upstream" blocks accordingly). -- Maxim Dounin http://nginx.org/ From daniel.theodoro at gmail.com Tue Aug 11 12:51:31 2015 From: daniel.theodoro at gmail.com (Daniel Theodoro) Date: Tue, 11 Aug 2015 09:51:31 -0300 Subject: Execute python files with Nginx In-Reply-To: References: <1F17B2DC-40C9-4DB3-957E-AA9EA23C0622@nginx.com> Message-ID: Hi Nitin, If you're using django you can follow these steps: https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04 Daniel Theodoro Cel: 11 99399-3364 http://www.linkedin.com/in/danieltheodoro ? RHCA - Red Hat Certified Architect ? RHCDS - Red Hat Certified Datacenter Specialist ? RHCE - Red Hat Certified Engineer ? RHCVA - Red Hat Certified Virtualization Administrator ? LPIC-3 - Senior Level Linux Certification ? Novell Certified Linux Administrator - Suse 11 ? OCA - Oracle Enterprise Linux Administrator Certified Associate On Fri, Aug 7, 2015 at 5:12 PM, Nitin Solanki wrote: > Hi, > Right now, I am not office.. I am from India.. Now, it is 1.42 > am. It will be great. If you provide step by step from beginning. Is it > possible to mail. Thanks. > > On Sat, Aug 8, 2015 at 12:50 AM Shannon Burns wrote: > >> Hi Nitin, >> >> Would you mind providing a bit more information? >> >> On Aug 6, 2015, at 4:53 AM, Nitin Solanki wrote: >> >> I tried that and getting issues. Unable to configure. I am not getting >> those steps. Any help you can do by explaining in steps? >> >> >> What issues are you running into? Can you copy and paste any errors >> you?re receiving? >> >> Can you provide the configuration file you are using? >> >> What is the behavior you are expecting and what is the behavior you?re >> seeing? 
>> >> >> On Thu, Aug 6, 2015 at 5:19 PM Alt wrote: >> >>> Hello, >>> >>> I've never used python with nginx, but there are some examples on how to >>> configure everything here: >>> http://wiki.nginx.org/Configuration#Python_via_FastCGI >>> >>> Best Regards >>> >>> Posted at Nginx Forum: >>> http://forum.nginx.org/read.php?2,260817,260818#msg-260818 >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Aug 11 13:09:35 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Aug 2015 16:09:35 +0300 Subject: Nginx & Digest authentication In-Reply-To: References: Message-ID: <20150811130935.GP67578@mdounin.ru> Hello! On Tue, Aug 11, 2015 at 03:35:19AM +0000, Arun John (arujohn) wrote: > I have a NGINX that sits in front of my application. I have > digest authentication enabled for the application. I?d like a > set up where when a user connects to NGINX using Digest, NGINX > simply proxies this request to my application where the actual > authentication happens. The authentication logic is already > available in the application and I just want the NGINX to > forward the headers correctly to the app. > > Could someone help me on how to accomplish this? Just use proxy_pass, see http://nginx.org/r/proxy_pass. -- Maxim Dounin http://nginx.org/ From arujohn at cisco.com Tue Aug 11 13:29:48 2015 From: arujohn at cisco.com (Arun John (arujohn)) Date: Tue, 11 Aug 2015 13:29:48 +0000 Subject: Nginx & Digest authentication In-Reply-To: <20150811130935.GP67578@mdounin.ru> References: <20150811130935.GP67578@mdounin.ru> Message-ID: Hi, Thanks for the suggestion. I tried using proxy_pass, but it didn?t help much. Please find my settings upstream my_backend { server 127.0.0.1:8083; } location / { proxy_pass http://my_backend; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto http; proxy_pass_request_headers on; if ($request_method = GET ) { rewrite ^/(perf-log-.+)$ /uploads/perf/$1 break; root /opt/test/files; } } Is there any issue with my configs? Regards, Arun On 8/11/15, 6:39 PM, "nginx on behalf of Maxim Dounin" wrote: >Hello! > >On Tue, Aug 11, 2015 at 03:35:19AM +0000, Arun John (arujohn) wrote: > >> I have a NGINX that sits in front of my application. I have >> digest authentication enabled for the application. I?d like a >> set up where when a user connects to NGINX using Digest, NGINX >> simply proxies this request to my application where the actual >> authentication happens. The authentication logic is already >> available in the application and I just want the NGINX to >> forward the headers correctly to the app. >> >> Could someone help me on how to accomplish this? > >Just use proxy_pass, see http://nginx.org/r/proxy_pass. 
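For illustration only, the "just proxy_pass" variant of the configuration shown earlier in this thread could look like the sketch below. The paths and the upstream name are taken from that post; splitting the file serving into its own location (so proxied URIs are never rewritten) is only a suggestion:

    location / {
        proxy_pass http://my_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # serve the perf logs directly instead of rewriting inside "location /"
    # note: unlike the original if-block, this matches any request method
    location ~ ^/perf-log- {
        root /opt/test/files/uploads/perf;   # maps /perf-log-x to the same file as before
    }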
> >-- >Maxim Dounin >http://nginx.org/ > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From me at myconan.net Tue Aug 11 13:36:43 2015 From: me at myconan.net (Edho Arief) Date: Tue, 11 Aug 2015 22:36:43 +0900 Subject: Nginx & Digest authentication In-Reply-To: References: <20150811130935.GP67578@mdounin.ru> Message-ID: On Tue, Aug 11, 2015 at 10:29 PM, Arun John (arujohn) wrote: > Hi, > > Thanks for the suggestion. I tried using proxy_pass, but it didn?t help > much. > I think it would help if you explain the problem in more detail than just "didn't help much". From arujohn at cisco.com Tue Aug 11 13:49:37 2015 From: arujohn at cisco.com (Arun John (arujohn)) Date: Tue, 11 Aug 2015 13:49:37 +0000 Subject: Nginx & Digest authentication In-Reply-To: References: <20150811130935.GP67578@mdounin.ru> Message-ID: Hi Edho, The ?get? requests were failing and nginx was not forwarding the headers required for the authentication. The result was same as before. I have shared the configs. May be I am missing some minor configurations? When I switch the nginx off and directly access the application, everything works fine and able to authenticate. Regards, Arun On 8/11/15, 7:06 PM, "nginx on behalf of Edho Arief" wrote: >On Tue, Aug 11, 2015 at 10:29 PM, Arun John (arujohn) >wrote: >> Hi, >> >> Thanks for the suggestion. I tried using proxy_pass, but it didn?t help >> much. >> > >I think it would help if you explain the problem in more detail than >just "didn't help much". > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Aug 11 17:27:23 2015 From: nginx-forum at nginx.us (Arno0x0x) Date: Tue, 11 Aug 2015 13:27:23 -0400 Subject: Nginx serving self-signed cert instead of the one defined in conf Message-ID: <94fc7a6a794bde0eaa42c88e8846f25e.NginxMailingListEnglish@forum.nginx.org> Hello, I'm facing a strange issue since I upgraded from Nginx 1.6.2 to 1.8.0. My configuration files have been kept identicals, as well as my official SSL certificates. The problem is Nginx keeps on serving a self-signed certificate (dunno where it takes it from) instead of my proper certificates that I defined in the config file. Here's the server section SSL config bits : ------------------------------------------------------------------------------------ server { listen 443 ssl; ## listen for ipv4; this line is default and implied #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 server_name my_fqdn; ssl_certificate /etc/nginx/ssl/gandi/my_fqdn.crt; ssl_certificate_key /etc/nginx/ssl/gandi/my_fqdn.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'AES256+EECDH:AES256+EDH'; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ... ------------------------------------------------------------------------------------ This configuration works fine on my other server with nginx 1.6.2. I tried to increase error log to the debug level, but I get stricly no error message when starting Nginx (I was hoping for some clue like "nginx cannot read the file / path of the defined certs .... but nothing). 
The config file checks is fine : ------------------------------------------------------------------------------------ sudo nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful ------------------------------------------------------------------------------------ Example with openssl client : ------------------------------------------------------------------------------------ openssl s_client -connect myfqdn:443 CONNECTED(00000003) depth=0 C = EU, ST = NoWhere, O = Internet Widgits Pty Ltd, CN = myfqdn verify error:num=18:self signed certificate verify return:1 depth=0 C = EU, ST = NoWhere, O = Internet Widgits Pty Ltd, CN = myfqdn verify return:1 --- Certificate chain 0 s:/C=EU/ST=NoWhere/O=Internet Widgits Pty Ltd/CN=myfqdn i:/C=EU/ST=NoWhere/O=Internet Widgits Pty Ltd/CN=myfqdn --- Server certificate etc.... ------------------------------------------------------------------------------------ I'm lost. Any help is welcomed ! Regards, Arno Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260935,260935#msg-260935 From nitinmlvya at gmail.com Tue Aug 11 17:33:35 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Tue, 11 Aug 2015 23:03:35 +0530 Subject: Execute python files with Nginx In-Reply-To: References: <1F17B2DC-40C9-4DB3-957E-AA9EA23C0622@nginx.com> Message-ID: Hi Daniel, I am not using Django. I am using core python files. Is it possible with it? On Tue, Aug 11, 2015 at 6:21 PM, Daniel Theodoro wrote: > Hi Nitin, > > If you're using django you can follow these steps: > > > https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04 > > Daniel Theodoro > Cel: 11 99399-3364 > http://www.linkedin.com/in/danieltheodoro > > ? RHCA - Red Hat Certified Architect > ? RHCDS - Red Hat Certified Datacenter Specialist > ? RHCE - Red Hat Certified Engineer > ? RHCVA - Red Hat Certified Virtualization Administrator > ? LPIC-3 - Senior Level Linux Certification > ? Novell Certified Linux Administrator - Suse 11 > ? OCA - Oracle Enterprise Linux Administrator Certified Associate > > On Fri, Aug 7, 2015 at 5:12 PM, Nitin Solanki > wrote: > >> Hi, >> Right now, I am not office.. I am from India.. Now, it is 1.42 >> am. It will be great. If you provide step by step from beginning. Is it >> possible to mail. Thanks. >> >> On Sat, Aug 8, 2015 at 12:50 AM Shannon Burns wrote: >> >>> Hi Nitin, >>> >>> Would you mind providing a bit more information? >>> >>> On Aug 6, 2015, at 4:53 AM, Nitin Solanki wrote: >>> >>> I tried that and getting issues. Unable to configure. I am not getting >>> those steps. Any help you can do by explaining in steps? >>> >>> >>> What issues are you running into? Can you copy and paste any errors >>> you?re receiving? >>> >>> Can you provide the configuration file you are using? >>> >>> What is the behavior you are expecting and what is the behavior you?re >>> seeing? 
>>> >>> >>> On Thu, Aug 6, 2015 at 5:19 PM Alt wrote: >>> >>>> Hello, >>>> >>>> I've never used python with nginx, but there are some examples on how to >>>> configure everything here: >>>> http://wiki.nginx.org/Configuration#Python_via_FastCGI >>>> >>>> Best Regards >>>> >>>> Posted at Nginx Forum: >>>> http://forum.nginx.org/read.php?2,260817,260818#msg-260818 >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Aug 11 17:43:08 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Aug 2015 20:43:08 +0300 Subject: Nginx & Digest authentication In-Reply-To: References: <20150811130935.GP67578@mdounin.ru> Message-ID: <20150811174308.GU67578@mdounin.ru> Hello! On Tue, Aug 11, 2015 at 01:49:37PM +0000, Arun John (arujohn) wrote: > The ?get? requests were failing and nginx was not forwarding the headers > required for the authentication. The result was same as before. I have > shared the configs. May be I am missing some minor configurations? Please define "the headers required for the authentication". If you think some headers are not passed, it would be good idea to look into what happens on the wire to be specific. Debug log should be enough to see detailed info about headers send/received, see here: http://nginx.org/en/docs/debugging_log.html Altrenatively, you can use tcpdump to see what actually happens on the wire. Note though, that Digest authentication can be easily screwed up by URI changes. Your configuration suggests that you are trying to change some URIs. Make sure you aren't testing with these URIs you are trying to change. Or, better yet, try with just proxy_pass and nothing more, as initially suggested. -- Maxim Dounin http://nginx.org/ From shannon at nginx.com Tue Aug 11 17:53:07 2015 From: shannon at nginx.com (Shannon Burns) Date: Tue, 11 Aug 2015 10:53:07 -0700 Subject: Execute python files with Nginx In-Reply-To: References: <1F17B2DC-40C9-4DB3-957E-AA9EA23C0622@nginx.com> Message-ID: <495FBBB9-6B6D-4B37-AD5F-A45A19A31698@nginx.com> Hi Nitin, If you are looking for a tutorial on how to get a python application up and running with NGINX check out this tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-uwsgi-and-nginx-to-serve-python-apps-on-ubuntu-14-04 For future reference, Digital Ocean has some really well written tutorials to give you a place to start. Our community is a great place to ask questions once you get stuck. The best way to get help when you are stuck is to send: *your configuration file *the error logs you?re getting *any other relevant code I find it most helpful if someone also includes what behavior they are expecting the program to perform and what they are seeing it do in reality. Hopefully this is helpful! 
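For reference, the nginx side of the uWSGI setups that those tutorials walk through usually boils down to a couple of lines; the socket address below is only an assumption for illustration, use whatever address uWSGI actually listens on:

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/myapp.sock;   # or e.g. 127.0.0.1:8000
    }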
Shannon Jr. Developer Advocate NGINX > On Aug 11, 2015, at 10:33 AM, Nitin Solanki wrote: > > Hi Daniel, > I am not using Django. I am using core python files. Is it possible with it? > > On Tue, Aug 11, 2015 at 6:21 PM, Daniel Theodoro > wrote: > Hi Nitin, > > If you're using django you can follow these steps: > > https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04 > > Daniel Theodoro > Cel: 11 99399-3364 > http://www.linkedin.com/in/danieltheodoro > > ? RHCA - Red Hat Certified Architect > ? RHCDS - Red Hat Certified Datacenter Specialist > ? RHCE - Red Hat Certified Engineer > ? RHCVA - Red Hat Certified Virtualization Administrator > ? LPIC-3 - Senior Level Linux Certification > ? Novell Certified Linux Administrator - Suse 11 > ? OCA - Oracle Enterprise Linux Administrator Certified Associate > > On Fri, Aug 7, 2015 at 5:12 PM, Nitin Solanki > wrote: > Hi, > Right now, I am not office.. I am from India.. Now, it is 1.42 am. It will be great. If you provide step by step from beginning. Is it possible to mail. Thanks. > > On Sat, Aug 8, 2015 at 12:50 AM Shannon Burns > wrote: > Hi Nitin, > > Would you mind providing a bit more information? > >> On Aug 6, 2015, at 4:53 AM, Nitin Solanki > wrote: >> >> I tried that and getting issues. Unable to configure. I am not getting those steps. Any help you can do by explaining in steps? > > What issues are you running into? Can you copy and paste any errors you?re receiving? > > Can you provide the configuration file you are using? > > What is the behavior you are expecting and what is the behavior you?re seeing? > >> >> On Thu, Aug 6, 2015 at 5:19 PM Alt > wrote: >> Hello, >> >> I've never used python with nginx, but there are some examples on how to >> configure everything here: >> http://wiki.nginx.org/Configuration#Python_via_FastCGI >> >> Best Regards >> >> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260817,260818#msg-260818 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitinmlvya at gmail.com Tue Aug 11 17:54:57 2015 From: nitinmlvya at gmail.com (Nitin Solanki) Date: Tue, 11 Aug 2015 17:54:57 +0000 Subject: Execute python files with Nginx In-Reply-To: <495FBBB9-6B6D-4B37-AD5F-A45A19A31698@nginx.com> References: <1F17B2DC-40C9-4DB3-957E-AA9EA23C0622@nginx.com> <495FBBB9-6B6D-4B37-AD5F-A45A19A31698@nginx.com> Message-ID: Thanks Shannon.. 
On Tue, Aug 11, 2015 at 11:23 PM Shannon Burns wrote: > Hi Nitin, > > If you are looking for a tutorial on how to get a python application up > and running with NGINX check out this tutorial: > > > https://www.digitalocean.com/community/tutorials/how-to-set-up-uwsgi-and-nginx-to-serve-python-apps-on-ubuntu-14-04 > > For future reference, Digital Ocean has some really well written tutorials > to give you a place to start. > > Our community is a great place to ask questions once you get stuck. The > best way to get help when you are stuck is to send: > > *your configuration file > *the error logs you?re getting > *any other relevant code > > I find it most helpful if someone also includes *what behavior they are > expecting* the program to perform and *what they are seeing it do in > reality*. > > Hopefully this is helpful! > > Shannon > Jr. Developer Advocate > NGINX > > On Aug 11, 2015, at 10:33 AM, Nitin Solanki wrote: > > Hi Daniel, > I am not using Django. I am using core python files. Is > it possible with it? > > On Tue, Aug 11, 2015 at 6:21 PM, Daniel Theodoro < > daniel.theodoro at gmail.com> wrote: > >> Hi Nitin, >> >> If you're using django you can follow these steps: >> >> >> https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04 >> >> Daniel Theodoro >> Cel: 11 99399-3364 >> http://www.linkedin.com/in/danieltheodoro >> >> ? RHCA - Red Hat Certified Architect >> ? RHCDS - Red Hat Certified Datacenter Specialist >> ? RHCE - Red Hat Certified Engineer >> ? RHCVA - Red Hat Certified Virtualization Administrator >> ? LPIC-3 - Senior Level Linux Certification >> ? Novell Certified Linux Administrator - Suse 11 >> ? OCA - Oracle Enterprise Linux Administrator Certified Associate >> >> On Fri, Aug 7, 2015 at 5:12 PM, Nitin Solanki >> wrote: >> >>> Hi, >>> Right now, I am not office.. I am from India.. Now, it is 1.42 >>> am. It will be great. If you provide step by step from beginning. Is it >>> possible to mail. Thanks. >>> >>> On Sat, Aug 8, 2015 at 12:50 AM Shannon Burns wrote: >>> >>>> Hi Nitin, >>>> >>>> Would you mind providing a bit more information? >>>> >>>> On Aug 6, 2015, at 4:53 AM, Nitin Solanki wrote: >>>> >>>> I tried that and getting issues. Unable to configure. I am not getting >>>> those steps. Any help you can do by explaining in steps? >>>> >>>> >>>> What issues are you running into? Can you copy and paste any errors >>>> you?re receiving? >>>> >>>> Can you provide the configuration file you are using? >>>> >>>> What is the behavior you are expecting and what is the behavior you?re >>>> seeing? 
>>>> >>>> >>>> On Thu, Aug 6, 2015 at 5:19 PM Alt wrote: >>>> >>>>> Hello, >>>>> >>>>> I've never used python with nginx, but there are some examples on how >>>>> to >>>>> configure everything here: >>>>> http://wiki.nginx.org/Configuration#Python_via_FastCGI >>>>> >>>>> Best Regards >>>>> >>>>> Posted at Nginx Forum: >>>>> http://forum.nginx.org/read.php?2,260817,260818#msg-260818 >>>>> >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Aug 11 17:59:59 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Aug 2015 20:59:59 +0300 Subject: Nginx serving self-signed cert instead of the one defined in conf In-Reply-To: <94fc7a6a794bde0eaa42c88e8846f25e.NginxMailingListEnglish@forum.nginx.org> References: <94fc7a6a794bde0eaa42c88e8846f25e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150811175959.GV67578@mdounin.ru> Hello! On Tue, Aug 11, 2015 at 01:27:23PM -0400, Arno0x0x wrote: > Hello, > > I'm facing a strange issue since I upgraded from Nginx 1.6.2 to 1.8.0. My > configuration files have been kept identicals, as well as my official SSL > certificates. > > The problem is Nginx keeps on serving a self-signed certificate (dunno where > it takes it from) instead of my proper certificates that I defined in the > config file. Here's the server section SSL config bits : > > ------------------------------------------------------------------------------------ > server { > listen 443 ssl; ## listen for ipv4; this line is default and implied > #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 > > server_name my_fqdn; > > ssl_certificate /etc/nginx/ssl/gandi/my_fqdn.crt; > ssl_certificate_key /etc/nginx/ssl/gandi/my_fqdn.key; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers 'AES256+EECDH:AES256+EDH'; > ssl_prefer_server_ciphers on; > ssl_session_cache shared:SSL:10m; > > ... > ------------------------------------------------------------------------------------ > > This configuration works fine on my other server with nginx 1.6.2. The configuration snippet you've provided is just a snippet for a single server block, not a full configuration. Depending on other server{} blocks it may or may not work. Most notably, the "listen" directive doesn't have "default_server" parameter. That is, if there is another server{} block defined for the same listening socket in the configuration, it may be used as a default one instead (assuming that server is defined first). Try looking into your full configuration, nginx.conf. 
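To illustrate the point (names and paths below are placeholders): when two server{} blocks share a listening socket, clients whose SNI/Host name matches neither server_name are handled by the default server, and it is that server's certificate they see:

    server {
        listen 443 ssl default_server;    # or simply the first server{} defined for this socket
        server_name other.example.com;
        ssl_certificate     /etc/nginx/ssl/other.crt;
        ssl_certificate_key /etc/nginx/ssl/other.key;
    }

    server {
        listen 443 ssl;
        server_name my_fqdn;
        ssl_certificate     /etc/nginx/ssl/gandi/my_fqdn.crt;
        ssl_certificate_key /etc/nginx/ssl/gandi/my_fqdn.key;
    }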
When questions arise, it usally means that the configuration contains something like "include /path/to/files/*.conf;" - and you have to examine all files matching a given mask. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Aug 11 18:21:50 2015 From: nginx-forum at nginx.us (Arno0x0x) Date: Tue, 11 Aug 2015 14:21:50 -0400 Subject: Nginx serving self-signed cert instead of the one defined in conf In-Reply-To: <20150811175959.GV67578@mdounin.ru> References: <20150811175959.GV67578@mdounin.ru> Message-ID: <4cded21816ea6f6d8c8c8e81181dbdf5.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thanks for your answer. Alas ! I check all config files in my /etc/nginx directory, there's only one containing the server{} directive (for the sake of it, I added the default_server to the listen directive, but it doesn't change anything) : --------------------------------------------------------------------------------------------- pi at rpi /etc/nginx $ grep -r server * fastcgi.conf:fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi.conf:fastcgi_param SERVER_ADDR $server_addr; fastcgi.conf:fastcgi_param SERVER_PORT $server_port; fastcgi.conf:fastcgi_param SERVER_NAME $server_name; scgi_params:scgi_param SERVER_PROTOCOL $server_protocol; scgi_params:scgi_param SERVER_PORT $server_port; scgi_params:scgi_param SERVER_NAME $server_name; sites-available/myfqdn:# server { sites-available/myfqdn:server { sites-available/myfqdn: listen 443 ssl default_server; ## listen for ipv4; this line is default and implied sites-available/lmyfqdn: #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 sites-available/myfqdn: server_name myfqdn; sites-available/myfqdn: ssl_prefer_server_ciphers on; sites-available/myfqdn: # redirect server error pages to the static page /50x.html uwsgi_params:uwsgi_param SERVER_PROTOCOL $server_protocol; uwsgi_params:uwsgi_param SERVER_PORT $server_port; uwsgi_params:uwsgi_param SERVER_NAME $server_name; --------------------------------------------------------------------------------------------- Could it be possible that nginx reads some other config files from another location than /etc/nginx ? What are my other options (some more debug info would be useful to check where nginx gets its config from). any idea ? Thanks Arno Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260935,260942#msg-260942 From nginx-forum at nginx.us Wed Aug 12 02:37:41 2015 From: nginx-forum at nginx.us (sunzeal) Date: Tue, 11 Aug 2015 22:37:41 -0400 Subject: Should we add php-fpm for each Virtual Host or in Global Configuration ? Message-ID: <8d88b1a05f051a9b47075213812ccdef.NginxMailingListEnglish@forum.nginx.org> This is my configuration for my Virtual Host. I am not sure, but is this the ideal way of defining things ? For every domain / sub-domain, i'm adding a location ~/.php directive. Should i add it in the global nginx.conf itself instead of specifying individually for each domain i add ? 
Sample Configuration :- server { listen 80; server_name example.com; location / { root /var/www/example.com; index index.php; } location ~ \.php$ { root /var/www/example.com/; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; # fastcgi_read_timeout 30000; # fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $request_filename; include fastcgi_params; } server { listen 80; server_name www.example.net; location / { root /var/www/example.net/; index index.php; } location ~ \.php$ { root /var/www/example.net/; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; # fastcgi_read_timeout 30000; # fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $request_filename; include fastcgi_params; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260943,260943#msg-260943 From nginx-forum at nginx.us Wed Aug 12 06:45:00 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Wed, 12 Aug 2015 02:45:00 -0400 Subject: Nginx sending notification In-Reply-To: <79016e49f1498f21857d4b6501c6e27c.NginxMailingListEnglish@forum.nginx.org> References: <74bce6ed94d654c37e7e5c262754e38c.NginxMailingListEnglish@forum.nginx.org> <79016e49f1498f21857d4b6501c6e27c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <048f1b87893e22e7cd731152e0bf819f.NginxMailingListEnglish@forum.nginx.org> Is there a way I can do a HTTP POST from inside Nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260738,260947#msg-260947 From anoopalias01 at gmail.com Wed Aug 12 08:20:34 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 12 Aug 2015 13:50:34 +0530 Subject: Should we add php-fpm for each Virtual Host or in Global Configuration ? In-Reply-To: <8d88b1a05f051a9b47075213812ccdef.NginxMailingListEnglish@forum.nginx.org> References: <8d88b1a05f051a9b47075213812ccdef.NginxMailingListEnglish@forum.nginx.org> Message-ID: Please see http://nginx.org/en/docs/http/ngx_http_core_module.html#location location should be in a server or location context .So you cannot define it outside of all the server context's. One thing you can do is define a single location block for php and include it in every server context which will save you the trouble of adding it everytime. But note that in real life you may need to change the fastcgi_pass to different pools defined in php-fpm to separate php process based on user etc. On Wed, Aug 12, 2015 at 8:07 AM, sunzeal wrote: > This is my configuration for my Virtual Host. I am not sure, but is this > the > ideal way of defining things ? > > For every domain / sub-domain, i'm adding a location ~/.php directive. > Should i add it in the global nginx.conf itself instead of specifying > individually for each domain i add ? 
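A sketch of that include approach (the snippet file name and paths are placeholders, and as noted above fastcgi_pass may still need to point at different pools per site):

    # /etc/nginx/php_fastcgi.inc -- shared snippet
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include fastcgi_params;
    }

    # each virtual host then only carries what differs
    server {
        listen 80;
        server_name example.com;
        root /var/www/example.com;            # inherited by the included location
        index index.php;
        include /etc/nginx/php_fastcgi.inc;
    }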
> > Sample Configuration :- > > > server { > listen 80; > server_name example.com; > > location / { > root /var/www/example.com; > index index.php; > > } > > > location ~ \.php$ { > root /var/www/example.com/; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > # fastcgi_read_timeout 30000; > # fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; > fastcgi_param SCRIPT_FILENAME $request_filename; > include fastcgi_params; > } > > server { > listen 80; > server_name www.example.net; > > location / { > root /var/www/example.net/; > index index.php; > > } > > > location ~ \.php$ { > root /var/www/example.net/; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > # fastcgi_read_timeout 30000; > # fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; > fastcgi_param SCRIPT_FILENAME $request_filename; > include fastcgi_params; > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260943,260943#msg-260943 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From arujohn at cisco.com Wed Aug 12 09:30:15 2015 From: arujohn at cisco.com (Arun John (arujohn)) Date: Wed, 12 Aug 2015 09:30:15 +0000 Subject: Using Nginx with chunking Message-ID: Hello, I have nginx configured to send files in chunks to remote clients. The clients will contact the server to send it in chunks of 1 MB each. I am using nginx version 1.8.0 [root at ph-rdu-external-download-01 sbin]# ./nginx -V nginx version: nginx/1.8.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --user=nginx --group=nginx --prefix=/usr/local/nginx --lock-path=/var/lock/subsys/nginx --with-ld-opt=-Wl,-rpath,/usr/lib64 --with-debug --add-module=/root/integ/ngx_devel_kit-master/ --add-module=/root/integ/lua-nginx-module-0.9.15 --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --without-http_autoindex_module --without-http_fastcgi_module --with-http_ssl_module --without-http_geo_module --without-http_empty_gif_module --without-http_ssi_module --without-http_userid_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_uwsgi_module --without-http_scgi_module --with-http_realip_module --with-http_gzip_static_module --with-http_stub_status_module --with-ipv6 Currently I am running into an issue, where if the last chunk size is very small (few bytes), the server doesn?t seem to the send the chunk correctly because of which the clients would re-try again for the same chunk. For some reason it is not able to send the last chunk. I have attached the debug logs to the thread. If you see the logs the last three writes seem to be for the same chunk of size 6672 bytes However if the last chunk size relatively large, then download succeeds without any issues. The issue is seen when the last chunk size is very small. My current nginx configuration is as follows http { # Logging format log_format main '$remote_addr - $remote_user [$time_local] ' '"$request_length" "$request_time" ' '"$request" $status $bytes_sent ' '"$body_bytes_sent" "$bytes_sent" '; default_type application/octet-stream; include mime.types; keepalive_timeout 300 300; keepalive_requests 8000; charset utf-8; source_charset utf-8; # Check if it makes errors. 
ignore_invalid_headers off; recursive_error_pages on; sendfile on; server_tokens off; tcp_nodelay on; tcp_nopush off; } Regards, Arun -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx_error_location.log Type: application/octet-stream Size: 669828 bytes Desc: nginx_error_location.log URL: From nginx-forum at nginx.us Wed Aug 12 09:56:43 2015 From: nginx-forum at nginx.us (Arno0x0x) Date: Wed, 12 Aug 2015 05:56:43 -0400 Subject: Nginx serving self-signed cert instead of the one defined in conf In-Reply-To: <94fc7a6a794bde0eaa42c88e8846f25e.NginxMailingListEnglish@forum.nginx.org> References: <94fc7a6a794bde0eaa42c88e8846f25e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6a6ad7174bb32458fa4eb29fbef8ceb9.NginxMailingListEnglish@forum.nginx.org> For the record: problem solved. SHAME on me !! The problem was simply that I copied the wrong certificates from my old installation (nginx 1.6.2) to the new one (nginx 1.8.0). As often, the problem lies in front of the keyboard :-) Thanks Maxim for your assistance in any case, I learnt a few things on the way. Regards, Arno Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260935,260957#msg-260957 From agentzh at gmail.com Wed Aug 12 10:28:11 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 12 Aug 2015 18:28:11 +0800 Subject: [ANN] OpenResty 1.9.3.1 released Message-ID: Hi folks! I am glad to announce the new formal release, 1.9.3.1, of the OpenResty bundle: https://openresty.org/#Download This is the first OpenResty formal release includes an NGINX 1.9.x core. For OpenResty's release policy, please refer to the following documentation: https://openresty.org/faq.html Special thanks go to all our contributors and users for making this happen! Just as a quick heads-up: we're currently working on incorporating the new ssl_certificate_by_lua*, ssl_session_fetch_by_lua*, ssl_session_store_by_lua*, and balancer_by_lua* features of ngx_lua for future OpenResty releases. Almost all of these new features are already powering CloudFlare's online products. The new ngx_stream_lua_module is also being planned. Stay tuned! Below is the complete change log for this release, as compared to the last formal release (1.7.10.2): * upgraded the Nginx core to 1.9.3. * see the changes here: * bugfix: "./configure --help": fixed the usage text for the "--with-debug" option. thanks Kipras Mancevi?ius for the report. * bugfix: link failures with OpenSSL might happen on 64-bit Mac OS X when the "./configure" option "--with-openssl=PATH" was used and the OpenSSL source was recent enough. thanks grasses for the report. * upgraded PostgresNginxModule to 1.0rc7. * feature: fixed compilation errors with nginx 1.9.1+. thanks Vadim A. Misbakh-Soloviov for the original patch. The HTML version of the change log with lots of helpful hyper-links can be browsed here: https://openresty.org/#ChangeLog1009003 The next formal release of OpenResty will be based on the new Nginx 1.9.x core. OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: https://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. 
The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From mdounin at mdounin.ru Wed Aug 12 12:53:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Aug 2015 15:53:06 +0300 Subject: Using Nginx with chunking In-Reply-To: References: Message-ID: <20150812125305.GZ67578@mdounin.ru> Hello! On Wed, Aug 12, 2015 at 09:30:15AM +0000, Arun John (arujohn) wrote: > Hello, > > I have nginx configured to send files in chunks to remote > clients. The clients will contact the server to send it in > chunks of 1 MB each. I am using nginx version 1.8.0 You confuse chunks and range requests. These are distinct things in HTTP. > Currently I am running into an issue, where if the last chunk > size is very small (few bytes), the server doesn?t seem to the > send the chunk correctly because of which the clients would > re-try again for the same chunk. For some reason it is not able > to send the last chunk. I have attached the debug logs to the > thread. If you see the logs the last three writes seem to be for > the same chunk of size 6672 bytes What makes you think that the problem is with nginx, and not with the client? -- Maxim Dounin http://nginx.org/ From arujohn at cisco.com Wed Aug 12 13:05:13 2015 From: arujohn at cisco.com (Arun John (arujohn)) Date: Wed, 12 Aug 2015 13:05:13 +0000 Subject: Using Nginx with chunking In-Reply-To: <20150812125305.GZ67578@mdounin.ru> References: <20150812125305.GZ67578@mdounin.ru> Message-ID: Hi Maxim, Sorry for the confusion. But even for range requests, it should return the bytes requested, correct? Am I missing any configuration? Regards, Arun On 8/12/15, 6:23 PM, "nginx on behalf of Maxim Dounin" wrote: >Hello! > >On Wed, Aug 12, 2015 at 09:30:15AM +0000, Arun John (arujohn) wrote: > >> Hello, >> >> I have nginx configured to send files in chunks to remote >> clients. The clients will contact the server to send it in >> chunks of 1 MB each. I am using nginx version 1.8.0 > >You confuse chunks and range requests. These are distinct things >in HTTP. > >> Currently I am running into an issue, where if the last chunk >> size is very small (few bytes), the server doesn?t seem to the >> send the chunk correctly because of which the clients would >> re-try again for the same chunk. For some reason it is not able >> to send the last chunk. I have attached the debug logs to the >> thread. If you see the logs the last three writes seem to be for >> the same chunk of size 6672 bytes > >What makes you think that the problem is with nginx, and not with >the client? > >-- >Maxim Dounin >http://nginx.org/ > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From r at roze.lv Wed Aug 12 13:44:04 2015 From: r at roze.lv (Reinis Rozitis) Date: Wed, 12 Aug 2015 16:44:04 +0300 Subject: Using Nginx with chunking In-Reply-To: References: <20150812125305.GZ67578@mdounin.ru> Message-ID: > If you see the logs the last three writes seem to be for the same chunk of > size 6672 bytes I imagine this is a typo (and not what the client expects) because the length is 6272 bytes. > But even for range requests, it should return the bytes requested, > correct? Am I missing any configuration? Well you can always try to test it with something like curl to see if the problem is not really on the particular client: curl --header "Range: bytes=16777216-16783487" http://yourfile or curl -r 16777216-16783487 http://yourfile There are some SSL errors in the debuglog though. 
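One way to see at a glance what each range request and response looked like is a dedicated log format along these lines (a sketch; the format name and log path are arbitrary, and log_format goes in the http{} block):

    log_format ranges '$remote_addr [$time_local] "$request" $status '
                      'req_range="$http_range" resp_range="$sent_http_content_range" '
                      'sent=$bytes_sent';
    access_log /var/log/nginx/ranges.log ranges;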
rr From mdounin at mdounin.ru Wed Aug 12 14:24:04 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Aug 2015 17:24:04 +0300 Subject: Using Nginx with chunking In-Reply-To: References: <20150812125305.GZ67578@mdounin.ru> Message-ID: <20150812142404.GA67578@mdounin.ru> Hello! On Wed, Aug 12, 2015 at 01:05:13PM +0000, Arun John (arujohn) wrote: > But even for range requests, it should return the bytes requested, > correct? Am I missing any configuration? The question is the same: what makes you think that nginx doesn't return bytes requested? As per logs you've provided, everything is correctly returned to the client. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Aug 12 18:21:28 2015 From: nginx-forum at nginx.us (xfeep) Date: Wed, 12 Aug 2015 14:21:28 -0400 Subject: [ANN] Nginx-Clojure v0.4.1 Release! Message-ID: <48fae7374463cd34ce88095202cf2ae1.NginxMailingListEnglish@forum.nginx.org> 0.4.1 (2015-08-12) New Feature: Coroutine based socket supports unix domain socket New Feature: APIs for Embedding Nginx-Clojure into a standard Clojure/Java/Groovy App (issue #86). This feature makes debug/test with Nginx Clojure/Java/Groovy handlers very easy. New Feature: Autodetect jvm_path (issue #85) New Feature: Support to use annotation to mark a class or method to be suspenable in coroutine context (issue #84) Enhancement: Auto send error when meets a non websocket request with auto_upgrade_wsis on Enhancement: Add websocket-upgrade! to server channel API Enhancement: Add WholeMessageAdapter to make handling small websocket messages easier. Bug Fix: NginxHttpServerChannel.write(ByteBuffer buf) does not reset the buffer's position (issue #83) Bug Fix: No access to tomcat server 8.24 from nginx-clojure (issue #82) Binaries Distribution: Including some java sources for easy debug. Build Script: Autodetect JNI header files Below are some simple examples about embedding Nginx-Clojure and we can try them with lein repl or debug them in a Java IDE. [nginx-clojure/nginx-clojure-embed "0.4.1"] For Clojure ;;(1) Start it with ring handler and an options map ;;my-app can be a simple ring hanler or a compojure router. 
(run-server my-app {:port 8080}) ;;(2) Start it with a nginx.conf file (run-server "/my-dir/nginx.conf") ;;(3) Start it with a given work dir (binding [*nginx-work-dir* my-work-dir] (run-server ...)) ;;(4) Stop the server (stop-server) For Java //Start it with ring handler and an options map NginxEmbedServer.getServer().start("my.HelloHandler", ArrayMap.create("port", "8081")); //Start it with with a nginx.conf file NginxEmbedServer.getServer().start("/my-dir/nginx.conf"); //Start it with a given work dir NginxEmbedServer.getServer().setWorkDir(my-work-dir); NginxEmbedServer.getServer().start(...); //Stop the server NginxEmbedServer.getServer().stop(); default options: "error-log", "logs/error.log", "max-connections", "1024", "access-log", "off", "keepalive-timeout", "65", "max-threads", "8", "host", "0.0.0.0", "port", "8080", There 's an example about compojure routing and websocket example in the unit tests source : Clojure: https://github.com/nginx-clojure/nginx-clojure/blob/master/nginx-clojure-embed/test/clojure/nginx/clojure/test_embed.clj Java: https://github.com/nginx-clojure/nginx-clojure/blob/master/nginx-clojure-embed/test/java/nginx/clojure/embed/JavaHandlersTest.java Web Site http://nginx-clojure.github.io/ Source Hosted on Github https://github.com/nginx-clojure/nginx-clojure Google Group (mailing list) https://groups.google.com/forum/#!forum/nginx-clojure Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260968,260968#msg-260968 From nginx-forum at nginx.us Thu Aug 13 04:06:50 2015 From: nginx-forum at nginx.us (daveyfx) Date: Thu, 13 Aug 2015 00:06:50 -0400 Subject: Problem with uwsgi_no_cache Message-ID: Hi all - I'm attempting to exclude application/json data from storing in nginx's cache. All other content types are OK to cache. I thought that the below config would work for me, but nginx is still caching everything that is proxying. What am I doing wrong? ## in http block ## map $http_content_type $no_cache { default 0; "application/json" 1; } ## in vhost block ## location / { uwsgi_cache www; uwsgi_cache_valid 200 10m; uwsgi_cache_methods GET HEAD; uwsgi_cache_bypass $no_cache; uwsgi_no_cache $no_cache; add_header X-uWSGI-Cache $upstream_cache_status; include uwsgi_params; uwsgi_pass www; } I've tried a few other ways to set $no_cache to 1 for json content. Tried the following in both the server block and the location / block. 
if ($http_content_type = "application/json") { set $no_cache 1; } Here's my build: nginx version: nginx/1.6.2 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_geoip_module --with-http_realip_module --with-http_stub_status_module --with-file-aio --with-ipv6 --without-http_ssi_module --without-http_split_clients_module --without-http_referer_module --without-http_scgi_module --without-http_browser_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --add-module=/home/makerpm/rpmbuild/BUILD/nginx-1.6.2/mod/ngx_http_redis-0.3.7 --add-module=/home/makerpm/rpmbuild/BUILD/nginx-1.6.2/mod/nginx-x-rid-header --add-module=/home/makerpm/rpmbuild/BUILD/nginx-1.6.2/mod/nginx-upload-module --add-module=/home/makerpm/rpmbuild/BUILD/nginx-1.6.2/mod/nginx-upload-progress-module-0.8.4 --add-module=/home/makerpm/rpmbuild/BUILD/nginx-1.6.2/mod/echo-nginx-module-0.57 --with-ld-opt=-luuid --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' Thanks for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260977,260977#msg-260977 From francis at daoine.org Thu Aug 13 07:47:25 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 13 Aug 2015 08:47:25 +0100 Subject: Problem with uwsgi_no_cache In-Reply-To: References: Message-ID: <20150813074725.GV23844@daoine.org> On Thu, Aug 13, 2015 at 12:06:50AM -0400, daveyfx wrote: Hi there, > I'm attempting to exclude application/json data from storing in nginx's > cache. All other content types are OK to cache. I thought that the below > config would work for me, but nginx is still caching everything that is > proxying. What am I doing wrong? > > ## in http block ## > map $http_content_type $no_cache { $http_ variables refer to the request from the client to nginx. (http://nginx.org/r/$http_) You want to care about the response from upstream to nginx. Look at $upstream_http_ variables (http://nginx.org/r/$upstream_http_) > default 0; > "application/json" 1; > } > > > ## in vhost block ## > location / { > uwsgi_cache www; > uwsgi_cache_valid 200 10m; > uwsgi_cache_methods GET HEAD; > uwsgi_cache_bypass $no_cache; That says "do not read the response from the cache if this is true". Since you can only sensibly decide that after the upstream response has been fetched, you probably do not want that here. > uwsgi_no_cache $no_cache; That says "do not write to the cache if this is true". You do want that. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Aug 13 09:12:57 2015 From: nginx-forum at nginx.us (karkunpavan) Date: Thu, 13 Aug 2015 05:12:57 -0400 Subject: how to make nginx loadbalancer give 404 when all upstream servers are down Message-ID: <51e86283a204e7c36c89881657a4dbf1.NginxMailingListEnglish@forum.nginx.org> Hi folks, I have been stuck this issue for a long time now. 
Searches could not solve my issue, hence posting here. Please help. I am using nginx as a load balancer and nginx.conf looks like: worker_processes 4; events { worker_connections 1024; } http { upstream ab_backend { server :7000; server :7000 backup; } server { listen 8000; location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://ab_backend; } } } IP1 and IP2 are simple httpd docker containers. From another machine I am running the ab tool. What I am observing is: When IP1 is up, it responds to the requests from ab. Bring IP1 down and the backup IP2 starts to respond to all http requests. The strange thing is when both IP1 and IP2 are down, the nginx server itself takes over and responds to the http requests from ab. Is there a way to make nginx not behave this way? What I am looking for is a way to force http requests to be answered only by the upstream servers; if none of them are reachable, requests should get errors instead of non bona fide responses. I am testing using the ab and httperf tools and saw the behavior in both cases. Even when no upstreams are up, the test reports no errors or dropped requests. Please help :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260980,260980#msg-260980 From francis at daoine.org Thu Aug 13 11:12:38 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 13 Aug 2015 12:12:38 +0100 Subject: how to make nginx loadbalancer give 404 when all upstream servers are down In-Reply-To: <51e86283a204e7c36c89881657a4dbf1.NginxMailingListEnglish@forum.nginx.org> References: <51e86283a204e7c36c89881657a4dbf1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150813111238.GW23844@daoine.org> On Thu, Aug 13, 2015 at 05:12:57AM -0400, karkunpavan wrote: Hi there, > The strange thing is when both IP1 and IP2 are down, the nginx server itself > takes over and responds to the http requests from ab. Is there a way to make > nginx not behave this way? That does seem strange to me. What output do you get from a request like curl -i http://your-nginx:8000/ when one "upstream" is active; and when no "upstream" is active? What does your nginx access_log say for those two requests? f -- Francis Daly francis at daoine.org From kenz.aureus at gmail.com Fri Aug 14 05:55:11 2015 From: kenz.aureus at gmail.com (Chino Aureus) Date: Fri, 14 Aug 2015 13:55:11 +0800 Subject: nginx load balancing Message-ID: Hi, I'm new to nginx. Need help understanding the http/s load balancing capability of nginx open source versus nginx plus. Appreciate any info regarding this query :) I just downloaded nginx and I'm going to try experimenting with the load balancing capability this weekend. Regards Kenz -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Aug 14 06:06:29 2015 From: nginx-forum at nginx.us (karkunpavan) Date: Fri, 14 Aug 2015 02:06:29 -0400 Subject: how to make nginx loadbalancer give 404 when all upstream servers are down In-Reply-To: <20150813111238.GW23844@daoine.org> References: <20150813111238.GW23844@daoine.org> Message-ID: <4612479b75e2fe05ee7e9a564eeddf09.NginxMailingListEnglish@forum.nginx.org> Hi Francis, Thanks for taking a look. With curl it behaves as expected, but with tools like ab and httperf, I am not seeing bad gateway responses when both the upstream servers are down. Read ahead for more details. Below is info for an ab request and a curl request.
ab: When one upstream server is up: nginx console: <> - - [14/Aug/2015:13:44:57 +0000] "GET / HTTP/1.0" 200 45 "-" "ApacheBench/2.3" <> - - [14/Aug/2015:13:44:57 +0000] "GET / HTTP/1.0" 200 45 "-" "ApacheBench/2.3" upstream httpd server console: <> - - [14/Aug/2015:13:44:57 +0000] "GET / HTTP/1.0" 200 45 <> - - [14/Aug/2015:13:44:57 +0000] "GET / HTTP/1.0" 200 45 When both upstream servers are down: 2015/08/14 13:43:45 [error] 5#5: *14 connect() failed (111: Connection refused) while connecting to upstream, client: <>, server: , request: "GET / HTTP/1.0", upstream: "http://>:/", host: "" 2015/08/14 13:43:45 [error] 5#5: *14 connect() failed (111: Connection refused) while connecting to upstream, client: <>, server: , request: "GET / HTTP/1.0", upstream: "http://:/", host: "" 2015/08/14 13:43:45 [error] 5#5: *14 no live upstreams while connecting to upstream, client: <>, server: , request: "GET / HTTP/1.0", upstream: "http://ab_backend/", host: "" <> - - [14/Aug/2015:13:43:45 +0000] "GET / HTTP/1.0" 502 172 "-" "ApacheBench/2.3" <> - - [14/Aug/2015:13:43:45 +0000] "GET / HTTP/1.0" 502 172 "-" "ApacheBench/2.3" as a result in bith cases ab just shows all successful responses: # ab -n 1 -c 1 http://:/ This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking <> (be patient).....done Server Software: nginx/1.9.3 Server Hostname: 10.107.53.127 Server Port: 9000 Document Path: / Document Length: 45 bytes Concurrency Level: 1 Time taken for tests: 0.040 seconds Complete requests: 1 Failed requests: 0 Write errors: 0 Keep-Alive requests: 1 Total transferred: 285 bytes HTML transferred: 45 bytes Requests per second: 24.85 [#/sec] (mean) Time per request: 40.234 [ms] (mean) Time per request: 40.234 [ms] (mean, across all concurrent requests) Transfer rate: 6.92 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 2 2 0.0 2 2 Processing: 39 39 0.0 39 39 Waiting: 38 38 0.0 38 38 Total: 40 40 0.0 40 40 curl: when one upstream is up: <> - - [14/Aug/2015:13:58:37 +0000] "GET / HTTP/1.1" 200 45 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2" When both upstream servers are down: 2015/08/14 14:00:37 [error] 8#8: *26 connect() failed (111: Connection refused) while connecting to upstream, client: <>, server: , request: "GET / HTTP/1.1", upstream: "http://:/", host: ":" 2015/08/14 14:00:37 [error] 8#8: *26 connect() failed (111: Connection refused) while connecting to upstream, client: <>, server: , request: "GET / HTTP/1.1", upstream: "http://:/", host: ":" <> - - [14/Aug/2015:14:00:37 +0000] "GET / HTTP/1.1" 502 172 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2" and curl gets a "502 Bad Gateway" which is right. Any suggestions what I might be doing wrong? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260980,260998#msg-260998 From nginx-forum at nginx.us Fri Aug 14 06:20:37 2015 From: nginx-forum at nginx.us (dhirajpraj@gmail.com) Date: Fri, 14 Aug 2015 02:20:37 -0400 Subject: Removal of server from nginx load balancer does not reflect after nginx reload Message-ID: Hi, I am using nginx 1.8.0 as a load balancer. 
Below is the configuration snippet: upstream appservers { least_conn; server 10.21.3.123:8083; server 10.21.3.125:8083; } server { listen 443 ssl spdy backlog=2048; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains"; add_header Alternate-Protocol 443:npn-spdy/3; ssl_certificate /etc/nginx/click.rummycircle.com/click.rummycircle.com.crt; ssl_certificate_key /etc/nginx/click.rummycircle.com/click.rummycircle.com.key; ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:AES256-SHA256:RC4:HIGH:!MD5:!SSLv2:!ADH:!aNULL:!eNULL:!NULL:!DH:!ADH:!EDH:!AESGCM; ssl_prefer_server_ciphers on; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; location / { # log_format postdata '$remote_addr - $remote_user [$time_local] ' # '"$request" $status $bytes_sent ' # '"$http_referer" "$http_user_agent" "$request_body"'; # access_log /home/deploy/nginx_logs/new_access.log postdata; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://appservers; } } Nginx is balancing the load between our 2 servers: 10.21.3.123 and 10.21.3.125. I want to remove one server from the load balancer at runtime without any packet loss. So I removed i by changing the configuration like below: upstream appservers { least_conn; server 10.21.3.123:8083; } Then I reloaded nginx: sudo service nginx reload I checked with nginx -t that Nginx successfully reloaded. However, it still continues to send new requests to the removed server. Please assist. My OS details: uname -a Linux an-lb-01 2.6.32-504.12.2.el6.x86_64 #1 SMP Wed Mar 11 22:03:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260999,260999#msg-260999 From nginx-forum at nginx.us Fri Aug 14 07:12:35 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 14 Aug 2015 03:12:35 -0400 Subject: how to make nginx loadbalancer give 404 when all upstream servers are down In-Reply-To: <4612479b75e2fe05ee7e9a564eeddf09.NginxMailingListEnglish@forum.nginx.org> References: <20150813111238.GW23844@daoine.org> <4612479b75e2fe05ee7e9a564eeddf09.NginxMailingListEnglish@forum.nginx.org> Message-ID: <00030821916ef77bc9b89078995cd695.NginxMailingListEnglish@forum.nginx.org> karkunpavan Wrote: ------------------------------------------------------- > <> - - [14/Aug/2015:13:43:45 +0000] "GET / HTTP/1.0" 502 > 172 "-" "ApacheBench/2.3" > <> - - [14/Aug/2015:13:43:45 +0000] "GET / HTTP/1.0" 502 > 172 "-" "ApacheBench/2.3" [...] > <> - - [14/Aug/2015:14:00:37 +0000] "GET / HTTP/1.1" 502 > 172 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 > NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2" I see a 502 for both tools, might be an 'ab' thing. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260980,261000#msg-261000 From nginx-forum at nginx.us Fri Aug 14 07:45:34 2015 From: nginx-forum at nginx.us (karkunpavan) Date: Fri, 14 Aug 2015 03:45:34 -0400 Subject: how to make nginx loadbalancer give 404 when all upstream servers are down In-Reply-To: <00030821916ef77bc9b89078995cd695.NginxMailingListEnglish@forum.nginx.org> References: <20150813111238.GW23844@daoine.org> <4612479b75e2fe05ee7e9a564eeddf09.NginxMailingListEnglish@forum.nginx.org> <00030821916ef77bc9b89078995cd695.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3d2d1d8a28cf7e753513988d51ae07cc.NginxMailingListEnglish@forum.nginx.org> Indeed. It's right there I did not even see it. Thanks itpp2012 and Francis Daly. 
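For what the subject line originally asked (returning a 404 instead of the 502 when no upstream is alive), one possible sketch based on the config in this thread; the error page URI and root are placeholders:

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://ab_backend;
        error_page 502 503 504 =404 /upstream_down.html;
    }

    location = /upstream_down.html {
        internal;
        root /usr/share/nginx/html;
    }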
since ab and httperf are behaving unexpectedly, I think best way is to parse the nginx logs to find out dropped requests. Thanks a lot :) :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260980,261001#msg-261001 From francis at daoine.org Fri Aug 14 07:54:33 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Aug 2015 08:54:33 +0100 Subject: how to make nginx loadbalancer give 404 when all upstream servers are down In-Reply-To: <4612479b75e2fe05ee7e9a564eeddf09.NginxMailingListEnglish@forum.nginx.org> References: <20150813111238.GW23844@daoine.org> <4612479b75e2fe05ee7e9a564eeddf09.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150814075433.GX23844@daoine.org> On Fri, Aug 14, 2015 at 02:06:29AM -0400, karkunpavan wrote: Hi there, > Thanks for taking a look. With it curl behaves as expected, but with tools > like ab and httperf, I am not seeing bad gateway responses when both the > upstream servers are down. > ab: > > When one upstream server is up: > > nginx console: > <> - - [14/Aug/2015:13:44:57 +0000] "GET / HTTP/1.0" 200 45 "-" > "ApacheBench/2.3" Response status 200, content size 45. > When both upstream servers are down: > > <> - - [14/Aug/2015:13:43:45 +0000] "GET / HTTP/1.0" 502 172 "-" > "ApacheBench/2.3" Response status 502, content size 172. > as a result in bith cases ab just shows all successful responses: > # ab -n 1 -c 1 http://:/ > Server Port: 9000 > Complete requests: 1 > Failed requests: 0 > HTML transferred: 45 bytes One request, 45 bytes of content. That is the "success" case. If the "fail" case shows the same, while the nginx logs show something else, then your testing tool is wrong. > Any suggestions what I might be doing wrong? ab is talking to port 9000. Your previous config showed nginx listening on port 8000. Might that be related? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Aug 14 08:17:34 2015 From: nginx-forum at nginx.us (karkunpavan) Date: Fri, 14 Aug 2015 04:17:34 -0400 Subject: how to make nginx loadbalancer give 404 when all upstream servers are down In-Reply-To: <20150814075433.GX23844@daoine.org> References: <20150814075433.GX23844@daoine.org> Message-ID: <9c89d503d9b64474572f12671688d91e.NginxMailingListEnglish@forum.nginx.org> nginx is listening on 8000. port 9000 is mapped to 8000 on the docker so I think it's not related. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260980,261004#msg-261004 From maxim at nginx.com Fri Aug 14 10:03:10 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 14 Aug 2015 13:03:10 +0300 Subject: nginx load balancing In-Reply-To: References: Message-ID: <55CDBCDE.10509@nginx.com> Hi, On 8/14/15 8:55 AM, Chino Aureus wrote: > > Hi, > > I'm new to nginx. Need help understanding the http/s load balancing > capability of nginx open source versus nginx plus. Appreciate any > info regarding this query :) > > I just downloaded nginx and I'm going to try experiment load > balancing capability this weekend. > nginx admin guide(*) and blog.nginx.com is a good start * https://www.nginx.com/resources/admin-guide/ -- Maxim Konovalov Discover best practices for building & delivering apps at scale. nginx.conf 2015: Sept. 22-24, San Francisco. http://nginx.com/nginxconf From al-nginx at none.at Fri Aug 14 10:13:12 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 14 Aug 2015 12:13:12 +0200 Subject: nginx load balancing In-Reply-To: References: Message-ID: <8bff1193653efc76bdd1699766f015d3@none.at> Hi. 
Am 14-08-2015 07:55, schrieb Chino Aureus: > Hi, > > I'm new to nginx. Need help understanding the http/s load balancing > capability of nginx open source versus nginx plus. Appreciate any info > regarding this query :) You can find a comparison between the version in the feature matrix. https://www.nginx.com/products/feature-matrix/ best regards Aleks From nginx-forum at nginx.us Fri Aug 14 10:22:23 2015 From: nginx-forum at nginx.us (Muffel2k) Date: Fri, 14 Aug 2015 06:22:23 -0400 Subject: Rewrite rules for "random" subfolders Message-ID: Hey, I would like to migrate my page from a webhoster to my own vServer running Nginx. So far I got everything up and running EXCEPT these stupid .htaccess file. Let me explain: My site contains several galleries with slideshows. When you are watching one of those you click a link which looks like this "DOMAIN/galleries/album-set/album-2/DSC01154-single.php" (only DOMAIN/galleries and -single.php) are fixed. The rest can vary. Those links are generated by Lightroom and a plugin and during this process a .htaccess file will be copied into each folder. The content looks like this: RewriteEngine On RewriteRule ^(.*)-single.*php$ single.php?id=$1 [L] This makes my link DOMAIN/galleries/album-set/album-2/DSC01154-single.php point to this DOMAIN/galleries/album-set/album-2/single.php?id=DSC01154 as you can see, the "DSC01154" will be passed through. I am trying for several days to create a rewrite rule for Nginx doing the same without killing my page. Does anyone knows a solution for this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261008,261008#msg-261008 From nginx-forum at nginx.us Fri Aug 14 16:12:29 2015 From: nginx-forum at nginx.us (pad19) Date: Fri, 14 Aug 2015 12:12:29 -0400 Subject: RTMP save and watch with 1h delay Message-ID: <24d0503d2825737210d29140368a8ef9.NginxMailingListEnglish@forum.nginx.org> HI, Nginx receive rtmp unsecure link from admin -> restream the same rtmp to users just using secure link & record to any format -> if user want, he can watch transmision choosing 1, 2 or 3h delay. Delay is static and cant be change during watching. Is such scenario possible in nginx-rtmp? Can you give me tips to configure it? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261015,261015#msg-261015 From lists at ruby-forum.com Fri Aug 14 16:21:18 2015 From: lists at ruby-forum.com (Steven Wright) Date: Fri, 14 Aug 2015 18:21:18 +0200 Subject: Good log analysis tool for nginx? In-Reply-To: <634760.80154.qm@web46313.mail.sp1.yahoo.com> References: <634760.80154.qm@web46313.mail.sp1.yahoo.com> Message-ID: <9f0b5f523564ec9d92ea3131706b2e7d@ruby-forum.com> I know this is an old post, but you may try GoAccess; works great - free and open source console based. It may output an HTML,JSON,CSV report too. http://goaccess.io/ -- Posted via http://www.ruby-forum.com/. From gfrankliu at gmail.com Fri Aug 14 16:51:14 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 14 Aug 2015 09:51:14 -0700 Subject: header handling Message-ID: Hi, I have a few questions regarding headers in nginx: 1) I use proxy_set_header to pass a header to upstream servers. Is it possible to honor the header if the incoming request already has it? 2) I want to pass the "Server" header from upstream response to clients, and if there is no such response header, I'd like to add a customer one. Is it possible via core nginx or any third party modules? Currently I am using "proxy_pass_header Server" without any check. 
I am not sure what happens if upstream response doesn't have it. 3) I am trying to log an upstream response header to access log but it has a "dot" in it (say X.header). I don't have any control to the upstream servers. On nginx side, I tried setting "ignore_invalid_headers off" in the server block, and in the logformat, I tried a few things for the the column: $upstream_http_x.header $upstream_http_x_header $upstream_http_x-header, but nothing works. Any ideas how I can log it? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryd994 at 163.com Fri Aug 14 17:21:33 2015 From: ryd994 at 163.com (ryd994) Date: Fri, 14 Aug 2015 17:21:33 +0000 Subject: Rewrite rules for "random" subfolders In-Reply-To: References: Message-ID: Hi, Does this work? rewrite ^(.*)/(.*)-single\.php $1/single. php?id=$2 last; Not tested, so anything could happen. On Fri, Aug 14, 2015, 18:22 Muffel2k wrote: > Hey, > > I would like to migrate my page from a webhoster to my own vServer running > Nginx. So far I got everything up and running EXCEPT these stupid .htaccess > file. Let me explain: > > My site contains several galleries with slideshows. When you are watching > one of those you click a link which looks like this > > "DOMAIN/galleries/album-set/album-2/DSC01154-single.php" > > (only DOMAIN/galleries and -single.php) are fixed. The rest can vary. Those > links are generated by Lightroom and a plugin and during this process a > .htaccess file will be copied into each folder. The content looks like > this: > > > RewriteEngine On > RewriteRule ^(.*)-single.*php$ single.php?id=$1 [L] > > > This makes my link > DOMAIN/galleries/album-set/album-2/DSC01154-single.php > point to this > DOMAIN/galleries/album-set/album-2/single.php?id=DSC01154 > > as you can see, the "DSC01154" will be passed through. I am trying for > several days to create a rewrite rule for Nginx doing the same without > killing my page. > > Does anyone knows a solution for this? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261008,261008#msg-261008 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryd994 at 163.com Fri Aug 14 17:27:10 2015 From: ryd994 at 163.com (ryd994) Date: Fri, 14 Aug 2015 17:27:10 +0000 Subject: Removal of server from nginx load balancer does not reflect after nginx reload In-Reply-To: References: Message-ID: Hi, Are you using free Nginx or Nginx Plus? Per my memory, server group reconfigure without restart is a Plus only feature. On Fri, Aug 14, 2015, 14:20 dhirajpraj at gmail.com wrote: > Hi, > I am using nginx 1.8.0 as a load balancer. 
> > Below is the configuration snippet: > > upstream appservers { > least_conn; > server 10.21.3.123:8083; > server 10.21.3.125:8083; > } > > server { > listen 443 ssl spdy backlog=2048; > add_header Strict-Transport-Security "max-age=31536000; includeSubDomains"; > > add_header Alternate-Protocol 443:npn-spdy/3; > > ssl_certificate > /etc/nginx/click.rummycircle.com/click.rummycircle.com.crt; > ssl_certificate_key > /etc/nginx/click.rummycircle.com/click.rummycircle.com.key; > > ssl_ciphers > > ECDHE-RSA-AES256-SHA384:AES256-SHA256:AES256-SHA256:RC4:HIGH:!MD5:!SSLv2:!ADH:!aNULL:!eNULL:!NULL:!DH:!ADH:!EDH:!AESGCM; > ssl_prefer_server_ciphers on; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > location / { > # log_format postdata '$remote_addr - $remote_user > [$time_local] ' > # '"$request" $status $bytes_sent ' > # '"$http_referer" "$http_user_agent" > "$request_body"'; > # access_log /home/deploy/nginx_logs/new_access.log > postdata; > proxy_set_header Host $http_host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_pass http://appservers; > } > } > > Nginx is balancing the load between our 2 servers: 10.21.3.123 and > 10.21.3.125. I want to remove one server from the load balancer at runtime > without any packet loss. So I removed i by changing the configuration like > below: > upstream appservers { > least_conn; > server 10.21.3.123:8083; > } > > Then I reloaded nginx: sudo service nginx reload > I checked with nginx -t that Nginx successfully reloaded. > However, it still continues to send new requests to the removed server. > Please assist. > > > My OS details: > uname -a > Linux an-lb-01 2.6.32-504.12.2.el6.x86_64 #1 SMP Wed Mar 11 22:03:14 UTC > 2015 x86_64 x86_64 x86_64 GNU/Linux > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,260999,260999#msg-260999 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryd994 at 163.com Fri Aug 14 17:36:57 2015 From: ryd994 at 163.com (ryd994) Date: Fri, 14 Aug 2015 17:36:57 +0000 Subject: header handling In-Reply-To: References: Message-ID: Hi, 1) Use mapped variable map $http_
<header> $new_header { "" "value if not set"; default $http_<header>
; } Then you can set header with the new variable. 2) I guess you can use map, too. Use $upstream_http_*name* instead. 3) Sorry, I have no idea on this. On Sat, Aug 15, 2015, 00:51 Frank Liu wrote: > Hi, > > I have a few questions regarding headers in nginx: > > 1) I use proxy_set_header to pass a header to upstream servers. Is it > possible to honor the header if the incoming request already has it? > > 2) I want to pass the "Server" header from upstream response to clients, > and if there is no such response header, I'd like to add a customer one. Is > it possible via core nginx or any third party modules? Currently I am using > "proxy_pass_header Server" without any check. I am not sure what happens if > upstream response doesn't have it. > > 3) I am trying to log an upstream response header to access log but it has > a "dot" in it (say X.header). I don't have any control to the upstream > servers. On nginx side, I tried setting "ignore_invalid_headers off" in the > server block, and in the logformat, I tried a few things for the the > column: $upstream_http_x.header $upstream_http_x_header > $upstream_http_x-header, but nothing works. Any ideas how I can log it? > > Thanks! > Frank > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Aug 14 18:33:35 2015 From: nginx-forum at nginx.us (Muffel2k) Date: Fri, 14 Aug 2015 14:33:35 -0400 Subject: Rewrite rules for "random" subfolders In-Reply-To: References: Message-ID: Unfortunately not -.- here is a small summary of my vhost server { location / { index index.php index.html index.htm; try_files $uri $uri/ @rewrites; rewrite ^(.*)/(.*)-single\.php $1/single.php?id=$2 last; } location @rewrites { rewrite ^ /index.php last; } } I emptied my cache and restarted nginx as well .... no success ryd994 Wrote: ------------------------------------------------------- > Hi, > > Does this work? > rewrite ^(.*)/(.*)-single\.php $1/single. php?id=$2 last; > > Not tested, so anything could happen. > > On Fri, Aug 14, 2015, 18:22 Muffel2k wrote: > > > Hey, > > > > I would like to migrate my page from a webhoster to my own vServer > running > > Nginx. So far I got everything up and running EXCEPT these stupid > .htaccess > > file. Let me explain: > > > > My site contains several galleries with slideshows. When you are > watching > > one of those you click a link which looks like this > > > > "DOMAIN/galleries/album-set/album-2/DSC01154-single.php" > > > > (only DOMAIN/galleries and -single.php) are fixed. The rest can > vary. Those > > links are generated by Lightroom and a plugin and during this > process a > > .htaccess file will be copied into each folder. The content looks > like > > this: > > > > > > RewriteEngine On > > RewriteRule ^(.*)-single.*php$ single.php?id=$1 [L] > > > > > > This makes my link > > DOMAIN/galleries/album-set/album-2/DSC01154-single.php > > point to this > > DOMAIN/galleries/album-set/album-2/single.php?id=DSC01154 > > > > as you can see, the "DSC01154" will be passed through. I am trying > for > > several days to create a rewrite rule for Nginx doing the same > without > > killing my page. > > > > Does anyone knows a solution for this? 
> > > > Posted at Nginx Forum: > > http://forum.nginx.org/read.php?2,261008,261008#msg-261008 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261008,261024#msg-261024 From gfrankliu at gmail.com Fri Aug 14 20:04:26 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 14 Aug 2015 13:04:26 -0700 Subject: header handling In-Reply-To: References: Message-ID: Thanks ryd994 for the suggestion! 1 and 2 are working now. Anyone else has any ideas on 3? Frank On Fri, Aug 14, 2015 at 10:36 AM, ryd994 wrote: > Hi, > > 1) Use mapped variable > > map $http_
<header> $new_header { > "" "value if not set"; > default $http_<header>
; > } > Then you can set header with the new variable. > > 2) I guess you can use map, too. Use $upstream_http_*name* instead. > > 3) Sorry, I have no idea on this. > > On Sat, Aug 15, 2015, 00:51 Frank Liu wrote: > >> Hi, >> >> I have a few questions regarding headers in nginx: >> >> 1) I use proxy_set_header to pass a header to upstream servers. Is it >> possible to honor the header if the incoming request already has it? >> >> 2) I want to pass the "Server" header from upstream response to clients, >> and if there is no such response header, I'd like to add a customer one. Is >> it possible via core nginx or any third party modules? Currently I am using >> "proxy_pass_header Server" without any check. I am not sure what happens if >> upstream response doesn't have it. >> >> 3) I am trying to log an upstream response header to access log but it >> has a "dot" in it (say X.header). I don't have any control to the upstream >> servers. On nginx side, I tried setting "ignore_invalid_headers off" in the >> server block, and in the logformat, I tried a few things for the the >> column: $upstream_http_x.header $upstream_http_x_header >> $upstream_http_x-header, but nothing works. Any ideas how I can log it? >> >> Thanks! >> Frank >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Sat Aug 15 07:15:47 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Sat, 15 Aug 2015 00:15:47 -0700 Subject: header handling In-Reply-To: References: Message-ID: I made the below patch and can now use $upstream_http_x_header for logformat to capture the header X.header in the access log. Does anybody see any issues with the patch? --- src/http/ngx_http_variables.c.orig 2015-08-15 02:19:31.635328112 +0000 +++ src/http/ngx_http_variables.c 2015-08-15 02:19:42.051541422 +0000 @@ -897,6 +897,8 @@ } else if (ch == '-') { ch = '_'; + } else if (ch == '.') { + ch = '_'; } if (var->data[n + prefix] != ch) { Thanks! Frank On Fri, Aug 14, 2015 at 1:04 PM, Frank Liu wrote: > Thanks ryd994 for the suggestion! 1 and 2 are working now. > Anyone else has any ideas on 3? > > Frank > > On Fri, Aug 14, 2015 at 10:36 AM, ryd994 wrote: > >> Hi, >> >> 1) Use mapped variable >> >> map $http_
<header> $new_header { >> "" "value if not set"; >> default $http_<header>
; >> } >> Then you can set header with the new variable. >> >> 2) I guess you can use map, too. Use $upstream_http_*name* instead. >> >> 3) Sorry, I have no idea on this. >> >> On Sat, Aug 15, 2015, 00:51 Frank Liu wrote: >> >>> Hi, >>> >>> I have a few questions regarding headers in nginx: >>> >>> 1) I use proxy_set_header to pass a header to upstream servers. Is it >>> possible to honor the header if the incoming request already has it? >>> >>> 2) I want to pass the "Server" header from upstream response to clients, >>> and if there is no such response header, I'd like to add a customer one. Is >>> it possible via core nginx or any third party modules? Currently I am using >>> "proxy_pass_header Server" without any check. I am not sure what happens if >>> upstream response doesn't have it. >>> >>> 3) I am trying to log an upstream response header to access log but it >>> has a "dot" in it (say X.header). I don't have any control to the upstream >>> servers. On nginx side, I tried setting "ignore_invalid_headers off" in the >>> server block, and in the logformat, I tried a few things for the the >>> column: $upstream_http_x.header $upstream_http_x_header >>> $upstream_http_x-header, but nothing works. Any ideas how I can log it? >>> >>> Thanks! >>> Frank >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Sat Aug 15 07:45:35 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 15 Aug 2015 13:15:35 +0530 Subject: Rewrite rules for "random" subfolders In-Reply-To: References: Message-ID: Try the below one in the server context ..outside all location blocks. You can enable dedug log and do rewrite_log on; to see if its matching etc. Good Luck! rewrite ^/galleries/album-set/album-2/(.*)-single\.php /galleries/album-set/album-2/single.php?id=$1 last; -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Aug 15 17:45:46 2015 From: nginx-forum at nginx.us (Muffel2k) Date: Sat, 15 Aug 2015 13:45:46 -0400 Subject: Rewrite rules for "random" subfolders In-Reply-To: References: Message-ID: Great! It is working now, thank you very much :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261008,261031#msg-261031 From nginx-forum at nginx.us Sat Aug 15 17:48:36 2015 From: nginx-forum at nginx.us (daveyfx) Date: Sat, 15 Aug 2015 13:48:36 -0400 Subject: Problem with uwsgi_no_cache In-Reply-To: References: Message-ID: <33286ee2aeb3525a31d42e5ee18324ad.NginxMailingListEnglish@forum.nginx.org> Thank you, Francis. For anyone wondering what my corrected configuration looks like, here it is. All the JSON content returned by the upstream is now ignored and only caching HTML content as desired. 
## cache zone config ## uwsgi_cache_path /var/cache/nginx/files keys_zone=www:10m inactive=10m; uwsgi_cache_key "$scheme$host$uri$is_args$args"; ## in http block ## map $upstream_http_content_type $no_cache { "application/json" 1; default 0; } ## in server block for my vhost ## location / { uwsgi_cache www; uwsgi_cache_valid 200 10m; uwsgi_cache_methods GET HEAD; uwsgi_no_cache $no_cache; add_header X-uWSGI-Cache $upstream_cache_status; include uwsgi_params; uwsgi_pass www; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,260977,261032#msg-261032 From nginx-forum at nginx.us Sat Aug 15 18:50:13 2015 From: nginx-forum at nginx.us (daveyfx) Date: Sat, 15 Aug 2015 14:50:13 -0400 Subject: uwsgi_cache only caching root location Message-ID: Hello all - I'm having an issue where nginx is only caching homepage requests. If i send requests to my server, the HTML at the homepage is saved, but requests to any URI otherwise do not save in the cache and upstream_cache_status returns with a MISS. How can I fix my config so that requests other than / will cache properly? ## cache zone config ## uwsgi_cache_path /var/cache/nginx/files keys_zone=www:10m inactive=10m; uwsgi_cache_key "$scheme$host$uri$is_args$args"; ## in http block ## map $upstream_http_content_type $no_cache { "application/json" 1; default 0; } ## in server block for my vhost ## location / { uwsgi_cache www; uwsgi_cache_valid 200 10m; uwsgi_cache_methods GET HEAD; uwsgi_no_cache $no_cache; add_header X-uWSGI-Cache $upstream_cache_status; include uwsgi_params; uwsgi_pass www; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261033,261033#msg-261033 From francis at daoine.org Sat Aug 15 21:22:31 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 15 Aug 2015 22:22:31 +0100 Subject: uwsgi_cache only caching root location In-Reply-To: References: Message-ID: <20150815212231.GY23844@daoine.org> On Sat, Aug 15, 2015 at 02:50:13PM -0400, daveyfx wrote: Hi there, > I'm having an issue where nginx is only caching homepage requests. If i > send requests to my server, the HTML at the homepage is saved, but requests > to any URI otherwise do not save in the cache and upstream_cache_status > returns with a MISS. > > How can I fix my config so that requests other than / will cache properly? http://nginx.org/r/uwsgi_cache_valid What in the full response from the uwsgi server for the homepage request? What is the full response for one other request? "tcpdump" or otherwise snoop the traffic between nginx and uwsgi. Or perhaps check the nginx debug log. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Aug 15 23:22:56 2015 From: nginx-forum at nginx.us (daveyfx) Date: Sat, 15 Aug 2015 19:22:56 -0400 Subject: uwsgi_cache only caching root location In-Reply-To: <20150815212231.GY23844@daoine.org> References: <20150815212231.GY23844@daoine.org> Message-ID: <72c761d5ac314a530ca93f04ebc39a79.NginxMailingListEnglish@forum.nginx.org> Hi Francis - In the uwsgi logs for my Python application, I see in the request logs: [pid: 15085|app: 0|req: 25/81] 38.103.38.200 () {32 vars in 503 bytes} [Sat Aug 15 22:22:08 2015] GET /congress?mref=nav => generated 108292 bytes in 66 msecs (HTTP/1.1 200) 4 headers in 166 bytes (3 switches on core 0) I have my apps servers on the upstream configured to use uwsgi protocol (for use with uwsgi pass in nginx) and decided to see what would happen if I switched to http for the app, and configure nginx to use proxy_ directives in lieu of all the uwsgi_ directives. 
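For reference, a proxy_* counterpart of the uwsgi cache configuration above would look roughly like the sketch below. It reuses the zone name, key and content-type map from the earlier snippet and assumes the upstream group "www" now speaks plain HTTP; it is an illustration, not the poster's exact config.

## cache zone config ##
proxy_cache_path /var/cache/nginx/files keys_zone=www:10m inactive=10m;
proxy_cache_key "$scheme$host$uri$is_args$args";

## in server block ##
location / {
    proxy_cache www;
    proxy_cache_valid 200 10m;
    proxy_cache_methods GET HEAD;
    proxy_no_cache $no_cache;                     # same content-type map as before
    add_header X-Cache $upstream_cache_status;
    proxy_pass http://www;
}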
Strangely, this still did not fix the caching issue. Here's the output of the tcpdump on my apps node, still with http protocol used instead of uwsgi. GET /congress?mref=nav HTTP/1.1 Host: origin.xxxx.com X-Real-IP: xxx.xxx.xxx.xxx X-Forwarded-For: xxx.xxx.xxx.xxx Authorization: Basic REZQMTphZHRlc3Q= User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 Accept: */* HTTP/1.1 200 OK X-Frame-Options: SAMEORIGIN Content-Type: text/html; charset=utf-8 Set-Cookie: obj_id=1732; Path=/ Set-Cookie: landing_type=pub_chan; Path=/ Just to test, I even set proxy_cache_valid to "any" and still only the home page will cache. Also commented out the proxy_no_cache directive in my location / block and strangely, the issue persists. Thank you, Dave Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261033,261035#msg-261035 From nginx-forum at nginx.us Sun Aug 16 01:16:16 2015 From: nginx-forum at nginx.us (daveyfx) Date: Sat, 15 Aug 2015 21:16:16 -0400 Subject: uwsgi_cache only caching root location In-Reply-To: <20150815212231.GY23844@daoine.org> References: <20150815212231.GY23844@daoine.org> Message-ID: Realized you asked for the results of both requests. Here's the curl -svo /dev/null output from: Home page: > GET / HTTP/1.1 > User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Host: apps01.njprod.amc:9001 > Accept: */* > < HTTP/1.1 200 OK < X-Frame-Options: SAMEORIGIN < Content-Type: text/html; charset=utf-8 And a page that will not cache: > GET /congress?mref=nav HTTP/1.1 > User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Host: apps01.njprod.amc:9001 > Accept: */* > < HTTP/1.1 200 OK < X-Frame-Options: SAMEORIGIN < Content-Type: text/html; charset=utf-8 < Set-Cookie: obj_id=1732; Path=/ < Set-Cookie: landing_type=pub_chan; Path=/ Thank you, Dave Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261033,261036#msg-261036 From nginx-forum at nginx.us Sun Aug 16 05:16:21 2015 From: nginx-forum at nginx.us (daveyfx) Date: Sun, 16 Aug 2015 01:16:21 -0400 Subject: uwsgi_cache only caching root location In-Reply-To: References: <20150815212231.GY23844@daoine.org> Message-ID: uwsgi_ignore_headers Set-Cookie; This solved my issue. The header is not being sent on the home page, but is sent with almost all other pages. Thanks for the tips. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261033,261037#msg-261037 From sherlockhugo at gmail.com Sun Aug 16 05:35:44 2015 From: sherlockhugo at gmail.com (Raul Hugo Noriega) Date: Sun, 16 Aug 2015 01:35:44 -0400 Subject: =?UTF-8?Q?=C3=9Anete_a_m=C3=AD_en_el_Grupo_de_Meetup_AWS_User_Group_Peru?= Message-ID: <1169370073.1439703344154.JavaMail.nobody@app20.meetup.com> Raul Hugo Noriega te invit? a unirte a Meetup "Hola Ya somos una comunidad Oficial, Pronto realizaremos el Primer MeetUp Unete!" -------------------------------------- AWS User Group Peru AWS Per? es una comunidad, integrada por Ingenieros, T?cnicos y Geeks con experiencia en Administraci?n y Seguridad de Servicios Amazon Web Services (AWS) en alta disponibilidad (IAAS, PAAS SAAS).... ??nete ahora! http://www.meetup.com/es/awsperu/t/ti1_1/?gj=ej4 -------------------------------------- Si no est?s interesado, no tienes que hacer nada. Meetup no mantendr? tu direcci?n en ninguna lista. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sun Aug 16 06:54:37 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 16 Aug 2015 07:54:37 +0100 Subject: uwsgi_cache only caching root location In-Reply-To: References: <20150815212231.GY23844@daoine.org> Message-ID: <20150816065437.GZ23844@daoine.org> On Sun, Aug 16, 2015 at 01:16:21AM -0400, daveyfx wrote: Hi there, > uwsgi_ignore_headers Set-Cookie; > > This solved my issue. The header is not being sent on the home page, but is > sent with almost all other pages. Yes, that's the reason in this case. """ If the header includes the ?Set-Cookie? field, such a response will not be cached. ... Processing of one or more of these response header fields can be disabled using the uwsgi_ignore_headers directive. """ If "Set-Cookie" were cached, then everyone who fetches from the cache would be invited to set the same cookie, which probably defeats the purpose of cookies. Another way to achieve the same would be to turn off the "Set-Cookie" on the upstream, for all pages that don't need to do it. (Possibly the upstream is running unnecessary code to check for the existence if a cookie; and if not, then create a new unique cookie for future comparison.) You're effectively doing this now, with the nginx config; but this nginx method doesn't (trivially) allow you to get a Set-Cookie to the browser (and avoid caching) for the few pages which might actually need it. > Thanks for the tips. You're welcome. Good that you found an answer, and thanks for sharing it. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Aug 16 14:29:56 2015 From: nginx-forum at nginx.us (lbc) Date: Sun, 16 Aug 2015 10:29:56 -0400 Subject: Upgrading plain HTTP to HTTPS using STARTTLS Message-ID: Hello, I consider switching from Apache to Nginx (or using it as a reverse proxy to the Apache), but need to upgrade plain HTTP connections to HTTPS using the scheme defined in RFC 2817. Reason for this is a client software running on WiFi Captive Portals, which inserts an "Upgrade: TLS/1.x" request together with custom headers just in front of the encrypted request from a guest's browser to our login server. In order for this scheme to work, the connection used for this kind of "ID request" to determine the hotspot in use and the remaining communication must not change over the upgrade, therefore redirection to the standard HTTPS port of the login server will not work. So, I wonder how I can configure Nginx to get the same effect of Apache's "SSLEngine: optional" setting? I did read the docs about the "starttls" setting in Nginx, but couldn't find an example on how exactly to use this in a server block to achieve an upgrade to TLS. Is it possible at all to configure Nginx this way? And if so, can I forward custom headers such as "X-HotspotID" if Nginx would be used as a proxy? Thanx in advance! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261044,261044#msg-261044 From nginx-forum at nginx.us Mon Aug 17 04:47:12 2015 From: nginx-forum at nginx.us (Eberx) Date: Mon, 17 Aug 2015 00:47:12 -0400 Subject: How nginx proxy works Message-ID: <80c8fd1ccd868c81993456bae826969a.NginxMailingListEnglish@forum.nginx.org> Hello, I would like to know how nginx proxy_pass works with upstream. I have 2 backend servers running. 1 nginx proxy running in front of them. My question is if one user connects to nginx proxy then nginx forward it to first backend ? or it forward it to both backend ? 
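Regarding the proxy question just above: with proxy_pass pointing at an upstream group, every client request is forwarded to exactly one backend, chosen by the configured balancing method (round-robin by default); a request is never sent to both backends. A minimal sketch with placeholder addresses:

upstream backend {
    # no balancing method given, so round-robin is used
    server 192.0.2.11:8080;
    server 192.0.2.12:8080;
}

server {
    listen 80;
    location / {
        # each request goes to one server from the group, alternating between them
        proxy_pass http://backend;
    }
}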
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261046,261046#msg-261046 From anoopalias01 at gmail.com Mon Aug 17 05:12:12 2015 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 17 Aug 2015 10:42:12 +0530 Subject: How nginx proxy works In-Reply-To: <80c8fd1ccd868c81993456bae826969a.NginxMailingListEnglish@forum.nginx.org> References: <80c8fd1ccd868c81993456bae826969a.NginxMailingListEnglish@forum.nginx.org> Message-ID: I think what you are looking for is http://nginx.org/en/docs/http/load_balancing.html . -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 17 05:22:34 2015 From: nginx-forum at nginx.us (Eberx) Date: Mon, 17 Aug 2015 01:22:34 -0400 Subject: How nginx proxy works In-Reply-To: References: Message-ID: Hello, Thank you for reply. I'm using round robin load balancing on nginx. I can see the how many connection is established on backend server. It shows for example backend server 1 = 100 connection backend server 2 = 99 connection Does it mean unique 199 connection ? or unique ~100 connection ? Is only ip_hash method for unique connection ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261046,261048#msg-261048 From nginx-forum at nginx.us Mon Aug 17 09:30:45 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Mon, 17 Aug 2015 05:30:45 -0400 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <8e46e6d992af55aee1c41fbef6a814cd.NginxMailingListEnglish@forum.nginx.org> References: <20150802230625.GG23844@daoine.org> <8e46e6d992af55aee1c41fbef6a814cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <09cebae08c2d382ef914c285e824ee20.NginxMailingListEnglish@forum.nginx.org> Hi Francis, There were some HW upgrades because of which I was halted to make anymore trials. Today, I am revoked the access to continue my investigation and with your recent suggestion/tip...I am able to access the static content. Cheers and very kind for your quick support. Best regards, smsmaddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259786,261051#msg-261051 From mdounin at mdounin.ru Mon Aug 17 11:01:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Aug 2015 14:01:06 +0300 Subject: Upgrading plain HTTP to HTTPS using STARTTLS In-Reply-To: References: Message-ID: <20150817110106.GV37350@mdounin.ru> Hello! On Sun, Aug 16, 2015 at 10:29:56AM -0400, lbc wrote: > Hello, > > I consider switching from Apache to Nginx (or using it as a reverse proxy to > the Apache), but need to upgrade plain HTTP connections to HTTPS using the > scheme defined in RFC 2817. Reason for this is a client software running on > WiFi Captive Portals, which inserts an "Upgrade: TLS/1.x" request together > with custom headers just in front of the encrypted request from a guest's > browser to our login server. In order for this scheme to work, the > connection used for this kind of "ID request" to determine the hotspot in > use and the remaining communication must not change over the upgrade, > therefore redirection to the standard HTTPS port of the login server will > not work. > > So, I wonder how I can configure Nginx to get the same effect of Apache's > "SSLEngine: optional" setting? I did read the docs about the "starttls" > setting in Nginx, but couldn't find an example on how exactly to use this in > a server block to achieve an upgrade to TLS. The "starttls" directive is only available in mail proxy module, not for http. 
There is no support for RFC 2817 in nginx, as it's not something used by known browsers. Connections with Upgrade requests can be proxied to other servers though, so you can use nginx as a reverse proxy for such connections. Such approach is mostly used to proxy WebSocket connections, see http://nginx.org/en/docs/http/websocket.html for configuration details. > Is it possible at all to configure Nginx this way? And if so, can I forward > custom headers such as "X-HotspotID" if Nginx would be used as a proxy? You can add arbitrary headers to requests nginx forwards to upstream servers, see http://nginx.org/r/proxy_set_header. You can also add response headers, see http://nginx.org/r/add_header. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Aug 17 11:31:53 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Aug 2015 14:31:53 +0300 Subject: header handling In-Reply-To: References: Message-ID: <20150817113153.GX37350@mdounin.ru> Hello! On Sat, Aug 15, 2015 at 12:15:47AM -0700, Frank Liu wrote: > I made the below patch and can now use $upstream_http_x_header for > logformat to capture the header X.header in the access log. Does anybody > see any issues with the patch? > > --- src/http/ngx_http_variables.c.orig 2015-08-15 02:19:31.635328112 +0000 > > +++ src/http/ngx_http_variables.c 2015-08-15 02:19:42.051541422 +0000 > > @@ -897,6 +897,8 @@ > > > > } else if (ch == '-') { > > ch = '_'; > > + } else if (ch == '.') { > > + ch = '_'; > > } Such approach will likely result in security problems, as "X.header" and "X-header" would be indistinguishable from nginx point of view. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Aug 17 15:18:35 2015 From: nginx-forum at nginx.us (lbc) Date: Mon, 17 Aug 2015 11:18:35 -0400 Subject: Upgrading plain HTTP to HTTPS using STARTTLS In-Reply-To: <20150817110106.GV37350@mdounin.ru> References: <20150817110106.GV37350@mdounin.ru> Message-ID: Dear Maxim, thank you very much for the speedy answer! Glad to here that the websocket approach could help. Will try this, since Nginx just rocks. :-) Have a nice day and best wishes, Stefan (lbc) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261044,261062#msg-261062 From nginx-forum at nginx.us Mon Aug 17 21:39:57 2015 From: nginx-forum at nginx.us (Anton Ivanov) Date: Mon, 17 Aug 2015 17:39:57 -0400 Subject: $upstream_status may show 502 even if upstream replied with 503 Message-ID: My nginx.conf: http { upstream test { server 127.0.0.1:8081 max_fails=0; server 127.0.0.1:8082 max_fails=0; } server { listen 8080; location / { proxy_pass http://test; } proxy_next_upstream http_503; proxy_intercept_errors on; } log_format main '$upstream_status'; ... } If my upstream servers are on and always reply with 503 I expect to find in log 503, 503 Instead I find 502, 503 Is it ok that first upstream status is 502 instead of 503? Can I somehow configure nginx to show 503? Because in case of more complicated proxy_next_upstream error http_503 it is more useful to see fair upstream status to distinguish between error and http_503. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261074,261074#msg-261074 From gfrankliu at gmail.com Mon Aug 17 23:39:01 2015 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 17 Aug 2015 16:39:01 -0700 Subject: header handling In-Reply-To: <20150817113153.GX37350@mdounin.ru> References: <20150817113153.GX37350@mdounin.ru> Message-ID: Hi Maxim, Thanks for you comment! Do you have any other approaches/suggestions? 
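A rough sketch of the Upgrade-passing proxy setup described in the STARTTLS thread above, following the linked WebSocket documentation; the upstream name "portal_backend" is a placeholder, not something from the original post:

location / {
    proxy_pass http://portal_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;     # pass the client's Upgrade header through
    proxy_set_header Connection "upgrade";
}

Custom request headers such as X-HotspotID are passed to the upstream unchanged by default; proxy_set_header is only needed to add or override them.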
I use nginx as a proxy/load-balancer. The request will be processed by the upstream java servers. I assume my change won't actually modify the actual header, so upstream will still get the original header and can distinguish . and - ? Regards, Frank On Mon, Aug 17, 2015 at 4:31 AM, Maxim Dounin wrote: > Hello! > > On Sat, Aug 15, 2015 at 12:15:47AM -0700, Frank Liu wrote: > > > I made the below patch and can now use $upstream_http_x_header for > > logformat to capture the header X.header in the access log. Does anybody > > see any issues with the patch? > > > > --- src/http/ngx_http_variables.c.orig 2015-08-15 02:19:31.635328112 > +0000 > > > > +++ src/http/ngx_http_variables.c 2015-08-15 02:19:42.051541422 +0000 > > > > @@ -897,6 +897,8 @@ > > > > > > > > } else if (ch == '-') { > > > > ch = '_'; > > > > + } else if (ch == '.') { > > > > + ch = '_'; > > > > } > > Such approach will likely result in security problems, as > "X.header" and "X-header" would be indistinguishable from nginx > point of view. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 18 11:17:41 2015 From: nginx-forum at nginx.us (adrian.biris) Date: Tue, 18 Aug 2015 07:17:41 -0400 Subject: nginx 1.8.0 http_proxy cache issues Message-ID: Hi, I'm using nginx 1.8.0 on Ubuntu 14.04 LTS. # nginx -V nginx version: nginx/1.8.0 built with OpenSSL 1.0.1f 6 Jan 2014 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -mtune=corei7-avx -march=corei7' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -mtune=corei7-avx -march=corei7' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-file-aio --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_perl_module --with-http_secure_link_module --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/headers-more-nginx-module --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/nginx-auth-pam --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/nginx-cache-purge --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/nginx-development-kit --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/nginx-echo --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/ngx-fancyindex --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/nginx-lua --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/nginx-upstream-fair --add-module=/opt/rebuildnginx/nginx-1.8.0/debian/modules/set-misc-nginx-module I have noticed that when I restart nginx (using restart or just simple stop and then start) the cache is not invalidated as it used to be. 
After each restart I get HIT for all pages that were valid in cache before the restart. Did anyone else noticed this behavior ? If yes, how did you fix it? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261077,261077#msg-261077 From mdounin at mdounin.ru Tue Aug 18 12:12:59 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Aug 2015 15:12:59 +0300 Subject: nginx 1.8.0 http_proxy cache issues In-Reply-To: References: Message-ID: <20150818121259.GH37350@mdounin.ru> Hello! On Tue, Aug 18, 2015 at 07:17:41AM -0400, adrian.biris wrote: [...] > I have noticed that when I restart nginx (using restart or just simple stop > and then start) the cache is not invalidated as it used to be. > After each restart I get HIT for all pages that were valid in cache before > the restart. > Did anyone else noticed this behavior ? If yes, how did you fix it? Caches are not expected to be invalidated on restart (and they never used to be). If you want nginx to avoid using previously cached responses after a restart, you can remove the cache directory. -- Maxim Dounin http://nginx.org/ From mfioretti at nexaima.net Tue Aug 18 13:23:01 2015 From: mfioretti at nexaima.net (M. Fioretti) Date: Tue, 18 Aug 2015 15:23:01 +0200 Subject: nginx makes mysqld die all the time Message-ID: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> Greetings, I just migrated to nginx + php-fpm from apache a few websites, on a centos 6.6 virtual server. The sites are up but... now mysqld (MariaDB, actually) dies every 10/20 **minutes** with status: mysqld dead but subsys locked or mysqld dead but pid file exists for reasons not really relevant here I cannot post nginx conf right away. I **will** do that in a few hours, when I'm back at my desk. Since the crashes are so frequent, however, any help to save time is very welcome. Even if it's just request of other specific info, besides the nginx conf files. TIA, Marco From lists-nginx at swsystem.co.uk Tue Aug 18 12:36:18 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Tue, 18 Aug 2015 13:36:18 +0100 Subject: nginx makes mysqld die all the time In-Reply-To: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> References: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> Message-ID: Hi, When I migrated from apache+mod_php to nginx+php-fpm I found I had a few websites using persistent mysql connections which never closed. I had to disable this in the php.ini so all the sites fell back to using non-persistent connections. I don't know if this will help as it was mysql not maria or other. I imagine there'll probably be something logged somewhere which needs a bit of time to find. On 18/08/2015 14:23, M. Fioretti wrote: > Greetings, > > I just migrated to nginx + php-fpm from apache a few websites, on a > centos 6.6 virtual server. The sites are up but... now mysqld > (MariaDB, actually) dies every 10/20 **minutes** with status: > > mysqld dead but subsys locked > > or > > mysqld dead but pid file exists > > for reasons not really relevant here I cannot post nginx conf > right away. I **will** do that in a few hours, when I'm back > at my desk. Since the crashes are so frequent, however, any > help to save time is very welcome. Even if it's just request > of other specific info, besides the nginx conf files. 
> > TIA, > Marco > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Aug 18 15:40:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Aug 2015 18:40:44 +0300 Subject: nginx-1.9.4 Message-ID: <20150818154044.GL37350@mdounin.ru> Changes with nginx 1.9.4 18 Aug 2015 *) Change: the "proxy_downstream_buffer" and "proxy_upstream_buffer" directives of the stream module are replaced with the "proxy_buffer_size" directive. *) Feature: the "tcp_nodelay" directive in the stream module. *) Feature: multiple "sub_filter" directives can be used simultaneously. *) Feature: variables support in the search string of the "sub_filter" directive. *) Workaround: configuration testing might fail under Linux OpenVZ. Thanks to Gena Makhomed. *) Bugfix: old worker processes might hog CPU after reconfiguration with a large number of worker_connections. *) Bugfix: a segmentation fault might occur in a worker process if the "try_files" and "alias" directives were used inside a location given by a regular expression; the bug had appeared in 1.7.1. *) Bugfix: the "try_files" directive inside a nested location given by a regular expression worked incorrectly if the "alias" directive was used in the outer location. *) Bugfix: in hash table initialization error handling. *) Bugfix: nginx could not be built with Visual Studio 2015. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Aug 18 16:16:17 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 18 Aug 2015 12:16:17 -0400 Subject: nginx-1.9.4 In-Reply-To: <20150818154044.GL37350@mdounin.ru> References: <20150818154044.GL37350@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.4 for Windows http://goo.gl/YvvfXR (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Aug 18, 2015 at 11:40 AM, Maxim Dounin wrote: > Changes with nginx 1.9.4 18 Aug > 2015 > > *) Change: the "proxy_downstream_buffer" and "proxy_upstream_buffer" > directives of the stream module are replaced with the > "proxy_buffer_size" directive. > > *) Feature: the "tcp_nodelay" directive in the stream module. > > *) Feature: multiple "sub_filter" directives can be used > simultaneously. > > *) Feature: variables support in the search string of the "sub_filter" > directive. > > *) Workaround: configuration testing might fail under Linux OpenVZ. > Thanks to Gena Makhomed. > > *) Bugfix: old worker processes might hog CPU after reconfiguration > with > a large number of worker_connections. > > *) Bugfix: a segmentation fault might occur in a worker process if the > "try_files" and "alias" directives were used inside a location given > by a regular expression; the bug had appeared in 1.7.1. > > *) Bugfix: the "try_files" directive inside a nested location given by > a > regular expression worked incorrectly if the "alias" directive was > used in the outer location. > > *) Bugfix: in hash table initialization error handling. 
> > *) Bugfix: nginx could not be built with Visual Studio 2015. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 18 17:12:54 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Tue, 18 Aug 2015 13:12:54 -0400 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <09cebae08c2d382ef914c285e824ee20.NginxMailingListEnglish@forum.nginx.org> References: <20150802230625.GG23844@daoine.org> <8e46e6d992af55aee1c41fbef6a814cd.NginxMailingListEnglish@forum.nginx.org> <09cebae08c2d382ef914c285e824ee20.NginxMailingListEnglish@forum.nginx.org> Message-ID: <35f6627d9afb836f47fa556f3fdd522e.NginxMailingListEnglish@forum.nginx.org> Hi Francis, One more observation pls. This WORKS in reading static content from remote server location ^~/wkspace/ { proxy_pass http://citwkspace; } This DOESN'T WORK? in reading static content from remote server location ^~/wkspace/agentLogin/ { proxy_pass http://citwkspace; } Best regards, Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259786,261105#msg-261105 From sarah at nginx.com Wed Aug 19 03:45:54 2015 From: sarah at nginx.com (Sarah Novotny) Date: Tue, 18 Aug 2015 20:45:54 -0700 Subject: nginx.conf schedule posted for 2015, September 22-24 @ Fort Mason, San Francisco Message-ID: Hello all! The NGINX user conference, nginx.conf 2015, is coming soon: September 22-24 @ Fort Mason, San Francisco. Register now to take advantage of our early pricing. ? http://bit.ly/1NE1qHD nginx.conf 2015 includes technical talks, case studies, and hands-on training. We?ll also be hosting social time where you can connect with core NGINX developers and community members who are building the future of the modern web. In addition, the NGINX booth will be staffed by our engineering team throughout the event to answer any questions you might have. A few of the notable speakers and talks to expect are: ? Igor Sysoev, CTO and creator of NGINX ? Christopher Brown, Founding Engineer of Amazon EC2 and former CTO at Opscode (now Chef) -- Why Containers Are Not (Yet?) Enough ? Dragos Dascalita Haut, Solutions Architect at Adobe -- Scaling Microservices with NGINX, Docker, and Mesos ? Sean Cribbs, Senior Principal Engineer at Comcast Cable -- Rolling Your Own API Management Platform with NGINX and Lua ? Kelsey Hightower, Product Manager, Developer, and Chief Advocate at CoreOS -- Using Kubernetes with NGINX Plus to Package and Distribute Modern Web Applications at Scale You can see the full list of speakers and topics here: http://bit.ly/1NE1qHD We?d like to extend a discount to each of you as community members. Please use and share this discount code to get 25% off conference tickets: NG15ORG See you there! Sarah -- Sarah Novotny Developer Advocacy, Nginx Inc. Discover best practices for building & delivering apps at scale. From mfioretti at nexaima.net Wed Aug 19 06:02:48 2015 From: mfioretti at nexaima.net (M. Fioretti) Date: Wed, 19 Aug 2015 08:02:48 +0200 Subject: nginx makes mysqld die all the time In-Reply-To: References: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> Message-ID: On 2015-08-18 14:36, Steve Wilson wrote: > Hi, > > When I migrated from apache+mod_php to nginx+php-fpm I found I had a > few websites using persistent mysql connections which never closed. Steve, thanks for this tip. 
This surely was part of the problem, but not all of it. Sure enough, when I first noticed this problem, I also found in dmesg messages like this: Out of memory: kill process 31066 (mysqld) score 30155 or a child Killed process 31066 (mysqld) yesterday, as soon as I was able to ssh again, I turned mysql.allow_persistent = Off in php.ini (it was On) and restarted everything. Page load time decreased noticeably AND there where no more mysql crashes for the rest of the day. This morning, however, I found mysql died again with the same symptom (dead but subsystem locked) and a DIFFERENT message in dmesg, that I had never seen before: Out of memory: kill process 13812 (php-fpm) score 18223 or a child Killed process 13812 (php-fpm) the nginx and php-fpm configuration files are pasted below (I have several virtual hosts all configured that way for wordpress, plus one drupal and one semantic scuttle site, if it matters). What next? Any help is welcome! Marco [root at fima ~]# more /etc/nginx/nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; server_tokens off; access_log /var/log/nginx/access.log combined buffer=32k; log_format '$remote_addr - $remote_user [$time_local] $status ' '"$request" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; # Load config files from the /etc/nginx/conf.d directory # The default server is in conf.d/default.conf include /etc/nginx/conf.d/*.conf; } and this is configuration for one of the wordpress sites, I only changed the domain name. The configuration is due to the fact that, for several reasons out of my control, I **must** run two fully independent wordpress installations, but "nested" into each other, that is: myblog.example.com/ (english blog, by wordpress installed in $documentroot/myblog) myblog.example.com/it (italian version, by separate wordpress installed in $documentroot/myblog_it) the above worked fine with apache. Can the equivalent config for nginx be related to the problem I'm seeing? If yes, how, and how to fix it? And while we are at this: advice on anything else I could optimize is also very welcome of course, even if not related to the main problem. 
[root at fima ~]# more /etc/nginx/conf.d/stop.conf server { listen 80; server_name myblog.example.com; root /var/www/html/wordpress/; include /etc/nginx/default.d/*.conf; # configuration for the italian version, installed # in root/myblog_it, but having as url example.com/stop/it location ^~ /it/ { rewrite ^/it/(.+) /myblog_it/$1 ; index /myblog_it/index.php; } location /myblog_it/ { try_files $uri $uri/ /myblog_it/index.php?args; index index.php; location ~ \.php$ { fastcgi_pass unix:/tmp/phpfpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } ################################################################## # main blog location ^~ / { rewrite ^/(.+) /myblog/$1 ; index /myblog/index.php; } location /myblog/ { try_files /$uri /$uri/ /myblog/index.php?args; index index.php; } location ~ \.php$ { fastcgi_pass unix:/tmp/phpfpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name; include fastcgi_params; } } php-fpm configuration: [root ~]# grep -v '^;' /etc/php-fpm.conf | uniq include=/etc/php-fpm.d/*.conf [global] pid = /var/run/php-fpm/php-fpm.pid error_log = /var/log/php-fpm/error.log daemonize = no emergency_restart_threshold = 10 emergency_restart_interval = 1m process_control_timeout = 10s AND ALSO: [root ~]# grep -v '^;' /etc/php-fpm.d/www.conf | uniq [www] listen.allowed_clients = 127.0.0.1 listen = /tmp/phpfpm.sock listen.owner = nginx listen.group = nginx user = nginx group = nginx pm = dynamic pm.max_children = 50 pm.start_servers = 5 pm.min_spare_servers = 5 pm.max_spare_servers = 35 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on php_value[session.save_handler] = files php_value[session.save_path] = /var/lib/php/session From oscaretu at gmail.com Wed Aug 19 07:01:13 2015 From: oscaretu at gmail.com (oscaretu .) Date: Wed, 19 Aug 2015 09:01:13 +0200 Subject: nginx makes mysqld die all the time In-Reply-To: References: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> Message-ID: Hello. Perhaphs this can help you about the out of memory: OOM Killer: https://www.google.com/search?client=ubuntu&channel=fs&q=OOM+Kiler&ie=utf-8&oe=utf-8 Kind regards, Oscar On Wed, Aug 19, 2015 at 8:02 AM, M. Fioretti wrote: > On 2015-08-18 14:36, Steve Wilson wrote: > >> Hi, >> >> When I migrated from apache+mod_php to nginx+php-fpm I found I had a >> few websites using persistent mysql connections which never closed. >> > > Steve, thanks for this tip. This surely was part of the problem, but > not all of it. > > Sure enough, when I first noticed this problem, I also found in dmesg > messages like this: > > Out of memory: kill process 31066 (mysqld) score 30155 or a child > Killed process 31066 (mysqld) > > yesterday, as soon as I was able to ssh again, I turned > > mysql.allow_persistent = Off in php.ini (it was On) > > and restarted everything. Page load time decreased noticeably AND there > where no more mysql crashes for the rest of the day. 
> [... full nginx, wordpress site and php-fpm configuration quoted verbatim from the previous message snipped ...] -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfioretti at nexaima.net Wed Aug 19 09:39:09 2015 From: mfioretti at nexaima.net (M. Fioretti) Date: Wed, 19 Aug 2015 11:39:09 +0200 Subject: nginx makes mysqld die all the time In-Reply-To: References: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> Message-ID: <43cb1d9fc3bbac4942fbcbc70fa2ea4b@nexaima.net> On 2015-08-19 09:01, oscaretu . wrote: > Hello. > > Perhaphs this can help you about the out of memory: OOM Killer: > > https://www.google.com/search?client=ubuntu&channel=fs&q=OOM+Kiler&ie=utf-8&oe=utf-8 > [6] > Oscar, and list, I just looked at the several /var/log/messages files. The last one is from August 16th. It contains in equal parts lines like: kernel: php-fpm invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0... and "kernel: mysqld invoked oom-killer: ..." etc etc. Not really useful. I am (re)reading this right now: http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html but so far it doesn't seem to be really helpful either. I had already figured out that, for some reason, mysqld and/or php-fpm were consuming much more memory than would be reasonable.
The question is **what**, in their configuration or in nginx's, can make that happen? As it turned out yesterday, a good part of the problem was, indeed, the persistent mysqld connections in php.ini. Turning them off made the situation much better, but did not fix it. Today, the question is what else, exactly, to look for in which logs, and above all if something in the nginx/php-fpm configuration I posted in my previous message is the trigger of this behaviour and must be changed/optimized. Ideas, anybody? Thanks, Marco From lists-nginx at swsystem.co.uk Wed Aug 19 08:57:55 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Wed, 19 Aug 2015 09:57:55 +0100 Subject: nginx makes mysqld die all the time In-Reply-To: References: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> Message-ID: <12d82966824a3d846cb98e86eb09cfef@swsystem.co.uk> It looks like your machine is running out of memory. Again, this is something I think I've dealt with in php-fpm by configuring it to recycle the child processes so they don't start consuming too much memory. Here's my fpm pool config file: [www] user = www-data group = www-data listen = /var/run/php5-fpm.sock listen.owner = nginx listen.group = nginx listen.mode = 0660 pm = dynamic pm.max_children = 15 pm.start_servers = 3 pm.min_spare_servers = 2 pm.max_spare_servers = 3 pm.process_idle_timeout = 10s; pm.max_requests = 10 Do not take this config as-is. I've a group of nginx+php-fpm servers running for wordpress and drupal (2 each) but your activity may be considerably higher than what I've got. The key parts here are the "pm." options so you'll probably want to investigate each setting and tune to your requirements. Steve. On 19/08/2015 07:02, M. Fioretti wrote: > On 2015-08-18 14:36, Steve Wilson wrote: >> Hi, >> >> When I migrated from apache+mod_php to nginx+php-fpm I found I had a >> few websites using persistent mysql connections which never closed. > > Steve, thanks for this tip. This surely was part of the problem, but > not all of it. > > Sure enough, when I first noticed this problem, I also found in dmesg > messages like this: > > Out of memory: kill process 31066 (mysqld) score 30155 or a child > Killed process 31066 (mysqld) > > yesterday, as soon as I was able to ssh again, I turned > > mysql.allow_persistent = Off in php.ini (it was On) > > and restarted everything. Page load time decreased noticeably AND there > where no more mysql crashes for the rest of the day. > This morning, however, I found mysql died again with the same symptom > (dead but subsystem locked) and a DIFFERENT message in dmesg, that I > had never seen before: > > Out of memory: kill process 13812 (php-fpm) score 18223 or a child > Killed process 13812 (php-fpm) > > the nginx and php-fpm configuration files are pasted below (I have > several virtual > hosts all configured that way for wordpress, plus one drupal and one > semantic scuttle > site, if it matters). What next? Any help is welcome!
> [... full nginx, wordpress site and php-fpm configuration quoted verbatim from the earlier message snipped ...]
From kenz.aureus at gmail.com Wed Aug 19 09:57:34 2015 From: kenz.aureus at gmail.com (Chino Aureus) Date: Wed, 19 Aug 2015 17:57:34 +0800 Subject: Setup for consuming external web APIs from internal app Message-ID: Hi NGinx users, Not sure if this is off topic. Need help on what would be the recommended setup/architecture for consuming external web apis (e.g. twitter, Facebook APIs) securely from an application deployed in an internal network. And how NGINX can augment in this use case. This is what I have in mind but I'm not sure if this is a supported setup of nginx. InternalAPP --> (DMZ) NGINX -> external web api. Appreciate any advice :) Regards, Chino -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Aug 19 13:15:07 2015 From: nginx-forum at nginx.us (vindicator) Date: Wed, 19 Aug 2015 09:15:07 -0400 Subject: "Dereferencing Pointer To Incomplete Type" on ARM Message-ID: <1ad10aff27cc64a6522a691c2a6f5adb.NginxMailingListEnglish@forum.nginx.org> I swear I had built the hg default version on my AMD64 just fine, but now I'm doing it on my ARM device running Ubuntu 15.04 with kernel 4.2. I used the default system openssl, but also tried with the master git version, all with the same result. Attempts with: 1) ./auto/configure --with-http_ssl_module --with-ipv6 2) ./auto/configure --with-http_ssl_module --with-ipv6 --with-cc-opt="-I /usr/local/ssl/include -I /usr/local/include" --with-ld-opt="-L /usr/local/ssl/lib -L /usr/local/lib" 3) ./auto/configure --with-ipv6 --with-http_ssl_module --with-openssl=/src/openssl/ and of course "make". ***** src/event/ngx_event_openssl.c: In function 'ngx_ssl_handshake': src/event/ngx_event_openssl.c:1164:31: error: dereferencing pointer to incomplete type if (c->ssl->connection->s3) { ^ src/event/ngx_event_openssl.c:1165:31: error: dereferencing pointer to incomplete type c->ssl->connection->s3->flags |= SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS; ^ ***** "./auto/configure --with-ipv6" built with no problems. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261124,261124#msg-261124 From nginx-forum at nginx.us Wed Aug 19 16:00:21 2015 From: nginx-forum at nginx.us (biazus) Date: Wed, 19 Aug 2015 12:00:21 -0400 Subject: Accept Header / cache versioning issue Message-ID: <1476d32ee3cf4c0933bc4dbd7076799e.NginxMailingListEnglish@forum.nginx.org> Hi Guys, I noticed that Nginx 1.8.X is taking the client header "Accept" into account when versioning cache objects, for instance: This request generates one object in cache: curl -sv -o /dev/null 'http://www.foo.bar/image.png' -H 'Accept: */*' And this generates another one: curl -sv -o /dev/null 'http://www.foo.bar/image.png' -H 'Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2' This one, another one, and so on: curl -sv -o /dev/null 'http://www.foo.bar/image.png' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' Considering this behaviour: it will generate many versions of the same object in cache, using much more disk resources (space, IO). It will reduce the cache hit ratio and, as a consequence, will reduce performance. Then, I have a question: Is there any way to normalize the Accept header, or even ignore it ?
Thanks in advance, Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261128,261128#msg-261128 From francis at daoine.org Wed Aug 19 17:33:17 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 19 Aug 2015 18:33:17 +0100 Subject: Static Content on Different Server Isn't Loaded?? In-Reply-To: <35f6627d9afb836f47fa556f3fdd522e.NginxMailingListEnglish@forum.nginx.org> References: <20150802230625.GG23844@daoine.org> <8e46e6d992af55aee1c41fbef6a814cd.NginxMailingListEnglish@forum.nginx.org> <09cebae08c2d382ef914c285e824ee20.NginxMailingListEnglish@forum.nginx.org> <35f6627d9afb836f47fa556f3fdd522e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150819173317.GC23844@daoine.org> On Tue, Aug 18, 2015 at 01:12:54PM -0400, smsmaddy1981 wrote: Hi there, > This WORKS in reading static content from remote server > location ^~/wkspace/ { > proxy_pass http://citwkspace; > } For the request that works: what does your nginx log say that the request to nginx was? what does your citwkspace web server log say that the request to it was? what file on the citwkspace web server was successfully returned? > This DOESN'T WORK? in reading static content from remote server > location ^~/wkspace/agentLogin/ { > proxy_pass http://citwkspace; > } For the request that does not work: what does your nginx log say that the request to nginx was? what does your citwkspace web server log say that the request to it was? what file on the citwkspace web server did you want to have returned? what was the response instead? f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Aug 19 17:35:37 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 19 Aug 2015 18:35:37 +0100 Subject: Accept Header / cache versioning issue In-Reply-To: <1476d32ee3cf4c0933bc4dbd7076799e.NginxMailingListEnglish@forum.nginx.org> References: <1476d32ee3cf4c0933bc4dbd7076799e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150819173537.GD23844@daoine.org> On Wed, Aug 19, 2015 at 12:00:21PM -0400, biazus wrote: Hi there, > I noticed that Nginx 1.8.X is taking into account the Client Header "Accept" > for versioning the cache objects, for instance: I don't see it in my brief testing. What nginx config do you use that shows this behaviour? > Then, I have a question: > > Is there any way to normalize the Accept header, or even ignore it ? Remove "$http_accept" from your "proxy_cache_key", if it is there? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Aug 19 17:42:11 2015 From: nginx-forum at nginx.us (biazus) Date: Wed, 19 Aug 2015 13:42:11 -0400 Subject: Accept Header / cache versioning issue In-Reply-To: <20150819173537.GD23844@daoine.org> References: <20150819173537.GD23844@daoine.org> Message-ID: <54f1fc9961fc9422f8c8e76b79784d39.NginxMailingListEnglish@forum.nginx.org> I got it! The "issue" was the origin server sending "Vary: Accept" header. In order to avoid this behaviour, simply set "proxy_ignore_headers Vary;" Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261128,261131#msg-261131 From semenukha at gmail.com Wed Aug 19 17:48:26 2015 From: semenukha at gmail.com (Styopa Semenukha) Date: Wed, 19 Aug 2015 13:48:26 -0400 Subject: Setup for consuming external web APIs from internal app In-Reply-To: References: Message-ID: <1759470.WyCbmol42T@tornado> Hi Chino, Could you please explain your architecture in more detail? So far it looks like you're protecting the Internet from your application, and not vice versa. 
AFAIK, most API don't do callbacks, so you don't need to expose your app to the outside callers. Simply permit outgoing connections in your firewall. On Wednesday, August 19, 2015 05:57:34 PM Chino Aureus wrote: > Hi NGinx users, > > Not sure if this is off topic. > > Need help on what would be the recommended setup/architecture for consuming > external web apis (e.g. twitter, Facebook APIs) securely from an > application deployed in an internal network. And how NGINX can augment in > this use case. > > This is what I have in mind but I'm not sure if this is a supported setup > of nginx. > > InternalAPP --> (DMZ) NGINX -> external web api. > > Appreciate any advise :) > > Regards, > Chino -- Best regards, Styopa Semenukha. From frederik.nosi at postecom.it Wed Aug 19 18:13:33 2015 From: frederik.nosi at postecom.it (Frederik Nosi) Date: Wed, 19 Aug 2015 20:13:33 +0200 Subject: Setup for consuming external web APIs from internal app In-Reply-To: References: Message-ID: <55D4C74D.2050206@postecom.it> Hi, On 08/19/2015 11:57 AM, Chino Aureus wrote: > > Hi NGinx users, > > Not sure if this is off topic. > > Need help on what would be the recommended setup/architecture for > consuming external web apis (e.g. twitter, Facebook APIs) securely > from an application deployed in an internal network. And how NGINX > can augment in this use case. > > This is what I have in mind but I'm not sure if this is a supported > setup of nginx. > > InternalAPP --> (DMZ) NGINX -> external web api. > > Appreciate any advise :) If you're trying to limit your application calls to outside, ex. let's say your app can connect only to: api.google.com or api.facebook.com this seems more a job for a forwar proxy, typically squid. > > Regards, > Chino Frederik -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Aug 19 20:59:06 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 19 Aug 2015 22:59:06 +0200 Subject: Accept Header / cache versioning issue In-Reply-To: <54f1fc9961fc9422f8c8e76b79784d39.NginxMailingListEnglish@forum.nginx.org> References: <20150819173537.GD23844@daoine.org> <54f1fc9961fc9422f8c8e76b79784d39.NginxMailingListEnglish@forum.nginx.org> Message-ID: ... or remove the 'Vary: Accept' header from the origin response? ?It seems this header is wreaking havoc and is useless since it won't be cached... why keeping it?? --- *B. R.* On Wed, Aug 19, 2015 at 7:42 PM, biazus wrote: > I got it! > > The "issue" was the origin server sending "Vary: Accept" header. In order > to > avoid this behaviour, simply set "proxy_ignore_headers Vary;" > > Thanks! > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261128,261131#msg-261131 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Aug 20 09:45:18 2015 From: nginx-forum at nginx.us (vindicator) Date: Thu, 20 Aug 2015 05:45:18 -0400 Subject: "Dereferencing Pointer To Incomplete Type" on ARM In-Reply-To: <1ad10aff27cc64a6522a691c2a6f5adb.NginxMailingListEnglish@forum.nginx.org> References: <1ad10aff27cc64a6522a691c2a6f5adb.NginxMailingListEnglish@forum.nginx.org> Message-ID: Looks like it isn't just nginx (or even more likely IS an openssl issue) because I just encountered it again with pjproject: ***** ../src/pj/ssl_sock_ossl.c:1001:5: warning: implicit declaration of function ?M_ASN1_STRING_length? [-Wimplicit-function-declaration] len = M_ASN1_STRING_length(X509_get_serialNumber(x)); ^ ../src/pj/ssl_sock_ossl.c: In function ?pj_ssl_sock_get_info?: ../src/pj/ssl_sock_ossl.c:2285:24: error: dereferencing pointer to incomplete type info->cipher = (cipher->id & 0x00FFFFFF); ^ ***** Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261124,261142#msg-261142 From lists at ruby-forum.com Thu Aug 20 12:43:04 2015 From: lists at ruby-forum.com (Narayan Bista) Date: Thu, 20 Aug 2015 14:43:04 +0200 Subject: Financial Modeling Training Course Message-ID: The aim of this course is to make you understand the financial statement of a company doing an analysis on the same and projecting the future financials of a company.The tutorials have been designed in such a manner that it would help you build a step-by-step advanced financial model? Concept of circularity? Understand the Key Linkages of Income statement, Balance sheet and Cash flow statement. You would get the opportunity to learn Excel proficiency that is required by the analyst and Financial Modeling of different sectors such as the Paint sector, IT sector and Education sector. Financial Modeling is a tool that can be used to forecast a picture of a security or a financial instrument or a company?s future financial performance based on the historical performance of the entity. Financial Modeling includes preparing of detailed company specific models which are then used for the purpose of decision making and performing financial analysis. It is nothing but constructing a financial representation of some, or all, aspects of the firm or given security. OR it is mathematical model of different aspects of financial health of a given company and this model can be made on a simple not book paper or in excel, with later it is easily possible to analyse the impact of different assumptions or change in value of various variables hence gives the more flexibility. For more information related to this Training Course you can visit to our Website:- http://www.educba.com/course/online-financial-modeling-training/ Attachments: http://www.ruby-forum.com/attachment/11039/Graph233.jpg -- Posted via http://www.ruby-forum.com/. From stefan.kutzke at bettermarks.com Thu Aug 20 15:49:39 2015 From: stefan.kutzke at bettermarks.com (Stefan Kutzke) Date: Thu, 20 Aug 2015 15:49:39 +0000 Subject: longer upstream request time after upgrade to Nginx > 1.4 Message-ID: <1440085778.2233.42.camel@pluto.galaxy.local> Hi, first, the setup: I have 2 identical maschines running latest CentOS 6.6 and uWSGI 2.0.10, both machines had nginx-1.4.7-1.el6.ngx.x86_64 installed. I saw almost identical values for $request_time in the Nginx log files of both machines. We are using Nginx to serve some static files and pass requests to an uWSGI application. 
The relevant configuration is as follows: upstream backend { server unix:///tmp/backend_blue.socket max_fails=0; server unix:///tmp/backend_blue.socket max_fails=0; } location / { proxy_next_upstream error timeout http_502; proxy_set_header Host $host; include uwsgi_params; uwsgi_param HTTP_X_REQUEST_ID $uuid; uwsgi_param HTTPS on; uwsgi_buffering off; uwsgi_pass backend; } location /static { alias /home/webadmin/blue/static; index index.php; if ( !-e $request_filename) { rewrite ^(/static.*)/test__(.*)$ $1/$2 redirect; } } After upgrading Nginx to one of the following stable releases I've discovered that the values for $request_time have significantly increased (almost twice as high) but only for upstream requests. There are no differences in $request_time for static requests. Affected packages: nginx-1.6.0-1.el6.ngx.x86_64 nginx-1.6.1-1.el6.ngx.x86_64 nginx-1.6.2-1.el6.ngx.x86_64 nginx-1.6.3-1.el6.ngx.x86_64 nginx-1.8.0-1.el6.ngx.x86_64 No further changes were made, only an update of the nginx RPM. uWSGI itself reports unchanged times to generate responses. Any idea? From nginx-forum at nginx.us Thu Aug 20 18:36:09 2015 From: nginx-forum at nginx.us (biazus) Date: Thu, 20 Aug 2015 14:36:09 -0400 Subject: Cache structs 1.6.x vs 1.8.x Message-ID: <6a28b268c77a1b330109a46cfb2105f5.NginxMailingListEnglish@forum.nginx.org> Hey Guys, We have been testing Nginx 1.8.x, and we realize that the cache structure of Nginx 1.8.x is not compatible with that of Nginx 1.6.x. In other words, by updating from Nginx 1.6.x to 1.8.x, all the cache objects will be revalidated. We are working on a tool to migrate 1.6.x cache objects to 1.8.x in order to avoid any cache strike, but we have been having a hard time doing that. Do you guys know of any tool to do such a thing? Example: cache struct Nginx 1.6: Nginx 1.6 Cache object: ValidSec: 1447542551 Sat Nov 14 23:09:11 2015 LastMod: 1438620329 Mon Aug 3 16:45:29 2015 Date: 1439766551 Sun Aug 16 23:09:11 2015 CRC-32: 4231235715 ValidMSec: 0 HeaderStart: 153 BodyStart: 536 cache struct Nginx 1.8: Nginx 1.8 Cache object: Version: 3 ValidSec: 1447759141 Tue Nov 17 11:19:01 2015 LastMod: 18446744073709551615 Wed Dec 31 23:59:59 1969 Date: 1439983141 Wed Aug 19 11:19:01 2015 CRC-32: 1428710042 ValidMSec: 0 HeaderStart: 269 BodyStart: 571 ETagLen: 0 ETag: VaryLen: 26 Vary: Accept-Encoding,User-Agent Thanks in advance. Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261151,261151#msg-261151 From igal at lucee.org Thu Aug 20 21:46:57 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Thu, 20 Aug 2015 14:46:57 -0700 Subject: preventing requests with unknown host names Message-ID: <55D64AD1.9030403@lucee.org> I want to disable processing of all requests that do not have a valid hostname. I tried to follow the advice on: http://nginx.org/en/docs/http/request_processing.html#how_to_prevent_undefined_server_names so I have (inside the http directive): server { listen 80; server_name ""; return 444; } I also tried server { listen 80; server_name _; return 444; } but I am still able to access the website by its IP address? What am I doing wrong? -- Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jason at lithiumfox.com Thu Aug 20 22:10:16 2015 From: jason at lithiumfox.com (Jason Thomas) Date: Thu, 20 Aug 2015 15:10:16 -0700 Subject: large_client_header_buffers does not work in in server context Message-ID: Hello, large_client_header_buffers does not seem to work in server context, however it works fine in http context. Documentation says it should work for both [1] and looking at the application code it seems like it should too [2]. I am running multiple server definitions and I've tried nginx versions 1.6.2 and 1.8.0. Any help is greatly appreciated! -jason [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers [2] https://github.com/nginx/nginx/blob/branches/stable-1.6/src/http/ngx_http_core_module.c#L262-L267 -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Aug 20 22:16:57 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 20 Aug 2015 23:16:57 +0100 Subject: preventing requests with unknown host names In-Reply-To: <55D64AD1.9030403@lucee.org> References: <55D64AD1.9030403@lucee.org> Message-ID: <20150820221657.GE23844@daoine.org> On Thu, Aug 20, 2015 at 02:46:57PM -0700, Igal @ Lucee.org wrote: > I want to disable processing of all requests that do not have a valid > hostname Check your entire configuration for "listen" directives. http://nginx.org/r/listen There will be zero or more in each server{} block. If there are zero, that is equivalent to "listen 80" (if you run as root). For each "listen" directive with a unique ip:port, add one server{} block which contains "listen ip:port default_server; return 444;" > I'm tried to follow the advice on: > http://nginx.org/en/docs/http/request_processing.html#how_to_prevent_undefined_server_names > > so I have (inside http directive): > > server { > > listen 80; > server_name ""; > return 444; > } If your config only has "listen 80", or no "listen" directives at all, then server { listen 80 default_server; return 444; } should do what you want. > but I am still able to access the website by its IP address? > > what am I doing wrong? Not causing that server to be the default server for the ip:port you are connecting to. f -- Francis Daly francis at daoine.org From igal at lucee.org Thu Aug 20 22:55:51 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Thu, 20 Aug 2015 15:55:51 -0700 Subject: preventing requests with unknown host names In-Reply-To: <20150820221657.GE23844@daoine.org> References: <55D64AD1.9030403@lucee.org> <20150820221657.GE23844@daoine.org> Message-ID: <55D65AF7.20307@lucee.org> Thank you, Francis. > For each "listen" directive with a unique ip:port, add one server{} > block which contains "listen ip:port default_server; return 444;" This seems to do the trick. I expected there to be a way to do all of the IP addresses at once. Thanks for your help! Igal On 8/20/2015 3:16 PM, Francis Daly wrote: > I want to disable processing of all requests that do not have a valid > hostname > Check your entire configuration for "listen" directives. > > http://nginx.org/r/listen > > There will be zero or more in each server{} block. If there are zero, > that is equivalent to "listen 80" (if you run as root). 
> > For each "listen" directive with a unique ip:port, add one server{} > block which contains "listen ip:port default_server; return 444;" > >> I'm tried to follow the advice on: >> http://nginx.org/en/docs/http/request_processing.html#how_to_prevent_undefined_server_names >> >> so I have (inside http directive): >> >> server { >> >> listen 80; >> server_name ""; >> return 444; >> } > If your config only has "listen 80", or no "listen" directives at all, then > > server { > listen 80 default_server; > return 444; > } > > should do what you want. > >> but I am still able to access the website by its IP address? >> >> what am I doing wrong? > Not causing that server to be the default server for the ip:port you > are connecting to. > > f From nginx-forum at nginx.us Fri Aug 21 02:35:51 2015 From: nginx-forum at nginx.us (drhowarddrfine) Date: Thu, 20 Aug 2015 22:35:51 -0400 Subject: Server 1 redirects, Server 2 does not Message-ID: This problem may have been around before I updated to nginx version 1.9.4 but I'm not sure. I have one VPS with two IP addresses. Server 1 redirects port 80 www and non-www to port 443 requests along with www requests on port 443 to the non-www web site. iow, http://(www.)site1.com/ and https://www.site1.com/ properly get 301 redirected to https:site1.com/ Site 2 does not do this and only redirects http:// requests to the equivalent https:// even though both server directives contain identical directives though different ssl certs. server { listen xxx.xxx.21.234:80; server_name www.site1.com site1.com; return 301 https://site1.com$request_uri; } server { listen xxx.xxx.21.234:443 ssl http2; server_name site1.com; http2_keepalive_timeout 64; keepalive_timeout 64; ... This is identical for server 2 including the rest of the directives in the server block. My one question is whether the ssl cert for server2 may not be properly set up for the www name but I'm not sure. Does something immediately jump out as the problem or where should I look further? Other than the redirection, everything on server2 works as it should. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261156,261156#msg-261156 From nginx-forum at nginx.us Fri Aug 21 02:37:08 2015 From: nginx-forum at nginx.us (drhowarddrfine) Date: Thu, 20 Aug 2015 22:37:08 -0400 Subject: Server 1 redirects, Server 2 does not In-Reply-To: References: Message-ID: <0495fe98a23f6baf5ba18a5bf81d05d0.NginxMailingListEnglish@forum.nginx.org> Note that I upgraded the server so I could turn httpv2 on. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261156,261157#msg-261157 From nginx-forum at nginx.us Fri Aug 21 03:09:21 2015 From: nginx-forum at nginx.us (drhowarddrfine) Date: Thu, 20 Aug 2015 23:09:21 -0400 Subject: Server 1 redirects, Server 2 does not In-Reply-To: References: Message-ID: <6eabe4ac74a83b5578fc05dde46f1ff4.NginxMailingListEnglish@forum.nginx.org> Doing a curl -I -L http://www.site1.com/ returns this: HTTP/1.1 301 Moved Permanently Server: nginx Date: Fri, 21 Aug 2015 03:04:50 GMT Content-Type: text/html Content-Length: 178 Connection: keep-alive Location: https://site1.com/ HTTP/1.1 200 OK Server: nginx Date: Fri, 21 Aug 2015 03:04:50 GMT Content-Type: text/html; charset=utf-8 Now I'm really confused. 
Using Chrome on my desktop and my phone, the address bar shows https://www.site1.com Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261156,261159#msg-261159 From igal at lucee.org Fri Aug 21 06:35:58 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Thu, 20 Aug 2015 23:35:58 -0700 Subject: preventing requests with unknown host names In-Reply-To: <55D65AF7.20307@lucee.org> References: <55D64AD1.9030403@lucee.org> <20150820221657.GE23844@daoine.org> <55D65AF7.20307@lucee.org> Message-ID: <55D6C6CE.1070503@lucee.org> So while this worked well for port 80: On 8/20/2015 3:55 PM, Igal @ Lucee.org wrote: > Thank you, Francis. >> For each "listen" directive with a unique ip:port, add one server{} >> block which contains "listen ip:port default_server; return 444;" > This seems to do the trick. when I tried to add listen for port 443 it broke the https for requests with the valid hostname as well. ## disable http server for requests with unknown hosts server { listen IP:80 default_server; # listen IP:443 default_server; # breaks all https?? return 444; } what's the trick to do the same for https without breaking the requests for https://myhost/ ? > I expected there to be a way to do all of the IP addresses at once. > > Thanks for your help! > > > Igal > > > On 8/20/2015 3:16 PM, Francis Daly wrote: >> I want to disable processing of all requests that do not have a valid >> hostname >> Check your entire configuration for "listen" directives. >> >> http://nginx.org/r/listen >> >> There will be zero or more in each server{} block. If there are zero, >> that is equivalent to "listen 80" (if you run as root). >> >> For each "listen" directive with a unique ip:port, add one server{} >> block which contains "listen ip:port default_server; return 444;" >> >>> I'm tried to follow the advice on: >>> http://nginx.org/en/docs/http/request_processing.html#how_to_prevent_undefined_server_names >>> >>> so I have (inside http directive): >>> >>> server { >>> >>> listen 80; >>> server_name ""; >>> return 444; >>> } >> If your config only has "listen 80", or no "listen" directives at all, then >> >> server { >> listen 80 default_server; >> return 444; >> } >> >> should do what you want. >> >>> but I am still able to access the website by its IP address? >>> >>> what am I doing wrong? >> Not causing that server to be the default server for the ip:port you >> are connecting to. >> >> f From francis at daoine.org Fri Aug 21 07:22:25 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 21 Aug 2015 08:22:25 +0100 Subject: preventing requests with unknown host names In-Reply-To: <55D65AF7.20307@lucee.org> References: <55D64AD1.9030403@lucee.org> <20150820221657.GE23844@daoine.org> <55D65AF7.20307@lucee.org> Message-ID: <20150821072225.GF23844@daoine.org> On Thu, Aug 20, 2015 at 03:55:51PM -0700, Igal @ Lucee.org wrote: Hi there, > > For each "listen" directive with a unique ip:port, add one server{} > > block which contains "listen ip:port default_server; return 444;" > This seems to do the trick. > > I expected there to be a way to do all of the IP addresses at once. You can add all of the "listen ... default_server;" directives into a single server{}. But the way nginx chooses which server{} to use to handle a request, means that there is not a single "listen" directive that will catch everything that you don't want to go elsewhere. 
f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 21 07:30:08 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 21 Aug 2015 08:30:08 +0100 Subject: preventing requests with unknown host names In-Reply-To: <55D6C6CE.1070503@lucee.org> References: <55D64AD1.9030403@lucee.org> <20150820221657.GE23844@daoine.org> <55D65AF7.20307@lucee.org> <55D6C6CE.1070503@lucee.org> Message-ID: <20150821073008.GG23844@daoine.org> On Thu, Aug 20, 2015 at 11:35:58PM -0700, Igal @ Lucee.org wrote: > On 8/20/2015 3:55 PM, Igal @ Lucee.org wrote: Hi there, I do not know the full answer to your question. > when I tried to add listen for port 443 it broke the https for requests > with the valid hostname as well. > > ## disable http server for requests with unknown hosts > server { > > listen IP:80 default_server; > # listen IP:443 default_server; # breaks all https?? > return 444; > } > > what's the trick to do the same for https without breaking the requests > for https://myhost/ ? You will need at least a proper ssl configuration in that server{} block -- possibly setting it at http level. See, for example, http://nginx.org/en/docs/http/configuring_https_servers.html#name_based_https_servers In general, the ssl hostname that the browser wants to connect to is not available until after the ssl negotiation has happened. f -- Francis Daly francis at daoine.org From livingdeadzerg at yandex.ru Fri Aug 21 11:49:20 2015 From: livingdeadzerg at yandex.ru (navern) Date: Fri, 21 Aug 2015 14:49:20 +0300 Subject: preventing requests with unknown host names In-Reply-To: <20150821073008.GG23844@daoine.org> References: <55D64AD1.9030403@lucee.org> <20150820221657.GE23844@daoine.org> <55D65AF7.20307@lucee.org> <55D6C6CE.1070503@lucee.org> <20150821073008.GG23844@daoine.org> Message-ID: <55D71040.5010801@yandex.ru> On 21.08.2015 10:30, Francis Daly wrote: > On Thu, Aug 20, 2015 at 11:35:58PM -0700, Igal @ Lucee.org wrote: >> On 8/20/2015 3:55 PM, Igal @ Lucee.org wrote: > Hi there, > > I do not know the full answer to your question. > >> when I tried to add listen for port 443 it broke the https for requests >> with the valid hostname as well. >> >> ## disable http server for requests with unknown hosts >> server { >> >> listen IP:80 default_server; >> # listen IP:443 default_server; # breaks all https?? >> return 444; >> } >> >> what's the trick to do the same for https without breaking the requests >> for https://myhost/ ? > You will need at least a proper ssl configuration in that server{} > block -- possibly setting it at http level. > > See, for example, > http://nginx.org/en/docs/http/configuring_https_servers.html#name_based_https_servers > > In general, the ssl hostname that the browser wants to connect to is > not available until after the ssl negotiation has happened. > > f Look at this link: http://nginx.org/en/docs/http/configuring_https_servers.html#sni SNI will help you with to have listen separate server_name on one IP and have default_server. 
From igal at lucee.org Fri Aug 21 14:26:06 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Fri, 21 Aug 2015 07:26:06 -0700 Subject: preventing requests with unknown host names In-Reply-To: <20150821073008.GG23844@daoine.org> References: <55D64AD1.9030403@lucee.org> <20150820221657.GE23844@daoine.org> <55D65AF7.20307@lucee.org> <55D6C6CE.1070503@lucee.org> <20150821073008.GG23844@daoine.org> Message-ID: <55D734FE.60507@lucee.org> > You will need at least a proper ssl configuration in that server{} > block -- possibly setting it at http level. that makes sense. thanks again! Igal Sapir Lucee Core Developer Lucee.org On 8/21/2015 12:30 AM, Francis Daly wrote: > On Thu, Aug 20, 2015 at 11:35:58PM -0700, Igal @ Lucee.org wrote: >> On 8/20/2015 3:55 PM, Igal @ Lucee.org wrote: > Hi there, > > I do not know the full answer to your question. > >> when I tried to add listen for port 443 it broke the https for requests >> with the valid hostname as well. >> >> ## disable http server for requests with unknown hosts >> server { >> >> listen IP:80 default_server; >> # listen IP:443 default_server; # breaks all https?? >> return 444; >> } >> >> what's the trick to do the same for https without breaking the requests >> for https://myhost/ ? > You will need at least a proper ssl configuration in that server{} > block -- possibly setting it at http level. > > See, for example, > http://nginx.org/en/docs/http/configuring_https_servers.html#name_based_https_servers > > In general, the ssl hostname that the browser wants to connect to is > not available until after the ssl negotiation has happened. > > f -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Aug 21 15:24:46 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 21 Aug 2015 18:24:46 +0300 Subject: large_client_header_buffers does not work in in server context In-Reply-To: References: Message-ID: <1453180.cSVEX6Lt8e@vbart-workstation> On Thursday 20 August 2015 15:10:16 Jason Thomas wrote: > Hello, > > large_client_header_buffers does not seem to work in server context, > however it works fine in http context. Documentation says it should work > for both [1] and looking at the application code it seems like it should > too [2]. > > I am running multiple server definitions and I've tried nginx versions > 1.6.2 and 1.8.0. > > Any help is greatly appreciated! > [..] It works for the server context, but for obvious reason it uses the default server context till the host header is received and parsed. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri Aug 21 17:30:15 2015 From: nginx-forum at nginx.us (manimalcruelty) Date: Fri, 21 Aug 2015 13:30:15 -0400 Subject: CORS headers not being set for a 401 response from upstream. In-Reply-To: <20140715222113.GF1849@mdounin.ru> References: <20140715222113.GF1849@mdounin.ru> Message-ID: <9327b0a5a3db6b4bc1a80af2755c0ca4.NginxMailingListEnglish@forum.nginx.org> To quote previous email from Maxim: "In javascript, the code should test the "status" property of the XMLHttpRequest object to find out if the request was successful or not" The problem here is that xhr.status === 0 if you don't have CORS headers present. If nginx doesn't allow you to add_headers for a 401, a 504, a 404 etc then a Javascript app can't react to each response individually. Isn't it a little rudimentary to expect a Javascript app to treat to all server errors in the same way? Simply because 401 !== 504. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250740,261171#msg-261171 From nginx-forum at nginx.us Fri Aug 21 17:30:53 2015 From: nginx-forum at nginx.us (BillyBobBaker) Date: Fri, 21 Aug 2015 13:30:53 -0400 Subject: auto/configure doesn't look for MINGW64 (on MSYS2) Message-ID: <88892020454847e723eae0980bfdc62e.NginxMailingListEnglish@forum.nginx.org> Hi there, On MSYS2, with x64 compilers, the "auto/configure" file is not able to find the right environment. $ uname -s MINGW64_NT-10.0 But nginx is only looking for MINGW32. Regards, Billy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261172,261172#msg-261172 From igal at lucee.org Fri Aug 21 18:12:56 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Fri, 21 Aug 2015 11:12:56 -0700 Subject: preventing requests with unknown host names In-Reply-To: <55D71040.5010801@yandex.ru> References: <55D64AD1.9030403@lucee.org> <20150820221657.GE23844@daoine.org> <55D65AF7.20307@lucee.org> <55D6C6CE.1070503@lucee.org> <20150821073008.GG23844@daoine.org> <55D71040.5010801@yandex.ru> Message-ID: <55D76A28.8040906@lucee.org> On 8/21/2015 4:49 AM, navern wrote: > On 21.08.2015 10:30, Francis Daly wrote: >> On Thu, Aug 20, 2015 at 11:35:58PM -0700, Igal @ Lucee.org wrote: >>> On 8/20/2015 3:55 PM, Igal @ Lucee.org wrote: >> Hi there, >> >> I do not know the full answer to your question. >> >>> when I tried to add listen for port 443 it broke the https for requests >>> with the valid hostname as well. >>> >>> ## disable http server for requests with unknown hosts >>> server { >>> >>> listen IP:80 default_server; >>> # listen IP:443 default_server; # breaks all https?? >>> return 444; >>> } >>> >>> what's the trick to do the same for https without breaking the requests >>> for https://myhost/ ? >> You will need at least a proper ssl configuration in that server{} >> block -- possibly setting it at http level. >> >> See, for example, >> http://nginx.org/en/docs/http/configuring_https_servers.html#name_based_https_servers >> >> >> In general, the ssl hostname that the browser wants to connect to is >> not available until after the ssl negotiation has happened. >> >> f > Look at this link: > http://nginx.org/en/docs/http/configuring_https_servers.html#sni > > SNI will help you with to have listen separate server_name on one IP > and have default_server. I have SNI enabled (running on Windows and confirmed by calling `$ nginx -V` not sure how to "use" that? thanks From ajoadam at gmail.com Sat Aug 22 02:31:35 2015 From: ajoadam at gmail.com (=?UTF-8?B?w4Fkw6FtIEpvw7M=?=) Date: Sat, 22 Aug 2015 04:31:35 +0200 Subject: Internal marked 503 error page returns default 404 Message-ID: Hi, I have the following configuration: server { location = /unavailable.html { internal; } try_files $uri =503; error_page 503 /unavailable.html; } My goal is to have all existing files with the exception of unavailable.html served with 200, and serving unavailable.html with 503 for anything else and itself. The location is marked as internal because otherwise a direct request for /unavailable.html would result in a 200. The expectation is that on a direct request it would be deemed non-existent by try_files, so a 503 would be issued, and through an internal redirect unavailable.html would eventually be served. Requesting /unavailable.html, however, results in the default 404 served, which is, after all, consistent with the documentation, but is not what one would expect it to do. 
The exact same problem was stated in a Server Fault question in 2011, but it was never answered http://serverfault.com/questions/230433/nginx-error-page-and-internal-directives-not-working-as-expected Can anyone please shed some light on this? Thanks, ?d?m From fsantiago at deviltracks.net Sat Aug 22 02:41:00 2015 From: fsantiago at deviltracks.net (Fabian Santiago) Date: Fri, 21 Aug 2015 22:41:00 -0400 Subject: Ocsp stapling Message-ID: <9B92FC4A-2BF7-41B8-BA39-762AE4D1C042@deviltracks.net> I have my nginx virtual host set to enable ocsp stapling but it doesn't actually do it. Ssllabs testing reports no. OpenSSL cli testing reports nothing. Nginx v1.8.0 Centos 6.7 64bit OpenSSL 1.0.1e I only have the ocsp config on one domain for testing. Any thoughts? Thanks. -- Fabe From nginx-forum at nginx.us Sat Aug 22 11:48:15 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 22 Aug 2015 07:48:15 -0400 Subject: [ANN] Windows nginx 1.9.4.1 Lizard Message-ID: <5b1af4056991dc46ba9ec613a0a7577a.NginxMailingListEnglish@forum.nginx.org> 13:03 22-8-2015 nginx 1.9.4.1 Lizard Based on nginx 1.9.4 (18-8-2015) with; + headers-more-nginx-module v0.26 (upgraded 19-8-2015) + Openssl-1.0.2d + "Proxy-Authenticate" header for the 407 response + lua-nginx-module v0.9.16 (upgraded 13-8-2015) + pcre-8.37b-r1594 (upgraded 22-8-2015) + 'include' in upstream * Known broken issues: ajp cache, spdy + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: yes * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261178,261178#msg-261178 From nginx-forum at nginx.us Sat Aug 22 18:34:53 2015 From: nginx-forum at nginx.us (biazus) Date: Sat, 22 Aug 2015 14:34:53 -0400 Subject: Ocsp stapling In-Reply-To: <9B92FC4A-2BF7-41B8-BA39-762AE4D1C042@deviltracks.net> References: <9B92FC4A-2BF7-41B8-BA39-762AE4D1C042@deviltracks.net> Message-ID: <0d355450ffe5b23ad90802276175ea51.NginxMailingListEnglish@forum.nginx.org> I have been using Nginx 1.8.X with ocsp stabling for a couple of weeks and it seems to be fine. Please send your config files, it may help... 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261177,261181#msg-261181 From fsantiago at deviltracks.net Sat Aug 22 18:46:41 2015 From: fsantiago at deviltracks.net (fsantiago at deviltracks.net) Date: Sat, 22 Aug 2015 14:46:41 -0400 Subject: Ocsp stapling In-Reply-To: <0d355450ffe5b23ad90802276175ea51.NginxMailingListEnglish@forum.nginx.org> References: <9B92FC4A-2BF7-41B8-BA39-762AE4D1C042@deviltracks.net> <0d355450ffe5b23ad90802276175ea51.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sure, here is the relevant portion of my virtual hosts config: server { listen 443 ssl; server_name ; client_max_body_size 64m; client_body_timeout 60; access_log /var/log/nginx/.....; error_log /var/log/nginx/.........; root /var/www/html/rc/; index index.html index.php; ssl_protocols TLSv1.1 TLSv1.2; ssl_certificate /etc/pki/tls/private/......pem; ssl_certificate_key /etc/pki/tls/private/.....pem; ssl_session_cache shared:SSL:10m; ssl_session_timeout 4h; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; ssl_dhparam /etc/pki/tls/private/dhparams.pem; add_header Public-Key-Pins-Report-Only 'pin-sha256="amMeV6gb9QNx0Zf7FtJ19Wa/t2B7KpCF/1n2Js3UuSU="; pin-sha256="hXVOamtUHc8T8jznu+VMpu6wgk3ASIUi6YM4obeAEDw="; max-age=31536000; includeSubDomains'; add_header Strict-Transport-Security "max-age=31536000; includeSubdomains"; resolver 127.0.0.1; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/pki/tls/private/root_store/............pem; On 2015-08-22 14:34, biazus wrote: > I have been using Nginx 1.8.X with ocsp stabling for a couple of weeks > and > it seems to be fine. Please send your config files, it may help... > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261177,261181#msg-261181 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Sat Aug 22 23:54:09 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 23 Aug 2015 00:54:09 +0100 Subject: Internal marked 503 error page returns default 404 In-Reply-To: References: Message-ID: <20150822235409.GA7915@daoine.org> On Sat, Aug 22, 2015 at 04:31:35AM +0200, ?d?m Jo? wrote: Hi there, > My goal is to have all existing files with the exception of > unavailable.html served with 200, and serving unavailable.html with > 503 for anything else and itself. > > The location is marked as internal because otherwise a direct request > for /unavailable.html would result in a 200. The expectation is that > on a direct request it would be deemed non-existent by try_files, so a "selecting the location to handle this request" happens before try_files. So try_files doesn't get to say whether it exists or not. > 503 would be issued, and through an internal redirect unavailable.html > would eventually be served. > > Requesting /unavailable.html, however, results in the default 404 > served, which is, after all, consistent with the documentation, but is > not what one would expect it to do. If the expectation is that the documentation is wrong, the expectation is probably incorrect. > Can anyone please shed some light on this? I would probably move unavailable.html out of the "default" document root, so that it cannot be accessed directly. Then use a named location, so that a request for /unavailable.html is the same as a request for /random_filename.html. 
location @unavailable { root /tmp; try_files /unavailable.html =500; } try_files $uri =503; error_page 503 @unavailable; The try_files with the "location @" strikes me as inelegant, but seems to be the quickest way to always serve a single named file. The =500 could be =503, depending on what output you want when your preferred unavailable.html file is not accessible. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Aug 23 08:00:36 2015 From: nginx-forum at nginx.us (biazus) Date: Sun, 23 Aug 2015 04:00:36 -0400 Subject: Ocsp stapling In-Reply-To: References: Message-ID: <6aa0ba1f20b9a1e2246b1710f43bfd8b.NginxMailingListEnglish@forum.nginx.org> Config files seems to be OK. Just make sure "ssl_trusted_certificate" contais the intermediate & root certificates (in that order from top to bottom). You can test with the following command: echo QUIT | openssl s_client -connect yourhost.com:443 -status 2> /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update' good luck Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261177,261185#msg-261185 From fsantiago at deviltracks.net Sun Aug 23 13:55:34 2015 From: fsantiago at deviltracks.net (Fabian Santiago) Date: Sun, 23 Aug 2015 09:55:34 -0400 Subject: Ocsp stapling In-Reply-To: <6aa0ba1f20b9a1e2246b1710f43bfd8b.NginxMailingListEnglish@forum.nginx.org> References: <6aa0ba1f20b9a1e2246b1710f43bfd8b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks. It does. Test produces no results. Not working on ssllabs (no result). I'm clueless. I've seen mention out on the web about making sure you define ocsp for the default site or none else will work. I also make use of sni as I only have one ip address. I have no truly "default" site configured. Could be related? I am new to nginx so I'm still learning lots. Thanks again. -- Fabe > On Aug 23, 2015, at 4:00 AM, biazus wrote: > > Config files seems to be OK. Just make sure "ssl_trusted_certificate" > contais the intermediate & root certificates (in that order from top to > bottom). > > You can test with the following command: > > echo QUIT | openssl s_client -connect yourhost.com:443 -status 2> /dev/null > | grep -A 17 'OCSP response:' | grep -B 17 'Next Update' > > good luck > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261177,261185#msg-261185 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From fsantiago at deviltracks.net Sun Aug 23 20:29:42 2015 From: fsantiago at deviltracks.net (fsantiago at deviltracks.net) Date: Sun, 23 Aug 2015 16:29:42 -0400 Subject: Ocsp stapling In-Reply-To: References: <6aa0ba1f20b9a1e2246b1710f43bfd8b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5bf4dae2c8c5a983181af4de25bf85be@deviltracks.net> Update; it all works now. once i enabled ocsp stapling for ALL of my virtual domains, they then all began reporting correct results. - fabe On 2015-08-23 09:55, Fabian Santiago wrote: > Thanks. > > It does. > > Test produces no results. > > Not working on ssllabs (no result). > > I'm clueless. I've seen mention out on the web about making sure you > define ocsp for the default site or none else will work. I also make > use of sni as I only have one ip address. > > I have no truly "default" site configured. > > Could be related? I am new to nginx so I'm still learning lots. Thanks > again. > > -- > > Fabe > > >> On Aug 23, 2015, at 4:00 AM, biazus wrote: >> >> Config files seems to be OK. 
Just make sure "ssl_trusted_certificate" >> contais the intermediate & root certificates (in that order from top >> to >> bottom). >> >> You can test with the following command: >> >> echo QUIT | openssl s_client -connect yourhost.com:443 -status 2> >> /dev/null >> | grep -A 17 'OCSP response:' | grep -B 17 'Next Update' >> >> good luck >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,261177,261185#msg-261185 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From linuxty at gmail.com Mon Aug 24 02:48:52 2015 From: linuxty at gmail.com (Yang Tian) Date: Mon, 24 Aug 2015 10:48:52 +0800 Subject: [ANNOUNCE] Tengine-2.1.1 released Message-ID: Hi folks, We are very excited to announce that Tengine-2.1.1 (stable version) has been released. You can either checkout the source code from GitHub: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-2.1.1.tar.gz This release provided module dyups to support for adding/updating/removing upstreams without reload (http://tengine.taobao.org/document/http_dyups.html ). The full changelog is as follows: *) Feature: support for dynamic upstream update. (yzprofile) *) Feature: enchanced ngx_http_reqstat_module. (cfsego) *) Feature: added ssl_verify_client_exception directive. (InfoHunter) *) Change: reduced memory usage while parsing configuration. (ilexshen) *) Change: added $trim_bytes and $trim_original_bytes. (taoyuanyuan) *) Change: upgrade debian package to 2.1.1. (PeterDaveHello) *) Change: support for auto compile for ngx_http_spdy_module. (chobits) *) Change: updated SPDY/3.1. (chobits) *) Change: disabled 'proxy_request_buffering' for SPDY. (chobits) *) Change: added configue options to support set linker. (tanguofu) *) Bugfix: fixed Backport bug of SPDY. (nginx official, ym) *) Bugfix: fixed compile error with SL. (ym) *) Bugfix: fixed bug of reuseport. (mnadbobo) See our website for more details: http://tengine.taobao.org Have fun! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 24 13:27:19 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Mon, 24 Aug 2015 09:27:19 -0400 Subject: Broarcast Message-ID: <29980024f1d3bd72ab620674b168df66.NginxMailingListEnglish@forum.nginx.org> Hi, Is there any possibility to broadcast the request to all servers configured in upstream pls.? Best regards, Maddy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261202,261202#msg-261202 From nginx-forum at nginx.us Mon Aug 24 13:27:58 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Mon, 24 Aug 2015 09:27:58 -0400 Subject: Broadcast In-Reply-To: <29980024f1d3bd72ab620674b168df66.NginxMailingListEnglish@forum.nginx.org> References: <29980024f1d3bd72ab620674b168df66.NginxMailingListEnglish@forum.nginx.org> Message-ID: <36d47e87a424136a599994ec047ce6c3.NginxMailingListEnglish@forum.nginx.org> Subject updated pls. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261202,261203#msg-261203 From sarah at nginx.com Mon Aug 24 13:32:56 2015 From: sarah at nginx.com (Sarah Novotny) Date: Mon, 24 Aug 2015 09:32:56 -0400 Subject: Broadcast In-Reply-To: <36d47e87a424136a599994ec047ce6c3.NginxMailingListEnglish@forum.nginx.org> References: <29980024f1d3bd72ab620674b168df66.NginxMailingListEnglish@forum.nginx.org> <36d47e87a424136a599994ec047ce6c3.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Maddy, There is not support for broadcasting a request to all upstream servers. What is your use case? Perhaps the crowd here can offer a different solution for your use case. sarah > On Aug 24, 2015, at 9:27 AM, smsmaddy1981 wrote: > > Subject updated pls. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261202,261203#msg-261203 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Mon Aug 24 16:24:06 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Aug 2015 19:24:06 +0300 Subject: "Dereferencing Pointer To Incomplete Type" on ARM In-Reply-To: <1ad10aff27cc64a6522a691c2a6f5adb.NginxMailingListEnglish@forum.nginx.org> References: <1ad10aff27cc64a6522a691c2a6f5adb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150824162405.GB37350@mdounin.ru> Hello! On Wed, Aug 19, 2015 at 09:15:07AM -0400, vindicator wrote: > I swear I had built the hg default version on my AMD64 just fine, but now > I'm doing it on my ARM device running Ubuntu 15.04 with kernel 4.2. > I used the default system openssl, but also tried with the master git > version all with the same result. > > Attempts with: > 1) ./auto/configure --with-http_ssl_module --with-ipv6 > 2) ./auto/configure --with-http_ssl_module --with-ipv6 --with-cc-opt="-I > /usr/local/ssl/include -I /usr/local/include" --with-ld-opt="-L > /usr/local/ssl/lib -L /usr/local/lib" > 3) ./auto/configure --with-ipv6 --with-http_ssl_module > --with-openssl=/src/openssl/ > and of course "make". > ***** > src/event/ngx_event_openssl.c: In function ?ngx_ssl_handshake?: > src/event/ngx_event_openssl.c:1164:31: error: dereferencing pointer to > incomplete type > if (c->ssl->connection->s3) { > ^ > src/event/ngx_event_openssl.c:1165:31: error: dereferencing pointer to > incomplete type > c->ssl->connection->s3->flags |= > SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS; > ^ > ***** > > "./auto/configure --with-ipv6" built with no problems. Looks like you have OPENSSL_NO_SSL_INTERN defined by default in your system. 
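(A quick way to check is to grep the OpenSSL headers nginx is actually being built against, for example:

    grep -R OPENSSL_NO_SSL_INTERN /usr/include/openssl /usr/local/ssl/include 2>/dev/null

the paths above are only examples and depend on the include paths passed via --with-cc-opt.)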
Try this patch: --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1159,6 +1159,7 @@ ngx_ssl_handshake(ngx_connection_t *c) c->send_chain = ngx_ssl_send_chain; #ifdef SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS +#ifndef OPENSSL_NO_SSL_INTERN /* initial handshake done, disable renegotiation (CVE-2009-3555) */ if (c->ssl->connection->s3) { @@ -1166,6 +1167,7 @@ ngx_ssl_handshake(ngx_connection_t *c) } #endif +#endif return NGX_OK; } -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Aug 24 17:01:04 2015 From: nginx-forum at nginx.us (biazus) Date: Mon, 24 Aug 2015 13:01:04 -0400 Subject: Cache structs 1.6.x vs 1.8.x In-Reply-To: <6a28b268c77a1b330109a46cfb2105f5.NginxMailingListEnglish@forum.nginx.org> References: <6a28b268c77a1b330109a46cfb2105f5.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Guys, We have developed a "cache migration tool" in order to keep the cache compatibility between Nginx 1.6.x and 1.8.x. Comments and suggestions are welcome! https://github.com/acaciocenteno/ngx_scripts Thanks, Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261151,261210#msg-261210 From adam at jooadam.hu Mon Aug 24 21:33:17 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Mon, 24 Aug 2015 23:33:17 +0200 Subject: Internal marked 503 error page returns default 404 In-Reply-To: <20150822235409.GA7915@daoine.org> References: <20150822235409.GA7915@daoine.org> Message-ID: Hi Francis, Thank you for your response. After some further reading I think now I get the processing cycle. I would rather not create a separate root for one file, so I settled with the following: location = /unavailable.html { return 503; } location @unavailable { try_files /unavailable.html =500; } try_files $uri =503; error_page 503 @unavailable; Thanks, ?d?m From adam at jooadam.hu Mon Aug 24 21:50:10 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Mon, 24 Aug 2015 23:50:10 +0200 Subject: Gzip vs. sendfile Message-ID: Hi, >From : > Note, however, that because the data never touches user space it?s not subject to the filters in the regular NGINX processing chain. As a result, filters that change content, for example the gzip filter, have no effect. Which never occurred to me, but sounds reasonable. What is more surprising is that I found virtually no discussion on this, and I wasn?t able to reproduce it. Using the following configuration: types { text/plain txt; } gzip on; gzip_comp_level 5; gzip_types text/plain; sendfile on; the files are still served gzipped. The author of http://expertdevelopers.blogspot.hu/2013/09/gzip-compression.html says: > There is a tradeoff between using compression (saving your bandwidth) and using the sendfile feature (saving your CPU cycles). If the connector supports the sendfile feature, e.g. the NIO connector, using sendfile will take precedence over compression. The symptoms will be that static files greater that 48 Kb will be sent uncompressed. However, increasing the file size to more than 200 KB still has no effect on the encoding. Is it actually the other way around and filters disable sendfile? ?d?m From adam at jooadam.hu Mon Aug 24 22:35:55 2015 From: adam at jooadam.hu (=?UTF-8?B?Sm/DsyDDgWTDoW0=?=) Date: Tue, 25 Aug 2015 00:35:55 +0200 Subject: Client specified server port Message-ID: Hi, The return directive allows the use of URLs relative to the server, in which case the scheme, server name and port are automatically prepended by Nginx. 
The port is, however, the port on which the request was received, which is not always the port to which the request was sent, i. e. the one specified in the Host header field. For example, tunneling nginx.org:80 through example.com:8000 a redirect will lead to example.com:80. Also, there is no variable exposing this value, so one must extract it themselves to explicitly specify in the redirect URL: set $is_port ''; set $port ''; if ($http_host ~ :(\d+)$) { set $is_port ':'; set $port $1; } Maybe this is something that would worth considering as an enhancement. Making return use the port in the Host header or to preserve backwards compatibility, introducing a switch, request_port_in_redirect, complementing server_name_in_redirect, off by default, and at the same time exposing this in a $request_port variable. What do you think? ?d?m From ryd994 at 163.com Tue Aug 25 02:39:14 2015 From: ryd994 at 163.com (ryd994) Date: Tue, 25 Aug 2015 02:39:14 +0000 Subject: Internal marked 503 error page returns default 404 In-Reply-To: References: <20150822235409.GA7915@daoine.org> Message-ID: Hi Adam, Why not use @named location directly? error_page 503 @unavailable; location @unavailable { alias /absolute/path/to/file; } Notice the path is not related to document root. On Tue, Aug 25, 2015, 05:33 Jo? ?d?m wrote: Hi Francis, Thank you for your response. After some further reading I think now I get the processing cycle. I would rather not create a separate root for one file, so I settled with the following: location = /unavailable.html { return 503; } location @unavailable { try_files /unavailable.html =500; } try_files $uri =503; error_page 503 @unavailable; Thanks, ?d?m _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 25 02:40:05 2015 From: nginx-forum at nginx.us (vindicator) Date: Mon, 24 Aug 2015 22:40:05 -0400 Subject: "Dereferencing Pointer To Incomplete Type" on ARM In-Reply-To: <20150824162405.GB37350@mdounin.ru> References: <20150824162405.GB37350@mdounin.ru> Message-ID: <18161e34ec762f6e395fd112fd64d56f.NginxMailingListEnglish@forum.nginx.org> Thanks, but no. I'm still getting that error: ***** cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/event/ngx_event_openssl.o \ src/event/ngx_event_openssl.c src/event/ngx_event_openssl.c: In function ?ngx_ssl_handshake?: src/event/ngx_event_openssl.c:1165:31: error: dereferencing pointer to incomplete type if (c->ssl->connection->s3) { ^ src/event/ngx_event_openssl.c:1166:31: error: dereferencing pointer to incomplete type c->ssl->connection->s3->flags |= SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS; ^ src/event/ngx_event_openssl.c: In function ?ngx_ssl_session_ticket_key_callback?: src/event/ngx_event_openssl.c:2866:9: error: implicit declaration of function ?RAND_pseudo_bytes? 
[-Werror=implicit-function-declaration] RAND_pseudo_bytes(iv, 16); ^ cc1: all warnings being treated as errors ***** Changed code section: ***** c->recv_chain = ngx_ssl_recv_chain; c->send_chain = ngx_ssl_send_chain; #ifdef SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS #ifndef OPENSSL_NO_SSL_INTERN /* initial handshake done, disable renegotiation (CVE-2009-3555) */ if (c->ssl->connection->s3) { c->ssl->connection->s3->flags |= SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS; } #endif #endif return NGX_OK; } sslerr = SSL_get_error(c->ssl->connection, n); ***** Just adding that I clean the source before each build attempt via: ***** hg --config "extensions.purge=" purge --all hg revert --all ***** I also don't know where I'd find if "OPENSSL_NO_SSL_INTERN" was already defined. printenv doesn't show it, nor does a recursive grep in /etc or ~. Let me know if there are any other tests you'd like me to try or any other information you need from me that may help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261124,261218#msg-261218 From ryd994 at 163.com Tue Aug 25 02:55:54 2015 From: ryd994 at 163.com (ryd994) Date: Tue, 25 Aug 2015 02:55:54 +0000 Subject: Client specified server port In-Reply-To: References: Message-ID: Hi, Maybe you can use following config which is shorter and does not use the evil "if". map $http_host $redirect_port { default ""; .*(:\d+) $1; } return 302 $scheme://$host$redirect_port/ On Tue, Aug 25, 2015, 06:35 Jo? ?d?m wrote: > Hi, > > The return directive allows the use of URLs relative to the server, in > which case the scheme, server name and port are automatically > prepended by Nginx. > > The port is, however, the port on which the request was received, > which is not always the port to which the request was sent, i. e. the > one specified in the Host header field. For example, tunneling > nginx.org:80 through example.com:8000 a redirect will lead to > example.com:80. > > Also, there is no variable exposing this value, so one must extract it > themselves to explicitly specify in the redirect URL: > > set $is_port ''; > set $port ''; > > if ($http_host ~ :(\d+)$) { > set $is_port ':'; > set $port $1; > } > > Maybe this is something that would worth considering as an > enhancement. Making return use the port in the Host header or to > preserve backwards compatibility, introducing a switch, > request_port_in_redirect, complementing server_name_in_redirect, off > by default, and at the same time exposing this in a $request_port > variable. > > What do you think? > > > ?d?m > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfioretti at nexaima.net Tue Aug 25 07:36:29 2015 From: mfioretti at nexaima.net (M. Fioretti) Date: Tue, 25 Aug 2015 09:36:29 +0200 Subject: Update on: nginx makes mysqld die all the time In-Reply-To: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> References: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> Message-ID: <173835a41370df367ce1215395efa5ba@nexaima.net> On 2015-08-18 15:23, M. Fioretti wrote: > Greetings, > > I just migrated to nginx + php-fpm from apache a few websites, on a > centos 6.6 virtual server. The sites are up but... now mysqld > (MariaDB, actually) dies every 10/20 **minutes** with status: Greetings, after a few days, I can report that setting: mysql.allow_persistent = Off in php.ini, and then tuning some php-fpm parameters as below, fixed the problem. 
There surely still is much more that can be optimized (and comments on the parameters below are welcome!) and I'll ask about it later, but I haven't seen any more crashes, and the websites already load quickly. Thanks to all who helped!!! Marco pm = dynamic pm.max_children = 12 pm.start_servers = 3 pm.min_spare_servers = 2 pm.max_spare_servers = 3 pm.max_requests = 10 > mysqld dead but subsys locked > > or > > mysqld dead but pid file exists > > for reasons not really relevant here I cannot post nginx conf > right away. I **will** do that in a few hours, when I'm back > at my desk. Since the crashes are so frequent, however, any > help to save time is very welcome. Even if it's just request > of other specific info, besides the nginx conf files. > > TIA, > Marco > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- http://mfioretti.com From mdounin at mdounin.ru Tue Aug 25 12:39:25 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Aug 2015 15:39:25 +0300 Subject: Gzip vs. sendfile In-Reply-To: References: Message-ID: <20150825123925.GH37350@mdounin.ru> Hello! On Mon, Aug 24, 2015 at 11:50:10PM +0200, Jo? ?d?m wrote: > From : > > > Note, however, that because the data never touches user space > > it?s not subject to the filters in the regular NGINX > > processing chain. As a result, filters that change content, > > for example the gzip filter, have no effect. > > Which never occurred to me, but sounds reasonable. What is more > surprising is that I found virtually no discussion on this, and > I wasn?t able to reproduce it. The statement quoted is incorrect. Instead, if any filter is going to change content, sendfile won't be used. I'll ping Rick to fix this. [...] > Is it actually the other way around and filters disable sendfile? Yes. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Aug 25 13:44:49 2015 From: nginx-forum at nginx.us (khav) Date: Tue, 25 Aug 2015 09:44:49 -0400 Subject: fastcgi_max_temp_file_size 0 vs fastcgi_buffering off Message-ID: <3873d418c0cc39ac6c6430d51a4ad1bf.NginxMailingListEnglish@forum.nginx.org> Is fastcgi_max_temp_file_size 0; and fastcgi_buffering off; the same ? If no what is the difference between those two Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261232,261232#msg-261232 From mdounin at mdounin.ru Tue Aug 25 13:52:47 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Aug 2015 16:52:47 +0300 Subject: "Dereferencing Pointer To Incomplete Type" on ARM In-Reply-To: <18161e34ec762f6e395fd112fd64d56f.NginxMailingListEnglish@forum.nginx.org> References: <20150824162405.GB37350@mdounin.ru> <18161e34ec762f6e395fd112fd64d56f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150825135247.GL37350@mdounin.ru> Hello! On Mon, Aug 24, 2015 at 10:40:05PM -0400, vindicator wrote: > Thanks, but no. 
I'm still getting that error: > ***** > cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g > -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ > -o objs/src/event/ngx_event_openssl.o \ > src/event/ngx_event_openssl.c > src/event/ngx_event_openssl.c: In function ?ngx_ssl_handshake?: > src/event/ngx_event_openssl.c:1165:31: error: dereferencing pointer to > incomplete type > if (c->ssl->connection->s3) { > ^ > src/event/ngx_event_openssl.c:1166:31: error: dereferencing pointer to > incomplete type > c->ssl->connection->s3->flags |= > SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS; > ^ > src/event/ngx_event_openssl.c: In function > ?ngx_ssl_session_ticket_key_callback?: > src/event/ngx_event_openssl.c:2866:9: error: implicit declaration of > function ?RAND_pseudo_bytes? [-Werror=implicit-function-declaration] > RAND_pseudo_bytes(iv, 16); > ^ > cc1: all warnings being treated as errors > ***** [...] Oh, it looks like you are trying to build nginx against OpenSSL master branch. As OpenSSL guys are changing things rapidly nowadays, it's not really going to work. Try any released version instead. Quick and dirty fix below, but I wouldn't bet it will be enough to build with OpenSSL master even in a week from now. --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1159,6 +1159,7 @@ ngx_ssl_handshake(ngx_connection_t *c) c->send_chain = ngx_ssl_send_chain; #ifdef SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS +#if 0 /* initial handshake done, disable renegotiation (CVE-2009-3555) */ if (c->ssl->connection->s3) { @@ -1166,6 +1167,7 @@ ngx_ssl_handshake(ngx_connection_t *c) } #endif +#endif return NGX_OK; } @@ -2861,7 +2863,7 @@ ngx_ssl_session_ticket_key_callback(ngx_ ngx_hex_dump(buf, key[0].name, 16) - buf, buf, SSL_session_reused(ssl_conn) ? "reused" : "new"); - RAND_pseudo_bytes(iv, 16); + RAND_bytes(iv, 16); EVP_EncryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, key[0].aes_key, iv); HMAC_Init_ex(hctx, key[0].hmac_key, 16, ngx_ssl_session_ticket_md(), NULL); -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Aug 25 13:57:02 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Aug 2015 16:57:02 +0300 Subject: fastcgi_max_temp_file_size 0 vs fastcgi_buffering off In-Reply-To: <3873d418c0cc39ac6c6430d51a4ad1bf.NginxMailingListEnglish@forum.nginx.org> References: <3873d418c0cc39ac6c6430d51a4ad1bf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150825135702.GM37350@mdounin.ru> Hello! On Tue, Aug 25, 2015 at 09:44:49AM -0400, khav wrote: > Is fastcgi_max_temp_file_size 0; and fastcgi_buffering off; the same ? No. > If no what is the difference between those two The first one switches off buffering to disk. The second one switches fastcgi module to the unbuffered mode. In this mode all data read will be immediately delivered to a client. See http://nginx.org/r/fastcgi_buffering for details. 
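To illustrate the difference, a minimal pair of locations (the socket path and URIs are placeholders, and the rest of the FastCGI setup is omitted):

    location /buffered/ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        # responses may still be buffered in memory,
        # but are never spooled to temporary files on disk
        fastcgi_max_temp_file_size 0;
    }

    location /unbuffered/ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        # no buffering at all: data is sent to the client
        # as soon as it is received from the backend
        fastcgi_buffering off;
    }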
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Aug 25 14:03:40 2015 From: nginx-forum at nginx.us (khav) Date: Tue, 25 Aug 2015 10:03:40 -0400 Subject: fastcgi_max_temp_file_size 0 vs fastcgi_buffering off In-Reply-To: <20150825135702.GM37350@mdounin.ru> References: <20150825135702.GM37350@mdounin.ru> Message-ID: <7a880f17c7c52c44c4340b3c4f0de7a3.NginxMailingListEnglish@forum.nginx.org> For video streaming which option will be the best according to you I currently use fastcgi_max_temp_file_size 0 fastcgi_connect_timeout 60; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261232,261235#msg-261235 From nginx-forum at nginx.us Tue Aug 25 19:27:48 2015 From: nginx-forum at nginx.us (alexberg) Date: Tue, 25 Aug 2015 15:27:48 -0400 Subject: client sent too short userid cookie Message-ID: <47298d889a7f77d215089105bab26855.NginxMailingListEnglish@forum.nginx.org> Hi I'm getting a lot of "client sent too short userid cookie" errors in error.log. What I see is the following: client sent too short userid cookie "odsp=; odsp=rB4An1Xcv3sLIBgDBjypAg==" And, of course, the uid module resets the cookie (even though it can be recovered) and I end up not seeing same users. In other words, I have a $uid_set variable, and no $uid_got (but if I try to read cookie with Lua, I can successfully read odsp). Is there a way to somehow prevent this or recover from this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261242,261242#msg-261242 From youcanpoint at me.com Tue Aug 25 20:03:10 2015 From: youcanpoint at me.com (Tyarko Leander Rodney) Date: Tue, 25 Aug 2015 22:03:10 +0200 Subject: nginx RFC21266 Compliance - 'Proxy-Connection' Message-ID: Hi, I?ve posted this question on the IRC before but had no luck. I have the following problem: I?d like to disable the ?Proxy-Connection? Response Header. I know, that the ?Connection? Header is hard coded in ngx_http_header_filter_module.c, but does the same apply to ?Proxy-Connection? (couldn?t find it in the sources)? I?ve tried the more_clear_headers from the ngx_headers_more module and proxy_set_header(which both work fine with all other headers). Background: The ?Proxy-Connection? sadly violates our Server Policy (strict RFC21266 compliance). Kind regards T. Rodney From vbart at nginx.com Tue Aug 25 20:09:20 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 25 Aug 2015 23:09:20 +0300 Subject: nginx RFC21266 Compliance - 'Proxy-Connection' In-Reply-To: References: Message-ID: <11257070.Nx92qfe3SJ@vbart-workstation> On Tuesday 25 August 2015 22:03:10 Tyarko Leander Rodney wrote: > Hi, > > I?ve posted this question on the IRC before but had no luck. I have the following problem: > > I?d like to disable the ?Proxy-Connection? Response Header. I know, that the ?Connection? Header is hard coded in ngx_http_header_filter_module.c, but does the same apply to ?Proxy-Connection? (couldn?t find it in the sources)? [..] NGINX doesn't produce the Proxy-Connection header. I assume it's returned by your backend. If so, then see the "proxy_hide_header" directive: http://nginx.org/r/proxy_hide_header wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Aug 25 20:11:56 2015 From: nginx-forum at nginx.us (dpheasant) Date: Tue, 25 Aug 2015 16:11:56 -0400 Subject: Proxy http to https with client certificate authentication Message-ID: Hello everyone, I posted this to stack-exchange, but this is probably the better place to get a real answer. 
Long story short, I think we're hitting the SSL renegotiation problem described here: http://forum.nginx.org/read.php?2,258464,258464#msg-258464 Basically, we're trying to wrap a request made by an internal application over HTTP, into a HTTPS request to an upstream server that requires a client certificate. If I understand the post correctly, as part of that connection, the remote server asks for a client cert, which trips up NGINX b/c of the SSL renegotiate. Config: location /secure/api/ { proxy_pass https://secure.webservice.com/secure/api/; proxy_ssl_certificate /etc/ssl/api-client.crt; proxy_ssl_certificate_key /etc/ssl/api-client.crt.key; proxy_ssl_verify off; } We have logging turned up to debug but do not get the 'SSL Renegotiation disabled' message in the logs, which is why I'm posting here for confirmation. error.log: 2015/08/25 15:33:56 [info] 29810#0: *57 client closed connection while waiting for request, client: x.x.x.x, server: 0.0.0.0:80 2015/08/25 15:34:05 [info] 29810#0: *53 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while reading response header from upstream, client: x.x.x.x, server: our.proxy.com, request: "GET /secure/api/ HTTP/1.1", upstream: "https://y.y.y.y:443/secure/api/", host: "our.proxy.com" Is there any workaround for this? Thanks in advance. P.S. Original SE post here: http://serverfault.com/questions/716684/nginx-proxying-http-to-https-with-client-certificate Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261245,261245#msg-261245 From lists-nginx at swsystem.co.uk Tue Aug 25 20:21:11 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Tue, 25 Aug 2015 21:21:11 +0100 Subject: nginx RFC21266 Compliance - 'Proxy-Connection' In-Reply-To: References: Message-ID: <55DCCE37.1070903@swsystem.co.uk> Looking at https://en.wikipedia.org/wiki/List_of_HTTP_header_fields it suggests it's a non-standard request header. You can probably strip this out of the request to the real server with proxy_set_header "Proxy-Connection" ""; Although I'd expect the backend server to ignore invalid request headers rather than bork on the request. Steve. On 25/08/2015 21:03, Tyarko Leander Rodney wrote: > Hi, > > I?ve posted this question on the IRC before but had no luck. I have the following problem: > > I?d like to disable the ?Proxy-Connection? Response Header. I know, that the ?Connection? Header is hard coded in ngx_http_header_filter_module.c, but does the same apply to ?Proxy-Connection? (couldn?t find it in the sources)? > > I?ve tried the more_clear_headers from the ngx_headers_more module and proxy_set_header(which both work fine with all other headers). > > Background: The ?Proxy-Connection? sadly violates our Server Policy (strict RFC21266 compliance). > > Kind regards > > T. Rodney > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From lists-nginx at swsystem.co.uk Tue Aug 25 20:39:16 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Tue, 25 Aug 2015 21:39:16 +0100 Subject: Update on: nginx makes mysqld die all the time In-Reply-To: <173835a41370df367ce1215395efa5ba@nexaima.net> References: <10d8bbfd89e34b9966b0355d405b4eca@nexaima.net> <173835a41370df367ce1215395efa5ba@nexaima.net> Message-ID: <55DCD274.5050005@swsystem.co.uk> Hi, On 25/08/2015 08:36, M. Fioretti wrote: > On 2015-08-18 15:23, M. 
Fioretti wrote: >> Greetings, >> >> I just migrated to nginx + php-fpm from apache a few websites, on a >> centos 6.6 virtual server. The sites are up but... now mysqld >> (MariaDB, actually) dies every 10/20 **minutes** with status: > > Greetings, > after a few days, I can report that setting: > > mysql.allow_persistent = Off > > in php.ini, and then tuning some php-fpm parameters as below, fixed > the problem. There surely still is much more that can be optimized > (and comments on the parameters below are welcome!) and I'll ask > about it later, but I haven't seen any more crashes, > and the websites already load quickly. This is good news, I was actually wondering if you'd sorted it. > Thanks to all who helped!!! > > Marco > > pm = dynamic > pm.max_children = 12 > pm.start_servers = 3 > pm.min_spare_servers = 2 > pm.max_spare_servers = 3 > pm.max_requests = 10 I don't think this is the perfect place to get answers on this, however my understanding is that this is how it'll work. php-fpm will manage it's workers dynamically, starting with 3 and being able to grow to a maximum of 12. To keep resources to a minimum on your server it will allow for between 2 and 3 of these to be waiting to do something. If there's more than 3 it'll kill the child, also when each one has served 10 requests they'll terminate. If there's less than 2 idle children it'll start another until there's a minimum of 2 idle. The above may not be the optimal configuration for your needs, as ondemand may be more suited than dynamic or static. I think key factors are the spec of the server and how busy php is, there may be some way to monitor the running number of children to fine tune later but may cause problems if there's a sudden surge in requests. The php manual has some information on each of these settings at Steve. From francis at daoine.org Tue Aug 25 22:02:36 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 25 Aug 2015 23:02:36 +0100 Subject: Client specified server port In-Reply-To: References: Message-ID: <20150825220236.GB7915@daoine.org> On Tue, Aug 25, 2015 at 12:35:55AM +0200, Jo? ?d?m wrote: Hi there, > The return directive allows the use of URLs relative to the server, in > which case the scheme, server name and port are automatically > prepended by Nginx. > > The port is, however, the port on which the request was received, > which is not always the port to which the request was sent, i. e. the > one specified in the Host header field. For example, tunneling > nginx.org:80 through example.com:8000 a redirect will lead to > example.com:80. > > Also, there is no variable exposing this value, so one must extract it > themselves to explicitly specify in the redirect URL: I agree that it would be nice to have the port-specified-in-the-Host-header easily available. Until it is, you might be able to just use $http_host directly, possibly falling back to $host if it is empty. (Basically, any current request that does not include a Host: header is probably someone playing games. Sending them back a broken redirect is possibly not a problem. If you're willing to restrict your potential clients to "ones that send a sensible Host: header", you don't need to worry about the fallback.) That is, just always use return 301 $scheme://$http_host/local; instead of the current return 301 /local; (and even $scheme may be overkill, unless you use a single server{} block for http and https.) > Maybe this is something that would worth considering as an > enhancement. 
Making return use the port in the Host header or to > preserve backwards compatibility, introducing a switch, > request_port_in_redirect, complementing server_name_in_redirect, off > by default, and at the same time exposing this in a $request_port > variable. > > What do you think? I think it is probably worthwhile; and I think I won't be writing the code to implement it ;-) Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Aug 25 22:08:57 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 25 Aug 2015 23:08:57 +0100 Subject: Internal marked 503 error page returns default 404 In-Reply-To: References: <20150822235409.GA7915@daoine.org> Message-ID: <20150825220857.GC7915@daoine.org> On Tue, Aug 25, 2015 at 02:39:14AM +0000, ryd994 wrote: Hi there, > Why not use @named location directly? > > error_page 503 @unavailable; > location @unavailable { > alias /absolute/path/to/file; > } the "alias" directive cannot be used inside the named location f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Aug 25 23:45:45 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Tue, 25 Aug 2015 19:45:45 -0400 Subject: outstanding fastcgi connections Message-ID: <00ac7136f13a0377734085e4b0e14b42.NginxMailingListEnglish@forum.nginx.org> Hi, I have a FASTCGI server listening over UDS to HTTP request from NGINX. For some reason, the requests stopped reaching the FASTCGI server. netstats -nap shows one socket connection in LISTENING state (as expected), one in CONNECTED state (not sure there should be such session hanging around), and many socket connections in CONNECTING state. To remove all these UDS fcgi connections, I stopped nginx. The connections did not get removed. Shouldn't nginx clean up all these outstanding fastcgi connections when it is killed? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261251,261251#msg-261251 From nginx-forum at nginx.us Wed Aug 26 10:48:18 2015 From: nginx-forum at nginx.us (YemSalat) Date: Wed, 26 Aug 2015 06:48:18 -0400 Subject: Create a single config for multiple Apache virtual hosts. Message-ID: Hi everyone! First time posting here, I have been searching online but have not found any good solutions yet. I am trying to run nginx as reverse proxy for Apache, running multiple virtual hosts (domains) on the same ip. I wanted to know if it is possible to have a single nginx config, that would pass the correct url/hostname/path to Apache, without having to create a separate server block for each domain. For example if all domain directories are the same as their hostnames: /var/www/mydomain.com/ /var/www/anotherdomain.org/ ... If it is - are there any potential issues with this setup? Thanks in advance! 
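Roughly what I have in mind is a single catch-all server block along these lines (the Apache address/port is just an example):

    server {
        listen 80 default_server;
        server_name _;

        location / {
            # pass the original Host header so Apache can pick the right virtual host
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }
    }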
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261260,261260#msg-261260 From champetier.etienne at gmail.com Wed Aug 26 15:30:37 2015 From: champetier.etienne at gmail.com (Etienne Champetier) Date: Wed, 26 Aug 2015 17:30:37 +0200 Subject: disable 301 redirect for directory / use relative redirect / change scheme Message-ID: Hi, I have this setup browser -> ssl proxy -> nginx browser to ssl proxy is https only ssl proxy to nginx is http only now i browse to "https://exemple.com/aaa", where aaa is a directory, so nginx send back a 301 redirect with "Location: http://exemple.com/aaa/" Is it possible to send https instead of http (in Location), or send a relative header, like "Location: /aaa/", or just disable this redirection (in my case it's ok) Thanks in advance Etienne -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Wed Aug 26 15:36:40 2015 From: me at myconan.net (Edho Arief) Date: Thu, 27 Aug 2015 00:36:40 +0900 Subject: disable 301 redirect for directory / use relative redirect / change scheme In-Reply-To: References: Message-ID: On Thu, Aug 27, 2015 at 12:30 AM, Etienne Champetier wrote: > Hi, > > I have this setup > browser -> ssl proxy -> nginx > browser to ssl proxy is https only > ssl proxy to nginx is http only > > now i browse to "https://exemple.com/aaa", where aaa is a directory, > so nginx send back a 301 redirect with "Location: http://exemple.com/aaa/" > > Is it possible to send https instead of http (in Location), > or send a relative header, like "Location: /aaa/", > or just disable this redirection (in my case it's ok) > see if proxy_redirect[1] fits your need. http://nginx.org/r/proxy_redirect From champetier.etienne at gmail.com Wed Aug 26 15:52:48 2015 From: champetier.etienne at gmail.com (Etienne Champetier) Date: Wed, 26 Aug 2015 17:52:48 +0200 Subject: disable 301 redirect for directory / use relative redirect / change scheme In-Reply-To: References: Message-ID: 2015-08-26 17:36 GMT+02:00 Edho Arief : > On Thu, Aug 27, 2015 at 12:30 AM, Etienne Champetier > wrote: > > Hi, > > > > I have this setup > > browser -> ssl proxy -> nginx > > browser to ssl proxy is https only > > ssl proxy to nginx is http only > > > > now i browse to "https://exemple.com/aaa", where aaa is a directory, > > so nginx send back a 301 redirect with "Location: > http://exemple.com/aaa/" > > > > Is it possible to send https instead of http (in Location), > > or send a relative header, like "Location: /aaa/", > > or just disable this redirection (in my case it's ok) > > > > see if proxy_redirect[1] fits your need. > > http://nginx.org/r/proxy_redirect > > nginx isn't a proxy here, it's a backend (no proxy_pass in my conf) ssl proxy is a black box (not an nginx) > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pchychi at gmail.com Wed Aug 26 15:53:59 2015 From: pchychi at gmail.com (Payam Chychi) Date: Wed, 26 Aug 2015 08:53:59 -0700 Subject: disable 301 redirect for directory / use relative redirect / change scheme In-Reply-To: References: Message-ID: <55DDE117.7050807@gmail.com> On 2015-08-26, 8:36 AM, Edho Arief wrote: > On Thu, Aug 27, 2015 at 12:30 AM, Etienne Champetier > wrote: >> Hi, >> >> I have this setup >> browser -> ssl proxy -> nginx >> browser to ssl proxy is https only >> ssl proxy to nginx is http only >> >> now i browse to "https://exemple.com/aaa", where aaa is a directory, >> so nginx send back a 301 redirect with "Location: http://exemple.com/aaa/" >> >> Is it possible to send https instead of http (in Location), >> or send a relative header, like "Location: /aaa/", >> or just disable this redirection (in my case it's ok) >> > see if proxy_redirect[1] fits your need. > > http://nginx.org/r/proxy_redirect > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx use a rewrite 302 can easy help. If the client cant reach the intended site (say due to internal network/firewalls) then you can also Proxy the connection if needed. here is how the 302 rewrite would look like: rewrite ^ https://$hostname$request_uri? redirect; You can also mix this with an if statement in case you only want "http://example.com/aaa" to be redirected to HTTPS and not everything example: if ($host = example.com) { if ($uri = /aaa/) { rewrite ^ https://$hostname$request_uri? redirect; } } Something like that, this is untested but should work. Keep in mind that using IF like this has a performance cost (though small, it all depends on number of req/site load... something to keep in mind) Cheers, Payam Chychi From me at myconan.net Wed Aug 26 16:20:53 2015 From: me at myconan.net (Edho Arief) Date: Thu, 27 Aug 2015 01:20:53 +0900 Subject: disable 301 redirect for directory / use relative redirect / change scheme In-Reply-To: References: Message-ID: On Thu, Aug 27, 2015 at 12:52 AM, Etienne Champetier wrote: > > > 2015-08-26 17:36 GMT+02:00 Edho Arief : >> >> On Thu, Aug 27, 2015 at 12:30 AM, Etienne Champetier >> wrote: >> > Hi, >> > >> > I have this setup >> > browser -> ssl proxy -> nginx >> > browser to ssl proxy is https only >> > ssl proxy to nginx is http only >> > >> > now i browse to "https://exemple.com/aaa", where aaa is a directory, >> > so nginx send back a 301 redirect with "Location: >> > http://exemple.com/aaa/" >> > >> > Is it possible to send https instead of http (in Location), >> > or send a relative header, like "Location: /aaa/", >> > or just disable this redirection (in my case it's ok) >> > >> >> see if proxy_redirect[1] fits your need. >> >> http://nginx.org/r/proxy_redirect >> > nginx isn't a proxy here, it's a backend (no proxy_pass in my conf) > ssl proxy is a black box (not an nginx) > whoops right, sorry. 
This seems to work: if (-d $request_filename) { return 301 https://$host$uri/; } From champetier.etienne at gmail.com Wed Aug 26 16:27:55 2015 From: champetier.etienne at gmail.com (Etienne Champetier) Date: Wed, 26 Aug 2015 18:27:55 +0200 Subject: disable 301 redirect for directory / use relative redirect / change scheme In-Reply-To: References: Message-ID: 2015-08-26 18:20 GMT+02:00 Edho Arief : > On Thu, Aug 27, 2015 at 12:52 AM, Etienne Champetier > wrote: > > > > > > 2015-08-26 17:36 GMT+02:00 Edho Arief : > >> > >> On Thu, Aug 27, 2015 at 12:30 AM, Etienne Champetier > >> wrote: > >> > Hi, > >> > > >> > I have this setup > >> > browser -> ssl proxy -> nginx > >> > browser to ssl proxy is https only > >> > ssl proxy to nginx is http only > >> > > >> > now i browse to "https://exemple.com/aaa", where aaa is a directory, > >> > so nginx send back a 301 redirect with "Location: > >> > http://exemple.com/aaa/" > >> > > >> > Is it possible to send https instead of http (in Location), > >> > or send a relative header, like "Location: /aaa/", > >> > or just disable this redirection (in my case it's ok) > >> > > >> > >> see if proxy_redirect[1] fits your need. > >> > >> http://nginx.org/r/proxy_redirect > >> > > nginx isn't a proxy here, it's a backend (no proxy_pass in my conf) > > ssl proxy is a black box (not an nginx) > > > > whoops right, sorry. > > This seems to work: > > if (-d $request_filename) { return 301 https://$host$uri/; } > thanks, will try tomorrow > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Aug 26 16:59:39 2015 From: nginx-forum at nginx.us (biazus) Date: Wed, 26 Aug 2015 12:59:39 -0400 Subject: disable 301 redirect for directory / use relative redirect / change scheme In-Reply-To: References: Message-ID: Please try something like that: proxy_redirect http://$proxy_host/ $scheme://$host/; Regards, Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261275,261283#msg-261283 From nginx-forum at nginx.us Wed Aug 26 18:11:39 2015 From: nginx-forum at nginx.us (ionut_rusu) Date: Wed, 26 Aug 2015 14:11:39 -0400 Subject: Persistent connections timeout (keepalive on upstream) Message-ID: <6c207ed0311a9aabf2ecd3ef3f3ba619.NginxMailingListEnglish@forum.nginx.org> The short question is how can be set the timeout for an upstream for persistent connections (using keepalive) - looks like they never timeout. Details: I have a configuration where i need to setup a reverse proxy for SSL connections, and reuse the backend SSL connections. For doing this i'm using the keepalive directive on the upstream, something like this: ... upstream backend { server backend-host.com:443; keepalive 10; } ... backend-host.com is hosted on Amazon AWS and consists of an Elastic Load Balancer. This means that from time to time its IP will change (scaling up and down) My issue is that witth the current configuration the persistent connections are never refreshed (they don't have a timeout) and when the IP of the ELB changes then the nginx reverse proxy stops working. Let me know if more info is needed. 
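A possible workaround would be to drop the upstream block and let nginx re-resolve the ELB name at request time (the resolver address and TTL below are just examples), although that also gives up the keepalive pool:

    resolver 172.16.0.23 valid=30s;

    location / {
        set $elb_backend "backend-host.com";
        # with a variable in proxy_pass the name is resolved through the
        # configured resolver at run time (and cached for "valid" time),
        # so ELB IP changes are eventually picked up
        proxy_pass https://$elb_backend;
    }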
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261284,261284#msg-261284 From nginx-forum at nginx.us Wed Aug 26 18:35:25 2015 From: nginx-forum at nginx.us (biazus) Date: Wed, 26 Aug 2015 14:35:25 -0400 Subject: Persistent connections timeout (keepalive on upstream) In-Reply-To: <6c207ed0311a9aabf2ecd3ef3f3ba619.NginxMailingListEnglish@forum.nginx.org> References: <6c207ed0311a9aabf2ecd3ef3f3ba619.NginxMailingListEnglish@forum.nginx.org> Message-ID: <50b8ef57f86b068ff719aeec87b732ba.NginxMailingListEnglish@forum.nginx.org> Hi, You may check dynamic upstream module: https://github.com/GUI/nginx-upstream-dyanmic-servers Alternatives: http://www.senginx.org/en/index.php/Dynamic_DNS_Resolve http://openresty.org/ Regards, Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261284,261285#msg-261285 From quanah at zimbra.com Wed Aug 26 19:12:01 2015 From: quanah at zimbra.com (Quanah Gibson-Mount) Date: Wed, 26 Aug 2015 12:12:01 -0700 Subject: Persistent connections timeout (keepalive on upstream) In-Reply-To: <6c207ed0311a9aabf2ecd3ef3f3ba619.NginxMailingListEnglish@forum.nginx.org> References: <6c207ed0311a9aabf2ecd3ef3f3ba619.NginxMailingListEnglish@forum.nginx.org> Message-ID: --On Wednesday, August 26, 2015 3:11 PM -0400 ionut_rusu wrote: > The short question is how can be set the timeout for an upstream for > persistent connections (using keepalive) - looks like they never timeout. > > Details: > > I have a configuration where i need to setup a reverse proxy for SSL > connections, and reuse the backend SSL connections. For doing this i'm > using the keepalive directive on the upstream, something like this: > > ... > upstream backend { > server backend-host.com:443; > keepalive 10; > } Have you tried the nginx keepalive module? --Quanah -- Quanah Gibson-Mount Platform Architect Zimbra, Inc. -------------------- Zimbra :: the leader in open source messaging and collaboration From dmiller at amfes.com Wed Aug 26 19:33:48 2015 From: dmiller at amfes.com (dmiller at amfes.com) Date: Wed, 26 Aug 2015 12:33:48 -0700 Subject: php server side events [sse] Message-ID: I have several virtual servers operating with php-fpm perfectly fine. Buffering/caching with Wordpress is great. But now... I'm trying to implement a new site that uses php for server-side events to stream live updates to clients. This is, I believe, properly done with an infinite loop in php that will send events as they occur. The problem is the events seem to be getting buffered and don't appear. If I write the php program to end after sending an event then it works. If I restart php-fpm then it works - at least for a while. I've tried numerous config options - obviously what I'm trying isn't working. Any suggestions? -- Daniel From nginx-forum at nginx.us Wed Aug 26 19:42:18 2015 From: nginx-forum at nginx.us (ionut_rusu) Date: Wed, 26 Aug 2015 15:42:18 -0400 Subject: Persistent connections timeout (keepalive on upstream) In-Reply-To: <50b8ef57f86b068ff719aeec87b732ba.NginxMailingListEnglish@forum.nginx.org> References: <6c207ed0311a9aabf2ecd3ef3f3ba619.NginxMailingListEnglish@forum.nginx.org> <50b8ef57f86b068ff719aeec87b732ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <703c428db349570b00bc6330acb0f8c3.NginxMailingListEnglish@forum.nginx.org> Thanks Biazus, i'll give it a try. Do you know if the keepalive connections to the backend have a timeout ? do they recycle ? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261284,261288#msg-261288 From nginx-forum at nginx.us Wed Aug 26 19:43:12 2015 From: nginx-forum at nginx.us (ionut_rusu) Date: Wed, 26 Aug 2015 15:43:12 -0400 Subject: Persistent connections timeout (keepalive on upstream) In-Reply-To: References: Message-ID: <816e47dac6a31ef716a7ba0f5553a061.NginxMailingListEnglish@forum.nginx.org> Hi Quanah, I think your suggestion is for client -> nginx connections, i need a setting for nginx -> backend connections. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261284,261289#msg-261289 From nginx-forum at nginx.us Wed Aug 26 20:08:25 2015 From: nginx-forum at nginx.us (biazus) Date: Wed, 26 Aug 2015 16:08:25 -0400 Subject: Persistent connections timeout (keepalive on upstream) In-Reply-To: <703c428db349570b00bc6330acb0f8c3.NginxMailingListEnglish@forum.nginx.org> References: <6c207ed0311a9aabf2ecd3ef3f3ba619.NginxMailingListEnglish@forum.nginx.org> <50b8ef57f86b068ff719aeec87b732ba.NginxMailingListEnglish@forum.nginx.org> <703c428db349570b00bc6330acb0f8c3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0861047e472809865df89a696e34370d.NginxMailingListEnglish@forum.nginx.org> Yes, they do recicle, but it will depends how the keep alive timeout is configured in the backend. I believe there is a similar topic here: http://forum.nginx.org/read.php?2,249924,249953 Regards, Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261284,261290#msg-261290 From mdounin at mdounin.ru Wed Aug 26 21:08:18 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Aug 2015 00:08:18 +0300 Subject: php server side events [sse] In-Reply-To: References: Message-ID: <20150826210818.GA13309@mdounin.ru> Hello! On Wed, Aug 26, 2015 at 12:33:48PM -0700, dmiller at amfes.com wrote: > I have several virtual servers operating with php-fpm perfectly fine. > Buffering/caching with Wordpress is great. But now... > > I'm trying to implement a new site that uses php for server-side events to > stream live updates to clients. This is, I believe, properly done with an > infinite loop in php that will send events as they occur. The problem is > the events seem to be getting buffered and don't appear. > > If I write the php program to end after sending an event then it works. If > I restart php-fpm then it works - at least for a while. > > I've tried numerous config options - obviously what I'm trying isn't > working. Any suggestions? By default, nginx only starts sending data to a client once it has a full buffer ready. If you want to stream data immediately, you have to switch off buffering - either with the fastcgi_buffering directive, or using the X-Accel-Buffering header in a response. See http://nginx.org/r/fastcgi_buffering for details. Additionally, there are some buffering option in PHP. I'm not a PHP expert, but likely flush() function will help if the problem is on PHP side, see http://php.net/manual/en/function.flush.php. -- Maxim Dounin http://nginx.org/ From dmiller at amfes.com Wed Aug 26 23:53:31 2015 From: dmiller at amfes.com (dmiller at amfes.com) Date: Wed, 26 Aug 2015 16:53:31 -0700 Subject: php server side events [sse] In-Reply-To: <20150826210818.GA13309@mdounin.ru> References: <86a65566c44623b13dfcec2b0c42de8f@amfes.com> <20150826210818.GA13309@mdounin.ru> Message-ID: I thought I had done both...not seeing any difference in behaviour. --- Daniel On 2015-08-26 14:08, Maxim Dounin wrote: > Hello! 
> > On Wed, Aug 26, 2015 at 12:33:48PM -0700, dmiller at amfes.com wrote: > >> I have several virtual servers operating with php-fpm perfectly fine. >> Buffering/caching with Wordpress is great. But now... >> >> I'm trying to implement a new site that uses php for server-side >> events to >> stream live updates to clients. This is, I believe, properly done >> with an >> infinite loop in php that will send events as they occur. The problem >> is >> the events seem to be getting buffered and don't appear. >> >> If I write the php program to end after sending an event then it >> works. If >> I restart php-fpm then it works - at least for a while. >> >> I've tried numerous config options - obviously what I'm trying isn't >> working. Any suggestions? > > By default, nginx only starts sending data to a client once it has > a full buffer ready. If you want to stream data immediately, > you have to switch off buffering - either with the fastcgi_buffering > directive, or using the X-Accel-Buffering header in a response. > See http://nginx.org/r/fastcgi_buffering for details. > > Additionally, there are some buffering option in PHP. I'm not a > PHP expert, but likely flush() function will help if the problem > is on PHP side, see http://php.net/manual/en/function.flush.php. From sowani at gmail.com Thu Aug 27 04:16:18 2015 From: sowani at gmail.com (Atul Sowani) Date: Thu, 27 Aug 2015 09:46:18 +0530 Subject: nginx dockerfile. Message-ID: Hi, I checked nginx code repository as well as Internet to see if I can get a Dockerfile to build nginx. I got a few references (like https://github.com/dockerfile/nginx) but those are essentially to _run_ nginx, not _build_ it. I am looking to build different versions of nginx (say top-of-the-tree, latest-stable etc.) easily. It would be very convenient if a Dockerfile is presented with the source code which will build one of the versions mentioned above. If required, a slight modification can then build any version of nginx. I would highly appreciate if somebody could point me to a source where I can get a Dockerfile which builds nginx. Thanks, Atul. From valery+nginxen at grid.net.ru Thu Aug 27 08:19:03 2015 From: valery+nginxen at grid.net.ru (Valery Kholodkov) Date: Thu, 27 Aug 2015 09:19:03 +0100 (BST) Subject: [Announcement] New book about Nginx Message-ID: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> Hi everyone! I would like to announce that a new book about nginx called 'Nginx Essentials' that I was working on the last 6 months has been published and is now available in E-stores worldwide, e.g.: http://www.amazon.com/Nginx-Essentials-Valery-Kholodkov/dp/1785289535 This book is ideal for skilled web masters and site reliability engineers who want to switch to Nginx or solidify their knowledge of Nginx. This will help you to learn how to set up, configure, and operate an Nginx installation for day-to-day use, explore the vast features of Nginx to manage it like a pro, and use them successfully to run your website. This book is an example-based guide to get the best out of Nginx and reduce resource usage footprint. The work on this book was a new interesting journey for me and I would like to thank the Packt Publishing team for making everything happen, as well as technical reviewers. I hope this book will be usefull for you and you enjoy reading it! 
-- Regards, Valery Kholodkov From nginx-forum at nginx.us Thu Aug 27 13:55:27 2015 From: nginx-forum at nginx.us (erenilhan) Date: Thu, 27 Aug 2015 09:55:27 -0400 Subject: I install BigBlueButton 0.9.0-beta end have problem nginx and do not install bbb In-Reply-To: <1418471355.566695786@f330.i.mail.ru> References: <1418471355.566695786@f330.i.mail.ru> Message-ID: <5df8a8c487b474721de3777445ade197.NginxMailingListEnglish@forum.nginx.org> I have same problem. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255475,261299#msg-261299 From nginx-forum at nginx.us Thu Aug 27 16:19:23 2015 From: nginx-forum at nginx.us (log) Date: Thu, 27 Aug 2015 12:19:23 -0400 Subject: What cause the error for this http/https wordpress configuration file? Message-ID: <5928ab45a3e858af29cabc566abb9a13.NginxMailingListEnglish@forum.nginx.org> I can not open any link except http://example.com/readme.txt with following server block. Any tips? server { listen 80 default_server; ## listen for ipv4; this line is default and implied listen [::]:80 default_server ipv6only=on; ## listen for ipv6 server_name example.com www.example.com *.example.com; # return 301 https://$server_name$request_uri; #} # #server { listen 443 ssl; listen [::]:443 ssl ipv6only=on; keepalive_timeout 70; #ssl on; ssl_certificate /etc/nginx/cert/example.com-unified.crt; ssl_certificate_key /etc/nginx/cert/example.com.key; server_name example.com www.example.com *.example.com; server_name_in_redirect off; charset utf-8; root /usr/share/nginx/html/example.com; access_log /home/wwwlogs/example.com.access.log; error_log /home/wwwlogs/example.com.error.log; if ($http_host != "www.example.com") { rewrite ^ https://www.example.com$request_uri permanent; } index index.php index.html index.htm; #fastcgi_cache start set $skip_cache 0; # POST requests and urls with a query string should always go to PHP if ($request_method = POST) { set $skip_cache 1; } if ($query_string != "") { set $skip_cache 1; } # Don't cache uris containing the following segments if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") { set $skip_cache 1; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } location / { # try files in the specified order try_files $uri $uri/ /index.php?$args /index.html; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.(php|php5)?$ { # include snippets/fastcgi-php.conf; # ModSecurityEnabled on; ModSecurityConfig modsecurity.conf; try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; #DEBUG include /etc/nginx/fastcgi_params; # use upstream hhvm/php fastcgi_pass php; fastcgi_cache_methods GET HEAD; # Only GET and HEAD methods apply fastcgi_cache_bypass $skip_cache; #apply the "$skip_cache" variable fastcgi_no_cache $skip_cache; fastcgi_cache WORDPRESS; fastcgi_cache_valid 200 301 302 60m; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # send bad requests to 404 fastcgi_intercept_errors on; } location ~ /purge(/.*) { fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1"; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } location ~* 
^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf|swf|flv|ico)$ { access_log off; log_not_found off; expires max; } location ~ .*\.(js|css)?$ { expires 7d; } location = /robots.txt { access_log off; log_not_found off; } # Make sure files with the following extensions do not get loaded by nginx because nginx would display the source code, and these files can contain PASSWORDS! # location ~* \.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|\.php_ { deny all; } location ~ /\. { deny all; access_log off; log_not_found off; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } sysguard on; sysguard_load load=1.8 action=/loadlimit; sysguard_mem swapratio=90% action=/swaplimit; location /loadlimit { return 503; } location /swaplimit { return 503; } if ( $query_string ~* ".*[\;'\<\>].*" ){ return 404; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261301,261301#msg-261301 From reallfqq-nginx at yahoo.fr Thu Aug 27 17:03:37 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 27 Aug 2015 19:03:37 +0200 Subject: [Announcement] New book about Nginx In-Reply-To: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> Message-ID: *Is this for this book I (and many many other people, as they are used to, search about them) have been contacted by those Packt Publishing crooks to review, effectively asking me to work for free for them?* I am glad you make books, but skilled people usually find what they seek for by themselves, not in a book dedicated to a specific technology. Books are more suitable for concepts and design, not for specificities and particular implementations. Write a book about making a website instead. Or don't. *This mailing list is about discussing the (FOSS?) nginx product, not about self-promoting anonymous nginx experts and selling related stuff. Step away.* That said, I wish you to make money, I am sorry for you having chosen to work with Packt Publishing, and I wish people not to believe books from Nonames about specific technologies are in any way useful. --- *B. R.* On Thu, Aug 27, 2015 at 10:19 AM, Valery Kholodkov < valery+nginxen at grid.net.ru> wrote: > > Hi everyone! > > I would like to announce that a new book about nginx called 'Nginx > Essentials' that I was working on the last 6 months has been published and > is now available in E-stores worldwide, e.g.: > > http://www.amazon.com/Nginx-Essentials-Valery-Kholodkov/dp/1785289535 > > This book is ideal for skilled web masters and site reliability engineers > who want to switch to Nginx or solidify their knowledge of Nginx. > > This will help you to learn how to set up, configure, and operate an Nginx > installation for day-to-day use, explore the vast features of Nginx to > manage it like a pro, and use them successfully to run your website. > > This book is an example-based guide to get the best out of Nginx and > reduce resource usage footprint. > > The work on this book was a new interesting journey for me and I would > like to thank the Packt Publishing team for making everything happen, as > well as technical reviewers. > > I hope this book will be usefull for you and you enjoy reading it! 
> > -- > Regards, > Valery Kholodkov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artemrts at ukr.net Thu Aug 27 17:51:06 2015 From: artemrts at ukr.net (wishmaster) Date: Thu, 27 Aug 2015 20:51:06 +0300 Subject: [Announcement] New book about Nginx In-Reply-To: References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> Message-ID: <1440697036.730955692.7b0v2v6q@frv34.fwdcdn.com> Hi, this is not my business but this reply from "B.R." has excited me. This is example of disrespect and discourtesy to the another's work. Please, keep your fucking opinion to yourself. --- w --- Original message --- From: "B.R." Date: 27 August 2015, 20:04:31 Is this for this book I (and many many other people, as they are used to, search about them) have been contacted by those Packt Publishing crooks to review, effectively asking me to work for free for them? I am glad you make books, but skilled people usually find what they seek for by themselves, not in a book dedicated to a specific technology. Books are more suitable for concepts and design, not for specificities and particular implementations. Write a book about making a website instead. Or don't. This mailing list is about discussing the (FOSS?) nginx product, not about self-promoting anonymous nginx experts and selling related stuff. Step away. That said, I wish you to make money, I am sorry for you having chosen to work with Packt Publishing, and I wish people not to believe books from Nonames about specific technologies are in any way useful. --- B. R. On Thu, Aug 27, 2015 at 10:19 AM, Valery Kholodkov < valery+nginxen at grid.net.ru > wrote: Hi everyone! I would like to announce that a new book about nginx called 'Nginx Essentials' that I was working on the last 6 months has been published and is now available in E-stores worldwide, e.g.: http://www.amazon.com/Nginx-Essentials-Valery-Kholodkov/dp/1785289535 This book is ideal for skilled web masters and site reliability engineers who want to switch to Nginx or solidify their knowledge of Nginx. This will help you to learn how to set up, configure, and operate an Nginx installation for day-to-day use, explore the vast features of Nginx to manage it like a pro, and use them successfully to run your website. This book is an example-based guide to get the best out of Nginx and reduce resource usage footprint. The work on this book was a new interesting journey for me and I would like to thank the Packt Publishing team for making everything happen, as well as technical reviewers. I hope this book will be usefull for you and you enjoy reading it! -- Regards, Valery Kholodkov _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu Aug 27 18:54:17 2015 From: r at roze.lv (Reinis Rozitis) Date: Thu, 27 Aug 2015 21:54:17 +0300 Subject: What cause the error for this http/https wordpress configuration file? 
In-Reply-To: <5928ab45a3e858af29cabc566abb9a13.NginxMailingListEnglish@forum.nginx.org> References: <5928ab45a3e858af29cabc566abb9a13.NginxMailingListEnglish@forum.nginx.org> Message-ID: <852F28B803A248D1B1184897A78493A8@NeiRoze> > I can not open any link except http://example.com/readme.txt with > following server block. > Any tips? How did you come up with such configuration in first place? Second what exact response you get when opening something else (besides the readme.txt) and what does the access/error log contain for particular request - it should indicate the reason you can't open the particular url - depending on the response/http status it might be different thing (you have quite many location/deny blocks also 3rd party modules which could block the requests, the php backend might not be correctly configured (or just down) etc). If unsure what the resulting config actually does I would start with a more simple version (bare server{} just with a php backend definition). I mean a lot of your current configuration doesn't make sense or is redundant. Just for example you have: if ($request_method = POST) { set $skip_cache 1; } and then: fastcgi_cache_bypass $skip_cache; fastcgi_cache_methods GET HEAD; Where fastcgi_cache_methods default value allready is only GET and HEAD therefore the particular if() is not necessary nor the cache_methods setting itself. rr From vbart at nginx.com Thu Aug 27 19:44:41 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 27 Aug 2015 22:44:41 +0300 Subject: [Announcement] New book about Nginx In-Reply-To: References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> Message-ID: <1976552.nVsV82jSmN@vbart-workstation> On Thursday 27 August 2015 19:03:37 B.R. wrote: > *Is this for this book I (and many many other people, as they are used to, > search about them) have been contacted by those Packt Publishing crooks to > review, effectively asking me to work for free for them?* > > I am glad you make books, but skilled people usually find what they seek > for by themselves, not in a book dedicated to a specific technology. Books > are more suitable for concepts and design, not for specificities and > particular implementations. > Write a book about making a website instead. Or don't. > > *This mailing list is about discussing the (FOSS?) nginx product, not about > self-promoting anonymous nginx experts and selling related stuff. Step > away.* > That said, I wish you to make money, I am sorry for you having chosen to > work with Packt Publishing, and I wish people not to believe books from > Nonames about specific technologies are in any way useful. [..] The author of the book, Valery Kholodkov is a well known specialist. He is also author of several nginx modules (his Upload module is among the most popular 3rd-party modules). He has great knowledge of nginx internals, and his blog http://www.nginxguts.com/ is one of the best resources for beginners in nginx development. In 2008 he was a speaker at large IT conference in Moscow about the nginx internals and modules development. When in 2010 I started developing my first module for nginx, I was completely new to nginx sources. Valery's article and responses to my questions in mailing list were the most valuable for understanding how nginx works. So I'm sure that he wrote a good book about nginx, and I'm so sorry to read such response to his announcement. wbr, Valentin V. 
Bartenev From jim at ohlste.in Thu Aug 27 20:17:56 2015 From: jim at ohlste.in (Jim Ohlstein) Date: Thu, 27 Aug 2015 16:17:56 -0400 Subject: [Announcement] New book about Nginx In-Reply-To: References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> Message-ID: <55DF7074.9000807@ohlste.in> [Intentionally top posted] Spoken by the self-important, pompous ass "B.R." who has now taken on the title of "guardian of the mailing list". Your referring to Valery as "anonymous" is more than a bit ironic, since you choose to go by initials that may or may not be real and he, like most of us, isn't afraid to use his real name. Pot, kettle, black. Put them in a sentence for me. If you don't understand English idiom that's fine. Google it. As long as you are in charge of the mailing list now, how about you STOP top posting and using that horrible HTML email client that pisses off the rest of us who actually DO understand mailing list etiquette? On 8/27/15 1:03 PM, B.R. wrote: > /*Is this for this book I (and many many other people, as they are used > to, search about them) have been contacted by those Packt Publishing > crooks to review, effectively asking me to work for free for them?*/ > > I am glad you make books, but skilled people usually find what they seek > for by themselves, not in a book dedicated to a specific technology. > Books are more suitable for concepts and design, not for specificities > and particular implementations. > Write a book about making a website instead. Or don't. > > /This mailing list is about discussing the (FOSS?) nginx product, not > about self-promoting anonymous nginx experts and selling related stuff. > Step away./ > That said, I wish you to make money, I am sorry for you having chosen to > work with Packt Publishing, and I wish people not to believe books from > Nonames about specific technologies are in any way useful. > --- > *B. R.* > > On Thu, Aug 27, 2015 at 10:19 AM, Valery Kholodkov > > wrote: > > > Hi everyone! > > I would like to announce that a new book about nginx called 'Nginx > Essentials' that I was working on the last 6 months has been > published and is now available in E-stores worldwide, e.g.: > > http://www.amazon.com/Nginx-Essentials-Valery-Kholodkov/dp/1785289535 > > This book is ideal for skilled web masters and site reliability > engineers who want to switch to Nginx or solidify their knowledge of > Nginx. > > This will help you to learn how to set up, configure, and operate an > Nginx installation for day-to-day use, explore the vast features of > Nginx to manage it like a pro, and use them successfully to run your > website. > > This book is an example-based guide to get the best out of Nginx and > reduce resource usage footprint. > > The work on this book was a new interesting journey for me and I > would like to thank the Packt Publishing team for making everything > happen, as well as technical reviewers. > > I hope this book will be usefull for you and you enjoy reading it! > > -- > Regards, > Valery Kholodkov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." 
- Mark Twain From sarah at nginx.com Thu Aug 27 22:29:24 2015 From: sarah at nginx.com (Sarah Novotny) Date: Thu, 27 Aug 2015 16:29:24 -0600 Subject: [Announcement] New book about Nginx In-Reply-To: <55DF7074.9000807@ohlste.in> References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> <55DF7074.9000807@ohlste.in> Message-ID: <560698FE-912E-48F2-A391-5344C0CD465D@nginx.com> [also intentionally top posted] This is a mailing list for discussion of things that are *helpful* to NGINX users. People that subscribe to this mailing list come from a variety of educational backgrounds and experiences and all learn differently. This means that things which may not be helpful to you may well be helpful to others. This said, even when we disagree, it is paramount that we are respectful and gracious given that email lacks any sort of tone except the words we chose. One of the things which is amazing about this community is that there is such depth and diversity of experience that can be shared. Please continue to be awesome to one another remember that many other may be helped even when something seems irrelevant to you. Now, in the spirit of respecting each other?s contributions and time, let?s move on to more constructive topics and if there?s individual feedback on how people contribute or communicate, let?s keep it direct to the people involved and constructive rather than argumentative. sarah > On Aug 27, 2015, at 2:17 PM, Jim Ohlstein wrote: > > [Intentionally top posted] > > Spoken by the self-important, pompous ass "B.R." who has now taken on the title of "guardian of the mailing list". Your referring to Valery as "anonymous" is more than a bit ironic, since you choose to go by initials that may or may not be real and he, like most of us, isn't afraid to use his real name. Pot, kettle, black. Put them in a sentence for me. If you don't understand English idiom that's fine. Google it. > > As long as you are in charge of the mailing list now, how about you STOP top posting and using that horrible HTML email client that pisses off the rest of us who actually DO understand mailing list etiquette? > > On 8/27/15 1:03 PM, B.R. wrote: >> /*Is this for this book I (and many many other people, as they are used >> to, search about them) have been contacted by those Packt Publishing >> crooks to review, effectively asking me to work for free for them?*/ >> >> I am glad you make books, but skilled people usually find what they seek >> for by themselves, not in a book dedicated to a specific technology. >> Books are more suitable for concepts and design, not for specificities >> and particular implementations. >> Write a book about making a website instead. Or don't. >> >> /This mailing list is about discussing the (FOSS?) nginx product, not >> about self-promoting anonymous nginx experts and selling related stuff. >> Step away./ >> That said, I wish you to make money, I am sorry for you having chosen to >> work with Packt Publishing, and I wish people not to believe books from >> Nonames about specific technologies are in any way useful. >> --- >> *B. R.* >> >> On Thu, Aug 27, 2015 at 10:19 AM, Valery Kholodkov >> > wrote: >> >> >> Hi everyone! 
>> >> I would like to announce that a new book about nginx called 'Nginx >> Essentials' that I was working on the last 6 months has been >> published and is now available in E-stores worldwide, e.g.: >> >> http://www.amazon.com/Nginx-Essentials-Valery-Kholodkov/dp/1785289535 >> >> This book is ideal for skilled web masters and site reliability >> engineers who want to switch to Nginx or solidify their knowledge of >> Nginx. >> >> This will help you to learn how to set up, configure, and operate an >> Nginx installation for day-to-day use, explore the vast features of >> Nginx to manage it like a pro, and use them successfully to run your >> website. >> >> This book is an example-based guide to get the best out of Nginx and >> reduce resource usage footprint. >> >> The work on this book was a new interesting journey for me and I >> would like to thank the Packt Publishing team for making everything >> happen, as well as technical reviewers. >> >> I hope this book will be usefull for you and you enjoy reading it! >> >> -- >> Regards, >> Valery Kholodkov >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -- > Jim Ohlstein > > > "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Thu Aug 27 23:56:03 2015 From: nginx-forum at nginx.us (log) Date: Thu, 27 Aug 2015 19:56:03 -0400 Subject: What cause the error for this http/https wordpress configuration file? In-Reply-To: <852F28B803A248D1B1184897A78493A8@NeiRoze> References: <852F28B803A248D1B1184897A78493A8@NeiRoze> Message-ID: <2e8c09c2aaa8011e67d5a70b2064a2a2.NginxMailingListEnglish@forum.nginx.org> Reinis, Great thanks for the your tips. Here is the update. This is for a wordpress blog, with http and https access. We dont need to redirect http traffic to https. In addition, I want to access it either by http: //example.com, http: //www.example.com, https: //example.com, or https: //www.example.com There are several problems caused by the following configuration. 1. http:// www.example.com/fold1/readme.php will be redirected to https:// fold1/readme.php 2. https: //example.com/fold1/readme.php will be redirected to https: //fold1/readme.php 3. https: //www.example.com/fold1/readme.php was loaded over HTTPS, but requested an insecure script 'http: //www.example.com/fold1/js/user-profile.min.js?ver=4.3'. This request has been blocked; the content must be served over HTTPS. readme.php:1 Mixed Content: The page at 'https: //www.example.com/fold1/readme.php' was loaded over HTTPS, but requested an insecure script 'http: //www.example.com/fold1/js/language-chooser.min.js?ver=4.3'. This request has been blocked; the content must be served over HTTPS. . 
server { listen 80 default_server; ## listen for ipv4; this line is default and implied listen [::]:80 default_server ipv6only=on; ## listen for ipv6 server_name example.com www.example.com *.example.com; # return 301 https://$server_name$request_uri; #} # #server { listen 443 ssl; listen [::]:443 ssl ipv6only=on; keepalive_timeout 70; #ssl on; ssl_certificate /etc/nginx/cert/example.com-unified.crt; ssl_certificate_key /etc/nginx/cert/example.com.key; server_name example.com www.example.com *.example.com; server_name_in_redirect off; charset utf-8; root /usr/share/nginx/html/example.com; access_log /home/wwwlogs/example.com.access.log; error_log /home/wwwlogs/example.com.error.log; #if ($http_host != "www.example.com") { # rewrite ^ https://www.example.com$request_uri permanent; #} index index.php index.html index.htm; #fastcgi_cache start set $skip_cache 0; # POST requests and urls with a query string should always go to PHP # fastcgi_cache_methods default value allready is only GET and HEAD #if ($request_method = POST) { # set $skip_cache 1; #} if ($query_string != "") { set $skip_cache 1; } # Don't cache uris containing the following segments if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") { set $skip_cache 1; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } location / { # try files in the specified order try_files $uri $uri/ /index.php?$args /index.html; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.(php|php5)?$ { # include snippets/fastcgi-php.conf; # ModSecurityEnabled on; ModSecurityConfig modsecurity.conf; try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; #DEBUG include /etc/nginx/fastcgi_params; # use upstream hhvm/php fastcgi_pass php; # fastcgi_cache_methods GET HEAD; # Only GET and HEAD methods apply fastcgi_cache_bypass $skip_cache; #apply the "$skip_cache" variable fastcgi_no_cache $skip_cache; fastcgi_cache WORDPRESS; fastcgi_cache_valid 200 301 302 60m; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # send bad requests to 404 fastcgi_intercept_errors on; } location ~ /purge(/.*) { fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1"; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf|swf|flv|ico)$ { access_log off; log_not_found off; expires max; } location ~ .*\.(js|css)?$ { expires 7d; } location = /robots.txt { access_log off; log_not_found off; } # Make sure files with the following extensions do not get loaded by nginx because nginx would display the source code, and these files can contain PASSWORDS! # location ~* \.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|\.php_ { deny all; } location ~ /\. 
{ deny all; access_log off; log_not_found off; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } sysguard on; sysguard_load load=1.8 action=/loadlimit; sysguard_mem swapratio=90% action=/swaplimit; location /loadlimit { return 503; } location /swaplimit { return 503; } if ( $query_string ~* ".*[\;'\<\>].*" ){ return 404; } } ####### Following are the logs ######## example.com server access log: 101.102.224.162 - - [27/Aug/2015:22:30:20 +0000] "GET //cgi-bin/webcm?getpage=../html/menus/menu2.html&var:lang=%26%20allcfgconv%20-C%20voip%20-c%20-o%20-%20../../../../../var/tmp/voip.cfg%20%2 HTTP/1.1" 500 796 "-" "curl/7.29.0" 101.102.210.246 - - [27/Aug/2015:23:27:27 +0000] "GET /fold1/readme.php HTTP/1.1" 200 3065 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2403.107 Safari/537.36" 101.102.210.246 - - [27/Aug/2015:23:27:28 +0000] "GET /fold1/readme.php HTTP/1.1" 200 3065 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2403.107 Safari/537.36" nginx main error log: #lots of "ignore long locked inactive cache entry" errors like follows: 2015/08/27 22:01:04 [alert] 22603#0: ignore long locked inactive cache entry 1054513b79bde8beb8798358f09d0: There is no example.com server error log generated Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261301,261308#msg-261308 From r at roze.lv Fri Aug 28 00:26:36 2015 From: r at roze.lv (Reinis Rozitis) Date: Fri, 28 Aug 2015 03:26:36 +0300 Subject: What cause the error for this http/https wordpress configuration file? In-Reply-To: <2e8c09c2aaa8011e67d5a70b2064a2a2.NginxMailingListEnglish@forum.nginx.org> References: <852F28B803A248D1B1184897A78493A8@NeiRoze> <2e8c09c2aaa8011e67d5a70b2064a2a2.NginxMailingListEnglish@forum.nginx.org> Message-ID: > We dont need to redirect http traffic to https. In addition, I want to > access it either by http: //example.com, http: //www.example.com, https: > //example.com, or https: //www.example.com > > 3. https: //www.example.com/fold1/readme.php was loaded over HTTPS, but > requested an insecure script 'http: //www.example.com/fold1/js/user-profile.min.js?ver=4.3'. This request has been blocked; the content must be served over HTTPS. readme.php:1 Since the access log doesn't show any denied requests this seems as a WordPress configuration issue (though I'm no WP expert) - I imagine you have configured Wordpress with a global URL (in General settings) like http://www.example.com but that way when you open the https:// version all the assets (js/css) in the html source will have the absolute path/url (you can check the source for src="http://.. ") which by default are being blocked by browsers (it doesn't even get to nginx) as non-secure content and the page without any css styles or scripts can look empty/broken. If you don't want to force the http->https redirect you should either configure the the WP with relative url or skip the protocol at all (eg use just //example.com). A more lengthy article can be read here https://managewp.com/wordpress-ssl-settings-and-how-to-resolve-mixed-content-warnings rr From kevin at nginx.com Fri Aug 28 07:05:58 2015 From: kevin at nginx.com (Kevin Jones) Date: Fri, 28 Aug 2015 00:05:58 -0700 Subject: nginx dockerfile. In-Reply-To: References: Message-ID: <1CB7F189-34BD-4987-8708-9DC85500F19E@nginx.com> Hello Atul, Is this what you are looking for? 
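For readers following the Dockerfile thread: a generic sketch of building nginx from source inside a Dockerfile, assuming a Debian base image and nginx 1.8.0; this is only an illustration of the idea, not the repository linked below.

FROM debian:jessie

# Toolchain and libraries for a from-source build (assumed minimal set).
RUN apt-get update && apt-get install -y \
        build-essential curl libpcre3-dev zlib1g-dev libssl-dev \
    && rm -rf /var/lib/apt/lists/*

# Change this single line to build a different release.
ENV NGINX_VERSION 1.8.0

RUN curl -fSL http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz | tar xz -C /usr/src \
    && cd /usr/src/nginx-${NGINX_VERSION} \
    && ./configure --prefix=/etc/nginx --with-http_ssl_module \
    && make \
    && make install

EXPOSE 80 443

# Run in the foreground so the container does not exit immediately.
CMD ["/etc/nginx/sbin/nginx", "-g", "daemon off;"]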
Keep in mind I just threw this together to give you an idea of how you could do this and it should be tested. Let me know if you have any suggestions or questions. https://github.com/kmjones1979/docker-nginx-compiled/blob/master/Dockerfile git at github.com:kmjones1979/docker-nginx-compiled.git --- Kevin http://nginx.org/ On 8/26/15, 9:16 PM, "nginx on behalf of Atul Sowani" wrote: >Hi, > >I checked nginx code repository as well as Internet to see if I can >get a Dockerfile to build nginx. I got a few references (like >https://github.com/dockerfile/nginx) but those are essentially to >_run_ nginx, not _build_ it. > >I am looking to build different versions of nginx (say >top-of-the-tree, latest-stable etc.) easily. It would be very >convenient if a Dockerfile is presented with the source code which >will build one of the versions mentioned above. If required, a slight >modification can then build any version of nginx. > >I would highly appreciate if somebody could point me to a source where >I can get a Dockerfile which builds nginx. > >Thanks, >Atul. > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > From razinkov at gmail.com Fri Aug 28 09:04:07 2015 From: razinkov at gmail.com (Ilja Razinkov) Date: Fri, 28 Aug 2015 12:04:07 +0300 Subject: Raw socket answer (non-http request reply) Message-ID: i have to handle raw socket connections in simple manner: any connection to specific port should be served with predefined data (text string), without http headers, wrappings, etc. Is it possible to do with nginx? May be there is plugin or something similar. To be specific - i wish to serve Adobe socket policy file on 843 port. It is important, that server should send policy request strictly "as-is", without additional content inside raw socket answer Thanks in advance! From sowani at gmail.com Fri Aug 28 09:27:33 2015 From: sowani at gmail.com (Atul Sowani) Date: Fri, 28 Aug 2015 14:57:33 +0530 Subject: nginx dockerfile. In-Reply-To: <1CB7F189-34BD-4987-8708-9DC85500F19E@nginx.com> References: <1CB7F189-34BD-4987-8708-9DC85500F19E@nginx.com> Message-ID: Hi Kevin, Many thanks for the pointer. That is exactly what I am looking for. Now counting on my own experience, I think such dockerfiles should be integrated with the nginx source code itself. This will make the nginx source code self-contained and build process consistent. Otherwise, developers like me will go in every direction looking out for a dockerfile and there could be different versions of it available from multiple sources. Having an official version along with source code will save a lot of pain and will bring good things along. It's just like having Makefile along with the code. As long as the base image for container is available, I don't see any reason not to include a dockerfile with source code. -- Just a thought. Thanks again for pointing me to the right source! Best regards, Atul. On Fri, Aug 28, 2015 at 12:35 PM, Kevin Jones wrote: > Hello Atul, > > Is this what you are looking for? Keep in mind I just threw this together to give you an idea of how you could do this and it should be tested. Let me know if you have any suggestions or questions. 
> > https://github.com/kmjones1979/docker-nginx-compiled/blob/master/Dockerfile > > git at github.com:kmjones1979/docker-nginx-compiled.git > > --- > Kevin > http://nginx.org/ > > > > > > > > On 8/26/15, 9:16 PM, "nginx on behalf of Atul Sowani" wrote: > >>Hi, >> >>I checked nginx code repository as well as Internet to see if I can >>get a Dockerfile to build nginx. I got a few references (like >>https://github.com/dockerfile/nginx) but those are essentially to >>_run_ nginx, not _build_ it. >> >>I am looking to build different versions of nginx (say >>top-of-the-tree, latest-stable etc.) easily. It would be very >>convenient if a Dockerfile is presented with the source code which >>will build one of the versions mentioned above. If required, a slight >>modification can then build any version of nginx. >> >>I would highly appreciate if somebody could point me to a source where >>I can get a Dockerfile which builds nginx. >> >>Thanks, >>Atul. >> >>_______________________________________________ >>nginx mailing list >>nginx at nginx.org >>http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mq+nginx at ucw.cz Fri Aug 28 11:41:01 2015 From: mq+nginx at ucw.cz (Jan Moskyto Matejka) Date: Fri, 28 Aug 2015 13:41:01 +0200 Subject: Raw socket answer (non-http request reply) In-Reply-To: References: Message-ID: <20150828114101.GA13829@kaficko.joja.cz> On Fri, Aug 28, 2015 at 12:04:07PM +0300, Ilja Razinkov wrote: > i have to handle raw socket connections in simple manner: any > connection to specific port should be served with predefined data > (text string), without http headers, wrappings, etc. > > Is it possible to do with nginx? May be there is plugin or something similar. > > To be specific - i wish to serve Adobe socket policy file on 843 port. > It is important, that server should send policy request strictly > "as-is", without additional content inside raw socket answer If I understand correctly, you want this: - client opens connection, server accepts it - client sends nothing - server sends a pre-defined file - server closes connection Assuming you have a linux server, simple inetd service is enough. /etc/inetd.conf: ... adobe stream tcp nowait nobody cat /path/to/the/file/to/be/sent ... /etc/services: ... adobe 843/tcp ... Both these files should include the given lines, the ... means there are other lines that should probably be kept. Not tested, but should work. No nginx needed, it's an overkill for this. Moskyto From reallfqq-nginx at yahoo.fr Sat Aug 29 09:31:43 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 29 Aug 2015 11:31:43 +0200 Subject: [Announcement] New book about Nginx In-Reply-To: <1440697036.730955692.7b0v2v6q@frv34.fwdcdn.com> References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> <1440697036.730955692.7b0v2v6q@frv34.fwdcdn.com> Message-ID: I do not care about his book, and I simply stated about a generality about that kind of books. Nothing judging the work behind it, thus not disrespecting it. Learn to read. Moreover users' ML are made for users of a specific product, with questions/trouble about it, thus not made for self-promotion. What ever, except maybe coming from developers of the product, because they made it possible. If you are incapable of understanding or accepting others' opinion, then you should follow you own advice and keep your fucking opinion to yourself. --- *B. 
R.* On Thu, Aug 27, 2015 at 7:51 PM, wishmaster wrote: > Hi, > this is not my business but this reply from "B.R." has excited me. This is > example of disrespect and discourtesy to the another's work. Please, keep > your fucking opinion to yourself. > > --- > w > > > --- Original message --- > From: "B.R." > Date: 27 August 2015, 20:04:31 > > *Is this for this book I (and many many other people, as they are used to, > search about them) have been contacted by those Packt Publishing crooks to > review, effectively asking me to work for free for them?* > > I am glad you make books, but skilled people usually find what they seek > for by themselves, not in a book dedicated to a specific technology. Books > are more suitable for concepts and design, not for specificities and > particular implementations. > Write a book about making a website instead. Or don't. > > *This mailing list is about discussing the (FOSS?) nginx product, not > about self-promoting anonymous nginx experts and selling related stuff. > Step away.* > That said, I wish you to make money, I am sorry for you having chosen to > work with Packt Publishing, and I wish people not to believe books from > Nonames about specific technologies are in any way useful. > --- > *B. R.* > > On Thu, Aug 27, 2015 at 10:19 AM, Valery Kholodkov < > valery+nginxen at grid.net.ru> wrote: > > > Hi everyone! > > I would like to announce that a new book about nginx called 'Nginx > Essentials' that I was working on the last 6 months has been published and > is now available in E-stores worldwide, e.g.: > > http://www.amazon.com/Nginx-Essentials-Valery-Kholodkov/dp/1785289535 > > This book is ideal for skilled web masters and site reliability engineers > who want to switch to Nginx or solidify their knowledge of Nginx. > > This will help you to learn how to set up, configure, and operate an Nginx > installation for day-to-day use, explore the vast features of Nginx to > manage it like a pro, and use them successfully to run your website. > > This book is an example-based guide to get the best out of Nginx and > reduce resource usage footprint. > > The work on this book was a new interesting journey for me and I would > like to thank the Packt Publishing team for making everything happen, as > well as technical reviewers. > > I hope this book will be usefull for you and you enjoy reading it! > > -- > Regards, > Valery Kholodkov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Sat Aug 29 10:18:36 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 29 Aug 2015 13:18:36 +0300 Subject: [Announcement] New book about Nginx In-Reply-To: References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> <1440697036.730955692.7b0v2v6q@frv34.fwdcdn.com> Message-ID: <39272226-68F6-413F-97E0-9C3AB53B7D11@sysoev.ru> Hi, please clam down. This mailing list has been created to discuss and announce anything related to nginx. This book certainly has relation to nginx so Valery did well when he announced it here. 
-- Igor Sysoev Discover best practices for building & delivering apps at scale. nginx.conf 2015: Sept. 22-24, San Francisco. http://nginx.com/nginxconf On 29 Aug 2015, at 12:31, B.R. wrote: > I do not care about his book, and I simply stated about a generality about that kind of books. Nothing judging the work behind it, thus not disrespecting it. Learn to read. > > Moreover users' ML are made for users of a specific product, with questions/trouble about it, thus not made for self-promotion. What ever, except maybe coming from developers of the product, because they made it possible. > > If you are incapable of understanding or accepting others' opinion, then you should follow you own advice and keep your fucking opinion to yourself. > --- > B. R. > > On Thu, Aug 27, 2015 at 7:51 PM, wishmaster wrote: > Hi, > this is not my business but this reply from "B.R." has excited me. This is example of disrespect and discourtesy to the another's work. Please, keep your fucking opinion to yourself. > > --- > w > > > --- Original message --- > From: "B.R." > Date: 27 August 2015, 20:04:31 > > Is this for this book I (and many many other people, as they are used to, search about them) have been contacted by those Packt Publishing crooks to review, effectively asking me to work for free for them? > > I am glad you make books, but skilled people usually find what they seek for by themselves, not in a book dedicated to a specific technology. Books are more suitable for concepts and design, not for specificities and particular implementations. > Write a book about making a website instead. Or don't. > > This mailing list is about discussing the (FOSS?) nginx product, not about self-promoting anonymous nginx experts and selling related stuff. Step away. > That said, I wish you to make money, I am sorry for you having chosen to work with Packt Publishing, and I wish people not to believe books from Nonames about specific technologies are in any way useful. > --- > B. R. > > On Thu, Aug 27, 2015 at 10:19 AM, Valery Kholodkov wrote: > > Hi everyone! > > I would like to announce that a new book about nginx called 'Nginx Essentials' that I was working on the last 6 months has been published and is now available in E-stores worldwide, e.g.: > > http://www.amazon.com/Nginx-Essentials-Valery-Kholodkov/dp/1785289535 > > This book is ideal for skilled web masters and site reliability engineers who want to switch to Nginx or solidify their knowledge of Nginx. > > This will help you to learn how to set up, configure, and operate an Nginx installation for day-to-day use, explore the vast features of Nginx to manage it like a pro, and use them successfully to run your website. > > This book is an example-based guide to get the best out of Nginx and reduce resource usage footprint. > > The work on this book was a new interesting journey for me and I would like to thank the Packt Publishing team for making everything happen, as well as technical reviewers. > > I hope this book will be usefull for you and you enjoy reading it! 
> > -- > Regards, > Valery Kholodkov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sat Aug 29 11:57:19 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 29 Aug 2015 16:57:19 +0500 Subject: Redirect on specific threshold !! In-Reply-To: <20150616213030.GC23844@daoine.org> References: <2182722.jmPLUFHago@vbart-workstation> <20150616213030.GC23844@daoine.org> Message-ID: Hi, Sorry got back to this thread after long time. First of all, thanks to all for suggestions. Alright, i have also checked with rate_limit module, should this work as well or it should be only limit_conn (to parse error_log and constructing redirect URL). P.S : Actuall looks like limit_conn needs to recompile nginx as it is not included in default yum install nginx repo. So i tried with rate_limit which is built-in within nginx repo. http://greenroom.com.my/blog/2014/10/rate_limit-with-nginx-on-ubuntu/ Regards. Shahzaib On Wed, Jun 17, 2015 at 2:30 AM, Francis Daly wrote: > On Mon, Jun 15, 2015 at 01:45:42PM +0300, Valentin V. Bartenev wrote: > > On Sunday 14 June 2015 22:12:37 shahzaib shahzaib wrote: > > Hi there, > > > > If there are exceeding 1K requests for > http://storage.domain.com/test.mp4 , > > > nginx should construct a Redirect URL for rest of the requests > related to > > > test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest > of > > > requests for test.mp4 from Caching Node while long tail would still be > > > served from storage. > > > You can use limit_conn and limit_req modules to set limits: > > http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html > > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html > > > > and the error_page directive to construct the redirect. > > limit_conn and limit_req are the right answer if you care about concurrent > requests. > > (For example: rate=1r/m with burst=1000 might do most of what you want, > without too much work on your part.) > > I think you might care about historical requests, instead -- so if a > url is ever accessed 1K times, then it is "popular" and future requests > should be redirected. > > To do that, you probably will find it simpler to do it outside of nginx, > at least initially. > > Have something read the recent-enough log files[*], and whenever there are > more that 1K requests for the same resource, add a fragment like > > location = /test.mp4 { return 301 http://cache.domain.com/test.mp4; } > > to nginx.conf (and remove similar fragments that are no longer currently > popular-enough, if appropriate), and do a no-downtime config reload. > > You can probably come up with a module or a code config that does the > same thing, but I think it would take me longer to do that. 
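A minimal sketch of the limit_req plus error_page combination suggested in the quoted replies; the zone name, the mp4 location and the cache hostname are placeholders:

# Count requests per URI; with rate=1r/m the bucket drains slowly,
# so roughly the first 1000 requests for a URI pass and later ones
# overflow.
limit_req_zone $uri zone=popular:10m rate=1r/m;

server {
    listen 80;
    root /data/storage;

    location ~ \.mp4$ {
        limit_req zone=popular burst=1000 nodelay;

        # Rejected requests get 503 by default; turn that into a
        # redirect to the caching host instead.
        error_page 503 = @to_cache;
    }

    location @to_cache {
        return 302 http://cache.domain.com$request_uri;
    }
}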
> > > [*] or accesses the statistics by a method of your choice > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Aug 29 22:45:08 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 30 Aug 2015 00:45:08 +0200 Subject: [Announcement] New book about Nginx In-Reply-To: <39272226-68F6-413F-97E0-9C3AB53B7D11@sysoev.ru> References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> <1440697036.730955692.7b0v2v6q@frv34.fwdcdn.com> <39272226-68F6-413F-97E0-9C3AB53B7D11@sysoev.ru> Message-ID: I am sorry to have bothered you all, that was meant to be a personal answer to 'wishmaster '... and I forgot to change the recipient address. End of this topic on my side. --- *B. R.* On Sat, Aug 29, 2015 at 12:18 PM, Igor Sysoev wrote: > Hi, > > please clam down. > This mailing list has been created to discuss and announce anything > related to nginx. > This book certainly has relation to nginx so Valery did well when he > announced it here. > > -- > Igor Sysoev > Discover best practices for building & delivering apps at scale. > nginx.conf 2015: Sept. 22-24, San Francisco. http://nginx.com/nginxconf > > On 29 Aug 2015, at 12:31, B.R. wrote: > > I do not care about his book, and I simply stated about a generality about > that kind of books. Nothing judging the work behind it, thus not > disrespecting it. Learn to read. > > Moreover users' ML are made for users of a specific product, with > questions/trouble about it, thus not made for self-promotion. What ever, > except maybe coming from developers of the product, because they made it > possible. > > If you are incapable of understanding or accepting others' opinion, then > you should follow you own advice and keep your fucking opinion to yourself. > --- > *B. R.* > > On Thu, Aug 27, 2015 at 7:51 PM, wishmaster wrote: > >> Hi, >> this is not my business but this reply from "B.R." has excited me. This >> is example of disrespect and discourtesy to the another's work. Please, >> keep your fucking opinion to yourself. >> >> --- >> w >> >> >> --- Original message --- >> From: "B.R." >> Date: 27 August 2015, 20:04:31 >> >> *Is this for this book I (and many many other people, as they are used >> to, search about them) have been contacted by those Packt Publishing crooks >> to review, effectively asking me to work for free for them?* >> >> I am glad you make books, but skilled people usually find what they seek >> for by themselves, not in a book dedicated to a specific technology. Books >> are more suitable for concepts and design, not for specificities and >> particular implementations. >> Write a book about making a website instead. Or don't. >> >> *This mailing list is about discussing the (FOSS?) nginx product, not >> about self-promoting anonymous nginx experts and selling related stuff. >> Step away.* >> That said, I wish you to make money, I am sorry for you having chosen to >> work with Packt Publishing, and I wish people not to believe books from >> Nonames about specific technologies are in any way useful. >> --- >> *B. R.* >> >> On Thu, Aug 27, 2015 at 10:19 AM, Valery Kholodkov < >> valery+nginxen at grid.net.ru> wrote: >> >> >> Hi everyone! 
>> >> I would like to announce that a new book about nginx called 'Nginx >> Essentials' that I was working on the last 6 months has been published and >> is now available in E-stores worldwide, e.g.: >> >> http://www.amazon.com/Nginx-Essentials-Valery-Kholodkov/dp/1785289535 >> >> This book is ideal for skilled web masters and site reliability engineers >> who want to switch to Nginx or solidify their knowledge of Nginx. >> >> This will help you to learn how to set up, configure, and operate an >> Nginx installation for day-to-day use, explore the vast features of Nginx >> to manage it like a pro, and use them successfully to run your website. >> >> This book is an example-based guide to get the best out of Nginx and >> reduce resource usage footprint. >> >> The work on this book was a new interesting journey for me and I would >> like to thank the Packt Publishing team for making everything happen, as >> well as technical reviewers. >> >> I hope this book will be usefull for you and you enjoy reading it! >> >> -- >> Regards, >> Valery Kholodkov >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 31 00:54:15 2015 From: nginx-forum at nginx.us (maplesyrupandrew) Date: Sun, 30 Aug 2015 20:54:15 -0400 Subject: Why is NGINX serving a 404 here? Message-ID: I have a pretty standard test nginx configuration file (/etc/nginx/sites-available/default) that was configured to serve a sample html document. Here's the modified portion of the configuration file. upstream auth{ server 127.0.0.1:3000; } server { listen 80 default_server; root /var/www/index.html; # Make site accessible from http://localhost/ server_name _; location / { try_files $uri $uri/ =404; } location /auth{ proxy_pass http://auth; } } And yet, accessing my machine's IP address seems to return a 404. What's going on here? I've ensured that the permissions of index.html ensure that the file is readable, so doesn't seem to be a permissions issue. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261327,261327#msg-261327 From miguelmclara at gmail.com Mon Aug 31 01:38:56 2015 From: miguelmclara at gmail.com (Miguel C) Date: Mon, 31 Aug 2015 02:38:56 +0100 Subject: Why is NGINX serving a 404 here? In-Reply-To: References: Message-ID: Just a quick look but I noticed this: `root /var/www/index.html` I guess you wanted `root /var/www/` !? 
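A sketch of the server block from the question with that one change applied (root pointing at the directory; index index.html is the default and is shown only for clarity):

upstream auth {
    server 127.0.0.1:3000;
}

server {
    listen 80 default_server;

    # root must name a directory; the index directive picks the file
    # served for "/".
    root /var/www;
    index index.html;

    # Make site accessible from http://localhost/
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location /auth {
        proxy_pass http://auth;
    }
}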
Also, looking at the logs should give a hint as to why the 404 is happening, but I'm guessing it's related to the above line :) Melhores Cumprimentos // Best Regards ----------------------------------------------- *Miguel Clara* *IT - Sys Admin & Developer* On Mon, Aug 31, 2015 at 1:54 AM, maplesyrupandrew wrote: > I have a pretty standard test nginx configuration file > (/etc/nginx/sites-available/default) that was configured to serve a sample > html document. > > Here's the modified portion of the configuration file. > > upstream auth{ > server 127.0.0.1:3000; > } > > server { > listen 80 default_server; > > root /var/www/index.html; > > # Make site accessible from http://localhost/ > server_name _; > > location / { > try_files $uri $uri/ =404; > } > > location /auth{ > proxy_pass http://auth; > } > } > > And yet, accessing my machine's IP address seems to return a 404. What's > going on here? > > I've ensured that the permissions of index.html ensure that the file is > readable, so doesn't seem to be a permissions issue. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,261327,261327#msg-261327 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 31 13:16:00 2015 From: nginx-forum at nginx.us (footplus) Date: Mon, 31 Aug 2015 09:16:00 -0400 Subject: Implementing proxy_cache_lock when updating items Message-ID: <669839883a65c0f998c9ca149ec7ff6d.NginxMailingListEnglish@forum.nginx.org> Hello, I am currently implementing a caching proxy with many short-lived items, expiring at a specific date (Expires header set at an absolute time between 10 and 30 seconds in the future by the origin). For various reasons, my cache is multi-level (edge, intermediate 2, intermediate 1, origin) and needs to make the items expire at the edge at exactly the time set in the Expires header. When an item expires, I want an updated version of the item to be available at the same URL. I have been able to make it work, and I'm using proxy_cache_lock at every cache level to ensure I'm not hammering the origin servers or the proxy servers. As documented, this works perfectly for items not present in the cache. I am also using proxy_cache_use_stale updating to avoid this hammering in the case of already in-cache items as well. My problems begin when an in-cache item expires. Two top-level caches (let's name them E1 and E2 for example) request an updated fragment from the level below (INT). The fragment is requested by INT from below for the first request, but for the second request a stale fragment is sent (according to the proxy_cache_use_stale setting, with the UPDATING status). So far, all is working according to the docs. The problem is that fragments in the UPDATING status are stale and cannot be cached at all by E1/E2, and this can hit INT quite hard, because all the requests made on E1/E2 are now proxied to INT directly until INT has a fresh version of the item installed in its cache (this is quite a short duration, but in testing it generates bursts of 15 to 50 requests in the meantime). Is there a way, in the configuration, to make proxy_cache_lock also apply to expired in-cache items? If not, can you suggest a way to implement this (I'm not familiar with nginx's source, but I'm willing to dig into it)?
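For reference, a minimal sketch of the directives involved on one tier of a setup like the one described above; the upstream address, zone name and timings are placeholders:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=tier:50m inactive=10m;

upstream next_tier {
    server 192.0.2.10:80;   # placeholder for the next cache level / origin
}

server {
    listen 80;

    location / {
        proxy_cache tier;
        proxy_cache_key $scheme$host$request_uri;

        # Only one request per key is sent upstream while a *missing*
        # item is being fetched; the others wait.
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;

        # While an *expired* item is being refreshed, other clients get
        # the stale copy; as described in the question, those responses
        # are marked UPDATING and are not cacheable by the tier in front.
        proxy_cache_use_stale updating error timeout;

        proxy_pass http://next_tier;
    }
}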
Thanks, Best regards, Aurélien Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261333,261333#msg-261333 From shahzaib.cb at gmail.com Mon Aug 31 14:29:15 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 31 Aug 2015 19:29:15 +0500 Subject: DocumentRoot should end up on specific file ! Message-ID: Hi, We want the nginx vhost to access the file audio_portal.php without specifying it, i.e. instead of using the URL http://domain.com/audio_portal.php , can we access it with http://domain.com ? So it'll directly access audio_portal.php just like index.php ? Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From dewanggaba at xtremenitro.org Mon Aug 31 14:33:43 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Mon, 31 Aug 2015 21:33:43 +0700 Subject: DocumentRoot should end up on specific file ! In-Reply-To: References: Message-ID: <55E465C7.7020908@xtremenitro.org> Hello! On 08/31/2015 09:29 PM, shahzaib shahzaib wrote: > Hi, > > We want the nginx vhost to access the file audio_portal.php without > specifying it, i.e. instead of using > the URL http://domain.com/audio_portal.php , can we access it with > http://domain.com ? So it'll directly access audio_portal.php just like > index.php ? Yes, you can use the index directive at the http, server, and location levels. See http://nginx.org/en/docs/http/ngx_http_index_module.html > > Regards. > Shahzaib > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From shahzaib.cb at gmail.com Mon Aug 31 15:03:33 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 31 Aug 2015 20:03:33 +0500 Subject: DocumentRoot should end up on specific file ! In-Reply-To: <55E465C7.7020908@xtremenitro.org> References: <55E465C7.7020908@xtremenitro.org> Message-ID: Hi, Thanks, looking into it. Regards. Shahzaib On Mon, Aug 31, 2015 at 7:33 PM, Dewangga Bachrul Alam < dewanggaba at xtremenitro.org> wrote: > Hello! > > On 08/31/2015 09:29 PM, shahzaib shahzaib wrote: > > Hi, > > > > We want the nginx vhost to access the file audio_portal.php without > > specifying it, i.e. instead of using > > the URL http://domain.com/audio_portal.php , can we access it with > > http://domain.com ? So it'll directly access audio_portal.php just like > > index.php ? > > Yes, you can use the index directive at the http, server, and location levels. See > http://nginx.org/en/docs/http/ngx_http_index_module.html > > > > > Regards. > > Shahzaib > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 31 15:59:40 2015 From: nginx-forum at nginx.us (log) Date: Mon, 31 Aug 2015 11:59:40 -0400 Subject: How to cache js/css request containing a question mark? Message-ID: The following nginx configuration in a server block can cache js and css files whose URL ends in "js" or "css". location ~ .*\.(js|css)?$ { expires 7d; } But it can't cache requests such as: http: //example.com/jquery.js?ver=1.11.3&build=1 or http: //example.com/style.css?ver=4.3&build=1 Any way to cache the css/js files above? Or cache files based on the file type of the request instead of the file name?
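A sketch of one way to write the location; note that nginx matches locations against the URI with the query string stripped, and when the assets are cached via an upstream the arguments belong in the cache key (the key shown matches the suggestion in the reply that follows):

location ~* \.(js|css)$ {
    # $uri does not include the query string, so this also matches
    # /jquery.js?ver=1.11.3&build=1 and /style.css?ver=4.3&build=1.
    expires 7d;
}

For proxied or FastCGI-cached responses, something like proxy_cache_key "$host$uri$is_args$args"; keeps differently-versioned URLs as separate cache entries.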
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261338,261338#msg-261338 From nginx-forum at nginx.us Mon Aug 31 16:57:22 2015 From: nginx-forum at nginx.us (biazus) Date: Mon, 31 Aug 2015 12:57:22 -0400 Subject: How to cache js/css request containing a question mark? In-Reply-To: References: Message-ID: <4029e61e51f2990de4068734de5e275c.NginxMailingListEnglish@forum.nginx.org> Please try to remove $ in the end of the expression: something like this: location ~ .*\.(js|css) { expires 7d; } Also, make sure you are using args in the cache key: proxy_cache_key "$host$uri$is_args$args"; Regards, Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,261338,261339#msg-261339 From igal at lucee.org Mon Aug 31 21:46:59 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Mon, 31 Aug 2015 14:46:59 -0700 Subject: [Announcement] New book about Nginx In-Reply-To: <39272226-68F6-413F-97E0-9C3AB53B7D11@sysoev.ru> References: <23062054.264754.1440663543858.JavaMail.root@grid.net.ru> <1440697036.730955692.7b0v2v6q@frv34.fwdcdn.com> <39272226-68F6-413F-97E0-9C3AB53B7D11@sysoev.ru> Message-ID: <55E4CB53.5020700@lucee.org> +1 (intentionally top-posted) I'd like to take this opportunity to thank the nginx team for all of their time and hard work that they've put into their product. I, for one, completely understand that it's impossible to pay the bills with FOSS exclusively, and I welcome the occasional introduction to other products and services provided by the team that maintains the FOSS. Having said that, the aforementioned book seems like an excellent resource for nginx users, so this is not even an issue of promotion, but strictly of good, useful information IMO. As a matter of fact, I just ordered a copy of the book from Amazon. Igal Sapir Lucee Core Developer Lucee.org On 8/29/2015 3:18 AM, Igor Sysoev wrote: > This mailing list has been created to discuss and announce anything > related to nginx. > This book certainly has relation to nginx so Valery did well when he > announced it here. > > -- > Igor Sysoev > Discover best practices for building & delivering apps at scale. > nginx.conf 2015: Sept. 22-24, San Francisco. http://nginx.com/nginxconf > -------------- next part -------------- An HTML attachment was scrubbed... URL: