From artemrts at ukr.net Mon Dec 2 08:57:05 2013 From: artemrts at ukr.net (wishmaster) Date: Mon, 02 Dec 2013 10:57:05 +0200 Subject: fastcgi_buffering and caching from FastCGI server Message-ID: <1385974141.885235384.is0nnm9l@frv34.ukr.net> Hi, devel team! Playing with caching of responses from a FastCGI server, I've found an undocumented issue: when fastcgi_buffering is "off", fastcgi_cache doesn't work. Yes, this is completely logical, but I think it would be better to state it in the docs. What do you think? Cheers, w From nginx-forum at nginx.us Mon Dec 2 09:59:21 2013 From: nginx-forum at nginx.us (Larry) Date: Mon, 02 Dec 2013 04:59:21 -0500 Subject: Dynamic request rate throttling Message-ID: Hello, I wish I could send a notification to nginx to dynamically limit the request rate per second. Say I use the request rate module; I would like to be able to override this setting dynamically. How would I do that? I read about the zone, but I'm not sure whether I can interfere in this way. Any clue? thanks Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245116,245116#msg-245116 From nginx-forum at nginx.us Mon Dec 2 12:15:52 2013 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 02 Dec 2013 07:15:52 -0500 Subject: [Patch] possible mutex starvation issue affects all nginx Linux versions. Message-ID: <67df022ba3182c9dca6903f4f1b17bb3.NginxMailingListEnglish@forum.nginx.org> Here is a patch for a possible mutex starvation issue which affects all nginx Linux versions. It is already solved for Windows since nginx 1.5.7.1 Caterpillar. It can be reproduced when nginx reloads the config and the worker holding the mutex dies or hangs. Fixed by Vittorio Francesco Digilio; commercially sponsored solution by ITPP. Target source: mainline 1.5.8 - 30-11-2013.
src/event/ngx_event_accept.c, line 402 was correctly found (starvation fix) but it was missed in src/event/ngx_event.c, lines 259-260, 1 line added:

255 if (ngx_posted_accept_events) {
256     ngx_event_process_posted(cycle, &ngx_posted_accept_events);
257 }
258
259 if (ngx_accept_mutex_held) {
--+     ngx_accept_mutex_held=0;
260     ngx_shmtx_unlock(&ngx_accept_mutex);
261 }
262
263 if (delta) {
264     ngx_event_expire_timers();
265 }

Also applies to src/os/win32/ngx_process_cycle.c, lines 507-508, 3 lines added:

504 if (ngx_processes[n].handle != h) {
505     continue;
506 }
507
--+ if(*ngx_accept_mutex.lock==ngx_processes[n].pid) {
--+     *ngx_accept_mutex.lock=0;
--+ }
508 if (GetExitCodeProcess(h, &code) == 0) {
509     ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno,
510                   "GetExitCodeProcess(%P) failed",
511                   ngx_processes[n].pid);
512 }

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245121,245121#msg-245121 From mail at labaznov.com Mon Dec 2 12:47:42 2013 From: mail at labaznov.com (=?KOI8-R?B?5M3J1NLJyiDswcLB2s7P1w==?=) Date: Mon, 2 Dec 2013 15:47:42 +0300 Subject: session cache Message-ID: Hi, could anybody tell me whether I can cache dynamic content per session, isolated from each other? I have a very heavy SQL query whose results vary a lot between users, but sometimes users call this script many times within a short period, so I want to cache the results for a short period per user session. Is this possible in nginx? Nginx works as a proxy (proxy_pass http://application_server) and caches some static content for specific locations. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 2 13:39:07 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Dec 2013 17:39:07 +0400 Subject: [Patch] possible mutex starvation issue affects all nginx Linux versions.
In-Reply-To: <67df022ba3182c9dca6903f4f1b17bb3.NginxMailingListEnglish@forum.nginx.org> References: <67df022ba3182c9dca6903f4f1b17bb3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131202133907.GT93176@mdounin.ru> Hello! On Mon, Dec 02, 2013 at 07:15:52AM -0500, itpp2012 wrote:
> Here is a patch for a possible mutex starvation issue which affects all
> nginx Linux versions.
> Already solved for Windows since nginx 1.5.7.1 Caterpillar.
> Can be reproduced when nginx reloads the config & worker holding mutex dies
> or hangs.
>
> Fixed by Vittorio Francesco Digilio, commercially sponsered solution by
> ITPP.
> target source mainline 1.5.8 - 30-11-2013.
> src/event/ngx_event_accept.c, line 402 was correctly found (starvation fix)
> but missed in:
> src/event/ngx_event.c, line 259-260, 1 line added;
> 255 if (ngx_posted_accept_events) {
> 256 ngx_event_process_posted(cycle, &ngx_posted_accept_events);
> 257 }
> 258
> 259 if (ngx_accept_mutex_held) {
> --+ ngx_accept_mutex_held=0;
> 260 ngx_shmtx_unlock(&ngx_accept_mutex);
> 261 }
> 262
> 263 if (delta) {
> 264 ngx_event_expire_timers();
> 265 }
This patch is wrong. The ngx_accept_mutex_held doesn't need to be unset here. The ngx_accept_mutex_held is used as an indicator that on the previous iteration the lock was held by the process, and unsetting it will result in incorrect behaviour.
> Also applies to; src/os/win32/ngx_process_cycle.c
> line 507-508, 3 lines added;
> 504 if (ngx_processes[n].handle != h) {
> 505 continue;
> 506 }
> 507
> --+ if(*ngx_accept_mutex.lock==ngx_processes[n].pid) {
> --+ *ngx_accept_mutex.lock=0;
> --+ }
> 508 if (GetExitCodeProcess(h, &code) == 0) {
> 509 ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno,
> 510 "GetExitCodeProcess(%P) failed",
> 511 ngx_processes[n].pid);
> 512 }
This patch is also wrong. In the current state of the win32 version it is not assumed that accept mutex is used at all.
But if it is, the correct approach to unlocking shared memory mutexes on abnormal process termination is to port (or move to a platform-independent place) the ngx_unlock_mutexes() function from src/os/unix/ngx_process.c. It correctly uses ngx_shmtx_force_unlock() to properly unlock shared memory mutexes using atomic operations. Note well that unlocking shared memory mutexes on abnormal process termination is an emergency mechanism. If it actually happens, it indicates that something is really wrong in other places of the system. Please also consider reading http://nginx.org/en/docs/contributing_changes.html. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Dec 2 13:45:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 2 Dec 2013 17:45:22 +0400 Subject: fastcgi_buffering and caching from FastCGI server In-Reply-To: <1385974141.885235384.is0nnm9l@frv34.ukr.net> References: <1385974141.885235384.is0nnm9l@frv34.ukr.net> Message-ID: <20131202134521.GU93176@mdounin.ru> Hello! On Mon, Dec 02, 2013 at 10:57:05AM +0200, wishmaster wrote:
>
> Hi, devel team!
>
> Playing with caching responses from FCGI server I've found not
> documented issue. When fastcgi_buffering is "off", then
> fastcgi_cache doesn't work. Yes, this is logical completely, but
> I think would be better specify it in docs.
> What do you think?
If you think that explicitly mentioning this will be beneficial, you may try submitting a patch for the documentation. Note that this applies to both proxy and fastcgi (well, actually to uwsgi and scgi too, but we have no docs for them), and to both cache and store. That is, proxy_store, proxy_cache, fastcgi_store and fastcgi_cache.
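For illustration only (the zone name, paths and timings below are invented, not from the thread), a minimal fastcgi_cache setup; the caching directives only take effect while fastcgi_buffering stays at its default "on":

```nginx
# Hypothetical example: cache zone "FCGI" and all paths are made up.
http {
    fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2
                       keys_zone=FCGI:10m inactive=10m;

    server {
        location ~ \.php$ {
            include       fastcgi_params;
            fastcgi_pass  127.0.0.1:9000;

            fastcgi_buffering   on;    # the default; "off" silently disables the cache
            fastcgi_cache       FCGI;
            fastcgi_cache_key   $scheme$host$request_uri;
            fastcgi_cache_valid 200 301 10m;
        }
    }
}
```

The same caveat applies to proxy_buffering with proxy_cache/proxy_store.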
See here for basic instructions on how to submit patches: http://nginx.org/en/docs/contributing_changes.html The source of the nginx.org site with the documentation can be found here: http://hg.nginx.org/nginx.org -- Maxim Dounin http://nginx.org/en/donation.html From maxmilhas at yandex.com Mon Dec 2 16:24:16 2013 From: maxmilhas at yandex.com (Maxmilhas) Date: Mon, 02 Dec 2013 14:24:16 -0200 Subject: fastcgi_buffering and caching from FastCGI server In-Reply-To: <20131202134521.GU93176@mdounin.ru> References: <1385974141.885235384.is0nnm9l@frv34.ukr.net> <20131202134521.GU93176@mdounin.ru> Message-ID: <529CB430.8050707@yandex.com> Yes, it is completely acceptable that people who are possibly entirely outside a project write its documentation. Or at least that it is they who must decide what is good and necessary to have in the docs, and what isn't. Remarkable. Good to see this
> If you think that explicitly mentioning this will be beneficial, you
> may try submitting a patch for the documentation. Note that this
> applies to both proxy and fastcgi (well, actually to uwsgi and scgi as
> too, but we have no docs for them), and to both cache and store. That
> is, proxy_store, proxy_cache, fastcgi_store and fastcgi_cache. See
> here for basic instructions on how to submit patches:
> http://nginx.org/en/docs/contributing_changes.html Source of the
> nginx.org site with the documentation can be found here:
> http://hg.nginx.org/nginx.org
-------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at FreeBSD.org.ru Mon Dec 2 17:06:30 2013 From: osa at FreeBSD.org.ru (Sergey A.
Osokin) Date: Mon, 2 Dec 2013 21:06:30 +0400 Subject: [ANN] ngx_http_redis-0.3.7 released Message-ID: <20131202170630.GE20240@FreeBSD.org.ru> ngx_http_redis module version 0.3.7 released, available for immediate download at http://people.freebsd.org/~osa/ngx_http_redis-0.3.7.tar.gz *) Bugfix: ngx_http_redis_module might issue the error message "redis sent invalid trailer" for nginx >= 1.5.3. Thanks to Maxim Dounin. -- Sergey A. Osokin osa at FreeBSD.ORG.ru osa at FreeBSD.ORG From osa at FreeBSD.org.ru Mon Dec 2 17:11:19 2013 From: osa at FreeBSD.org.ru (Sergey A. Osokin) Date: Mon, 2 Dec 2013 21:11:19 +0400 Subject: [ANN] ngx_http_redis-0.3.7 released In-Reply-To: <20131202170630.GE20240@FreeBSD.org.ru> References: <20131202170630.GE20240@FreeBSD.org.ru> Message-ID: <20131202171119.GF20240@FreeBSD.org.ru> On Mon, Dec 02, 2013 at 09:06:30PM +0400, Sergey A. Osokin wrote: > ngx_http_redis module version 0.3.7 released, available for immediate > download at http://people.freebsd.org/~osa/ngx_http_redis-0.3.7.tar.gz > > > > *) Bugfix: ngx_http_redis_module might issue the error message > "redis sent invalid trailer" for nginx >= 1.5.3. > Thanks to Maxim Dounin. > > Additional information. SHA256 (ngx_http_redis-0.3.7.tar.gz) = 9dfc14db81f431fdf3d69f3661a37daf110aef5f9479aa7c88cf362bb5d62604 SIZE (ngx_http_redis-0.3.7.tar.gz) = 12165 Apologies for any inconvenience. -- Sergey A. Osokin osa at FreeBSD.ORG.ru osa at FreeBSD.ORG From francis at daoine.org Mon Dec 2 17:16:34 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 2 Dec 2013 17:16:34 +0000 Subject: session cache In-Reply-To: References: Message-ID: <20131202171634.GG15722@craic.sysops.org> On Mon, Dec 02, 2013 at 03:47:42PM +0300, ??????? ???????? wrote: Hi there, > Hi, could anybody say, may i cache dynamic content for sessions isolate > from eache other ? http://nginx.org/r/proxy_cache_key Include something unique per session in the key. 
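As a sketch of this advice (assuming the session lives in a cookie named "session"; the zone name and paths are invented):

```nginx
# Hypothetical example: one cached copy per (URL, session cookie) pair,
# so sessions are isolated from each other.
proxy_cache_path /var/cache/nginx/app keys_zone=APP:10m inactive=5m;

server {
    location / {
        proxy_pass        http://application_server;
        proxy_cache       APP;
        proxy_cache_key   "$host$request_uri $cookie_session";
        proxy_cache_valid 200 1m;   # cache each user's copy briefly
    }
}
```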
f -- Francis Daly francis at daoine.org From mail at labaznov.com Mon Dec 2 17:18:06 2013 From: mail at labaznov.com (=?KOI8-R?B?5M3J1NLJyiDswcLB2s7P1w==?=) Date: Mon, 2 Dec 2013 21:18:06 +0400 Subject: session cache In-Reply-To: <20131202171634.GG15722@craic.sysops.org> References: <20131202171634.GG15722@craic.sysops.org> Message-ID: Could you plz give some example? 2013/12/2 Francis Daly
> On Mon, Dec 02, 2013 at 03:47:42PM +0300, Дмитрий Лабазнов wrote:
>
> Hi there,
>
> > Hi, could anybody say, may i cache dynamic content for sessions isolate
> > from eache other ?
>
> http://nginx.org/r/proxy_cache_key
>
> Include something unique per session in the key.
>
> f
> --
> Francis Daly francis at daoine.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Dec 2 17:21:23 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 2 Dec 2013 17:21:23 +0000 Subject: session cache In-Reply-To: References: <20131202171634.GG15722@craic.sysops.org> Message-ID: <20131202172123.GH15722@craic.sysops.org> On Mon, Dec 02, 2013 at 09:18:06PM +0400, Дмитрий Лабазнов wrote: Hi there,
> Could you plz some example ?
What is a session?
> If it is a cookie called "user", then something like
> proxy_cache_key "$host$request_uri $cookie_user";
> may work for you.
Yes, a cookie. What do you mean by called "user"? Users have a session cookie, and this string does not work =( -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Dec 2 17:40:12 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 2 Dec 2013 17:40:12 +0000 Subject: session cache In-Reply-To: References: <20131202171634.GG15722@craic.sysops.org> <20131202172123.GH15722@craic.sysops.org> Message-ID: <20131202174012.GI15722@craic.sysops.org> On Mon, Dec 02, 2013 at 09:29:53PM +0400, Дмитрий Лабазнов wrote: Hi there,
> > If it is a cookie called "user", then something like
> > > proxy_cache_key "$host$request_uri $cookie_user";
> yes cookie, what does it mean called "user" ?
> Users have a cookie session, and this strind do not work =(
http://nginx.org/en/docs/http/ngx_http_core_module.html#variables or maybe http://nginx.org/ru/docs/http/ngx_http_core_module.html#variables If the cookie name is "session", then the nginx variable is $cookie_session. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Dec 2 18:15:25 2013 From: nginx-forum at nginx.us (nmarques) Date: Mon, 02 Dec 2013 13:15:25 -0500 Subject: nginx - workers segfaulting Message-ID: <70882db870158d19ee06f8b30f59a845.NginxMailingListEnglish@forum.nginx.org> Dear All, I'm facing a small problem with NGINX; the workers have been segfaulting since 11:20 this morning.
From the kernel messages I got something like this:

nginx[6888]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]
nginx[6886]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]
nginx[6890]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]
nginx[6889]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]
nginx[6892]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]
nginx[6893]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]
nginx[6894]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]
nginx[6891]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]
nginx[6896]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 in nginx[400000+a8000]

The log files show the following:

2013/12/02 18:13:53 [alert] 26876#0: worker process 30412 exited on signal 11
2013/12/02 18:13:53 [alert] 26876#0: worker process 30414 exited on signal 11
2013/12/02 18:13:53 [alert] 26876#0: worker process 30413 exited on signal 11
2013/12/02 18:13:54 [alert] 26876#0: worker process 30418 exited on signal 11
2013/12/02 18:13:55 [info] 30417#0: *14388 client closed connection while SSL handshaking, client: 10.192.41.251, server: 0.0.0.0:4443
2013/12/02 18:13:56 [info] 30417#0: *14389 client closed connection while waiting for request, client: 10.192.41.252, server: 0.0.0.0:80
2013/12/02 18:13:56 [info] 30417#0: *14390 client closed connection while SSL handshaking, client: 10.192.41.252, server: 0.0.0.0:2443
2013/12/02 18:13:57 [info] 30417#0: *14397 client closed connection while SSL handshaking, client: 10.192.41.252, server: 0.0.0.0:4443
2013/12/02 18:13:57 [info] 30417#0: *14403 client closed connection while waiting for request, client: 10.192.41.251, server: 0.0.0.0:80
2013/12/02 18:13:57 [info] 30417#0: *14402 client closed connection while SSL handshaking, client: 10.192.41.251, server: 0.0.0.0:2443

I can provide some cores, but I can't attach them here. My setup was running fine till today (which coincides with a new webservice being deployed). Please could you provide some extra information on how to further debug this issue? NM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245135,245135#msg-245135 From luky-37 at hotmail.com Mon Dec 2 18:30:36 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 2 Dec 2013 19:30:36 +0100 Subject: nginx - workers segfaulting In-Reply-To: <70882db870158d19ee06f8b30f59a845.NginxMailingListEnglish@forum.nginx.org> References: <70882db870158d19ee06f8b30f59a845.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi!
> I'm facing a small problem with NGINX; The workers are segfaulting since
> 11:20 this morning.
> [...]
> I can provide some cores, but I can't attach them here. My setup was running
> fine till today (which has some coincidence with a new webservice
> deployed).
>
> Please could you provide some extra information on how to further debug this
> issue ?
A few things the developers will probably ask anyway:
- exact output from "nginx -V"
- do you use any nginx modules?
- do you use any third party modules?
- can you post and explain your configuration (at least partially)?
- you said you can provide cores, can you post a backtrace?
- some details about the underlying OS (virtualization, cpu/ram/architecture/nic/kernel releases)?
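On the backtrace point: workers first have to be allowed to leave core files. A common nginx.conf sketch for that (the directory is illustrative and must be writable by the worker user):

```nginx
# Both directives go in the main (top-level) context of nginx.conf.
worker_rlimit_core  500m;            # raise RLIMIT_CORE for worker processes
working_directory   /var/tmp/nginx;  # cores are written to the worker's cwd
```

On Linux, kernel.core_pattern and fs.suid_dumpable may also need adjusting; a backtrace can then be taken with gdb against the resulting core file.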
Regards, Lukas From nginx-forum at nginx.us Mon Dec 2 18:32:25 2013 From: nginx-forum at nginx.us (nmarques) Date: Mon, 02 Dec 2013 13:32:25 -0500 Subject: nginx - workers segfaulting In-Reply-To: <70882db870158d19ee06f8b30f59a845.NginxMailingListEnglish@forum.nginx.org> References: <70882db870158d19ee06f8b30f59a845.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0dd3af8e0816efc0cd762139895900e7.NginxMailingListEnglish@forum.nginx.org> Additionally: Name : nginx Relocations: (not relocatable) Version : 1.4.3 Vendor: nginx inc.
Release : 1.el6.ngx Build Date: Tue 19 Nov 2013 12:11:15 PM WET Install Date: Mon 02 Dec 2013 06:33:31 PM WET Build Host: centos6-amd64-ovl.t.nginx.com Group : System Environment/Daemons Source RPM: nginx-1.4.4-1.el6.ngx.src.rpm Size : 788337 License: 2-clause BSD-like license Signature : RSA/SHA1, Tue 19 Nov 2013 01:03:00 PM WET, Key ID abf5bd827bd9bf62 URL : http://nginx.org/ Summary : High performance web server Description : nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server. Running on CentOS 6.4. Same happens with 1.4.3 (I've tested a downgrade). > - do use any nginx modules? No. Plain upstream vanilla. > - do you use any third party modules? No. Upstream binary distribution package (rpm) > - can post and explain your configuration (at least partially)? NGINX is a reverse proxy for a vhosted tomcat with openSSL. What tokens from the configuration do you require? > - you said you can provide cores, can you post a backtrace? I'm going to attach nginx master process to gdb and check it out. I can't really attach to workers since they segfault quite fast. > - some details about the underlying OS (virtualization, > ? cpu/ram/architecture/nic/kernel releases)? CentOS 6.4 (full updated) - running on vmware 5.1u1 - 4vcpu's, 6GB RAM, etc... 
[root at iweb-as2 ~]# uname -a Linux XXXXXXXXXXXX 2.6.32-358.23.2.el6.x86_64 #1 SMP Wed Oct 16 18:37:12 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245135,245138#msg-245138 From artemrts at ukr.net Mon Dec 2 18:53:53 2013 From: artemrts at ukr.net (=?UTF-8?b?0JLQuNGC0LDQu9C40Lk=?= =?UTF-8?b?INCS0LvQsNC00LjQvNC40YDQvtCy0LjRhw==?=) Date: Mon, 02 Dec 2013 20:53:53 +0200 Subject: session cache In-Reply-To: <20131202172123.GH15722@craic.sysops.org> References: <20131202171634.GG15722@craic.sysops.org> <20131202172123.GH15722@craic.sysops.org> Message-ID: <1386010388.168877075.c0ss0i4f@frv34.ukr.net>
--- Original message ---
From: "Francis Daly"
Date: 2 December 2013, 19:21:29
> On Mon, Dec 02, 2013 at 09:18:06PM +0400, Дмитрий Лабазнов wrote:
> > Hi there,
> > > Could you plz some example ?
> > What is a session?
> > If it is a cookie called "user", then something like
> > proxy_cache_key "$host$request_uri $cookie_user";
Or better:
proxy_cache_key "$host$request_uri$cookie_user$remote_addr";
From nginx-forum at nginx.us Tue Dec 3 00:49:23 2013 From: nginx-forum at nginx.us (nmarques) Date: Mon, 02 Dec 2013 19:49:23 -0500 Subject: nginx - workers segfaulting In-Reply-To: References: Message-ID: <52417ca2d9a69eaf29addb2c7c25612f.NginxMailingListEnglish@forum.nginx.org> I'm facing a small problem with gdb and separate debuginfos. Do you build with the '-g' compiler option? [root at XXXX2 nginx]# rpm -qa | grep nginx nginx-1.4.4-1.el6.ngx.x86_64 nginx-debug-1.4.4-1.el6.ngx.x86_64 [root at XXXX2 nginx]# gdb --pid 39019 GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1) Copyright (C) 2010 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see: . Attaching to process 39019 Reading symbols from /usr/sbin/nginx... warning: the debug information found in "/usr/sbin/nginx.debug" does not match "/usr/sbin/nginx" (CRC mismatch). (no debugging symbols found)...done. Reading symbols from /lib64/libpthread.so.0...Reading symbols from /usr/lib/debug/lib64/libpthread-2.12.so.debug...done. [Thread debugging using libthread_db enabled] done. Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libcrypt.so.1...Reading symbols from /usr/lib/debug/lib64/libcrypt-2.12.so.debug...done. done. Loaded symbols for /lib64/libcrypt.so.1 Reading symbols from /lib64/libpcre.so.0...Reading symbols from /usr/lib/debug/lib64/libpcre.so.0.0.1.debug...done. done. Loaded symbols for /lib64/libpcre.so.0 Reading symbols from /usr/lib64/libssl.so.10...Reading symbols from /usr/lib/debug/usr/lib64/libssl.so.1.0.0.debug...done. done. Loaded symbols for /usr/lib64/libssl.so.10 Reading symbols from /usr/lib64/libcrypto.so.10...Reading symbols from /usr/lib/debug/usr/lib64/libcrypto.so.1.0.0.debug...done. done. Loaded symbols for /usr/lib64/libcrypto.so.10 Reading symbols from /lib64/libdl.so.2...Reading symbols from /usr/lib/debug/lib64/libdl-2.12.so.debug...done. done. Loaded symbols for /lib64/libdl.so.2 Reading symbols from /lib64/libz.so.1...Reading symbols from /usr/lib/debug/lib64/libz.so.1.2.3.debug...done. done. Loaded symbols for /lib64/libz.so.1 Reading symbols from /lib64/libc.so.6...Reading symbols from /usr/lib/debug/lib64/libc-2.12.so.debug...done. done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from /usr/lib/debug/lib64/ld-2.12.so.debug...done. done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /lib64/libfreebl3.so...Reading symbols from /usr/lib/debug/lib64/libfreebl3.so.debug...done. done. 
Loaded symbols for /lib64/libfreebl3.so Reading symbols from /lib64/libgssapi_krb5.so.2...Reading symbols from /usr/lib/debug/lib64/libgssapi_krb5.so.2.2.debug...done. done. Loaded symbols for /lib64/libgssapi_krb5.so.2 Reading symbols from /lib64/libkrb5.so.3...Reading symbols from /usr/lib/debug/lib64/libkrb5.so.3.3.debug...done. done. Loaded symbols for /lib64/libkrb5.so.3 Reading symbols from /lib64/libcom_err.so.2...Reading symbols from /usr/lib/debug/lib64/libcom_err.so.2.1.debug...done. done. Loaded symbols for /lib64/libcom_err.so.2 Reading symbols from /lib64/libk5crypto.so.3...Reading symbols from /usr/lib/debug/lib64/libk5crypto.so.3.1.debug...done. done. Loaded symbols for /lib64/libk5crypto.so.3 Reading symbols from /lib64/libkrb5support.so.0...Reading symbols from /usr/lib/debug/lib64/libkrb5support.so.0.1.debug...done. done. Loaded symbols for /lib64/libkrb5support.so.0 Reading symbols from /lib64/libkeyutils.so.1...Reading symbols from /usr/lib/debug/lib64/libkeyutils.so.1.3.debug...done. done. Loaded symbols for /lib64/libkeyutils.so.1 Reading symbols from /lib64/libresolv.so.2...Reading symbols from /usr/lib/debug/lib64/libresolv-2.12.so.debug...done. done. Loaded symbols for /lib64/libresolv.so.2 Reading symbols from /lib64/libselinux.so.1...Reading symbols from /usr/lib/debug/lib64/libselinux.so.1.debug...done. done. Loaded symbols for /lib64/libselinux.so.1 Reading symbols from /lib64/libnss_files.so.2...Reading symbols from /usr/lib/debug/lib64/libnss_files-2.12.so.debug...done. done. Loaded symbols for /lib64/libnss_files.so.2 Reading symbols from /lib64/libnss_sss.so.2...(no debugging symbols found)...done. 
Loaded symbols for /lib64/libnss_sss.so.2 0x00007ffa45a45f23 in __epoll_wait_nocancel () at ../sysdeps/unix/syscall-template.S:82 82 T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS) Missing separate debuginfos, use: debuginfo-install nginx-1.4.4-1.el6.ngx.x86_64 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245135,245142#msg-245142 From mail at labaznov.com Tue Dec 3 06:57:17 2013 From: mail at labaznov.com (=?KOI8-R?B?5M3J1NLJyiDswcLB2s7P1w==?=) Date: Tue, 3 Dec 2013 09:57:17 +0300 Subject: session cache In-Reply-To: <20131202174012.GI15722@craic.sysops.org> References: <20131202171634.GG15722@craic.sysops.org> <20131202172123.GH15722@craic.sysops.org> <20131202174012.GI15722@craic.sysops.org> Message-ID: On Mon, Dec 02, 2013 at 09:29:53PM +0400, Дмитрий Лабазнов wrote:
> Hi there,
> > If it is a cookie called "user", then something like
> > > proxy_cache_key "$host$request_uri $cookie_user";
> yes cookie, what does it mean called "user" ?
> Users have a cookie session, and this strind do not work =(
> http://nginx.org/en/docs/http/ngx_http_core_module.html#variables
> or maybe
> http://nginx.org/ru/docs/http/ngx_http_core_module.html#variables
> If the cookie name is "session", then the nginx variable is
> $cookie_session.
THX!!!! It works!

location ~* ^/.+\.(php)$ {
    proxy_cache_methods GET HEAD POST;
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_key "$host$request_uri $cookie_PHPSESSID";
    proxy_pass http://localhost:8080;
    proxy_temp_path /tmp/nginx/tmp;
    proxy_cache one;
    proxy_cache_valid 200 304 1m;
    expires 1m;
}

Here I cache all PHP dynamics, because this is a test stand. But it works: dynamic content for various browsers is cached according to their cookies. Thx a lot! It is strange that I couldn't google this easy example for my case and situation. 2013/12/2 Francis Daly
> On Mon, Dec 02, 2013 at 09:29:53PM +0400, Дмитрий Лабазнов
wrote: > > Hi there, > > > > If it is a cookie called "user", then something like > > > > > proxy_cache_key "$host$request_uri $cookie_user"; > > > yes cookie, what does it mean called "user" ? > > Users have a cookie session, and this strind do not work =( > > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables > > or maybe > > http://nginx.org/ru/docs/http/ngx_http_core_module.html#variables > > If the cookie name is "session", then the nginx variable is > $cookie_session. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Tue Dec 3 08:19:46 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 3 Dec 2013 09:19:46 +0100 Subject: nginx - workers segfaulting In-Reply-To: <52417ca2d9a69eaf29addb2c7c25612f.NginxMailingListEnglish@forum.nginx.org> References: , , <52417ca2d9a69eaf29addb2c7c25612f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi! > I'm facing a small problem with gdb and separate debuginfo's. Do you build > with the '-g' compiler option? Probably not. Please check with file /usr/sbin/nginx Does the repository contain a special debug build like nginx-debug or something? Could you install it? Whoever maintains the centos binaries on nginx.org, please advise howto get the symbol informations; /usr/sbin/nginx.debug doesn't seem to contain it: > warning: the debug information found in "/usr/sbin/nginx.debug" does not > match "/usr/sbin/nginx" (CRC mismatch). > [root at XXXX2 nginx]# gdb --pid 39019 Please let it properly dump a core. 
Here is an example howto configure nginx so the workers can actually coredump: http://forum.nginx.org/read.php?2,234757,234860 Thanks, Lukas From svoop at delirium.ch Tue Dec 3 09:02:42 2013 From: svoop at delirium.ch (Svoop) Date: Tue, 3 Dec 2013 09:02:42 +0000 (UTC) Subject: Sanitize "invalid UTF-8 byte sequence" Message-ID: Hi I'm getting forged requests with invalid UTF-8 byte sequences on my Rails app which is served with Nginx/Passenger. Is there a way to have Nginx sanitize requests before they are passed to Passenger? Thanks for your hints! From nginx-forum at nginx.us Tue Dec 3 09:21:30 2013 From: nginx-forum at nginx.us (fatine,al) Date: Tue, 03 Dec 2013 04:21:30 -0500 Subject: NGINX 500 http error In-Reply-To: <8cb2846812e4dc52b0eba105531f2e9a.NginxMailingListEnglish@forum.nginx.org> References: <20131115132151.GQ95765@mdounin.ru> <3d57ede6d4d086be412204da3a43654e.NginxMailingListEnglish@forum.nginx.org> <719e036e94e70427f03196a4f080325a.NginxMailingListEnglish@forum.nginx.org> <5d059809a78b8fea62f6d010a7c6650b.NginxMailingListEnglish@forum.nginx.org> <56c3928692421282e9f1938dd8a7e758.NginxMailingListEnglish@forum.nginx.org> <8cb2846812e4dc52b0eba105531f2e9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <977b04cd0f550691258e9acd18029d71.NginxMailingListEnglish@forum.nginx.org> OK. :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244693,245150#msg-245150 From borate at adobe.com Tue Dec 3 09:45:49 2013 From: borate at adobe.com (Shankar Dagadu Borate) Date: Tue, 3 Dec 2013 15:15:49 +0530 Subject: NGINX timeout issue Message-ID: Hi all, We are using nginx as proxy server in all our production deployment. The nginx server proxy the http request to Amazon ELB (elastic load balancer) using upstream module. Currently we are seeing timeout issues in NGINX because NGINX caches the IP address on start. Now when there is change in IP address of ELB, NGINX doesn't updates it's IP and still point to old IP. 
We went through many posts on the internet and tried setting the resolver to the AWS DNS server, the resolver timeout to 20s, valid to 30 sec, defining a variable instead of a direct name in proxy_pass, and the request URL in proxy_pass, but nothing has worked for us. NGINX is not honoring the resolver and the corresponding TTL settings. Some of the blogs also say that they have tried similar things and nothing worked for them. It looks to me like this is a bug in NGINX and will be resolved only by changing the source. Has anybody found a solution to this problem? If yes, please let us know your configuration. The only solution I have right now is to write some script and restart the server when a timeout occurs. But this is not an ideal solution. I am seriously thinking of moving to Apache HTTP Server if this problem doesn't get resolved. Regards, Shankar -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Tue Dec 3 10:09:47 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 3 Dec 2013 02:09:47 -0800 Subject: NGINX timeout issue In-Reply-To: References: Message-ID: Hi, On Dec 3, 2013, at 1:45 AM, Shankar Dagadu Borate wrote:
> Hi all,
>
> We are using nginx as proxy server in all our production deployment. The nginx server proxy the http request to Amazon ELB (elastic load balancer) using upstream module. Currently we are seeing timeout issues in NGINX because NGINX caches the IP address on start. Now when there is change in IP address of ELB, NGINX doesn't updates it's IP and still point to old IP. We went through many post on internet and tried setting resolver to AWS DNS server, resolver timeout to 20s, valids to 30 sec. defining variable instead of direct name in proxy pass, request URL in proxy pass but nothing has worked for us. NGINX is not honoring the resolver and corresponding TTL settings.
>
> Some of the blogs also says that they have tried similar things and nothing works for them.
It looks to me like this is a bug in NGINX that will be resolved only by changing the source. > > Has anybody found a solution to this problem? If yes, please let us know your configurations. > > The only solution I have right now is to write a script and restart the server when a timeout occurs, but this is not an ideal solution. Can you paste the relevant part of your nginx configuration here? If that's about re-resolving server names in an upstream server group, then yes - there's no such functionality in nginx at this time. Still, if you're using proxy_pass to a "single server" instead of an upstream server group, re-resolving works. That might be a workaround, since you appear to proxy to a single entry point which is terminated on the ELB? Please check http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass ("If a domain name resolves to several addresses, all of them will be used in a round-robin fashion.") Also, "A server name, its port and the passed URI can also be specified using variables: proxy_pass http://$host$uri; or even like this: proxy_pass $request; In this case, the server name is searched among the described server groups, and, if not found, is determined using a resolver." Hope this helps > I am seriously thinking of moving to the Apache HTTP server if this problem doesn't get resolved. :) > Regards, > > Shankar > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From www at lc365.net Tue Dec 3 11:32:20 2013 From: www at lc365.net (=?gb2312?B?ufnV8cGi?=) Date: Tue, 3 Dec 2013 19:32:20 +0800 Subject: [ANN] Windows nginx 1.5.8.1 Caterpillar In-Reply-To: <82f08a63fed0ac7d3fefdf3b8752ac2d.NginxMailingListEnglish@forum.nginx.org> References: <82f08a63fed0ac7d3fefdf3b8752ac2d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Dear, Thank you very much for your great work! I use Nginx-Win to run a proxy for some Windows 2003 nodes.
I have switched nginx-win from the official download to your version now. Here is an error that occurs if I run "nginx -t" in the Windows CLI while the nginx server is running; the message is always the same: "Assertion failed: ngx_shared_sockets->pid==pid, file src/core/nginx.c, line 374" I also find one line in error.log, which is "2013/12/01 14:51:36 [notice] 4046#3925: Fatal: wait on listen sockets mutex failed" I hope this can be solved in the future. And please try to compile a version with this module: http://wiki.nginx.org/HttpSubsModule, and with the options "--without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module" if possible, thank you. Robert K. > To: nginx at nginx.org > Subject: [ANN] Windows nginx 1.5.8.1 Caterpillar > From: nginx-forum at nginx.us > Date: Sat, 30 Nov 2013 16:58:18 -0500 > > 19:18 30-11-2013: nginx 1.5.8.1 Caterpillar > > Based on nginx 1.5.8 (29-11-2013) with (mainly bugfixes in add-on's); > + Naxsi WAF (Web Application Firewall) v0.53-1 (upgraded) > + lua-nginx-module v0.9.2 (upgraded 30-11) > + Streaming with nginx-rtmp-module, v1.0.8 (upgraded 29-11) > + Source changes back ported > + Source changes add-on's back ported > * Additional specifications are like 20:32 19-11-2013: nginx 1.5.7.2 > Caterpillar > > Builds can be found here: > http://nginx-win.ecsds.eu/ > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245105,245105#msg-245105 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Tue Dec 3 12:19:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Dec 2013 16:19:52 +0400 Subject: nginx - workers segfaulting In-Reply-To: <70882db870158d19ee06f8b30f59a845.NginxMailingListEnglish@forum.nginx.org> References: <70882db870158d19ee06f8b30f59a845.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131203121952.GD93176@mdounin.ru> Hello! On Mon, Dec 02, 2013 at 01:15:25PM -0500, nmarques wrote: > Dear All, > > I'm facing a small problem with NGINX; the workers have been segfaulting since > 11:20 this morning. From the kernel messages I got something like this: > > nginx[6888]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > nginx[6886]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > nginx[6890]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > nginx[6889]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > nginx[6892]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > nginx[6893]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > nginx[6894]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > nginx[6891]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > nginx[6896]: segfault at 8 ip 0000000000426a30 sp 00007fff85c01e70 error 4 > in nginx[400000+a8000] > > The log files show the following: > > 2013/12/02 18:13:53 [alert] 26876#0: worker process 30412 exited on signal > 11 > 2013/12/02 18:13:53 [alert] 26876#0: worker process 30414 exited on signal > 11 > 2013/12/02 18:13:53 [alert] 26876#0: worker process 30413 exited on signal > 11 > 2013/12/02 18:13:54 [alert] 26876#0: worker process 30418 exited on signal > 11 > 2013/12/02 18:13:55 [info] 30417#0: *14388 client closed connection while > SSL handshaking, client: 10.192.41.251, server: 0.0.0.0:4443 > 2013/12/02 18:13:56 [info] 30417#0: *14389 client closed connection while > waiting for request, client: 10.192.41.252, server: 0.0.0.0:80 > 2013/12/02 18:13:56 [info] 30417#0: *14390 client closed connection while > SSL handshaking, client: 10.192.41.252, server: 0.0.0.0:2443 > 2013/12/02 18:13:57 [info] 30417#0: *14397 client closed connection while > SSL handshaking, client: 10.192.41.252, server: 0.0.0.0:4443 > 2013/12/02 18:13:57 [info] 30417#0: *14403 client closed connection while > waiting for request, client: 10.192.41.251, server: 0.0.0.0:80 > 2013/12/02 18:13:57 [info] 30417#0: *14402 client closed connection while > SSL handshaking, client: 10.192.41.251, server: 0.0.0.0:2443 > > I can provide some cores, but I can't attach them here. My setup was running > fine until today (which coincides with a new webservice being > deployed). > > Could you please provide some extra information on how to further debug this > issue? From the messages I suspect you are hitting this bug: http://trac.nginx.org/nginx/ticket/235 Please follow the suggested workaround to see if it helps (i.e., move the "ssl_session_cache" directive to http{} level or use the same value in all server{} blocks listening on the same socket).
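The workaround described here — one session cache defined once at http{} level and shared by every virtual server on the socket — might look like the following sketch (cache name, size, and server names are illustrative):

```nginx
http {
    # A single shared cache for all server{} blocks, instead of
    # differing per-server ssl_session_cache settings.
    ssl_session_cache shared:SSL:10m;

    server {
        listen 443 ssl;
        server_name one.example.com;
        # ssl_certificate / ssl_certificate_key omitted
    }

    server {
        listen 443 ssl;
        server_name two.example.com;
        # ssl_certificate / ssl_certificate_key omitted
    }
}
```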
If it doesn't help, please follow the debugging hints here: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Dec 3 13:18:08 2013 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 03 Dec 2013 08:18:08 -0500 Subject: [ANN] Windows nginx 1.5.8.1 Caterpillar In-Reply-To: References: Message-ID: > Here is an error that occurs if I run "nginx -t" in the Windows CLI while the > nginx server is running, > > the message is always the same: "Assertion failed: > ngx_shared_sockets->pid==pid, file src/core/nginx.c, line 374" This is because you are running nginx.exe as a different user than the one nginx.exe is running under; it's the same issue as when you run 'nginx -s reload' while nginx is running as a service (which is a different user). See the FAQ on the project website, which describes a workaround for this problem. If you need to test a config on the same machine, make a copy of the environment in some other folder so that the PID file does not conflict with the running version. Or stop nginx, run your test, and start nginx. > And please try to compile a version with this module: > http://wiki.nginx.org/HttpSubsModule, and with the options I will have a look at https://github.com/yaoweibin/ngx_http_substitutions_filter_module > "--without-mail_pop3_module --without-mail_imap_module > --without-mail_smtp_module" if possible, thank you. I'm not going to remove modules unless they are bugged beyond repair. Adding/removing modules on the fly is on the long-term feature list. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245105,245161#msg-245161 From nginx-forum at nginx.us Tue Dec 3 16:11:24 2013 From: nginx-forum at nginx.us (nmarques) Date: Tue, 03 Dec 2013 11:11:24 -0500 Subject: nginx - workers segfaulting In-Reply-To: <20131203121952.GD93176@mdounin.ru> References: <20131203121952.GD93176@mdounin.ru> Message-ID: <80c92d3e73942913ce14334e20b7e31b.NginxMailingListEnglish@forum.nginx.org> Maxim, Right on dude.
Is there any way you can have this patch merged into trunk for the next release? So far I have blocked nginx updates. NM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245135,245167#msg-245167 From mdounin at mdounin.ru Tue Dec 3 16:23:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Dec 2013 20:23:26 +0400 Subject: nginx - workers segfaulting In-Reply-To: <80c92d3e73942913ce14334e20b7e31b.NginxMailingListEnglish@forum.nginx.org> References: <20131203121952.GD93176@mdounin.ru> <80c92d3e73942913ce14334e20b7e31b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131203162326.GK93176@mdounin.ru> Hello! On Tue, Dec 03, 2013 at 11:11:24AM -0500, nmarques wrote: > Maxim, > Right on dude. Is there any way you can have this patch merged into trunk for the next > release? The patch as in the ticket is wrong; it only hides the real problem. A proper patch to solve the problem is yet to be coded. As the problem can be easily resolved by using a symmetrical session cache configuration (better yet, using a single session cache at http level), it's not a high-priority task. > So far I have blocked nginx updates. That looks like a silly thing to do. The problem you are seeing was a result of a configuration change you've made, not of an nginx update. And blocking updates will only make sure you'll never get a fix. -- Maxim Dounin http://nginx.org/en/donation.html From alex.koch007 at outlook.com Tue Dec 3 17:55:52 2013 From: alex.koch007 at outlook.com (Alex Koch) Date: Tue, 3 Dec 2013 18:55:52 +0100 Subject: =?UTF-8?Q?NGINX_Module_-_create_variables=E2=80=8F?= Message-ID: Hi, I would like to create a small module which executes some routines and returns NGX_OK, somewhat similar in concept to http://blog.zhuzhaoyuan.com/2009/08/creating-a-hello-world-nginx-module/ However, once the module executes, I would like it to create variables such as $my_var which would be accessible via the config files to other blocks or to the same location block.
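For reference, variables defined by a module are normally registered from a preconfiguration callback, so they exist as soon as the configuration is parsed rather than only when a handler runs. A minimal, untested sketch against the nginx module API — module and handler names are illustrative, and the `ngx_module_t` boilerplate is omitted:

```c
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

static ngx_int_t ngx_http_my_add_variables(ngx_conf_t *cf);
static ngx_int_t ngx_http_my_var(ngx_http_request_t *r,
    ngx_http_variable_value_t *v, uintptr_t data);

static ngx_str_t ngx_http_my_var_name = ngx_string("my_var");

/* preconfiguration runs while the config is being parsed,
   before any directive can reference $my_var */
static ngx_http_module_t ngx_http_my_module_ctx = {
    ngx_http_my_add_variables,     /* preconfiguration */
    NULL,                          /* postconfiguration */
    NULL, NULL,                    /* create/init main conf */
    NULL, NULL,                    /* create/merge srv conf */
    NULL, NULL                     /* create/merge loc conf */
};

static ngx_int_t
ngx_http_my_add_variables(ngx_conf_t *cf)
{
    ngx_http_variable_t  *var;

    var = ngx_http_add_variable(cf, &ngx_http_my_var_name,
                                NGX_HTTP_VAR_CHANGEABLE);
    if (var == NULL) {
        return NGX_ERROR;
    }

    var->get_handler = ngx_http_my_var;
    return NGX_OK;
}

static ngx_int_t
ngx_http_my_var(ngx_http_request_t *r, ngx_http_variable_value_t *v,
    uintptr_t data)
{
    /* evaluated lazily, per request, when $my_var is used */
    v->data = (u_char *) "hello";
    v->len = sizeof("hello") - 1;
    v->valid = 1;
    v->no_cacheable = 0;
    v->not_found = 0;

    return NGX_OK;
}
```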
I am aware of the "register_variable" option, with "ngx_http_variable_t *var, *v;" - but the examples I have seen so far execute the module only once this variable is used in the config file. What I would like is to be able to define a couple of config variables once my module is loaded. Is this possible at all? If so, could you point me to a sample/module which does this, so I can learn from it? Many thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Dec 3 18:20:00 2013 From: nginx-forum at nginx.us (Peleke) Date: Tue, 03 Dec 2013 13:20:00 -0500 Subject: Subdomains no longer work In-Reply-To: <20131127231035.GB15722@craic.sysops.org> References: <20131127231035.GB15722@craic.sysops.org> Message-ID: <1afd6e9946388553fe017b15c1823b13.NginxMailingListEnglish@forum.nginx.org> Some time ago the subdomains were accessible as they should be; now they are not anymore, but sadly I don't know why, because I haven't tested them after every change. The config seems to be valid. nginx is running on a Debian Wheezy server with PHP 5.5.x, memcache and a few other services. I don't know why it serves the main domain as it should but no subdomains. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244807,245173#msg-245173 From francis at daoine.org Tue Dec 3 20:30:09 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 3 Dec 2013 20:30:09 +0000 Subject: Subdomains no longer work In-Reply-To: <1afd6e9946388553fe017b15c1823b13.NginxMailingListEnglish@forum.nginx.org> References: <20131127231035.GB15722@craic.sysops.org> <1afd6e9946388553fe017b15c1823b13.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131203203009.GM15722@craic.sysops.org> On Tue, Dec 03, 2013 at 01:20:00PM -0500, Peleke wrote: Hi there, > I don't know why it serves the main domain as it should but no subdomains. What happens if you comment out the line listen [::]:80 ipv6only=on; and restart?
If your response is anything other than "it works", please show one request that you make, what response you get, and what response you expect. The output of curl -v http://adminer.domain.tld/ will probably help. f -- Francis Daly francis at daoine.org From ianevans at digitalhit.com Tue Dec 3 21:13:03 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Tue, 03 Dec 2013 16:13:03 -0500 Subject: Any config tricks to stop site from framing us? Message-ID: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> Yesterday, I discovered that someone had registered a site (basically taking our domain name and adding a word to it) and then framed our whole site in theirs. By that I mean it's a full iframe job, with no toolbars showing. Not sure what they're up to, but I'd like to stop it. I know I can use a framebuster, but I'm wondering what I can do on the nginx.conf end to stop them dead in their tracks so not an ounce of our bandwidth goes to them. Thanks for any advice. From ilan at time4learning.com Tue Dec 3 21:15:11 2013 From: ilan at time4learning.com (Ilan Berkner) Date: Tue, 3 Dec 2013 16:15:11 -0500 Subject: Any config tricks to stop site from framing us? In-Reply-To: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> Message-ID: One possibility (not Nginx related directly) is to block their IP address at the firewall level from even getting to your server. On Tue, Dec 3, 2013 at 4:13 PM, Ian Evans wrote: > Yesterday, I discovered that someone had registered a site (basically > taking our domain name and adding a word to it) and then framed our whole > site in theirs. By that I mean it's a full iframe job, with no toolbars > showing. > > Not sure what they're up to, but I'd like to stop it. I know I can use a > framebuster, but I'm wondering what I can do on the nginx.conf end to stop > them dead in their tracks so not an ounce of our bandwidth goes to them. > > Thanks for any advice. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianevans at digitalhit.com Tue Dec 3 21:18:41 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Tue, 03 Dec 2013 16:18:41 -0500 Subject: Any config tricks to stop site from framing us? In-Reply-To: References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> Message-ID: On 2013-12-03 16:15, Ilan Berkner wrote: > One possibility (not Nginx related directly) is to block their IP > address at the firewall level from even getting to your server. Or add a deny ###.###.###.### to the server block? From mrvisser at gmail.com Tue Dec 3 21:32:40 2013 From: mrvisser at gmail.com (Branden Visser) Date: Tue, 3 Dec 2013 16:32:40 -0500 Subject: Any config tricks to stop site from framing us? In-Reply-To: References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> Message-ID: If they're using an iframe rather than a proxy then IP tricks won't help. 
Using the X-FRAME-OPTIONS header is probably your best bet [1] Hope that helps, Branden [1] http://stackoverflow.com/questions/2896623/how-to-prevent-my-site-page-to-be-loaded-via-3rd-party-site-frame-of-iframe On Tue, Dec 3, 2013 at 4:18 PM, Ian Evans wrote: > On 2013-12-03 16:15, Ilan Berkner wrote: >> >> One possibility (not Nginx related directly) is to block their IP >> address at the firewall level from even getting to your server. > > > Or add a deny ###.###.###.### to the server block? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ilan at time4learning.com Tue Dec 3 21:34:04 2013 From: ilan at time4learning.com (Ilan Berkner) Date: Tue, 3 Dec 2013 16:34:04 -0500 Subject: Any config tricks to stop site from framing us? In-Reply-To: References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> Message-ID: That's a good point, thanks. On Tue, Dec 3, 2013 at 4:32 PM, Branden Visser wrote: > If they're using an iframe rather than a proxy then IP tricks won't help. > > Using the X-FRAME-OPTIONS header is probably your best bet [1] > > Hope that helps, > Branden > > [1] > http://stackoverflow.com/questions/2896623/how-to-prevent-my-site-page-to-be-loaded-via-3rd-party-site-frame-of-iframe > > On Tue, Dec 3, 2013 at 4:18 PM, Ian Evans wrote: > > On 2013-12-03 16:15, Ilan Berkner wrote: > >> > >> One possibility (not Nginx related directly) is to block their IP > >> address at the firewall level from even getting to your server. > > > > > > Or add a deny ###.###.###.### to the server block? 
> > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Dec 3 21:39:12 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 3 Dec 2013 21:39:12 +0000 Subject: Any config tricks to stop site from framing us? In-Reply-To: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> Message-ID: <20131203213912.GN15722@craic.sysops.org> On Tue, Dec 03, 2013 at 04:13:03PM -0500, Ian Evans wrote: Hi there, > Yesterday, I discovered that someone had registered a site (basically > taking our domain name and adding a word to it) and then framed our > whole site in theirs. By that I mean it's a full iframe job, with no > toolbars showing. nginx sees the http request coming from the client. Look at the http headers that you see getting to your nginx, when you request your site directly. Look at the http headers that you see getting to your nginx, when you go to their site. Play "spot the difference". Most likely, the only some-bit reliable difference is in the Referer: header. But maybe you can see something else, when you use the browsers that you care about. 
> Not sure what they're up to, but I'd like to stop it. I know I can use > a framebuster, but I'm wondering what I can do on the nginx.conf end to > stop them dead in their tracks so not an ounce of our bandwidth goes to > them. You can't, reliably. You can, for browsers that send a Referer: header of their site, return different content -- either a simple rejection using something like http://nginx.org/r/valid_referers; or tailored content that indicates what you think of the framing site, or whatever else you can imagine. f -- Francis Daly francis at daoine.org From ianevans at digitalhit.com Tue Dec 3 21:46:42 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Tue, 03 Dec 2013 16:46:42 -0500 Subject: Any config tricks to stop site from framing us? In-Reply-To: References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> Message-ID: <7f661cce6dab31af50942f1afa1551fd@digitalhit.com> On 2013-12-03 16:32, Branden Visser wrote: > If they're using an iframe rather than a proxy then IP tricks won't > help. > > Using the X-FRAME-OPTIONS header is probably your best bet [1] > > Hope that helps, > Branden > > [1] > > http://stackoverflow.com/questions/2896623/how-to-prevent-my-site-page-to-be-loaded-via-3rd-party-site-frame-of-iframe Thanks. Just did a cursory look, but does the header allow some sites to frame? e.g. letting stumbleupon do it but not others? 
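Taken together, the two suggestions in this thread — an X-Frame-Options response header plus a Referer check via valid_referers — might be sketched like this (domain names are placeholders, and neither measure is fully reliable: the header depends on browser support, and the Referer can be absent or forged):

```nginx
server {
    listen 80;
    server_name example.com;

    # Ask browsers to refuse framing from any other origin.
    add_header X-Frame-Options SAMEORIGIN;

    # Allow empty/stripped Referers and our own server names;
    # anything else (e.g. the framing site) sets $invalid_referer.
    valid_referers none blocked server_names;

    location / {
        if ($invalid_referer) {
            return 403;
        }
        # normal content handling here
    }
}
```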
From nginx-forum at nginx.us Tue Dec 3 21:48:18 2013 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 03 Dec 2013 16:48:18 -0500 Subject: [ANN] Windows nginx 1.5.8.1 Caterpillar In-Reply-To: References: Message-ID: <8896eb38d1121f6f1be0e73ff0c708f4.NginxMailingListEnglish@forum.nginx.org> > And please try to compile a version with this module: > http://wiki.nginx.org/HttpSubsModule There's a beta you can try, nginx 1.5.8.2 Caterpillar BETA1.zip, which includes https://github.com/yaoweibin/ngx_http_substitutions_filter_module, which I had to port a little bit (2 bugs). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245105,245184#msg-245184 From mrvisser at gmail.com Tue Dec 3 21:49:42 2013 From: mrvisser at gmail.com (Branden Visser) Date: Tue, 3 Dec 2013 16:49:42 -0500 Subject: Any config tricks to stop site from framing us? In-Reply-To: <7f661cce6dab31af50942f1afa1551fd@digitalhit.com> References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> <7f661cce6dab31af50942f1afa1551fd@digitalhit.com> Message-ID: On Tue, Dec 3, 2013 at 4:46 PM, Ian Evans wrote: > On 2013-12-03 16:32, Branden Visser wrote: >> >> If they're using an iframe rather than a proxy then IP tricks won't help. >> >> Using the X-FRAME-OPTIONS header is probably your best bet [1] >> >> Hope that helps, >> Branden >> >> [1] >> >> >> http://stackoverflow.com/questions/2896623/how-to-prevent-my-site-page-to-be-loaded-via-3rd-party-site-frame-of-iframe > > > Thanks. Just did a cursory look, but does the header allow some sites to > frame? e.g. letting stumbleupon do it but not others? > No, I don't believe that's the case.
There are other options listed in there such as JavaScript tricks to verify the "self" frame is the same as the "parent" frame. So you can also have a secondary check like that. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ianevans at digitalhit.com Tue Dec 3 21:49:55 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Tue, 03 Dec 2013 16:49:55 -0500 Subject: Any config tricks to stop site from framing us? In-Reply-To: <20131203213912.GN15722@craic.sysops.org> References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> <20131203213912.GN15722@craic.sysops.org> Message-ID: On 2013-12-03 16:39, Francis Daly wrote: > On Tue, Dec 03, 2013 at 04:13:03PM -0500, Ian Evans wrote: > > Hi there, > >> Yesterday, I discovered that someone had registered a site >> (basically >> taking our domain name and adding a word to it) and then framed our >> whole site in theirs. By that I mean it's a full iframe job, with no >> toolbars showing. > > nginx sees the http request coming from the client. > > Look at the http headers that you see getting to your nginx, when you > request your site directly. > > Look at the http headers that you see getting to your nginx, when you > go to their site. > > Play "spot the difference". > > Most likely, the only some-bit reliable difference is in the Referer: > header. But maybe you can see something else, when you use the > browsers > that you care about. > >> Not sure what they're up to, but I'd like to stop it. I know I can >> use >> a framebuster, but I'm wondering what I can do on the nginx.conf end >> to >> stop them dead in their tracks so not an ounce of our bandwidth goes >> to >> them. > > You can't, reliably. 
> > You can, for browsers that send a Referer: header of their site, > return > different content -- either a simple rejection using something like > http://nginx.org/r/valid_referers; or tailored content that indicates > what you think of the framing site, or whatever else you can imagine. > Thanks for the info. I'll have to take a look. I'm also hoping to get them shut down as I've talked to their registrar. I'm hoping they grabbed a whole bunch of domains to vampire and not just mine. If it was just us, that'd be creepy From mrvisser at gmail.com Tue Dec 3 21:56:50 2013 From: mrvisser at gmail.com (Branden Visser) Date: Tue, 3 Dec 2013 16:56:50 -0500 Subject: Any config tricks to stop site from framing us? In-Reply-To: References: <11fcffb2663578b4ba9f3fa3e8f4c5bc@digitalhit.com> <7f661cce6dab31af50942f1afa1551fd@digitalhit.com> Message-ID: Sorry I misinterpreted your question. The header does not support specifying specific hosts, for example, that you want to allow iframing from. Using the JavaScript technique, perhaps something could be done to ensure window.parent.location.href matches some pattern or list of hosts. I haven't implemented anything like that before, though. Hope that helps, Branden On Tue, Dec 3, 2013 at 4:49 PM, Branden Visser wrote: > On Tue, Dec 3, 2013 at 4:46 PM, Ian Evans wrote: >> On 2013-12-03 16:32, Branden Visser wrote: >>> >>> If they're using an iframe rather than a proxy then IP tricks won't help. >>> >>> Using the X-FRAME-OPTIONS header is probably your best bet [1] >>> >>> Hope that helps, >>> Branden >>> >>> [1] >>> >>> >>> http://stackoverflow.com/questions/2896623/how-to-prevent-my-site-page-to-be-loaded-via-3rd-party-site-frame-of-iframe >> >> >> Thanks. Just did a cursory look, but does the header allow some sites to >> frame? e.g. letting stumbleupon do it but not others? >> > > No I don't believe that's the case. 
If the browser supports it, it > *should* stop anyone from iframing, but you're under the mercy of the > browser implementation AFAIK -- so maybe Google's Chrome has some big > money deals with service providers like stumbleupon, for example (pure > speculation). There are other options listed in there such as > JavaScript tricks to verify the "self" frame is the same as the > "parent" frame. So you can also have a secondary check like that. > >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From www at lc365.net Wed Dec 4 04:17:50 2013 From: www at lc365.net (=?gb2312?B?ufnV8cGi?=) Date: Wed, 4 Dec 2013 12:17:50 +0800 Subject: [ANN] Windows nginx 1.5.8.1 Caterpillar In-Reply-To: References: , Message-ID: Hello, > > Here is an error occurred if I run "nginx -t" in Windows CLI when the > > nginx server is in running, > > > > The tips always is the same: "Assertion failed: > > ngx_shared_sockets->pid==pid, file src/core/nginx.c, line 374" > > This is because you are running nginx.exe as a different user then nginx.exe > is running under, its the same issue when you run 'nginx -s reload' while > nginx is running as a service (which is a different user) see the FAQ on the > project website which describes a workaround to this problem. > > If you need to test a config on the same machine then make a copy of the > environment in some other folder so that the PID file does not conflict with > the running version. Or stop nginx, run your test and start nginx. > I think it is not true. I can run nginx -s reload and all other -s command on this situation. And the Windows version from nginx.org can run nginx -t without problem. Please kindly look further for the reason. Thank you. 
> > And please try to compile a version with this module: > http://wiki.nginx.org/HttpSubsModule, and with the options > > I will have a look at > https://github.com/yaoweibin/ngx_http_substitutions_filter_module > Thank you. > > "--without-mail_pop3_module --without-mail_imap_module > > --without-mail_smtp_module" if possible, thank you. > > I'm not going to remove modules unless they are bugged beyond repair. > > Adding/removing modules on the fly is on the long term feature list > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245105,245161#msg-245161 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 4 07:39:30 2013 From: nginx-forum at nginx.us (Oifnseoier) Date: Wed, 04 Dec 2013 02:39:30 -0500 Subject: How to crack Windows 7 administrator user password In-Reply-To: References: Message-ID: <019948d608f217f2bfe14d1f66a5a434.NginxMailingListEnglish@forum.nginx.org> To crack Windows 7 password I think the best way can be reset software or reset disk. You can use your USB drive to download Ophcrack or Anmosoft Windows Password Reset to create a disk by yourself. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245032,245191#msg-245191 From nginx-forum at nginx.us Wed Dec 4 07:54:30 2013 From: nginx-forum at nginx.us (r004) Date: Wed, 04 Dec 2013 02:54:30 -0500 Subject: rewrite or return directives for discuz v3.1 Message-ID: <9376a42c036c2fe04c333c9d1a21f358.NginxMailingListEnglish@forum.nginx.org> Hello, I want to write a block to put in my VHOST config file. I want: 1. if there is "/install/index.php/" in the link, it should be removed from the link. 2. if the format is like /uc_server/blab.php/uc_server/blahblah, it should change to /uc_server/blabal. What should I put in my VHOST config file, and where? My current VHOST format is:
[CODE]
server {
    server_name www.DOMAINNAME;
    rewrite ^(.*) http://DOMAINNAME$1 permanent;
}

server {
    listen 80;
    server_name DOMAINNAME;
    root /var/www/DOMAINNAME/htdocs;
    index index.php;
    include /etc/nginx/security;

    # Logging --
    access_log /var/log/nginx/DOMAINNAME.access.log;
    error_log /var/log/nginx/DOMAINNAME.error.log notice;

    # serve static files directly
    location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ {
        access_log off;
        expires max;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm/DOMAINNAME.socket;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}
[/CODE]
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245192,245192#msg-245192 From luky-37 at hotmail.com Wed Dec 4 09:36:58 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 4 Dec 2013 10:36:58 +0100 Subject: nginx - workers segfaulting In-Reply-To: <20131203162326.GK93176@mdounin.ru> References: <20131203121952.GD93176@mdounin.ru>, <80c92d3e73942913ce14334e20b7e31b.NginxMailingListEnglish@forum.nginx.org>, <20131203162326.GK93176@mdounin.ru> Message-ID: Hi! > The patch as in the ticket is wrong, it only hides the real > problem. Proper patch to solve the problem is to be coded. > > As the problem can be easily resolved by using symmetrical session > cache configuration (better yet, using a single session cache at > http level), it's not a high priority task. Agreed, the configuration workaround is viable; but the problem lies in the actual troubleshooting. Coming to this conclusion takes time, time the users don't have when the workers are crashing.
So perhaps until a proper fix is ready, we can add a note in the documentation: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache Its not about the workaround, its about knowing such issues and limitations in advance. Regards, Lukas From contact at jpluscplusm.com Wed Dec 4 11:47:29 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 4 Dec 2013 11:47:29 +0000 Subject: rewrite or return directives for discuz v3.1 In-Reply-To: <9376a42c036c2fe04c333c9d1a21f358.NginxMailingListEnglish@forum.nginx.org> References: <9376a42c036c2fe04c333c9d1a21f358.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 4 December 2013 07:54, r004 wrote: > hello; > I want to write a block to put in my VHOST config file. > i want > 1. if there is "/install/index.php/" in the link. it shouldbe removed from > the link. > 2. if the format is like /uc_server/blab.php/uc_server/blahblah ===it > should change to===>/uc_server/blabal > > what should I put in my VHOST config file and where? What part are you having difficulties with? Have you read the documentation for "location" and "rewrite"? Have a read of that, and then let us know if you're still having a problem. J From contact at jpluscplusm.com Wed Dec 4 11:48:41 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 4 Dec 2013 11:48:41 +0000 Subject: nginx - workers segfaulting In-Reply-To: References: <20131203121952.GD93176@mdounin.ru> <80c92d3e73942913ce14334e20b7e31b.NginxMailingListEnglish@forum.nginx.org> <20131203162326.GK93176@mdounin.ru> Message-ID: On 4 December 2013 09:36, Lukas Tribus wrote: > Agreed, the configuration workaround is viable; but the problem lies > in the actual troubleshooting. Coming to this conclusion takes time, > time the users don't have when the workers are crashing. 
Its not always > possible to rollback the configuration or understanding right away what > particularity caused the crash (after a move of vhosts from one server to > another for example). > > So perhaps until a proper fix is ready, we can add a note in the documentation: > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache > > Its not about the workaround, its about knowing such issues and limitations > in advance. +1. From reallfqq-nginx at yahoo.fr Wed Dec 4 13:16:39 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 4 Dec 2013 14:16:39 +0100 Subject: nginx - workers segfaulting In-Reply-To: References: <20131203121952.GD93176@mdounin.ru> <80c92d3e73942913ce14334e20b7e31b.NginxMailingListEnglish@forum.nginx.org> <20131203162326.GK93176@mdounin.ru> Message-ID: Hello, On Wed, Dec 4, 2013 at 12:48 PM, Jonathan Matthews wrote: > On 4 December 2013 09:36, Lukas Tribus wrote: > > Agreed, the configuration workaround is viable; but the problem lies > > in the actual troubleshooting. Coming to this conclusion takes time, > > time the users don't have when the workers are crashing. Its not always > > possible to rollback the configuration or understanding right away what > > particularity caused the crash (after a move of vhosts from one server to > > another for example). > > > > So perhaps until a proper fix is ready, we can add a note in the > documentation: > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache > > > > Its not about the workaround, its about knowing such issues and > limitations > > in advance. > > +1. > ?I also agree on that.? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Dec 4 13:22:42 2013 From: nginx-forum at nginx.us (omidr) Date: Wed, 04 Dec 2013 08:22:42 -0500 Subject: Url rewriting faliure in case of UTF8 urls Message-ID: <72d44d98c4b7b013f0405e7130e57057.NginxMailingListEnglish@forum.nginx.org> I want to rewrite some old urls on my site to new ones inorder to prevent 404 errors. For English urls everything is OK and adding a rewrite rule like the following to the server block works fine. rewrite ^/omid/$ /omidreza; But when it comes to urls containing non-English characters the rule doesn`t work. Has anyone experienced the same issue? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245202,245202#msg-245202 From nginx-forum at nginx.us Wed Dec 4 18:49:15 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 04 Dec 2013 13:49:15 -0500 Subject: [ANN] Windows nginx 1.5.8.1 Caterpillar In-Reply-To: References: Message-ID: <38a18e43ff69ce5100ad6773525afe56.NginxMailingListEnglish@forum.nginx.org> > Please kindly look further for the reason. Thank you. 
A new beta you can try, nginx 1.5.8.2 Caterpillar BETA2.zip
+ Fix for nginx -t 'Assertion failed' issue
+ HttpSubsModule
* The debug version is no longer needed; Intel static profiler data is used, nginx crash info/logging or event dump info is all that is needed
* Intel static profiler "the need for speed" compiler optimization

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245105,245207#msg-245207

From francis at daoine.org  Wed Dec  4 20:48:37 2013
From: francis at daoine.org (Francis Daly)
Date: Wed, 4 Dec 2013 20:48:37 +0000
Subject: Url rewriting faliure in case of UTF8 urls
In-Reply-To: <72d44d98c4b7b013f0405e7130e57057.NginxMailingListEnglish@forum.nginx.org>
References: <72d44d98c4b7b013f0405e7130e57057.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20131204204837.GA517@craic.sysops.org>

On Wed, Dec 04, 2013 at 08:22:42AM -0500, omidr wrote:

Hi there,

> But when it comes to urls containing non-English characters the rule doesn't
> work. Has anyone experienced the same issue?

It seems to work for me. What config do you use, what request do you make, what response do you get?

f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us  Wed Dec  4 20:50:20 2013
From: nginx-forum at nginx.us (vijeesh)
Date: Wed, 04 Dec 2013 15:50:20 -0500
Subject: ssl error
Message-ID: <26557ff4d5b22cbf70ae1d90a6971fe4.NginxMailingListEnglish@forum.nginx.org>

we are getting the error "SSL Exception: No peer certificate" at random. Can anyone please help me troubleshoot it? Can it be because of the high server load?
certificates seems to be installed correctly and we see the errors in very less numbers -Vij Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245212,245212#msg-245212 From francis at daoine.org Wed Dec 4 20:53:29 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 4 Dec 2013 20:53:29 +0000 Subject: rewrite or return directives for discuz v3.1 In-Reply-To: <9376a42c036c2fe04c333c9d1a21f358.NginxMailingListEnglish@forum.nginx.org> References: <9376a42c036c2fe04c333c9d1a21f358.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131204205329.GB517@craic.sysops.org> On Wed, Dec 04, 2013 at 02:54:30AM -0500, r004 wrote: Hi there, > 1. if there is "/install/index.php/" in the link. it shouldbe removed from > the link. In the rewrite line, capture the parts before and after /install/index.php/, and rewrite to those parts without the middle bit. > 2. if the format is like /uc_server/blab.php/uc_server/blahblah ===it > should change to===>/uc_server/blabal What does "format is like" mean? urls that start with something, or that end with something, or that include a specific pattern somewhere? Which parts of the input are used to make the output? If you can write down how the matching should happen, the rewrite rule should become clear. http://nginx.org/r/rewrite f -- Francis Daly francis at daoine.org From kg.woltz at nasa.gov Wed Dec 4 20:59:23 2013 From: kg.woltz at nasa.gov (Woltz, KG (GSFC-703.0)[ASRC PRIMUS SOLUTIONS]) Date: Wed, 4 Dec 2013 20:59:23 +0000 Subject: No subject Message-ID: - KG -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.birdsong at gmail.com Wed Dec 4 21:04:17 2013 From: david.birdsong at gmail.com (David Birdsong) Date: Wed, 4 Dec 2013 13:04:17 -0800 Subject: why the change in links on the wiki? Message-ID: I noticed this a few months back. Why do the links on wiki.nginx.org link to nginx.org/en/docs/ instead of to wiki.nginx.org? 
For example: the http core module doc link on http://wiki.nginx.org/Modules points -> http://nginx.org/en/docs/http/ngx_http_core_module.html instead of: http://wiki.nginx.org/HttpCoreModule

I kind of hate this. I find the wiki version of the module docs much more readable.

Why the change?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kg.woltz at nasa.gov  Wed Dec  4 21:26:04 2013
From: kg.woltz at nasa.gov (Woltz, KG (GSFC-703.0)[ASRC PRIMUS SOLUTIONS])
Date: Wed, 4 Dec 2013 21:26:04 +0000
Subject: nginx fastcgi_pass and Browser long transfer time
Message-ID:

I'm new to nginx but not to web development. I have the server configured to do almost everything I need ... including fastcgi_pass for Perl on Solaris. When requesting any cgi page the browser displays the standard statuses "Read servername", "Waiting for servername" and "Transferring data from servername". I have two things that I need to fix.

Problem 1: The "transferring data..." step takes 1-2 min no matter the size of the content, even hello world.
Problem 2: Any little issue and the fastcgi server socket pids go

Thanks,
- KG
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru  Wed Dec  4 22:33:39 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 5 Dec 2013 02:33:39 +0400
Subject: why the change in links on the wiki?
In-Reply-To:
References:
Message-ID: <20131204223339.GY93176@mdounin.ru>

Hello!

On Wed, Dec 04, 2013 at 01:04:17PM -0800, David Birdsong wrote:

> I noticed this a few months back. Why do the links on wiki.nginx.org link
> to nginx.org/en/docs/ instead of to wiki.nginx.org?
>
> For example: the http core module doc link on
> http://wiki.nginx.org/Modules points ->
> http://nginx.org/en/docs/http/ngx_http_core_module.html instead of:
> http://wiki.nginx.org/HttpCoreModule
>
> I kind of hate this. I find the wiki version of the module docs much more
> readable.
>
> Why the change?
Wiki pages for standard modules were created as an English translation of the Russian docs a long time ago, when there were no official English docs. Since then, official English docs became available.

In past years we've faced multiple cases of old/incorrect/missing descriptions on the wiki confusing people, so these pages were deprecated and links were changed to the official docs instead. Moreover, changing the pages to do redirects instead was recently discussed. Supporting multiple versions of the documentation isn't something we want to spend time on, and bit rot on these wiki pages can't just be ignored.

What exactly do you find "much more readable"? Wording? Design? Maybe it's something that can be improved in the documentation?

--
Maxim Dounin
http://nginx.org/en/donation.html

From david.birdsong at gmail.com  Wed Dec  4 22:58:48 2013
From: david.birdsong at gmail.com (David Birdsong)
Date: Wed, 4 Dec 2013 14:58:48 -0800
Subject: why the change in links on the wiki?
In-Reply-To: <20131204223339.GY93176@mdounin.ru>
References: <20131204223339.GY93176@mdounin.ru>
Message-ID:

On Wed, Dec 4, 2013 at 2:33 PM, Maxim Dounin wrote:

> Hello!
>
> On Wed, Dec 04, 2013 at 01:04:17PM -0800, David Birdsong wrote:
>
> > I noticed this a few months back. Why do the links on wiki.nginx.org link
> > to nginx.org/en/docs/ instead of to wiki.nginx.org?
> >
> > For example: the http core module doc link on
> > http://wiki.nginx.org/Modules points ->
> > http://nginx.org/en/docs/http/ngx_http_core_module.html instead of:
> > http://wiki.nginx.org/HttpCoreModule
> >
> > I kind of hate this. I find the wiki version of the module docs much more
> > readable.
> >
> > Why the change?
>
> Wiki pages for standard modules were created as an English
> translation of the Russian docs a long time ago, when there were no
> official English docs. Since then, official English docs became
> available.
> > In past years we've faced multiple cases of old/incorrect/missing > descriptions on wiki confusing people, so these pages were > deprecated and links were changed to official docs instead. > Moreover, changing pages to do redirects instead was recenly > discussed. Supporing multiple versions of the documentation isn't > something we want to spent time on, and bit rot on these pages on > wiki can't be just ignored. > > What exactly do you find "much more readable"? Wording? Design? > May be it's something that can be improved in the documentation? > I won't claim to have any design chops, but as a reader, I prefer the wiki style and find that I can get information faster. I do a lot of nginx 'coding', so a large part of my life is spent referring to the docs--speed of navigation helps me a ton. - the index layout is easier to scan, possibly because of the colors and right-justify? - section headers seem absent on the standard docs - the syntax highlighting on the wiki makes it easy to spot examples I jump around to different modules frequently and so the icon-based navigation makes it easy for me to land my mouse on the modules or addons icon. I would have smudged the colors off of those icons with how frequently I'm press on them if these pages were physical. For me it's just a simple case of one being superior to the other. If nobody else cares, I'll shut up and deal with it. > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Dec 4 23:00:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Dec 2013 03:00:51 +0400 Subject: nginx fastcgi_pass and Browser long transfer time In-Reply-To: References: Message-ID: <20131204230051.GA93176@mdounin.ru> Hello! 
On Wed, Dec 04, 2013 at 09:26:04PM +0000, Woltz, KG (GSFC-703.0)[ASRC PRIMUS SOLUTIONS] wrote: > I'm new to nginx but not to web development. > I have the server configured to do almost everything I need ... including fastcgi_pass for Perl on Solaris. > When requesting any cgi page the browser displays the standard status's "Read servername", "Waiting for servername" and "Transferring data from servername". > I have two things that I need to fix. > Problem 1: The "transferring data..." step takes 1-2 min no matter the size of the content, even hello world. > Problem 2: Any little issue and the fastcgi server socket pids go What you describe looks like a problem with FastCGI application you are running. You may want to take a closer look there. Please note that nginx doesn't do any management of FastCGI applications, it just passes requests to configured backends. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Dec 5 00:42:17 2013 From: nginx-forum at nginx.us (omidr) Date: Wed, 04 Dec 2013 19:42:17 -0500 Subject: Url rewriting faliure in case of UTF8 urls In-Reply-To: <20131204204837.GA517@craic.sysops.org> References: <20131204204837.GA517@craic.sysops.org> Message-ID: As an example I want to redirect "/???????" to "/??????? ???????" and I am using a rewrite rulr like the one below but things go wrong and I get a 404. rewrite_rule: rewrite ^/???????/$ /??????? ???????; And what do you mean by config? Do you mean nginx settings or configurations at compile time? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245202,245225#msg-245225 From chigga101 at gmail.com Thu Dec 5 01:10:53 2013 From: chigga101 at gmail.com (Matthew Ngaha) Date: Thu, 5 Dec 2013 01:10:53 +0000 Subject: location{} hassles:( Message-ID: Hey all I can't seem to get the hang of how to use/write location blocks. I have mailed the list before and I do understand how it works but what I've tried fails. 
I have tried a few things and in both cases i don't think what i'm doing is making a difference. Here they are: http://bpaste.net/show/YXFXLuJ1Uc1UFQDc8ctn/ The 1st test was to put location {} nested into the main location {} that was already in nginx.conf. I put the "deny all;" just to see if it would find or ignore/ my new location {} but the nested location test directory/file isn't denying access. The 2nd location {} test is below the main one. In the installed nginx folder it uses html/ as the root, i tried to create a test "delete" folder as an alternative location to the html root folder but i can't seem to get it right. My browser returns a "404 Not Found" when i try to access files in localhost/delete. In both cases my added locations {} are being ignored. Any ideas what i can do? From artemrts at ukr.net Thu Dec 5 06:47:37 2013 From: artemrts at ukr.net (wishmaster) Date: Thu, 05 Dec 2013 08:47:37 +0200 Subject: why the change in links on the wiki? In-Reply-To: <20131204223339.GY93176@mdounin.ru> References: <20131204223339.GY93176@mdounin.ru> Message-ID: <1386225281.122761076.q4b5ggyc@frv34.ukr.net> --- Original message --- From: "Maxim Dounin" Date: 5 December 2013, 00:33:44 > Hello! > > On Wed, Dec 04, 2013 at 01:04:17PM -0800, David Birdsong wrote: > > > I noticed this a few months back. Why do the links on wiki.nginx.org link > > to > > nginx.org/en/docs/ instead of to wiki.nginx.org? > > > > For example: the http core module doc link on > > http://wiki.nginx.org/Modulespoints -> > > http://nginx.org/en/docs/http/ngx_http_core_module.html instead of: > > http://wiki.nginx.org/HttpCoreModule > > > > I kind of hate this. I find the wiki version of the module docs much more > > readable. > > > > Why the change? > > Wiki pages for standard modules were created as an English > translation of Russian docs long time ago, when there were no > official English docs. Since then, official English became > available. 
>
> In past years we've faced multiple cases of old/incorrect/missing
> descriptions on wiki confusing people, so these pages were
> deprecated and links were changed to official docs instead.
> Moreover, changing pages to do redirects instead was recently
> discussed. Supporting multiple versions of the documentation isn't
> something we want to spend time on, and bit rot on these pages on
> wiki can't be just ignored.
>
> What exactly do you find "much more readable"? Wording? Design?
> Maybe it's something that can be improved in the documentation?

From my point of view, there is one thing which would be very useful in the new official docs (nginx.org/*/docs): a table of contents listing the directives of the current module. Maybe even with a 'fixed' position.

From reeteshr at outlook.com  Thu Dec  5 10:40:55 2013
From: reeteshr at outlook.com (Reetesh Ranjan)
Date: Thu, 5 Dec 2013 16:10:55 +0530
Subject: Sphinx2 Search Platform Upstream Module Pre-Release
Message-ID:

Hello! I have developed an upstream module for the Sphinx2 search platform. It's available at: https://github.com/reeteshranjan/sphinx2-nginx-module

It talks to the Sphinx2 searchd component for performing search. It's at an infant stage right now and more work is needed to get it production ready. I am working on it. In case anyone is interested in working with me on this module, please let me know. If you are looking to play with it, it is built and installed like any other third-party module, using the source available at the link provided above.

Thanks,
Reetesh
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From r1ch+nginx at teamliquid.net  Thu Dec  5 10:54:56 2013
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Thu, 5 Dec 2013 11:54:56 +0100
Subject: Why the status code restriction on add_header?
Message-ID:

Hello,
I'm trying to add some custom headers to a 403 response, but had a hard time doing so.
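A minimal case of what I was attempting (names hypothetical, not a working fix):

```nginx
location /private/ {
    deny all;                            # nginx answers with a 403
    add_header X-Denied-Reason "acl";    # never shows up on the 403
}
```

The same add_header works fine when the location serves a 200.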
Looking through the docs, it seems add_header "Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307. " I was wondering what the reasoning was behind this pre-set list of status codes and why it isn't possible for example to add a header to a 403 or 404 response. Also, if there is any workaround that would make this possible, I'd be interested to hear it. Thanks, Rich. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Dec 5 11:00:20 2013 From: nginx-forum at nginx.us (mojiz) Date: Thu, 05 Dec 2013 06:00:20 -0500 Subject: using nginx to cache file serving Message-ID: <7eee91282d570a968dfd1c21a9b9a8c7.NginxMailingListEnglish@forum.nginx.org> Hi We are using nginx & SATA disks on several of our file servers.However after hitting 12MBytes/s of port speed (server nic is a 1gbps), the server gets down to it's knees. SSH becomes very slow/unresponsive and our users complain about slow speeds and dropped connections. Our users/downloaders have very slow speed (like 512kbits/s) so there are lots of users downloading from each server. I thought the problem was from using SATA disk and the bottleneck was the HDD IO speed.So we ordered a new server with SAS hdds & RAID0, the performance has been great and our server is serving about 20MBytes/s, (same provider) However I'm still not satisfied. 1.Is there any configuration options I'm missing? I'm using the default config. 2. Both servers rams are fully utilized and the memory is 100% filled, When I stop nginx the memory is still used up but why? (there is no other service running on the server) If this is the nginx cache, why it's not emptied when I exit/kill nginx processes? 3. My most important question is, Is it possible/wise that I use the SAS server as a rev-proxy/cache to serve the downloads from the SATA server? 
My idea is when a user connects to the SAS server to download a file, The SAS server requests the file from the SATA server(using the internal connection in datacenter), caches it on it's fast disk, then serves the file to my slow downloader. This way the SATA HDD doesn't have to seek to the file location each time and since there are lots of downloaders this happens a lot. Sorry for the long message and thank you for your time Mojiz Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245244,245244#msg-245244 From r1ch+nginx at teamliquid.net Thu Dec 5 11:13:23 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Thu, 5 Dec 2013 12:13:23 +0100 Subject: using nginx to cache file serving In-Reply-To: <7eee91282d570a968dfd1c21a9b9a8c7.NginxMailingListEnglish@forum.nginx.org> References: <7eee91282d570a968dfd1c21a9b9a8c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: 1. Depends a lot on your environment. If you are sure you are I/O bound, there isn't too much you can tweak. 2. How are you measuring this? It's normal for there to be very little "free" memory on Linux due to buffers and filesystem caches. Look at the -/+ buffers/cache line for your available memory. If you are actually running out of memory and hitting swap, this is likely why performance is so low. 3. Take a look at ngx_slowfs_cache, it sounds like it might fit the bill - http://labs.frickle.com/nginx_ngx_slowfs_cache Rich. On Thu, Dec 5, 2013 at 12:00 PM, mojiz wrote: > Hi > We are using nginx & SATA disks on several of our file servers.However > after > hitting 12MBytes/s of port speed (server nic is a 1gbps), the server gets > down to it's knees. SSH becomes very slow/unresponsive and our users > complain about slow speeds and dropped connections. Our users/downloaders > have very slow speed (like 512kbits/s) so there are lots of users > downloading from each server. 
I thought the problem was from using SATA > disk > and the bottleneck was the HDD IO speed.So we ordered a new server with SAS > hdds & RAID0, the performance has been great and our server is serving > about > 20MBytes/s, (same provider) > However I'm still not satisfied. > 1.Is there any configuration options I'm missing? I'm using the default > config. > 2. Both servers rams are fully utilized and the memory is 100% filled, When > I stop nginx the memory is still used up but why? (there is no other > service > running on the server) > If this is the nginx cache, why it's not emptied when I exit/kill nginx > processes? > 3. My most important question is, Is it possible/wise that I use the SAS > server as a rev-proxy/cache to serve the downloads from the SATA server? My > idea is when a user connects to the SAS server to download a file, The SAS > server requests the file from the SATA server(using the internal connection > in datacenter), caches it on it's fast disk, then serves the file to my > slow > downloader. This way the SATA HDD doesn't have to seek to the file location > each time and since there are lots of downloaders this happens a lot. > > Sorry for the long message and thank you for your time > Mojiz > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,245244,245244#msg-245244 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chaitanya_Kamsu at infosys.com Thu Dec 5 12:59:11 2013 From: Chaitanya_Kamsu at infosys.com (Chaitanya Kamsu) Date: Thu, 5 Dec 2013 12:59:11 +0000 Subject: TCP -TLS Redirection Message-ID: Hi Team , I want to do a tcp to tls proxy. we need to communicate to apple server via tls (tcp over ssl). 
our server does not have internet access, so we need to use a proxy server that has internet access, which can:

* either accept the TCP communication and do a TLS communication with APNS. In this case our server just needs to send data over TCP to the proxy server without any SSL.
* or, if the proxy server can do a transparent redirection, our server can send data over TLS.

we have tried nginx; it is able to do TCP-to-TCP redirection, but nginx does not allow the ssl directive to be specified in the upstream block of the tcp configuration. any help in this direction will be greatly appreciated. i am giving below the tcp configuration in the nginx configuration:

tcp {
    upstream cluster {
        server 127.0.0.1:9521;
    }
    server {
        listen 127.0.0.1:5894;
        access_log logs/tcp_access.log;
        so_keepalive off;
        timeout 60000;
        server_name Proxy;
        proxy_pass cluster;
    }
}

Please give the solution.

Regards,
Chaitanya K
DTAG Push Server
VOIP:+91 8039135521

**************** CAUTION - Disclaimer *****************
This e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely for the use of the addressee(s). If you are not the intended recipient, please notify the sender by e-mail and delete the original message. Further, you are not to copy, disclose, or distribute this e-mail or its contents to any other person and any such actions are unlawful. This e-mail may contain viruses. Infosys has taken every reasonable precaution to minimize this risk, but is not liable for any damage you may sustain as a result of any virus in this e-mail. You should carry out your own virus checks before opening the e-mail or attachment. Infosys reserves the right to monitor and review the content of all messages sent to or from this e-mail address. Messages sent to or from this e-mail address may be stored on the Infosys e-mail system.
***INFOSYS******** End of Disclaimer ********INFOSYS***

From mdounin at mdounin.ru  Thu Dec  5 13:36:01 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 5 Dec 2013 17:36:01 +0400
Subject: Why the status code restriction on add_header?
In-Reply-To:
References:
Message-ID: <20131205133601.GC95113@mdounin.ru>

Hello!

On Thu, Dec 05, 2013 at 11:54:56AM +0100, Richard Stanway wrote:

> Hello,
> I'm trying to add some custom headers to a 403 response, but had a hard
> time doing so. Looking through the docs, it seems add_header "Adds the
> specified field to a response header provided that the response code equals
> 200, 201, 204, 206, 301, 302, 303, 304, or 307. "
>
> I was wondering what the reasoning was behind this pre-set list of status
> codes and why it isn't possible for example to add a header to a 403 or 404
> response. Also, if there is any workaround that would make this possible,
> I'd be interested to hear it.

The "add_header" directive, much like "expires", is intended to be used to add headers to positive responses, like "Cache-Control" and so on. Adding headers to error responses can easily end up in unexpected behaviour: e.g., an error response may be cached for a year if you add an "expires 1y" to your static files location (on the assumption that you only add new files, so it's safe). Due to the above, headers are only added to response codes in a limited list.

As of now, there is no easy way to add response headers to arbitrary responses. You'll have to do it either with embedded perl or with 3rd-party modules.

--
Maxim Dounin
http://nginx.org/en/donation.html

From nginx-forum at nginx.us  Thu Dec  5 14:39:04 2013
From: nginx-forum at nginx.us (mojiz)
Date: Thu, 05 Dec 2013 09:39:04 -0500
Subject: using nginx to cache file serving
In-Reply-To:
References:
Message-ID: <32a8d4a4d4913b1cda0748b058facf13.NginxMailingListEnglish@forum.nginx.org>

Thanks for the answers.
About the memory: the problem is that I'm hitting this on only some of my servers, not all; one of my servers has only 3GB of memory of which 1G is used, but on another 16GB/16GB is used.

The ngx_slowfs_cache module is useful when the server has two types of disks, SATA and SAS, but I have two servers between which I want to do the proxying. I'm looking at the included proxy module, something like this:

proxy_buffers 10240 128k;  # needs 1.2GB memory
proxy_cache_path /dev/sas_disk/cache;
proxy_max_temp_file_size 2048m;

proxy_store is tempting:

proxy_store on;
proxy_temp_path /dev/sas_disk/temp;
root /dev/sas_disk/www;

is this what I'm looking for?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245244,245251#msg-245251

From nginx-forum at nginx.us  Thu Dec  5 14:44:55 2013
From: nginx-forum at nginx.us (cschiewek)
Date: Thu, 05 Dec 2013 09:44:55 -0500
Subject: Problem with Upstream over SSL
Message-ID: <1decead1b3acd1404f95a6b2824d9d3d.NginxMailingListEnglish@forum.nginx.org>

I was proxying to an IIS server on 443 on nginx 1.1 on FreeBSD and it worked perfectly fine. We moved to nginx 1.4 running on Ubuntu and now it won't work.

The following works perfectly:

server {
    location / {
        proxy_pass http://server.domain.com;
    }
}

But when I change it to

server {
    location / {
        proxy_pass https://server.domain.com;
    }
}

it times out. I can curl both http:// and https:// no problem. The strange thing is that the log message with the timeout error is showing the IP instead of the hostname.

2013/12/05 09:30:33 [error] 20109#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.200, server: external.domain.com, request: "GET / HTTP/1.1", upstream: "https://192.168.1.10:443/", host: "external.domain.com"

What I'm guessing is that nginx is trying to proxy to the host via the IP and then timing out because of SSL issues, as the SSL cert is not valid for the IP, only for the domain name. Why is nginx proxying to the IP instead of the hostname?
Can I force it to use the hostname? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245252,245252#msg-245252 From mdounin at mdounin.ru Thu Dec 5 15:03:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Dec 2013 19:03:46 +0400 Subject: location{} hassles:( In-Reply-To: References: Message-ID: <20131205150346.GD95113@mdounin.ru> Hello! On Thu, Dec 05, 2013 at 01:10:53AM +0000, Matthew Ngaha wrote: > Hey all I can't seem to get the hang of how to use/write location > blocks. I have mailed the list before and I do understand how it works > but what I've tried fails. > > I have tried a few things and in both cases i don't think what i'm > doing is making a difference. Here they are: > > http://bpaste.net/show/YXFXLuJ1Uc1UFQDc8ctn/ > > The 1st test was to put location {} nested into the main location {} > that was already in nginx.conf. I put the "deny all;" just to see if > it would find or ignore/ my new location {} but the nested location > test directory/file isn't denying access. > > The 2nd location {} test is below the main one. In the installed > nginx folder it uses html/ as the root, i tried to create a test > "delete" folder as an alternative location to the html root folder but > i can't seem to get it right. My browser returns a "404 Not Found" > when i try to access files in localhost/delete. > > In both cases my added locations {} are being ignored. Any ideas what i can do? Try looking into error log and full config, and make sure you've reloaded configuration after changing it. 
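For example, raising the error log verbosity while testing usually makes the mismatch obvious; a sketch, with the path relative to the default install prefix:

```nginx
error_log  logs/error.log  info;
```

followed by `nginx -t` to check the config and `nginx -s reload` to apply it.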
With the config you've posted, i.e.: location / { root html; index index.html index.htm; location /test/file.html { deny all; } } location /delete { root delete; } and assuming there are no other locations defined, expected results are: request to /foo maps to file /html/foo request to /test/file.html returns 403 request to /delete/foo maps to file /delete/delete/foo See here for documentation: http://nginx.org/r/location http://nginx.org/r/root -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Dec 5 15:36:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Dec 2013 19:36:18 +0400 Subject: Problem with Upstream over SSL In-Reply-To: <1decead1b3acd1404f95a6b2824d9d3d.NginxMailingListEnglish@forum.nginx.org> References: <1decead1b3acd1404f95a6b2824d9d3d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131205153618.GE95113@mdounin.ru> Hello! On Thu, Dec 05, 2013 at 09:44:55AM -0500, cschiewek wrote: > I was proxying to an IIS server on 443 on nginx 1.1 on FreeBSD and it worked > perfectly fine. We moved to nginx 1.4 running on ubuntu and now it won't > work. > > The following works perfect: > > server { > location / { > proxy_pass http://server.domain.com > } > } > > But when I change it to > > server { > location / { > proxy_pass https://server.domain.com > } > } > > It times out. I can curl both http:// and https:// no problem. The strange > thing is the log message with the timeout error is showing the IP instead of > the hostname. > > 2013/12/05 09:30:33 [error] 20109#0: *1 upstream timed out (110: Connection > timed out) while reading response header from upstream, client: > 192.168.1.200, server: external.domain.com, request: "GET / HTTP/1.1", > upstream: "https://192.168.1.10:443/", host: "external.domain.com" > > What I'm guessing is nginx is trying to proxy to the host via the IP and > then timing out because of SSL issues, as the SSL cert is not valid for the > IP, only for the domain name.
The problem indeed may be related to SSL - e.g. something wrong with the ciphers used. But it's certainly not a certificate verification issue, as nginx currently doesn't check upstream server certificates at all. You may try using 1.5.x to play with the proxy_ssl_protocols and proxy_ssl_ciphers directives introduced specifically to help resolve various interoperability problems. > Why is nginx proxying to the IP instead of the hostname? Can I force it to > use the hostname? The IP address of the particular server nginx connects to is logged. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Dec 5 15:51:04 2013 From: nginx-forum at nginx.us (volga629) Date: Thu, 05 Dec 2013 10:51:04 -0500 Subject: Imap proxy Message-ID: Hello Everyone, Does the mail IMAP proxy support SSL or STARTTLS for connections to the backend server? Slava. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245255,245255#msg-245255 From mdounin at mdounin.ru Thu Dec 5 16:17:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Dec 2013 20:17:53 +0400 Subject: Imap proxy In-Reply-To: References: Message-ID: <20131205161753.GF95113@mdounin.ru> Hello! On Thu, Dec 05, 2013 at 10:51:04AM -0500, volga629 wrote: > Hello Everyone, > Does the mail IMAP proxy support SSL or STARTTLS for connections to the backend > server? No. SSL/STARTTLS is only supported for client connections. The backend network is assumed to be non-hostile. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Dec 5 17:02:44 2013 From: nginx-forum at nginx.us (volga629) Date: Thu, 05 Dec 2013 12:02:44 -0500 Subject: Imap proxy In-Reply-To: <20131205161753.GF95113@mdounin.ru> References: <20131205161753.GF95113@mdounin.ru> Message-ID: Hello Maxim, Thank you for the answer. When a user connects to the proxy with SSL, does it get distributed to the backend in clear text? If the final server is a DR server in another part of the world, there are a lot of places to sniff plain-text traffic on port 143.
There is not much sense in using a proxy for services located on the same physical server. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245255,245257#msg-245257 From rabeloo at gmail.com Thu Dec 5 17:09:06 2013 From: rabeloo at gmail.com (Raphael R. O.) Date: Thu, 5 Dec 2013 15:09:06 -0200 Subject: Rewrite URL with parameters Message-ID: Hi guys, I'm trying to rewrite a URL with a few parameters, but unsuccessfully. The URL: https://www.mysite.com.br/category-body/categories/promotionalXXX?utm_source=PromoCode&utm_medium=AddPromo&utm_campaign=PromoCode_AddPromo_moneycampaing I need to rewrite it to: https://www.mysite.com.br/digital-parts/newspaper/recs-xyaw-sazz-qqad-cxae The /promotionalXXX can be /promotional123, /promotional324 and many more ... What I already tried: rewrite ^/category-body/categories/promotional(.*)utm_source=PromoCode&utm_medium=AddPromo&utm_campaign=PromoCode_AddPromo_moneycampaing https://www.mysite.com.br/digital-parts/newspaper/recs-xyaw-sazz-qqad-cxae; and ... location = /category-body/categories/promotional(.*)$ { if ($args ~ "utm_source=PromoCode&utm_medium=AddPromo&utm_campaign=PromoCode_AddPromo_moneycampaing") { rewrite ^/digital-parts/newspaper/recs-xyaw-sazz-qqad-cxae; permanent; } } But none of this worked for me... What's the best way to solve this? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Thu Dec 5 17:24:46 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 5 Dec 2013 17:24:46 +0000 Subject: Rewrite URL with parameters In-Reply-To: References: Message-ID: On 5 December 2013 17:09, Raphael R. O. wrote: > Hi guys, > > I'm trying to rewrite a URL with a few parameters, but unsuccessfully.
You're almost there ;-) > What I already tried: > > rewrite > ^/category-body/categories/promotional(.*)utm_source=PromoCode&utm_medium=AddPromo&utm_campaign=PromoCode_AddPromo_moneycampaing https://www.mysite.com.br/digital-parts/newspaper/recs-xyaw-sazz-qqad-cxae; "rewrite" doesn't examine query strings, so if the contents of your query string are required to drive your logic, you'll need to use other tools instead of (or as well as) rewrite. > and ... > > location = /category-body/categories/promotional(.*)$ { You probably don't want the "=" here. I don't actually know what a regex ("(.*)") location used alongside "=" will do. I'm slightly surprised nginx didn't complain on reload/restart ... Have a read of this section: http://wiki.nginx.org/HttpCoreModule#location You probably want to use a case-sensitive regex location. > if ($args ~ > "utm_source=PromoCode&utm_medium=AddPromo&utm_campaign=PromoCode_AddPromo_moneycampaing") I'd use a map{} variable inside the if(), personally. The map can use the same check as that which you have above, but in a way which abstracts the actual check away from the logic that it drives. This is just a stylistic change however :-) HTH, Jonathan From laurent at tnpl127.net Thu Dec 5 18:10:36 2013 From: laurent at tnpl127.net (Laurent CREPET) Date: Thu, 05 Dec 2013 19:10:36 +0100 Subject: dummy question... Message-ID: Just read this: http://wiki.nginx.org/Pitfalls#Root_inside_Location_Block ...which recommends putting root outside the location block... However, the default.conf installed from the nginx rpm contains exactly what should not be done, right ?
--- location / { root /usr/share/nginx/html; index index.html index.htm; } location = /50x.html { root /usr/share/nginx/html; } #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} --- -- Laurent CREPET From nginx-forum at nginx.us Thu Dec 5 18:52:35 2013 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 05 Dec 2013 13:52:35 -0500 Subject: dummy question... In-Reply-To: References: Message-ID: In some cases you have no choice but to use root inside a location (like different roots for different requests in one block, or a dynamic root for a dynamic request), so when you know what you're doing it's not a bad thing. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245260,245261#msg-245261 From laurent at tnpl127.net Thu Dec 5 18:58:23 2013 From: laurent at tnpl127.net (Laurent CREPET) Date: Thu, 05 Dec 2013 19:58:23 +0100 Subject: dummy question... In-Reply-To: References: Message-ID: <3f4bd5404cbe9ac2adda91c113215534@tnpl127.net> On 2013-12-05 19:52, itpp2012 wrote: > In some cases you have no choice but to use root inside a location (like > different roots for different requests in one block or a dynamic root > for a > dynamic request), so when you know what you're doing it's not a bad > thing. > Yes, I understand, but to remove confusion, the default file could have: - a global root outside the location blocks - some specific root in location blocks, with a description (eg: using a specific root) -- Laurent CREPET From francis at daoine.org Thu Dec 5 19:09:35 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 5 Dec 2013 19:09:35 +0000 Subject: Url rewriting faliure in case of UTF8 urls In-Reply-To: References: <20131204204837.GA517@craic.sysops.org> Message-ID: <20131205190935.GF517@craic.sysops.org> On Wed, Dec 04, 2013 at 07:42:17PM -0500, omidr wrote: Hi there, > As an example I want to redirect "/???????" to "/??????? ???????"
and I am > using a rewrite rule like the one below but things go wrong and I get a > 404. Can I suggest you try to simplify the test case, so that you can see what exactly is happening? Use "rewrite A B permanent;", and if the url matches A you will see a redirection to B in the http response headers; and if it does not match A you will not see a redirection to B. That way you will know whether the request that you made matched this rewrite or not. What matters to nginx are the actual bytes written inside the config file, and the actual bytes received in the request. > rewrite_rule: rewrite ^/???????/$ /??????? ???????; So, make that be (say) rewrite ^/??$ /match-?? permanent; and then do something like grep permanent nginx.conf | xxd so that you can see the bytes that are in the file on that line. Then issue your single request for /??, and watch the http response -- do you get the permanent redirect, or something else? It is probably worth watching the network traffic using tcpdump, or some other means, so that you can see what bytes are sent by the browser. If the rewrite doesn't match when you think it should, the tcpdump output might give an indication of why that was. > And what do you mean by config? Do you mean nginx settings or configurations > at compile time? I mean "enough information so that it is easy for me (or anyone else) to do what you are doing so as to be able to see the problem that you are reporting". I first tested using a line rewrite ^/omíd/$ /omídreza permanent; where the character between "om" and "d" is the two bytes c3 ad, which is the utf-8 representation of "LATIN SMALL LETTER I WITH ACUTE" I tested using "curl -i" asking for /omíd/ and also /om%C3%ADd/. I saw the expected redirection. So nginx is correctly handling a rewrite of a UTF8 url. Can you test with that same thing, and see if your nginx responds differently?
If it does respond differently, then there is something significantly different between my nginx and your nginx, so the "nginx -V" output will probably matter. f -- Francis Daly francis at daoine.org From alex.koch007 at outlook.com Thu Dec 5 23:25:48 2013 From: alex.koch007 at outlook.com (Alex Koch) Date: Fri, 6 Dec 2013 00:25:48 +0100 Subject: =?UTF-8?Q?RE=3A_NGINX_Module_-_create_variables=E2=80=8F?= In-Reply-To: References: Message-ID: Hello, I would appreciate any hints. Many thanks, Alex From: alex.koch007 at outlook.com To: nginx at nginx.org Subject: NGINX Module - create variables? Date: Tue, 3 Dec 2013 18:55:52 +0100 Hi, I would like to create a small module which executes some routines and returns NGX_OK, somewhat similar in concept to http://blog.zhuzhaoyuan.com/2009/08/creating-a-hello-world-nginx-module/ However, once the module executes, I would like it to create variables such as $my_var which would be accessible via config files to other blocks or to the same location block. I am aware of the "register_variable" option, with "ngx_http_variable_t *var, *v;" - but the examples I have seen so far execute the module only once this variable is loaded in the config file. What I would like is being able to define a couple of config variables once my module is loaded. Is this possible at all? If so, could you point me to a sample/module which does this so I can learn from it? Many thanks, Alex _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bvs7085 at gmail.com Thu Dec 5 23:29:41 2013 From: bvs7085 at gmail.com (Brad Van Sickle) Date: Thu, 05 Dec 2013 18:29:41 -0500 Subject: NGINX Location Matching Question - Case insensitive matching at the start of a URI In-Reply-To: <52A1055E.7090605@gmail.com> References: <52A1055E.7090605@gmail.com> Message-ID: <52A10C65.2020509@gmail.com> Hi, I'm running into a somewhat urgent issue where I have to make a case-insensitive location match on a URI segment that is at the beginning of the URI. So I want to match /uri-segment/* /URI-segment/* /Uri-Segment/* etc.. I know that this will match the start of the URI: location ^~ /uri-segment and this will match regardless of case: location ~* /uri-segment(.*) But chaining them together and using ^~* doesn't seem to work. Is this possible? If so what am I missing? Thanks From contact at jpluscplusm.com Thu Dec 5 23:40:16 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 5 Dec 2013 23:40:16 +0000 Subject: NGINX Location Matching Question - Case insensitive matching at the start of a URI In-Reply-To: <52A10C65.2020509@gmail.com> References: <52A1055E.7090605@gmail.com> <52A10C65.2020509@gmail.com> Message-ID: On 5 December 2013 23:29, Brad Van Sickle wrote: > > Hi, > > I'm running into a somewhat urgent issue where I have to make a > case-insensitive location match on a URI segment that is at the > beginning of the URI. > > so I want to match > > /uri-segment/* > /URI-segment/* > /Uri-Segment/* > etc.. > > I know that this will match the start of the URI: > location ^~ /uri-segment It will, but that's not what the regex-alike "^" means there. There, it means "... and if this /does/ match, stop immediately and don't consider any regex matches". See http://wiki.nginx.org/HttpCoreModule#location for more information. (I personally consider this a really bad choice of character by nginx - it /always/ confuses me!)
> and this will match regardless of case: > location ~* /uri-segment(.*) > > But chaining them together and using ^~* doesn't seem to work. Is this > possible? If so what am I missing? Seeing how you said this is urgent, I'll not do what I /normally/ do and just give you pointers to the docs that'll help you :-) You want location ~* ^/uri-segment(.*) { } HTH, Jonathan From mdounin at mdounin.ru Thu Dec 5 23:49:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Dec 2013 03:49:17 +0400 Subject: Imap proxy In-Reply-To: References: <20131205161753.GF95113@mdounin.ru> Message-ID: <20131205234917.GG95113@mdounin.ru> Hello! On Thu, Dec 05, 2013 at 12:02:44PM -0500, volga629 wrote: > Hello Maxim, > Thank you for the answer. > When a user connects to the proxy with SSL, does it get distributed to the backend in clear > text? If the final server is a DR server in another part of the world, there > are a lot of places to sniff plain-text traffic on port 143. There is not much sense in using a > proxy for services located on the same physical server. The imap proxy is to route clients to different backend servers in a big farm, typically sitting on the same non-hostile network as the proxy. If for some reason you are using backends in another part of the world over the public internet and want traffic to be encrypted, you may use a VPN or a secure tunnel for this. -- Maxim Dounin http://nginx.org/en/donation.html From bvs7085 at gmail.com Thu Dec 5 23:50:28 2013 From: bvs7085 at gmail.com (Brad Van Sickle) Date: Thu, 05 Dec 2013 18:50:28 -0500 Subject: NGINX Location Matching Question - Case insensitive matching at the start of a URI In-Reply-To: References: <52A1055E.7090605@gmail.com> <52A10C65.2020509@gmail.com> Message-ID: <52A11144.10900@gmail.com> Thanks for the quick response! Looks like that is matching how I intend... however it (of course) revealed another issue in my app layer :) But at least the NGINX config seems correct now. Thanks!
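[An editorial aside for the archives: Jonathan's suggestion can be written out as a minimal, untested server block. The /uri-segment path is the original poster's placeholder, and the return directive is only there to make the match visible while testing:]

```nginx
server {
    listen 80;

    # Case-insensitive regex match anchored at the start of the URI,
    # so /uri-segment/..., /URI-Segment/... etc. all land here.
    location ~* ^/uri-segment(.*) {
        # $1 captures whatever follows the matched prefix.
        return 200 "matched, remainder: $1\n";
    }
}
```

[A prefix location with the ^~ modifier cannot be made case-insensitive; anchoring a ~* regex with ^ is the usual way to combine "start of URI" with "ignore case".]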
On 12/5/2013 6:40 PM, Jonathan Matthews wrote: > On 5 December 2013 23:29, Brad Van Sickle wrote: >> Hi, >> >> I'm running in a somewhat urgent issue where I have to make a >> case-insensitive location match on a URI segment that is at the >> beginning of the URI. >> >> so I want to match >> >> /uri-segment/* >> /URI-segment/* >> /Uri-Segment/* >> etc.. >> >> I know that this will match the start of the URI: >> location ^~ /uri-segment > It will, but that's not what the regex-alike "^" means there. There, > it means "... and if this /does/ match, stop immediately and don't > consider any regex matches". See > http://wiki.nginx.org/HttpCoreModule#location for more information. (I > personally consider this a really bad choice of character by nginx - > it /always/ confuses me!) > >> and this will match regardless of case: >> location ~* /uri-segment(.*) >> >> But chaining them together and using ^~*doesn't seem to work. Is this >> possible? If so what am I missing? > Seeing how you said this is urgent, I'll not do what I /normally/ do > and just give you pointers to the docs that'll help you :-) > > You want > > location ~* ^/uri-segment(.*) { } > > HTH, > Jonathan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Fri Dec 6 00:13:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Dec 2013 04:13:53 +0400 Subject: NGINX Module - create variables? In-Reply-To: References: Message-ID: <20131206001353.GI95113@mdounin.ru> Hello! On Fri, Dec 06, 2013 at 12:25:48AM +0100, Alex Koch wrote: [...] > What I would like is being able to define a couple > of config variables once my module is loaded. Is this possible at all? > If so, could you point me to a sample/module which does this so I can > learn from it? You may try looking into the stub status module, src/http/modules/ngx_http_stub_status_module.c. 
It defines several variables ($connections_active, $connections_reading, $connections_writing, $connections_waiting) and it's easy enough to follow. http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_stub_status_module.c -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Dec 6 08:46:09 2013 From: nginx-forum at nginx.us (r004) Date: Fri, 06 Dec 2013 03:46:09 -0500 Subject: Nginx location, return and rewrite directives Message-ID: I am having a hard time learning to write return and rewrite directives. The conditions are fuzzy for me. I am trying to learn by example, so I made one up. In the htdocs of my server we have these files and folders: ~:#>ls htdocs admin.php api (dir) api.php archiver (dir) config (dir) connect.php cp.php crossdomain.xml data (dir) favicon.ico forum.php group.php home.php index.php install (dir) member.php misc.php plugin.php portal.php robots.txt search.php source (dir) static (dir) template (dir) uc_client (dir) uc_server (dir) userapp.php NOW, let's just focus on uc_server: ~:#> ls /htdocs/uc_server admin.php api (dir) avatar.php control (dir) crossdomain.xml data (dir) images (dir) index.php install (dir) js (dir) lib (dir) model (dir) plugin (dir) release (dir) robots.txt view (dir) A.) So for what is happening in uc_server, I think we are better off creating a location block like location /uc_server/ { } and doing our conditional directives in there, right? B.) I want these to happen: http://DOMAIN.DOM/uc_server/admin.php/uc_server/ ===changes to===> http://DOMAIN.DOM/uc_server/ AND if there is ...../install/index.php/.... (the "/install/index.php/" pattern somewhere in the link) in any requested links, the "/install/index.php/" part would be omitted and the link would be processed without that part.
THANKS A LOT Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245275,245275#msg-245275 From multiformeingegno at gmail.com Fri Dec 6 10:54:08 2013 From: multiformeingegno at gmail.com (Lorenzo Raffio) Date: Fri, 06 Dec 2013 02:54:08 -0800 (PST) Subject: Force ppt(x) and pps(x) to be downloaded instead of being served as plain text Message-ID: <1386327248213.3bb5f6ee@Nodemailer> Nginx 1.5.7, ppt(x) and pps(x) are served as plain text (of course with strange characters) by nginx. How can I have them always as a download? -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Fri Dec 6 11:09:06 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 6 Dec 2013 11:09:06 +0000 Subject: Force ppt(x) and pps(x) to be downloaded instead of being served as plain text In-Reply-To: <1386327248213.3bb5f6ee@Nodemailer> References: <1386327248213.3bb5f6ee@Nodemailer> Message-ID: On 6 December 2013 10:54, Lorenzo Raffio wrote: > Nginx 1.5.7, ppt(x) and pps(x) are served as plain text (of course with > strange characters) by nginx. > > How can I have them always as a download? Probably by setting the appropriate MIME type up: http://wiki.nginx.org/HttpCoreModule#types From ewgraf at gmail.com Fri Dec 6 11:12:49 2013 From: ewgraf at gmail.com (Sokolov Evgeniy) Date: Fri, 6 Dec 2013 12:12:49 +0100 Subject: Force ppt(x) and pps(x) to be downloaded instead of being served as plain text In-Reply-To: <1386327248213.3bb5f6ee@Nodemailer> References: <1386327248213.3bb5f6ee@Nodemailer> Message-ID: For ppt files it should work, check your mime.types nginx config: ./mime.types: application/vnd.ms-powerpoint ppt; For other formats you must add something similar 2013/12/6 Lorenzo Raffio : > Nginx 1.5.7, ppt(x) and pps(x) are served as plain text (of course with > strange characters) by nginx. > > How can I have them always as a download?
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- From citrin at citrin.ru Fri Dec 6 11:13:00 2013 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Fri, 06 Dec 2013 15:13:00 +0400 Subject: Force ppt(x) and pps(x) to be downloaded instead of being served as plain text In-Reply-To: References: <1386327248213.3bb5f6ee@Nodemailer> Message-ID: <52A1B13C.7060300@citrin.ru> On 12/06/13 15:09, Jonathan Matthews wrote: >> >How can I have them always as a download? > Probably by setting the appropriate MIME type up: > http://wiki.nginx.org/HttpCoreModule#types Or by adding (in location or server) default_type application/octet-stream; http://nginx.org/r/default_type From mdounin at mdounin.ru Fri Dec 6 12:10:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Dec 2013 16:10:37 +0400 Subject: Force ppt(x) and pps(x) to be downloaded instead of being served as plain text In-Reply-To: References: <1386327248213.3bb5f6ee@Nodemailer> Message-ID: <20131206121037.GK95113@mdounin.ru> Hello! On Fri, Dec 06, 2013 at 12:12:49PM +0100, Sokolov Evgeniy wrote: > For ppt files it should work, check your mime.types nginx config: > > ./mime.types: application/vnd.ms-powerpoint ppt; > > For other formats you must add something similar The mime.types in 1.5.7 contains pptx as well, see here: http://hg.nginx.org/nginx/file/tip/conf/mime.types#l67 Most likely, the mime.types file is not included in the configuration. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Dec 6 15:57:50 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 6 Dec 2013 19:57:50 +0400 Subject: Cross Compiling nginx-1.0.11 In-Reply-To: <2d2e5b8f-9dbc-4b25-b12b-026576c134bd@googlegroups.com> References: <20120119105842.GG67687@mdounin.ru> <2d2e5b8f-9dbc-4b25-b12b-026576c134bd@googlegroups.com> Message-ID: <20131206155750.GR95113@mdounin.ru> Hello!
On Fri, Dec 06, 2013 at 06:27:52AM -0800, Shenchen foundshany wrote: > hi, > do you have a plan to support cross-compilation of nginx? How to do it, > thanks There are no plans to support cross-compilation. -- Maxim Dounin http://nginx.org/en/donation.html From www at lc365.net Fri Dec 6 16:33:47 2013 From: www at lc365.net (=?gb2312?B?ufnV8cGi?=) Date: Sat, 7 Dec 2013 00:33:47 +0800 Subject: [ANN] Windows nginx 1.5.8.1 Caterpillar In-Reply-To: <38a18e43ff69ce5100ad6773525afe56.NginxMailingListEnglish@forum.nginx.org> References: , <38a18e43ff69ce5100ad6773525afe56.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, > A new beta you can try, nginx 1.5.8.2 Caterpillar BETA2.zip > + Fix for nginx -t 'Assertion failed' issue > + HttpSubsModule > * The debug version is no longer needed, Intel static profiler data is used > nginx crash info/logging or event dump info is all that is needed > * Intel static profiler "the need for speed" compiler optimization > Great job, nginx -t works well now. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Dec 6 20:53:24 2013 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 06 Dec 2013 15:53:24 -0500 Subject: [ANN] Windows nginx 1.5.8.2 Caterpillar Message-ID: <69930f3d7640bb34363592f979529b12.NginxMailingListEnglish@forum.nginx.org> 'Keep Up' Windows nginx fan base :) Over 2,000 downloads, lots of Beta feedback, we ain't done yet, here's the latest and greatest version.
15:34 6-12-2013: nginx 1.5.8.2 Caterpillar Based on nginx 1.5.8 (5-12-2013) with; + Fix for nginx -t 'Assertion failed' issue + HttpSubsModule (https://github.com/yaoweibin/ngx_http_substitutions_filter_module) + echo-nginx-module (https://github.com/agentzh/echo-nginx-module) + ngx_http_lower_upper_case (https://github.com/replay/ngx_http_lower_upper_case) + Naxsi WAF (Web Application Firewall) v0.53-1 (upgraded 5-12-2013) + lua-nginx-module v0.9.2 (upgraded 6-12) + Streaming with nginx-rtmp-module, v1.0.8 (upgraded 6-12) + Source changes back ported + Source changes add-on's back ported * The debug version is no longer needed, Intel static profiler data is used nginx crash info/logging or event dump info is all that is needed * Intel static profiler "the need for speed" compiler optimization * Additional specifications are like 19:18 30-11-2013: nginx 1.5.8.1 Caterpillar Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245298,245298#msg-245298 From ianevans at digitalhit.com Sat Dec 7 13:33:03 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Sat, 07 Dec 2013 08:33:03 -0500 Subject: Maintenance mode for all but my ip Message-ID: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> Getting ready to convert the site to UTF-8 (finally!) and wanted to know how I could issue error code 503 to all people and bots but still allow my IP in so I can go 'round the site checking for glitches due to the change. Right now I have this implementation for 503's but that issues the error code for everyone including me. I know about allow/deny but I'm not sure how it fits into this mechanism. if (-f $document_root/maintenance.html) { return 503; } error_page 503 @503; location @503 { rewrite ^(.*)$ /maintenance.html break; } From reallfqq-nginx at yahoo.fr Sat Dec 7 14:58:43 2013 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Sat, 7 Dec 2013 15:58:43 +0100 Subject: Maintenance mode for all but my ip In-Reply-To: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> References: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> Message-ID: I am new to the use of maps, but I suppose it would fit perfectly, using core variables such as the binary IP address: Maybe something like: server { error_page 503 /503.html # Configuring error page map $binary_remote_addr $target { # Configuring white-listed IP addresses default KO your_whitelisted_binary_IP_address_value OK } rewrite ^.*$ $target #Redirecting all traffic according to map-assigned value location @OK { # Named location to do nothing, i.e. serve content as usual } location @KO { # Named location to trap maintenance traffic, spawning a HTTP 503 error return 503; } } Untested, thus unsure, but I'd seek something looking like this. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulnpace at gmail.com Sat Dec 7 15:07:08 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Sat, 7 Dec 2013 15:07:08 +0000 Subject: Maintenance mode for all but my ip Message-ID: <1186693290-1386428828-cardhu_decombobulator_blackberry.rim.net-343425416-@b28.c8.bise6.blackberry> Did you try putting 'allow ;' above 'return...' line in if block? From contact at jpluscplusm.com Sat Dec 7 15:10:08 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 7 Dec 2013 15:10:08 +0000 Subject: Maintenance mode for all but my ip In-Reply-To: References: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> Message-ID: On 7 December 2013 14:58, B.R. wrote: > I am new to the use of maps, but I suppose it would fit perfectly, using > core variables such as the binary IP address: [snip] > rewrite ^.*$ $target #Redirecting all traffic according to map-assigned I don't particularly like ^^^ this. 
It seems like a level of indirection too far ;-) Using /almost/ the same technique, you could have a map based on the remote IP like BR suggested, but use it like this: http { map $remote_addr $not_me { default 1; my.ip.address ""; } server { # whatever else you need location / { if ($not_me) { return 503; } } } } Cheers, Jonathan From reallfqq-nginx at yahoo.fr Sat Dec 7 15:19:03 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 7 Dec 2013 16:19:03 +0100 Subject: Maintenance mode for all but my ip In-Reply-To: References: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> Message-ID: Hello, On Sat, Dec 7, 2013 at 4:10 PM, Jonathan Matthews wrote: > > rewrite ^.*$ $target #Redirecting all traffic according to > map-assigned > > I don't particularly like ^^^ this. It seems like a level of > indirection too far ;-) To me, your solution looks doubly evil: 1) Using an unneeded 'if' directive 2) Needing to modify each and every location block. Going that way, you could use Paul's much simpler solution, denying all blocks to everyone but the allowed IP address, that is, copy-pasting 'deny' and 'allow' directives everywhere... I was trying to think about something more scalable, self-contained and generic. ;o) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianevans at digitalhit.com Sat Dec 7 17:31:35 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Sat, 07 Dec 2013 12:31:35 -0500 Subject: Maintenance mode for all but my ip In-Reply-To: References: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> Message-ID: <40ede7ab73fc46b22ec62647c5f0019b@digitalhit.com> On 2013-12-07 09:58, B.R. wrote: > I am new to the use of maps, but I suppose it would fit perfectly, > using core variables such as the binary IP address: > Maybe something like: > > server { > error_page 503 /503.html # Configuring error page > > map $binary_remote_addr $target { # Configuring white-listed IP > addresses >
> default KO > your_whitelisted_binary_IP_address_value OK > } > > rewrite ^.*$ $target #Redirecting all traffic according to > map-assigned value > > location @OK { # Named location to do nothing, i.e. serve > content as usual > } > > location @KO { # Named location to trap maintenance traffic, > spawning a HTTP 503 error > return 503; > } > } > > Untested, thus unsure, but I'd seek something looking like this. Thanks. I'll give this a spin. Is there any way to still trigger the mapping based on the existence of a maintenance.whatever file? Just thinking of the ease of quickly touch'ing the maintenance file to trigger the mapping as opposed to fiddling with the conf and reloading each time you want to do some quick testing. From reallfqq-nginx at yahoo.fr Sat Dec 7 20:31:03 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 7 Dec 2013 21:31:03 +0100 Subject: Maintenance mode for all but my ip In-Reply-To: <40ede7ab73fc46b22ec62647c5f0019b@digitalhit.com> References: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> <40ede7ab73fc46b22ec62647c5f0019b@digitalhit.com> Message-ID: Hello, On Sat, Dec 7, 2013 at 6:31 PM, Ian Evans wrote: > Thanks. I'll give this a spin. Is there any way to still trigger the > mapping based on the existence of a maintenance.whatever file? Just > thinking of the ease of quickly touch'ing the maintenance file to trigger > the mapping as opposed to fiddling with the conf and reloading each time > you want to do some quick testing. By giving it 30 seconds of intensive thinking, I am sure you could figure this out by yourself, considering you already provided the if (-f ***) {...} trick. Since the 'if' directive (http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if) works in server blocks, you could enclose the rewrite rule in it to only apply 'maintenance mode' based on the presence (or the absence?) of a specific file.
Please come back to tell us if it worked as intended/expected... or what had to be corrected/added! :o) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianevans at digitalhit.com Sat Dec 7 20:37:11 2013 From: ianevans at digitalhit.com (Ian Evans) Date: Sat, 07 Dec 2013 15:37:11 -0500 Subject: Maintenance mode for all but my ip In-Reply-To: References: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> <40ede7ab73fc46b22ec62647c5f0019b@digitalhit.com> Message-ID: <96d4825548f6bbd2173e8b6eb73517ca@digitalhit.com> On 2013-12-07 15:31, B.R. wrote: > Hello, > > On Sat, Dec 7, 2013 at 6:31 PM, Ian Evans wrote: > >> Thanks. I'll give this a spin. Is there any way to still trigger the >> mapping based on the existence of a maintenance.whatever file? Just >> thinking of the ease of quickly touch'ing the maintenance file to >> trigger the mapping as opposed to fiddling with the conf and >> reloading each time you want to do some quick testing. > > By giving it 30 seconds of intensive thinking, I am sure you could > figure this out by yourself, considering you already provided the if > (-f ***) {...} trick. > > Since the 'if' directive > (http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if) > works in server blocks, you could enclose the rewrite rule > in it to only apply 'maintenance mode' based on the presence (or the > absence?) of a specific file. Sorry...too many late nights during this migration/upgrade. So I can just place the "rewrite ^.*$ $target" into my 'if'. I'll give it a whirl as soon as I get some caffeine. :-) From contact at jpluscplusm.com Sat Dec 7 21:14:25 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 7 Dec 2013 21:14:25 +0000 Subject: Maintenance mode for all but my ip In-Reply-To: References: <90b18a68be354dabbbe20cbe087a671a@digitalhit.com> Message-ID: On 7 December 2013 15:19, B.R.
wrote: > Hello, > > On Sat, Dec 7, 2013 at 4:10 PM, Jonathan Matthews > wrote: >> >> > rewrite ^.*$ $target #Redirecting all traffic according to >> > map-assigned >> >> I don't particularly like ^^^ this. It seems like a level of >> indirection too far ;-) > > To me, your solution looks double evil: > 1) Using an unneeded 'if' directive Using "if" is absolutely fine if all you're doing is issuing a return directly from it. It's only when you try to do things /after/ an "if" successfully matches that it's considered evil ... > 2) Needs modifying each and every location block. I didn't really pay attention to the scoping of the return. Moving the return to the most appropriate place in the config shouldn't be too tricky for the OP to figure out ... > I was trying to think about something more scalable, self-contained and > generic. ;o) What I suggested would work, AFAICT, with a single map, and a per-server{} "if($not_me){return 503;}". That's hardly onerous, I'd say ... YMMV, Jonathan From nginx-forum at nginx.us Sat Dec 7 21:39:51 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 07 Dec 2013 16:39:51 -0500 Subject: Maintenance mode for all but my ip In-Reply-To: References: Message-ID: Full working config; http://www.cyberciti.biz/faq/custom-nginx-maintenance-page-with-http503/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245306,245316#msg-245316 From reallfqq-nginx at yahoo.fr Sat Dec 7 22:15:32 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 7 Dec 2013 23:15:32 +0100 Subject: Maintenance mode for all but my ip In-Reply-To: References: Message-ID: On Sat, Dec 7, 2013 at 10:39 PM, itpp2012 wrote: > Full working config; > http://www.cyberciti.biz/faq/custom-nginx-maintenance-page-with-http503/ > Thanks for replying after having carefully read what is asked for by Ian and not giving a too quick answer copy-pasted from Google --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From noloader at gmail.com Sun Dec 8 05:46:45 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Sun, 8 Dec 2013 00:46:45 -0500 Subject: nginx module developer documentation? Message-ID: Hi All, I'm interested in exploring nginx as the basis for a proxy. I'm having trouble locating reading material on nginx from a development perspective. I found [0,1], but it looks like its for administrators. I also found [2], but it looks like its interpreted. For performance reasons, I'd prefer to use C/C++. Finally, Amazon does not list any books related to development. Would anyone know of a few good references for a C/C++ developer? Thanks in advance. Jeffrey Walton Baltimore, MD, US [0] http://nginx.org/en/docs/http/ngx_http_proxy_module.html [1] http://wiki.nginx.org/3rdPartyModules [2] http://wiki.nginx.org/HttpEchoModule From vl at nginx.com Sun Dec 8 08:14:38 2013 From: vl at nginx.com (Homutov Vladimir) Date: Sun, 8 Dec 2013 12:14:38 +0400 Subject: nginx module developer documentation? In-Reply-To: References: Message-ID: <20131208081438.GA17631@vl> On Sun, Dec 08, 2013 at 12:46:45AM -0500, Jeffrey Walton wrote: > Hi All, > > I'm interested in exploring nginx as the basis for a proxy. > > I'm having trouble locating reading material on nginx from a > development perspective. I found [0,1], but it looks like its for > administrators. I also found [2], but it looks like its interpreted. > For performance reasons, I'd prefer to use C/C++. Finally, Amazon does > not list any books related to development. > > Would anyone know of a few good references for a C/C++ developer? > > Thanks in advance. 
> > Jeffrey Walton > Baltimore, MD, US > > [0] http://nginx.org/en/docs/http/ngx_http_proxy_module.html > [1] http://wiki.nginx.org/3rdPartyModules > [2] http://wiki.nginx.org/HttpEchoModule Take a look here: http://nginx.org/en/links.html From noloader at gmail.com Sun Dec 8 08:24:49 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Sun, 8 Dec 2013 03:24:49 -0500 Subject: nginx module developer documentation? In-Reply-To: <20131208081438.GA17631@vl> References: <20131208081438.GA17631@vl> Message-ID: On Sun, Dec 8, 2013 at 3:14 AM, Homutov Vladimir wrote: > On Sun, Dec 08, 2013 at 12:46:45AM -0500, Jeffrey Walton wrote: >> Hi All, >> >> I'm interested in exploring nginx as the basis for a proxy. >> >> I'm having trouble locating reading material on nginx from a >> development perspective. I found [0,1], but it looks like its for >> administrators. I also found [2], but it looks like its interpreted. >> For performance reasons, I'd prefer to use C/C++. Finally, Amazon does >> not list any books related to development. >> >> Would anyone know of a few good references for a C/C++ developer? >> >> ... > > Take a look here: > > http://nginx.org/en/links.html perfect, thanks. From noloader at gmail.com Sun Dec 8 11:18:40 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Sun, 8 Dec 2013 06:18:40 -0500 Subject: Configure, make and self tests? Message-ID: `make check` and `make test` results in "no rule to make target". Does nginx include any self test? If so, how does on run them? Thanks in advance. From mdounin at mdounin.ru Sun Dec 8 11:21:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 8 Dec 2013 15:21:30 +0400 Subject: Configure, make and self tests? In-Reply-To: References: Message-ID: <20131208112129.GW95113@mdounin.ru> Hello! On Sun, Dec 08, 2013 at 06:18:40AM -0500, Jeffrey Walton wrote: > `make check` and `make test` results in "no rule to make target". > > Does nginx include any self test? If so, how does on run them? 
As of now, tests are available as a separate repository, see here: http://hg.nginx.org/nginx-tests -- Maxim Dounin http://nginx.org/en/donation.html From noloader at gmail.com Sun Dec 8 12:43:40 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Sun, 8 Dec 2013 07:43:40 -0500 Subject: Configure, make and self tests? In-Reply-To: <20131208112129.GW95113@mdounin.ru> References: <20131208112129.GW95113@mdounin.ru> Message-ID: On Sun, Dec 8, 2013 at 6:21 AM, Maxim Dounin wrote: > Hello! > > On Sun, Dec 08, 2013 at 06:18:40AM -0500, Jeffrey Walton wrote: > >> `make check` and `make test` results in "no rule to make target". >> >> Does nginx include any self test? If so, how does on run them? > > As of now, tests are available as a separate repository, see here: > > http://hg.nginx.org/nginx-tests perfect, thanks. (almost perfect - mercurial.selenic.com/wiki/Tutorial? is down from my location). Jeff From alex.koch007 at outlook.com Sun Dec 8 19:05:40 2013 From: alex.koch007 at outlook.com (Alex Koch) Date: Sun, 8 Dec 2013 20:05:40 +0100 Subject: NGINX Module - create variables? In-Reply-To: <20131206001353.GI95113@mdounin.ru> References: , , <20131206001353.GI95113@mdounin.ru> Message-ID: Great. Thanks! This was in fact helpful. Alex > Date: Fri, 6 Dec 2013 04:13:53 +0400 > From: mdounin at mdounin.ru > To: nginx at nginx.org > Subject: Re: NGINX Module - create variables? > > Hello! > > On Fri, Dec 06, 2013 at 12:25:48AM +0100, Alex Koch wrote: > > [...] > > > What I would like is being able to define a couple > > of config variables once my module is loaded. Is this possible at all? > > If so, could you point me to a sample/module which does this so I can > > learn from it? > > You may try looking into the stub status module, > src/http/modules/ngx_http_stub_status_module.c. It defines > several variables ($connections_active, $connections_reading, > $connections_writing, $connections_waiting) and it's easy enough > to follow. 
> > http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_stub_status_module.c > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lordnynex at gmail.com Sun Dec 8 23:19:56 2013 From: lordnynex at gmail.com (Lord Nynex) Date: Sun, 8 Dec 2013 15:19:56 -0800 Subject: ngx_http_limit_req_module questions Message-ID: Hello, I have a requirement to rate limit requests to one of my customer facing API's. At present Nginx is a proxy point directing traffic to network internal servers based on endpoint URL. I am interested in integrating more tightly with Nginx to do this rate limiting before the traffic is passed to my upstream resources. I'm in research phases and theres a lot of moving pieces to the project, so in the interest of clarity I've tried to organize the below into sensible lists. Please let me know if if I'm not providing enough detail. Implementation specific limitations: - Our user base traffic tends to originate from networks where NAT is heavily used. Unfortunately, rate limiting by IP address would generate massive amounts of false positives as a result. - Our API is not 'open' and requires a successful authentication handshake (Oauth) to continue. Further requests utilize an auth token in headers to maintain session state. Auth tokens are alpha numeric strings with a length of 64 characters. - High Traffic! (30k+ req/sec) Questions: - Is it feasible to do rate limiting based on an auth token? - Is it feasible to insert strings of this length as keys into the zone? - Is the zone an in memory 'object' (for lack of a better word)? - Is there a performance drawback for create one large in memory zone that is GB as opposed to MB? - How long do keys live in the zone? 
If I set a 1+ GB zone file, what happens if our aggregate request volume bursts and the zone runs out of storage space? There is a sentence in the documentation I find frightening, "If the zone storage is exhausted, the server will return the 503 (Service Temporarily Unavailable) error to all further requests." ( http://nginx.org/en/docs/http/ngx_http_limit_req_module.html) - Are there better alternatives? Thank You -------------- next part -------------- An HTML attachment was scrubbed... URL: From noloader at gmail.com Mon Dec 9 01:54:42 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Sun, 8 Dec 2013 20:54:42 -0500 Subject: ngx_ssl_dhparam and dh1024_p Message-ID: Hi All, ngx_event_openssl.c hs the following around line 535: ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file) { ... /* * -----BEGIN DH PARAMETERS----- * MIGHAoGBALu8LcrYRnSQfEP89YDpz9vZWKP1aLQtSwju1OsPs1BMbAMCducQgAxc * y7qokiYUxb7spWWl/fHSh6K8BJvmd4Bg6RqSp1fjBI9osHb302zI8pul34HcLKcl * 7OZicMyaUDXYzs7vnqAnSmOrHlj6/UmI0PZdFGdX2gcd8EXP4WubAgEC * -----END DH PARAMETERS----- */ static unsigned char dh1024_p[] = { 0xBB, 0xBC, 0x2D, 0xCA, 0xD8, 0x46, 0x74, 0x90, 0x7C, 0x43, 0xFC, 0xF5, 0x80, 0xE9, 0xCF, 0xDB, 0xD9, 0x58, 0xA3, 0xF5, 0x68, 0xB4, 0x2D, 0x4B, 0x08, 0xEE, 0xD4, 0xEB, 0x0F, 0xB3, 0x50, 0x4C, 0x6C, 0x03, 0x02, 0x76, 0xE7, 0x10, 0x80, 0x0C, 0x5C, 0xCB, 0xBA, 0xA8, 0x92, 0x26, 0x14, 0xC5, 0xBE, 0xEC, 0xA5, 0x65, 0xA5, 0xFD, 0xF1, 0xD2, 0x87, 0xA2, 0xBC, 0x04, 0x9B, 0xE6, 0x77, 0x80, 0x60, 0xE9, 0x1A, 0x92, 0xA7, 0x57, 0xE3, 0x04, 0x8F, 0x68, 0xB0, 0x76, 0xF7, 0xD3, 0x6C, 0xC8, 0xF2, 0x9B, 0xA5, 0xDF, 0x81, 0xDC, 0x2C, 0xA7, 0x25, 0xEC, 0xE6, 0x62, 0x70, 0xCC, 0x9A, 0x50, 0x35, 0xD8, 0xCE, 0xCE, 0xEF, 0x9E, 0xA0, 0x27, 0x4A, 0x63, 0xAB, 0x1E, 0x58, 0xFA, 0xFD, 0x49, 0x88, 0xD0, 0xF6, 0x5D, 0x14, 0x67, 0x57, 0xDA, 0x07, 0x1D, 0xF0, 0x45, 0xCF, 0xE1, 0x6B, 0x9B }; ... 
Searching on the web for the strings ("MIGHAoGBALu8Lcr", "0xBB, 0xBC, 0x2D, 0xCA, 0xD8, 0x46, 0x74, 0x90" and "bbbc2dcad8467490") returned hits for nginx (but no hits in a standard somewhere). Would anyone happen to know where that prime and generator came from? Does anyone know the subgroup order (or what is the q)? Is q at least 160-bits (or 2k, where k is 80-bits for the security level offered in the 1024-bit DH prime)? Thanks in advance. Jeff From nginx-forum at nginx.us Mon Dec 9 08:03:26 2013 From: nginx-forum at nginx.us (ivanp) Date: Mon, 09 Dec 2013 03:03:26 -0500 Subject: Nginx FastCGI cache for vBulletin Message-ID: <7408bd1dd95bea049318a1cc6e1a0244.NginxMailingListEnglish@forum.nginx.org> Hi, Did somebody manage to implement Nginx FastCGI cache for vBulletin 4? I've read several similar links about it: http://www.vbulletin.com/forum/forum/general/server-configuration/259952-nginx-the-raise-of-a-new-giant?p=3408500#post3408500 http://www.vbulletin.com/forum/blogs/ibxanders/3935822- http://www.vbulletin.com/forum/blogs/george-l/3929797- http://blog.litespeedtech.com/2011/01/28/speed-up-vbulletin-sites-through-litespeed-built-in-cache/ Found vBulletin Boost Product XML (product-boostv1.xml) also: http://pastebin.com/raw.php?i=06yF7X1H But some visitors are getting pages as they are logged in as a different user, so we had to disable it. Does anybody have config that works? 
Many thanks, Ivan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245340,245340#msg-245340 From nginx-forum at nginx.us Mon Dec 9 08:10:35 2013 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 09 Dec 2013 03:10:35 -0500 Subject: Nginx FastCGI cache for vBulletin In-Reply-To: <7408bd1dd95bea049318a1cc6e1a0244.NginxMailingListEnglish@forum.nginx.org> References: <7408bd1dd95bea049318a1cc6e1a0244.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2521221f51b6e93c88182d1c9f3b5af9.NginxMailingListEnglish@forum.nginx.org> Try xcache, since VB is a php application which will run via php-cgi, its easy to add xcache. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245340,245342#msg-245342 From nginx-forum at nginx.us Mon Dec 9 08:37:55 2013 From: nginx-forum at nginx.us (ivanp) Date: Mon, 09 Dec 2013 03:37:55 -0500 Subject: Nginx FastCGI cache for vBulletin In-Reply-To: <2521221f51b6e93c88182d1c9f3b5af9.NginxMailingListEnglish@forum.nginx.org> References: <7408bd1dd95bea049318a1cc6e1a0244.NginxMailingListEnglish@forum.nginx.org> <2521221f51b6e93c88182d1c9f3b5af9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <44b1f883d5944c0d4d8bfeaac5a482f6.NginxMailingListEnglish@forum.nginx.org> Done that already. Actually using Zend OPcache + Memcached, it is better than XCache. fastcgi_cache would additionally boost speed for guest users, just looking for correct config. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245340,245343#msg-245343 From nginx-forum at nginx.us Mon Dec 9 08:51:47 2013 From: nginx-forum at nginx.us (Neddy) Date: Mon, 09 Dec 2013 03:51:47 -0500 Subject: [SSL] Initial Connection takes very long time Message-ID: Hi, I'm using nginx 1.5.7 for SSL termination for my websites (no encryption betwwen nginx-origin servers). 
This is my test result: http://www.webpagetest.org/result/131209_M2_BYF/1/details/ you can see it took more than 9 seconds for initiation My SSL config part in nginx.conf: ssl_session_cache shared:TLSSL:10m; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDHE-ECDSA-RC4-SHA:ECDHE-ECDSA-AES128-SHA256:HIGH:!kEDH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM; ssl_prefer_server_ciphers on; ssl_certificate server.crt; ssl_certificate_key server.key; I know it have to trade on high encryption, but 9 seconds is too slow to init a new connection. I highly appreciate your comments to help me to reduce that waiting time. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245344,245344#msg-245344 From citrin at citrin.ru Mon Dec 9 09:05:53 2013 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Mon, 09 Dec 2013 13:05:53 +0400 Subject: [SSL] Initial Connection takes very long time In-Reply-To: References: Message-ID: <52A587F1.2010009@citrin.ru> On 12/09/13 12:51, Neddy wrote: > This is my test result: > http://www.webpagetest.org/result/131209_M2_BYF/1/details/ you can see it > took more than 9 seconds for initiation At least one issue with https://instavn.com - there is no intermediate certificates provided. http://nginx.org/r/ssl_certificate > If intermediate certificates should be specified in addition to a primary > certificate, they should be specified in the same file in the following > order: the primary certificate comes first, then the intermediate > certificates. 
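In other words, the file given to `ssl_certificate` is a simple concatenation with the server certificate on top; a sketch with placeholder file names (the bundle is assumed to have been built by appending the intermediate certificates after the primary one):

```nginx
# server.chained.crt is assumed to contain, top to bottom:
#   1) the server (primary) certificate
#   2) the intermediate CA certificate(s), in chain order
# File names and paths here are placeholders.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/server.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
}
```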
If you use certificate from Comodo, intermediate certificates can be downloaded from https://support.comodo.com/index.php?_m=downloads&_a=view&parentcategoryid=1 From nginx-forum at nginx.us Mon Dec 9 09:53:32 2013 From: nginx-forum at nginx.us (Neddy) Date: Mon, 09 Dec 2013 04:53:32 -0500 Subject: [SSL] Initial Connection takes very long time In-Reply-To: <52A587F1.2010009@citrin.ru> References: <52A587F1.2010009@citrin.ru> Message-ID: <9f0fb363a5b2e99979c8f2762f713571.NginxMailingListEnglish@forum.nginx.org> I added Essential bundle CA cert into a certchain, but it's no change. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245344,245350#msg-245350 From r1ch+nginx at teamliquid.net Mon Dec 9 10:43:40 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 9 Dec 2013 11:43:40 +0100 Subject: [SSL] Initial Connection takes very long time In-Reply-To: <9f0fb363a5b2e99979c8f2762f713571.NginxMailingListEnglish@forum.nginx.org> References: <52A587F1.2010009@citrin.ru> <9f0fb363a5b2e99979c8f2762f713571.NginxMailingListEnglish@forum.nginx.org> Message-ID: This seems like a firewall or router issue, your server isn't even replying to port 443 connection attempts for a very long time. 
04:41:15.000800 IP x.x.x.x.40069 > 118.69.68.219.443: Flags [S], seq 2650257073, win 14600, options [mss 1460,sackOK,TS val 1750622551 ecr 0,nop,wscale 7], length 0 04:41:15.997096 IP x.x.x.x.40069 > 118.69.68.219.443: Flags [S], seq 2650257073, win 14600, options [mss 1460,sackOK,TS val 1750622801 ecr 0,nop,wscale 7], length 0 04:41:18.001097 IP x.x.x.x.40069 > 118.69.68.219.443: Flags [S], seq 2650257073, win 14600, options [mss 1460,sackOK,TS val 1750623302 ecr 0,nop,wscale 7], length 0 04:41:22.009097 IP x.x.x.x.40069 > 118.69.68.219.443: Flags [S], seq 2650257073, win 14600, options [mss 1460,sackOK,TS val 1750624304 ecr 0,nop,wscale 7], length 0 04:41:30.025097 IP x.x.x.x.40069 > 118.69.68.219.443: Flags [S], seq 2650257073, win 14600, options [mss 1460,sackOK,TS val 1750626308 ecr 0,nop,wscale 7], length 0 04:41:30.257270 IP 118.69.68.219.443 > x.x.x.x.40069: Flags [S.], seq 571024000, ack 2650257074, win 14480, options [mss 1398,sackOK,TS val 1060007116 ecr 1750626308,nop,wscale 5], length 0 04:41:30.257289 IP x.x.x.x.40069 > 118.69.68.219.443: Flags [.], ack 1, win 115, options [nop,nop,TS val 1750626366 ecr 1060007116], length 0 04:41:30.257429 IP x.x.x.x.40069 > 118.69.68.219.443: Flags [P.], seq 1:321, ack 1, win 115, options [nop,nop,TS val 1750626366 ecr 1060007116], length 320 On Mon, Dec 9, 2013 at 10:53 AM, Neddy wrote: > I added Essential bundle CA cert into a certchain, but it's no change. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,245344,245350#msg-245350 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From citrin at citrin.ru Mon Dec 9 10:54:56 2013 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Mon, 09 Dec 2013 14:54:56 +0400 Subject: [SSL] Initial Connection takes very long time In-Reply-To: References: Message-ID: <52A5A180.2060403@citrin.ru> On 12/09/13 12:51, Neddy wrote: > This is my test result: > http://www.webpagetest.org/result/131209_M2_BYF/1/details/ you can see it > took more than 9 seconds for initiation Try to connect from local server, e. g. server_with_nginx> openssl s_client -connect 127.0.0.1:443 If local connection is fast, problem may be in network or firewall settings. From artemrts at ukr.net Mon Dec 9 11:36:01 2013 From: artemrts at ukr.net (wishmaster) Date: Mon, 09 Dec 2013 13:36:01 +0200 Subject: Nginx FastCGI cache for vBulletin In-Reply-To: <44b1f883d5944c0d4d8bfeaac5a482f6.NginxMailingListEnglish@forum.nginx.org> References: <7408bd1dd95bea049318a1cc6e1a0244.NginxMailingListEnglish@forum.nginx.org> <2521221f51b6e93c88182d1c9f3b5af9.NginxMailingListEnglish@forum.nginx.org> <44b1f883d5944c0d4d8bfeaac5a482f6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1386588698.880156717.1sx5yxi4@frv34.ukr.net> --- Original message --- From: "ivanp" Date: 9 December 2013, 10:38:02 > Done that already. Actually using Zend OPcache + Memcached, it is better > than XCache. > > fastcgi_cache would additionally boost speed for guest users, just looking > for correct config. Confirm. I have e-market based on OpenCart (php5.4+XCache). After adding fastcgi caching, system load reduce significantly. 
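For what it's worth, a guest-only fastcgi_cache of the kind this thread is after could be sketched as below. This is untested; the paths, zone name, and especially the session-cookie pattern are assumptions that would need to be checked against the cookie names the application actually sets:

```nginx
# http level: on-disk cache keyed per URL (zone name and sizes are examples)
fastcgi_cache_path /var/cache/nginx/vb levels=1:2 keys_zone=VBCACHE:10m inactive=10m;

server {
    location ~ \.php$ {
        # Bypass the cache for anyone carrying a session cookie, so
        # logged-in users never receive (or populate) cached guest pages.
        set $skip_cache 0;
        if ($http_cookie ~* "sessionhash") { set $skip_cache 1; }

        fastcgi_cache        VBCACHE;
        fastcgi_cache_key    "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid  200 5m;
        fastcgi_cache_bypass $skip_cache;   # don't serve cached pages to them
        fastcgi_no_cache     $skip_cache;   # don't store their responses either

        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
    }
}
```

The bypass/no_cache pair is what prevents the "visitors see pages as another user" symptom described above: personalized responses are never written into the shared cache.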
From nginx-forum at nginx.us Mon Dec 9 13:24:30 2013 From: nginx-forum at nginx.us (Peleke) Date: Mon, 09 Dec 2013 08:24:30 -0500 Subject: Subdomains no longer work In-Reply-To: <20131203203009.GM15722@craic.sysops.org> References: <20131203203009.GM15722@craic.sysops.org> Message-ID: <69ab1df17b49ff41041671cd28a91e9d.NginxMailingListEnglish@forum.nginx.org> One problem seems to be that it is not possible to stop nginx: sudo /etc/init.d/nginx stop [ ok ] Stopping nginx: nginx. ~$ ps ax -o pid,ppid,%cpu,vsz,wchan,command|egrep '(nginx|PID)' PID PPID %CPU VSZ WCHAN COMMAND 23558 1 0.0 123844 ep_pol nginx: cache manager process 24945 22802 0.0 6392 pipe_w egrep (nginx|PID) 27042 1 0.0 127780 ep_pol nginx: worker process 27043 1 0.0 127756 ep_pol nginx: worker process 27045 1 0.0 127640 ep_pol nginx: worker process 27046 1 0.0 127636 ep_pol nginx: worker process 27048 1 0.0 123952 ep_pol nginx: cache manager process Only PID 27041 gets killed. curl -v http://www.domain.tld/test * About to connect() to www.domain.tld port 80 * Trying 1.2.3.4... connected * Connected to www.domain.tld (1.2.3.4) port 80 > GET /test HTTP/1.1 > User-Agent: curl/7.15.5 (x86_64-pc-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8c zlib/1.2.3 libidn/0.6.5 > Host: www.domain.tld > Accept: */* > < HTTP/1.1 301 Moved Permanently < Server: nginx < Date: Mon, 09 Dec 2013 13:09:00 GMT < Content-Type: text/html < Content-Length: 178 < Location: http://www.domain.tld/test/ < Connection: keep-alive 301 Moved Permanently

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host www.domain.tld left intact * Closing connection #0 user 14:08:55 ~: curl -v http://adminer.domain.tld * getaddrinfo(3) failed for adminer.domain.tld:80 * Couldn't resolve host 'adminer.domain.tld' * Closing connection #0 curl: (6) Couldn't resolve host 'adminer.domain.tld' Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244807,245361#msg-245361 From lists at ruby-forum.com Mon Dec 9 14:40:02 2013 From: lists at ruby-forum.com (Roger Pack) Date: Mon, 09 Dec 2013 15:40:02 +0100 Subject: some suggestions for clarifying client_max_body_size Message-ID: <486387453e50639ec9f608a07bf12f0b@ruby-forum.com> Hello. After installing a "fresh" 1.4.4 nginx with passenger the other day, I had a few suggestions/feature requests for the default install: by default the client_max_body_size can easily be exceeded. I think this takes many people by surprise (at least it did for me) when small uploads work but larger ones don't... A couple of possible feature requests/suggestions, thus: by default in nginx.conf list it, with its default (and possibly even an explanation that this is the max POST upload size). Maybe in the error log it could mention the config param, etc. like *216 client intended to send too large body: 15004020 bytes > client_max_body_size 1000000, client: 174.23.73.121 ... (currently it says this) 2013/12/09 14:35:26 [error] 26537#0: *216 client intended to send too large body: 15004020 bytes, client: 174.23.73.121 ... Anyway something like that (or possibly even return a 413 instead of dropping the connection, as long as it has free connections available at all?) That one might not be viable, just debating it. Anyway just throwing out some ideas there. Thank you for an excellent product. -roger- -- Posted via http://www.ruby-forum.com/. 
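For reference, the directive Roger is describing defaults to 1m and can be raised explicitly; a minimal sketch (the sizes and the location name are arbitrary examples):

```nginx
http {
    # Default is 1m; a request body larger than this is rejected
    # with 413 (Request Entity Too Large).
    client_max_body_size 15m;

    server {
        # The limit can also be overridden per server or per location,
        # e.g. only for an upload endpoint.
        location /upload {
            client_max_body_size 50m;
        }
    }
}
```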
From igor at sysoev.ru Mon Dec 9 14:53:17 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 9 Dec 2013 18:53:17 +0400 Subject: ngx_ssl_dhparam and dh1024_p In-Reply-To: References: Message-ID: <025B98B2-16A9-4587-8E3A-442BD598DAB8@sysoev.ru> On Dec 9, 2013, at 5:54 , Jeffrey Walton wrote: > Hi All, > > ngx_event_openssl.c hs the following around line 535: > > ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *file) > { > ... > /* > * -----BEGIN DH PARAMETERS----- > * MIGHAoGBALu8LcrYRnSQfEP89YDpz9vZWKP1aLQtSwju1OsPs1BMbAMCducQgAxc > * y7qokiYUxb7spWWl/fHSh6K8BJvmd4Bg6RqSp1fjBI9osHb302zI8pul34HcLKcl > * 7OZicMyaUDXYzs7vnqAnSmOrHlj6/UmI0PZdFGdX2gcd8EXP4WubAgEC > * -----END DH PARAMETERS----- > */ > > static unsigned char dh1024_p[] = { > 0xBB, 0xBC, 0x2D, 0xCA, 0xD8, 0x46, 0x74, 0x90, 0x7C, 0x43, 0xFC, 0xF5, > 0x80, 0xE9, 0xCF, 0xDB, 0xD9, 0x58, 0xA3, 0xF5, 0x68, 0xB4, 0x2D, 0x4B, > 0x08, 0xEE, 0xD4, 0xEB, 0x0F, 0xB3, 0x50, 0x4C, 0x6C, 0x03, 0x02, 0x76, > 0xE7, 0x10, 0x80, 0x0C, 0x5C, 0xCB, 0xBA, 0xA8, 0x92, 0x26, 0x14, 0xC5, > 0xBE, 0xEC, 0xA5, 0x65, 0xA5, 0xFD, 0xF1, 0xD2, 0x87, 0xA2, 0xBC, 0x04, > 0x9B, 0xE6, 0x77, 0x80, 0x60, 0xE9, 0x1A, 0x92, 0xA7, 0x57, 0xE3, 0x04, > 0x8F, 0x68, 0xB0, 0x76, 0xF7, 0xD3, 0x6C, 0xC8, 0xF2, 0x9B, 0xA5, 0xDF, > 0x81, 0xDC, 0x2C, 0xA7, 0x25, 0xEC, 0xE6, 0x62, 0x70, 0xCC, 0x9A, 0x50, > 0x35, 0xD8, 0xCE, 0xCE, 0xEF, 0x9E, 0xA0, 0x27, 0x4A, 0x63, 0xAB, 0x1E, > 0x58, 0xFA, 0xFD, 0x49, 0x88, 0xD0, 0xF6, 0x5D, 0x14, 0x67, 0x57, 0xDA, > 0x07, 0x1D, 0xF0, 0x45, 0xCF, 0xE1, 0x6B, 0x9B > }; > ... > > Searching on the web for the strings ("MIGHAoGBALu8Lcr", "0xBB, 0xBC, > 0x2D, 0xCA, 0xD8, 0x46, 0x74, 0x90" and "bbbc2dcad8467490") returned > hits for nginx (but no hits in a standard somewhere). > > Would anyone happen to know where that prime and generator came from? > > Does anyone know the subgroup order (or what is the q)? 
Is q at least > 160-bits (or 2k, where k is 80-bits for the security level offered in > the 1024-bit DH prime)? > > Thanks in advance. This parameters were obtained using "openssl dhparam -C 1024" command. -- Igor Sysoev http://nginx.com From david.donchez at smartjog.com Mon Dec 9 16:52:11 2013 From: david.donchez at smartjog.com (David DONCHEZ) Date: Mon, 9 Dec 2013 16:52:11 +0000 Subject: Upstrea/ Keepalive strange behaviour Message-ID: <1386607931.7264.7.camel@ddonchez.fr.lan> Hello all, I have a strange behavior when using upstream/keepalive and it could be fine if someone can give me some feedback regarding this setup. I have multiple locations with proxy_pass directive, i use proxy_http_version 1.1 and "Connection" header is cleared. In the upstream block, i have 2 or more upstream IP and i have add the directive keepalive. Now, sometimes i see this error log : "upstream prematurely closed connection while reading response header from upstream". A tcpdump show that nginx send a GET to his upstream but the TCP connection is closed and nginx don't receive the response from upstream. Most of the time, nginx uses another upstream server and the transaction is successfully completed. This behavior appears with different upstream server. If someone has experiencing this kind of issue, feedback are welcome ! Thanks, BR. -- David Donchez - Lead Engineer, Research & Engineering SmartJog | T: +33 1 5868 6190 27 Blvd Hippolyte Marqu?s, 94200 Ivry-sur-Seine, France www.smartjog.com | a TDF Group company From chigga101 at gmail.com Mon Dec 9 17:03:55 2013 From: chigga101 at gmail.com (Matthew Ngaha) Date: Mon, 9 Dec 2013 17:03:55 +0000 Subject: alias Message-ID: hi all, i just had a quick question about this example. 
____________________________________________________________________ http { server { server_name localhost; root /var/www/website.com/html; location /admin/ { alias /var/www/locked/; } } } When a request for http://localhost/ is received, files are served from the /var/www/website.com/html/ folder. However, if Nginx receives a request for http://localhost/admin/, the path used to retrieve the files is /home/website. com/locked/. ____________________________________________________________________ my question is about the alias variable inside the location /admin/ {} block. It clearly shows /var/www/locked/ as its path, but when explained below, it says the path is /home/website.com/locked/.. Please can someone tell me how: /home/website.com/locked/ == /var/www/locked/ ?? I'm new to linux so maybe it's something i'm unaware of. also the path having website.com in it. Does this mean a directory was named website.com, with the period? From francis at daoine.org Mon Dec 9 17:34:12 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 9 Dec 2013 17:34:12 +0000 Subject: Subdomains no longer work In-Reply-To: <69ab1df17b49ff41041671cd28a91e9d.NginxMailingListEnglish@forum.nginx.org> References: <20131203203009.GM15722@craic.sysops.org> <69ab1df17b49ff41041671cd28a91e9d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131209173412.GB21047@craic.sysops.org> On Mon, Dec 09, 2013 at 08:24:30AM -0500, Peleke wrote: Hi there, > One problem seems to be that it is not possible to stop nginx: That is probably worth fixing independent of anything else. Issue whatever "kill" or "nginx -s stop" commands you need to, to be able to get it to reliably stop and start. 
> user 14:08:55 ~: curl -v http://adminer.domain.tld > * getaddrinfo(3) failed for adminer.domain.tld:80 > * Couldn't resolve host 'adminer.domain.tld' > * Closing connection #0 > curl: (6) Couldn't resolve host 'adminer.domain.tld' Your client can't resolve the hostname to the IP address, so nginx never sees the request. Fix your dns (or other name resolution) and it should work. f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Dec 9 18:33:30 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 9 Dec 2013 18:33:30 +0000 Subject: alias In-Reply-To: References: Message-ID: <20131209183330.GC21047@craic.sysops.org> On Mon, Dec 09, 2013 at 05:03:55PM +0000, Matthew Ngaha wrote: Hi there, > hi all, i just had a quick question about this example. Where does the example come from? It may be worth asking the author to fix it. > http { > server { > server_name localhost; > root /var/www/website.com/html; > location /admin/ { > alias /var/www/locked/; > } > } > } If you ask for http://localhost/request.html, nginx will try to send the file /var/www/website.com/html/request.html. If you ask for http://localhost/admin/request.html, nginx will try to send the file /var/www/locked/request.html. > my question is about the alias variable inside the location /admin/ {} > block. It clearly shows /var/www/locked/ as its path, but when > explained below, it says the path is /home/website.com/locked/.. The explanation is wrong. > also the path having website.com in it. Does this mean a directory was > named website.com, with the period? Yes. f -- Francis Daly francis at daoine.org From r1ch+nginx at teamliquid.net Tue Dec 10 10:24:33 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 10 Dec 2013 11:24:33 +0100 Subject: stalled cache updating - what does it mean? 
Message-ID: Hello, I have a pretty basic PHP / fastcgi setup with a fastcgi cache as follows: fastcgi_cache_key "$scheme$request_method$host$request_uri"; fastcgi_cache_lock on; fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache MAINCACHE; fastcgi_cache_valid 5m; I'm using nginx from the nginx.org Debian repository and building from source, adding the geoip and ngx_cache_purge modules. nginx version: nginx/1.5.7 built by gcc 4.7.2 (Debian 4.7.2-5) TLS SNI support enabled Every so often I'll get a couple of lines in the log like the following: 2013/12/09 15:59:34 [alert] 14218#0: *10450047 stalled cache updating, error:0 while closing request, client: x.236.101.34, server: x.x.101.37:80 2013/12/09 15:59:34 [alert] 14218#0: *10450055 stalled cache updating, error:0 while closing request, client: x.236.101.34, server: x.x.101.37:80 2013/12/09 15:59:34 [alert] 14218#0: *10450099 stalled cache updating, error:0 while closing request, client: x.236.101.34, server: x.x.101.37:80 This usually occurs after a client requests a script which issues some internal requests to the site itself (wp-cron). As it's log level alert, it seems serious, but I can't seem to notice anything wrong as a result. What exactly does this alert mean and is it something I need to be worried about? I've since moved the script to an offline cron job to see if this helps, but I'm still curious exactly what this means. Thanks, Rich. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 10 10:58:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Dec 2013 14:58:08 +0400 Subject: stalled cache updating - what does it mean? In-Reply-To: References: Message-ID: <20131210105807.GD95113@mdounin.ru> Hello! 
On Tue, Dec 10, 2013 at 11:24:33AM +0100, Richard Stanway wrote:

> Hello,
> I have a pretty basic PHP / fastcgi setup with a fastcgi cache as follows:
>
> fastcgi_cache_key "$scheme$request_method$host$request_uri";
> fastcgi_cache_lock on;
> fastcgi_cache_bypass $skip_cache;
> fastcgi_no_cache $skip_cache;
> fastcgi_cache MAINCACHE;
> fastcgi_cache_valid 5m;
>
> I'm using nginx from the nginx.org Debian repository and building from
> source, adding the geoip and ngx_cache_purge modules.
>
> nginx version: nginx/1.5.7
> built by gcc 4.7.2 (Debian 4.7.2-5)
> TLS SNI support enabled
>
> Every so often I'll get a couple of lines in the log like the following:
>
> 2013/12/09 15:59:34 [alert] 14218#0: *10450047 stalled cache updating,
> error:0 while closing request, client: x.236.101.34, server: x.x.101.37:80
> 2013/12/09 15:59:34 [alert] 14218#0: *10450055 stalled cache updating,
> error:0 while closing request, client: x.236.101.34, server: x.x.101.37:80
> 2013/12/09 15:59:34 [alert] 14218#0: *10450099 stalled cache updating,
> error:0 while closing request, client: x.236.101.34, server: x.x.101.37:80
>
> This usually occurs after a client requests a script which issues some
> internal requests to the site itself (wp-cron).
>
> As it's log level alert, it seems serious, but I can't seem to notice
> anything wrong as a result. What exactly does this alert mean and is it
> something I need to be worried about? I've since moved the script to an
> offline cron job to see if this helps, but I'm still curious exactly what
> this means.

The message means that nginx detected an internal problem - the cache cleanup callback was called on a request termination, and the r->cache->updating flag, which should be cleared at this point, is still set. User-visible results are most likely a cache item which will not be updated, or a cache lock which will appear to be always set (and requests to a cache item handled only after a timeout).
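For context, the cache lock involved here is the one enabled by fastcgi_cache_lock in the poster's config; a sketch of the related directives (values shown are the built-in defaults, not taken from the poster's setup):

```nginx
# With the lock enabled, when several requests miss the same cache key
# at once, only one of them is passed to the FastCGI server; the others
# wait for the cache entry to be populated.
fastcgi_cache_lock on;

# How long the waiting requests wait for the locked entry before being
# passed to the backend themselves (5s is the default).
fastcgi_cache_lock_timeout 5s;
```

A stuck r->cache->updating flag would make every waiter for the affected key hit that timeout.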
I would suggest to test if you are able to reproduce the problem without 3rd party modules. If you are, it would be interesting to see full configuration and a debug log, see http://wiki.nginx.org/Debugging. -- Maxim Dounin http://nginx.org/ From ian.hobson at ntlworld.com Tue Dec 10 14:56:56 2013 From: ian.hobson at ntlworld.com (Ian Hobson) Date: Tue, 10 Dec 2013 14:56:56 +0000 Subject: nginx "Segementation Fault" - dont know where to start Message-ID: <52A72BB8.4000707@ntlworld.com> Hi all, I am trying to compile version 1.4.4 with a couple of modules, and I have a clean compile that runs on the test system just fine. However when I run it on the production server it crashes with "Segmentation Fault" when the service is restarted. There is nothing helpful in the logs that I can find - only the normal background stuff, and the odd script kiddie looking for attack points. The old config (version 1.2.6) was. Configuration summary + using system PCRE library + using system OpenSSL library + md5: using OpenSSL library + sha1: using OpenSSL library + using system zlib library nginx path prefix: "/usr/local/nginx" nginx binary file: "/usr/sbin" nginx configuration prefix: "/etc/nginx" nginx configuration file: "/etc/nginx/nginx.conf" nginx pid file: "/usr/local/nginx/nginx.pid" nginx error log file: "/usr/local/nginx/logs/error.log" nginx http access log file: "/usr/local/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" The new config summary with version 1.4.4. 
is Configuration summary + using system PCRE library + using system OpenSSL library + md5: using OpenSSL library + sha1: using OpenSSL library + using system zlib library nginx path prefix: "/usr/local/nginx" nginx binary file: "/usr/sbin" nginx configuration prefix: "/etc/nginx" nginx configuration file: "/etc/nginx/nginx.conf" nginx pid file: "/usr/local/nginx/nginx.pid" nginx error log file: "/usr/local/nginx/logs/error.log" nginx http access log file: "/usr/local/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" uname -a on the test server gives Linux anake 3.2.0-57-generic #87-Ubuntu SMP Tue Nov 12 21:38:12 UTC 2013 i686 i686 i386 GNU/Linux This includes all released updates. And on the production server it is Linux ianhobson.com 3.2.0-39-virtual #62-Ubuntu SMP Wed Feb 27 22:45:45 UTC 2013 i686 athlon i386 GNU/Linux This includes all updates up to about 3 weeks ago. I don't know where to start to try and find the problem. All help gratefully received! The executable file has grown by about 2MB, so I am wondering if it might be lack of memory that is stopping it starting up? Thanks Ian -- Ian Hobson 31 Sheerwater, Northampton NN3 5HU, Tel: 01604 513875 Preparing eBooks for Kindle and ePub formats to give the best reader experience. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 10 15:14:58 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Dec 2013 19:14:58 +0400 Subject: nginx "Segementation Fault" - dont know where to start In-Reply-To: <52A72BB8.4000707@ntlworld.com> References: <52A72BB8.4000707@ntlworld.com> Message-ID: <20131210151458.GH95113@mdounin.ru> Hello! 
On Tue, Dec 10, 2013 at 02:56:56PM +0000, Ian Hobson wrote: > I am trying to compile version 1.4.4 with a couple of modules, and I have a > clean compile that runs on the test system just fine. However when I run it > on the production server it crashes with "Segmentation Fault" when the > service is restarted. http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Dec 10 17:37:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Dec 2013 21:37:49 +0400 Subject: Upstrea/ Keepalive strange behaviour In-Reply-To: <1386607931.7264.7.camel@ddonchez.fr.lan> References: <1386607931.7264.7.camel@ddonchez.fr.lan> Message-ID: <20131210173749.GP95113@mdounin.ru> Hello! On Mon, Dec 09, 2013 at 04:52:11PM +0000, David DONCHEZ wrote: > Hello all, > > I have a strange behavior when using upstream/keepalive and it could be > fine if someone can give me some feedback regarding this setup. > > I have multiple locations with proxy_pass directive, i use > proxy_http_version 1.1 and "Connection" header is cleared. > In the upstream block, i have 2 or more upstream IP and i have add the > directive keepalive. > > Now, sometimes i see this error log : "upstream prematurely closed > connection while reading response header from upstream". > > A tcpdump show that nginx send a GET to his upstream but the TCP > connection is closed and nginx don't receive the response from > upstream. Most of the time, nginx uses another upstream server and the > transaction is successfully completed. > This behavior appears with different upstream server. As per HTTP specification, persistent connection can be closed by the server at any time, and clients should handle this. That is, that's more or less normal, and nginx is expected to handle this fine by using another upstream server if this happens. 
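For illustration, a minimal proxy setup of the kind discussed might look like this (the upstream name and addresses are hypothetical):

```nginx
upstream backend {
    server 192.0.2.10:8080;
    server 192.0.2.11:8080;
    keepalive 16;    # idle keepalive connections cached per worker
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";    # clear "Connection: close"
        proxy_pass http://backend;
    }
}
```

If a cached connection is closed by the backend just as nginx reuses it, the request is retried on the next server according to proxy_next_upstream (which covers "error" and "timeout" by default).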
-- Maxim Dounin http://nginx.org/

From chigga101 at gmail.com Tue Dec 10 22:17:25 2013
From: chigga101 at gmail.com (Matthew Ngaha)
Date: Tue, 10 Dec 2013 22:17:25 +0000
Subject: alias
Message-ID:

> On Mon, Dec 09, 2013 at 05:03:55PM +0000, Matthew Ngaha wrote:
> Where does the example come from? It may be worth asking the author to
> fix it.

Hi Francis. It was from Nginx HTTP Server.

>> http {
>> server {
>> server_name localhost;
>> root /var/www/website.com/html;
>> location /admin/ {
>> alias /var/www/locked/;
>> }
>> }
>> }
>
> If you ask for http://localhost/request.html, nginx will try to
> send the file /var/www/website.com/html/request.html. If you ask for
> http://localhost/admin/request.html, nginx will try to send the file
> /var/www/locked/request.html.
>

The problem I've been having, after looking in the error logs, is that it's still trying to find /admin/ in the default html root. I've tried new locations, and new roots inside these new locations for /admin/, and now I've tried alias. All have been 404 Not Found due to nginx searching for these paths in root html; ... Any ideas how I can write this? Here's my default setup; how could I create an alternative path from the root?

server {
    listen 80;
    server_name localhost;
    root html;
    #charset koi8-r;
    #access_log logs/host.access.log main;
}

Also, I don't know how this message will turn out; I only receive daily digests, so I had to reply to the digest and not the individual mail itself. Is there a way to stop receiving nginx mail via digests and just receive each individual mail?
From francis at daoine.org Tue Dec 10 22:40:48 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Dec 2013 22:40:48 +0000 Subject: alias In-Reply-To: References: Message-ID: <20131210224048.GH21047@craic.sysops.org> On Tue, Dec 10, 2013 at 10:17:25PM +0000, Matthew Ngaha wrote: > > On Mon, Dec 09, 2013 at 05:03:55PM +0000, Matthew Ngaha wrote: Hi there, > >> http { > >> server { > >> server_name localhost; > >> root /var/www/website.com/html; > >> location /admin/ { > >> alias /var/www/locked/; > >> } > >> } > >> } > The problem i've been having after looking in the error logs,is that > it's still trying to find /admin/ in the default html root. That suggests that the configuration you are editing, and the configuration that nginx is using, are not the same. You can test by adding the following line: location = /test/ {return 200 "This is a test\n";} just after the server_name line and reloading nginx. If "curl -i http://localhost/test/" does not show you "This is a test", then that's your problem. After you fix that, the rest should become clear. > Also i don't know how this message will turn out, i only recieve daily > digests so i had to reply to the digest and not individual the mail > itself. Is there a way to stop recieving nginx mail via digests and > just recieve each individual mail? Start here: > http://mailman.nginx.org/mailman/listinfo/nginx Cheers, f -- Francis Daly francis at daoine.org From noloader at gmail.com Wed Dec 11 05:05:34 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Wed, 11 Dec 2013 00:05:34 -0500 Subject: Nginx Deployments in Practice Message-ID: My nginx needs are (1) stable nginx, (2) patched nginx, and (3) something I can modify. The last is what concerns me - I want to ensure I don't lose my changes and enhancements while pursuing (1) and (2). What are folks finding is the easiest way to ensure stable and patched nginx is used in the engineering process? 
Jeff

From david.donchez at smartjog.com Wed Dec 11 08:58:19 2013
From: david.donchez at smartjog.com (David DONCHEZ)
Date: Wed, 11 Dec 2013 08:58:19 +0000
Subject: Upstrea/ Keepalive strange behaviour
In-Reply-To: <20131210173749.GP95113@mdounin.ru>
References: <1386607931.7264.7.camel@ddonchez.fr.lan> <20131210173749.GP95113@mdounin.ru>
Message-ID: <1386752299.6695.2.camel@ddonchez.fr.lan>

Hello Maxim,

> As per HTTP specification, persistent connection can be closed by
> the server at any time, and clients should handle this. That is,
> that's more or less normal, and nginx is expected to handle this
> fine by using another upstream server if this happens.

Thanks for your reply. I suspected that this behavior was normal but it's good to have a confirmation from your side.

-- David Donchez - Lead Engineer, Research & Engineering SmartJog | T: +33 1 5868 6190 27 Blvd Hippolyte Marquès, 94200 Ivry-sur-Seine, France www.smartjog.com | a TDF Group company

From black.fledermaus at arcor.de Wed Dec 11 10:19:20 2013
From: black.fledermaus at arcor.de (basti)
Date: Wed, 11 Dec 2013 11:19:20 +0100
Subject: $_SERVER['QUERY_STRING'] plus-signs and whitespace
Message-ID: <52A83C28.8030207@arcor.de>

Hello, I have the following problem with Nginx 1.2.1-2.2 running on 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2 x86_64 GNU/Linux

A string like $_SERVER['QUERY_STRING'] = test=1+2 will get $_GET['test'] = 1 2

I have found the following:

$_GET["q"] = strtr($_GET["q"], "+", " ");

at http://www.dmuth.org/node/1268/how-get-rid-annoying-plus-signs-drupal-under-nginx

Is there a way to do this in nginx config?

Regards, basti
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From noloader at gmail.com Wed Dec 11 10:55:16 2013
From: noloader at gmail.com (Jeffrey Walton)
Date: Wed, 11 Dec 2013 05:55:16 -0500
Subject: OpenSSL Locks
Message-ID:

I need to do some additional processing with OpenSSL in a custom
I noticed ngingx does not set any locks in ngx_ssl_init (from ngx_event_openssl.c). A few questions: 1) Should lock installation be guarded on NGX_THREADS or something else? 2) Where is most appropirate to initialize the locks: ngx_init_threads or ngx_ssl_init (or somewhere else)? 3) Is the project interested in a patch? (Per http://nginx.org/en/docs/contributing_changes.html, thanks Maxim). Thanks in advance. Jeff From smallfish.xy at gmail.com Wed Dec 11 11:55:56 2013 From: smallfish.xy at gmail.com (smallfish) Date: Wed, 11 Dec 2013 19:55:56 +0800 Subject: $_SERVER['QUERY_STRING'] plus-signs and whitespace In-Reply-To: <52A83C28.8030207@arcor.de> References: <52A83C28.8030207@arcor.de> Message-ID: '+' in url equla ' ' (base64). if value has '+', you must encode it. example: $a = "1+1"; echo urlencode($a); the correct url is: "http://x.com/?test=1%2B1" -- smallfish http://chenxiaoyu.org On Wed, Dec 11, 2013 at 6:19 PM, basti wrote: > Hello, > I have the following Problem with Nginx 1.2.1-2.2 > running on 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2 x86_64 GNU/Linux > > A String like $_SERVER['QUERY_STRING'] = test=1+2 will get $_GET['test'] > = 1 2 > I have found the following: > > $_GET["q"] = strtr($_GET["q"], "+", " "); > at > http://www.dmuth.org/node/1268/how-get-rid-annoying-plus-signs-drupal-under-nginx > > Is there a way to do this, in nginx config? > > Regards, > basti > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at akins.org Wed Dec 11 12:18:49 2013 From: brian at akins.org (Brian Akins) Date: Wed, 11 Dec 2013 07:18:49 -0500 Subject: Nginx Deployments in Practice In-Reply-To: References: Message-ID: I build packages using omnibus - https://github.com/bakins/omnibus-nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Dec 11 12:21:35 2013 From: nginx-forum at nginx.us (Peleke) Date: Wed, 11 Dec 2013 07:21:35 -0500 Subject: Subdomains no longer work In-Reply-To: <20131209173412.GB21047@craic.sysops.org> References: <20131209173412.GB21047@craic.sysops.org> Message-ID: <0256883b0141ec4579170f3bf3636cba.NginxMailingListEnglish@forum.nginx.org> Thanks for the DNS hint, my provider must have done something wrong, now it is fixed, great! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244807,245415#msg-245415 From noloader at gmail.com Wed Dec 11 12:30:18 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Wed, 11 Dec 2013 07:30:18 -0500 Subject: Nginx Deployments in Practice In-Reply-To: References: Message-ID: 2013/12/11 Brian Akins : > I build packages using omnibus - https://github.com/bakins/omnibus-nginx > Thanks Brian. It appears omnibus-nginx does not contain the nginx sources. Is it safe to assume you provide them? What version of nginx do you currently use? Do you have any custom code that might create conflicts when stable changes from 1.4.4 to X (would X be 1.4.6 or 1.6)? Jeff From nginx-forum at nginx.us Wed Dec 11 12:57:32 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 11 Dec 2013 07:57:32 -0500 Subject: [Patch] possible mutex starvation issue affects all nginx Linux versions. In-Reply-To: <20131202133907.GT93176@mdounin.ru> References: <20131202133907.GT93176@mdounin.ru> Message-ID: <0d45c0d6218317ddf62cb39e343e06d0.NginxMailingListEnglish@forum.nginx.org> >From the patch author: Hi Maxim, I apologize for my late reply: I just had now time to sort this out. The short answer to your remarks is: the first patch is partially correct, just incomplete, and could be easily completed with the addition of a call to ngx_disable_accept_events(...) addressing the issue of 2 workers accepting connections at the same time. 
The second one is windows only and as correct and atomical as the unix only ngx_unlock_mutexes(), addressing the very same issue that one is for (which btw showed up during my tests) but being just one line long. The long answer follows and, well, is quite long. So before everything else, and for the sake of clarity: accept mutex patching has been performed on codebase derived by nginx 1.5.6 after properly re-enabling it (the accept_mutex) removing the following disable lines in ngx_event.c #if (NGX_WIN32) /* * disable accept mutex on win32 as it may cause deadlock if * grabbed by a process which can't accept connections */ ngx_use_accept_mutex = 0; #endif Thus submitted patch lines are not useless and are part of a larger patch included in Catepillar 1.5.7.1. The latter (Caterpillar) fully works distributing correctly the workload to any number of configured process workers (not just one of the configured number of them). Now getting to specific proposed patches: A) about the added line ngx_accept_mutex_held=0 in ngx_event.c > 259 if (ngx_accept_mutex_held) { > --+ ngx_accept_mutex_held=0; > 260 ngx_shmtx_unlock(&ngx_accept_mutex); > 261 } that is not wrong, it's just incomplete. In fact, first of all, it pairs with the similar one in src/event/ngx_event_accept.c, line 402 which was patched in Caterpillar and that you too found later in your in 1.5.7. The problem with this one (line 402) was that when ngx_accept_mutex_held's process *local* variable and the accept_mutex *global* to all nginx processes (being it allocated in shared memory) got out of synch. This resulted (as it has been showed in tests) in more than a worker process locally 'thinking' to have the ownership of the accept_mutex at the same time. The latter interfers in the call of their respective ngx_disable_accept_events(...) 
in turn leading, because of unlucky timing, to no worker being able to enable them anymore, leaving all workers (so the whole server) unable to handle any further connection.

The line above, between 259 and 260, tried to fix this too, but, there, a call to ngx_disable_accept_events(...) is missing. In fact to be completely correct and not resulting in partially incorrect behaviour as you correctly pointed out, such a call must be added too. This can be done by moving that 'if', fully patched as

if (ngx_accept_mutex_held) {
    ngx_disable_accept_events(cycle);
    ngx_accept_mutex_held=0;
    ngx_shmtx_unlock(&ngx_accept_mutex);
}

to ngx_event_accept.c (ngx_disable_accept_events(...) being internal linkage) in a dedicated function, for example

void ngx_release_accept_mutex(ngx_cycle_t *cycle) { /* the if above */ }

which then gets called at line 259 in ngx_event.c instead of the 'if' itself. BTW, to make the patch complete, the 'if' in ngx_trylock_accept_mutex where ngx_disable_accept_events() is called should be removed, since it is superseded by that ngx_release_accept_mutex() call.

All of this is to avoid the partially incorrect behaviour that the 'if', as it is now,

> 259 if (ngx_accept_mutex_held) {
> 260 ngx_shmtx_unlock(&ngx_accept_mutex);
> 261 }

causes at the end of an iteration when the accept mutex was held: the mutex is released in the 'if' above, but ngx_disable_accept_events(...) won't be called until the next iteration with ngx_trylock_accept_mutex(...). Thus in the meanwhile another worker could succeed in acquiring the accept mutex, via ngx_trylock_accept_mutex(...), and start to accept connections as well (ngx_enable_accept_events() is called in ngx_trylock_accept_mutex when the mutex is acquired successfully). This means that 2 workers can accept connections at the same time, which goes against the very reason for an accept mutex; this is reasonably not the intended behaviour.
B) About the second proposed patch: the 3 lines if(*ngx_accept_mutex.lock==ngx_processes[n].pid) { *ngx_accept_mutex.lock=0; } make a patch to prevent server's accept_mutex deadlock when its owning worker process dies unexpectedly (program crash while processing a request/answer or task manager's abrupt termination). This is a scenario that occurred several times while testing and, when showing up, lead to server's workers unable to accept any further connection. Given that, unlike in the unix version with its ngx_unlock_mutexes(), this scenario is not addressed in the windows version and, as said, the above lines are meant to fix it. They are correct and thread-safe (the operation is atomic) *in the mentioned specific scenario in which they are meant to be executed* because: 1) they are invoked by master only and just when it detects that a worker process has terminated: worker process handle gets signaled on its termination and a WaitForMultipleObjects (in os/win32/ngx_process_cycle.c) waiting for them wakes up 2) the if condition makes sure 2 things are true at the same time: a) the spinlock contains the pid of the dead worker and b) that implies *ngx_accept_mutex.lock != 0 (since no pid could be zero) 3) considering that the accept_mutex is only acquired (during code flow) via ngx_uint_t ngx_shmtx_trylock(ngx_shmtx_t *mtx) { return (*mtx->lock == 0 && ngx_atomic_cmp_set(mtx->lock, 0, ngx_pid)); } where #define ngx_atomic_cmp_set(lock, old, set) \ (InterlockedCompareExchange((void **) lock, (void *) set, (void *) old) \ == (void *) old) #endif then, in the above scenario, *mtx->lock == pid (of the terminated worker process) and pid !=0 imply that the ngx_atomic_cmp_set(...) (so its InterlockedCompareExchange(...) ) can't ever be called for any other process different than the (already) dead one (i.e. code can't get to the atomic InterlockedXXX operation at all, so no atomic operation is ever executed...). 
4) furthermore, accept_mutex is released (when appropriate) only via a call to ngx_shmtx_unlock() by the owning worker process, and in such a scenario that process is obviously dead before having had such a chance (or the 'if( )' fix's condition wouldn't trigger)

5) no other worker can release a mutex it didn't previously acquire (and couldn't even if it mistakenly tried to; it is enough to look at the ngx_shmtx_unlock() implementation to see why). This last point also shows that, always in that scenario, ngx_shmtx_unlock() can't ever be called by the master to release the accept mutex (not being its owner).

Hopefully at this point it should be clear enough that releasing the accept_mutex directly with

*ngx_accept_mutex.lock=0;

is as safe as calling ngx_shmtx_force_unlock(), the choice of which one I just deem cosmetic (they're functionally equivalent): my choice then went for the more performant, straight variable assignment.

Not wanting to make this post longer than it is, I conclude just mentioning that, as of now, the accept mutex is the only mutex living in shared memory, so unlocking other possibly shared mutexes is likewise not required. I realize that for more easily readable code and for possible future extensions (more mutexes in shared memory) it would be better to port that function, but as of now it doesn't really make a difference from the correctness of program execution point of view.

The patch was still unpolished for full submission, sorry for that.
HTH and regards, Vittorio F Digilio Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245121,245417#msg-245417 From noloader at gmail.com Wed Dec 11 14:27:37 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Wed, 11 Dec 2013 09:27:37 -0500 Subject: Hg checkout missing config and friends Message-ID: I performed a checkout to the latest stable: $ hg clone http://hg.nginx.org/nginx -u "release-1.4.4" I tried to run config, and it appears to be missing: $ ./config -bash: ./config: No such file or directory $ nginx-release-1.4.4$ ls auto conf contrib docs misc src The file is present in the tarball on the website. How does one get all the files for release-1.4.4? (Forgive me if I only need to copy the one file. Its not readily apparent to me what I should do at this point). Thanks in advance. From mdounin at mdounin.ru Wed Dec 11 15:25:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Dec 2013 19:25:03 +0400 Subject: OpenSSL Locks In-Reply-To: References: Message-ID: <20131211152503.GX95113@mdounin.ru> Hello! On Wed, Dec 11, 2013 at 05:55:16AM -0500, Jeffrey Walton wrote: > I need to do some additional processing with OpenSSL in a custom > module. I noticed ngingx does not set any locks in ngx_ssl_init (from > ngx_event_openssl.c). > > A few questions: > > 1) Should lock installation be guarded on NGX_THREADS or something else? > > 2) Where is most appropirate to initialize the locks: ngx_init_threads > or ngx_ssl_init (or somewhere else)? > > 3) Is the project interested in a patch? (Per > http://nginx.org/en/docs/contributing_changes.html, thanks Maxim). There are basically no threads support in nginx (it was experimental and broken for a long time with the exception of some win32-related stuff), so it's not clear why you need locks at all. 
-- Maxim Dounin http://nginx.org/ From noloader at gmail.com Wed Dec 11 15:42:25 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Wed, 11 Dec 2013 10:42:25 -0500 Subject: OpenSSL Locks In-Reply-To: <20131211152503.GX95113@mdounin.ru> References: <20131211152503.GX95113@mdounin.ru> Message-ID: On Wed, Dec 11, 2013 at 10:25 AM, Maxim Dounin wrote: > Hello! > > On Wed, Dec 11, 2013 at 05:55:16AM -0500, Jeffrey Walton wrote: > >> I need to do some additional processing with OpenSSL in a custom >> module. I noticed ngingx does not set any locks in ngx_ssl_init (from >> ngx_event_openssl.c). >> >> A few questions: >> >> 1) Should lock installation be guarded on NGX_THREADS or something else? >> >> 2) Where is most appropirate to initialize the locks: ngx_init_threads >> or ngx_ssl_init (or somewhere else)? >> >> 3) Is the project interested in a patch? (Per >> http://nginx.org/en/docs/contributing_changes.html, thanks Maxim). > > There are basically no threads support in nginx (it was > experimental and broken for a long time with the exception of some > win32-related stuff), so it's not clear why you need locks at all. Thanks Maxim. The source code is full of: #if (NGX_THREADS) ... #endif I thought they were needed. My bad. Jeff From piotr at cloudflare.com Wed Dec 11 16:17:35 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 11 Dec 2013 08:17:35 -0800 Subject: Hg checkout missing config and friends In-Reply-To: References: Message-ID: Hey, > $ ./config > -bash: ./config: No such file or directory ./auto/configure Best regards, Piotr Sikora From mdounin at mdounin.ru Wed Dec 11 17:11:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Dec 2013 21:11:03 +0400 Subject: [Patch] possible mutex starvation issue affects all nginx Linux versions. 
In-Reply-To: <0d45c0d6218317ddf62cb39e343e06d0.NginxMailingListEnglish@forum.nginx.org> References: <20131202133907.GT93176@mdounin.ru> <0d45c0d6218317ddf62cb39e343e06d0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131211171103.GB95113@mdounin.ru>

Hello!

On Wed, Dec 11, 2013 at 07:57:32AM -0500, itpp2012 wrote:

[...]

> A) about the added line ngx_accept_mutex_held=0 in ngx_event.c

[...]

> The line above, between 259 and 260, tried to fix this too, but, there, a
> call to ngx_disable_accept_events(...) is missing.
> In fact to be completely correct and not resulting in partially incorrect
> behaviour as you correctly pointed out, such a call must be added too.

This is wrong as well. There is no reason to disable accept events till we are sure that we weren't able to re-acquire the accept mutex before going into the kernel to wait for events. It's just a waste of resources.

You are misunderstanding the code, probably because the ngx_accept_mutex_held variable has a somewhat misleading name. It is not expected to mean "we have the accept mutex locked", it means "we had the accept mutex locked on the previous iteration, and we have to disable accept events if we won't be able to re-acquire it". There is no need to fix it, it's not broken.

[...]

> B) About the second proposed patch:

[...]

> Hopefully at this point it should be clear enough that releasing the
> accept_mutex directly with
>
> *ngx_accept_mutex.lock=0;
>
> is as safe as calling ngx_shmtx_force_unlock(), the choice of which one
> I just deem cosmetic (they're functionally equivalent): my choice then went
> for the more performant, straight variable assignment.

Even considering the code currently there for the win32 version, there is no guarantee that reading *ngx_accept_mutex.lock is atomic. While usually it is, in theory it can be non-atomic, and the code will start doing wrong things due to the if() check unexpectedly succeeding.
> NOt wanting to make this post longer than it is I conclude just mentioning > that, as of now, the accept mutex is the only mutex living in shared memory > so unlocking other possibly shared mutexes is as well unrequired. That's not true. Something like $ grep -r shmtx_lock src/ will give you an idea where shared memory mutexes are used in nginx. While it's tricky to get all of this working on modern Windows versions with ASLR, the accept mutex is certainly not the only shared memory mutex used in nginx. As I already wrote, I don't object adding an unlock here, but I would like to see the code which is correct and in-line with the unix version of the code. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 11 18:12:26 2013 From: nginx-forum at nginx.us (volga629) Date: Wed, 11 Dec 2013 13:12:26 -0500 Subject: Imap proxy In-Reply-To: <20131205234917.GG95113@mdounin.ru> References: <20131205234917.GG95113@mdounin.ru> Message-ID: Hello Maxim, Usually is normal setup of EOip tunnels though transport ipsec (transparent lan). And from security prospective the most bigger threat is coming from inside. Outside intrusion possible, but it match more complicated. I confirm that plain 143 proxy working good. I just wonder about this message. 2013/12/05 00:05:40 [error] 20003#0: *1 auth http server 127.0.0.1:80 did not send server or port while in http auth state, client: 10.12.130.102, server: 0.0.0.0:993, login: "testuser1" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245255,245442#msg-245442 From chigga101 at gmail.com Wed Dec 11 20:20:51 2013 From: chigga101 at gmail.com (Matthew Ngaha) Date: Wed, 11 Dec 2013 20:20:51 +0000 Subject: alias Message-ID: > On Tue, Dec 10, 2013 at 10:17:25PM +0000, Matthew Ngaha wrote: >> The problem i've been having after looking in the error logs,is that >> it's still trying to find /admin/ in the default html root. 
> > That suggests that the configuration you are editing, and the > configuration that nginx is using, are not the same. > > You can test by adding the following line: > > location = /test/ {return 200 "This is a test\n";} > > just after the server_name line and reloading nginx. > > If "curl -i http://localhost/test/" does not show you "This is a test", > then that's your problem. I think that's the problem also. After doing that, curl returns 404 Not Found as well. I then changed root from html to something else just to test whether it was using a different configuration file. I changed it in both: /usr/local/nginx-1.4.3/conf/nginx.conf /usr/local/nginx/conf-1.4.3/nginx.conf.default but localhost still returns the main nginx welcome index page and not the index.html in my test root dir that was in /var/www/testing. I ran the linux command "locate" and here's its output.. :~$ locate nginx.conf /home/matthew/src/nginx-1.4.3/conf/nginx.conf /usr/local/nginx-1.4.3/conf/.nginx.conf.swp /usr/local/nginx-1.4.3/conf/nginx.conf /usr/local/nginx-1.4.3/conf/nginx.conf.default /usr/local/nginx-1.4.3/conf/nginx.conf~ I wasn't aware of the 1st result returned so I also edited this conf file. But still no luck. I have no idea where to look for the file or how to pick a default one for nginx to always use :( From mdounin at mdounin.ru Wed Dec 11 20:30:14 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Dec 2013 00:30:14 +0400 Subject: Imap proxy In-Reply-To: References: <20131205234917.GG95113@mdounin.ru> Message-ID: <20131211203014.GE95113@mdounin.ru> Hello! On Wed, Dec 11, 2013 at 01:12:26PM -0500, volga629 wrote: > Hello Maxim, > This is a normal setup of EoIP tunnels through transport IPsec (a transparent > LAN). From a security perspective, the biggest threat comes from inside; > outside intrusion is possible, but much more complicated. > I confirm that the plain port 143 proxy is working well. I just wonder about this > message.
> > 2013/12/05 00:05:40 [error] 20003#0: *1 auth http server 127.0.0.1:80 did > not send server or port while in http auth state, client: 10.12.130.102, > server: 0.0.0.0:993, login: "testuser1" The message means that the auth script failed to properly respond to the auth_http request, see here for details: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol -- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed Dec 11 23:36:52 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Dec 2013 23:36:52 +0000 Subject: alias In-Reply-To: References: Message-ID: <20131211233652.GK21047@craic.sysops.org> On Wed, Dec 11, 2013 at 08:20:51PM +0000, Matthew Ngaha wrote: > > On Tue, Dec 10, 2013 at 10:17:25PM +0000, Matthew Ngaha wrote: > > That suggests that the configuration you are editing, and the > > configuration that nginx is using, are not the same. > /home/matthew/src/nginx-1.4.3/conf/nginx.conf > /usr/local/nginx-1.4.3/conf/.nginx.conf.swp > /usr/local/nginx-1.4.3/conf/nginx.conf > /usr/local/nginx-1.4.3/conf/nginx.conf.default > /usr/local/nginx-1.4.3/conf/nginx.conf~ > > I wasn't aware of the 1st result returned so I also edited this conf > file. But still no luck. I have no idea where to look for the file or > how to pick a default one for nginx to always use :( That's something you'll have to find. "ps" with arguments may let you see which nginx binary is running, and it might show you which config file it is reading (if it is not the compiled-in default). Until you can reliably stop and start nginx, configuration changes are pointless. (Just changing the config file will do nothing, until you tell nginx to read the changed config file.) Good luck with it, f -- Francis Daly francis at daoine.org From czhttp at gmail.com Thu Dec 12 01:55:36 2013 From: czhttp at gmail.com (54chen) Date: Thu, 12 Dec 2013 09:55:36 +0800 Subject: Nginx Deployments in Practice In-Reply-To: References: Message-ID: Omnibus seems use AgentZh's ngx tar.gz.
See https://github.com/bakins/omnibus-nginx/blob/master/config/software/nginx.rb 2013/12/11 Jeffrey Walton > 2013/12/11 Brian Akins : > > I build packages using omnibus - https://github.com/bakins/omnibus-nginx > > > Thanks Brian. It appears omnibus-nginx does not contain the nginx > sources. Is it safe to assume you provide them? > > What version of nginx do you currently use? Do you have any custom > code that might create conflicts when stable changes from 1.4.4 to X > (would X be 1.4.6 or 1.6)? > > Jeff > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- - http://www.54chen.com ???? ???? http://twitter.com/54chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Thu Dec 12 04:46:54 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 11 Dec 2013 20:46:54 -0800 Subject: Nginx Deployments in Practice In-Reply-To: References: Message-ID: Hello! On Wed, Dec 11, 2013 at 5:55 PM, 54chen wrote: > Omnibus seems use AgentZh's ngx tar.gz. Just a side note: please never never put capital letters into my nick because I hate that. If you really want to capitalize names, please use my first name, Yichun, instead. Thank you for the cooperation. Best regards, -agentzh From kyprizel at gmail.com Thu Dec 12 07:59:26 2013 From: kyprizel at gmail.com (kyprizel) Date: Thu, 12 Dec 2013 11:59:26 +0400 Subject: Problem with TLS handshake in some browsers when OCSP stapling enabled Message-ID: Hi, we got a problem with OCSP stapling. During the handshake some browsers send TLS extension "certificate status" with more than 5 bytes in it. 
In Nginx error_log it looks like: [crit] 8721#0: *35 SSL_do_handshake() failed (SSL: error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error error:1408A0E3:SSL routines:SSL3_GET_CLIENT_HELLO:parse tlsext) while SSL handshaking, client: If we disable OCSP stapling - everything works fine. Looks like the problem is on the browser side and in the OpenSSL tls ext parsing function. But can we make it just ignore the incorrect (?) tls extension rather than dropping the SSL handshake? Here is a list of user-agents which we were able to get on the same IPs after disabling OCSP stapling. Opera/9.80 (Windows NT 5.1) Presto/2.12.388 Version/12.16 Opera/9.80 (Windows NT 6.1) Presto/2.12.388 Version/12.16 Opera/9.80 (Windows NT 6.1; WOW64) Presto/2.12.388 Version/12.16 Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36 PoC reproducing the problem attached. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex.py Type: text/x-python Size: 549 bytes Desc: not available URL: From czhttp at gmail.com Thu Dec 12 08:00:59 2013 From: czhttp at gmail.com (54chen) Date: Thu, 12 Dec 2013 16:00:59 +0800 Subject: Nginx Deployments in Practice In-Reply-To: References: Message-ID: Oh.. sorry, it's all my fault. I have been writing too much Java code. 2013/12/12 Yichun Zhang (agentzh) > Hello! > > On Wed, Dec 11, 2013 at 5:55 PM, 54chen wrote: > > Omnibus seems use AgentZh's ngx tar.gz. > > Just a side note: please never never put capital letters into my nick > because I hate that. If you really want to capitalize names, please > use my first name, Yichun, instead. Thank you for the cooperation.
> > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- - http://www.54chen.com http://twitter.com/54chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Dec 12 11:33:53 2013 From: nginx-forum at nginx.us (fakrulalam) Date: Thu, 12 Dec 2013 06:33:53 -0500 Subject: Google Analytics with nginx reverse proxy Message-ID: I am using nginx as a reverse proxy and forward the traffic to an Apache server where Google Analytics is configured. I have configured proxy_set_header, and from the Apache server I can view users' IP addresses. But in Google Analytics user sessions are dropped. I wonder whether there is any other configuration that needs to be done? Thanks Fakrul Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245460,245460#msg-245460 From laursen at oxygen.net Thu Dec 12 12:03:29 2013 From: laursen at oxygen.net (Lasse Laursen) Date: Thu, 12 Dec 2013 13:03:29 +0100 Subject: Google Analytics with nginx reverse proxy In-Reply-To: References: Message-ID: <2EF789F3-D89D-4C47-A3AB-3E2A36EB226D@oxygen.net> The Google Analytics code (the JavaScript) isn't served from your server. I don't quite understand what your problem is. Sent from my iPhone > On 12/12/2013, at 12.33, "fakrulalam" wrote: > > I am using nginx as a reverse proxy and forward the traffic to an Apache server > where Google Analytics is configured. I have configured proxy_set_header, and > from the Apache server I can view users' IP addresses. But in Google Analytics > user sessions are dropped. I wonder whether there is any other configuration > that needs to be done?
> > Thanks > Fakrul > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245460,245460#msg-245460 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From noloader at gmail.com Thu Dec 12 14:30:56 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Thu, 12 Dec 2013 09:30:56 -0500 Subject: location configuration? Message-ID: I'm reading through the nginx architecture document at http://www.aosabook.org/en/nginx.html. The location configuration is called out in a few places. For example: Phase handlers typically do four things: get the location configuration, generate an appropriate response, send the header, and send the body. What is the location configuration, and why is it significant? Thanks in advance. From coderman at gmail.com Thu Dec 12 14:36:46 2013 From: coderman at gmail.com (coderman) Date: Thu, 12 Dec 2013 06:36:46 -0800 Subject: location configuration? In-Reply-To: References: Message-ID: On Thu, Dec 12, 2013 at 6:30 AM, Jeffrey Walton wrote: > I'm reading through the nginx architecture document at > http://www.aosabook.org/en/nginx.html. >... > What is the location configuration, and why is it significant? e.g.: (assuming some web root set) ... # location configuration for directly serving files from directory location /dist { proxy_max_temp_file_size 1m; autoindex off; } # default handler location configuration for passing to upstream proxy location ^~ / { proxy_pass http://upstreamproxy; } ... best regards, From noloader at gmail.com Thu Dec 12 14:43:59 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Thu, 12 Dec 2013 09:43:59 -0500 Subject: location configuration? In-Reply-To: References: Message-ID: On Thu, Dec 12, 2013 at 9:36 AM, coderman wrote: > On Thu, Dec 12, 2013 at 6:30 AM, Jeffrey Walton wrote: >> I'm reading through the nginx architecture document at >> http://www.aosabook.org/en/nginx.html. >>... 
>> What is the location configuration, and why is it significant? > > e.g.: (assuming some web root set) > ... > # location configuration for directly serving files from directory > location /dist { > proxy_max_temp_file_size 1m; > autoindex off; > } > # default handler location configuration for passing to upstream proxy > location ^~ / { > proxy_pass http://upstreamproxy; > } > ... Thanks. For some reason, I was thinking along the lines of IP addresses and geolocation. From mdounin at mdounin.ru Thu Dec 12 15:24:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Dec 2013 19:24:57 +0400 Subject: Problem with TLS handshake in some browsers when OCSP stapling enabled In-Reply-To: References: Message-ID: <20131212152456.GL95113@mdounin.ru> Hello! On Thu, Dec 12, 2013 at 11:59:26AM +0400, kyprizel wrote: > Hi, > we got a problem with OCSP stapling. > > During the handshake some browsers send TLS extension "certificate status" > with more than 5 bytes in it. > In Nginx error_log it looks like: > > [crit] 8721#0: *35 SSL_do_handshake() failed (SSL: error:0D0680A8:asn1 > encoding routines:ASN1_CHECK_TLEN:wrong tag error:0D08303A:asn1 encoding > routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error error:1408A0E3:SSL > routines:SSL3_GET_CLIENT_HELLO:parse tlsext) while SSL handshaking, client: > > If we disable OCSP stapling - everything works fine. Looks like the problem > is on the browser side and in the OpenSSL tls ext parsing function. But can we > make it just ignore the incorrect (?) tls extension rather than dropping the SSL > handshake? I don't think it's possible to do anything in nginx here. Try looking at the relevant OpenSSL code - if the server status callback is set, it parses the extension, and if a parsing error happens - the error is returned. It should be possible to work around it in the OpenSSL code, though.
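In case it helps others hitting this: the only configuration-level workaround seems to be the one already mentioned, disabling stapling where the affected clients connect. A sketch (the server name is hypothetical, and this of course gives up stapling for that server entirely):

```nginx
server {
    listen 443 ssl;
    server_name legacy-clients.example.com;   # hypothetical
    # ... certificate directives as usual ...
    ssl_stapling off;   # avoids the tlsext parse error aborting the handshake
}
```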
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Dec 12 17:22:43 2013 From: nginx-forum at nginx.us (fakrulalam) Date: Thu, 12 Dec 2013 12:22:43 -0500 Subject: Google Analytics with nginx reverse proxy In-Reply-To: <2EF789F3-D89D-4C47-A3AB-3E2A36EB226D@oxygen.net> References: <2EF789F3-D89D-4C47-A3AB-3E2A36EB226D@oxygen.net> Message-ID: <7b338dbf126509b2f0381d83ae4cc707.NginxMailingListEnglish@forum.nginx.org> The Google Analytics code (the JavaScript) works properly when I point my domain to the Apache server. But when I point my domain to nginx (as a reverse proxy), Google Analytics statistics get dropped and it doesn't show the actual user hits. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245460,245477#msg-245477 From me at ptylr.com Thu Dec 12 19:19:56 2013 From: me at ptylr.com (Paul Taylor) Date: Thu, 12 Dec 2013 19:19:56 +0000 Subject: sent_http_HEADER Volatile under Nginx 1.2.4 Message-ID: I'm in the process of making some amendments to an environment, where my upstream servers are sending a custom header (X-No-Cache), which I need to detect and alter caching rules within the configuration. The custom header is visible within the output, and I can re-output it as another header through configuration (i.e. add_header X-Sent-No-Cache $sent_http_x_no_cache; ). However, as soon as I perform any type of testing of this custom header, it disappears. For example, if I were to perform a map on the custom header, try to set an Nginx variable to the value of the header, or test within an IF statement, any future call to this header is no longer possible. Additionally any setting or testing of the header fails. Unfortunately I have little control of the upstream, so cannot use an alternative method (such as proper Cache-Control headers). Has anyone experienced similar behaviour, or have any pearls of wisdom?
Thanks in advance, Paul From mdounin at mdounin.ru Thu Dec 12 19:32:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Dec 2013 23:32:37 +0400 Subject: sent_http_HEADER Volatile under Nginx 1.2.4 In-Reply-To: References: Message-ID: <20131212193237.GN95113@mdounin.ru> Hello! On Thu, Dec 12, 2013 at 07:19:56PM +0000, Paul Taylor wrote: > I'm in the process of making some amendments to an environment, > where my upstream servers are sending a custom header > (X-No-Cache), which I need to detect and alter caching rules > within the configuration. > > The custom header is visible within the output, and I can > re-output it as another header through configuration (i.e. > add_header X-Sent-No-Cache $sent_http_x_no_cache; ). > > However, as soon as I perform any type of testing of this custom > header, it disappears. > > For example, if I were to perform a map on the custom header, try > to set an Nginx variable to the value of the header, or test > within an IF statement, any future call to this header is no > longer possible. Additionally any setting or testing of the > header fails. Both "set" and "if" directives you mentioned are executed _before_ a request is sent to upstream, and at this point there is no X-No-Cache header in the response. Due to this, using the $sent_http_x_no_cache variable in "set" or "if" will result in an empty value, and this value will be cached for later use. It's not clear what you are trying to do so I can't advise any further, but certainly using the $sent_http_x_no_cache variable in "if" or "set" directives isn't going to work, and this is what causes the behaviour you see. Just a map{} should work fine though - as long as you don't try to call the map before the X-No-Cache header is actually available.
E.g., something like this should work fine: map $sent_http_x_no_cache $foo { "" empty; default foo; } add_header X-Foo $foo; It might also be a good idea to use the $upstream_http_x_no_cache variable instead, see here: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables -- Maxim Dounin http://nginx.org/ From me at ptylr.com Thu Dec 12 23:36:21 2013 From: me at ptylr.com (Paul Taylor) Date: Thu, 12 Dec 2013 23:36:21 +0000 Subject: sent_http_HEADER Volatile under Nginx 1.2.4 In-Reply-To: References: Message-ID: <2859EC08-D88C-4434-A0BD-101527357928@ptylr.com> Hi Maxim, Thanks for your response. You're right! Using the map did work (I thought I'd tried that, but must have been tired!). So, now I have one other challenge: the value of $foo that you define below is needed to identify whether to cache the response or not. The only issue is that I have a number of other directives that I also need to add into the mix - therefore I use the set_by_lua code to nest/combine OR within an if statement; code below (I've kept the variable name as foo, so it's clear which I'm referring to): map $upstream_http_x_no_cache $foo { "" 0; default 1; } set_by_lua $bypass_cache ' local no_cache_dirs = tonumber(ngx.var.no_cache_dirs) or 0 local logged_in = tonumber(ngx.var.logged_in) or 0 local no_cache_header = tonumber(ngx.var.foo) or 0 if((no_cache_dirs == 1) or (no_cache_header == 1) or (logged_in == 1)) then return 1; end return 0; '; Now when I make the Lua local variable declaration in order to use it, the value of $upstream_http_x_no_cache is reset to 0, even when it was set as 1 originally. If I comment out the line declaring the local variable within the Lua call, it returns to being a value of 1 again. Am I getting the sequencing of events wrong again? Is there any way that I can get the value of $upstream_http_x_no_cache into this Lua block, or would I need to do it another way? Thanks very much for your help so far Maxim.
Paul

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Fri Dec 13 16:31:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 13 Dec 2013 20:31:41 +0400 Subject: sent_http_HEADER Volatile under Nginx 1.2.4 In-Reply-To: <2859EC08-D88C-4434-A0BD-101527357928@ptylr.com> References: <2859EC08-D88C-4434-A0BD-101527357928@ptylr.com> Message-ID: <20131213163141.GX95113@mdounin.ru> Hello! On Thu, Dec 12, 2013 at 11:36:21PM +0000, Paul Taylor wrote: > Hi Maxim, > Thanks for your response. You're right! Using the map did work > (I thought I'd tried that, but must have been tired!). > So, now I have one other challenge: the value of $foo that you > define below is needed to identify whether to cache the response > or not. The only issue is that I have a number of other > directives that I also need to add into the mix - therefore I > use the set_by_lua code to nest/combine OR within an if > statement; code below (I've kept the variable name as foo, so > it's clear which I'm referring to): > map $upstream_http_x_no_cache $foo { > "" 0; > default 1; > } > set_by_lua $bypass_cache ' > local no_cache_dirs = tonumber(ngx.var.no_cache_dirs) or 0 > local logged_in = tonumber(ngx.var.logged_in) or 0 > local no_cache_header = tonumber(ngx.var.foo) or 0 > > if((no_cache_dirs == 1) or (no_cache_header == 1) or > (logged_in == 1)) then > return 1; > end > > return 0; > '; > Now when I make the Lua local variable declaration in order to > use it, the value of $upstream_http_x_no_cache is reset to 0, > even when it was set as 1 originally. If I comment out the line > declaring the local variable within the Lua call, it returns to > being a value of 1 again. > Am I getting the sequencing of events wrong again? Is there any > way that I can get the value of $upstream_http_x_no_cache into > this Lua block, or would I need to do it another way? Are you going to use the result in proxy_no_cache?
If yes, you can just use multiple variables there, something like this should work: proxy_no_cache $upstream_http_x_no_cache $no_cache_dirs $logged_in; See here for details: http://nginx.org/r/proxy_no_cache > Thanks very much for your help so far Maxim. > Paul > __________________________________________________________________ > Hello! > > On Thu, Dec 12, 2013 at 07:19:56PM +0000, Paul Taylor wrote: > > > I'm in the process of making some amendments to an environment, > > where my upstream servers are sending a custom header > > (X-No-Cache), which I need to detect and alter caching rules > > within the configuration. > > > > The custom header is visible within the output, and I can > > re-output it as another header through configuration (i.e. > > add_header X-Sent-No-Cache $sent_http_x_no_cache; ). > > > > However, as soon as I perform any type of testing of this custom > > header, it disappears. > > > > For example, if I were to perform a map on the custom header, try > > to set an Nginx variable to the value of the header, or test > > within an IF statement, any future call to this header is no > > longer possible. Additionally any setting or testing of the > > header fails. > > Both "set" and "if" directives you mentioned are executed _before_ > a request is sent to upstream, and at this point there is no > X-No-Cache header in the response. Due to this, using the > $sent_http_x_no_cache variable in "set" or "if" will result in an > empty value, and this value will be cached for later use. > > It's not clear what you are trying to do so I can't advise any > further, but certainly using the $sent_http_x_no_cache variable in > "if" or "set" directives isn't going to work, and this is what > causes the behaviour you see. > > Just a map{} should work fine though - as long as you don't try to > call the map before the X-No-Cache header is actually available.
> E.g., something like this should work fine: > > map $sent_http_x_no_cache $foo { > "" empty; > default foo; > } > > add_header X-Foo $foo; > > It might also be a good idea to use the $upstream_http_x_no_cache > variable instead, see here: > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From david at styleflare.com Fri Dec 13 17:25:02 2013 From: david at styleflare.com (david) Date: Fri, 13 Dec 2013 12:25:02 -0500 Subject: Proxy Pass Redirect Problem Message-ID: <52AB42EE.3090008@styleflare.com> Not sure if this is my configuration causing this symptom or openresty. Here is what's happening. If I try to access the store "admin" via: http://mysite.com/admin I am getting proxy redirects sent to my browser and seeing 127.0.0.1:8000/admin in my address bar. Not exactly the result I was looking for. Any pointers on where I should look? Running ngx_openresty/1.4.3.6 Here is my config location / { root html; index index.php index.html index.htm; try_files $uri @store; } location @wsgi { include uwsgi_params; uwsgi_pass unix://tmp/spften.sock; } location @store { include uwsgi_params; proxy_pass http://127.0.0.1:8000$uri; proxy_intercept_errors on; #recursive_error_pages on; error_page 404, 502 = @wsgi; } Thanks in advance for any pointers. From mdounin at mdounin.ru Fri Dec 13 17:48:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 13 Dec 2013 21:48:20 +0400 Subject: Proxy Pass Redirect Problem In-Reply-To: <52AB42EE.3090008@styleflare.com> References: <52AB42EE.3090008@styleflare.com> Message-ID: <20131213174820.GA95113@mdounin.ru> Hello! On Fri, Dec 13, 2013 at 12:25:02PM -0500, david wrote: > Not sure if this is my configuration causing this symptom or openresty. > > Here is what's happening.
> If I try to access the store "admin" > via: http://mysite.com/admin > > I am getting proxy redirects sent to my browser and seeing > 127.0.0.1:8000/admin in my address bar. > > Not exactly the result I was looking for. > > Any pointers on where I should look? > > Running ngx_openresty/1.4.3.6 > > Here is my config > > location / { > root html; > index index.php index.html index.htm; > try_files $uri @store; > } > location @wsgi { > include uwsgi_params; > uwsgi_pass unix://tmp/spften.sock; > } > > location @store { > include uwsgi_params; > proxy_pass http://127.0.0.1:8000$uri; - proxy_pass http://127.0.0.1:8000$uri; + proxy_pass http://127.0.0.1:8000; See also docs here: http://nginx.org/r/proxy_pass http://nginx.org/r/proxy_redirect The default proxy_redirect should work for you if you remove "$uri" as suggested above. -- Maxim Dounin http://nginx.org/ From chigga101 at gmail.com Sat Dec 14 01:04:59 2013 From: chigga101 at gmail.com (Matthew Ngaha) Date: Sat, 14 Dec 2013 01:04:59 +0000 Subject: alias In-Reply-To: <20131211233652.GK21047@craic.sysops.org> References: <20131211233652.GK21047@craic.sysops.org> Message-ID: Hey, so out of all those config files I edited, I reloaded all of them and the changes weren't made. I think I stumbled onto the problem today. Whenever it didn't work I always put the files back to their defaults before switching off my PC, but this time I didn't. Today I didn't touch any config files, but finally nginx was finding the new location. So it seems the reloading wasn't taking effect and only did after a restart of my PC. I've been typing "sudo service nginx reload" after editing.. why isn't this working? Do I have to do something else?
From nginx-forum at nginx.us Sat Dec 14 07:06:29 2013 From: nginx-forum at nginx.us (justin) Date: Sat, 14 Dec 2013 02:06:29 -0500 Subject: SSL OCSP stapling won't enable Message-ID: According to ssllabs.com, SSL OCSP stapling is not enabled, even though I have the following in my http block: ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/pki/tls/certs/ca-bundle.trust.crt; resolver 8.8.4.4 8.8.8.8 valid=600s; resolver_timeout 15s; Any idea why? Here is my full ssllabs.com report: https://www.ssllabs.com/ssltest/analyze.html?d=commando.io Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245528,245528#msg-245528 From nginx-forum at nginx.us Sat Dec 14 18:11:45 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 14 Dec 2013 13:11:45 -0500 Subject: new ngx_resolver changes breaks modules Message-ID: <83c08a0feb3ff08fb3b7ff4dc03c952a.NginxMailingListEnglish@forum.nginx.org> While back-porting the ngx_resolver changes by Ruslan Ermilov I get these errors: error C2039: 'type' : is not a member of 'ngx_resolver_ctx_s' error C2440: '=' : cannot convert from 'ngx_addr_t' to 'ULONG' Has this been intentional, or did a glitch happen somewhere in the changes? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245532,245532#msg-245532 From nginx.org at maclemon.at Sat Dec 14 20:12:55 2013 From: nginx.org at maclemon.at (MacLemon) Date: Sat, 14 Dec 2013 21:12:55 +0100 Subject: SSL OCSP stapling won't enable In-Reply-To: References: Message-ID: Only when I set `ssl_stapling_verify off;` can I get OCSP stapling to work on my setup. In my experience it helps to (re)load the page a few times before testing with SSLLabs, to give the server time to fetch the OCSP response.
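In other words, a sketch based on the block you posted (whether verification is really the culprit here is only a guess on my part):

```nginx
ssl_stapling on;
ssl_stapling_verify off;   # relaxed, per the workaround described above
ssl_trusted_certificate /etc/pki/tls/certs/ca-bundle.trust.crt;
resolver 8.8.4.4 8.8.8.8 valid=600s;
resolver_timeout 15s;
```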
Best regards MacLemon On 14.12.2013, at 08:06, justin wrote: > According to ssllabs.com, SSL OCSP stapling is not enabled, even though I > have the following in my http block: > > ssl_stapling on; > ssl_stapling_verify on; > ssl_trusted_certificate /etc/pki/tls/certs/ca-bundle.trust.crt; > resolver 8.8.4.4 8.8.8.8 valid=600s; > resolver_timeout 15s; > > Any idea why? Here is my full ssllabs.com report: > https://www.ssllabs.com/ssltest/analyze.html?d=commando.io From ru at nginx.com Sat Dec 14 20:36:17 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Sun, 15 Dec 2013 00:36:17 +0400 Subject: new ngx_resolver changes breaks modules In-Reply-To: <83c08a0feb3ff08fb3b7ff4dc03c952a.NginxMailingListEnglish@forum.nginx.org> References: <83c08a0feb3ff08fb3b7ff4dc03c952a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131214203617.GL74021@lo0.su> On Sat, Dec 14, 2013 at 01:11:45PM -0500, itpp2012 wrote: > While back-porting the ngx_resolver changes by Ruslan Ermilov I get these > errors: > > error C2039: 'type' : is not a member of 'ngx_resolver_ctx_s' > error C2440: '=' : cannot convert from 'ngx_addr_t' to 'ULONG' > > Has this been intentional, or did a glitch happen somewhere in the changes? The API changes were minimal and intentional. The "type" field no longer exists, and the "addrs" array now holds ngx_addr_t's. From agentzh at gmail.com Sun Dec 15 00:42:32 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 14 Dec 2013 16:42:32 -0800 Subject: [ANN] ngx_openresty mainline version 1.4.3.7 released Message-ID: Hello folks! I am happy to announce that the new mainline version of ngx_openresty, 1.4.3.7, is now released: http://openresty.org/#Download Thanks to all our contributors for making this happen! The highlights of this release are the new LuaJIT v2.1 engine and the lua-resty-core library. You should observe an immediate speedup in your existing OpenResty/Lua apps after upgrading to this version.
Sometimes the speedup can be 40% overall, as observed in CloudFlare's Lua WAF system. Sometimes it may just be 10%, as observed in CloudFlare's Lua CDN system. Thanks to all the new improvements in LuaJIT v2.1, especially the improvements in the JIT compiler; Mike Pall is my hero ;) Special thanks go to CloudFlare for kindly sponsoring 3 development phases in LuaJIT v2.1 (and phase #4 is also coming!). The lua-resty-core library reimplements many Lua API functions provided by the ngx_lua module with LuaJIT FFI, which means user Lua code paths using these API functions can finally be JIT compiled by LuaJIT (they used to be interpreted by LuaJIT's interpreter only). So enabling lua-resty-core like below in nginx.conf may give another speedup in your Lua applications: init_by_lua ' require "resty.core" '; Loading the resty.core module will replace a lot of ngx_lua's Lua API functions with resty.core's own FFI-based implementations. So you can easily compare the performance with and without lua-resty-core :) Enabling lua-resty-core in CloudFlare's Lua WAF system gives another 33% ~ 37% overall speedup. Whee! But if you don't have enough Lua code paths (actually) JIT compiled, you MAY observe a slowdown after enabling lua-resty-core. So always benchmark the performance of your app before enabling lua-resty-core in production. Or just resolve or work around the blockers in your Lua code paths that cannot be JIT compiled, with the aid of LuaJIT's jit.v or jit.dump modules. To use LuaJIT's jit.v or jit.dump modules to analyze your Lua apps running in OpenResty/Nginx, you can put the following lines into your nginx.conf's http {} block: init_by_lua ' local verbose = false if verbose then local dump = require "jit.dump" dump.on("b", "/tmp/jit.log") else local v = require "jit.v" v.on("/tmp/jit.log") end require "resty.core" '; And load your app with tools like ab and weighttp to get your Lua code hot (for LuaJIT's JIT compiler).
Then you can check the outputs in the file /tmp/jit.log for all the detailed information from the JIT compiler.

Below is the complete change log for this release, as compared to the last (stable) release, 1.4.3.6:

* upgraded LuaJIT to v2.1-20131211.

    * see changes here: https://github.com/agentzh/luajit2/commits/v2.1-agentzh

* bundled LuaRestyCoreLibrary 0.0.2.

    * this library reimplements LuaNginxModule's Lua API with LuaJIT FFI. see https://github.com/agentzh/lua-resty-core for more details.

* upgraded LuaNginxModule to 0.9.3.

    * feature: added a lot of pure C API (without using any Lua VM's C API) for FFI-based Lua API implementations like LuaRestyCoreLibrary.

    * feature: allow creating 0-delay timers upon worker process exiting.

    * feature: added new API function ngx.worker.exiting() for testing if the current worker process has started exiting.

    * feature: ngx.re.find() now accepts the optional 5th argument "nth" to control which submatch capture's indexes are returned. thanks Lance Li for the feature request.

    * feature: added new API for version numbers of both Nginx and LuaNginxModule itself: ngx.config.nginx_version and ngx.config.ngx_lua_version. thanks smallfish for the patch.

    * feature: added support for loading LuaJIT 2.1 bytecode files directly in *_by_lua_file configuration directives.

    * bugfix: ngx.req.set_header() did not completely override the existing request header with multiple values. thanks Aviram Cohen for the report.

    * bugfix: modifying request headers in a subrequest could lead to assertion failures and crashes. thanks leaf corcoran for the report.

    * bugfix: turning off lua_code_cache could lead to memory issues (segfaults and LuaJIT assertion failures) when Lua libraries using LuaJIT FFI were used. now we always create a clean separate Lua VM instance for every Nginx request served by us when the Lua code cache is disabled. thanks Ron Gomes for the report.
* bugfix: the linker option "-E" is not supported by Cygwin's linker; we should test "--export-all-symbols" at the same time. thanks Heero Zhang for the report.

    * bugfix: fixed the warnings from the Microsoft Visual C++ compiler. thanks Edwin Cleton for the report.

    * optimize: optimized the implementation of ngx.headers_sent a bit.

    * doc: added new section "Statically Linking Pure Lua Modules": https://github.com/chaoslawful/lua-nginx-module#statically-linking-pure-lua-modules

    * doc: typo fixes from Zheng Ping.

* upgraded HeadersMoreNginxModule to 0.24.

    * bugfix: more_set_input_headers did not completely override the existing request header with multiple values. thanks Aviram Cohen for the report.

* upgraded SetMiscNginxModule to 0.23.

    * feature: added new configuration directives set_formatted_gmt_time and set_formatted_local_time. thanks Trurl McByte for the patch.

* upgraded MemcNginxModule to 0.14.

    * feature: added new configuration directive memc_ignore_client_abort. thanks Eldar Zaitov for the patch.

* upgraded RdsJsonNginxModule to 0.13.

    * bugfix: fixed the warnings from the Microsoft Visual C++ compiler. thanks Edwin Cleton for the report.

* upgraded EchoNginxModule to 0.50.

    * bugfix: fixed the warnings from the Microsoft Visual C++ compiler. thanks Edwin Cleton for the report.

* upgraded ArrayVarNginxModule to 0.03.

    * bugfix: fixed the warnings from the Microsoft Visual C++ compiler. thanks Edwin Cleton for the report.

* upgraded RedisNginxModule to 0.3.7.

    * see changes here:

* feature: applied the larger_max_error_str patch to the nginx core to allow error log messages up to 4096 bytes and to allow the C macro "NGX_MAX_ERROR_STR" to be overridden from the outside.

* feature: added new configure option "--with-pcre-conf-opt=OPTIONS" to the nginx core to allow custom PCRE ./configure build options. thanks Lance Li for the original patch.
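To illustrate the new "nth" argument to ngx.re.find() mentioned in the change log above, here is a small untested sketch (the location name and sample string are made up for illustration):

```nginx
location = /find-demo {
    content_by_lua '
        -- nth = 1 (the optional 5th argument) asks for the byte offsets
        -- of the first submatch capture rather than of the whole match
        -- (nth = 0, the default)
        local from, to, err = ngx.re.find("item-1234", "item-([0-9]+)", "jo", nil, 1)
        if from then
            ngx.say(from, " ", to)  -- offsets of the digits only
        end
    ';
}
```

Like ngx.re.match(), this needs the PCRE library compiled in; unlike ngx.re.match(), find() returns plain offsets and so allocates no capture table.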
The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004003

OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies.

See OpenResty's homepage for details: http://openresty.org/

We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org

Have fun!
-agentzh

From david at styleflare.com Sun Dec 15 01:07:37 2013
From: david at styleflare.com (david)
Date: Sat, 14 Dec 2013 20:07:37 -0500
Subject: Proxy Pass Redirect Problem
In-Reply-To: <20131213174820.GA95113@mdounin.ru>
References: <52AB42EE.3090008@styleflare.com> <20131213174820.GA95113@mdounin.ru>
Message-ID: <52AD00D9.4030709@styleflare.com>

Maxim! Thank you.

I must have missed something, because it does not seem to solve my issue. I removed the $uri param from the proxy_pass. I also tried adding proxy_redirect (but I think that belongs to the @wsgi block and not the @store block; I am not exactly sure), because my understanding is that it's getting sent to the @wsgi app, and that's what is issuing the redirect and using the proxy_pass address to rewrite the URI.

location @wsgi {
    include uwsgi_params;
    uwsgi_pass unix://tmp/spften.sock;
}

location @store {
    include uwsgi_params;
    proxy_pass http://127.0.0.1:8000;
    proxy_redirect http://127.0.0.1:8000 /;
    # proxy_redirect default;
    proxy_intercept_errors on;
    #recursive_error_pages on;
    error_page 404 502 = @wsgi;
}

Perhaps I misunderstood how this works. Thanks in advance for any pointers.

On 12/13/13 12:48 PM, Maxim Dounin wrote:
> Hello!
>
> On Fri, Dec 13, 2013 at 12:25:02PM -0500, david wrote:
>
>> Not sure if this is my configuration causing this symptom or openresty.
>>
>> Here is whats happening.
>> >> If I try to access the store "admin" >> via: http://mysite.com/admin >> >> I am getting proxy redirects sent to my browser and seeing >> 127.0.0.1:8000/admin in my address bar. >> >> Not exactly the result I was looking for. >> >> Any pointers where I should look? >> >> Running ngx_openresty/1.4.3.6 >> >> Here is my config >> >> location / { >> root html; >> index index.php index.html index.htm; >> try_files $uri @store; >> } >> location @wsgi { >> include uwsgi_params; >> uwsgi_pass unix://tmp/spften.sock; >> } >> >> location @store { >> include uwsgi_params; >> proxy_pass http://127.0.0.1:8000$uri; > - proxy_pass http://127.0.0.1:8000$uri; > + proxy_pass http://127.0.0.1:8000; > > > See also docs here: > > http://nginx.org/r/proxy_pass > http://nginx.org/r/proxy_redirect > > Default proxy_redirect should work for you if you'll remove "$uri" > as suggested above. > From david at styleflare.com Sun Dec 15 01:21:18 2013 From: david at styleflare.com (david) Date: Sat, 14 Dec 2013 20:21:18 -0500 Subject: Proxy Pass Redirect Problem In-Reply-To: <20131213174820.GA95113@mdounin.ru> References: <52AB42EE.3090008@styleflare.com> <20131213174820.GA95113@mdounin.ru> Message-ID: <52AD040E.6010408@styleflare.com> Nevermind. I found my error. I was doing kill -HUP and had a typo in my config. I didnt notice until I checked the config. Thank you its working as expected. Sorry for the noise. On 12/13/13 12:48 PM, Maxim Dounin wrote: > Hello! > > On Fri, Dec 13, 2013 at 12:25:02PM -0500, david wrote: > >> Not sure if this is my configuration causing this symptom or openresty. >> >> Here is whats happening. >> >> If I try to access the store "admin" >> via: http://mysite.com/admin >> >> I am getting proxy redirects sent to my browser and seeing >> 127.0.0.1:8000/admin in my address bar. >> >> Not exactly the result I was looking for. >> >> Any pointers where I should look? 
>> >> Running ngx_openresty/1.4.3.6 >> >> Here is my config >> >> location / { >> root html; >> index index.php index.html index.htm; >> try_files $uri @store; >> } >> location @wsgi { >> include uwsgi_params; >> uwsgi_pass unix://tmp/spften.sock; >> } >> >> location @store { >> include uwsgi_params; >> proxy_pass http://127.0.0.1:8000$uri; > - proxy_pass http://127.0.0.1:8000$uri; > + proxy_pass http://127.0.0.1:8000; > > > See also docs here: > > http://nginx.org/r/proxy_pass > http://nginx.org/r/proxy_redirect > > Default proxy_redirect should work for you if you'll remove "$uri" > as suggested above. > From lists-nginx at swsystem.co.uk Sun Dec 15 01:37:38 2013 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Sun, 15 Dec 2013 01:37:38 +0000 Subject: SSL OCSP stapling won't enable In-Reply-To: References: Message-ID: <52AD07E2.4000208@swsystem.co.uk> I'm using startssl for my certificates so had problems with the ssl_trusted_certificate too. just using resolver and ssl_stapling on got mine enabled. Using openssl on the console's helpful too: openssl s_client -connect www.stevewilson.co.uk:443 \ -tls1 -tlsextdebug -status < /dev/null| grep OCSP Not working yet gives "OCSP response: no response sent" give it time to gather the data and it then gives response data. Steve. On 14/12/2013 20:12, MacLemon wrote: > Only when I set `ssl_stapling_verify off;`I can get OCSP stapling to work on my setup. In my experience helps to (re)load the page a few times before testing with SSLLabs to give the server time to fetch the OCSP response. > > Best regards > MacLemon > > On 14.12.2013, at 08:06, justin wrote: >> According to ssllabs.com SSL OCSP stapling is not enabled, even though I >> have the following in my http block: >> >> ssl_stapling on; >> ssl_stapling_verify on; >> ssl_trusted_certificate /etc/pki/tls/certs/ca-bundle.trust.crt; >> resolver 8.8.4.4 8.8.8.8 valid=600s; >> resolver_timeout 15s; >> >> Any idea why? 
Here is my full ssllabs.com report: >> https://www.ssllabs.com/ssltest/analyze.html?d=commando.io
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

From agentzh at gmail.com Sun Dec 15 03:26:28 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Sat, 14 Dec 2013 19:26:28 -0800
Subject: [ANN] ngx_openresty mainline version 1.4.3.7 released
In-Reply-To: References: Message-ID:

Hello!

On Sat, Dec 14, 2013 at 4:42 PM, Yichun Zhang (agentzh) wrote:
> I am happy to announce that the new mainline version of ngx_openresty,
> 1.4.3.7, is now released:
> http://openresty.org/#Download

I just kicked out the ngx_openresty 1.4.3.9 release, including an urgent fix for 1.4.3.7 where the LuaJIT include path was still pointing to luajit-2.0/ rather than luajit-2.1/.

Thanks Tor Hveem and lhmwzy for the report! See http://openresty.org/#Download

Best regards,
-agentzh

From lhmwzy at gmail.com Sun Dec 15 06:47:46 2013
From: lhmwzy at gmail.com (lhmwzy)
Date: Sun, 15 Dec 2013 14:47:46 +0800
Subject: [ANN] ngx_openresty mainline version 1.4.3.7 released
In-Reply-To: References: Message-ID:

Running ./configure produces the following warnings:

lib_jit.c: In function 'lj_cf_jit_profile_stop':
lib_jit.c:594: warning: passing argument 2 of 'setlightudV' discards qualifiers from pointer target type
lib_jit.c:596: warning: passing argument 2 of 'setlightudV' discards qualifiers from pointer target type
lib_jit.c: In function 'jit_profile_callback':
lib_jit.c:550: warning: passing argument 2 of 'setlightudV' discards qualifiers from pointer target type
lib_jit.c: In function 'lj_cf_jit_profile_start':
lib_jit.c:577: warning: passing argument 2 of 'setlightudV' discards qualifiers from pointer target type
lib_jit.c:579: warning: passing argument 2 of 'setlightudV' discards qualifiers from pointer target type

After changing lines 542-543 of \bundle\LuaJIT-2.1-20131211\src\lib_jit.c from:

static const char KEY_PROFILE_THREAD = 't';
static const char KEY_PROFILE_FUNC = 'f';

to:

static char KEY_PROFILE_THREAD = 't';
static char KEY_PROFILE_FUNC = 'f';

configure no longer emits the warnings. Is this change correct?

From nginx-forum at nginx.us Sun Dec 15 10:01:25 2013
From: nginx-forum at nginx.us (Larry)
Date: Sun, 15 Dec 2013 05:01:25 -0500
Subject: Proxy_cache or direct static files ?
Message-ID: <7835dbb2d0d99a632a5d3430b0b3bfac.NginxMailingListEnglish@forum.nginx.org>

Hello,

I don't quite understand what I could get from caching with proxy_cache vs serving static files directly.
Everywhere people tend to say that it is better to cache, but isn't caching the same as serving directly from a static file?

Say that I serve home.html from a plain static HTML file, would I get any benefit from using a reverse proxy + cache to serve it?

Thanks,

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245544,245544#msg-245544

From nginx-forum at nginx.us Sun Dec 15 15:21:14 2013
From: nginx-forum at nginx.us (nginx14)
Date: Sun, 15 Dec 2013 10:21:14 -0500
Subject: hello
Message-ID:

Hello, I have an nginx server enabled, and I have a file called htaccess. The file should allow access only from Israel. But the file does not work; I realized that this is related to nginx. What can I do?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245545,245545#msg-245545

From contact at jpluscplusm.com Sun Dec 15 15:24:17 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Sun, 15 Dec 2013 15:24:17 +0000
Subject: hello
In-Reply-To: References: Message-ID:

On 15 December 2013 15:21, nginx14 wrote:
> Hello, I have an nginx server enabled, and I have a file called htaccess.
> The file should allow access only from Israel.
>
> But the file does not work;
> I realized that this is related to nginx.

Nginx does not use htaccess files at all. You'll need to work out how to express the Apache (htaccess file) config in nginx's config syntax.

From ilan at time4learning.com Sun Dec 15 15:25:28 2013
From: ilan at time4learning.com (Ilan Berkner)
Date: Sun, 15 Dec 2013 10:25:28 -0500
Subject: hello
In-Reply-To: References: Message-ID:

Nginx does not use the .htaccess file, only Apache does. To prevent all users from accessing your site, except those coming from Israel, you can use the MaxMind GeoIP database in conjunction with an if statement in the Nginx configuration file. See http://eng.eelcowesemann.nl/linux-unix-android/nginx/nginx-blocking/ for an example.
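A minimal sketch of that GeoIP approach (untested; it assumes nginx was built with the stock ngx_http_geoip_module, and the database path is a placeholder you would replace with your own):

```nginx
http {
    # MaxMind country database; path varies by distribution
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    # Decide once, in a map, instead of scattering country checks around
    map $geoip_country_code $deny_country {
        default 1;
        IL      0;   # allow Israel only
    }

    server {
        listen 80;

        location / {
            if ($deny_country) {
                return 403;
            }
            # ... normal content handling ...
        }
    }
}
```

Keeping the decision in a map and using "if" only for a bare return is the form generally considered safe, despite the usual caveats about "if".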
Note that use of the "if" statement is not considered a great idea, see: http://wiki.nginx.org/IfIsEvil. Ilan On Sun, Dec 15, 2013 at 10:21 AM, nginx14 wrote: > Hello, I have a nginx server is enabled. > And I have a file called htaccess > The file should cause to allowing accessed only from Israel. > > But the file does not work, > I realized that it is related to nginx only. > > What can you do? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,245545,245545#msg-245545 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ilan Berkner Chief Technology Officer Time4Learning.com 6300 NE 1st Ave., Suite 203 Ft. Lauderdale, FL 33334 (954) 771-0914 Time4Learning.com - Online interactive curriculum for home use, PreK-8th Grade. Time4Writing.com - Online writing tutorials for high, middle, and elementary school students. Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more. spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students. -------------- next part -------------- An HTML attachment was scrubbed... URL: From noloader at gmail.com Sun Dec 15 21:31:08 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Sun, 15 Dec 2013 16:31:08 -0500 Subject: ngx_conf_t args count Message-ID: >From Miller's http://www.evanmiller.org/nginx-modules-guide.html, section 5.2: ngx_http_upstream_hash(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) { ngx_http_upstream_srv_conf_t *uscf; ngx_http_script_compile_t sc; ngx_str_t *value; ngx_array_t *vars_lengths, *vars_values; value = cf->args->elts; /* the following is necessary to evaluate the argument to "hash" as a $variable */ ngx_memzero(&sc, sizeof(ngx_http_script_compile_t)); vars_lengths = NULL; vars_values = NULL; sc.cf = cf; sc.source = &value[1]; ... } How does one know that value[1] is valid? Shouldn't cf->args->nelts be checked first? 
Or does ngx_conf_t always have at least two options? Related: why was value[0] not chosen? Jeff From nginx-forum at nginx.us Mon Dec 16 01:43:52 2013 From: nginx-forum at nginx.us (justin) Date: Sun, 15 Dec 2013 20:43:52 -0500 Subject: SSL OCSP stapling won't enable In-Reply-To: <52AD07E2.4000208@swsystem.co.uk> References: <52AD07E2.4000208@swsystem.co.uk> Message-ID: <2481f105fe56a9faf49eb511f207e3f8.NginxMailingListEnglish@forum.nginx.org> Steve, Yeah, I am getting OCSP response: no response sent. Should I try ssl_stapling_verify off; Any other ideas? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245528,245549#msg-245549 From ru at nginx.com Mon Dec 16 07:15:24 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 16 Dec 2013 11:15:24 +0400 Subject: ngx_conf_t args count In-Reply-To: References: Message-ID: <20131216071524.GM74021@lo0.su> On Sun, Dec 15, 2013 at 04:31:08PM -0500, Jeffrey Walton wrote: > From Miller's http://www.evanmiller.org/nginx-modules-guide.html, section 5.2: > > ngx_http_upstream_hash(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) > { > ngx_http_upstream_srv_conf_t *uscf; > ngx_http_script_compile_t sc; > ngx_str_t *value; > ngx_array_t *vars_lengths, *vars_values; > > value = cf->args->elts; > > /* the following is necessary to evaluate the argument to "hash" > as a $variable */ > ngx_memzero(&sc, sizeof(ngx_http_script_compile_t)); > > vars_lengths = NULL; > vars_values = NULL; > > sc.cf = cf; > sc.source = &value[1]; > ... > } > > How does one know that value[1] is valid? Shouldn't cf->args->nelts be > checked first? Or does ngx_conf_t always have at least two options? > Related: why was value[0] not chosen? value[0] is the directive name, "hash" in this case. It's like argv[] in main(). The "hash" directive can be specified in the "upstream" context (NGX_HTTP_UPS_CONF) and it takes exactly one argument (NGX_CONF_TAKE1). The generic configuration parser code ensures that the argument exists. 
http://www.evanmiller.org/nginx-modules-guide.html#directives From noloader at gmail.com Mon Dec 16 07:27:22 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Mon, 16 Dec 2013 02:27:22 -0500 Subject: ngx_conf_t args count In-Reply-To: <20131216071524.GM74021@lo0.su> References: <20131216071524.GM74021@lo0.su> Message-ID: On Mon, Dec 16, 2013 at 2:15 AM, Ruslan Ermilov wrote: > On Sun, Dec 15, 2013 at 04:31:08PM -0500, Jeffrey Walton wrote: >> From Miller's http://www.evanmiller.org/nginx-modules-guide.html, section 5.2: >> >> ngx_http_upstream_hash(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) >> { >> ngx_http_upstream_srv_conf_t *uscf; >> ngx_http_script_compile_t sc; >> ngx_str_t *value; >> ngx_array_t *vars_lengths, *vars_values; >> >> value = cf->args->elts; >> >> /* the following is necessary to evaluate the argument to "hash" >> as a $variable */ >> ngx_memzero(&sc, sizeof(ngx_http_script_compile_t)); >> >> vars_lengths = NULL; >> vars_values = NULL; >> >> sc.cf = cf; >> sc.source = &value[1]; >> ... >> } >> >> How does one know that value[1] is valid? Shouldn't cf->args->nelts be >> checked first? Or does ngx_conf_t always have at least two options? >> Related: why was value[0] not chosen? > > value[0] is the directive name, "hash" in this case. It's > like argv[] in main(). > > The "hash" directive can be specified in the "upstream" context > (NGX_HTTP_UPS_CONF) and it takes exactly one argument (NGX_CONF_TAKE1). > The generic configuration parser code ensures that the argument > exists. > > http://www.evanmiller.org/nginx-modules-guide.html#directives Ah, thanks. It makes perfect sense. 
Jeff

From me at ptylr.com Mon Dec 16 09:22:25 2013
From: me at ptylr.com (Paul Taylor)
Date: Mon, 16 Dec 2013 09:22:25 +0000
Subject: sent_http_HEADER Volatile under Nginx 1.2.4
In-Reply-To: <20131213163141.GX95113@mdounin.ru>
References: <2859EC08-D88C-4434-A0BD-101527357928@ptylr.com> <20131213163141.GX95113@mdounin.ru>
Message-ID: <0205F5C7-5C5B-4091-8E43-9D510877B8E6@ptylr.com>

Yup, again, you're right! I've moved the config around, so that I'm testing for any 'true' value in the proxy_no_cache & proxy_cache_bypass directives (removing the existing set_by_lua block).

However, it's still not behaving as I'd expect.

In the following scenario (note comments):

map $upstream_http_x_no_cache $no_cache_header {
    ""      0;
    default 1;
}

proxy_cache_bypass $no_cache_dirs $logged_in; # $no_cache_header;
proxy_no_cache $no_cache_dirs $logged_in; # $no_cache_header;

X-Cache-Status value is MISS, which is correct. Output of $no_cache_header is 1 (as set in the map).

However, when adding back in the compare on $no_cache_header:

proxy_cache_bypass $no_cache_dirs $logged_in $no_cache_header;
proxy_no_cache $no_cache_dirs $logged_in $no_cache_header;

X-Cache-Status value is still MISS, which is not correct, as it should be BYPASS. Output of $no_cache_header is 0.

Unless I'm missing something, it still looks like touching the variable kills it?

Thanks again,

Paul

On 13 Dec 2013, at 16:31, Maxim Dounin wrote:
> Hello!
>
> On Thu, Dec 12, 2013 at 11:36:21PM +0000, Paul Taylor wrote:
>
>> Hi Maxim,
>> Thanks for your response. You're right! Using the map did work
>> (I thought I'd tried that, but must have been tired!).
>> So, now I have one other challenge, the value of $foo that you
>> define below is needed to identify whether to cache the response
>> or not.
>> The only issue is that I have a number of other
>> directives that I also need to add into the mix - therefore I
>> use the set_by_lua code to nest/combine OR within an if
>> statement... code below (I've kept the variable name as foo, so
>> it's clear which I'm referring to):
>>
>> map $upstream_http_x_no_cache $foo {
>>     ""      0;
>>     default 1;
>> }
>>
>> set_by_lua $bypass_cache '
>>     local no_cache_dirs = tonumber(ngx.var.no_cache_dirs) or 0
>>     local logged_in = tonumber(ngx.var.logged_in) or 0
>>     local no_cache_header = tonumber(ngx.var.foo) or 0
>>
>>     if ((no_cache_dirs == 1) or (no_cache_header == 1) or (logged_in == 1)) then
>>         return 1;
>>     end
>>
>>     return 0;
>> ';
>>
>> Now when I make the Lua local variable declaration in order to
>> use it, the value of $upstream_http_x_no_cache is reset to 0,
>> even when it was set as 1 originally. If I comment out the line
>> declaring the local variable within the Lua call, it returns to
>> being a value of 1 again.
>> Am I getting the sequencing of events wrong again? Is there any
>> way that I can get the value of $upstream_http_x_no_cache into
>> this Lua block, or would I need to do it another way?
>
> Are you going to use the result in proxy_no_cache? If yes, you
> can just use multiple variables there, something like this should
> work:
>
> proxy_no_cache $upstream_http_x_no_cache
>                $no_cache_dirs
>                $logged_in;
>
> See here for details:
>
> http://nginx.org/r/proxy_no_cache
>
>> Thanks very much for your help so far Maxim.
>> Paul
>> __________________________________________________________________
>> Hello!
>>
>> On Thu, Dec 12, 2013 at 07:19:56PM +0000, Paul Taylor wrote:
>>
>>> I'm in the process of making some amendments to an environment,
>>> where my upstream servers are sending a custom header
>>> (X-No-Cache), which I need to detect and alter caching rules
>>> within the configuration.
>>>
>>> The custom header is visible within the output, and I can
>>> re-output it as another header through configuration (i.e.
>>> add_header X-Sent-No-Cache $sent_http_x_no_cache; ).
>>>
>>> However, as soon as I perform any type of testing of this custom
>>> header, it disappears.
>>>
>>> For example, if I was to perform a map on the custom header, try
>>> to set an Nginx variable to the value of the header, or test
>>> within an IF statement, any future call to this header is no
>>> longer possible. Additionally any setting or testing of the
>>> header fails.
>>
>> Both "set" and "if" directives you mentioned are executed _before_
>> a request is sent to upstream, and at this point there is no
>> X-No-Cache header in the response. Due to this, using the
>> $sent_http_x_no_cache variable in "set" or "if" will result in an
>> empty value, and this value will be cached for later use.
>>
>> It's not clear what you are trying to do so I can't advise any
>> further, but certainly using the $sent_http_x_no_cache variable in
>> "if" or "set" directives isn't going to work, and this is what
>> causes the behaviour you see.
>>
>> Just a map{} should work fine though - as long as you don't try to
>> call the map before the X-No-Cache header is actually available.
>> E.g., something like this should work fine:
>>
>> map $sent_http_x_no_cache $foo {
>>     ""      empty;
>>     default foo;
>> }
>>
>> add_header X-Foo $foo;
>>
>> It might also be a good idea to use the $upstream_http_x_no_cache
>> variable instead, see here:
>>
>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables
>>
>> --
>> Maxim Dounin
>> http://nginx.org/
>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Mon Dec 16 10:47:46 2013 From: nginx-forum at nginx.us (Larry) Date: Mon, 16 Dec 2013 05:47:46 -0500 Subject: Proxy_cache or direct static files ? In-Reply-To: <7835dbb2d0d99a632a5d3430b0b3bfac.NginxMailingListEnglish@forum.nginx.org> References: <7835dbb2d0d99a632a5d3430b0b3bfac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9f3729d68050cc70f94107f0d69492e3.NginxMailingListEnglish@forum.nginx.org> Did i understand something wrong ? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245544,245552#msg-245552 From mdounin at mdounin.ru Mon Dec 16 11:12:33 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Dec 2013 15:12:33 +0400 Subject: sent_http_HEADER Volatile under Nginx 1.2.4 In-Reply-To: <0205F5C7-5C5B-4091-8E43-9D510877B8E6@ptylr.com> References: <2859EC08-D88C-4434-A0BD-101527357928@ptylr.com> <20131213163141.GX95113@mdounin.ru> <0205F5C7-5C5B-4091-8E43-9D510877B8E6@ptylr.com> Message-ID: <20131216111233.GB95113@mdounin.ru> Hello! On Mon, Dec 16, 2013 at 09:22:25AM +0000, Paul Taylor wrote: > Yup, again, you?re right! I?ve moved the config around, so that I?m testing for any ?true? value in the proxy_no_cache & proxy_bypass_cache directives (removing the existing set_by_lua block). > > However, it?s still not behaving as I?d expect. > > In the following scenario (note comments): > > map $upstream_http_x_no_cache $no_cache_header { > "" 0; > default 1; > } > > proxy_cache_bypass $no_cache_dirs $logged_in; # $no_cache_header; > proxy_no_cache $no_cache_dirs $logged_in; # $no_cache_header; > > X-Cache-Status value is MISS, which is correct. Output of $no_cache_header is 1 (as set in the map). > > However, when adding back in the compare on $no_cache_header: > > proxy_cache_bypass $no_cache_dirs $logged_in $no_cache_header; > proxy_no_cache $no_cache_dirs $logged_in $no_cache_header; > > X-Cache-Status value is still MISS, which is not correct, as it should be BYPASS. 
Output of $no_cache_header is 0. > > Unless I?m missing something, it still looks like touching the variable kills it? The proxy_cache_bypass directive is expected to be checked before a request is sent to a backend - it is to control whether a request will be served from a cache or passed to a backend. That is, what you see is actually expected behaviour - there are no reasons X-Cache-Status to be BYPASS, and the cached $no_cache_header value to be different from 0. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Dec 16 11:41:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Dec 2013 15:41:25 +0400 Subject: Proxy_cache or direct static files ? In-Reply-To: <7835dbb2d0d99a632a5d3430b0b3bfac.NginxMailingListEnglish@forum.nginx.org> References: <7835dbb2d0d99a632a5d3430b0b3bfac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131216114124.GD95113@mdounin.ru> Hello! On Sun, Dec 15, 2013 at 05:01:25AM -0500, Larry wrote: > Hello, > > I don't quite understand what I could get from caching with proxy_cache vs > serving static files directly. > > Everywhere people tend to say that it is better to cache, but isn't caching > the same as serving directly from static file ? > > Say that I serve home.html from a plain static html file, would I get any > benefit to use reverse proxy + cache to serve it ? Caching is useful when you have some resource which is costly to generate (e.g., dynamic pages or remote resources). If you are serving static files which are already present on the same server, in most cases there are no reasons to use cache. In some rare cases it may be useful though, e.g., if you have some faster storage for cache. -- Maxim Dounin http://nginx.org/ From contact at jpluscplusm.com Mon Dec 16 11:43:23 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 16 Dec 2013 11:43:23 +0000 Subject: Proxy_cache or direct static files ? 
In-Reply-To: <9f3729d68050cc70f94107f0d69492e3.NginxMailingListEnglish@forum.nginx.org> References: <7835dbb2d0d99a632a5d3430b0b3bfac.NginxMailingListEnglish@forum.nginx.org> <9f3729d68050cc70f94107f0d69492e3.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 16 December 2013 10:47, Larry wrote > Did i understand something wrong ? Yes. Proxy cache is for storing the response of an upstream HTTP server whose requests you're proxying, so that you don't have to ask the potentially slow(er) upstream server the next time an identical request comes in. Serving static files from local disk is, well, for on-disk local assets. They're different concepts. Don't confuse them. Just recognise whichever one you're doing, and use the appropriate technique. From me at ptylr.com Mon Dec 16 12:38:59 2013 From: me at ptylr.com (Paul Taylor) Date: Mon, 16 Dec 2013 12:38:59 +0000 Subject: sent_http_HEADER Volatile under Nginx 1.2.4 In-Reply-To: <20131216111233.GB95113@mdounin.ru> References: <2859EC08-D88C-4434-A0BD-101527357928@ptylr.com> <20131213163141.GX95113@mdounin.ru> <0205F5C7-5C5B-4091-8E43-9D510877B8E6@ptylr.com> <20131216111233.GB95113@mdounin.ru> Message-ID: Hi Maxim, Ok, thanks for the clarification. So to confirm, we are looking for the value of the sent header from the upstream, to identify whether the content should be served from the cache, or the upstream. Does this therefore mean that the code that we have below, will check for the X-No-Cache header, and if present, will always render the content from the upstream (no cache), and that if not present, will enable the result to be cacheable? If so, and it is only the reporting of the X-Cache-Status value that is rendering a false value, then this will give us what we want? If not, then what suggestions would you have for caching only on the basis of this sent http header being present? Thanks again?nearly there ;) Paul On 16 Dec 2013, at 11:12, Maxim Dounin wrote: > Hello! 
> > On Mon, Dec 16, 2013 at 09:22:25AM +0000, Paul Taylor wrote: > >> Yup, again, you?re right! I?ve moved the config around, so that I?m testing for any ?true? value in the proxy_no_cache & proxy_bypass_cache directives (removing the existing set_by_lua block). >> >> However, it?s still not behaving as I?d expect. >> >> In the following scenario (note comments): >> >> map $upstream_http_x_no_cache $no_cache_header { >> "" 0; >> default 1; >> } >> >> proxy_cache_bypass $no_cache_dirs $logged_in; # $no_cache_header; >> proxy_no_cache $no_cache_dirs $logged_in; # $no_cache_header; >> >> X-Cache-Status value is MISS, which is correct. Output of $no_cache_header is 1 (as set in the map). >> >> However, when adding back in the compare on $no_cache_header: >> >> proxy_cache_bypass $no_cache_dirs $logged_in $no_cache_header; >> proxy_no_cache $no_cache_dirs $logged_in $no_cache_header; >> >> X-Cache-Status value is still MISS, which is not correct, as it should be BYPASS. Output of $no_cache_header is 0. >> >> Unless I?m missing something, it still looks like touching the variable kills it? > > The proxy_cache_bypass directive is expected to be checked before > a request is sent to a backend - it is to control whether a > request will be served from a cache or passed to a backend. > > That is, what you see is actually expected behaviour - there are > no reasons X-Cache-Status to be BYPASS, and the cached > $no_cache_header value to be different from 0. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From contact at jpluscplusm.com Mon Dec 16 12:45:22 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 16 Dec 2013 12:45:22 +0000 Subject: sent_http_HEADER Volatile under Nginx 1.2.4 In-Reply-To: References: <2859EC08-D88C-4434-A0BD-101527357928@ptylr.com> <20131213163141.GX95113@mdounin.ru> <0205F5C7-5C5B-4091-8E43-9D510877B8E6@ptylr.com> <20131216111233.GB95113@mdounin.ru> Message-ID: On 16 December 2013 12:38, Paul Taylor wrote: > Hi Maxim, > > Ok, thanks for the clarification. > > So to confirm, we are looking for the value of the sent header from the > upstream, to identify whether the content should be served from the cache, > or the upstream. Does this therefore mean that the code that we have below, > will check for the X-No-Cache header, and if present, will always render the > content from the upstream (no cache), and that if not present, will enable > the result to be cacheable? If so, and it is only the reporting of the > X-Cache-Status value that is rendering a false value, then this will give us > what we want? > > If not, then what suggestions would you have for caching only on the basis > of this sent http header being present? I may have missed in this thread why the earlier suggestion of proxy_no_cache $upstream_http_x_no_cache doesn't work for you. It does seem to meet your requirement of "caching only on the basis of this sent [== upstream?] http header being present" ... J From nginx-forum at nginx.us Mon Dec 16 13:08:31 2013 From: nginx-forum at nginx.us (kustodian) Date: Mon, 16 Dec 2013 08:08:31 -0500 Subject: No SPDY support in the official repository packages Message-ID: <4772f754820faaf6ca16c21ab9f87baa.NginxMailingListEnglish@forum.nginx.org> Hi, Nginx 1.4.0 added support for SPDY to the stable version, so my question is why is SPDY not enabled in the packages from the Nginx official repository? I'm explicitly talking about the CentOS packages, I haven't tried others.
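For reference, on an nginx binary that does include SPDY support (1.4.0 or later, built with the SPDY module against OpenSSL 1.0.1+), enabling it is a one-word flag on the listen directive. A minimal sketch; the server name and certificate paths are placeholders:

```nginx
server {
    # 'spdy' is accepted only when nginx was built with
    # --with-http_spdy_module and linked against OpenSSL 1.0.1+
    # (needed for NPN). Paths below are placeholders.
    listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```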
Regards, Strahinja Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245553,245553#msg-245553 From vbart at nginx.com Mon Dec 16 13:14:34 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 16 Dec 2013 17:14:34 +0400 Subject: No SPDY support in the official repository packages In-Reply-To: <4772f754820faaf6ca16c21ab9f87baa.NginxMailingListEnglish@forum.nginx.org> References: <4772f754820faaf6ca16c21ab9f87baa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6224078.WFQ0l5ZoG9@vbart-laptop> On Monday 16 December 2013 08:08:31 kustodian wrote: > Hi, > > Nginx 1.4.0 added support for SPDY to the stable version, so my question is > why is SPDY not enabled in the packages from the Nginx official repository? > > I'm explicitely talking about the Centos packages, I haven't tried others. > SPDY support is enabled for systems where OpenSSL 1.0.1+ is available. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon Dec 16 16:13:43 2013 From: nginx-forum at nginx.us (djlarsu) Date: Mon, 16 Dec 2013 11:13:43 -0500 Subject: SSL OCSP stapling won't enable In-Reply-To: References: Message-ID: <67b09fe2ddea91949531783a1ceffbb7.NginxMailingListEnglish@forum.nginx.org> This configuration is working for me. Perhaps nginx cannot verify the OCSP response with the bundle in /etc/pki/tls/certs/ca-bundle.trust.crt ? In my ssl_trusted_certificate file, I have these certificates, in order. C=US, O=The Go Daddy Group, Inc., OU=Go Daddy Class 2 Certification Authority C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., OU=http://certificates.godaddy.com/repository, CN=Go Daddy Secure Certification Authority/serialNumber=07969287 I put my file in http://pastebin.com/G10e4sRh for reference. Hope this helps! 
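A minimal stapling setup matching this advice might look like the following sketch; the chain file name is a placeholder, assumed to contain the intermediate certificate first and the root last, in the order listed above:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Server certificate and key; paths are placeholders.
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # Staple OCSP responses and verify them against the CA chain.
    # gd_chain.pem is a placeholder file holding the intermediate
    # and root certificates, intermediate first.
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/gd_chain.pem;

    # A resolver is needed so nginx can reach the OCSP responder.
    resolver 8.8.8.8;
}
```

If verification against the trusted file fails, nginx logs the reason and silently serves without a stapled response, which matches the "stapling won't enable" symptom in this thread.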
Ryanne Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245528,245574#msg-245574 From nginx-forum at nginx.us Mon Dec 16 16:22:59 2013 From: nginx-forum at nginx.us (djlarsu) Date: Mon, 16 Dec 2013 11:22:59 -0500 Subject: SSL OCSP stapling won't enable In-Reply-To: <67b09fe2ddea91949531783a1ceffbb7.NginxMailingListEnglish@forum.nginx.org> References: <67b09fe2ddea91949531783a1ceffbb7.NginxMailingListEnglish@forum.nginx.org> Message-ID: To add a bit more info, I see your site is using a Go Daddy G2 (SHA2) cert. In that case, here is the intermediate/root chain you'll want to use as ssl_trusted_certificate. C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., OU=http://certs.godaddy.com/repository/, CN=Go Daddy Secure Certificate Authority - G2 C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., CN=Go Daddy Root Certificate Authority - G2 http://pastebin.com/gnWDSQ8Z Ryanne Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245528,245594#msg-245594 From nginx-forum at nginx.us Mon Dec 16 19:26:43 2013 From: nginx-forum at nginx.us (justin) Date: Mon, 16 Dec 2013 14:26:43 -0500 Subject: SSL OCSP stapling won't enable In-Reply-To: References: <67b09fe2ddea91949531783a1ceffbb7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <365f00bcf9747216dbb94cfacad112ef.NginxMailingListEnglish@forum.nginx.org> Thanks so much, that worked perfectly using http://pastebin.com/gnWDSQ8Z. Danke! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245528,245598#msg-245598 From noloader at gmail.com Tue Dec 17 00:12:56 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Mon, 16 Dec 2013 19:12:56 -0500 Subject: checking for OpenSSL library ... not found Message-ID: checking for OpenSSL library ... not found ./auto/configure: error: SSL modules require the OpenSSL library. You can either do not enable the modules, or install the OpenSSL library into the system, or build the OpenSSL library statically from the source with nginx by using --with-openssl= option.
***** I believe OpenSSL is present (I just built it from sources): $ ls /usr/local/ssl/ bin certs include lib man misc openssl.cnf private $ ls /usr/local/ssl/lib/ engines libcrypto.a libssl.a pkgconfig ***** Here was my configure. $THIS_USER and $THIS_GROUP was set properly to my login and group. ./auto/configure --with-debug --with-http_ssl_module --prefix="$THIS_DIR/ac" --http-proxy-temp-path="$THIS_DIR/ac/temp" --user="$THIS_USER" --group="$THIS_GROUP" --with-cc-opt="-I/usr/local/ssl/include" --with-ld-opt="/usr/local/ssl/lib/libcrypto.a /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a ***** I believe --with-cc-opt and --with-ld-opt is the preferred (required?) way to do things for local/custom OpenSSL (http://mailman.nginx.org/pipermail/nginx/2010-April/019644.html). Does anything look out of place? Jeff From coderman at gmail.com Tue Dec 17 00:40:56 2013 From: coderman at gmail.com (coderman) Date: Mon, 16 Dec 2013 16:40:56 -0800 Subject: checking for OpenSSL library ... not found In-Reply-To: References: Message-ID: On Mon, Dec 16, 2013 at 4:12 PM, Jeffrey Walton wrote: > > checking for OpenSSL library ... not found > ... with nginx by using --with-openssl= option. > --with-openssl=/some/path/to/ssl/root works for me. try --with-openssl=/usr/local/ssl ? From katmai at keptprivate.com Tue Dec 17 02:29:09 2013 From: katmai at keptprivate.com (Stefanita Rares Dumitrescu) Date: Tue, 17 Dec 2013 03:29:09 +0100 Subject: checking for OpenSSL library ... not found In-Reply-To: References: Message-ID: <52AFB6F5.2060706@keptprivate.com> if you are using centos/fedora you need to install openssl-devel On 17/12/2013 01:40, coderman wrote: > On Mon, Dec 16, 2013 at 4:12 PM, Jeffrey Walton wrote: >> >> checking for OpenSSL library ... not found >> ... with nginx by using --with-openssl= option. >> > > --with-openssl=/some/path/to/ssl/root > > works for me. try --with-openssl=/usr/local/ssl ? 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From zellster at gmail.com Tue Dec 17 05:05:18 2013 From: zellster at gmail.com (Adam Zell) Date: Mon, 16 Dec 2013 21:05:18 -0800 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) Message-ID: FYI: http://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/ We started with a ~1800ms overhead for our TLS connection (nearly 5 extra RTTs); eliminated the extra certificate roundtrip after a nginx upgrade; cut another RTT by forcing a smaller record size; dropped an extra RTT from the TLS handshake thanks to TLS False Start. With all said and done, *our TTTFB is down to ~1560ms, which is exactly one roundtrip higher than a regular HTTP connection.* Now we're talking! -- Adam zellster at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Tue Dec 17 08:46:00 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 17 Dec 2013 09:46:00 +0100 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: Message-ID: Hi Adam, > FYI: http://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/ > > We started with a ~1800ms overhead for our TLS connection (nearly 5 > extra RTTs); eliminated the extra certificate roundtrip after a nginx > upgrade; cut another RTT by forcing a smaller record size; dropped an > extra RTT from the TLS handshake thanks to TLS False Start. With all > said and done, our TTTFB is down to ~1560ms, which is exactly one > roundtrip higher than a regular HTTP connection. Now we're talking! Thanks, this is very helpful. Are you trying to upstream the record size patch? What I don't get from your patch, it seems like you are hardcoding the buffer to 16384 bytes during handshake (line 570) and only later use a 1400 byte buffer (via NGX_SSL_BUFSIZE). Am I misunderstanding the patch/code?
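As a side note for readers on newer versions: the record-size tuning that the patch discussed here hardcodes was later exposed as a directive (ssl_buffer_size, available from nginx 1.5.9, default 16k), so something like the following sketch achieves the smaller-record behaviour without patching:

```nginx
server {
    listen 443 ssl;

    # Smaller TLS records reduce time-to-first-byte on cold connections
    # at some throughput cost; 1400 bytes roughly fits one TCP segment.
    # ssl_buffer_size exists from nginx 1.5.9 onwards; the default is 16k.
    ssl_buffer_size 1400;
}
```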
Thanks, Lukas From noloader at gmail.com Tue Dec 17 09:38:02 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 17 Dec 2013 04:38:02 -0500 Subject: checking for OpenSSL library ... not found In-Reply-To: <52AFB6F5.2060706@keptprivate.com> References: <52AFB6F5.2060706@keptprivate.com> Message-ID: > if you are using centos/fedora you need to install openssl-devel Thanks Stefanita. This is kind of weird in auto/lib/openssl/conf (the default case is no non-MS|non-Borland compilers): *) have=NGX_OPENSSL . auto/have have=NGX_SSL . auto/have CORE_INCS="$CORE_INCS $OPENSSL/.openssl/include" CORE_DEPS="$CORE_DEPS $OPENSSL/.openssl/include/openssl/ssl.h" CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libssl.a" CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libcrypto.a" CORE_LIBS="$CORE_LIBS $NGX_LIBDL" ;; My directory structure does not look like that since I built OpenSSL from sources and installed in /usr/local/ssl. The libs are in /usr/local/ssl/lib, and I don't believe there's a way to force that extra ".openssl" in front of it. Jeff On Mon, Dec 16, 2013 at 9:29 PM, Stefanita Rares Dumitrescu wrote: > if you are using centos/fedora you need to install openssl-devel > > > On 17/12/2013 01:40, coderman wrote: >> >> On Mon, Dec 16, 2013 at 4:12 PM, Jeffrey Walton >> wrote: >>> >>> >>> checking for OpenSSL library ... not found >>> ... with nginx by using --with-openssl= option. >>> >> >> >> --with-openssl=/some/path/to/ssl/root >> >> works for me. try --with-openssl=/usr/local/ssl ? From contact at jpluscplusm.com Tue Dec 17 09:48:36 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 17 Dec 2013 09:48:36 +0000 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: Message-ID: On 17 December 2013 08:46, Lukas Tribus wrote: > Hi Adam, > > Thanks, this is very helpful. Are you trying to upstream the record size > patch? 
> What I don't get from your patch, it seems like you are hardcoding the > buffer to 16384 bytes during handshake (line 570) and only later use a > 1400 byte buffer (via NGX_SSL_BUFSIZE). > > Am I misunderstanding the patch/code? I don't think Adam wrote the article or patch; Ilya Grigorik did. J From luky-37 at hotmail.com Tue Dec 17 09:52:07 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 17 Dec 2013 10:52:07 +0100 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: , , Message-ID: Hi, > On 17 December 2013 08:46, Lukas Tribus wrote: >> Hi Adam, >> >> Thanks, this is very helpful. Are you trying to upstream the record size >> patch? >> >> What I don't get from your patch, it seems like you are hardcoding the >> buffer to 16384 bytes during handshake (line 570) and only later use a >> 1400 byte buffer (via NGX_SSL_BUFSIZE). >> >> Am I misunderstanding the patch/code? > > I don't think Adam wrote the article or patch; Ilya Grigorik did. > > J Oops, right. Looping Ilya, perhaps he can comment. Thanks, Lukas From noloader at gmail.com Tue Dec 17 10:16:15 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 17 Dec 2013 05:16:15 -0500 Subject: checking for OpenSSL library ... not found In-Reply-To: References: Message-ID: So, looking into this more, it looks like the configure subsystem is not flexible enough to handle OpenSSL with customizations. The problem appears to be in auto/lib/conf, with some hard coded values around line 49: ngx_feature_path= ngx_feature_libs="-lssl -lcrypto" ngx_feature_test="SSL_library_init()" In my case, the path is not standard (i.e., /usr/include); and I do not desire just any libssl or libcrypto (i.e., I want a particular version of the static libraries from a certain location). I think the best way to proceed is to patch conf and then fix whatever problems may surface later.
I know OpenSSL's FIPS is likely going to be a problem when I have to use fipsld and incore for backpatching signatures, but that's par for the course. Does this sound reasonable? Or should I try to figure out a way to make "ngx_feature_path" and "ngx_feature_libs" more helpful? Jeff On Mon, Dec 16, 2013 at 7:12 PM, Jeffrey Walton wrote: > > checking for OpenSSL library ... not found > > ./auto/configure: error: SSL modules require the OpenSSL library. > You can either do not enable the modules, or install the OpenSSL library > into the system, or build the OpenSSL library statically from the source > with nginx by using --with-openssl= option. > > > ***** > > I believe OpenSSL is present (I just built it from sources): > > $ ls /usr/local/ssl/ > bin certs include lib man misc openssl.cnf private > > $ ls /usr/local/ssl/lib/ > engines libcrypto.a libssl.a pkgconfig > > ***** > > Here was my configure. $THIS_USER and $THIS_GROUP was set properly to > my login and group. > > ./auto/configure --with-debug --with-http_ssl_module > --prefix="$THIS_DIR/ac" --http-proxy-temp-path="$THIS_DIR/ac/temp" > --user="$THIS_USER" --group="$THIS_GROUP" > --with-cc-opt="-I/usr/local/ssl/include" > --with-ld-opt="/usr/local/ssl/lib/libcrypto.a > /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a > > ***** > > I believe --with-cc-opt and --with-ld-opt is the preferred (required?) > way to do things for local/custom OpenSSL > (http://mailman.nginx.org/pipermail/nginx/2010-April/019644.html). > > Does anything look out of place? From maxxer at ufficyo.com Tue Dec 17 11:26:16 2013 From: maxxer at ufficyo.com (Lorenzo Milesi) Date: Tue, 17 Dec 2013 12:26:16 +0100 (CET) Subject: Override index.php for a subdirectory In-Reply-To: <1890925840.5783.1387279498724.JavaMail.zimbra@yetopen.it> Message-ID: <1636735918.5794.1387279576738.JavaMail.zimbra@yetopen.it> Hi. I need to override default index file for a subdirectory only. 
My actual config (pretty much ubuntu's default): server { listen 80 default_server; root /var/www; index index.php index.html index.htm; server_name localhost; location / { try_files $uri $uri/ /index.html; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; } location ~ /\.ht { deny all; } } Then in a second file I added: server { location ~ /work/management_site/ { fastcgi_index index-maxxer.php; index index-maxxer.php; try_files $uri index-maxxer.php?$args; set $fsn "/index-maxxer.php"; fastcgi_param SCRIPT_FILENAME $document_root$fsn; } } But doesn't work. How can I accomplish that? thanks -- Lorenzo Milesi - lorenzo.milesi at yetopen.it YetOpen S.r.l. - http://www.yetopen.it/ From noloader at gmail.com Tue Dec 17 11:45:03 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 17 Dec 2013 06:45:03 -0500 Subject: checking for OpenSSL library ... not found In-Reply-To: References: Message-ID: Any comments on this patch before it gets offered to Trac? The patch allows a developer to specify OpenSSL include and library directories through NGX_CONF_OPENSSL_INC and NGX_CONF_OPENSSL_LIB. The developer must export them for the new functionality. If NGX_CONF_OPENSSL_INC and NGX_CONF_OPENSSL_LIB are present, they get tested and added to the configuration upon success. If not present or config failure, then config falls back to the original test. NGX_CONF_OPENSSL_LIB is especially important because nginx assumes dynamic linking is OK via '-lssl' and '-lcrypto'. A developer is free to use them, or he/she can specify the exact library code they want (e.g., /usr/local/ssl/lib/libssla.). Tested OK on Fedora 19 and Ubuntu 13.04. On Mon, Dec 16, 2013 at 7:12 PM, Jeffrey Walton wrote: > > checking for OpenSSL library ... not found > > ./auto/configure: error: SSL modules require the OpenSSL library. 
> You can either do not enable the modules, or install the OpenSSL library > into the system, or build the OpenSSL library statically from the source > with nginx by using --with-openssl= option. > > > ***** > > I believe OpenSSL is present (I just built it from sources): > > $ ls /usr/local/ssl/ > bin certs include lib man misc openssl.cnf private > > $ ls /usr/local/ssl/lib/ > engines libcrypto.a libssl.a pkgconfig > > ***** > > Here was my configure. $THIS_USER and $THIS_GROUP was set properly to > my login and group. > > ./auto/configure --with-debug --with-http_ssl_module > --prefix="$THIS_DIR/ac" --http-proxy-temp-path="$THIS_DIR/ac/temp" > --user="$THIS_USER" --group="$THIS_GROUP" > --with-cc-opt="-I/usr/local/ssl/include" > --with-ld-opt="/usr/local/ssl/lib/libcrypto.a > /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a > > ***** > > I believe --with-cc-opt and --with-ld-opt is the preferred (required?) > way to do things for local/custom OpenSSL > (http://mailman.nginx.org/pipermail/nginx/2010-April/019644.html). > > Does anything look out of place? > > Jeff -------------- next part -------------- diff -r 7e9543faf5f0 auto/lib/openssl/conf --- a/auto/lib/openssl/conf Tue Nov 19 15:25:24 2013 +0400 +++ b/auto/lib/openssl/conf Tue Dec 17 06:36:21 2013 -0500 @@ -42,19 +42,56 @@ OPENSSL=NO - ngx_feature="OpenSSL library" - ngx_feature_name="NGX_OPENSSL" - ngx_feature_run=no - ngx_feature_incs="#include " - ngx_feature_path= - ngx_feature_libs="-lssl -lcrypto" - ngx_feature_test="SSL_library_init()" - . auto/feature + # First, test if a dev has specified an OpenSSL from a non-standard location. + # The include should be exported by the developer in NGX_CONF_OPENSSL_INC; + # and the libraries should be exported by the developer in + # NGX_CONF_OPENSSL_LIB. NOTE: nginx does not set NGX_CONF_OPENSSL_INC or + # NGX_CONF_OPENSSL_LIB; it consumes them if they are set.
+ # + # NOTE: on Red Hat and Fedora, be sure the NGX_CONF_OPENSSL_LIB includes + # '-ldl' for dlopen and friends during configuration testing. + if [ ! -z "$NGX_CONF_OPENSSL_INC" ] || [ ! -z "$NGX_CONF_OPENSSL_LIB" ]; then - if [ $ngx_found = yes ]; then - have=NGX_SSL . auto/have - CORE_LIBS="$CORE_LIBS $ngx_feature_libs $NGX_LIBDL" - OPENSSL=YES + ngx_feature="OpenSSL library" + ngx_feature_name="NGX_OPENSSL" + ngx_feature_run=no + ngx_feature_incs="#include " + ngx_feature_path="$NGX_CONF_OPENSSL_INC" + ngx_feature_libs="$NGX_CONF_OPENSSL_LIB" + ngx_feature_test="SSL_library_init()" + . auto/feature + + if [ $ngx_found = yes ]; then + have=NGX_SSL . auto/have + CORE_INCS="$CORE_INCS $ngx_feature_path" + CORE_LIBS="$CORE_LIBS $ngx_feature_libs $NGX_LIBDL" + OPENSSL=YES + fi + fi + + # Second, perform the original test. The original test is somewhat limited + # because it makes certain assumptions. The assumptions include particular + # locations for components and that a dev is OK with linking to shared objects. + # Assuming shared object linking is bad on platforms like Ubuntu 12.04 and + # Ubuntu 12.10 because Ubuntu disabled TLSv1.1 and TLSv1.2 and refuses to + # enable it due to [years old] concern over interoperability. (Ubuntu 12.04 + # is LTS and it will be available until 2017). + if [ $OPENSSL != YES ]; then + + ngx_feature="OpenSSL library" + ngx_feature_name="NGX_OPENSSL" + ngx_feature_run=no + ngx_feature_incs="#include " + ngx_feature_path= + ngx_feature_libs="-lssl -lcrypto" + ngx_feature_test="SSL_library_init()" + . auto/feature + + if [ $ngx_found = yes ]; then + have=NGX_SSL . auto/have + CORE_LIBS="$CORE_LIBS $ngx_feature_libs $NGX_LIBDL" + OPENSSL=YES + fi fi fi @@ -70,5 +107,5 @@ END exit 1 fi +fi -fi From mdounin at mdounin.ru Tue Dec 17 12:02:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Dec 2013 16:02:53 +0400 Subject: checking for OpenSSL library ...
not found In-Reply-To: References: Message-ID: <20131217120253.GK95113@mdounin.ru> Hello! On Mon, Dec 16, 2013 at 07:12:56PM -0500, Jeffrey Walton wrote: > > checking for OpenSSL library ... not found > > ./auto/configure: error: SSL modules require the OpenSSL library. > You can either do not enable the modules, or install the OpenSSL library > into the system, or build the OpenSSL library statically from the source > with nginx by using --with-openssl= option. > > > ***** > > I believe OpenSSL is present (I just built it from sources): > > $ ls /usr/local/ssl/ > bin certs include lib man misc openssl.cnf private > > $ ls /usr/local/ssl/lib/ > engines libcrypto.a libssl.a pkgconfig > > ***** > > Here was my configure. $THIS_USER and $THIS_GROUP was set properly to > my login and group. > > ./auto/configure --with-debug --with-http_ssl_module > --prefix="$THIS_DIR/ac" --http-proxy-temp-path="$THIS_DIR/ac/temp" > --user="$THIS_USER" --group="$THIS_GROUP" > --with-cc-opt="-I/usr/local/ssl/include" > --with-ld-opt="/usr/local/ssl/lib/libcrypto.a > /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a > > ***** > > I believe --with-cc-opt and --with-ld-opt is the preferred (required?) > way to do things for local/custom OpenSSL > (http://mailman.nginx.org/pipermail/nginx/2010-April/019644.html). > > Does anything look out of place? You are trying to explicitly specify library files to be loaded on all ld invocations. This is wrong. Instead, you should specify a _path_ to load libraries from, much like you did with include directories. Something like this should work for you: ./configure --with-cc-opt="-I/usr/local/ssl/include" \ --with-ld-opt="-L/usr/local/ssl/lib" -- Maxim Dounin http://nginx.org/ From noloader at gmail.com Tue Dec 17 12:17:47 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 17 Dec 2013 07:17:47 -0500 Subject: checking for OpenSSL library ... 
not found In-Reply-To: <20131217120253.GK95113@mdounin.ru> References: <20131217120253.GK95113@mdounin.ru> Message-ID: On Tue, Dec 17, 2013 at 7:02 AM, Maxim Dounin wrote: > Hello! > > On Mon, Dec 16, 2013 at 07:12:56PM -0500, Jeffrey Walton wrote: > >> >> checking for OpenSSL library ... not found >> >> ./auto/configure: error: SSL modules require the OpenSSL library. >> You can either do not enable the modules, or install the OpenSSL library >> into the system, or build the OpenSSL library statically from the source >> with nginx by using --with-openssl= option. >> >> >> ***** >> >> I believe OpenSSL is present (I just built it from sources): >> >> $ ls /usr/local/ssl/ >> bin certs include lib man misc openssl.cnf private >> >> $ ls /usr/local/ssl/lib/ >> engines libcrypto.a libssl.a pkgconfig >> >> ***** >> >> Here was my configure. $THIS_USER and $THIS_GROUP was set properly to >> my login and group. >> >> ./auto/configure --with-debug --with-http_ssl_module >> --prefix="$THIS_DIR/ac" --http-proxy-temp-path="$THIS_DIR/ac/temp" >> --user="$THIS_USER" --group="$THIS_GROUP" >> --with-cc-opt="-I/usr/local/ssl/include" >> --with-ld-opt="/usr/local/ssl/lib/libcrypto.a >> /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a >> >> ***** >> >> I believe --with-cc-opt and --with-ld-opt is the preferred (required?) >> way to do things for local/custom OpenSSL >> (http://mailman.nginx.org/pipermail/nginx/2010-April/019644.html). >> >> Does anything look out of place? > > You are trying to explicitly specify library files to be loaded on > all ld invocations. This is wrong. Instead, you should specify a > _path_ to load libraries from, much like you did with include > directories. > > Something like this should work for you: > > ./configure --with-cc-opt="-I/usr/local/ssl/include" \ > --with-ld-opt="-L/usr/local/ssl/lib" Thanks Maxim. This did not work, and its the reason I moved to envars to fully specify everything. 
***** $ auto/configure --with-cc-opt="-I/usr/local/ssl/include" --with-ld-opt="-L/usr/local/ssl/lib -ldl" checking for OS + Linux 3.11.10-200.fc19.x86_64 x86_64 checking for C compiler ... found + using GNU C compiler ... Configuration summary + using system PCRE library + OpenSSL library is not used + md5: using system crypto library + sha1: using system crypto library + using system zlib library ***** $ auto/configure --with-cc-opt="-I/usr/local/ssl/include" --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. /usr/local/ssl/lib/libcrypto.a" checking for OS + Linux 3.11.10-200.fc19.x86_64 x86_64 ... checking for --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. /usr/local/ssl/lib/libcrypto.a" ... not found auto/configure: error: the invalid value in --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. /usr/local/ssl/lib/libcrypto.a" ***** Any thoughts on how to proceed? Jeff From richard at kearsley.me Tue Dec 17 12:58:43 2013 From: richard at kearsley.me (Richard Kearsley) Date: Tue, 17 Dec 2013 12:58:43 +0000 Subject: gzip proxy query Message-ID: <52B04A83.6060404@kearsley.me> Hi If 'gzip off;' on front-end but a proxy_pass to backend gives a gzipped response, will the front-end decompress it before proxy to client? Cheers Richard From mdounin at mdounin.ru Tue Dec 17 13:01:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Dec 2013 17:01:47 +0400 Subject: checking for OpenSSL library ... not found In-Reply-To: References: <20131217120253.GK95113@mdounin.ru> Message-ID: <20131217130147.GL95113@mdounin.ru> Hello! On Tue, Dec 17, 2013 at 07:17:47AM -0500, Jeffrey Walton wrote: > On Tue, Dec 17, 2013 at 7:02 AM, Maxim Dounin wrote: > > Hello! > > > > On Mon, Dec 16, 2013 at 07:12:56PM -0500, Jeffrey Walton wrote: > > > >> > >> checking for OpenSSL library ... not found > >> > >> ./auto/configure: error: SSL modules require the OpenSSL library. 
> >> You can either do not enable the modules, or install the OpenSSL library > >> into the system, or build the OpenSSL library statically from the source > >> with nginx by using --with-openssl= option. > >> > >> > >> ***** > >> > >> I believe OpenSSL is present (I just built it from sources): > >> > >> $ ls /usr/local/ssl/ > >> bin certs include lib man misc openssl.cnf private > >> > >> $ ls /usr/local/ssl/lib/ > >> engines libcrypto.a libssl.a pkgconfig > >> > >> ***** > >> > >> Here was my configure. $THIS_USER and $THIS_GROUP was set properly to > >> my login and group. > >> > >> ./auto/configure --with-debug --with-http_ssl_module > >> --prefix="$THIS_DIR/ac" --http-proxy-temp-path="$THIS_DIR/ac/temp" > >> --user="$THIS_USER" --group="$THIS_GROUP" > >> --with-cc-opt="-I/usr/local/ssl/include" > >> --with-ld-opt="/usr/local/ssl/lib/libcrypto.a > >> /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a > >> > >> ***** > >> > >> I believe --with-cc-opt and --with-ld-opt is the preferred (required?) > >> way to do things for local/custom OpenSSL > >> (http://mailman.nginx.org/pipermail/nginx/2010-April/019644.html). > >> > >> Does anything look out of place? > > > > You are trying to explicitly specify library files to be loaded on > > all ld invocations. This is wrong. Instead, you should specify a > > _path_ to load libraries from, much like you did with include > > directories. > > > > Something like this should work for you: > > > > ./configure --with-cc-opt="-I/usr/local/ssl/include" \ > > --with-ld-opt="-L/usr/local/ssl/lib" > Thanks Maxim. This did not work, and its the reason I moved to envars > to fully specify everything. > > ***** > > $ auto/configure --with-cc-opt="-I/usr/local/ssl/include" > --with-ld-opt="-L/usr/local/ssl/lib -ldl" > checking for OS > + Linux 3.11.10-200.fc19.x86_64 x86_64 > checking for C compiler ... found > + using GNU C compiler > ... 
> > Configuration summary > + using system PCRE library > + OpenSSL library is not used > + md5: using system crypto library > + sha1: using system crypto library > + using system zlib library > > ***** > > $ auto/configure --with-cc-opt="-I/usr/local/ssl/include" > --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. > /usr/local/ssl/lib/libcrypto.a" > checking for OS > + Linux 3.11.10-200.fc19.x86_64 x86_64 > ... > > checking for --with-ld-opt="-L/usr/local/ssl/lib -ldl > /usr/local/ssl/lib/libssla. /usr/local/ssl/lib/libcrypto.a" ... not > found > auto/configure: error: the invalid value in > --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. > /usr/local/ssl/lib/libcrypto.a" > > ***** > > Any thoughts on how to proceed? First of all, try fixing typo in your configure arguments. If it doesn't help, try looking into objs/autoconf.err. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Dec 17 13:04:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Dec 2013 17:04:16 +0400 Subject: gzip proxy query In-Reply-To: <52B04A83.6060404@kearsley.me> References: <52B04A83.6060404@kearsley.me> Message-ID: <20131217130416.GM95113@mdounin.ru> Hello! On Tue, Dec 17, 2013 at 12:58:43PM +0000, Richard Kearsley wrote: > Hi > If 'gzip off;' on front-end but a proxy_pass to backend gives a gzipped > response, will the front-end decompress it before proxy to client? No. But if you want nginx to decompress responses, there is gunzip module which can do it for you. http://nginx.org/en/docs/http/ngx_http_gunzip_module.html -- Maxim Dounin http://nginx.org/ From noloader at gmail.com Tue Dec 17 13:05:49 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 17 Dec 2013 08:05:49 -0500 Subject: checking for OpenSSL library ... 
not found In-Reply-To: <20131217130147.GL95113@mdounin.ru> References: <20131217120253.GK95113@mdounin.ru> <20131217130147.GL95113@mdounin.ru> Message-ID: On Tue, Dec 17, 2013 at 8:01 AM, Maxim Dounin wrote: > Hello! > > On Tue, Dec 17, 2013 at 07:17:47AM -0500, Jeffrey Walton wrote: > >> On Tue, Dec 17, 2013 at 7:02 AM, Maxim Dounin wrote: >> >... >> > >> > Something like this should work for you: >> > >> > ./configure --with-cc-opt="-I/usr/local/ssl/include" \ >> > --with-ld-opt="-L/usr/local/ssl/lib" >> Thanks Maxim. This did not work, and its the reason I moved to envars >> to fully specify everything. >> >> ***** >> >> $ auto/configure --with-cc-opt="-I/usr/local/ssl/include" >> --with-ld-opt="-L/usr/local/ssl/lib -ldl" >> checking for OS >> + Linux 3.11.10-200.fc19.x86_64 x86_64 >> checking for C compiler ... found >> + using GNU C compiler >> ... >> >> Configuration summary >> + using system PCRE library >> + OpenSSL library is not used >> + md5: using system crypto library >> + sha1: using system crypto library >> + using system zlib library >> >> ***** >> >> $ auto/configure --with-cc-opt="-I/usr/local/ssl/include" >> --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. >> /usr/local/ssl/lib/libcrypto.a" >> checking for OS >> + Linux 3.11.10-200.fc19.x86_64 x86_64 >> ... >> >> checking for --with-ld-opt="-L/usr/local/ssl/lib -ldl >> /usr/local/ssl/lib/libssla. /usr/local/ssl/lib/libcrypto.a" ... not >> found >> auto/configure: error: the invalid value in >> --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. >> /usr/local/ssl/lib/libcrypto.a" >> >> ***** >> >> Any thoughts on how to proceed? > > First of all, try fixing typo in your configure arguments. If it > doesn't help, try looking into objs/autoconf.err. You'll have to forgive my ignorance. I would not have made the typo if I was aware. Would you kindly point it out? 
From mdounin at mdounin.ru Tue Dec 17 13:07:57 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Dec 2013 17:07:57 +0400 Subject: checking for OpenSSL library ... not found In-Reply-To: References: <20131217120253.GK95113@mdounin.ru> <20131217130147.GL95113@mdounin.ru> Message-ID: <20131217130757.GN95113@mdounin.ru> Hello! On Tue, Dec 17, 2013 at 08:05:49AM -0500, Jeffrey Walton wrote: > On Tue, Dec 17, 2013 at 8:01 AM, Maxim Dounin wrote: > > Hello! > > > > On Tue, Dec 17, 2013 at 07:17:47AM -0500, Jeffrey Walton wrote: > > > >> On Tue, Dec 17, 2013 at 7:02 AM, Maxim Dounin wrote: > >> >... > >> > > >> > Something like this should work for you: > >> > > >> > ./configure --with-cc-opt="-I/usr/local/ssl/include" \ > >> > --with-ld-opt="-L/usr/local/ssl/lib" > >> Thanks Maxim. This did not work, and its the reason I moved to envars > >> to fully specify everything. > >> > >> ***** > >> > >> $ auto/configure --with-cc-opt="-I/usr/local/ssl/include" > >> --with-ld-opt="-L/usr/local/ssl/lib -ldl" > >> checking for OS > >> + Linux 3.11.10-200.fc19.x86_64 x86_64 > >> checking for C compiler ... found > >> + using GNU C compiler > >> ... > >> > >> Configuration summary > >> + using system PCRE library > >> + OpenSSL library is not used > >> + md5: using system crypto library > >> + sha1: using system crypto library > >> + using system zlib library > >> > >> ***** > >> > >> $ auto/configure --with-cc-opt="-I/usr/local/ssl/include" > >> --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. > >> /usr/local/ssl/lib/libcrypto.a" > >> checking for OS > >> + Linux 3.11.10-200.fc19.x86_64 x86_64 > >> ... > >> > >> checking for --with-ld-opt="-L/usr/local/ssl/lib -ldl > >> /usr/local/ssl/lib/libssla. /usr/local/ssl/lib/libcrypto.a" ... not > >> found > >> auto/configure: error: the invalid value in > >> --with-ld-opt="-L/usr/local/ssl/lib -ldl /usr/local/ssl/lib/libssla. 
> >> /usr/local/ssl/lib/libcrypto.a" > >> > >> ***** > >> > >> Any thoughts on how to proceed? > > > > First of all, try fixing typo in your configure arguments. If it > > doesn't help, try looking into objs/autoconf.err. > You'll have to forgive my ignorance. I would not have made the typo if > I was aware. Would you kindly point it out? The "libssla." is certainly wrong. -- Maxim Dounin http://nginx.org/ From richard at kearsley.me Tue Dec 17 13:12:40 2013 From: richard at kearsley.me (Richard Kearsley) Date: Tue, 17 Dec 2013 13:12:40 +0000 Subject: gzip proxy query In-Reply-To: <20131217130416.GM95113@mdounin.ru> References: <52B04A83.6060404@kearsley.me> <20131217130416.GM95113@mdounin.ru> Message-ID: <52B04DC8.2040400@kearsley.me> On 17/12/13 13:04, Maxim Dounin wrote: > Hello! > > On Tue, Dec 17, 2013 at 12:58:43PM +0000, Richard Kearsley wrote: > >> Hi >> If 'gzip off;' on front-end but a proxy_pass to backend gives a gzipped >> response, will the front-end decompress it before proxy to client? > No. > > But if you want nginx to decompress responses, there is gunzip > module which can do it for you. > > http://nginx.org/en/docs/http/ngx_http_gunzip_module.html > Many thanks The behaviour that I want is to keep it compressed so im happy :) From noloader at gmail.com Tue Dec 17 13:36:43 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 17 Dec 2013 08:36:43 -0500 Subject: checking for OpenSSL library ... not found In-Reply-To: <20131217130757.GN95113@mdounin.ru> References: <20131217120253.GK95113@mdounin.ru> <20131217130147.GL95113@mdounin.ru> <20131217130757.GN95113@mdounin.ru> Message-ID: On Tue, Dec 17, 2013 at 8:07 AM, Maxim Dounin wrote: > Hello! > > On Tue, Dec 17, 2013 at 08:05:49AM -0500, Jeffrey Walton wrote: > >> On Tue, Dec 17, 2013 at 8:01 AM, Maxim Dounin wrote: >> > ... >> >> >> >> Any thoughts on how to proceed? >> > >> > First of all, try fixing typo in your configure arguments. 
If it >> > doesn't help, try looking into objs/autoconf.err. >> You'll have to forgive my ignorance. I would not have made the typo if >> I was aware. Would you kindly point it out? > > The "libssla." is certainly wrong. Perfect, thanks. Looking back, I think the original problem was the lack of '-ldl' on Fedora. That caused autotest.c to fail. But output error output was silently swallowed, so I went down the wrong road. Thanks again. Jeff From mdounin at mdounin.ru Tue Dec 17 14:07:54 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Dec 2013 18:07:54 +0400 Subject: nginx-1.5.8 Message-ID: <20131217140754.GQ95113@mdounin.ru> Changes with nginx 1.5.8 17 Dec 2013 *) Feature: IPv6 support in resolver. *) Feature: the "listen" directive supports the "fastopen" parameter. Thanks to Mathew Rodley. *) Feature: SSL support in the ngx_http_uwsgi_module. Thanks to Roberto De Ioris. *) Feature: vim syntax highlighting scripts were added to contrib. Thanks to Evan Miller. *) Bugfix: a timeout might occur while reading client request body in an SSL connection using chunked transfer encoding. *) Bugfix: the "master_process" directive did not work correctly in nginx/Windows. *) Bugfix: the "setfib" parameter of the "listen" directive might not work. *) Bugfix: in the ngx_http_spdy_module. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Dec 17 17:18:00 2013 From: nginx-forum at nginx.us (hussan) Date: Tue, 17 Dec 2013 12:18:00 -0500 Subject: Proxy_pass remote nginx server Message-ID: Hi, i have 2 nginx server, one with my main site(www.site.com) and other nginx server with my blog(www.site2.com). 
My nginx server 1 have this configurarion: on location /blog/ i have a (proxy_pass) to blog on nginx server 2 *========== server { server_name www.site.com; root "/home/site/site.com"; index index.php; client_max_body_size 10m; access_log /home/site/_logs/access.log; error_log /home/site/_logs/error.log; location /blog/ { proxy_pass http://www.site2.com/; } location / { try_files $uri $uri/ /index.php$uri?$args; } location ~ "^(.+\.php)($|/)" { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param SERVER_NAME $host; fastcgi_read_timeout 120; fastcgi_pass unix:/var/run/site_fpm.sock; include fastcgi_params; } # location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { # expires max; # # log_not_found off; # access_log off; # } # location ~* \.(html|htm)$ { # expires 30m; # } location ~* /\.(ht|git|svn) { deny all; } } *========== Nginx server 2 (blog) config *========== server { server_name www.site2.com; root "/home/site2/www.site2.com"; index index.php; client_max_body_size 10m; access_log /home/site2/_logs/access.log; error_log /home/site2/_logs/error.log; location / { try_files $uri $uri/ /index.php$uri?$args; } location ~ "^(.+\.php)($|/)" { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param SERVER_NAME $host; fastcgi_read_timeout 120; fastcgi_pass unix:/var/run/site2_fpm.sock; include fastcgi_params; } location ~* /\.(ht|git|svn) { deny all; } } *========== When i try www.site.com/blog/ proxy_pass works, go to nginx server 2 , and my css/js are loaded fine. But when try www.site.com/blog/wp-admin/ is redirected to www.site.com/blog/wp-login.php (on server 1) 404 error(i dont have this file in server 1). how i can solve this? 
all www.site.com/blog/* traffic goes to proxy_pass on nginx server 2 www.site2.com/* ? thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245606,245606#msg-245606 From luky-37 at hotmail.com Tue Dec 17 20:19:12 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 17 Dec 2013 21:19:12 +0100 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: , , , Message-ID: Hi! >>> What I don't get from your patch, it seems like you are hardcoding the >>> buffer to 16384 bytes during handshake (line 570) and only later use a >>> 1400 byte buffer (via NGX_SSL_BUFSIZE). >>> >>> Am I misunderstanding the patch/code? > > It may well be the case that I'm misunderstanding it too :) ... The > intent is: > > - set maximum record size for application data to 1400 bytes. [1] > - allow the handshake to set/use the maximum 16KB bufsize to avoid extra > RTTs during tunnel negotiation. Ok, what I read from the patch and your intent is the same :) I was confused about the 16KB bufsize for the initial negotiation, but now I've read the bug report [1] and the patch [2] about the extra RTT when using long certificate chains, and I understand it. But I don't really get the openssl documentation about this [3]: > The initial buffer size is DEFAULT_BUFFER_SIZE, currently 4096. Any attempt > to reduce the buffer size below DEFAULT_BUFFER_SIZE is ignored. In other words this would mean we cannot set the buffer size below 4096, but you are doing exactly this, by setting the buffer size to 1400 bytes. Also, your measurements indicate success, so it looks like this statement in the openssl documentation is wrong? Or does setting the buffer size to 1400 "just" reset it from 16KB to 4KB and that's the improvement you see in your measurement? > I think that's a minimum patchset that would significantly improve > performance over current defaults. From there, I'd like to see several > improvements.
For example, either (a) provide a way to configure the > default record size via a config flag (not a recompile, that's a deal > breaker for most), or, (b) implement a smarter strategy where each session > begins with small record size (1400 bytes) but grows its record size as the > connection gets older -- this allows us to eliminate unnecessary buffering > latency at the beginning when TCP cwnd is low, and then decrease the > framing overhead (i.e. go back to 16KB records) once the connection is > warmed up. > > P.S. (b) would be much better, even if takes a bit more work. Well, I'm not sure (b) its so easy, nginx would need to understand whether there is bulk or interactive traffic. Such heuristics may backfire in more complex scenarios. But setting an optimal buffer size for pre- and post-handshake seems to be a good compromise and 'upstream-able'. I suspect that haproxy suffers from the same problem with an extra RTT when using a small tune.ssl.maxrecord value. I will see if I can reproduce this. Thanks for clarifying, Lukas [1] http://trac.nginx.org/nginx/ticket/413 [2] http://hg.nginx.org/nginx/rev/a720f0b0e083 [3] https://www.openssl.org/docs/crypto/BIO_f_buffer.html From francis at daoine.org Tue Dec 17 20:34:51 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 17 Dec 2013 20:34:51 +0000 Subject: Proxy_pass remote nginx server In-Reply-To: References: Message-ID: <20131217203451.GQ21047@craic.sysops.org> On Tue, Dec 17, 2013 at 12:18:00PM -0500, hussan wrote: Hi there, > location /blog/ { > location / { > location ~ "^(.+\.php)($|/)" { > location ~* /\.(ht|git|svn) { > When i try www.site.com/blog/ proxy_pass works, go to nginx server 2 , and > my css/js are loaded fine. But when try www.site.com/blog/wp-admin/ is > redirected to www.site.com/blog/wp-login.php (on server 1) 404 error(i dont > have this file in server 1). > > how i can solve this? all www.site.com/blog/* trafic go to proxy_pass on > nginx server 2 www.site2.com/* ? 
http://nginx.org/r/location You want ^~ f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Dec 17 20:42:31 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 17 Dec 2013 20:42:31 +0000 Subject: alias In-Reply-To: References: <20131211233652.GK21047@craic.sysops.org> Message-ID: <20131217204231.GR21047@craic.sysops.org> On Sat, Dec 14, 2013 at 01:04:59AM +0000, Matthew Ngaha wrote: Hi there, > So it seems the reloading wasn't taking effect and only did > after a restart of my PC. i've beeb typing "sudo service nginx > reload" after editing .. why isn't this working, do i have to do > something else? The best reply is "what do your logs say?". You should probably learn exactly what "sudo service nginx reload" does on your machine. "service nginx" probably runs the shell script /etc/init.d/nginx. That possibly runs an nginx binary with a named configuration file. If it does, then that is the one configuration file that matters. If it does not, then you running the same nginx binary with a "-V" argument should indicate what the one configuration file that matters is. After that, you can edit that file to introduce a deliberate error, and see what your "sudo service nginx reload" shows, and compare that with what happens when you run nginx directly with appropriate arguments. If there is an error in the config file, you should see a clear indication of it. Running nginx with a "-h" argument shows the help text. 
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Dec 17 20:48:37 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 17 Dec 2013 20:48:37 +0000 Subject: Override index.php for a subdirectory In-Reply-To: <1636735918.5794.1387279576738.JavaMail.zimbra@yetopen.it> References: <1890925840.5783.1387279498724.JavaMail.zimbra@yetopen.it> <1636735918.5794.1387279576738.JavaMail.zimbra@yetopen.it> Message-ID: <20131217204837.GS21047@craic.sysops.org> On Tue, Dec 17, 2013 at 12:26:16PM +0100, Lorenzo Milesi wrote: Hi there, > I need to override default index file for a subdirectory only. http://nginx.org/r/location Make sure that requests for this subdirectory only are handled in a specific location block. Set the default index file within that location block. > server { > listen 80 default_server; > server_name localhost; > location / { > location ~ \.php$ { > location ~ /\.ht { > } > server { > location ~ /work/management_site/ { > } > But doesn't work. > How can I accomplish that? http://nginx.org/en/docs/http/request_processing.html When a request comes in, first the one server{} to handle it is chosen. Then the one location{} to handle it within that server{} is chosen. You will probably want your new location, which will probably use "^~", to be in the same server{} block as the rest of your configuration. f -- Francis Daly francis at daoine.org From noloader at gmail.com Tue Dec 17 21:16:55 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 17 Dec 2013 16:16:55 -0500 Subject: Force linking to static archives during make? Message-ID: This should be my last build question. $ ./auto/configure --with-http_ssl_module ... --with-cc-opt="-I/usr/local/ssl/include" --with-ld-opt="-L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a -ldl" ... $ make ... Results in the following. 
Note that OpenSSL is still dynamically linked: $ ldd objs/nginx linux-vdso.so.1 => (0x00007fffd0dfe000) libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003ebfa00000) libcrypt.so.1 => /lib64/libcrypt.so.1 (0x0000003ed3e00000) libpcre.so.1 => /lib64/libpcre.so.1 (0x0000003ec0a00000) libssl.so.1.0.0 => not found libcrypto.so.1.0.0 => not found libz.so.1 => /lib64/libz.so.1 (0x0000003ebfe00000) libc.so.6 => /lib64/libc.so.6 (0x0000003ebf200000) /lib64/ld-linux-x86-64.so.2 (0x0000003ebea00000) libfreebl3.so => /lib64/libfreebl3.so (0x0000003ec7a00000) ***** Adding -Bstatic does not help even though its clearly on the link command line: $ ./auto/configure --with-http_ssl_module ... --with-cc-opt="-I/usr/local/ssl/include" --with-ld-opt="-Bstatic -L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a -ldl" ... $ make ... objs/src/http/modules/ngx_http_upstream_keepalive_module.o \ objs/ngx_modules.o \ -Bstatic -L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a -ldl -lpthread -lcrypt -lpcre -lssl -lcrypto -lz ... $ ldd objs/nginx linux-vdso.so.1 => (0x00007fffd4fc6000) libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) ... libssl.so.1.0.0 => not found libcrypto.so.1.0.0 => not found ***** Omitting -L/usr/local/ssl/lib results in a failed configure. ***** How does one force nginx to use static linking for a library? Thanks in advance. From igrigorik at gmail.com Wed Dec 18 00:03:27 2013 From: igrigorik at gmail.com (Ilya Grigorik) Date: Tue, 17 Dec 2013 16:03:27 -0800 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: Message-ID: > > >>> What I don't get from your patch, it seems like you are hardcoding the > >>> buffer to 16384 bytes during handshake (line 570) and only later use a > >>> 1400 byte buffer (via NGX_SSL_BUFSIZE). > >>> > >>> Am I misunderstanding the patch/code? 
> > > > It may well be the case that I'm misunderstanding it too :) ... The > > intent is: > > > > - set maximum record size for application data to 1400 bytes. [1] > > - allow the handshake to set/use the maximum 16KB bufsize to avoid extra > > RTTs during tunnel negotiation. > > Ok, what I read from the patch and your intent is the same :) > > I was confused about the 16KB bufsize for the initial negotiation, but now > I've read the bug report [1] and the patch [2] about the extra RTT when > using long certificate chains, and I understand it. > > But I don't really get the openssl documentation about this [3]: > > The initial buffer size is DEFAULT_BUFFER_SIZE, currently 4096. Any > attempt > > to reduce the buffer size below DEFAULT_BUFFER_SIZE is ignored. > > In other words this would mean we cannot set the buffer size below 4096, > but > you are doing exactly this, by setting the buffer size to 1400 byte. Also, > you measurements indicate success, so it looks like this statement in the > openssl documentation is wrong? > > Or does setting the buffer size to 1400 "just" reset it from 16KB to 4KB > and > thats the improvement you see in your measurement? > Looking at the tcpdump after applying the patch does show ~1400 byte records: http://cloudshark.org/captures/714cf2e0ca10?filter=tcp.stream%3D%3D2 Although now on closer inspection there seems to be another gotcha in there that I overlooked: it's emitting two packets, one is 1389 bytes, and second is ~31 extra bytes, which means the actual record is 1429 bytes. Obviously, this should be a single packet... and 1400 bytes. > > I think that's a minimum patchset that would significantly improve > > performance over current defaults. From there, I'd like to see several > > improvements. 
For example, either (a) provide a way to configure the > > default record size via a config flag (not a recompile, that's a deal > > breaker for most), or, (b) implement a smarter strategy where each > session > > begins with small record size (1400 bytes) but grows its record size as > the > > connection gets older -- this allows us to eliminate unnecessary > buffering > > latency at the beginning when TCP cwnd is low, and then decrease the > > framing overhead (i.e. go back to 16KB records) once the connection is > > warmed up. > > > > P.S. (b) would be much better, even if takes a bit more work. > > Well, I'm not sure (b) its so easy, nginx would need to understand whether > there is bulk or interactive traffic. Such heuristics may backfire in more > complex scenarios. > > But setting an optimal buffer size for pre- and post-handshake seems to be > a good compromise and 'upstream-able'. > If you only distinguish pre and post TLS handshake then you'll still (likely) incur the extra RTT on first app-data record -- that's what we're trying to avoid by reducing the default record size. For HTTP traffic, I think you want 1400 bytes records. Once we're out of slow-start, you can switch back to larger record size. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at zeitgeist.se Wed Dec 18 03:59:42 2013 From: alex at zeitgeist.se (Alex) Date: Wed, 18 Dec 2013 04:59:42 +0100 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: Message-ID: <9FD35AB9-B31C-4EE9-A8AA-13095743C22C@postfach.slogh.com> > Looking at the tcpdump after applying the patch does show ~1400 byte records: > http://cloudshark.org/captures/714cf2e0ca10?filter=tcp.stream%3D%3D2 [2] > > Although now on closer inspection there seems to be another gotcha in there that I overlooked: it's emitting two packets, one is 1389 bytes, and second is ~31 extra bytes, which means the actual record is 1429 bytes. 
Obviously, this should be a single packet... and 1400 bytes. I did some empirical testing and with my configuration (given cipher size, padding, and all), I came to 1370 bytes as being the optimal size for avoiding TLS record fragmentation. > If you only distinguish pre and post TLS handshake then you'll still (likely) incur the extra RTT on first app-data record -- that's what we're trying to avoid by reducing the default record size. For HTTP traffic, I think you want 1400 bytes records. Once we're out of slow-start, you can switch back to larger record size. Maybe I am wrong but I was of the belief that you should always try to fit TLS records into individual TCP segments. Hence you should always try to keep TLS records ~1400 bytes (or 1370 in my case), no matter the TCP Window. From nginx-forum at nginx.us Wed Dec 18 06:34:37 2013 From: nginx-forum at nginx.us (Downchuck) Date: Wed, 18 Dec 2013 01:34:37 -0500 Subject: Proxy buffering In-Reply-To: <20131115105115.GH95765@mdounin.ru> References: <20131115105115.GH95765@mdounin.ru> Message-ID: <22d329abe0dc84d44e220b13afeb6350.NginxMailingListEnglish@forum.nginx.org> Is there a large technical barrier to implementing this feature? Patches have been available for some time at: http://yaoweibin.cn/patches/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244680,245610#msg-245610 From ru at nginx.com Wed Dec 18 09:29:39 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 18 Dec 2013 13:29:39 +0400 Subject: Force linking to static archives during make? In-Reply-To: References: Message-ID: <20131218092939.GZ63816@lo0.su> On Tue, Dec 17, 2013 at 04:16:55PM -0500, Jeffrey Walton wrote: > This should be my last build question. > > $ ./auto/configure --with-http_ssl_module ... > --with-cc-opt="-I/usr/local/ssl/include" > --with-ld-opt="-L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a > /usr/local/ssl/lib/libcrypto.a -ldl" > ... > $ make > ... > > Results in the following. 
Note that OpenSSL is still dynamically linked: > > $ ldd objs/nginx > linux-vdso.so.1 => (0x00007fffd0dfe000) > libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) > libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003ebfa00000) > libcrypt.so.1 => /lib64/libcrypt.so.1 (0x0000003ed3e00000) > libpcre.so.1 => /lib64/libpcre.so.1 (0x0000003ec0a00000) > libssl.so.1.0.0 => not found > libcrypto.so.1.0.0 => not found > libz.so.1 => /lib64/libz.so.1 (0x0000003ebfe00000) > libc.so.6 => /lib64/libc.so.6 (0x0000003ebf200000) > /lib64/ld-linux-x86-64.so.2 (0x0000003ebea00000) > libfreebl3.so => /lib64/libfreebl3.so (0x0000003ec7a00000) > > ***** > > Adding -Bstatic does not help even though its clearly on the link command line: > > $ ./auto/configure --with-http_ssl_module ... > --with-cc-opt="-I/usr/local/ssl/include" --with-ld-opt="-Bstatic > -L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a > /usr/local/ssl/lib/libcrypto.a -ldl" > ... > $ make > ... > objs/src/http/modules/ngx_http_upstream_keepalive_module.o \ > objs/ngx_modules.o \ > -Bstatic -L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a > /usr/local/ssl/lib/libcrypto.a -ldl -lpthread -lcrypt -lpcre -lssl > -lcrypto -lz > ... > $ ldd objs/nginx > linux-vdso.so.1 => (0x00007fffd4fc6000) > libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) > ... > libssl.so.1.0.0 => not found > libcrypto.so.1.0.0 => not found > > ***** > > Omitting -L/usr/local/ssl/lib results in a failed configure. > > ***** > > How does one force nginx to use static linking for a library? > > Thanks in advance. I can't tell for Linux, but on FreeBSD it's as simple as: $ auto/configure --with-ld-opt=-static [...] 
$ make -sj4 $ file objs/nginx objs/nginx: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), statically linked, for FreeBSD 9.2 (902503), not stripped $ ldd objs/nginx ldd: objs/nginx: not a dynamic ELF executable From pasik at iki.fi Wed Dec 18 10:45:37 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Wed, 18 Dec 2013 12:45:37 +0200 Subject: Proxy buffering In-Reply-To: <22d329abe0dc84d44e220b13afeb6350.NginxMailingListEnglish@forum.nginx.org> References: <20131115105115.GH95765@mdounin.ru> <22d329abe0dc84d44e220b13afeb6350.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131218104537.GO2924@reaktio.net> On Wed, Dec 18, 2013 at 01:34:37AM -0500, Downchuck wrote: > Is there a large technical barrier to implementing this feature? Patches > have been available for some time at: http://yaoweibin.cn/patches/ > Hi, Based on my testing the no_buffer v8 patch works OK with nginx 1.4.x! http://yaoweibin.cn/patches/nginx-1.4.2-no_buffer-v8.patch Maxim: Have you taken a look at the no_buffer-v8 patch? It would be very nice to get this feature upstreamed to nginx.. Thanks, -- Pasi From maxxer at ufficyo.com Wed Dec 18 15:05:04 2013 From: maxxer at ufficyo.com (Lorenzo Milesi) Date: Wed, 18 Dec 2013 16:05:04 +0100 (CET) Subject: Override index.php for a subdirectory In-Reply-To: <20131217204837.GS21047@craic.sysops.org> References: <1890925840.5783.1387279498724.JavaMail.zimbra@yetopen.it> <1636735918.5794.1387279576738.JavaMail.zimbra@yetopen.it> <20131217204837.GS21047@craic.sysops.org> Message-ID: <92363757.9003.1387379104792.JavaMail.zimbra@yetopen.it> > You will probably want your new location, which will probably use "^~", > to be in the same server{} block as the rest of your configuration. Thanks for your suggestion. For benefit of others I solved this way: server { [...] 
location /work/management_site/ { location ~ \.(js|css|png|jpg|gif|swf|ico|pdf|mov|fla|zip|rar)$ { try_files $uri =404; } set $yii_bootstrap "index-maxxer.php"; index index-maxxer.php; try_files $uri $uri/ /work/management_site/index-maxxer.php?$args; set $fsn /$yii_bootstrap; if (-f $document_root$fastcgi_script_name){ set $fsn $fastcgi_script_name; } fastcgi_pass 127.0.0.1:9000; fastcgi_index index-maxxer.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fsn; } } It can probably be improved, but right now it works! Thanks again -- Lorenzo Milesi - lorenzo.milesi at yetopen.it YetOpen S.r.l. - http://www.yetopen.it/ From mdounin at mdounin.ru Wed Dec 18 15:21:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Dec 2013 19:21:13 +0400 Subject: Proxy buffering In-Reply-To: <20131218104537.GO2924@reaktio.net> References: <20131115105115.GH95765@mdounin.ru> <22d329abe0dc84d44e220b13afeb6350.NginxMailingListEnglish@forum.nginx.org> <20131218104537.GO2924@reaktio.net> Message-ID: <20131218152113.GG95113@mdounin.ru> Hello! On Wed, Dec 18, 2013 at 12:45:37PM +0200, Pasi K?rkk?inen wrote: > On Wed, Dec 18, 2013 at 01:34:37AM -0500, Downchuck wrote: > > Is there a large technical barrier to implementing this feature? Patches > > have been available for some time at: http://yaoweibin.cn/patches/ > > > > Hi, > > Based on my testing the no_buffer v8 patch works OK with nginx 1.4.x! > http://yaoweibin.cn/patches/nginx-1.4.2-no_buffer-v8.patch > > Maxim: Have you taken a look at the no_buffer-v8 patch? > > It would be very nice to get this feature upstreamed to nginx.. While it looks mostly working (some parts seems to be missing though - e.g., using chunked encoding requires appropriate headers to be sent to upstream, and HTTP/1.1 used), it seems to introduce too much code duplication. It needs to be addressed somehow before we'll able to commit it. I'll plan to work on this and related problems at the start of next year. 
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Dec 18 16:38:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Dec 2013 20:38:04 +0400 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: Message-ID: <20131218163804.GJ95113@mdounin.ru> Hello! On Tue, Dec 17, 2013 at 04:03:27PM -0800, Ilya Grigorik wrote: [...] > Although now on closer inspection there seems to be another gotcha in there > that I overlooked: it's emitting two packets, one is 1389 bytes, and second > is ~31 extra bytes, which means the actual record is 1429 bytes. Obviously, > this should be a single packet... and 1400 bytes. We've discussed this alot here a while ago, and it turns out that it's very non-trivial task to fill exactly one packet - as space in packets may vary depending on TCP options used, MTU, tunnels used on a way to a client, etc. On the other hand, it looks good enough to have records up to initial CWND in size without any significant latency changes. And with IW10 this basically means that anything up to about 14k should be fine (with RFC3390, something like 4k should be ok). It also reduces bandwidth costs associated with using multiple records. Just in case, below is a patch to play with SSL buffer size: # HG changeset patch # User Maxim Dounin # Date 1387302972 -14400 # Tue Dec 17 21:56:12 2013 +0400 # Node ID 090a57a2a599049152e87693369b6921efcd6bca # Parent e7d1a00f06731d7508ec120c1ac91c337d15c669 SSL: ssl_buffer_size directive. 
diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -190,6 +190,8 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ return NGX_ERROR; } + ssl->buffer_size = NGX_SSL_BUFSIZE; + /* client side options */ SSL_CTX_set_options(ssl->ctx, SSL_OP_MICROSOFT_SESS_ID_BUG); @@ -726,6 +728,7 @@ ngx_ssl_create_connection(ngx_ssl_t *ssl } sc->buffer = ((flags & NGX_SSL_BUFFER) != 0); + sc->buffer_size = ssl->buffer_size; sc->connection = SSL_new(ssl->ctx); @@ -1222,7 +1225,7 @@ ngx_ssl_send_chain(ngx_connection_t *c, buf = c->ssl->buf; if (buf == NULL) { - buf = ngx_create_temp_buf(c->pool, NGX_SSL_BUFSIZE); + buf = ngx_create_temp_buf(c->pool, c->ssl->buffer_size); if (buf == NULL) { return NGX_CHAIN_ERROR; } @@ -1231,14 +1234,14 @@ ngx_ssl_send_chain(ngx_connection_t *c, } if (buf->start == NULL) { - buf->start = ngx_palloc(c->pool, NGX_SSL_BUFSIZE); + buf->start = ngx_palloc(c->pool, c->ssl->buffer_size); if (buf->start == NULL) { return NGX_CHAIN_ERROR; } buf->pos = buf->start; buf->last = buf->start; - buf->end = buf->start + NGX_SSL_BUFSIZE; + buf->end = buf->start + c->ssl->buffer_size; } send = buf->last - buf->pos; diff --git a/src/event/ngx_event_openssl.h b/src/event/ngx_event_openssl.h --- a/src/event/ngx_event_openssl.h +++ b/src/event/ngx_event_openssl.h @@ -29,6 +29,7 @@ typedef struct { SSL_CTX *ctx; ngx_log_t *log; + size_t buffer_size; } ngx_ssl_t; @@ -37,6 +38,7 @@ typedef struct { ngx_int_t last; ngx_buf_t *buf; + size_t buffer_size; ngx_connection_handler_pt handler; diff --git a/src/http/modules/ngx_http_ssl_module.c b/src/http/modules/ngx_http_ssl_module.c --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -111,6 +111,13 @@ static ngx_command_t ngx_http_ssl_comma offsetof(ngx_http_ssl_srv_conf_t, ciphers), NULL }, + { ngx_string("ssl_buffer_size"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, + 
ngx_conf_set_size_slot, + NGX_HTTP_SRV_CONF_OFFSET, + offsetof(ngx_http_ssl_srv_conf_t, buffer_size), + NULL }, + { ngx_string("ssl_verify_client"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE1, ngx_conf_set_enum_slot, @@ -424,6 +431,7 @@ ngx_http_ssl_create_srv_conf(ngx_conf_t sscf->enable = NGX_CONF_UNSET; sscf->prefer_server_ciphers = NGX_CONF_UNSET; + sscf->buffer_size = NGX_CONF_UNSET_SIZE; sscf->verify = NGX_CONF_UNSET_UINT; sscf->verify_depth = NGX_CONF_UNSET_UINT; sscf->builtin_session_cache = NGX_CONF_UNSET; @@ -465,6 +473,9 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3|NGX_SSL_TLSv1 |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); + ngx_conf_merge_size_value(conf->buffer_size, prev->buffer_size, + NGX_SSL_BUFSIZE); + ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); @@ -572,6 +583,8 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t * return NGX_CONF_ERROR; } + conf->ssl.buffer_size = conf->buffer_size; + if (conf->verify) { if (conf->client_certificate.len == 0 && conf->verify != 3) { diff --git a/src/http/modules/ngx_http_ssl_module.h b/src/http/modules/ngx_http_ssl_module.h --- a/src/http/modules/ngx_http_ssl_module.h +++ b/src/http/modules/ngx_http_ssl_module.h @@ -26,6 +26,8 @@ typedef struct { ngx_uint_t verify; ngx_uint_t verify_depth; + size_t buffer_size; + ssize_t builtin_session_cache; time_t session_timeout; -- Maxim Dounin http://nginx.org/ From noloader at gmail.com Wed Dec 18 17:03:31 2013 From: noloader at gmail.com (Jeffrey Walton) Date: Wed, 18 Dec 2013 12:03:31 -0500 Subject: Force linking to static archives during make? In-Reply-To: <20131218092939.GZ63816@lo0.su> References: <20131218092939.GZ63816@lo0.su> Message-ID: Thanks On Wed, Dec 18, 2013 at 4:29 AM, Ruslan Ermilov wrote: > On Tue, Dec 17, 2013 at 04:16:55PM -0500, Jeffrey Walton wrote: >> This should be my last build question. >> >> $ ./auto/configure --with-http_ssl_module ... 
>> --with-cc-opt="-I/usr/local/ssl/include" >> --with-ld-opt="-L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a >> /usr/local/ssl/lib/libcrypto.a -ldl" >> ... >> $ make >> ... >> >> Results in the following. Note that OpenSSL is still dynamically linked: >> >> $ ldd objs/nginx >> linux-vdso.so.1 => (0x00007fffd0dfe000) >> libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) >> libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003ebfa00000) >> libcrypt.so.1 => /lib64/libcrypt.so.1 (0x0000003ed3e00000) >> libpcre.so.1 => /lib64/libpcre.so.1 (0x0000003ec0a00000) >> libssl.so.1.0.0 => not found >> libcrypto.so.1.0.0 => not found >> libz.so.1 => /lib64/libz.so.1 (0x0000003ebfe00000) >> libc.so.6 => /lib64/libc.so.6 (0x0000003ebf200000) >> /lib64/ld-linux-x86-64.so.2 (0x0000003ebea00000) >> libfreebl3.so => /lib64/libfreebl3.so (0x0000003ec7a00000) >> >> ***** >> >> Adding -Bstatic does not help even though its clearly on the link command line: >> >> $ ./auto/configure --with-http_ssl_module ... >> --with-cc-opt="-I/usr/local/ssl/include" --with-ld-opt="-Bstatic >> -L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a >> /usr/local/ssl/lib/libcrypto.a -ldl" >> ... >> $ make >> ... >> objs/src/http/modules/ngx_http_upstream_keepalive_module.o \ >> objs/ngx_modules.o \ >> -Bstatic -L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a >> /usr/local/ssl/lib/libcrypto.a -ldl -lpthread -lcrypt -lpcre -lssl >> -lcrypto -lz >> ... >> $ ldd objs/nginx >> linux-vdso.so.1 => (0x00007fffd4fc6000) >> libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) >> ... >> libssl.so.1.0.0 => not found >> libcrypto.so.1.0.0 => not found >> >> ***** >> >> Omitting -L/usr/local/ssl/lib results in a failed configure. >> >> ***** >> >> How does one force nginx to use static linking for a library? >> >> Thanks in advance. > > I can't tell for Linux, but on FreeBSD it's as simple as: > > $ auto/configure --with-ld-opt=-static > [...] 
> $ make -sj4 > $ file objs/nginx > objs/nginx: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), statically linked, for FreeBSD 9.2 (902503), not stripped > $ ldd objs/nginx > ldd: objs/nginx: not a dynamic ELF executable This is a known problem with the toolchains on Linux and Apple. Hence the reason we need a way to use the fully specified archive when we want static linking. In my case, if I distribute on Ubuntu 12.04 LTS, folks will likely get Ubuntu's version of OpenSSL since the names are the same and binary compatible. The Ubuntu folks disable TLSv1.1 and TLSv1.2, and they refuse to enable them (there's a bug report covering it). I just don't see how this can be done without enhancing/patching nginx's configuration subsystem. Jeff ***** $ cat t.c #include <openssl/ssl.h> int main(int argc, char* argv[]) { return (int)SSL_library_init(); } $ gcc -I/usr/local/ssl/include t.c -L/usr/local/ssl/lib -Bstatic -lssl -lcrypto $ ldd a.out linux-vdso.so.1 => (0x00007fff68a2e000) libssl.so.1.0.0 => not found libcrypto.so.1.0.0 => not found libc.so.6 => /lib64/libc.so.6 (0x0000003ebf200000) /lib64/ld-linux-x86-64.so.2 (0x0000003ebea00000) ***** $ gcc -I/usr/local/ssl/include t.c /usr/local/ssl/lib/libssl.a /usr/local/ssl/lib/libcrypto.a -ldl $ ldd a.out linux-vdso.so.1 => (0x00007ffffdffe000) libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) libc.so.6 => /lib64/libc.so.6 (0x0000003ebf200000) /lib64/ld-linux-x86-64.so.2 (0x0000003ebea00000) From kworthington at gmail.com Wed Dec 18 18:04:15 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 18 Dec 2013 13:04:15 -0500 Subject: [nginx-announce] nginx-1.5.8 In-Reply-To: <20131217140800.GR95113@mdounin.ru> References: <20131217140800.GR95113@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.8 for Windows http://goo.gl/zq3gGm (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx.
Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Dec 17, 2013 at 9:08 AM, Maxim Dounin wrote: > Changes with nginx 1.5.8 17 Dec > 2013 > > *) Feature: IPv6 support in resolver. > > *) Feature: the "listen" directive supports the "fastopen" parameter. > Thanks to Mathew Rodley. > > *) Feature: SSL support in the ngx_http_uwsgi_module. > Thanks to Roberto De Ioris. > > *) Feature: vim syntax highlighting scripts were added to contrib. > Thanks to Evan Miller. > > *) Bugfix: a timeout might occur while reading client request body in > an > SSL connection using chunked transfer encoding. > > *) Bugfix: the "master_process" directive did not work correctly in > nginx/Windows. > > *) Bugfix: the "setfib" parameter of the "listen" directive might not > work. > > *) Bugfix: in the ngx_http_spdy_module. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Dec 18 20:23:36 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 18 Dec 2013 15:23:36 -0500 Subject: [ANN] Windows nginx 1.5.8.3 Caterpillar Message-ID: 19:46 18-12-2013: nginx 1.5.8.3 Caterpillar Based on nginx 1.5.8 (release) with; + prove.zip (onsite), a Windows Test_Suite way to show/prove it all really works with at the moment a limited amount of tests which will grow over time + Streaming with nginx-rtmp-module, v1.0.8 (upgraded 16-12) + pcre-8.34 (upgraded) + lua-nginx-module v0.9.3 (upgraded) + echo-nginx-module v0.50 (upgraded) + Source changes back ported (including fixes for the changed resolver API) + Source changes add-on's back ported (including fixes for the changed resolver API) * More compiler optimizations * Additional specifications are like 15:34 6-12-2013: nginx 1.5.8.2 Caterpillar Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245700,245700#msg-245700 From francis at daoine.org Wed Dec 18 22:13:08 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Dec 2013 22:13:08 +0000 Subject: Force linking to static archives during make? In-Reply-To: References: <20131218092939.GZ63816@lo0.su> Message-ID: <20131218221308.GT21047@craic.sysops.org> On Wed, Dec 18, 2013 at 12:03:31PM -0500, Jeffrey Walton wrote: > > On Tue, Dec 17, 2013 at 04:16:55PM -0500, Jeffrey Walton wrote: Hi there, > >> How does one force nginx to use static linking for a library? > In my case, if I distribute on Ubuntu 12.04 LTS, folks will likely get > Ubuntu's version of OpenSSL since the names are the same and binary > compatible. The Ubuntu folks disable TLSv1.1 and TLSv1.2, and they > refuse to enable them (there's a bug report covering it). I'd say that it's not nginx's job to work around undesired defaults from Ubuntu. Invite people to install your OpenSSL package instead, and they'll be happy that all of their crypto-using tools have been enhanced. 
However, you should be able to build whatever you want to, on your system. So... > I just don't see how this can be done without enhancing/patching > nginx's configuration subsystem. It looks to me like you can either try to enhance nginx's config system to use a syntax to specify exactly which libraries should be statically linked and which should be dynamically linked (because restricting this to just openssl is probably unreasonable)... Or you can edit your generated objs/Makefile to include exactly the syntax that your system needs for you to build it the way you want it to be built. Perhaps it is as simple as: remove "-lssl -lcrypto"; based on the examples you provided. sed -i '/-lssl -lcrypto/s///' objs/Makefile (For reference, on a random CentOS system here, I seem to need to use gcc t.c /usr/lib/libssl.a /usr/lib/libcrypto.a -lkrb5 -lz to allow it to build cleanly.) Then build, and distribute your build as widely as the licences allow. I suspect that if you come up with a general config solution that works everywhere that matters and avoids obnoxious syntax and doesn't add to the maintenance burden, patches will be welcome. But I also suspect that your use case is quite special, and may not be worth the effort of a general solution. (But I'm not going to stop you spending your time however you see fit.) Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Dec 18 23:27:31 2013 From: nginx-forum at nginx.us (ako673de) Date: Wed, 18 Dec 2013 18:27:31 -0500 Subject: nginx misbehaviour in conjunction with non-ASCII characters Message-ID: <672196b52d0737378b9989ec434b5d0c.NginxMailingListEnglish@forum.nginx.org> Found a bug in implementation of MOVE and COPY (webdav) methods. It happens if destination header contains non-ASCII characters (that need to be escaped with "%"). 
An example: Rename (=MOVE) file "/TheCore.ogm" to "/The_Core.ogm": Request header: --> MOVE http://andinas/TheCore.ogm HTTP/1.1 --> Destination: http://andinas/The_Core.ogm Response header: --> HTTP/1.1 204 No Content --> Server: nginx/1.5.6 Result: File renamed to "The_Core.ogm". Fine! Now rename (=MOVE) file "/andinas/The_Core.ogm" to "/andinas/The_ Core.ogm" (notice the blank after the underscore, but the same is true for other non-ASCII characters!): Request header: --> MOVE http://andinas/The_Core.ogm HTTP/1.1 --> Destination: http://andinas/The_%20Core.ogm Response header: --> HTTP/1.1 204 No Content --> Server: nginx/1.5.6 Result: File renamed to "The_%20Core.ogm". Not so fine! The escaped blank is treated by nginx MOVE as if it was not escaped! Found a similar issue with this config-file line: --> if ( $http_destination ~ https?://[^/]+/(.*) ) { set $httpdest http://localhost:8008/$remote_user/$1; } (for this (working!) code that makes the destination header ready for proxy_pass to another webdav server with user dependent base folders (pyWebDav allows only one user :-/): --> proxy_set_header Destination $httpdest; --> proxy_pass http://127.0.0.1:8008/$remote_user$request_uri; ). Here again, if $http_destination contains the perfectly correct escaped characters from the webdav client, the resulting $httpdest will additionally escape the "escape" characters. Example: Destination File = "/The_ Core.ogm" $http_destination = "http://andinas/The_%20Core.ogm" (correct) $httpdest = "http://andinas/The_%2520Core.ogm" (wrong!) Obviously here the highly undesirable transformation happens during the regex matching. But why? Can I switch that off somehow? Workaround: Use perl function to replace "%25" by "%". Use the undocumented "r->variable()" for that! If you are still reading :-): There are two very funny things about the --> proxy_pass http://127.0.0.1:8008/$remote_user$request_uri; line: 1) You cannot use localhost here.
nginx needs a resolver as soon as there is a variable in the string. And at least I didn't manage to find a resolver that can resolve localhost. 2) Nowhere on the internet it's been said, that you need to add $request_uri to that line, but you do need! And it even must not be $uri, because that one has the same escaping issues like the other things I mentioned above. OK, now I'm done ;-). Please help! best regards ako673de Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245702,245702#msg-245702 From igrigorik at gmail.com Thu Dec 19 00:04:59 2013 From: igrigorik at gmail.com (Ilya Grigorik) Date: Wed, 18 Dec 2013 16:04:59 -0800 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: <20131218163804.GJ95113@mdounin.ru> References: <20131218163804.GJ95113@mdounin.ru> Message-ID: On Tue, Dec 17, 2013 at 7:59 PM, Alex wrote: > > I did some empirical testing and with my configuration (given cipher > size, padding, and all), I came to 1370 bytes as being the optimal size > for avoid fragmenting of TLS record fragmentation. > Ah, right, we're not setting the "total" record size... Rather, we're setting the maximum payload size within the record. On top of that there is the extra 5 bytes for the record header, plus MAC and padding (if block cipher is used) -- so that's 5 bytes + up to 32 extra bytes per record. Add IP (40 bytes for IPv6), TCP header (20), and some room for TCP options (40), and we're looking at ~1360 bytes.. Which is close to what you're seeing in your testing. > > If you only distinguish pre and post TLS handshake then you'll still > (likely) incur the extra RTT on first app-data record -- that's what we're > trying to avoid by reducing the default record size. For HTTP traffic, I > think you want 1400 bytes records. Once we're out of slow-start, you can > switch back to larger record size. > > Maybe I am wrong but I was of the belief that you should always try to > fit TLS records into individual TCP segments. 
Hence you should always > try to keep TLS record ~1400 bytes (or 1370 in my case), no matter the > TCP Window. > For interactive traffic I think that's generally true as it eliminates the edge case of CWND overflows (extra RTT of buffering) and minimizes impact of packet reordering and packet loss. FWIW, for these exact reasons the Google frontend servers have been using TLS record = TCP segment for a few years now... So there is good precedent to using this as a default. That said, small records do incur overhead due to extra framing, plus more CPU cycles (more MACs and framing processing). So, in some instances, if you're delivering large streams (e.g. video), you may want to use larger records... Exposing record size as a configurable option would address this. On Wed, Dec 18, 2013 at 8:38 AM, Maxim Dounin wrote: > > > Although now on closer inspection there seems to be another gotcha in > there > > that I overlooked: it's emitting two packets, one is 1389 bytes, and > second > > is ~31 extra bytes, which means the actual record is 1429 bytes. > Obviously, > > this should be a single packet... and 1400 bytes. > > We've discussed this alot here a while ago, and it turns > out that it's very non-trivial task to fill exactly one packet - > as space in packets may vary depending on TCP options used, MTU, > tunnels used on a way to a client, etc. > Yes, that's a good point. > On the other hand, it looks good enough to have records up to > initial CWND in size without any significant latency changes. And > with IW10 this basically means that anything up to about 14k > should be fine (with RFC3390, something like 4k should be ok). > It also reduces bandwidth costs associated with using multiple > records. > In theory, I agree with you, but in practice even while trying to play with this on my own server it appears to be more tricky than that: to ~reliably avoid the CWND overflow I have to set the record size <10k.. 
There are also differences in how the CWND is increased (byte based vs packet based) across different platforms, and other edge cases I'm surely overlooking. Also, while this addresses the CWND overflow during slowstart, smaller records offer additional benefits as they help minimize impact of reordering and packet loss (not eliminate, but reduce its negative impact in some cases). Just in case, below is a patch to play with SSL buffer size: > > # HG changeset patch > # User Maxim Dounin > # Date 1387302972 -14400 > # Tue Dec 17 21:56:12 2013 +0400 > # Node ID 090a57a2a599049152e87693369b6921efcd6bca > # Parent e7d1a00f06731d7508ec120c1ac91c337d15c669 > SSL: ssl_buffer_size directive. > Just tried it on my local server, works as advertised. :-) Defaults matter and we should optimize for best performance out of the box... Can we update NGX_SSL_BUFSIZE size as part of this patch? My current suggestion is 1360 bytes as this guarantees best possible case for helping the browser start processing data as soon as possible: minimal impact of reordering / packet loss / no CWND overflows. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at zeitgeist.se Thu Dec 19 00:50:00 2013 From: alex at zeitgeist.se (Alex) Date: Thu, 19 Dec 2013 01:50:00 +0100 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: <20131218163804.GJ95113@mdounin.ru> Message-ID: On 2013-12-19 01:04, Ilya Grigorik wrote: > ...and we're looking at ~1360 bytes.. Which is close to what you're seeing in your testing. Yes, and I haven't employed IPv6 yet; hence I could save 20 bytes. > and minimizes impact of packet reordering and packet loss. I remember reading (I believe it was in your (excellent) book! ;)) that upon packet loss, the full TLS record has to be retransmitted. Not cool if the TLS record is large and fragmented. So that's indeed a good reason to keep TLS records small and preferably within the size of a TCP segment. 
> FWIW, for these exact reasons the Google frontend servers have been using TLS record = TCP segment for a few years now... So there is good precedent to using this as a default. Yeah, about that. Google's implementation looks very nice. I keep looking at it in Wireshark and wonder if there is a way that I could replicate their implementation with my limited knowledge. It probably requires tuning of the underlying application as well? Google uses a 1470 bytes frame size (14 bytes header plus 1456 bytes payload), with the TLS record fixed at ~ 1411 bytes. Not sure if an MTU 1470 / MSS 1430 is at all beneficial for TLS communication. They optimized the stack to almost always _exactly_ fit a TLS record into the available space of a TCP segment. If I look at one of my sites, https://www.zeitgeist.se, with standard MTU/MSS, and the TLS record size fixed to 1370 bytes + overhead, Nginx would happily use the remaining space in the TCP segment and add part of a second TLS record to it, of which the rest then fragments into a second TCP segment. I played around with TCP_CORK (tcp_nopush), but it didn't seem to make any difference. > That said, small records do incur overhead due to extra framing, plus more CPU cycles (more MACs and framing processing). So, in some instances, if you're delivering large streams (e.g. video), you may want to use larger records... Exposing record size as a configurable option would address this. Absolutely. Before I said Google uses a 1470 bytes frame size, but that is not true for example when it comes to streaming from Youtube. Here they use the standard MTU, and also large, fragmenting TLS records. So like you said it's important to look at the application you're trying to optimize. +1 for the configurable TLS record size option. To pick up from the code Maxim just posted, perhaps the record size could even be altered dynamically within location blocks (to specify different record sizes for large and small streams).
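As an aside, the per-record byte budget quoted earlier in this thread (5-byte TLS record header, up to 32 bytes of MAC and padding with a block cipher, 40-byte IPv6 header, 20-byte TCP header, 40 bytes of room for TCP options) can be written out as a quick sanity check. A small sketch — the overhead figures below are the thread's assumptions, not measured values:

```python
# Back-of-the-envelope check of the "~1360 bytes" figure from the thread.
MTU = 1500           # standard Ethernet MTU
IP_HEADER = 40       # IPv6 header (the worst case discussed here)
TCP_HEADER = 20      # TCP header
TCP_OPTIONS = 40     # room reserved for TCP options
TLS_HEADER = 5       # TLS record header
TLS_TRAILER = 32     # up to 32 bytes of MAC + padding (block cipher)

record_total = MTU - IP_HEADER - TCP_HEADER - TCP_OPTIONS    # whole TLS record
max_payload = record_total - TLS_HEADER - TLS_TRAILER        # application data

print(record_total)  # 1400
print(max_payload)   # 1363, i.e. the "~1360 bytes" quoted above
```

With an IPv4 header (20 bytes) the payload budget gains another 20 bytes, which is why figures around 1370-1400 also appear in the thread.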
From iptablez at yahoo.com Thu Dec 19 04:21:00 2013 From: iptablez at yahoo.com (Indo Php) Date: Wed, 18 Dec 2013 20:21:00 -0800 (PST) Subject: How to delete cache based on expires headers? Message-ID: <1387426860.9588.YahooMailNeo@web142305.mail.bf1.yahoo.com> Hi I'm using proxy_cache to mirror my files with the configuration below proxy_cache_path /var/cache/nginx/image levels=1:2 keys_zone=one:10m inactive=7d max_size=100g; Our backend server has the expires header set to 600secs Is it possible for us to also delete the cache files located at /var/cache/nginx/image based on the backend expires header? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Dec 19 07:59:03 2013 From: nginx-forum at nginx.us (honwel) Date: Thu, 19 Dec 2013 02:59:03 -0500 Subject: Parallel subrequests Message-ID: <8ea2df23bf2cf52d4ea8e6bef77bf376.NginxMailingListEnglish@forum.nginx.org> Hi I want to write an HTTP handler (using subrequests) to combine the responses from multiple backends, but "Emiller's Advanced Topics In Nginx Module Development - 2.3. Sequential subrequests" notes that "Subrequests might need to access the network, and if so, Nginx needs to return to its other work while it waits for a response. So we need to check the return value of ngx_http_subrequest". How can I write parallel subrequests, where several POST subrequests are issued in parallel rather than one by one after each previous response is received? thanks.
best regards honwel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245707,245707#msg-245707 From ru at nginx.com Thu Dec 19 09:11:40 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 19 Dec 2013 13:11:40 +0400 Subject: nginx misbehaviour in conjunction with non-ASCII characters In-Reply-To: <672196b52d0737378b9989ec434b5d0c.NginxMailingListEnglish@forum.nginx.org> References: <672196b52d0737378b9989ec434b5d0c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131219091140.GJ63816@lo0.su> On Wed, Dec 18, 2013 at 06:27:31PM -0500, ako673de wrote: > Found a bug in implementation of MOVE and COPY (webdav) methods. It happens > if destination header contains non-ASCII characters (that need to be escaped > with "%"). > > An example: > > Rename (=MOVE) file "/TheCore.ogm" to "/The_Core.ogm": > > Request header: > --> MOVE http://andinas/TheCore.ogm HTTP/1.1 > --> Destination: http://andinas/The_Core.ogm > Response header: > --> HTTP/1.1 204 No Content > --> Server: nginx/1.5.6 > Result: File renamed to "The_Core.ogm". Fine! > > Now rename (=MOVE) file "/andinas/The_Core.ogm" to > "/andinas/The_ Core.ogm" (notice the blank after the underscore, > but the same is true for ???? and the like!): > > Request header: > --> MOVE http://andinas/The_Core.ogm HTTP/1.1 > --> Destination: http://andinas/The_%20Core.ogm > Response header: > --> HTTP/1.1 204 No Content > --> Server: nginx/1.5.6 > Result: File renamed to "The_%20Core.ogm". Not so fine! > > The escaped blank is treated by nginx MOVE as if it was not escaped! I have a patch for that, would you like to try it? > Found a similar issue with this config-file line: > --> if ( $http_destination ~ https?://[^/]+/(.*) ) { set $httpdest > http://localhost:8008/$remote_user/$1; } > > (for this (working!) 
code that makes the destination header ready for > proxy_pass to another webdav server with user dependent base folders > (pyWebDav allows only one user :-/): > --> proxy_set_header Destination $httpdest; > --> proxy_pass http://127.0.0.1:8008/$remote_user$request_uri; > ). > > Here again, if $http_destination contains the perfectly correct escaped > characters from the webdav client, the resulting $httpdest will additionally > escape the "escape" characters. > Example: > Destination File = "/The_ Core.ogm" > $http_destination = "http://andinas/The_%20Core.ogm" (correct) > $httpdest = "http://andinas/The_%2520Core.ogm" (wrong!) > > Obviously here the highly undesirable transformation happens during the > regex matching. But why? Can I switch that off somehow? > > Workaround: Use perl function to replace "%25" by "%". Use the undocumented > "r->variable()" for that! Don't use rewrite. nginx's DAV module supports relative URLs. Using the following config snippet: http { server { listen 8000; location / { proxy_set_header Destination /$http_user$http_destination; proxy_pass http://127.0.0.1:8001/$http_user$request_uri; } } } and the following request: printf 'MOVE /foo%%20bar HTTP/1.1\r\nDestination: /bar%%20baz\r\nHost: 127.0.0.1\r\nUser: nobody\r\n\r\n' | nc 127.0.0.1 8000 I get: $ nc -l 8001 MOVE /nobody/foo%20bar HTTP/1.0 Destination: /nobody/bar%20baz Host: 127.0.0.1:8001 Connection: close User: nobody > If you are still reading :-): There are two very funny things about the > --> proxy_pass http://127.0.0.1:8008/$remote_user$request_uri; > line: > 1) You cannot use localhost here. nginx needs a resolver as soon as there is > a variable in the string. And at least I didn't manage to find a resolver > that can resolve localhost. > 2) Nowhere on the internet it's been said, that you need to add $request_uri > to that line, but you do need! And it even must not be $uri, because that > one has the same escaping issues like the other things I mentioned above. 
> > > OK, now I'm done ;-). Please help! > > best regards > ako673de From nginx-forum at nginx.us Thu Dec 19 09:59:34 2013 From: nginx-forum at nginx.us (athalas) Date: Thu, 19 Dec 2013 04:59:34 -0500 Subject: nginx-1.5.8 In-Reply-To: <20131217140754.GQ95113@mdounin.ru> References: <20131217140754.GQ95113@mdounin.ru> Message-ID: <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> Where would we find documentation on the "fastopen" parameter? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245651,245713#msg-245713 From citrin at citrin.ru Thu Dec 19 10:36:17 2013 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Thu, 19 Dec 2013 14:36:17 +0400 Subject: nginx-1.5.8 In-Reply-To: <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> References: <20131217140754.GQ95113@mdounin.ru> <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52B2CC21.7090204@citrin.ru> On 12/19/13 13:59, athalas wrote: > Where would we find documentation on the "fastopen" parameter? The "fastopen" parameter sets the TCP_FASTOPEN socket option on Linux: https://en.wikipedia.org/wiki/TCP_Fast_Open From citrin at citrin.ru Thu Dec 19 10:51:47 2013 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Thu, 19 Dec 2013 14:51:47 +0400 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: <20131218163804.GJ95113@mdounin.ru> Message-ID: <52B2CFC3.8000807@citrin.ru> On 12/19/13 04:50, Alex wrote: > I remember reading (I believe it was in your (excellent) book! ;)) that > upon packet loss, the full TLS record has to be retransmitted. Not cool > if the TLS record is large and fragmented. So that's indeed a good > reason to keep TLS records small and preferably within the size of a TCP > segment. Why is a TCP retransmit of the single lost packet not enough (in the kernel TCP stack, which is unaware of TLS records)?
The kernel on the receiver side should wait for the lost packet to be retransmitted and return data to the application in the same order it was sent. A big TLS record can add some delay for the first byte (but not the last byte) of the decrypted page, but the browser can't render the first byte of the page anyway; it needs at least some data. From ru at nginx.com Thu Dec 19 12:10:27 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 19 Dec 2013 16:10:27 +0400 Subject: nginx-1.5.8 In-Reply-To: <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> References: <20131217140754.GQ95113@mdounin.ru> <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131219121027.GB21748@lo0.su> On Thu, Dec 19, 2013 at 04:59:34AM -0500, athalas wrote: > Where would we find documentation on the "fastopen" parameter? It's in the works. From mdounin at mdounin.ru Thu Dec 19 13:15:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Dec 2013 17:15:18 +0400 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: <20131218163804.GJ95113@mdounin.ru> Message-ID: <20131219131518.GL95113@mdounin.ru> Hello! On Wed, Dec 18, 2013 at 04:04:59PM -0800, Ilya Grigorik wrote: > > On the other hand, it looks good enough to have records up to > > initial CWND in size without any significant latency changes. And > > with IW10 this basically means that anything up to about 14k > > should be fine (with RFC3390, something like 4k should be ok). > > It also reduces bandwidth costs associated with using multiple > > records. > > > > In theory, I agree with you, but in practice even while trying to play with > this on my own server it appears to be more tricky than that: to ~reliably > avoid the CWND overflow I have to set the record size <10k.. There are also > differences in how the CWND is increased (byte based vs packet based) > across different platforms, and other edge cases I'm surely overlooking.
> Also, while this addresses the CWND overflow during slowstart, smaller > records offer additional benefits as they help minimize impact of > reordering and packet loss (not eliminate, but reduce its negative impact > in some cases). The problem is that there are even more edge cases with packet-sized records. Also, in practice with packet-sized records there seems to be a significant difference in throughput. In my limited testing, packet-sized records resulted in a 2x slowdown on large responses. Of course the overhead may be somewhat reduced by applying smaller records deeper in the code, but a) even in theory, there is some overhead, and b) it doesn't look like a trivial task when using OpenSSL. Additionally, there may be weird "Nagle vs. delayed ack" related effects on fast connections; it needs additional investigation. As of now, I tend to think that 4k (or 8k on systems with IW10) buffer size is optimal for latency-sensitive workloads. > > Just in case, below is a patch to play with SSL buffer size: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1387302972 -14400 > > # Tue Dec 17 21:56:12 2013 +0400 > > # Node ID 090a57a2a599049152e87693369b6921efcd6bca > > # Parent e7d1a00f06731d7508ec120c1ac91c337d15c669 > > SSL: ssl_buffer_size directive. > > > > Just tried it on my local server, works as advertised. :-) > > Defaults matter and we should optimize for best performance out of the > box... Can we update NGX_SSL_BUFSIZE size as part of this patch? My current > suggestion is 1360 bytes as this guarantees best possible case for helping > the browser start processing data as soon as possible: minimal impact of > reordering / packet loss / no CWND overflows. I don't think that changing the default is a good idea; it may/will cause performance degradation with large requests, see above. While reducing latency is important in some cases, it's certainly not the only thing to consider during performance optimization.
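In configuration terms, the trade-off Maxim describes could look like the following once the ssl_buffer_size patch from this thread is applied. This is a hypothetical sketch (server name and certificate paths are placeholders; the directive does not exist without the patch):

```nginx
server {
    listen 443 ssl;
    server_name example.com;             # placeholder

    ssl_certificate     example.crt;     # placeholder paths
    ssl_certificate_key example.key;

    # Latency-sensitive workload: the 4k value suggested above.
    # The compiled-in default (NGX_SSL_BUFSIZE) is 16k.
    ssl_buffer_size 4k;
}
```

Throughput-oriented virtual hosts would simply leave the directive unset and keep the larger default buffer.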
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Dec 19 14:19:07 2013 From: nginx-forum at nginx.us (ako673de) Date: Thu, 19 Dec 2013 09:19:07 -0500 Subject: nginx misbehaviour in conjunction with non-ASCII characters In-Reply-To: <20131219091140.GJ63816@lo0.su> References: <20131219091140.GJ63816@lo0.su> Message-ID: <9188114da50ab19c836358120967471a.NginxMailingListEnglish@forum.nginx.org> > > On Wed, Dec 18, 2013 at 06:27:31PM -0500, ako673de wrote: > > Found a bug in implementation of MOVE and COPY (webdav) methods. It happens if destination header contains non-ASCII characters (that need to be escaped with "%"). > I have a patch for that, would you like to try it? Well, currently I need to work around the lack of LOCK features with another WebDav server (see below) anyway. Therefore I can live without patching nginx. For me it would be perfectly alright to have it in the next release. But maybe someone else out there might need it more urgently...? > > Found a similar issue with this config-file line: > > --> if ( $http_destination ~ https?://[^/]+/(.*) ) { set $httpdest http://localhost:8008/$remote_user/$1; } > Don't use rewrite. nginx's DAV module supports relative URLs. With "proxy_set_header Destination /$http_user$http_destination;" ... ... I get "Destination: /http:////\r\n" ... ... which of course is wrong! The reason for your snippet giving correct results is simply that your "printf | nc" is wrong! Webdav clients unfortunately often (or always?) have the "http:///" part included in the destination header. And then you can't simply add strings together any more but need to separate host and path parts in order to insert the user part. I simply don't know of another way to do so except "regex rewrite". Other ideas? 
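For reference, the "%20" turning into "%2520" discussed in this thread is ordinary double percent-encoding: an already-escaped Destination value is escaped a second time, so the "%" of "%20" itself becomes "%25". A minimal illustration of the mechanics — Python's urllib is used here purely to demonstrate the encoding behaviour, it is not nginx's code path:

```python
from urllib.parse import quote

# The WebDAV client already sent the Destination header percent-encoded:
already_encoded = "The_%20Core.ogm"   # "The_ Core.ogm", escaped once

# Escaping it a second time corrupts it, as in the bug report:
twice = quote(already_encoded)
print(twice)  # The_%2520Core.ogm
```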
best regards ako673de Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245702,245720#msg-245720 From nginx-forum at nginx.us Thu Dec 19 14:48:45 2013 From: nginx-forum at nginx.us (Larry) Date: Thu, 19 Dec 2013 09:48:45 -0500 Subject: Proxy_cache or direct static files ? In-Reply-To: References: Message-ID: Ok, Now I get it right :) @Maxim : when you say faster memory storage, doesn't nginx get the result cached by the os itself ? And so in the ram ? What could be faster than that ? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245544,245721#msg-245721 From mdounin at mdounin.ru Thu Dec 19 15:03:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Dec 2013 19:03:53 +0400 Subject: Force linking to static archives during make? In-Reply-To: References: <20131218092939.GZ63816@lo0.su> Message-ID: <20131219150353.GQ95113@mdounin.ru> Hello! On Wed, Dec 18, 2013 at 12:03:31PM -0500, Jeffrey Walton wrote: > Thanks > > On Wed, Dec 18, 2013 at 4:29 AM, Ruslan Ermilov wrote: > > On Tue, Dec 17, 2013 at 04:16:55PM -0500, Jeffrey Walton wrote: > >> This should be my last build question. > >> > >> $ ./auto/configure --with-http_ssl_module ... > >> --with-cc-opt="-I/usr/local/ssl/include" > >> --with-ld-opt="-L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a > >> /usr/local/ssl/lib/libcrypto.a -ldl" > >> ... > >> $ make > >> ... > >> > >> Results in the following. 
Note that OpenSSL is still dynamically linked: > >> > >> $ ldd objs/nginx > >> linux-vdso.so.1 => (0x00007fffd0dfe000) > >> libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) > >> libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003ebfa00000) > >> libcrypt.so.1 => /lib64/libcrypt.so.1 (0x0000003ed3e00000) > >> libpcre.so.1 => /lib64/libpcre.so.1 (0x0000003ec0a00000) > >> libssl.so.1.0.0 => not found > >> libcrypto.so.1.0.0 => not found > >> libz.so.1 => /lib64/libz.so.1 (0x0000003ebfe00000) > >> libc.so.6 => /lib64/libc.so.6 (0x0000003ebf200000) > >> /lib64/ld-linux-x86-64.so.2 (0x0000003ebea00000) > >> libfreebl3.so => /lib64/libfreebl3.so (0x0000003ec7a00000) > >> > >> ***** > >> > >> Adding -Bstatic does not help even though its clearly on the link command line: > >> > >> $ ./auto/configure --with-http_ssl_module ... > >> --with-cc-opt="-I/usr/local/ssl/include" --with-ld-opt="-Bstatic > >> -L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a > >> /usr/local/ssl/lib/libcrypto.a -ldl" > >> ... > >> $ make > >> ... > >> objs/src/http/modules/ngx_http_upstream_keepalive_module.o \ > >> objs/ngx_modules.o \ > >> -Bstatic -L/usr/local/ssl/lib /usr/local/ssl/lib/libssl.a > >> /usr/local/ssl/lib/libcrypto.a -ldl -lpthread -lcrypt -lpcre -lssl > >> -lcrypto -lz > >> ... > >> $ ldd objs/nginx > >> linux-vdso.so.1 => (0x00007fffd4fc6000) > >> libdl.so.2 => /lib64/libdl.so.2 (0x0000003ebf600000) > >> ... > >> libssl.so.1.0.0 => not found > >> libcrypto.so.1.0.0 => not found > >> > >> ***** > >> > >> Omitting -L/usr/local/ssl/lib results in a failed configure. > >> > >> ***** > >> > >> How does one force nginx to use static linking for a library? > >> > >> Thanks in advance. > > > > I can't tell for Linux, but on FreeBSD it's as simple as: > > > > $ auto/configure --with-ld-opt=-static > > [...] 
> > $ make -sj4 > > $ file objs/nginx > > objs/nginx: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), statically linked, for FreeBSD 9.2 (902503), not stripped > > $ ldd objs/nginx > > ldd: objs/nginx: not a dynamic ELF executable > This is a known problem with the toolchains on Linux and Apple. Hence > the reason we need a way to use the fully specified archive when we > want static linking. > > In my case, if I distribute on Ubuntu 12.04 LTS, folks will likely get > Ubuntu's version of OpenSSL since the names are the same and binary > compatible. The Ubuntu folks disable TLSv1.1 and TLSv1.2, and they > refuse to enable them (there's a bug report covering it). > > I just don't see how this can be done without enhancing/patching > nginx's configuration subsystem. > > Jeff > > ***** > > $ cat t.c > #include > > int main(int argc, char* argv[]) > { > return (int)SSL_library_init(); > } > > $ gcc -I/usr/local/ssl/include t.c -L/usr/local/ssl/lib -Bstatic -lssl -lcrypto > > $ ldd a.out > linux-vdso.so.1 => (0x00007fff68a2e000) > libssl.so.1.0.0 => not found > libcrypto.so.1.0.0 => not found > libc.so.6 => /lib64/libc.so.6 (0x0000003ebf200000) > /lib64/ld-linux-x86-64.so.2 (0x0000003ebea00000) Just tested on Ubuntu 13.10, and it seems to work this way: gcc t.c -I/usr/local/ssl/include -L/usr/local/ssl/lib \ -Wl,-Bstatic -lssl -lcrypto -Wl,-Bdynamic -ldl That is, using ./configure --with-cc-opt="-I/usr/local/ssl/include" \ --with-ld-opt="-L/usr/local/ssl/lib -Wl,-Bstatic -lssl -lcrypto -Wl,-Bdynamic -ldl" will force use of static versions of libssl and libcrypto, without any changes to nginx configure. Alternatively, trivial workaround is to move away dynamic libraries (or not compile/install them). Or you may ask nginx to compile OpenSSL for you. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 19 15:20:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Dec 2013 19:20:53 +0400 Subject: Proxy_cache or direct static files ? 
In-Reply-To: References: Message-ID: <20131219152053.GR95113@mdounin.ru> Hello! On Thu, Dec 19, 2013 at 09:48:45AM -0500, Larry wrote: > Ok, > > Now I get it right :) > > @Maxim : when you say faster memory storage, doesn't nginx get the result > cached by the os itself ? And so in the ram ? > > What could be faster than that ? Consider you have 100T of data on rotating disks, 32G of memory, and 256G of SSD available. In such a case, using SSD for a cache may be beneficial - not because something is faster than RAM, but because you don't have enough RAM to cache everything. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 19 15:23:42 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Dec 2013 19:23:42 +0400 Subject: Parallel subrequests In-Reply-To: <8ea2df23bf2cf52d4ea8e6bef77bf376.NginxMailingListEnglish@forum.nginx.org> References: <8ea2df23bf2cf52d4ea8e6bef77bf376.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131219152342.GS95113@mdounin.ru> Hello! On Thu, Dec 19, 2013 at 02:59:03AM -0500, honwel wrote: > Hi > > I want to write an HTTP handler (using subrequests) to combine > responses from multiple backends. But "Emiller's Advanced Topics In Nginx > Module Development - 2.3. Sequential subrequests" notes that "Subrequests > might need to access the network, and if so, Nginx needs to return to its > other work while it waits for a response. So we need to check the return > value of ngx_http_subrequest". > How do I issue several POST subrequests in parallel, rather than one by > one after each previous response is received? > Thanks. Just call ngx_http_subrequest() multiple times. Take a look at the source to understand how it works.
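As a rough illustration of the "just call it multiple times" advice, a content handler can fire all of its subrequests before returning. This is a non-compilable sketch, not a complete module: my_combine_handler, my_subrequest_done, and the URIs are made-up placeholders, and all the real work of a module (config, registration, merging the responses in the post-subrequest callback) is omitted.

```c
/* Sketch only: issue several subrequests from one content handler so
 * nginx processes them concurrently, instead of chaining them one
 * after another from a post-subrequest callback. */
static ngx_int_t
my_combine_handler(ngx_http_request_t *r)
{
    static ngx_str_t             uris[] = {
        ngx_string("/backend1"), ngx_string("/backend2")
    };
    ngx_http_request_t          *sr;
    ngx_http_post_subrequest_t  *psr;
    ngx_uint_t                   i;

    for (i = 0; i < 2; i++) {
        psr = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t));
        if (psr == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        psr->handler = my_subrequest_done;  /* runs when this one finishes */
        psr->data = NULL;

        if (ngx_http_subrequest(r, &uris[i], NULL, &sr, psr,
                                NGX_HTTP_SUBREQUEST_IN_MEMORY)
            != NGX_OK)
        {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }
    }

    return NGX_DONE;  /* keep the main request alive until all complete */
}
```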
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Dec 19 16:01:53 2013 From: nginx-forum at nginx.us (honwel) Date: Thu, 19 Dec 2013 11:01:53 -0500 Subject: Parallel subrequests In-Reply-To: <20131219152342.GS95113@mdounin.ru> References: <20131219152342.GS95113@mdounin.ru> Message-ID: Ok,thanks a lot,I will try. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245707,245726#msg-245726 From pasik at iki.fi Thu Dec 19 19:15:15 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Thu, 19 Dec 2013 21:15:15 +0200 Subject: Proxy buffering In-Reply-To: <20131218152113.GG95113@mdounin.ru> References: <20131115105115.GH95765@mdounin.ru> <22d329abe0dc84d44e220b13afeb6350.NginxMailingListEnglish@forum.nginx.org> <20131218104537.GO2924@reaktio.net> <20131218152113.GG95113@mdounin.ru> Message-ID: <20131219191515.GQ2924@reaktio.net> On Wed, Dec 18, 2013 at 07:21:13PM +0400, Maxim Dounin wrote: > Hello! > Hi, > On Wed, Dec 18, 2013 at 12:45:37PM +0200, Pasi K?rkk?inen wrote: > > > On Wed, Dec 18, 2013 at 01:34:37AM -0500, Downchuck wrote: > > > Is there a large technical barrier to implementing this feature? Patches > > > have been available for some time at: http://yaoweibin.cn/patches/ > > > > > > > Hi, > > > > Based on my testing the no_buffer v8 patch works OK with nginx 1.4.x! > > http://yaoweibin.cn/patches/nginx-1.4.2-no_buffer-v8.patch > > > > Maxim: Have you taken a look at the no_buffer-v8 patch? > > > > It would be very nice to get this feature upstreamed to nginx.. > > While it looks mostly working (some parts seems to be missing > though - e.g., using chunked encoding requires appropriate headers > to be sent to upstream, and HTTP/1.1 used), it seems to introduce > too much code duplication. It needs to be addressed somehow > before we'll able to commit it. > OK. > I'll plan to work on this and related problems at the start of > next year. > That sounds good! 
Thanks, -- Pasi > -- > Maxim Dounin > http://nginx.org/ > From igrigorik at gmail.com Thu Dec 19 23:55:05 2013 From: igrigorik at gmail.com (Ilya Grigorik) Date: Thu, 19 Dec 2013 15:55:05 -0800 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: <20131219131518.GL95113@mdounin.ru> References: <20131218163804.GJ95113@mdounin.ru> <20131219131518.GL95113@mdounin.ru> Message-ID: On Thu, Dec 19, 2013 at 2:51 AM, Anton Yuzhaninov wrote: > On 12/19/13 04:50, Alex wrote: > >> I remember reading (I believe it was in your (excellent) book! ;)) that >> upon packet loss, the full TLS record has to be retransmitted. Not cool >> if the TLS record is large and fragmented. So that's indeed a good >> reason to keep TLS records small and preferably within the size of a TCP >> segment. >> > > Why is a TCP retransmit of the single lost packet not enough (in the kernel TCP > stack, which is unaware of TLS records)? > The kernel on the receiver side should wait for this lost packet to be retransmitted, > and return data to the application in the same order as it was sent. > Yep, no need to retransmit the record, just the lost packet... The entire record is buffered on the client until all the packets are available, after that the MAC is verified and the contents are decrypted and finally passed to the application. On Wed, Dec 18, 2013 at 4:50 PM, Alex wrote: > On 2013-12-19 01:04, Ilya Grigorik wrote: > > > FWIW, for these exact reasons the Google frontend servers have been using > TLS record = TCP segment for a few years now... So there is good precedent > to using this as a default. > > Yeah, about that. Google's implementation looks very nice. I keep > looking at it in Wireshark and wonder if there is a way that I could > replicate their implementation with my limited knowledge. It probably > requires tuning of the underlying application as well? Google uses a > 1470 bytes frame size (14 bytes header plus 1456 bytes payload), with > the TLS record fixed at ~ 1411 bytes.
Not sure if a MTU 1470 / MSS 1430 > is any beneficial for TLS communication. > > They optimized the stack to almost always _exactly_ fit a TLS record > into the available space of a TCP segment. If I look at one of my sites, > https://www.zeitgeist.se, with standard MTU/MSS, and the TLS record size > fixed to 1370 bytes + overhead, Nginx would happily use the remaining > space in the TCP record and add part of a second TLS record to it, of > which the rest then fragments into a second TCP segment. I played around > with TCP_CORK (tcp_nopush), but it didn't seem to make any difference. > Right, I ran into the same issue when testing it on this end. The very first record goes into first packet, and then some extra (30~50) bytes of following record are padded into it.. from thereon, most records span two packets. The difference with GFE's is that they flush the packet on each record boundary. Perhaps some nginx guru's can help with this one? :-) > > That said, small records do incur overhead due to extra framing, plus > more CPU cycles (more MACs and framing processing). So, in some instances, > if you're delivering large streams (e.g. video), you may want to use larger > records... Exposing record size as a configurable option would address this. > > Absolutely. Before I said Google uses a 1470 bytes frame size, but that > is not true for example when it comes to streaming from Youtube. Here > they use the standard MTU, and also large, fragmenting TLS records. Actually, it should be even smarter: connection starts with small record sizes to get fast time to first frame (exact same concerns as TTFB for HTML), and then record size is increased as connection opens up. Not sure if that's been officially rolled out 100%, but I do know that this was the plan. The benefit here is there is no application tweaking required. I'd love to see this in nginx as well. On Thu, Dec 19, 2013 at 5:15 AM, Maxim Dounin wrote: > Hello! 
> > In theory, I agree with you, but in practice even while trying to play > with > > this on my own server it appears to be more tricky than that: to > ~reliably > > avoid the CWND overflow I have to set the record size <10k.. There are > also > > differences in how the CWND is increased (byte based vs packet based) > > across different platforms, and other edge cases I'm surely overlooking. > > Also, while this addresses the CWND overflow during slowstart, smaller > > records offer additional benefits as they help minimize impact of > > reordering and packet loss (not eliminate, but reduce its negative impact > > in some cases). > > The problem is that there are even more edge cases with packet-sized > records. Also, in practice with packet-sized records there seems > to be a significant difference in throughput. In my limited testing > packet-sized records resulted in a 2x slowdown on large responses. > Of course the overhead may be somewhat reduced by applying smaller > records deeper in the code, but a) even in theory, there is some > overhead, and b) it doesn't look like a trivial task when using > OpenSSL. Additionally, there may be weird "Nagle vs. delayed ack" > related effects on fast connections; this needs additional > investigation. > As of now, I tend to think that 4k (or 8k on systems with IW10) > buffer size is optimal for latency-sensitive workloads. > If we assume that new systems are using IW10 (which I think is reasonable), then an 8K default is a good / simple middle-ground. Alternatively, what are your thoughts on making this adjustment dynamically? Start the connection with a small record size, then bump it to a higher limit? In theory, that would also avoid the extra config flag. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Fri Dec 20 00:21:53 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Dec 2013 04:21:53 +0400 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: References: <20131218163804.GJ95113@mdounin.ru> <20131219131518.GL95113@mdounin.ru> Message-ID: <20131220002153.GF95113@mdounin.ru> Hello! On Thu, Dec 19, 2013 at 03:55:05PM -0800, Ilya Grigorik wrote: > > As of now, I tend to think that 4k (or 8k on systems with IW10) > > buffer size is optimal for latency-sensitive workloads. > > > > If we assume that new systems are using IW10 (which I think is reasonable), > then an 8K default is a good / simple middle-ground. > > Alternatively, what are your thoughts on making this adjustment > dynamically? Start the connection with small record size, then bump it to > higher limit? In theory, that would also avoid the extra config flag. In theory, this may be interesting, and I thought about it too. But I don't think it will stop people from asking us to add a configuration directive anyway, and if 4k/8k works fine, there should be no need to add extra complexity here. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Dec 20 02:41:54 2013 From: nginx-forum at nginx.us (justin) Date: Thu, 19 Dec 2013 21:41:54 -0500 Subject: spdy protocol version 3 support Message-ID: <3d55d0fbbbbc54873d170725f2994664.NginxMailingListEnglish@forum.nginx.org> Any update or ETA on when we can expect spdy protocol version 3 in the 1.5.X branch? Thanks.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245732,245732#msg-245732 From andrew at nginx.com Fri Dec 20 02:43:37 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Thu, 19 Dec 2013 18:43:37 -0800 Subject: spdy protocol version 3 support In-Reply-To: <3d55d0fbbbbc54873d170725f2994664.NginxMailingListEnglish@forum.nginx.org> References: <3d55d0fbbbbc54873d170725f2994664.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Dec 19, 2013, at 6:41 PM, "justin" wrote: > Any update or eta on when we can expect spdy protocol version 3 in the 1.5.X > branch? Pretty soon, stay tuned and thanks for the patience! > Thanks. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245732,245732#msg-245732 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From yatiohi at ideopolis.gr Fri Dec 20 12:36:43 2013 From: yatiohi at ideopolis.gr (Christos Trochalakis) Date: Fri, 20 Dec 2013 14:36:43 +0200 Subject: spdy protocol version 3 support In-Reply-To: References: <3d55d0fbbbbc54873d170725f2994664.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131220123643.GA28616@luke.ws.skroutz.gr> On Thu, Dec 19, 2013 at 06:43:37PM -0800, Andrew Alexeev wrote: >On Dec 19, 2013, at 6:41 PM, "justin" wrote: > >> Any update or eta on when we can expect spdy protocol version 3 in the 1.5.X >> branch? > >Pretty soon, stay tuned and thanks for the patience! > Please also consider backporting spdy/3 (or spdy/3.1) to the stable 1.4.x branch. I am asking this because firefox and chrome will probably drop spdy/2 support around the end of February[0], in which case stable nginx servers will be effectively spdy incapable. https://groups.google.com/d/topic/spdy-dev/XDudMZSq3e4/discussion From vbart at nginx.com Fri Dec 20 13:16:33 2013 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Fri, 20 Dec 2013 17:16:33 +0400 Subject: spdy protocol version 3 support In-Reply-To: <20131220123643.GA28616@luke.ws.skroutz.gr> References: <3d55d0fbbbbc54873d170725f2994664.NginxMailingListEnglish@forum.nginx.org> <20131220123643.GA28616@luke.ws.skroutz.gr> Message-ID: <3815280.kisnXL3jAh@vbart-laptop> On Friday 20 December 2013 14:36:43 Christos Trochalakis wrote: > On Thu, Dec 19, 2013 at 06:43:37PM -0800, Andrew Alexeev wrote: > >On Dec 19, 2013, at 6:41 PM, "justin" wrote: > >> Any update or eta on when we can expect spdy protocol version 3 in the > >> 1.5.X branch? > > > >Pretty soon, stay tuned and thanks for the patience! > > Please also consider backporting spdy/3 (or spdy/3.1) to the stable 1.4.x > branch. I am asking this because firefox and chrome will probably drop > spdy/2 support around the end of February[0], in which case stable nginx > servers will be effectively spdy incapable. > > https://groups.google.com/d/topic/spdy-dev/XDudMZSq3e4/discussion > The mainline branch is also stable and reliable for production, and if you need new features then you should use it; that is why it exists. Furthermore, SPDY in the current mainline version is actually better and has fewer bugs, since the stable branch receives only critical fixes. So, if you are using SPDY I recommend updating to 1.5.8 without waiting for spdy/3. As you have probably figured out, 1.4.x will not get spdy/3 support. There are no reasons for backporting. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri Dec 20 14:54:21 2013 From: nginx-forum at nginx.us (gglamenace) Date: Fri, 20 Dec 2013 09:54:21 -0500 Subject: nginx variable to know the number of active connections Message-ID: <03b35fbb4df434dc9f4a1f8df2cb5db8.NginxMailingListEnglish@forum.nginx.org> Hello, Is there a way to get the number of active connections in order to activate or not a cache mechanism when nginx is in a high-traffic period?
Regards Jerome Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245738,245738#msg-245738 From mdounin at mdounin.ru Fri Dec 20 15:48:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Dec 2013 19:48:38 +0400 Subject: nginx variable to know the number of active connections In-Reply-To: <03b35fbb4df434dc9f4a1f8df2cb5db8.NginxMailingListEnglish@forum.nginx.org> References: <03b35fbb4df434dc9f4a1f8df2cb5db8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131220154837.GI95113@mdounin.ru> Hello! On Fri, Dec 20, 2013 at 09:54:21AM -0500, gglamenace wrote: > Hello, > > Is there a way to get the number of active connections in order to activate > or not a cache mechanism when nginx is in a high-traffic period? The following variables are available in the ngx_http_stub_status_module: $connections_active, $connections_reading, $connections_writing and $connections_waiting. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Dec 20 16:54:19 2013 From: nginx-forum at nginx.us (gglamenace) Date: Fri, 20 Dec 2013 11:54:19 -0500 Subject: nginx variable to know the number of active connections In-Reply-To: <20131220154837.GI95113@mdounin.ru> References: <20131220154837.GI95113@mdounin.ru> Message-ID: Oh great, so I can decide whether to use a cache mechanism (fastcgi_cache_path, or the Lua module srcache) for my PHP upstream block simply by checking, for example, $connections_waiting on each request. I don't know exactly what the right threshold for $connections_waiting would be; I have to do some tests.
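A sketch of the idea discussed in this thread (untested; the threshold, cache path, zone name, and PHP location are all illustrative): caching for a PHP upstream can be switched on only under load by feeding $connections_waiting through a "map" and using the result in fastcgi_cache_bypass / fastcgi_no_cache:

```nginx
# Untested sketch: serve and populate the FastCGI cache only when the
# number of waiting connections is large. The 3-digit regex (>= 100
# waiting connections) is an arbitrary example threshold.
map $connections_waiting $skip_cache {
    default         1;   # normal load: bypass the cache
    "~^[0-9]{3,}$"  0;   # high load: use the cache
}

fastcgi_cache_path /var/cache/nginx/php keys_zone=PHP:10m;

server {
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;

        fastcgi_cache        PHP;
        fastcgi_cache_key    $scheme$host$request_uri;
        fastcgi_cache_valid  200 10s;   # short TTL for burst absorption
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache     $skip_cache;
    }
}
```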
thank you Jérôme Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245738,245741#msg-245741 From igrigorik at gmail.com Fri Dec 20 19:08:23 2013 From: igrigorik at gmail.com (Ilya Grigorik) Date: Fri, 20 Dec 2013 11:08:23 -0800 Subject: Optimizing NGINX TLS Time To First Byte (TTTFB) In-Reply-To: <20131220002153.GF95113@mdounin.ru> References: <20131218163804.GJ95113@mdounin.ru> <20131219131518.GL95113@mdounin.ru> <20131220002153.GF95113@mdounin.ru> Message-ID: On Thu, Dec 19, 2013 at 4:21 PM, Maxim Dounin wrote: > > Alternatively, what are your thoughts on making this adjustment > > dynamically? Start the connection with small record size, then bump it to > > higher limit? In theory, that would also avoid the extra config flag. > > In theory, this may be interesting, and I thought about it too. > But I don't think it will stop people from asking us to add a > configuration directive anyway, and if 4k/8k will work fine - > there should be no need to add extra complexity here. For others following the discussion, followed up on: http://mailman.nginx.org/pipermail/nginx-devel/2013-December/004703.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Fri Dec 20 20:19:40 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Sat, 21 Dec 2013 00:19:40 +0400 Subject: nginx-1.5.8 In-Reply-To: <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> References: <20131217140754.GQ95113@mdounin.ru> <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52B4A65C.50402@nginx.com> On 12/19/13 1:59 PM, athalas wrote: > Where would we find documentation on the "fastopen" parameter?
> http://nginx.org/r/listen -- Maxim Konovalov http://nginx.com From alex at zeitgeist.se Fri Dec 20 21:06:59 2013 From: alex at zeitgeist.se (Alex) Date: Fri, 20 Dec 2013 22:06:59 +0100 Subject: nginx-1.5.8 In-Reply-To: <52B4A65C.50402@nginx.com> References: <20131217140754.GQ95113@mdounin.ru> <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> <52B4A65C.50402@nginx.com> Message-ID: On 2013-12-20 21:19, Maxim Konovalov wrote: > On 12/19/13 1:59 PM, athalas wrote: >> Where would we find documentation on the "fastopen" parameter? >> > http://nginx.org/r/listen In the documentation above it's pointed out that the server needs to tolerate the possibility of receiving duplicate initial SYN segments. I am not exactly sure on what level I would ensure that the server performs properly in this regard. According to the draft on TFO (http://tools.ietf.org/html/draft-cheng-tcpm-fastopen-00.html), 2.1.: Rather than trying to capture all the dubious SYN packets to make TFO 100% compatible with TCP semantics, we've made a design decision early on to accept old SYN packets with data, i.e., to allow TFO for a class of applications that are tolerant of duplicate SYN packets with data, e.g., idempotent or query type transactions. We believe this is the right design trade-off balancing complexity with usefulness. There is a large class of applications that can tolerate dubious transaction requests. For this reason, TFO MUST be disabled by default, and only enabled explicitly by applications on a per service port basis. Wouldn't it be the responsibility of nginx (the application) to handle duplicate SYNs? From nginx-forum at nginx.us Fri Dec 20 23:20:18 2013 From: nginx-forum at nginx.us (justin) Date: Fri, 20 Dec 2013 18:20:18 -0500 Subject: Using 127.0.0.1 in resolver Message-ID: <37c72e84ed74ef4d2aa54f10444071e1.NginxMailingListEnglish@forum.nginx.org> Using: resolver 127.0.0.1 valid=300s; Does not work. 
I assume this would simply use the DNS servers listed in /etc/resolv.conf? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245748,245748#msg-245748 From contact at jpluscplusm.com Fri Dec 20 23:39:39 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 20 Dec 2013 23:39:39 +0000 Subject: Using 127.0.0.1 in resolver In-Reply-To: <37c72e84ed74ef4d2aa54f10444071e1.NginxMailingListEnglish@forum.nginx.org> References: <37c72e84ed74ef4d2aa54f10444071e1.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 20 December 2013 23:20, justin wrote: > Using: > > resolver 127.0.0.1 valid=300s; > > Does not work. I assume this would simply use the DNS servers listed in > /etc/resolv.conf? Your assumption is wrong. You'd need to be running a local DNS resolver for that config to work. From list_nginx at bluerosetech.com Sat Dec 21 00:12:45 2013 From: list_nginx at bluerosetech.com (Darren Pilgrim) Date: Fri, 20 Dec 2013 16:12:45 -0800 Subject: Using 127.0.0.1 in resolver In-Reply-To: <37c72e84ed74ef4d2aa54f10444071e1.NginxMailingListEnglish@forum.nginx.org> References: <37c72e84ed74ef4d2aa54f10444071e1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52B4DCFD.7030001@bluerosetech.com> On 12/20/2013 3:20 PM, justin wrote: > Using: > > resolver 127.0.0.1 valid=300s; > > Does not work. I assume this would simply use the DNS servers listed in > /etc/resolv.conf? Thanks. The resolver directive tells nginx to do its own DNS lookups, bypassing the system name lookup call (and thus /etc/resolv.conf) entirely. From nginx-forum at nginx.us Sat Dec 21 06:59:41 2013 From: nginx-forum at nginx.us (Larry) Date: Sat, 21 Dec 2013 01:59:41 -0500 Subject: Proxy_cache or direct static files ?
In-Reply-To: <20131219152053.GR95113@mdounin.ru> References: <20131219152053.GR95113@mdounin.ru> Message-ID: Makes sense Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245544,245752#msg-245752 From ru at nginx.com Sat Dec 21 09:53:20 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Sat, 21 Dec 2013 13:53:20 +0400 Subject: nginx-1.5.8 In-Reply-To: References: <20131217140754.GQ95113@mdounin.ru> <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> <52B4A65C.50402@nginx.com> Message-ID: <20131221095320.GH55641@lo0.su> On Fri, Dec 20, 2013 at 10:06:59PM +0100, Alex wrote: > On 2013-12-20 21:19, Maxim Konovalov wrote: > > On 12/19/13 1:59 PM, athalas wrote: > >> Where would we find documentation on the "fastopen" parameter? > >> > > http://nginx.org/r/listen > > In the documentation above it's pointed out that the server needs to > tolerate the possibility of receiving duplicate initial SYN segments. I > am not exactly sure on what level I would ensure that the server > performs properly in this regard. According to the draft on TFO > (http://tools.ietf.org/html/draft-cheng-tcpm-fastopen-00.html), 2.1.: > > Rather than trying to capture all the dubious SYN packets to make TFO > 100% compatible with TCP semantics, we've made a design decision > early on to accept old SYN packets with data, i.e., to allow TFO for > a class of applications that are tolerant of duplicate SYN packets > with data, e.g., idempotent or query type transactions. We believe > this is the right design trade-off balancing complexity with > usefulness. There is a large class of applications that can tolerate > dubious transaction requests. > > For this reason, TFO MUST be disabled by default, and only enabled > explicitly by applications on a per service port basis. > > Wouldn't it be the responsibility of nginx (the application) to handle > duplicate SYNs? It's the property of the Web application, not the server (nginx). 
Please see section 3.1 of http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37517.pdf for a less formal explanation of when it's safe to enable TFO: : We found that to manage stale or duplicate SYN packets would : add significant complexity to our design, and thus we decided : to accept old SYN packets with data in some rare cases; this : decision restricts the use of TFO to applications that are : tolerant to duplicate connection / data requests. Since a : wide variety of applications can tolerate duplicate SYN packets : with data (e.g. those that are idempotent or perform query-style : transactions), we believe this constitutes an appropriate tradeoff. From ru at nginx.com Sat Dec 21 14:57:32 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Sat, 21 Dec 2013 18:57:32 +0400 Subject: Using 127.0.0.1 in resolver In-Reply-To: <52B4DCFD.7030001@bluerosetech.com> References: <37c72e84ed74ef4d2aa54f10444071e1.NginxMailingListEnglish@forum.nginx.org> <52B4DCFD.7030001@bluerosetech.com> Message-ID: <20131221145732.GI55641@lo0.su> On Fri, Dec 20, 2013 at 04:12:45PM -0800, Darren Pilgrim wrote: > On 12/20/2013 3:20 PM, justin wrote: > > Using: > > > > resolver 127.0.0.1 valid=300s; > > > > Does not work. I assume this would simply uses the DNS servers listed in > > /etc/resolv.conf? Thanks. > > The resolver directive tells NSD to do its own DNS lookups, bypassing > the system name lookup call (and thus /etc/resolv.conf) entirely. That's not quite true. Resolving that takes place during configuration parsing uses the system resolver. Resolving that takes place at run time uses the DNS servers from the "resolver" directive. Please see http://nginx.org/r/resolver and http://nginx.org/r/ssl_stapling for details. 
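To illustrate the run-time vs. configuration-time distinction described above, here is an untested sketch (hostname, port, and locations are illustrative): the literal name in the first proxy_pass is resolved once at startup via the system resolver, while the variable form is re-resolved at run time through the "resolver" servers.

```nginx
# Only consulted for run-time lookups; requires an actual DNS server
# listening on 127.0.0.1.
resolver 127.0.0.1 valid=300s;

location /static-name/ {
    # Resolved once, while parsing the config, via the system resolver.
    proxy_pass http://app.example.com:8080;
}

location /dynamic-name/ {
    # A variable forces run-time resolution via the "resolver" servers,
    # re-resolving when the cached answer expires.
    set $backend app.example.com;
    proxy_pass http://$backend:8080;
}
```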
From nginx-forum at nginx.us Sun Dec 22 12:12:05 2013 From: nginx-forum at nginx.us (bahaa) Date: Sun, 22 Dec 2013 07:12:05 -0500 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: <201306241821.10051.vbart@nginx.com> References: <201306241821.10051.vbart@nginx.com> Message-ID: I noticed the same behavior when sending long running requests (>10secs) from chrome with nginx running as a reverse proxy with spdy enabled. What happens is that chrome sends a PING frame, then sends the request, then if it doesn't get a PING reply it will close the connection and resend the request. You can see the handling of PING failure here: http://git.chromium.org/gitweb/?p=chromium/src/net.git;a=blob;f=http/http_network_transaction.cc;h=a6d50699c0f1854036772b67c2a0ea1b1e62ae63;hb=HEAD#l1409 The reason chrome isn't getting a PING reply within 10secs is because nginx queues the ping reply and doesn't send it until it gets a response from upstream. I added a call to ngx_http_spdy_send_output_queue() in ngx_http_spdy_state_ping() to send the PING reply immediately and it solved the problem. Bahaa Aidi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240278,245759#msg-245759 From yaoweibin at gmail.com Sun Dec 22 12:56:35 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Sun, 22 Dec 2013 20:56:35 +0800 Subject: [PATCH] SPDY: support for SPDY v3 Message-ID: Hi, We have just implemented support for SPDY v3 in nginx, with flow control (upload and download) and a switch option between SPDY v2 and SPDY v3. This patch is for Nginx-1.5.8. The directives we added are: spdy_version syntax: spdy_version [2|3] default: spdy_version 3 context: http, server Specify the SPDY protocol version to use. spdy_flow_control syntax: spdy_flow_control on|off default: spdy_flow_control on context: http, server Turn SPDY flow control on or off.
spdy_init_recv_window_size syntax: spdy_init_recv_window_size size default: spdy_init_recv_window_size 64k context: http, server Specify the receive window size for SPDY. By default, it is 64K. A WINDOW_UPDATE frame is sent each time half of the window size has been received. Hope this patch is helpful. Have fun :-) Thank you. -- Weibin Yao Developer @ Web Platform Team of Taobao -------------- next part -------------- A non-text attachment was scrubbed... Name: SPDY_v3.patch Type: application/octet-stream Size: 139773 bytes Desc: not available URL: From alex at zeitgeist.se Sun Dec 22 15:11:55 2013 From: alex at zeitgeist.se (Alex) Date: Sun, 22 Dec 2013 16:11:55 +0100 Subject: nginx-1.5.8 In-Reply-To: <20131221095320.GH55641@lo0.su> References: <20131217140754.GQ95113@mdounin.ru> <75245310bdb4425d9cb2033d03b41112.NginxMailingListEnglish@forum.nginx.org> <52B4A65C.50402@nginx.com> <20131221095320.GH55641@lo0.su> Message-ID: On 2013-12-21 10:53, Ruslan Ermilov wrote: > It's the property of the Web application, not the server (nginx). > > Please see section 3.1 of > http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37517.pdf > for a less formal explanation of when it's safe to enable TFO: Thanks for the explanation and the link, Ruslan. I somehow had the OSI model (its application layer) in mind when I read about the restriction. From chigga101 at gmail.com Sun Dec 22 17:13:22 2013 From: chigga101 at gmail.com (Matthew Ngaha) Date: Sun, 22 Dec 2013 17:13:22 +0000 Subject: trying to understand fastcgi Message-ID: I was trying to understand the fastcgi forwarding. The example shows: server { server_name .website.com; listen 80; root /home/website/www; index index.html; location / { fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include fastcgi_params; } } in SCRIPT_FILENAME, does $document_root == root /home/website/www; ?
And what value is usually stored in (SCRIPT_NAME) $fastcgi_script_name? I don't understand what script SCRIPT_NAME refers to. Am I supposed to set it, or does it set itself? It also says QUERY_STRING $query_string; is needed for configuring fastcgi, but this example clearly doesn't use it. Is the idea to have a fastcgi_param entry for only the variables I want to include, or does this config simply define them with a default value? From nginx-forum at nginx.us Sun Dec 22 22:47:15 2013 From: nginx-forum at nginx.us (justin) Date: Sun, 22 Dec 2013 17:47:15 -0500 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: References: <201306241821.10051.vbart@nginx.com> Message-ID: <7323a718bc94d4bc56042fb64e43d8de.NginxMailingListEnglish@forum.nginx.org> Thanks for the response Bahaa. Great information. Would you say the behavior that nginx queues the ping reply and doesn't send it until it gets a response from upstream is a bug? We've had to disable SPDY until the double request issue gets resolved. Hoping that SPDY protocol version 3 support in nginx will fix it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240278,245775#msg-245775 From nginx-forum at nginx.us Sun Dec 22 23:36:04 2013 From: nginx-forum at nginx.us (meto) Date: Sun, 22 Dec 2013 18:36:04 -0500 Subject: No SPDY support in the official repository packages In-Reply-To: <6224078.WFQ0l5ZoG9@vbart-laptop> References: <6224078.WFQ0l5ZoG9@vbart-laptop> Message-ID: <000e6813fb1e7069ee13617faf43f2d1.NginxMailingListEnglish@forum.nginx.org> Valentin V. Bartenev Wrote: ------------------------------------------------------- > On Monday 16 December 2013 08:08:31 kustodian wrote: > > Hi, > > > > Nginx 1.4.0 added support for SPDY to the stable version, so my > question is > > why is SPDY not enabled in the packages from the Nginx official > repository? > > > > I'm explicitly talking about the Centos packages, I haven't tried > others. 
> > > > SPDY support is enabled for systems where OpenSSL 1.0.1+ is available. Since RHEL 6.5, OpenSSL 1.0.1e is supported. Could you compile the next packages with SPDY enabled? > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245553,245778#msg-245778 From nginx-forum at nginx.us Mon Dec 23 03:51:46 2013 From: nginx-forum at nginx.us (bahaa) Date: Sun, 22 Dec 2013 22:51:46 -0500 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: <7323a718bc94d4bc56042fb64e43d8de.NginxMailingListEnglish@forum.nginx.org> References: <201306241821.10051.vbart@nginx.com> <7323a718bc94d4bc56042fb64e43d8de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8db4cfb19c02c573c79114265c1383a9.NginxMailingListEnglish@forum.nginx.org> I think it's a bug. Quote from the spdy2 spec: "Receivers of a PING frame should send an identical frame to the sender as soon as possible (if there is other pending data waiting to be sent, PING should take highest priority)." Bahaa Aidi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240278,245777#msg-245777 From nginx-forum at nginx.us Mon Dec 23 06:13:14 2013 From: nginx-forum at nginx.us (TECK) Date: Mon, 23 Dec 2013 01:13:14 -0500 Subject: Redirect loop fixed, I need a better rule format Message-ID: Hello everyone, I have a little problem on hand, related to a redirect loop for a specific location: server { listen 192.168.1.2:443 spdy ssl default_server; ... location ^~ /alpha { return 301 https://www.domain.com$request_uri; } ... } I would like to redirect URLs of this format: https://www.domain.com/alpha/delta/[more of $request_uri] To: https://www.domain.com/delta/[more of $request_uri] I can access the https://www.domain.com/delta/... directory fine. 
When the GET is performed on https://www.domain.com/alpha/, it tries to use the old REQUEST URI which is obviously wrong. I solved the issue this way: location ^~ /alpha { location /alpha { return 301 https://www.domain.com/; } location ~ ^/alpha/(.+)$ { return 301 https://www.domain.com/$1; } } Is there a better (more compact) way to achieve the same results? With the rules listed above, I tell Nginx to redirect /alpha or /alpha/ to /. If the $request_uri contains something after /alpha, I grab the part of $request_uri after /alpha and push it to /. Thank you for helping me make this configuration part better. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245780,245780#msg-245780 From m.hodenpijl at 1hippo.com Mon Dec 23 10:06:56 2013 From: m.hodenpijl at 1hippo.com (Martijn Hodenpijl) Date: Mon, 23 Dec 2013 11:06:56 +0100 Subject: Redirect loop in combination with https and apache Message-ID: Hi, We have a setup with tomcat/apache and nginx. When a redirect occurs from the application from https to http, the nginx gets trapped in a redirect loop. In the apache configuration we have this setting: ExpiresActive On ExpiresDefault "access plus 1 month" ExpiresByType image/gif "access plus 1 year" ExpiresByType image/jpeg "access plus 1 year" ExpiresByType image/png "access plus 1 year" .... more mimetypes The nginx configuration has this location location / { proxy_pass http://def-t-site1/; proxy_http_version 1.1; proxy_hide_header Expires; proxy_hide_header Last-Modified; proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_cache http_def; proxy_cache_key $scheme://$host$uri$is_args$args; # proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; add_header Cache-Control "public"; add_header X-Cache-Status $upstream_cache_status; add_header X-Via $hostname; } a similar setting we have for 443 port (https). 
If we remove ExpiresDefault "access plus 1 month" from apache, the redirect loop does not occur. The cache of nginx uses a TTL of 1 month after the redirect occurs. This causes a redirect loop, since the https request is cached as well. So far, we tried several things - proxy_cache_valid 200 301 0m; No change in the TTL. The redirect loop is not solved, and the TTL is still a month. Then we tried to configure the expire headers in nginx. That solves the redirect, but unfortunately the expire headers are not set. We tried for instance: if ($upstream_http_content_type ~ "image/jpeg") { expires 2m; } or map $upstream_http_content_type $new_cache_control_header_val { default $upstream_http_cache_control; "~*image/jpeg" "max-age=120, must-revalidate"; } but these settings did not have any effect on the TTL of the images. So, is there a way to avoid the redirect loop and set the expire header per mimetype in nginx ? Thanks! -- Martijn Gijsberti Hodenpijl Web Developer Hippo -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Dec 23 14:10:40 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 23 Dec 2013 18:10:40 +0400 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: <8db4cfb19c02c573c79114265c1383a9.NginxMailingListEnglish@forum.nginx.org> References: <201306241821.10051.vbart@nginx.com> <7323a718bc94d4bc56042fb64e43d8de.NginxMailingListEnglish@forum.nginx.org> <8db4cfb19c02c573c79114265c1383a9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3047263.xWiBAFAj8A@vbart-laptop> On Sunday 22 December 2013 22:51:46 bahaa wrote: > I think it's a bug. Quote from spdy2 spec: > "Receivers of a PING frame should send an identical frame to the sender as > soon as possible (if there is other pending data waiting to be sent, PING > should take highest priority)." 
> The actual bug is that some control frames (not only the PING one) can stay unsent in the output queue for an undefined time after processing of a read event. I'm going to look at it soon. wbr, Valentin V. Bartenev From iptablez at yahoo.com Tue Dec 24 02:57:47 2013 From: iptablez at yahoo.com (Indo Php) Date: Mon, 23 Dec 2013 18:57:47 -0800 (PST) Subject: How to delete cache based on expires headers? In-Reply-To: <1387426860.9588.YahooMailNeo@web142305.mail.bf1.yahoo.com> References: <1387426860.9588.YahooMailNeo@web142305.mail.bf1.yahoo.com> Message-ID: <1387853867.39367.YahooMailNeo@web140403.mail.bf1.yahoo.com> Hello.. Can somebody help me on this? Thank you before On Thursday, December 19, 2013 11:21 AM, Indo Php wrote: Hi I'm using proxy_cache to mirror my files with the configuration below proxy_cache_path /var/cache/nginx/image levels=1:2 keys_zone=one:10m inactive=7d max_size=100g; Our backend server has the expires header set to 600secs Is it possible for us to also delete the cache files located at /var/cache/nginx/image based on the backend expires header? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From magic.drums at gmail.com Tue Dec 24 15:20:23 2013 From: magic.drums at gmail.com (magic.drums at gmail.com) Date: Tue, 24 Dec 2013 12:20:23 -0300 Subject: How to delete cache based on expires headers? In-Reply-To: <1387853867.39367.YahooMailNeo@web140403.mail.bf1.yahoo.com> References: <1387426860.9588.YahooMailNeo@web142305.mail.bf1.yahoo.com> <1387853867.39367.YahooMailNeo@web140403.mail.bf1.yahoo.com> Message-ID: Try changing the file name or the file size. Regards, On Mon, Dec 23, 2013 at 11:57 PM, Indo Php wrote: > Hello.. > > Can somebody help me on this? 
> > Thank you before > > On Thursday, December 19, 2013 11:21 AM, Indo Php > wrote: > Hi > > I'm using proxy_cache to mirror my files with the configuration below > > proxy_cache_path /var/cache/nginx/image levels=1:2 keys_zone=one:10m > inactive=7d max_size=100g; > > Our backend server has the expires header set to 600secs > > Is it possible for us to also delete the cache files located at /var/cache/nginx/image > based on the backend expires header? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Victor Pereira Fono : +56981882989 Linkedin : http://www.linkedin.com/in/magicdrums -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Tue Dec 24 15:28:03 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Tue, 24 Dec 2013 16:28:03 +0100 Subject: How to delete cache based on expires headers? In-Reply-To: <1387853867.39367.YahooMailNeo@web140403.mail.bf1.yahoo.com> References: <1387426860.9588.YahooMailNeo@web142305.mail.bf1.yahoo.com> <1387853867.39367.YahooMailNeo@web140403.mail.bf1.yahoo.com> Message-ID: Why do you want to do this? nginx can manage expiration/cache-control headers all by itself. Once the defined max-age has elapsed, it returns an upstream status of EXPIRED until it fetches a fresh page from upstream. Deleting won't buy you anything in terms of content freshness. ----appa On Tue, Dec 24, 2013 at 3:57 AM, Indo Php wrote: > Hello.. > > Can somebody help me on this? 
> > Thank you before > > On Thursday, December 19, 2013 11:21 AM, Indo Php > wrote: > Hi > > I'm using proxy_cache to mirror my files with the configuration below > > proxy_cache_path /var/cache/nginx/image levels=1:2 keys_zone=one:10m > inactive=7d max_size=100g; > > Our backend server has the expires header set to 600secs > > Is it possible for us to also delete the cache files located at /var/cache/nginx/image > based on the backend expires header? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 25 07:14:41 2013 From: nginx-forum at nginx.us (honwel) Date: Wed, 25 Dec 2013 02:14:41 -0500 Subject: Parallel subrequests In-Reply-To: <8ea2df23bf2cf52d4ea8e6bef77bf376.NginxMailingListEnglish@forum.nginx.org> References: <8ea2df23bf2cf52d4ea8e6bef77bf376.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8d6ae7388dd7df4679e6ce827c36c5d9.NginxMailingListEnglish@forum.nginx.org> Hi, Maxim It seems that ngx_http_subrequest() supports only the GET method: sr->method = NGX_HTTP_GET; sr->http_version = r->http_version; ............... sr->method_name = ngx_http_core_get_method; Currently in my application, NGINX receives a POST request and issues multiple parallel subrequests (via proxy_pass locations) to fetch combined content, but ngx_http_subrequest() does not support the POST method. What do I need to pay attention to if I just modify it to: sr->method = NGX_HTTP_POST; sr->http_version = r->http_version; ............... sr->method_name = r->method_name; ? Thanks a lot! 
best regards honwel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245707,245819#msg-245819 From maxim at nginx.com Wed Dec 25 09:54:06 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 25 Dec 2013 13:54:06 +0400 Subject: [PATCH] SPDY: support for SPDY v3 In-Reply-To: References: Message-ID: <52BAAB3E.4060507@nginx.com> Hi! On 12/22/13 4:56 PM, Weibin Yao wrote: > Hi, > > We have just implemented the support for SPDY v3 in nginx, with flow > control (upload and download), and a switch option between SPDY v2 and > SPDY v3. > > This patch is for Nginx-1.5.8. [...] Thanks for the submission! We already have spdy/3.1 support in our roadmap (http://trac.nginx.org/nginx/roadmap) and are planning to finish this project by the end of January, 2014. During this work we will evaluate the Taobao patch too. Thanks again, Maxim Konovalov -- Maxim Konovalov http://nginx.com From nginx-forum at nginx.us Wed Dec 25 13:55:23 2013 From: nginx-forum at nginx.us (fred) Date: Wed, 25 Dec 2013 08:55:23 -0500 Subject: [PATCH] SPDY: support for SPDY v3 In-Reply-To: References: Message-ID: Can this patch be used with google's pagespeed? ---- -o objs/addon/src/ngx_fetch.o \ ../ngx_pagespeed-release-1.6.29.7-beta/src/ngx_fetch.cc ../ngx_pagespeed-release-1.6.29.7-beta/src/ngx_fetch.cc: In member function 'bool net_instaweb::NgxFetch::Init()': ../ngx_pagespeed-release-1.6.29.7-beta/src/ngx_fetch.cc:167:22: error: 'ngx_resolver_ctx_t' has no member named 'type' ../ngx_pagespeed-release-1.6.29.7-beta/src/ngx_fetch.cc: In static member function 'static void net_instaweb::NgxFetch::NgxFetchResolveDone(ngx_resolver_ctx_t*)': ../ngx_pagespeed-release-1.6.29.7-beta/src/ngx_fetch.cc:306:56: error: cannot convert 'ngx_addr_t' to 'in_addr_t {aka unsigned int}' 
in assignment make[1]: *** [objs/addon/src/ngx_fetch.o] Error 1 make[1]: Leaving directory `/tmp/nginx-1.5.8' make: *** [build] Error 2 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245760,245829#msg-245829 From nginx-forum at nginx.us Wed Dec 25 14:10:16 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 25 Dec 2013 09:10:16 -0500 Subject: [PATCH] SPDY: support for SPDY v3 In-Reply-To: References: Message-ID: <878206d39d564dbea80d060f23d6fcd5.NginxMailingListEnglish@forum.nginx.org> fred Wrote: ------------------------------------------------------- > This patch can use with google's pagespeed. > The API for resolver has changed, see; https://github.com/chaoslawful/lua-nginx-module/commit/f364c82039d8d76efa0767a7fd909935d5c40a65 for code that fixes this for pre 1.5.8 and post 1.5.8 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245760,245831#msg-245831 From agentzh at gmail.com Wed Dec 25 20:21:52 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 25 Dec 2013 12:21:52 -0800 Subject: Parallel subrequests In-Reply-To: <8d6ae7388dd7df4679e6ce827c36c5d9.NginxMailingListEnglish@forum.nginx.org> References: <8ea2df23bf2cf52d4ea8e6bef77bf376.NginxMailingListEnglish@forum.nginx.org> <8d6ae7388dd7df4679e6ce827c36c5d9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Tue, Dec 24, 2013 at 11:14 PM, honwel wrote: > but ngx_http_subrequest() not supports POST method, so, What > do I need to pay attention, if i just modify to : > sr->method = NGX_HTTP_POST > sr->http_version = r->http_version; > ............... > > sr->method_name = r->method_name; > ??? > Yeah, you can do something like that. My Nginx modules ngx_echo, ngx_srcache, and ngx_lua all support custom method names in their subrequests. See https://github.com/agentzh/echo-nginx-module#echo_subrequest_async https://github.com/agentzh/srcache-nginx-module#srcache_fetch https://github.com/chaoslawful/lua-nginx-module#ngxlocationcapture_multi Merry Christmas! 
Best regards, -agentzh From nginx-forum at nginx.us Thu Dec 26 03:49:31 2013 From: nginx-forum at nginx.us (honwel) Date: Wed, 25 Dec 2013 22:49:31 -0500 Subject: Parallel subrequests In-Reply-To: References: Message-ID: <47799d5199ff90ba7359fb9a8a1534f7.NginxMailingListEnglish@forum.nginx.org> ok, thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245707,245835#msg-245835 From shahzaib.cb at gmail.com Thu Dec 26 11:46:42 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 26 Dec 2013 16:46:42 +0500 Subject: Connection timeout for upstream server !! Message-ID: We're having the following issue with nginx and php-fpm; please have a look at the logs: 2013/12/26 16:43:24 [error] 9906#0: *37001 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.252.214.240, server: search.com, request: "GET /search/es/index_video.php?videoid=1593251&is_updated=yes HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "search.com" We also tried increasing the proxy timeout to 300 in nginx. When we telnet to port 9000, the connection is automatically closed after 4~5 seconds: Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. Connection closed by foreign host. 
we've also found slow logs in php-fpm : [root at search ~]# tail -30 /var/log/php-fpm/search-slow.log [26-Dec-2013 16:44:48] [pool search] pid 10303 script_filename = /home/search/public_html/search/es/index_video.php [0x00000000015e0960] curl_exec() /home/search/public_html/search/es/index_video.php:90 [26-Dec-2013 16:45:11] [pool search] pid 10302 script_filename = /home/search/public_html/search/es/index_video.php [0x00000000015e0960] curl_exec() /home/search/public_html/search/es/index_video.php:90 [26-Dec-2013 16:45:18] [pool search] pid 10320 script_filename = /home/search/public_html/search/es/index_video.php [0x00000000015d8950] curl_exec() /home/search/public_html/search/es/index_video.php:90 [26-Dec-2013 16:45:28] [pool search] pid 10311 script_filename = /home/search/public_html/search/es/index_video.php [0x00000000015e0960] curl_exec() /home/search/public_html/search/es/index_video.php:73 [26-Dec-2013 16:45:34] [pool search] pid 10307 script_filename = /home/search/public_html/search/es/index_video.php [0x00000000015e2cd0] curl_exec() /home/search/public_html/search/vendor/ruflin/Elastica/lib/Elastica/Transport/Http.php:117 [0x00000000015e2a20] exec() /home/search/public_html/search/vendor/ruflin/Elastica/lib/Elastica/Request.php:165 [0x00000000015e2568] send() /home/search/public_html/search/vendor/ruflin/Elastica/lib/Elastica/Client.php:526 [0x00000000015e2278] request() /home/search/public_html/search/vendor/ruflin/Elastica/lib/Elastica/Bulk.php:269 [0x00000000015e1fa0] send() /home/search/public_html/search/vendor/ruflin/Elastica/lib/Elastica/Client.php:249 [0x00000000015e1d20] addDocuments() /home/search/public_html/search/vendor/ruflin/Elastica/lib/Elastica/Index.php:123 [0x00000000015e1aa0] addDocuments() /home/search/public_html/search/vendor/ruflin/Elastica/lib/Elastica/Type.php:161 [0x00000000015e0960] addDocuments() /home/search/public_html/search/es/index_video.php:34 [26-Dec-2013 16:45:53] [pool search] pid 10322 script_filename = 
/home/search/public_html/search/es/index_video.php [0x00000000015e0960] curl_exec() /home/search/public_html/search/es/index_video.php:90 Please help me. Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Dec 26 12:50:01 2013 From: nginx-forum at nginx.us (honwel) Date: Thu, 26 Dec 2013 07:50:01 -0500 Subject: Parallel subrequests In-Reply-To: References: Message-ID: Hi, agentzh There is also a question: how can I hold one subrequest's response while waiting for the others to arrive, then combine all the responses and send the result to the client? Is that possible? Thanks a lot! Best regards, honwel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245707,245849#msg-245849 From mdounin at mdounin.ru Thu Dec 26 13:15:00 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Dec 2013 17:15:00 +0400 Subject: Parallel subrequests In-Reply-To: References: Message-ID: <20131226131500.GD95113@mdounin.ru> Hello! On Thu, Dec 26, 2013 at 07:50:01AM -0500, honwel wrote: > Hi, agentzh > > There is also a question: how can I hold one subrequest's response > while waiting for the others to arrive, then combine all the > responses and send the result to the client? Is that possible? > Thanks a lot! By default, responses to subrequests are serialized in the correct order by the postpone filter module, see src/http/ngx_http_postpone_filter_module.c. That is, if you want to just return responses in order, there is no need to do anything; it will happen automatically. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Dec 26 15:38:20 2013 From: nginx-forum at nginx.us (honwel) Date: Thu, 26 Dec 2013 10:38:20 -0500 Subject: Parallel subrequests In-Reply-To: <20131226131500.GD95113@mdounin.ru> References: <20131226131500.GD95113@mdounin.ru> Message-ID: Yes, you are right. 
But I want to combine the data, like: Subrequest1's response: "first name: John" Subrequest2's response: "last name: abc" Expected response: "John abc" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245707,245860#msg-245860 From mdounin at mdounin.ru Thu Dec 26 15:47:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Dec 2013 19:47:05 +0400 Subject: Parallel subrequests In-Reply-To: References: <20131226131500.GD95113@mdounin.ru> Message-ID: <20131226154704.GG95113@mdounin.ru> Hello! On Thu, Dec 26, 2013 at 10:38:20AM -0500, honwel wrote: > Yes, you are right. But I want to combine the data, like: > Subrequest1's response: "first name: John" > Subrequest2's response: "last name: abc" > > Expected response: > "John abc" In this particular example, using a filter module after the postpone filter should be a good solution. It will see the full response concatenated by the postpone filter and will be able to transform it appropriately. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Dec 26 16:21:42 2013 From: nginx-forum at nginx.us (honwel) Date: Thu, 26 Dec 2013 11:21:42 -0500 Subject: Parallel subrequests In-Reply-To: <20131226154704.GG95113@mdounin.ru> References: <20131226154704.GG95113@mdounin.ru> Message-ID: <9cab4a230ea5ab76933b3737d849fa2b.NginxMailingListEnglish@forum.nginx.org> Expected response: "John abc" Not Response1: "John" Response2: "abc" Thanks for your patience! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245707,245864#msg-245864 From mdounin at mdounin.ru Thu Dec 26 17:12:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Dec 2013 21:12:16 +0400 Subject: Parallel subrequests In-Reply-To: <9cab4a230ea5ab76933b3737d849fa2b.NginxMailingListEnglish@forum.nginx.org> References: <20131226154704.GG95113@mdounin.ru> <9cab4a230ea5ab76933b3737d849fa2b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131226171216.GH95113@mdounin.ru> Hello! 
On Thu, Dec 26, 2013 at 11:21:42AM -0500, honwel wrote: > Expected response: > "John abc" > > Not > Response1: "John" > Response2: "abc" > > Thanks for your patience! Yes. What I suggested is to write a filter which will transform Response1: "John" Response2: "abc" as available after the postpone filter into the "John abc" you want to have as a result. This should be easier than writing a filter which will also wait for all subrequests. If it doesn't work for you (e.g., if the concatenation of subrequest responses isn't parsable), you may do subrequest serialization yourself, before the postpone filter. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Dec 27 09:15:44 2013 From: nginx-forum at nginx.us (volga629) Date: Fri, 27 Dec 2013 04:15:44 -0500 Subject: mod wsgi Message-ID: <9859119a7cb0badcc1ec5a3f9c40653c.NginxMailingListEnglish@forum.nginx.org> Hello Everyone, I tried to add mod_wsgi and the compilation fails. Any help is appreciated; thank you in advance.
		-o objs/addon/src/ngx_http_wsgi_module.o \
		/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:184:
error: 'ngx_garbage_collector_temp_handler' undeclared here (not in a
function)
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:184:
warning: missing initializer
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:184:
warning: (near initialization for 'ngx_http_wsgi_commands[12].post')
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:
In function 'ngx_http_wsgi_merge_loc_conf':
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:373:
warning: passing argument 1 of 'ngx_conf_merge_path_value' from incompatible
pointer type
src/core/ngx_file.h:142: note: expected 'struct ngx_conf_t *' but argument
is of type 'struct ngx_path_t *'
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:373:
warning: passing argument 2 of 'ngx_conf_merge_path_value' from incompatible
pointer type
src/core/ngx_file.h:142: note: expected 'struct ngx_path_t **' but argument
is of type 'struct ngx_path_t *'
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:373:
warning: passing argument 3 of 'ngx_conf_merge_path_value' from incompatible
pointer type
src/core/ngx_file.h:142: note: expected 'struct ngx_path_t *' but argument
is of type 'char *'
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:373:
warning: passing argument 4 of 'ngx_conf_merge_path_value' makes pointer
from integer without a cast
src/core/ngx_file.h:142: note: expected 'struct ngx_path_init_t *' but
argument is of type 'int'
/home/volga629/rpmbuild/BUILD/nginx-1.4.4/mod_wsgi/src/ngx_http_wsgi_module.c:373:
error: too many arguments to function 'ngx_conf_merge_path_value'
make[1]: *** [objs/addon/src/ngx_http_wsgi_module.o] Error 1
make[1]: Leaving directory `/home/volga629/rpmbuild/BUILD/nginx-1.4.4'
make: *** [build] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.ajqqDV (%build)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245873,245873#msg-245873 From manlio_perillo at libero.it Fri Dec 27 11:58:43 2013 From: manlio_perillo at libero.it (Manlio Perillo) Date: Fri, 27 Dec 2013 12:58:43 +0100 Subject: mod wsgi In-Reply-To: <9859119a7cb0badcc1ec5a3f9c40653c.NginxMailingListEnglish@forum.nginx.org> References: <9859119a7cb0badcc1ec5a3f9c40653c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52BD6B73.8070907@libero.it> On 27/12/2013 10:15, volga629 wrote: > Hello Everyone, > Tried add mod wsgi and compilation fail. Any help thank you in advance. > Hi. The WSGI module is not compatible with recent Nginx versions. It is time to update the code. I also plan to convert the repository from Mercurial to Git and push it on github. It will take at least one week, however. Finally, note that the Nginx WSGI module is not for the "casual web application". There are some issues, since execution of Python code will block an entire Nginx worker process. This may be ok for some applications, but may be a problem with other applications where a different WSGI implementation (like uWSGI or Apache mod_wsgi) may be a better solution. > [...] Regards Manlio Perillo From max at mxcrypt.com Fri Dec 27 20:47:51 2013 From: max at mxcrypt.com (Maxim Khitrov) Date: Fri, 27 Dec 2013 15:47:51 -0500 Subject: Serving default error pages with 'proxy_intercept_errors on' Message-ID: Hello, I'm running nginx v1.4.1 on OpenBSD 5.4 and I'd like to use 'proxy_intercept_errors on' directive without providing my own error pages. In other words, instead of forwarding page content from the backend server, just use the error pages that nginx generates by default. This isn't supported normally, however the following configuration seems to achieve the desired result (though you have to list the error codes explicitly): error_page 403 404 ... @error; location @error { return 444; } I didn't actually know what this would do until I tried it. 
I assume that when a matching error code is received from the backend server, nginx closes the proxy connection without reading the body and creates a new internal request to @error, which is immediately closed without a response. Thus, there is no body to send to the client, so it falls back to the default behavior. The question is whether this behavior is an accident and may change in a future version, or if it's an acceptable way of intercepting errors without providing custom error pages? - Max From mdounin at mdounin.ru Sat Dec 28 12:19:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 28 Dec 2013 16:19:43 +0400 Subject: Serving default error pages with 'proxy_intercept_errors on' In-Reply-To: References: Message-ID: <20131228121943.GR95113@mdounin.ru> Hello! On Fri, Dec 27, 2013 at 03:47:51PM -0500, Maxim Khitrov wrote: > Hello, > > I'm running nginx v1.4.1 on OpenBSD 5.4 and I'd like to use > 'proxy_intercept_errors on' directive without providing my own error > pages. In other words, instead of forwarding page content from the > backend server, just use the error pages that nginx generates by > default. > > This isn't supported normally, however the following configuration > seems to achieve the desired result (though you have to list the error > codes explicitly): > > error_page 403 404 ... @error; > location @error { return 444; } > > I didn't actually know what this would do until I tried it. I assume > that when a matching error code is received from the backend server, > nginx closes the proxy connection without reading the body and creates > a new internal request to @error, which is immediately closed without > a response. Thus, there is no body to send to the client, so it falls > back to the default behavior. Not really. 
What you see works because currently "return <4xx>" doesn't override the error code previously set, thus for a 404 error code originally returned by a proxied server location @error { return 4xx; } is effectively equivalent to location @error { return 404; } which is documented to return the "status code of the last occurred error", see http://nginx.org/r/error_page. > The question is whether this behavior is an accident and may change in > a future version, or if it's an acceptable way of intercepting errors > without providing custom error pages? I wouldn't recommend relying on this. The current behaviour of "return", which effectively uses the previously set error code, is somewhat questionable and may change in the future. -- Maxim Dounin http://nginx.org/ From max at mxcrypt.com Sat Dec 28 14:59:02 2013 From: max at mxcrypt.com (Maxim Khitrov) Date: Sat, 28 Dec 2013 09:59:02 -0500 Subject: Serving default error pages with 'proxy_intercept_errors on' In-Reply-To: <20131228121943.GR95113@mdounin.ru> References: <20131228121943.GR95113@mdounin.ru> Message-ID: On Sat, Dec 28, 2013 at 7:19 AM, Maxim Dounin wrote: > Hello! > > On Fri, Dec 27, 2013 at 03:47:51PM -0500, Maxim Khitrov wrote: > >> Hello, >> >> I'm running nginx v1.4.1 on OpenBSD 5.4 and I'd like to use >> 'proxy_intercept_errors on' directive without providing my own error >> pages. In other words, instead of forwarding page content from the >> backend server, just use the error pages that nginx generates by >> default. >> >> This isn't supported normally, however the following configuration >> seems to achieve the desired result (though you have to list the error >> codes explicitly): >> >> error_page 403 404 ... @error; >> location @error { return 444; } >> >> I didn't actually know what this would do until I tried it. 
I assume >> that when a matching error code is received from the backend server, >> nginx closes the proxy connection without reading the body and creates >> a new internal request to @error, which is immediately closed without >> a response. Thus, there is no body to send to the client, so it falls >> back to the default behavior. > > Not really. > > What you see works because currently "return <4xx>" doesn't > override error code previously set, thus for 404 error code > originally returned by a proxied server > > location @error { return 4xx; } > > is effectively equivalent to > > location @error { return 404; } > > which is documented to return "status code of the last occurred > error", see http://nginx.org/r/error_page. I thought that the original error code is returned because I'm not using '=[response]' syntax in the error_page directive. You're saying that 'return' may, at some point in the future, change the original status code even though I'm not using 'error_page 403 404 = @error' (with the equal sign)? Is there any other way of getting nginx to return the default error pages in this situation? From nginx-forum at nginx.us Sun Dec 29 18:08:18 2013 From: nginx-forum at nginx.us (linuxr00lz2013) Date: Sun, 29 Dec 2013 13:08:18 -0500 Subject: How do I disable DNS Caching and DNS Reverse Lookup in Nginx ? Message-ID: <276a701e0211b1c246f52eb48d460f21.NginxMailingListEnglish@forum.nginx.org> Hello all, I've been assigned a task to set up an IPv6 to IPv4 reverse proxy for my company. I decided to use nginx to do the job. I found the following article online which describes how to configure nginx as a reverse proxy: http://www.kutukupret.com/2011/05/02/nginx-as-reverse-proxy-ipv6-to-ipv4-website/ So this is how I set up my reverse proxy. First off, I installed RHEL 6.5 on a VM and installed nginx on it. Second, I set up an AAAA record in our DNS as a test FQDN so that I could use that FQDN to connect through the proxy to an IPv4 website.
For example, the FQDN is ipv6.mycoolsite.com and the IPv4 website is www.yourcoolsite.com. I set up the default.conf file as such: server { listen [::]:80 default ipv6only=on; server_name ipv6.mycoolsite.com; #charset koi8-r; access_log /var/log/nginx/log/ipv6.mycoolsite.com.access.log main; error_log /var/log/nginx/log/ipv6.mycoolsite.com.error.log; location / { # root /usr/share/nginx/html; # index index.html index.htm; proxy_pass http://www.yourcoolsite.com; proxy_redirect default; proxy_set_header X-Real-Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 120; } } Here are the issues that I am currently having: When I run the nginx service and I test the FQDN on an IPv6-enabled computer, I am able to access the IPv4 website www.yourcoolsite.com. But when I change the proxy_pass FQDN to a different IPv4 website in the config file and reload the service, ipv6.mycoolsite.com still connects to www.yourcoolsite.com and not to the new IPv4 FQDN. I think it's loading a cached copy of www.yourcoolsite.com instead of loading the new IPv4 FQDN. When it finally does load the new site, it does so REALLY slowly. I think this is due to reverse DNS lookup occurring! Now what I am trying to figure out here is what is causing the caching and the slow loading times. How do I go about disabling DNS caching as well as the reverse DNS lookup? I want to be able to connect to the IPv4 website specified in the default.conf file whenever I change the file and reload the service. I don't want to connect to a cached copy of the previous IPv4 entry! Any help will be greatly appreciated!!
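As background for the question above: nginx resolves a hostname written literally in proxy_pass only once, when the configuration is loaded, using the system resolver; a reload re-resolves it. Forcing per-request resolution requires the resolver directive together with a variable in proxy_pass. A minimal sketch (the resolver address is illustrative, and the hostname is taken from the example names in the post):

```nginx
# Sketch only: defer DNS resolution of the upstream name to request
# time instead of the resolve-once-at-config-load default.
resolver 127.0.0.1 valid=30s;   # 'valid' caps how long nginx caches the DNS answer

location / {
    set $upstream_host "www.yourcoolsite.com";
    # Using a variable in proxy_pass makes nginx resolve the name at
    # request time via the 'resolver' above.
    proxy_pass http://$upstream_host;
}
```

Note that when proxy_pass contains variables and no URI part, the request URI is passed to the upstream in its original form.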
Oh and when I check the access logs after I test the proxy, this is what I see: - - [29/Dec/2013:01:31:13 -0500] "GET /commonspot/javascript/lightbox/window_ref.js HTTP/1.1" 200 11198 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:31:13 -0500] "GET /commonspot/javascript/util.js HTTP/1.1" 200 64891 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:31:13 -0500] "GET /commonspot/javascript/lightbox/lightbox.js HTTP/1.1" 200 59730 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:31:14 -0500] "GET /global/images/chrome/logos/slogan.png HTTP/1.1" 404 8839 "http://ipv6.mycoolsite.com/global/css/style.css" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:31:14 -0500] "GET /common/commonspot/templates/images/chrome/bg/results-bottom.png HTTP/1.1" 200 669 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:31:14 -0500] "GET /images/2013Dec5.jpg HTTP/1.1" 404 8849 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:31:14 -0500] "GET /images/2013Dec1.jpg HTTP/1.1" 404 8840 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:31:14 -0500] "GET /images/2013Dec2.jpg HTTP/1.1" 404 8847 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:31:14 -0500] "GET /images/2013Dec4.jpg HTTP/1.1" 404 8850 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20131023 Firefox/17.0" "-" - - [29/Dec/2013:01:32:08 -0500] "GET /images/2013Dec3.jpg HTTP/1.1" 404 8842 "http://ipv6.mycoolsite.com/" "Mozilla/5.0 (X11; Linux i686; rv:17.0) 
Gecko/20131023 Firefox/17.0" "-" Why am I getting a 404 response in the log entry? Also here is the error log 2013/12/27 13:13:01 [error] 6138#0: *248 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /commonspot/javascript/lightbox/lightbox.js HTTP/1.1", upstream: "http://[2001:1900:2302:2000::ff]:80/commonspot/javascript/lightbox/lightbox.js", host: "ipv6.mycoolsite.com", referrer: "http://ipv6.mycoolsite.com/index.htm" 2013/12/27 13:43:08 [error] 6138#0: *276 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /index.htm HTTP/1.1", upstream: "http://[2001:1900:2302:2000::ff]:80/index.htm", host: "ipv6.mycoolsite.com" 2013/12/29 01:14:03 [error] 13140#0: *402 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /global/js/libs/validation-engine.css HTTP/1.1", upstream: "http://[2001:1900:2300:1::ff]:80/global/js/libs/validation-engine.css", host: "ipv6.mycoolsite.com", referrer: "http://ipv6.mycoolsite.com/" 2013/12/29 01:14:03 [error] 13140#0: *406 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /global/js/jquery.scrollTo-min.js HTTP/1.1", upstream: "http://[2001:1900:2300:1::ff]:80/global/js/jquery.scrollTo-min.js", host: "ipv6.mycoolsite.com", referrer: "http://ipv6.mycoolsite.com/" 2013/12/29 01:14:03 [error] 13140#0: *410 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /global/js/libs/always-include-ie.js HTTP/1.1", upstream: "http://[2001:1900:2300:1::ff]:80/global/js/libs/always-include-ie.js", host: "ipv6.mycoolsite.com", referrer: 
"http://ipv6.mycoolsite.com/" 2013/12/29 01:14:04 [error] 13140#0: *404 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /images/2013Dec2.jpg HTTP/1.1", upstream: "http://[2001:1900:2300:1::ff]:80/images/2013Dec2.jpg", host: "ipv6.mycoolsite.com", referrer: "http://ipv6.mycoolsite.com/" 2013/12/29 01:14:04 [error] 13140#0: *408 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /images/2013Dec4.jpg HTTP/1.1", upstream: "http://[2001:1900:2300:1::ff]:80/images/2013Dec4.jpg", host: "ipv6.mycoolsite.com", referrer: "http://ipv6.mycoolsite.com/" 2013/12/29 01:15:34 [error] 13140#0: *410 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /global/css/colorbox.css HTTP/1.1", upstream: "http://[2001:1900:2300:1::ff]:80/global/css/colorbox.css", host: "ipv6.mycoolsite.com", referrer: "http://ipv6.mycoolsite.com/" 2013/12/29 01:25:57 [error] 13140#0: *472 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /global/js/libs/intercept-include.js HTTP/1.1", upstream: "http://[2001:1900:2300:1::ff]:80/global/js/libs/intercept-include.js", host: "ipv6.mycoolsite.com", referrer: "http://ipv6.mycoolsite.com/" 2013/12/29 01:32:07 [error] 13140#0: *510 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxxx:xxxx:x:xxxx::xxx:xxxx, server: ipv6.mycoolsite.com, request: "GET /images/2013Dec3.jpg HTTP/1.1", upstream: "http://[2001:1900:2300:1::ff]:80/images/2013Dec3.jpg", host: "ipv6.mycoolsite.com", referrer: "http://ipv6.mycoolsite.com/" I had to blank out the IPV6 address for privacy's sake. 
Also I have no idea how to paste code properly in mailing lists! lol Sorry, I am a bit new to web servers, so any help will be greatly appreciated! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245904,245904#msg-245904 From nginx-forum at nginx.us Sun Dec 29 20:26:02 2013 From: nginx-forum at nginx.us (NginxNewComer) Date: Sun, 29 Dec 2013 15:26:02 -0500 Subject: insert subdomain into query Message-ID: <0296b16354bb19ec2327c78116093854.NginxMailingListEnglish@forum.nginx.org> I have set up a wildcard subdomain server (many subdomains) server { listen 80; server_name ~(?<subdomain>.*).localhost; ... ... } I want to make the subdomain a part of the query in the backend when we visit http://clothes.localhost/ backend query http://localhost/script.php?sd=clothes or we visit http://jackets.localhost/large/blue/ backend query http://localhost/script.php?sd=jackets&size=large&colour=blue I have tried for 5 hours now, modifying rewrite and return in as many ways as possible, with multiple errors such as rewrite ^ http://$subdomain.localhost/script.php?x=$subdomain permanent; http://clothes.localhost/script.php/script.php/script.php/script.php/script.php/script.php/script.php/script.php/?sd=clothes?sd=clothes&sd=clothes?sd=clothes&sd=clothes?sd=clothes&sd=clothes?sd=clothes& or http://localhost/localhost/localhost/localhost/localhost etc etc (I have tried many more tweaks but didn't record them all as I went along). Thanks in advance for any intention to help or answers given. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245906,245906#msg-245906 From mdounin at mdounin.ru Sun Dec 29 23:16:48 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 Dec 2013 03:16:48 +0400 Subject: Serving default error pages with 'proxy_intercept_errors on' In-Reply-To: References: <20131228121943.GR95113@mdounin.ru> Message-ID: <20131229231647.GU95113@mdounin.ru> Hello! On Sat, Dec 28, 2013 at 09:59:02AM -0500, Maxim Khitrov wrote: > On Sat, Dec 28, 2013 at 7:19 AM, Maxim Dounin wrote: > > Hello!
> > > > On Fri, Dec 27, 2013 at 03:47:51PM -0500, Maxim Khitrov wrote: > > > >> Hello, > >> > >> I'm running nginx v1.4.1 on OpenBSD 5.4 and I'd like to use > >> 'proxy_intercept_errors on' directive without providing my own error > >> pages. In other words, instead of forwarding page content from the > >> backend server, just use the error pages that nginx generates by > >> default. > >> > >> This isn't supported normally, however the following configuration > >> seems to achieve the desired result (though you have to list the error > >> codes explicitly): > >> > >> error_page 403 404 ... @error; > >> location @error { return 444; } > >> > >> I didn't actually know what this would do until I tried it. I assume > >> that when a matching error code is received from the backend server, > >> nginx closes the proxy connection without reading the body and creates > >> a new internal request to @error, which is immediately closed without > >> a response. Thus, there is no body to send to the client, so it falls > >> back to the default behavior. > > > > Not really. > > > > What you see works because currently "return <4xx>" doesn't > > override error code previously set, thus for 404 error code > > originally returned by a proxied server > > > > location @error { return 4xx; } > > > > is effectively equivalent to > > > > location @error { return 404; } > > > > which is documented to return "status code of the last occurred > > error", see http://nginx.org/r/error_page. > > I thought that the original error code is returned because I'm not > using '=[response]' syntax in the error_page directive. You're saying > that 'return' may, at some point in the future, change the original > status code even though I'm not using 'error_page 403 404 = @error' > (with the equal sign)? 
As already pointed out, the documentation of the error_page directive says (http://nginx.org/r/error_page): : If uri processing leads to an error, the status code of the last : occurred error is returned to the client. To illustrate this, consider the following configuration: error_page 403 /no.such.file.html; If at some point a 403 error happens, the request is redirected to "/no.such.file.html". If it exists, the file will be returned to a client with the 403 status code. But if it doesn't exist, a 404 error will be generated, and it will be returned to the client. Currently, this is not what "return 404" will do - instead, it will preserve the original error code. But the difference with a non-existent static file is obviously wrong. > Is there any other way of getting nginx to return the default error > pages in this situation? A bulletproof way would be to use "return <code>" for each <code> you want to intercept. -- Maxim Dounin http://nginx.org/ From steve at greengecko.co.nz Mon Dec 30 00:03:28 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 30 Dec 2013 13:03:28 +1300 Subject: problems setting ttl on single image... Message-ID: <1388361808.30443.3.camel@steve-new> I'm trying to enable a 10 minute autorefresh for my webcam, which you can see at http://www.diamondharbour.org.nz/Local-Weather.html Server runs nginx 1.4.4 ( and an ancient CMS called Etomite ). Here's the WPT result http://www.webpagetest.org/result/131229_HV_MK2/ Here's a snippet of my config... location = /assets/Photo\ Gallery/Weather/current\.jpg { expires 10m; log_not_found off; } location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires max; log_not_found off; } Why is it not setting a 10 minute expiry - the webpagetest report should pick up on it??? I'm sort of expecting that it may have something to do with the space in the URL?
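For reference on the snippet above: "location =" takes a literal string, not a regular expression, so a path containing a space can simply be quoted instead of backslash-escaped. A sketch of the same block rewritten that way:

```nginx
# Sketch: an exact-match location is a literal string, not a regex,
# so the space is quoted rather than backslash-escaped and the dot
# needs no escaping either.
location = "/assets/Photo Gallery/Weather/current.jpg" {
    expires 10m;
    log_not_found off;
}
```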
Any pointers gratefully received (: Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Mon Dec 30 00:53:02 2013 From: nginx-forum at nginx.us (flash008) Date: Sun, 29 Dec 2013 19:53:02 -0500 Subject: ssl handshake fail when proxy between two tomcat with mutual authentication In-Reply-To: <9f165963da2af2f5fad4a68f555cef2a.NginxMailingListEnglish@forum.nginx.org> References: <9f165963da2af2f5fad4a68f555cef2a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Dirnsnow, Have you found a solution to your problem of mutual auth between Nginx and Tomcat2? I am getting the same error as yours when using Nginx as a reverse proxy and trying to talk to my backend server through mutual SSL. Thank you, flash008 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,241171,245912#msg-245912 From r1ch+nginx at teamliquid.net Mon Dec 30 01:05:32 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 30 Dec 2013 02:05:32 +0100 Subject: problems setting ttl on single image... In-Reply-To: <1388361808.30443.3.camel@steve-new> References: <1388361808.30443.3.camel@steve-new> Message-ID: > Here's a snippet of my config... > > location = /assets/Photo\ Gallery/Weather/current\.jpg { > expires 10m; > log_not_found off; > } > The "location =" syntax does not use regular expressions. You may also want to surround the string with quotes if it contains spaces rather than escaping spaces. From steve at greengecko.co.nz Mon Dec 30 01:57:18 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 30 Dec 2013 14:57:18 +1300 Subject: problems setting ttl on single image... In-Reply-To: References: <1388361808.30443.3.camel@steve-new> Message-ID: <1388368638.30443.6.camel@steve-new> On Mon, 2013-12-30 at 02:05 +0100, Richard Stanway wrote: > > Here's a snippet of my config...
> > > location = /assets/Photo\ Gallery/Weather/current\.jpg { > > expires 10m; > > log_not_found off; > > } > > > > The "location =" syntax does not use regular expressions. You may also > want to surround the string with quotes if it contains spaces rather > than escaping spaces. > Yup, that fixed it thanks! Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From iptablez at yahoo.com Mon Dec 30 04:27:57 2013 From: iptablez at yahoo.com (Indo Php) Date: Sun, 29 Dec 2013 20:27:57 -0800 (PST) Subject: How to delete cache based on expires headers? In-Reply-To: References: <1387426860.9588.YahooMailNeo@web142305.mail.bf1.yahoo.com> <1387853867.39367.YahooMailNeo@web140403.mail.bf1.yahoo.com> Message-ID: <1388377677.36102.YahooMailNeo@web140402.mail.bf1.yahoo.com> Hi Does that mean that nginx will store the files based on the upstream expire headers? And after that nginx will delete the cache files? On Tuesday, December 24, 2013 10:28 PM, António P. P. Almeida wrote: Why do you want to do this? nginx can manage expiration/cache-control headers all by itself. As soon as the defined max-age is reached it returns an upstream status of EXPIRED until it fetches a fresh page from upstream. Deleting won't buy you anything in terms of content freshness. ----appa On Tue, Dec 24, 2013 at 3:57 AM, Indo Php wrote: Hello.. > > >Can somebody help me on this? > > >Thank you before > > > >On Thursday, December 19, 2013 11:21 AM, Indo Php wrote: >Hi > > >I'm using proxy_cache to mirror my files with the configuration below > > >proxy_cache_path /var/cache/nginx/image levels=1:2 keys_zone=one:10m inactive=7d max_size=100g; > > >Our backend server has the expires header set to 600secs > > >Is it possible for us to also delete the cache files located at /var/cache/nginx/image depending on the backend expire header?
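As background for the exchange above: with proxy_cache, freshness normally comes from the upstream's caching headers (X-Accel-Expires, then Cache-Control, then Expires, in that order of priority); proxy_cache_valid is only a fallback for responses that carry none of them, and "inactive=" in proxy_cache_path merely evicts entries that have not been requested, regardless of freshness. A hedged sketch (the zone name and cache path follow the post; the upstream name is illustrative):

```nginx
# Sketch: cache freshness is taken from the upstream's caching headers;
# proxy_cache_valid below applies only when the upstream sends none.
# 'inactive=' evicts entries that go unrequested for 7 days.
proxy_cache_path /var/cache/nginx/image levels=1:2 keys_zone=one:10m
                 inactive=7d max_size=100g;

server {
    location / {
        proxy_cache one;
        proxy_pass http://backend;     # illustrative upstream name
        proxy_cache_valid 200 10m;     # fallback freshness, 200 responses only
    }
}
```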
> > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 30 23:15:45 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 31 Dec 2013 03:15:45 +0400 Subject: How do I disable DNS Caching and DNS Reverse Lookup in Nginx ? In-Reply-To: <276a701e0211b1c246f52eb48d460f21.NginxMailingListEnglish@forum.nginx.org> References: <276a701e0211b1c246f52eb48d460f21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20131230231545.GZ95113@mdounin.ru> Hello! On Sun, Dec 29, 2013 at 01:08:18PM -0500, linuxr00lz2013 wrote: [...] > When I run the nginx service and I test the FQDN on an ipv6 enabled > computer, I am able to access the IPV4 website www.yourcoolsite.com. But > when I change the proxy_pass FQDN to a different IPV4 website in the config > file and reload the service, ipv6.mycoolsite.com still connects to > www.yourcoolsite.com and not to the new IPV4 FQDN. I think its loading a > cached copy of www.yourcoolsite.com instead of loading the new IPV4 FQDN. > When it finallly does load the new site, it does so REALLY slowly. I think > this is due to reverse DNS lookup occuring! > > Now what I am trying to figure out here is what is causing the caching to > occur and the slow loading times? How do I go about disabling DNS caching as > well as the reverse DNS lookup? I want to be able to connect the IPV4 > website specified in the default.conf file when ever I change the file and > reload the service.
I dont want to connect to a cached copy of the previous > IPV4 entry ! > > any help will be greatly appreciated!! Most likely, what you are seeing is your browser's caching. Try cleaning your browser's cache. -- Maxim Dounin http://nginx.org/ From mb at 14v.de Tue Dec 31 11:16:12 2013 From: mb at 14v.de (Michael Both) Date: Tue, 31 Dec 2013 12:16:12 +0100 Subject: feature request: smtp auth passthrough Message-ID: <52C2A77C.30205@14v.de> Hello, sorry for that beforehand, but yes, I'm another newbie using (or at least trying to use) NGINX in a mail proxy setup. I read the wiki entries and docs beforehand... and I thought it just works like that (passing auth to backend). So actually I was rather shocked that authentication is not passed to backend SMTP servers, as opposed to POP and IMAP backends. As for our deployment, it would be a pain in the arse to import login data from our backend servers. I researched a bit, and at least I'm not the only one trying to use a setup like this. http://mailman.nginx.org/pipermail/nginx/2008-April/004234.html https://www.ruby-forum.com/topic/1045106 http://mailman.nginx.org/pipermail/nginx/2010-February/019028.html http://mailman.nginx.org/pipermail/nginx/2010-April/020027.html http://mailman.nginx.org/pipermail/nginx/2010-November/023555.html http://mailman.nginx.org/pipermail/nginx-devel/2012-April/002074.html David Jonas even wrote a patch addressing this issue. But as we only use official distribution packages, and his patch is for a quite old nginx version, that won't work for us. Hence I kindly ask of you, also in the name of my predecessors and everyone else looking for a solution like this, to implement a method for passing smtp login information to the backend server in one of the next versions. Best regards and a happy new year!
Michael Both From emailgrant at gmail.com Tue Dec 31 22:05:04 2013 From: emailgrant at gmail.com (Grant) Date: Tue, 31 Dec 2013 14:05:04 -0800 Subject: Hiring a dev: nginx+interchange Message-ID: Hello, I use a perl framework called interchange (icdevgroup.org) and I've been using a perl module called Interchange::Link to interface interchange to apache: https://github.com/interchange/interchange/blob/master/dist/src/mod_perl2/Interchange/Link.pm I'd like to switch from apache to nginx and I need to hire someone to help me interface interchange to nginx. I don't need the interface to include all of the features from Interchange::Link. - Grant