From nginx-forum at nginx.us Mon Apr 1 12:51:23 2013 From: nginx-forum at nginx.us (etrader) Date: Mon, 01 Apr 2013 08:51:23 -0400 Subject: location vs. rewrite Message-ID: I want to serve the favicon from a different place than the root folder. As I explored, I have two options: 1. location = /favicon.ico { alias /var/www/media/images/favicon.X.ico; } 2. rewrite ^/favicon.ico /var/www/media/images/favicon.X.ico last; As I understand it, the second one needs an additional regex check on every request to see whether the URL is the favicon or not (imagine that we have a rewrite of ^/(.*)\.(.*) and thus the favicon rewrite must come first). How does the location work? Does it also have to be checked on every request? In general, which solution is better? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238002,238002#msg-238002 From citrin at citrin.ru Mon Apr 1 12:56:35 2013 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Mon, 01 Apr 2013 16:56:35 +0400 Subject: location vs. rewrite In-Reply-To: References: Message-ID: <51598403.9010800@citrin.ru> On 04/01/13 16:51, etrader wrote: > I want to serve the favicon from a different place than the root folder. > As I explored, I have two options: > > 1. > location = /favicon.ico { alias /var/www/media/images/favicon.X.ico; } That is a working option. > > 2. > rewrite ^/favicon.ico /var/www/media/images/favicon.X.ico last; This is wrong - you can only rewrite to another location, not to a path on the file system. Anyway, a rewrite is not needed here; use a location. From nginx-forum at nginx.us Tue Apr 2 09:41:07 2013 From: nginx-forum at nginx.us (Larry) Date: Tue, 02 Apr 2013 05:41:07 -0400 Subject: Split words into characters Message-ID: Hi! I don't know if this is the right way to go, and I hope to hear from you if it is not. My concern is about selecting a file by traversing a tree. I have a bunch of hashed directories (dozens of millions) on different servers.
For performance reasons, I would like to avoid having too many sub-directories in the same directory. I would rather have this scheme: 1234/5678/90AB/CDEF/1234/5678/90AB/CDEF/myfile It would help proxying to the right server and so on. Will the traversal be fast enough? Every directory is approx 100kb (really light). But other than OpenResty, I didn't see any implementation inside nginx to split a word or cookie. C is faster than Lua, so it would be a good bet. Well, I think. What is your word on it? Thanks! Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238015,238015#msg-238015 From r at roze.lv Tue Apr 2 09:46:29 2013 From: r at roze.lv (Reinis Rozitis) Date: Tue, 2 Apr 2013 12:46:29 +0300 Subject: Split words into characters In-Reply-To: References: Message-ID: > But other than OpenResty, I didn't see any implementation inside nginx to > split a word or cookie. You can possibly use rewrite http://wiki.nginx.org/HttpRewriteModule ( http://nginx.org/en/docs/http/ngx_http_rewrite_module.html ) There is also an example of rewriting /photos/123456 to something like /path/to/photos/12/1234/123456.png rr From r at roze.lv Tue Apr 2 09:54:45 2013 From: r at roze.lv (Reinis Rozitis) Date: Tue, 2 Apr 2013 12:54:45 +0300 Subject: Why use haproxy now ? In-Reply-To: <70FA65D1-38CC-49C8-9678-202DAACE43D5@sysoev.ru> References: <6B92BA8B64E54DAD8CADC31585382718@MezhRoze> <6C9E35BA-B419-4879-B624-C05EBE3BE1AD@sysoev.ru> <70FA65D1-38CC-49C8-9678-202DAACE43D5@sysoev.ru> Message-ID: <4ED420F529E842909405BC0E2A9084A0@MasterPC> > Did you try previously nginx cache also on SSD or on usual hard disk? Tbh I don't remember as it was a while ago (on 0.7.x); it might have been a regular SAS system instead (which of course is not as speedy as an SSD, nor objective to compare against). But as I said, I'll test the current version and see how it goes.
rr From nginx-forum at nginx.us Tue Apr 2 10:54:30 2013 From: nginx-forum at nginx.us (spdyg) Date: Tue, 02 Apr 2013 06:54:30 -0400 Subject: SPDY + proxy cache = occasional 499 errors In-Reply-To: <201212031714.05331.vbart@nginx.com> References: <201212031714.05331.vbart@nginx.com> Message-ID: <60e5d60a469479da8ac682060e9589f1.NginxMailingListEnglish@forum.nginx.org> I posted this query back in December: http://forum.nginx.org/read.php?2,233497 And the issue was fixed shortly afterwards in the 1.3.x branch. I'm now seeing the same issue again (static items coming from proxy cache + SPDY not displaying). However this time, the log does not show 499 error anymore.. it just shows successful 304. However, looking at firefox web console, the first few static items come back as 304 without problems, then the next set of static items come back with no response at all.. (no response header, no body). The setup looks like this: Firefox -> Nginx 1.3.15 -> HAproxy -> Tomcat Has this bug crept back in, or is this a new one? Also reproduced on 1.3.13. Can't seem to reproduce the issue if I take out HAproxy in the middle though. spdyg Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,238019#msg-238019 From mdounin at mdounin.ru Tue Apr 2 12:53:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Apr 2013 16:53:55 +0400 Subject: nginx-1.2.8 Message-ID: <20130402125355.GN62550@mdounin.ru> Changes with nginx 1.2.8 02 Apr 2013 *) Bugfix: new sessions were not always stored if the "ssl_session_cache shared" directive was used and there was no free space in shared memory. Thanks to Piotr Sikora. *) Bugfix: responses might hang if subrequests were used and a DNS error happened during subrequest processing. Thanks to Lanshun Zhou. *) Bugfix: in the ngx_http_mp4_module. Thanks to Gernot Vormayr. *) Bugfix: in backend usage accounting. 
-- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Tue Apr 2 13:16:46 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 2 Apr 2013 17:16:46 +0400 Subject: SPDY + proxy cache = occasional 499 errors In-Reply-To: <60e5d60a469479da8ac682060e9589f1.NginxMailingListEnglish@forum.nginx.org> References: <201212031714.05331.vbart@nginx.com> <60e5d60a469479da8ac682060e9589f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201304021716.46296.vbart@nginx.com> On Tuesday 02 April 2013 14:54:30 spdyg wrote: > I posted this query back in December: > > http://forum.nginx.org/read.php?2,233497 > > And the issue was fixed shortly afterwards in the 1.3.x branch. > > I'm now seeing the same issue again (static items coming from proxy cache + > SPDY not displaying). > > However this time, the log does not show 499 error anymore.. it just shows > successful 304. However, looking at firefox web console, the first few > static items come back as 304 without problems, then the next set of static > items come back with no response at all.. (no response header, no body). > > The setup looks like this: > > Firefox -> Nginx 1.3.15 -> HAproxy -> Tomcat > > Has this bug crept back in, or is this a new one? Also reproduced on > 1.3.13. > > Can't seem to reproduce the issue if I take out HAproxy in the middle > though. > The behavior of "return 499" can result in weird effects if it affects a bunch of requests at once. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Apr 2 14:46:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Apr 2013 18:46:01 +0400 Subject: SPDY + proxy cache = occasional 499 errors In-Reply-To: <201304021716.46296.vbart@nginx.com> References: <201212031714.05331.vbart@nginx.com> <60e5d60a469479da8ac682060e9589f1.NginxMailingListEnglish@forum.nginx.org> <201304021716.46296.vbart@nginx.com> Message-ID: <20130402144601.GR62550@mdounin.ru> Hello! 
On Tue, Apr 02, 2013 at 05:16:46PM +0400, Valentin V. Bartenev wrote: > On Tuesday 02 April 2013 14:54:30 spdyg wrote: > > I posted this query back in December: > > > > http://forum.nginx.org/read.php?2,233497 > > > > And the issue was fixed shortly afterwards in the 1.3.x branch. > > > > I'm now seeing the same issue again (static items coming from proxy cache + > > SPDY not displaying). > > > > However this time, the log does not show 499 error anymore.. it just shows > > successful 304. However, looking at firefox web console, the first few > > static items come back as 304 without problems, then the next set of static > > items come back with no response at all.. (no response header, no body). > > > > The setup looks like this: > > > > Firefox -> Nginx 1.3.15 -> HAproxy -> Tomcat > > > > Has this bug crept back in, or is this a new one? Also reproduced on > > 1.3.13. > > > > Can't seem to reproduce the issue if I take out HAproxy in the middle > > though. > > > > The behavior of "return 499" can result in weird effects if it affects > a bunch of requests at once. Do you mean "return 444"? -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Apr 2 14:47:04 2013 From: nginx-forum at nginx.us (Larry) Date: Tue, 02 Apr 2013 10:47:04 -0400 Subject: Split words into caracters In-Reply-To: References: Message-ID: <54a74704f157986bd2c3ee2367fc794e.NginxMailingListEnglish@forum.nginx.org> yeah but it works when you have multiple words. In my case, there is only one which is 1234567890...DEF (md5 -> 32 chars) And I would need the possibility to do m=1234567890...DEF m[1] = 1 m[2] = 2 .. m[32] = F A mere rewrite is impossible here.. It seems indeed, Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238015,238029#msg-238029 From vbart at nginx.com Tue Apr 2 15:21:35 2013 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 2 Apr 2013 19:21:35 +0400 Subject: SPDY + proxy cache = occasional 499 errors In-Reply-To: <20130402144601.GR62550@mdounin.ru> References: <201212031714.05331.vbart@nginx.com> <201304021716.46296.vbart@nginx.com> <20130402144601.GR62550@mdounin.ru> Message-ID: <201304021921.35906.vbart@nginx.com> On Tuesday 02 April 2013 18:46:01 Maxim Dounin wrote: > Hello! > > On Tue, Apr 02, 2013 at 05:16:46PM +0400, Valentin V. Bartenev wrote: > > On Tuesday 02 April 2013 14:54:30 spdyg wrote: > > > I posted this query back in December: > > > > > > http://forum.nginx.org/read.php?2,233497 > > > > > > And the issue was fixed shortly afterwards in the 1.3.x branch. > > > > > > I'm now seeing the same issue again (static items coming from proxy > > > cache + SPDY not displaying). > > > > > > However this time, the log does not show 499 error anymore.. it just > > > shows successful 304. However, looking at firefox web console, the > > > first few static items come back as 304 without problems, then the > > > next set of static items come back with no response at all.. (no > > > response header, no body). > > > > > > The setup looks like this: > > > > > > Firefox -> Nginx 1.3.15 -> HAproxy -> Tomcat > > > > > > Has this bug crept back in, or is this a new one? Also reproduced on > > > 1.3.13. > > > > > > Can't seem to reproduce the issue if I take out HAproxy in the middle > > > though. > > > > The behavior of "return 499" can result in weird effects if it affects > > a bunch of requests at once. > > Do you mean "return 444"? Indeed, I accidentally confused 499 with 444 and as a result I misunderstood the issue. Thanks for correcting me. wbr, Valentin V. 
Bartenev -- http://nginx.org/en/donation.html From r at roze.lv Tue Apr 2 15:38:51 2013 From: r at roze.lv (Reinis Rozitis) Date: Tue, 2 Apr 2013 18:38:51 +0300 Subject: Split words into characters In-Reply-To: <54a74704f157986bd2c3ee2367fc794e.NginxMailingListEnglish@forum.nginx.org> References: <54a74704f157986bd2c3ee2367fc794e.NginxMailingListEnglish@forum.nginx.org> Message-ID: > yeah but it works when you have multiple words. What do you mean by multiple words? > In my case, there is only one which is 1234567890...DEF (md5 -> 32 chars) > m=1234567890...DEF > m[1] = 1 > m[2] = 2 > .. > m[32] = F > A mere rewrite is impossible here.. > It seems indeed, Why not? rewrite "/path/(?<r1>[\w]{1})(?<r2>[\w]{1})(?<r3>[\w]{1})(?<r4>[\w]{1})(?<r5>[\w]{1})(?<r6>[\w]{1})(?<r7>[\w]{1})(?<r8>[\w]{1})(?<r9>[\w]{1})(?<r10>[\w]{1})(?<r11>[\w]{1})(?<r12>[\w]{1})(?<r13>[\w]{1})(?<r14>[\w]{1})(?<r15>[\w]{1})(?<r16>[\w]{1})(?<r17>[\w]{1})(?<r18>[\w]{1})(?<r19>[\w]{1})(?<r20>[\w]{1})(?<r21>[\w]{1})(?<r22>[\w]{1})(?<r23>[\w]{1})(?<r24>[\w]{1})(?<r25>[\w]{1})(?<r26>[\w]{1})(?<r27>[\w]{1})(?<r28>[\w]{1})(?<r29>[\w]{1})(?<r30>[\w]{1})(?<r31>[\w]{1})(?<r32>[\w]{1})" /realpath/$r1/$r2/$r3/$r4/$r5/$r6/$r7/$r8/$r9/$r10/$r11/$r12/$r13/$r14/$r15/$r16/$r17/$r18/$r19/$r20/$r21/$r22/$r23/$r24/$r25/$r26/$r27/$r28/$r29/$r30/$r31/$r32; It looks somewhat ugly because of the 32 variables (also because you need to use named ones, since $10/$11 ... wouldn't work). There are probably much better ways to do this, but as a proof of concept it works: A request to http://someserver/path/aBcdfghjklqwertyuiopas1234567890 gets rewritten/translated to: 2013/04/02 18:25:37 [error] 2032#0: *60 open() "/www/realpath/a/B/c/d/f/g/h/j/k/l/q/w/e/r/t/y/u/i/o/p/a/s/1/2/3/4/5/6/7/8/9/0" failed (2: No such file or directory), request: "GET /path/aBcdfghjklqwertyuiopas1234567890 (obviously an error, since such a file doesn't exist for me) :) rr From nginx-forum at nginx.us Tue Apr 2 17:10:10 2013 From: nginx-forum at nginx.us (Larry) Date: Tue, 02 Apr 2013 13:10:10 -0400 Subject: Why use haproxy now ?
In-Reply-To: <4ED420F529E842909405BC0E2A9084A0@MasterPC> References: <4ED420F529E842909405BC0E2A9084A0@MasterPC> Message-ID: <5c269c51d749f329acce8d326d829a75.NginxMailingListEnglish@forum.nginx.org> Will you keep us in touch, Reinis? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237874,238033#msg-238033 From nginx-forum at nginx.us Tue Apr 2 17:11:04 2013 From: nginx-forum at nginx.us (Larry) Date: Tue, 02 Apr 2013 13:11:04 -0400 Subject: Split words into characters In-Reply-To: References: Message-ID: <87faf19b6896f3116b9d32a55b8095b7.NginxMailingListEnglish@forum.nginx.org> You are right, Reinis. It first seemed tough to me, but yeah, it works :) Thanks, Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238015,238034#msg-238034 From nginx-forum at nginx.us Tue Apr 2 18:43:18 2013 From: nginx-forum at nginx.us (skrode) Date: Tue, 02 Apr 2013 14:43:18 -0400 Subject: Nginx and upstart Message-ID: <74effe50811d3bfbfd0b90aba6d42251.NginxMailingListEnglish@forum.nginx.org> Hi, I'm using the following script to keep nginx up and running: start on (filesystem and net-device-up IFACE=lo) stop on runlevel [!2345] env DAEMON=/usr/sbin/nginx env CONF=/etc/nginx/nginx.conf env PID=/var/run/nginx.pid respawn respawn limit 10 5 pre-start script $DAEMON -t if [ $? -ne 0 ]; then exit $? fi end script exec $DAEMON -c $CONF -g "daemon off;" > /dev/null 2>&1 It works fine except if I kill the master process with e.g. kill -5. If I kill it with the kill command, the PID keeps changing every few seconds. Any idea how to fix this?
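[A side note on the pre-start stanza in the upstart job above: by the time `exit $?` runs, `$?` already holds the exit status of the `[ ]` test that just succeeded, so the stanza always exits 0 and the job starts even when `nginx -t` fails. A minimal hedged sketch of a more robust check, reusing the same env variables; note it also passes `-c $CONF` so the same file is tested that `exec` later loads:]

```
# Sketch only: pre-start for the upstart job quoted above.
# In the original, "exit $?" re-reads $? after the [ ] test has
# overwritten it, so a failed "nginx -t" never aborts the start.
pre-start script
    # Validate the exact config file the job will run with;
    # a non-zero exit here prevents the job from starting.
    $DAEMON -t -c $CONF || exit 1
end script
```

This does not change the respawn behaviour being asked about (with `respawn`, a new master and hence a new PID after every kill is expected); it only makes the config check actually effective.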
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238036,238036#msg-238036 From nginx-forum at nginx.us Tue Apr 2 22:49:55 2013 From: nginx-forum at nginx.us (spdyg) Date: Tue, 02 Apr 2013 18:49:55 -0400 Subject: SPDY + proxy cache static content failures In-Reply-To: <201304021921.35906.vbart@nginx.com> References: <201304021921.35906.vbart@nginx.com> Message-ID: <473898f040c56b4af9fcfb2b54a1f89a.NginxMailingListEnglish@forum.nginx.org> Hi guys, So perhaps I shouldn't have reused my existing topic and confused things, sorry! I only did so because the symptom is exactly the same as the issue I originally reported (SOME static content served from proxy cache occasionally doesn't make it to Firefox when using SPDY). If you disable either SPDY or the proxy cache the problem stops happening. The originally reported problem stopped occurring after a fix was implemented, but now I'm seeing it again since I put haproxy as the upstream (to get enhanced health checks etc). The difference this time is that I'm not seeing the 499 errors (or 444) in the nginx log. I'm not returning 444 (via config) in these scenarios either. 
Firefox shows this:

GET https://whatever/postrequest [HTTP/1.1 200 OK 45ms]
GET https://whatever/css1 [HTTP/1.1 304 OK 25ms]
GET https://whatever/css2 [HTTP/1.1 304 OK 260ms]
GET https://whatever/js1 [HTTP/1.1 304 OK 260ms]
GET https://whatever/js2 [HTTP/1.1 304 OK 35ms]
GET https://whatever/image1 [HTTP/1.1 304 OK 40ms]
GET https://whatever/image2 [HTTP/1.1 304 OK 40ms]
GET https://whatever/image3 [HTTP/1.1 304 OK 40ms]
GET https://whatever/image4 [HTTP/1.1 304 OK 40ms]
GET https://whatever/image5 [5ms]
GET https://google.com/analyticsjs [HTTP/1.1 304 Not Modified 50ms]
GET https://whatever/image6 [0ms]
GET https://whatever/image7 [0ms]
GET https://whatever/image8 [0ms]
GET https://whatever/image9 [0ms]
GET https://whatever/image10 [0ms]

So you can see that around the time it loads the 5th image things go wrong, and the images from the 5th onward all have no content. Firefox detail shows no response headers/content for the 5th image onward. The Haproxy and Nginx logs show 304 successfully for all of these requests, so I guess the connection is dropped somewhere and Nginx isn't picking it up and still thinks it worked fine. Any suggestions on how to proceed further on this? Happy to provide specific details/logs via email if required. Thanks, spdyg Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,238039#msg-238039 From nginx-forum at nginx.us Wed Apr 3 10:31:49 2013 From: nginx-forum at nginx.us (Sekhar) Date: Wed, 03 Apr 2013 06:31:49 -0400 Subject: Exact Client public certificate authentication using Nginx Message-ID: <1bd17eb258e2699019be55e5fe9abeb8.NginxMailingListEnglish@forum.nginx.org> Hi, I am relatively new to Nginx and below is what I need to achieve. I have an Nginx proxy server with the following key and certificate: ->Nginx_server_private_key.pem ->Nginx_server_public_cert.cer (Signed By Verisign CA) I have 3 clients who should be able to access the Nginx server based on their certificates.
Client 1 has the following key/certificate pair: ->Nginx_client1_private_key.pem ->Nginx_client1_public_cert.cer (Signed By Verisign CA) Similarly client 2: ->Nginx_client2_private_key.pem ->Nginx_client2_public_cert.cer (Signed by Verisign CA) Similarly client 3: ->Nginx_client3_private_key.pem ->Nginx_client3_public_cert.cer (Signed by Verisign CA) The server and clients will exchange their public certificates for mutual authentication. During the SSL handshake the Nginx server only validates the CA of the incoming public certificate, and if the CA is trusted, it allows the connection. By this logic any certificate signed by the same Verisign CA will be able to access my application. Question: 1. Can I configure Nginx to match the exact public certificate instead of verifying the signing CA? 2. Can I store the clients' public certificates in a key store directory and configure Nginx to verify the incoming client certificates based on the public certificates in that directory? In short, can I have a trust store or validation credential? Any help/suggestion is greatly appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238050,238050#msg-238050 From mdounin at mdounin.ru Wed Apr 3 10:53:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Apr 2013 14:53:06 +0400 Subject: Exact Client public certificate authentication using Nginx In-Reply-To: <1bd17eb258e2699019be55e5fe9abeb8.NginxMailingListEnglish@forum.nginx.org> References: <1bd17eb258e2699019be55e5fe9abeb8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130403105306.GV62550@mdounin.ru> Hello! On Wed, Apr 03, 2013 at 06:31:49AM -0400, Sekhar wrote: > Hi, > > I am relatively new to Nginx and below is what I need to achieve. > > I have an Nginx proxy server with the following key and certificate: > ->Nginx_server_private_key.pem > ->Nginx_server_public_cert.cer (Signed By Verisign CA) > > I have 3 clients who should be able to access the Nginx server based on > their certificates.
All their certificates are signed by the Verisign CA. > Client 1 has the following key/certificate pair: > ->Nginx_client1_private_key.pem > ->Nginx_client1_public_cert.cer (Signed By Verisign CA) > Similarly client 2: > ->Nginx_client2_private_key.pem > ->Nginx_client2_public_cert.cer (Signed by Verisign CA) > Similarly client 3: > ->Nginx_client3_private_key.pem > ->Nginx_client3_public_cert.cer (Signed by Verisign CA) > > The server and clients will exchange their public certificates for mutual > authentication. > > During the SSL handshake the Nginx server only validates the CA of the incoming > public certificate, and if the CA is trusted, it allows the connection. By > this logic any certificate signed by the same Verisign CA will be able to > access my application. > > Question: > 1. Can I configure Nginx to match the exact public certificate instead of > verifying the signing CA? No. A client certificate is considered good as long as it is verified successfully up to a trusted root certificate. What you can do, however, is to configure nginx to only allow access for particular DNs, e.g. by using if ($ssl_client_s_dn != "some-good-DN") { return 403; } More complex checks should probably use map, see http://nginx.org/r/map. -- Maxim Dounin http://nginx.org/en/donation.html From potxoka at gmail.com Wed Apr 3 13:04:27 2013 From: potxoka at gmail.com (Anto) Date: Wed, 3 Apr 2013 15:04:27 +0200 Subject: Optimize rewrite Message-ID: Hello, I have a script that works with Apache, but I want to migrate to nginx. I have this rule; maybe it can be done differently or optimized. ## HTACCESS RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-s RewriteRule ^(.*)$ index.php?url=$1 [QSA,NC,L] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^(.*)$ index.php [QSA,NC,L] RewriteCond %{REQUEST_FILENAME} -s RewriteRule ^(.*)$ index.php [QSA,NC,L] ## NGINX location / { rewrite ^/(.*)/(.*)$ /$1/index.php?url=$2; } Thanks !!
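[On the Apache rules just above: taken literally they send every request (existing files and directories included) to index.php, but the usual intent (serve files that exist, route everything else to index.php with the original URI) is normally written in nginx with try_files rather than a regex rewrite. A hedged sketch; the FastCGI address is a placeholder, not from the original post:]

```
# Sketch, not a drop-in config: try_files replaces the RewriteCond checks.
# Assumes index.php lives in the document root; the FastCGI address below
# is a placeholder for whatever PHP backend is actually in use.
location / {
    # Serve an existing file or directory; otherwise fall back to
    # index.php with the original URI as the "url" argument
    # ($args preserves the query string, like [QSA]).
    try_files $uri $uri/ /index.php?url=$uri&$args;
}

location = /index.php {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_pass 127.0.0.1:9000;  # placeholder PHP-FPM address
}
```

If requests for existing files and directories really should reach index.php too, exactly as in the .htaccess, a plain `rewrite ^ /index.php last;` inside `location /` reproduces that literal behaviour.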
Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb at waeme.net Wed Apr 3 13:08:26 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Wed, 3 Apr 2013 17:08:26 +0400 Subject: nginx-1.3.15 In-Reply-To: <20130326132948.GO62550@mdounin.ru> References: <20130326132948.GO62550@mdounin.ru> Message-ID: Hello We've added a new repository with pre-built Linux packages for nginx 1.3.*. Documentation/instructions: http://nginx.org/en/linux_packages.html#mainline The only differences in nginx configure options from the stable packages are the gunzip module in all distributions and the spdy module in Ubuntu 12.04 and 12.10, where openssl 1.0.1 is available. From kworthington at gmail.com Wed Apr 3 13:23:03 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 3 Apr 2013 09:23:03 -0400 Subject: nginx-1.2.8 In-Reply-To: <20130402125355.GN62550@mdounin.ru> References: <20130402125355.GN62550@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.2.8 For Windows http://goo.gl/YbLP0 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Apr 2, 2013 at 8:53 AM, Maxim Dounin wrote: > Changes with nginx 1.2.8 02 Apr > 2013 > > *) Bugfix: new sessions were not always stored if the > "ssl_session_cache > shared" directive was used and there was no free space in shared > memory. > Thanks to Piotr Sikora. > > *) Bugfix: responses might hang if subrequests were used and a DNS > error > happened during subrequest processing. > Thanks to Lanshun Zhou. > > *) Bugfix: in the ngx_http_mp4_module. > Thanks to Gernot Vormayr.
> > *) Bugfix: in backend usage accounting. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 3 13:30:40 2013 From: nginx-forum at nginx.us (Sekhar) Date: Wed, 03 Apr 2013 09:30:40 -0400 Subject: Exact Client public certificate authentication using Nginx In-Reply-To: <20130403105306.GV62550@mdounin.ru> References: <20130403105306.GV62550@mdounin.ru> Message-ID: <90cc3c901f75c728c5e32a8b3c741d60.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thanks for replying to the post. Below is my concern. Multiple certificates can have the same DN, and the DN name match will happen after the SSL handshake is complete using the root CA. It means the SSL layer is complete and we are doing authorization, not authentication. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238050,238059#msg-238059 From mdounin at mdounin.ru Wed Apr 3 14:06:38 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Apr 2013 18:06:38 +0400 Subject: Exact Client public certificate authentication using Nginx In-Reply-To: <90cc3c901f75c728c5e32a8b3c741d60.NginxMailingListEnglish@forum.nginx.org> References: <20130403105306.GV62550@mdounin.ru> <90cc3c901f75c728c5e32a8b3c741d60.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130403140638.GB62550@mdounin.ru> Hello! On Wed, Apr 03, 2013 at 09:30:40AM -0400, Sekhar wrote: > Hi Maxim, > > Thanks for replying to the post. Below is my concern. > > Multiple certificates can have the same DN, and the DN name match will happen > after the SSL handshake is complete using the root CA. It means the SSL > layer is complete and we are doing authorization, not authentication. The CA is supposed to ensure that the DN claimed in a certificate is correct; that's the whole point of PKI.
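[A hedged sketch of the map-based variant mentioned earlier in this thread, for allowing a set of known client-certificate DNs; the DN strings are made-up placeholders, not from the original posts:]

```
# Sketch: the map block belongs in the http{} context;
# the check goes inside the server{} that requires client certs.
# All DN values here are hypothetical placeholders.
map $ssl_client_s_dn $deny_client {
    default                   1;  # unknown DN: deny
    "/C=US/O=Example/CN=app1" 0;
    "/C=US/O=Example/CN=app2" 0;
    "/C=US/O=Example/CN=app3" 0;
}

# Inside the SSL server{} block:
if ($deny_client) {
    return 403;
}
```

The exact format of $ssl_client_s_dn depends on the nginx/OpenSSL version in use, so the keys should be taken from what the variable actually logs rather than written by hand.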
If you want to do authentication yourself without trusting the root CA used to issue certificates, you may do so in a similar manner by checking the whole certificate as available via $ssl_client_raw_cert variable. -- Maxim Dounin http://nginx.org/en/donation.html From steve at greengecko.co.nz Wed Apr 3 20:18:53 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 04 Apr 2013 09:18:53 +1300 Subject: upstream load balancing. Message-ID: <1365020333.5117.5311.camel@steve-new> Folks, I'm sharing processing load across 3 remote servers, and am having a terrible time getting it balanced. Here's the config upstream backend { server 192.168.162.218:9000 fail_timeout=30 max_fails=3 weight=1 ; # Engine 1 server 192.168.175.5:9000 fail_timeout=30 max_fails=3 weight=1; # Engine 2 server 192.168.175.213:9000 fail_timeout=30 max_fails=3 weight=1 ; # Engine 3 } When the server gets busy, all load seems to be put onto the final entry, which is seeing load averages in the 70's, whereas the first 2 are below 5. This is causing serious performance issues. How on earth can we force a more even loading? Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From vbart at nginx.com Wed Apr 3 22:20:36 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 4 Apr 2013 02:20:36 +0400 Subject: SPDY + proxy cache static content failures In-Reply-To: <473898f040c56b4af9fcfb2b54a1f89a.NginxMailingListEnglish@forum.nginx.org> References: <201304021921.35906.vbart@nginx.com> <473898f040c56b4af9fcfb2b54a1f89a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201304040220.38123.vbart@nginx.com> On Wednesday 03 April 2013 02:49:55 spdyg wrote: > Hi guys, > > So perhaps I shouldn't have reused my existing topic and confused things, > sorry! 
> > I only did so because the symptom is exactly the same as the issue I > originally reported (SOME static content served from proxy cache > occasionally doesn't make it to Firefox when using SPDY). If you disable > either SPDY or the proxy cache the problem stops happening. > > The originally reported problem stopped occurring after a fix was > implemented, but now I'm seeing it again since I put haproxy as the > upstream (to get enhanced health checks etc). > > The difference this time is that I'm not seeing the 499 errors (or 444) in > the nginx log. I'm not returning 444 (via config) in these scenarios > either. > > Firefox shows this: > > GET https://whatever/postrequest [HTTP/1.1 200 OK 45ms] > GET https://whatever/css1 [HTTP/1.1 304 OK 25ms] > GET https://whatever/css2 [HTTP/1.1 304 OK 260ms] > GET https://whatever/js1 [HTTP/1.1 304 OK 260ms] > GET https://whatever/js2 [HTTP/1.1 304 OK 35ms] > GET https://whatever/image1 [HTTP/1.1 304 OK 40ms] > GET https://whatever/image2 [HTTP/1.1 304 OK 40ms] > GET https://whatever/image3 [HTTP/1.1 304 OK 40ms] > GET https://whatever/image4 [HTTP/1.1 304 OK 40ms] > GET https://whatever/image5 [5ms] > GET https://google.com/analyticsjs [HTTP/1.1 304 Not Modified 50ms] > GET https://whatever/image6 [0ms] > GET https://whatever/image7 [0ms] > GET https://whatever/image8 [0ms] > GET https://whatever/image9 [0ms] > GET https://whatever/image10 [0ms] > > So you can see that around the time it loads the 5th image things go wrong > and the 5th onward images all have no content. Firefox detail shows no > response headers/content for the 5th image onward. > > The Haproxy and Nginx logs show 304 successfully for all of these requests, > so I guess the connection is dropped somewhere and Nginx isn't picking it > up and still thinks it worked fine. > > Any suggestions on how to proceed further on this? Happy to provide > specific details/logs via email if required. > Is there anything in the error log? wbr, Valentin V. 
Bartenev -- http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Apr 3 22:43:15 2013 From: nginx-forum at nginx.us (spdyg) Date: Wed, 03 Apr 2013 18:43:15 -0400 Subject: SPDY + proxy cache static content failures In-Reply-To: <201304040220.38123.vbart@nginx.com> References: <201304040220.38123.vbart@nginx.com> Message-ID: No, there's nothing in the error log. The access log shows 304's for all requests that failed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,238073#msg-238073 From nginx-forum at nginx.us Thu Apr 4 01:58:57 2013 From: nginx-forum at nginx.us (tssungeng) Date: Wed, 03 Apr 2013 21:58:57 -0400 Subject: How can I limit the total speed of a port or domain name? Message-ID: <7d8a065e61f1d9aaf0a010cc7a085a01.NginxMailingListEnglish@forum.nginx.org> You know, the ngx_http_limit_zone_module can limit the speed of an IP. I want to limit the total speed of a port or a domain name. For example: there is a server (whose total speed is limited by the IDC to 10M/s) running Nginx, listening on two ports, 8080 and 8090. Now I want to limit the total speed of 8090 to 5M/s, no matter how many IPs and no matter how many connections per IP. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238074,238074#msg-238074 From davide.damico at contactlab.com Thu Apr 4 07:01:17 2013 From: davide.damico at contactlab.com (Davide D'Amico) Date: Thu, 04 Apr 2013 09:01:17 +0200 Subject: Nginx RP: a lot of rewrite rules Message-ID: <89f30638476b5b336e85e7bf53e139b3@sys.tomatointeractive.it> Hi, I have two FreeBSD 9.1 VMs (2 cores, 4 GB RAM) with nginx-1.2.7 in active/passive mode (using CARP), acting as a reverse proxy to 4 apache22 backends (2 cores, 4 GB RAM). I need to set up 800 simple rewrite rules, such as: rewrite ^/something/foo.html /bar.html permanent; I cannot use any regexp to optimize all these rules, and I have no scripting language enabled (the vhosts will serve only static pages; the apache22 backends will use SSI too).
Here is the question: is it better (in terms of performance, reliability, and load on the reverse proxy) to put the rewrite rules on the nginx reverse proxy, or on the apache22 backends? Thanks, d. From nginx-forum at nginx.us Thu Apr 4 08:16:30 2013 From: nginx-forum at nginx.us (etrader) Date: Thu, 04 Apr 2013 04:16:30 -0400 Subject: location vs. rewrite In-Reply-To: <51598403.9010800@citrin.ru> References: <51598403.9010800@citrin.ru> Message-ID: Anton Yuzhaninov Wrote: ------------------------------------------------------- > > rewrite ^/favicon.ico /var/www/media/images/favicon.X.ico last; > > This is wrong - you can only rewrite to another location, not to a path > on the file system. > Anyway, a rewrite is not needed here; use a location. Sorry, my bad! In the second case, I meant rewrite ^/favicon.ico /media/images/favicon.X.ico last; but anyway, it seems that location works better in general. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238002,238078#msg-238078 From roberto at unbit.it Thu Apr 4 08:17:40 2013 From: roberto at unbit.it (Roberto De Ioris) Date: Thu, 4 Apr 2013 10:17:40 +0200 Subject: [semi-OT] GridFS uWSGI plugin Message-ID: <9e65ada2e77676ff56a4ea4a3aaebd86.squirrel@manage.unbit.it> Hi everyone, the latest uWSGI release got GridFS support: https://uwsgi-docs.readthedocs.org/en/latest/GridFS.html In my company we are using it behind nginx for serving items from a replica set: https://uwsgi-docs.readthedocs.org/en/latest/GridFS.html#combining-with-nginx Currently the plugin lacks "range" support and MongoDB authentication, but they will be added in the next releases. I hope it can be useful. -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Thu Apr 4 08:30:43 2013 From: nginx-forum at nginx.us (double) Date: Thu, 04 Apr 2013 04:30:43 -0400 Subject: Is it possible that nginx will not buffer the client body?
In-Reply-To: <20130314141348.GV8912@reaktio.net> References: <20130314141348.GV8912@reaktio.net> Message-ID: <179d688cbc0a59a79d41868989742e7f.NginxMailingListEnglish@forum.nginx.org> > Weibin: Have you thought of upstreaming the no_buffer patch to nginx 1.3.x > so it could become part of next nginx stable version 1.4 ? > It'd be really nice to have the no_buffer functionality in stock nginx! I agree. Nginx is such an excellent product - but this is a missing feature. Thanks Markus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,238080#msg-238080 From francis at daoine.org Thu Apr 4 08:39:19 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 4 Apr 2013 09:39:19 +0100 Subject: How can I limit the total speed of a port or domain name? In-Reply-To: <7d8a065e61f1d9aaf0a010cc7a085a01.NginxMailingListEnglish@forum.nginx.org> References: <7d8a065e61f1d9aaf0a010cc7a085a01.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130404083919.GD21631@craic.sysops.org> On Wed, Apr 03, 2013 at 09:58:57PM -0400, tssungeng wrote: Hi there, > You know,the ngx_http_limit_zone_module can limit the speed of an IP . Can it? If you can show your configuration that does that, then it should be straightforward to modify it to use the server port or server name instead of the ip address as the controlling variable. I'm not aware of a stock-nginx way of limiting speed, other than per-request. f -- Francis Daly francis at daoine.org From crirus at gmail.com Thu Apr 4 08:43:06 2013 From: crirus at gmail.com (Cristian Rusu) Date: Thu, 4 Apr 2013 11:43:06 +0300 Subject: url escape variable Message-ID: Hello I have this issue trying to use post_action My config is like this: location / { post_action /afterdownload; ... ... ... location /afterdownload { proxy_pass http://78.140.165.80/download_counter.php?FileName=$request&status=$request_completion ; internal; } How to escape the content of $request so it's a valid URL? 
Right now the value there is like (from the log) 78.140.165.80 - - [04/Apr/2013:10:36:53 +0200] "GET /download_counter.php?FileName=GET /storage/files/0/1/17/176/myvideo.avi?md=IfHSTECs I get a 400 (Bad request) for this call, and I guess it's because of the space after GET inside this variable. How can I either escape it or run a replace on it to remove the leading "GET "? --------------------------------------------------------------- Cristian Rusu Web Development & Electronic Publishing Phone: 40 723 625 808 -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaurus at gmail.com Thu Apr 4 08:46:14 2013 From: skaurus at gmail.com (Dmitriy Shalashov) Date: Thu, 4 Apr 2013 12:46:14 +0400 Subject: reset_timedout_connection In-Reply-To: References: Message-ID: Heeey :-) Is it safe to enable it? Best regards, Dmitriy Shalashov 2013/3/29 Dmitriy Shalashov > Hi. > > Why is this parameter disabled by default? > > Best regards, > Dmitriy Shalashov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Thu Apr 4 09:08:54 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 4 Apr 2013 13:08:54 +0400 Subject: reset_timedout_connection In-Reply-To: References: Message-ID: <106DB408-1D0B-4FC2-8A81-E80A55B94623@sysoev.ru> On Apr 4, 2013, at 12:46 , Dmitriy Shalashov wrote: > Heeey :-) > > Is it safe to enable it? Yes, it is safe. Probably, it should be enabled by default. -- Igor Sysoev > 2013/3/29 Dmitriy Shalashov > Hi. > > Why is this parameter disabled by default? > > Best regards, > Dmitriy Shalashov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From skaurus at gmail.com Thu Apr 4 09:10:45 2013 From: skaurus at gmail.com (Dmitriy Shalashov) Date: Thu, 4 Apr 2013 13:10:45 +0400 Subject: reset_timedout_connection In-Reply-To: <106DB408-1D0B-4FC2-8A81-E80A55B94623@sysoev.ru> References: <106DB408-1D0B-4FC2-8A81-E80A55B94623@sysoev.ru> Message-ID: Thank you! Best regards, Dmitriy Shalashov 2013/4/4 Igor Sysoev > On Apr 4, 2013, at 12:46 , Dmitriy Shalashov wrote: > > Heeey :-) > > Is it safe to enable it? > > > Yes, it is safe. Probably, it should be enabled by default. > > > -- > Igor Sysoev > > 2013/3/29 Dmitriy Shalashov > >> Hi. >> >> Why is this parameter disabled by default? >> >> Best regards, >> Dmitriy Shalashov >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Apr 4 09:34:12 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 4 Apr 2013 10:34:12 +0100 Subject: Nginx RP: a lot of rewrite rules In-Reply-To: <89f30638476b5b336e85e7bf53e139b3@sys.tomatointeractive.it> References: <89f30638476b5b336e85e7bf53e139b3@sys.tomatointeractive.it> Message-ID: <20130404093412.GA23034@craic.sysops.org> On Thu, Apr 04, 2013 at 09:01:17AM +0200, Davide D'Amico wrote: Hi there, > Is it better to use (in terms of performances, reliability, load on the > reverse proxy) the rewrite rules on the nginx reverse proxy, or use the > rewrite rules on the apache22 backend? Completely untested, but it seems that "the earliest place you can do it" is the right place to do it, at least for a straightforward mechanical transformation like this. So I'd say "on nginx".
When you are doing the load-testing, you might want to consider comparing: * rewrite ^/old /new permanent; * location = /old { return 301 /new; } * map $uri $new { default 0; /old /new; } if ($new) { return 301 $new; } (Confirm the syntax before testing; these are just some alternative suggestions.) I suspect that the suggestions are increasingly more efficient -- but you presumably have the interest to find out for sure, on your hardware and with your expected loads. Cheers, f -- Francis Daly francis at daoine.org From r at roze.lv Thu Apr 4 09:39:43 2013 From: r at roze.lv (Reinis Rozitis) Date: Thu, 4 Apr 2013 12:39:43 +0300 Subject: Optimize rewrite In-Reply-To: References: Message-ID: <8DEE41783A264DDDA593F8F2BB2682AB@MasterPC> > > location / { > rewrite ^/(.*)/(.*)$ /$1/index.php?url=$2; > } I would suggest using try_files: location / { try_files $uri $uri/ /index.php?url=$uri&$args; } Personally, instead of passing the URI in the 'url' param, I like to use try_files $uri $uri/ /index.php?$args; and then in the PHP code the URL is available in $_SERVER['REQUEST_URI'] (or $_SERVER['DOCUMENT_URI']); that way there is no possibility to accidentally mess with the GET variables. rr From davide.damico at contactlab.com Thu Apr 4 10:20:42 2013 From: davide.damico at contactlab.com (Davide D'Amico) Date: Thu, 04 Apr 2013 12:20:42 +0200 Subject: Nginx RP: a lot of rewrite rules In-Reply-To: <20130404093412.GA23034@craic.sysops.org> References: <89f30638476b5b336e85e7bf53e139b3@sys.tomatointeractive.it> <20130404093412.GA23034@craic.sysops.org> Message-ID: <515D53FA.4070705@contactlab.com> On 04/04/13 11:34, Francis Daly wrote: > On Thu, Apr 04, 2013 at 09:01:17AM +0200, Davide D'Amico wrote: > > Hi there, > >> Is it better to use (in terms of performances, reliability, load on the >> reverse proxy) the rewrite rules on the nginx reverse proxy, or use the >> rewrite rules on the apache22 backend?
> > Completely untested, but it seems that "the earliest place you can do it" > is the right place to do it, at least for a straightforward mechanical > transformation like this. > > So I'd say "on nginx". > > When you are doing the load-testing, you might want to consider comparing: > > * rewrite ^/old /new permanent; > * location = /old { return 301 /new; } > * map new $uri { default 0; /old /new; } > if ($new) { return 301 /new; } > > (Confirm the syntax before testing; these are just some alternative > suggestions.) > > I suspect that the suggestions are increasingly more efficient -- but > you presumably have the interest to find out for sure, on your hardware > and with your expected loads. > Thank you Francis, but I cannot "group" all the rewrites I have, so I am starting by using all these rewrites on the backends (where I have rewrite maps, too); later I'll test them on nginx. d. From ezwetkow at gmx.de Thu Apr 4 10:36:33 2013 From: ezwetkow at gmx.de (Elena Zwetkow) Date: Thu, 4 Apr 2013 12:36:33 +0200 (CEST) Subject: Aw: [semi-OT] GridFS uWSGI plugin In-Reply-To: <9e65ada2e77676ff56a4ea4a3aaebd86.squirrel@manage.unbit.it> References: <9e65ada2e77676ff56a4ea4a3aaebd86.squirrel@manage.unbit.it> Message-ID: Thanks for sharing, sounds really good. Sent: Thursday, 04 April 2013 at 10:17 From: "Roberto De Ioris" To: nginx at nginx.org Subject: [semi-OT] GridFS uWSGI plugin Hi everyone, the latest uWSGI release has GridFS support: https://uwsgi-docs.readthedocs.org/en/latest/GridFS.html In my company we are using it behind nginx to serve items from a replica set: https://uwsgi-docs.readthedocs.org/en/latest/GridFS.html#combining-with-nginx The plugin currently lacks "range" support and MongoDB authentication, but they will be added in the next releases. I hope it can be useful.
-- Roberto De Ioris http://unbit.it _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Apr 4 10:40:55 2013 From: nginx-forum at nginx.us (johnnybut7) Date: Thu, 04 Apr 2013 06:40:55 -0400 Subject: Redirect 404 errors to the home page Message-ID: <72fceb41f131d5a1bb4a1de15a71eb55.NginxMailingListEnglish@forum.nginx.org> Hi, I've received a request stating that the website owner wants all 404 error responses redirected to the home page, as it's affecting their SEO ranking (I'm not sure about this myself, but that's another issue). It's a Rails application, so I currently have a static page displayed, and I even applied some JavaScript to redirect to the home page, but the SEO tool still states there are X broken links and that redirecting to the home page via the server config will fix this. I can't seem to get this to work in the nginx config; can anyone help me? I've tried various solutions but nothing is working 100%. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238094,238094#msg-238094 From nginx-forum at nginx.us Thu Apr 4 11:59:27 2013 From: nginx-forum at nginx.us (ntib1) Date: Thu, 04 Apr 2013 07:59:27 -0400 Subject: proxy_cache_key and $content_length Message-ID: Hi, I'd like to put $content_length in proxy_cache_key so that nginx checks whether the file has changed and, if so, serves the new file instead of the old one. But $content_length is always empty. Why? Or do you have another solution for my problem?
Thanks, Thibaut Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238097,238097#msg-238097 From rkearsley at blueyonder.co.uk Thu Apr 4 12:09:51 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Thu, 04 Apr 2013 13:09:51 +0100 Subject: proxy_cache_key and $content_length In-Reply-To: References: Message-ID: <515D6D8F.6040801@blueyonder.co.uk> Hi, That's a good idea, but I think it's not possible: the key is set before the request is sent to the backend, but the content length can only be known after the request has been sent to the backend (catch-22). On 04/04/13 12:59, ntib1 wrote: > Hi, > > I'd like to put $content_length in proxy_cache_key in order nginx to check > if file had changed and send it instead of old file if it's case. > > But the $content_length is always empty. Why ? > > Or do you have another solution for my problem ? > > Thanks, > > Thibaut > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238097,238097#msg-238097 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From contact at jpluscplusm.com Thu Apr 4 12:15:57 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 4 Apr 2013 13:15:57 +0100 Subject: Redirect 404 errors to the home page In-Reply-To: <72fceb41f131d5a1bb4a1de15a71eb55.NginxMailingListEnglish@forum.nginx.org> References: <72fceb41f131d5a1bb4a1de15a71eb55.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 4 April 2013 11:40, johnnybut7 wrote: > Hi, > > Ive received a request stating that the website owner wants all 404 error > messages redirected to the home page as its affecting their seo ranking(im > not sure about this myself but thats another issue).
Its a rails application > so i have static page displayed currently and even applied some javascript > to redirect to the home page but the seo tool still states there are X > amount of broken links and that redirecting to the home page via the server > config will fix this. > > I cant seem to get this to work in the nginx config, can anyone help me? > Ive tried various solutions but nothing is working 100%. http://nginx.org/r/error_page with http://nginx.org/r/proxy_intercept_errors should do what you want. Let us know how you get on. Jonathan From nginx-forum at nginx.us Thu Apr 4 12:32:02 2013 From: nginx-forum at nginx.us (johnnybut7) Date: Thu, 04 Apr 2013 08:32:02 -0400 Subject: Redirect 404 errors to the home page In-Reply-To: References: Message-ID: <4e9c09f4082fc480bbc679c1ad2c9b6d.NginxMailingListEnglish@forum.nginx.org> Thanks, I've already used those resources, trying the line below, but it doesn't make a difference; the Rails static page is still being served. I think maybe I need to disable that first. Thanks for your help. error_page 404 =301 http://example.com/notfound.html; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238094,238103#msg-238103 From nginx-forum at nginx.us Thu Apr 4 12:52:31 2013 From: nginx-forum at nginx.us (ntib1) Date: Thu, 04 Apr 2013 08:52:31 -0400 Subject: proxy_cache_key and $content_length In-Reply-To: <515D6D8F.6040801@blueyonder.co.uk> References: <515D6D8F.6040801@blueyonder.co.uk> Message-ID: <6abdad49d93872a3c901d5bac8f6da0e.NginxMailingListEnglish@forum.nginx.org> OK, that's what I supposed, but I wasn't sure. Is there an alternative?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238097,238104#msg-238104 From nginx-forum at nginx.us Thu Apr 4 12:54:22 2013 From: nginx-forum at nginx.us (mottwsc) Date: Thu, 04 Apr 2013 08:54:22 -0400 Subject: SOLVED: securing access to a folder - 404 error In-Reply-To: <20130319091126.GG18002@craic.sysops.org> References: <20130319091126.GG18002@craic.sysops.org> Message-ID: <7960677a823037fd063f4b8801da0406.NginxMailingListEnglish@forum.nginx.org> This is what was done to solve the problem. I am providing the two relevant location blocks. # protect the "secure" folder ( /var/www/html/secure ) location /secure/ { auth_basic "Restricted"; auth_basic_user_file /var/www/protected/.htpasswd; } # This is required to protect individual files inside the directory location ~ ^/secure/.*\.php$ { auth_basic "Restricted Area"; auth_basic_user_file /var/www/protected/.htpasswd; fastcgi_pass 127.0.0.1:9010; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237196,238105#msg-238105 From mdounin at mdounin.ru Thu Apr 4 14:14:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Apr 2013 18:14:56 +0400 Subject: upstream load balancing. In-Reply-To: <1365020333.5117.5311.camel@steve-new> References: <1365020333.5117.5311.camel@steve-new> Message-ID: <20130404141456.GG62550@mdounin.ru> Hello! On Thu, Apr 04, 2013 at 09:18:53AM +1300, Steve Holdoway wrote: > Folks, > > I'm sharing processing load across 3 remote servers, and am having a > terrible time getting it balanced. 
> > Here's the config > > upstream backend { > server 192.168.162.218:9000 fail_timeout=30 max_fails=3 weight=1 > ; # Engine 1 > server 192.168.175.5:9000 fail_timeout=30 max_fails=3 weight=1; > # Engine 2 > server 192.168.175.213:9000 fail_timeout=30 max_fails=3 weight=1 > ; # Engine 3 > } > > > When the server gets busy, all load seems to be put onto the final > entry, which is seeing load averages in the 70's, whereas the first 2 > are below 5. > > This is causing serious performance issues. How on earth can we force a > more even loading? The above configuration should result in an equal number of requests to each of the backends. This may not be the same as equal load in terms of load averages, especially if the servers are not equal. You may use the "weight=" parameter to tune request distribution more precisely. Alternatively, you may consider using the least_conn balancing algorithm for automatic balancing based on the number of currently active connections to the configured upstream servers. See http://nginx.org/r/least_conn for details.
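The least_conn variant suggested above can be sketched against the quoted upstream block like this (same hypothetical backend addresses; note that least_conn requires nginx 1.2.2/1.3.1 or later):

```nginx
upstream backend {
    least_conn;  # send each request to the backend with the fewest active connections
    server 192.168.162.218:9000 fail_timeout=30 max_fails=3;  # Engine 1
    server 192.168.175.5:9000   fail_timeout=30 max_fails=3;  # Engine 2
    server 192.168.175.213:9000 fail_timeout=30 max_fails=3;  # Engine 3
}
```

With least_conn, the equal weight=1 parameters can simply be dropped; weights are still honoured if set.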
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Apr 4 14:21:57 2013 From: nginx-forum at nginx.us (ankurs) Date: Thu, 04 Apr 2013 10:21:57 -0400 Subject: nginx keeps crashing Message-ID: <9f3499fc22dff97d2d7515041db66a58.NginxMailingListEnglish@forum.nginx.org> *** glibc detected *** nginx: worker process: double free or corruption (!prev): 0x0000000008281500 *** ======= Backtrace: ========= /lib64/libc.so.6(+0x760e6)[0x7f10da7370e6] /lib64/libc.so.6(+0x78c13)[0x7f10da739c13] /lib64/libc.so.6(fclose+0x14d)[0x7f10da72774d] nginx: worker process[0x46db59] nginx: worker process[0x46e578] nginx: worker process[0x46b2f7] nginx: worker process(ngx_http_core_content_phase+0x2c)[0x430734] nginx: worker process(ngx_http_core_run_phases+0x23)[0x42b4d3] nginx: worker process(ngx_http_handler+0xd6)[0x42b5cd] nginx: worker process(ngx_http_internal_redirect+0x132)[0x42f7b7] nginx: worker process(ngx_http_perl_handle_request+0x1a0)[0x468423] nginx: worker process[0x468492] nginx: worker process(ngx_http_core_content_phase+0x2c)[0x430734] nginx: worker process(ngx_http_core_run_phases+0x23)[0x42b4d3] nginx: worker process(ngx_http_handler+0xd6)[0x42b5cd] nginx: worker process[0x434bf6] nginx: worker process[0x4351da] nginx: worker process[0x435711] nginx: worker process[0x432fbf] nginx: worker process(ngx_event_process_posted+0x36)[0x42139a] nginx: worker process(ngx_process_events_and_timers+0x115)[0x42126a] nginx: worker process[0x426e63] nginx: worker process(ngx_spawn_process+0x43d)[0x4257b4] nginx: worker process(ngx_master_process_cycle+0x42b)[0x4275ff] nginx: worker process(main+0xa0b)[0x40d012] /lib64/libc.so.6(__libc_start_main+0xfd)[0x7f10da6dfcdd] nginx: worker process[0x40b829] ======= Memory map: ======== 00400000-00492000 r-xp 00000000 41:13 2362864 /usr/local/nginx/sbin/nginx 00691000-006a1000 rw-p 00091000 41:13 2362864 /usr/local/nginx/sbin/nginx 006a1000-006b0000 rw-p 00000000 00:00 0 01909000-01a2f000 rw-p 00000000 00:00 0 
01a2f000-01b39000 rw-p 00000000 00:00 0 01b39000-0fae3000 rw-p 00000000 00:00 0 7f10cc043000-7f10d2443000 rw-s 00000000 00:04 1313229410 /dev/zero (deleted) 7f10d2443000-7f10d8843000 rw-s 00000000 00:04 1313229409 /dev/zero (deleted) 7f10d9a24000-7f10d9a3a000 r-xp 00000000 41:13 1310732 /lib64/libgcc_s-4.4.7-20120601.so.1 7f10d9a3a000-7f10d9c39000 ---p 00016000 41:13 1310732 /lib64/libgcc_s-4.4.7-20120601.so.1 7f10d9c39000-7f10d9c3a000 rw-p 00015000 41:13 1310732 /lib64/libgcc_s-4.4.7-20120601.so.1 7f10d9c43000-7f10d9c4f000 r-xp 00000000 41:13 1311021 /lib64/libnss_files-2.12.so 7f10d9c4f000-7f10d9e4f000 ---p 0000c000 41:13 1311021 /lib64/libnss_files-2.12.so 7f10d9e4f000-7f10d9e50000 r--p 0000c000 41:13 1311021 /lib64/libnss_files-2.12.so 7f10d9e50000-7f10d9e51000 rw-p 0000d000 41:13 1311021 /lib64/libnss_files-2.12.so 7f10d9e51000-7f10d9e54000 r-xp 00000000 41:13 2757508 /usr/lib64/perl5/auto/MIME/Base64/Base64.so 7f10d9e54000-7f10da053000 ---p 00003000 41:13 2757508 /usr/lib64/perl5/auto/MIME/Base64/Base64.so 7f10da053000-7f10da054000 rw-p 00002000 41:13 2757508 /usr/lib64/perl5/auto/MIME/Base64/Base64.so 7f10da054000-7f10da058000 r-xp 00000000 41:13 2625506 /usr/lib64/perl5/auto/Digest/MD5/MD5.so 7f10da058000-7f10da257000 ---p 00004000 41:13 2625506 /usr/lib64/perl5/auto/Digest/MD5/MD5.so 7f10da257000-7f10da258000 rw-p 00003000 41:13 2625506 /usr/lib64/perl5/auto/Digest/MD5/MD5.so 7f10da258000-7f10da25f000 r-xp 00000000 41:13 2361574 /usr/local/lib64/perl5/auto/nginx/nginx.so 7f10da25f000-7f10da45e000 ---p 00007000 41:13 2361574 /usr/local/lib64/perl5/auto/nginx/nginx.so 7f10da45e000-7f10da45f000 rw-p 00006000 41:13 2361574 /usr/local/lib64/perl5/auto/nginx/nginx.so 7f10da45f000-7f10da4bc000 r-xp 00000000 41:13 1310726 /lib64/libfreebl3.so 7f10da4bc000-7f10da6bb000 ---p 0005d000 41:13 1310726 /lib64/libfreebl3.so 7f10da6bb000-7f10da6bc000 r--p 0005c000 41:13 1310726 /lib64/libfreebl3.so 7f10da6bc000-7f10da6bd000 rw-p 0005d000 41:13 1310726 /lib64/libfreebl3.so 
7f10da6bd000-7f10da6c1000 rw-p 00000000 00:00 0 7f10da6c1000-7f10da84b000 r-xp 00000000 41:13 1310730 /lib64/libc-2.12.so 7f10da84b000-7f10daa4a000 ---p 0018a000 41:13 1310730 /lib64/libc-2.12.so 7f10daa4a000-7f10daa4e000 r--p 00189000 41:13 1310730 /lib64/libc-2.12.so 7f10daa4e000-7f10daa4f000 rw-p 0018d000 41:13 1310730 /lib64/libc-2.12.so 7f10daa4f000-7f10daa54000 rw-p 00000000 00:00 0 7f10daa54000-7f10daa56000 r-xp 00000000 41:13 1311077 /lib64/libutil-2.12.so 7f10daa56000-7f10dac55000 ---p 00002000 41:13 1311077 /lib64/libutil-2.12.so 7f10dac55000-7f10dac56000 r--p 00001000 41:13 1311077 /lib64/libutil-2.12.so 7f10dac56000-7f10dac57000 rw-p 00002000 41:13 1311077 /lib64/libutil-2.12.so 7f10dac57000-7f10dacda000 r-xp 00000000 41:13 1310761 /lib64/libm-2.12.so 7f10dacda000-7f10daed9000 ---p 00083000 41:13 1310761 /lib64/libm-2.12.so 7f10daed9000-7f10daeda000 r--p 00082000 41:13 1310761 /lib64/libm-2.12.so 7f10daeda000-7f10daedb000 rw-p 00083000 41:13 1310761 /lib64/libm-2.12.so 7f10daedb000-7f10daedd000 r-xp 00000000 41:13 1310757 /lib64/libdl-2.12.so 7f10daedd000-7f10db0dd000 ---p 00002000 41:13 1310757 /lib64/libdl-2.12.so 7f10db0dd000-7f10db0de000 r--p 00002000 41:13 1310757 /lib64/libdl-2.12.so 7f10db0de000-7f10db0df000 rw-p 00003000 41:13 1310757 /lib64/libdl-2.12.so 7f10db0df000-7f10db0f5000 r-xp 00000000 41:13 1310798 /lib64/libnsl-2.12.so 7f10db0f5000-7f10db2f4000 ---p 00016000 41:13 1310798 /lib64/libnsl-2.12.so 7f10db2f4000-7f10db2f5000 r--p 00015000 41:13 1310798 /lib64/libnsl-2.12.so 7f10db2f5000-7f10db2f6000 rw-p 00016000 41:13 1310798 /lib64/libnsl-2.12.so 7f10db2f6000-7f10db2f8000 rw-p 00000000 00:00 0 7f10db2f8000-7f10db30e000 r-xp 00000000 41:13 1311069 /lib64/libresolv-2.12.so 7f10db30e000-7f10db50e000 ---p 00016000 41:13 1311069 /lib64/libresolv-2.12.so 7f10db50e000-7f10db50f000 r--p 00016000 41:13 1311069 /lib64/libresolv-2.12.so 7f10db50f000-7f10db510000 rw-p 00017000 41:13 1311069 /lib64/libresolv-2.12.so 7f10db510000-7f10db512000 rw-p 
00000000 00:00 0 7f10db512000-7f10db674000 r-xp 00000000 41:13 2359842 /usr/lib64/perl5/CORE/libperl.so 7f10db674000-7f10db874000 ---p 00162000 41:13 2359842 /usr/lib64/perl5/CORE/libperl.so 7f10db874000-7f10db87d000 rw-p 00162000 41:13 2359842 /usr/lib64/perl5/CORE/libperl.so 7f10db87d000-7f10db892000 r-xp 00000000 41:13 1310783 /lib64/libz.so.1.2.3 7f10db892000-7f10dba91000 ---p 00015000 41:13 1310783 /lib64/libz.so.1.2.3 7f10dba91000-7f10dba92000 r--p 00014000 41:13 1310783 /lib64/libz.so.1.2.3 7f10dba92000-7f10dba93000 rw-p 00015000 41:13 1310783 /lib64/libz.so.1.2.3 7f10dba93000-7f10dbc07000 r-xp 00000000 41:13 2361229 /usr/lib64/libcrypto.so.1.0.0 7f10dbc07000-7f10dbe06000 ---p 00174000 41:13 2361229 /usr/lib64/libcrypto.so.1.0.0 7f10dbe06000-7f10dbe1f000 r--p 00173000 41:13 2361229 /usr/lib64/libcrypto.so.1.0.0 7f10dbe1f000-7f10dbe29000 rw-p 0018c000 41:13 2361229 /usr/lib64/libcrypto.so.1.0.0 7f10dbe29000-7f10dbe2d000 rw-p 00000000 00:00 0 7f10dbe2d000-7f10dbe59000 r-xp 00000000 41:13 1310821 /lib64/libpcre.so.0.0.1 7f10dbe59000-7f10dc058000 ---p 0002c000 41:13 1310821 /lib64/libpcre.so.0.0.1 7f10dc058000-7f10dc059000 rw-p 0002b000 41:13 1310821 /lib64/libpcre.so.0.0.1 7f10dc059000-7f10dc060000 r-xp 00000000 41:13 1310749 /lib64/libcrypt-2.12.so 7f10dc060000-7f10dc260000 ---p 00007000 41:13 1310749 /lib64/libcrypt-2.12.so 7f10dc260000-7f10dc261000 r--p 00007000 41:13 1310749 /lib64/libcrypt-2.12.so 7f10dc261000-7f10dc262000 rw-p 00008000 41:13 1310749 /lib64/libcrypt-2.12.so 7f10dc262000-7f10dc290000 rw-p 00000000 00:00 0 7f10dc290000-7f10dc2a7000 r-xp 00000000 41:13 1310754 /lib64/libpthread-2.12.so 7f10dc2a7000-7f10dc4a7000 ---p 00017000 41:13 1310754 /lib64/libpthread-2.12.so 7f10dc4a7000-7f10dc4a8000 r--p 00017000 41:13 1310754 /lib64/libpthread-2.12.so 7f10dc4a8000-7f10dc4a9000 rw-p 00018000 41:13 1310754 /lib64/libpthread-2.12.so 7f10dc4a9000-7f10dc4ad000 rw-p 00000000 00:00 0 7f10dc4ad000-7f10dc4cd000 r-xp 00000000 41:13 1310723 /lib64/ld-2.12.so 
7f10dc4cd000-7f10dc6ba000 rw-p 00000000 00:00 0 7f10dc6ba000-7f10dc6c2000 rw-p 00000000 00:00 0 7f10dc6c9000-7f10dc6ca000 rw-p 00000000 00:00 0 7f10dc6ca000-7f10dc6cb000 rw-s 00000000 00:04 1292546255 /dev/zero (deleted) 7f10dc6cb000-7f10dc6cc000 rw-p 00000000 00:00 0 7f10dc6cc000-7f10dc6cd000 r--p 0001f000 41:13 1310723 /lib64/ld-2.12.so 7f10dc6cd000-7f10dc6ce000 rw-p 00020000 41:13 1310723 /lib64/ld-2.12.so 7f10dc6ce000-7f10dc6cf000 rw-p 00000000 00:00 0 7fffe48ca000-7fffe48df000 rw-p 00000000 00:00 0 [stack] 7fffe494c000-7fffe494d000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] ------------------------------------------------------------- [root at localhost sbin]# ./nginx -V nginx version: nginx/1.2.7 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) configure arguments: --with-http_stub_status_module --with-http_perl_module --with-http_flv_module --add-module=nginx_mod_h264_streaming/ ------------------------------------------------------------- worker_processes 4; worker_rlimit_nofile 20480; worker_rlimit_sigpending 62768; #error_log logs/error.log; #error_log logs/error.log notice; error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 5120; } http { include mime.types; default_type application/octet-stream; log_format xfstube '$arg_id|$arg_usr|$arg_dmode|$remote_addr|$body_bytes_sent|$arg_embed|$status'; error_log logs/error.log error; limit_conn_log_level warn; access_log off; #sendfile on; #tcp_nopush on; keepalive_timeout 0; #keepalive_timeout 65; #gzip on; limit_conn_zone $binary_remote_addr zone=addr:100m; limit_conn_zone $binary_remote_addr zone=one:100m; perl_modules perl; perl_require somefile.pm; perl_modules perl; perl_require somefile.pm; include sites/*.conf; } ------------------------------------------------------------- What could be causing this crash ? also is --with-http_mp4_module stable enough to be replace h264.code-shop module. 
Any help to fix this would be greatly appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238109,238109#msg-238109 From mdounin at mdounin.ru Thu Apr 4 14:48:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Apr 2013 18:48:59 +0400 Subject: nginx keeps crashing In-Reply-To: <9f3499fc22dff97d2d7515041db66a58.NginxMailingListEnglish@forum.nginx.org> References: <9f3499fc22dff97d2d7515041db66a58.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130404144859.GI62550@mdounin.ru> Hello! On Thu, Apr 04, 2013 at 10:21:57AM -0400, ankurs wrote: [...] > [root at localhost sbin]# ./nginx -V > nginx version: nginx/1.2.7 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > configure arguments: --with-http_stub_status_module --with-http_perl_module > --with-http_flv_module --add-module=nginx_mod_h264_streaming/ First of all, I would recommend testing whether you can reproduce the problem without third-party modules compiled in. General debugging hints may be found here: http://wiki.nginx.org/Debugging > What could be causing this crash ? also is --with-http_mp4_module stable > enough to be replace h264.code-shop module. The mp4 module is stable enough and has no known problems.
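For anyone replacing the third-party h264.code-shop module with the stock one, a minimal, untested sketch (the location and paths are hypothetical; nginx must be built with --with-http_mp4_module):

```nginx
location ~ \.mp4$ {
    root /data/videos;        # hypothetical media root
    mp4;                      # enable ?start=/?end= pseudo-streaming for MP4 files
    mp4_buffer_size     1m;   # initial buffer used to process the file metadata
    mp4_max_buffer_size 5m;   # upper bound if the metadata (moov atom) is larger
}
```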
-- Maxim Dounin http://nginx.org/en/donation.html From francis at daoine.org Thu Apr 4 18:19:11 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 4 Apr 2013 19:19:11 +0100 Subject: Nginx RP: a lot of rewrite rules In-Reply-To: <515D53FA.4070705@contactlab.com> References: <89f30638476b5b336e85e7bf53e139b3@sys.tomatointeractive.it> <20130404093412.GA23034@craic.sysops.org> <515D53FA.4070705@contactlab.com> Message-ID: <20130404181911.GA27451@craic.sysops.org> On Thu, Apr 04, 2013 at 12:20:42PM +0200, Davide D'Amico wrote: > Il 04/04/13 11:34, Francis Daly ha scritto: > >On Thu, Apr 04, 2013 at 09:01:17AM +0200, Davide D'Amico wrote: Hi there, > >I suspect that the suggestions are increasingly more efficient -- but > >you presumably have the interest to find out for sure, on your hardware > >and with your expected loads. > > Thank you Francis, but I cannot "group" all the rewrite I have so I am > starting using all these rewrites on backends (where I have rewritemaps, > too) and later I'll test them on nginx. I may have been unclear. I'm not talking about grouping the rewrites; I'm talking about a list of desired old -> new redirections, which presumably you have somewhere. Get the /oldN -> /newN local urls that you care about, and in nginx you can try (and I have now tested this): === http { map $uri $new { default ""; /old3 /new3; /old4 /new4; } server { listen 8000; if ($new) { return 301 $new; } location = /old1 { return 301 /new1 ;} location = /old2 { return 301 /new2 ;} } } === and look at the output of things like curl -I http://localhost:8000/old1 Test twice, once with many "location =" lines and no map/if; and the other time with many lines inside the map and no special "location =" lines; in order to know which is better on your system. Anyway, if you have something working well enough for you now, then you don't need to change anything. But when the time comes, this is a more specific example of what I was suggesting. 
f -- Francis Daly francis at daoine.org From mrvisser at gmail.com Thu Apr 4 22:24:02 2013 From: mrvisser at gmail.com (Branden Visser) Date: Thu, 4 Apr 2013 18:24:02 -0400 Subject: "writev() failed (134: Transport endpoint is not connected)" when upstream down Message-ID: Hello, I've found that when there are upstream servers unavailable in my upstream group, applying a little bit of load on the server (i.e., just myself browsing around quickly, 2-3 req/s max) results in the following errors even for upstream servers that are available and well: 2013/04/04 22:02:21 [error] 4211#0: *2898 writev() failed (134: Transport endpoint is not connected) while sending request to upstream, client: 184.94.54.70, server: , request: "GET /api/ui/skin HTTP/1.1", upstream: "http://10.112.5.119:2001/api/ui/skin", host: "mysite.org", referrer: "http://mysite.org/search" In this particular example, I have 4 upstreams, 3 servers are shut down (all except 10.112.5.119). If I comment out the 3 other upstream servers, I cannot reproduce this error. Running SmartOS (Joyent cloud) $ nginx -v nginx version: nginx/1.3.14 These are things I tried to no avail: * I used to have keepalive 64 on the upstream, I removed it * Nginx used to run as a non-privileged user, I switched it to root (prctl reports that privileged users should have 65,000 nofiles allowed) * I used to have worker_processes set to 5, I increased it to 16 * The upstream server configuration used to not have max_fails *or* max_timeout, I added those in trying to limit the amount of times nginx tried to access the downed upstream servers * I used to have the proxy_connect_timeout unspecified so it should have defaulted to 60s, I tried setting it to 1s * I tried commenting out all the rate-limiting directives The URLs I'm hitting in my tests are all those for the "tenantworkers" upstream. Any idea? 
I would think I probably have a resource limit issue, or an issue with the back-end server, but it just doesn't make sense that everything is OK after I comment out the downed upstreams. My concern is that the system will crumble under real load when even 1 upstream becomes unavailable. Thanks, Branden -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 11558 bytes Desc: not available URL: From nginx-forum at nginx.us Fri Apr 5 02:29:01 2013 From: nginx-forum at nginx.us (tssungeng) Date: Thu, 04 Apr 2013 22:29:01 -0400 Subject: How can I limit the total speed of a port or domain name? In-Reply-To: <20130404083919.GD21631@craic.sysops.org> References: <20130404083919.GD21631@craic.sysops.org> Message-ID: centos5.5 + nginx-1.3.14 I use the limit_speed_zone (https://github.com/yaoweibin/nginx_limit_speed_module),and set the nginx.conf: http { limit_speed_zone one $server_port 10m; server { listen 8080; server_name localhost; location / { root /opt/case/web/www; index index.html index.htm index.php; limit_speed one 10k; } } } The uper setting can limit the speed to 10K per IP. and then ,i try the HttpLimitConnModule: http { limit_conn_zone $server_port zone=addr:10m; server { listen 8080; server_name localhost; location / { root /opt/case/web/www; index index.html index.htm index.php; limit_rate 20k; } } } The uper setting can limit the speed to 20K per connetction.and if a IP open 5 thread for conn,then ,the IP can download 100K/s from my nginx. the nginx.conf of my Nginx with some error? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238074,238119#msg-238119 From francis at daoine.org Fri Apr 5 08:06:20 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 5 Apr 2013 09:06:20 +0100 Subject: How can I limit the total speed of a port or domain name? 
In-Reply-To: References: <20130404083919.GD21631@craic.sysops.org> Message-ID: <20130405080620.GB27451@craic.sysops.org> On Thu, Apr 04, 2013 at 10:29:01PM -0400, tssungeng wrote: Hi there, > I use the limit_speed_zone > (https://github.com/yaoweibin/nginx_limit_speed_module),and set the > nginx.conf: Ok, this third party module looks like it should do what you want, according to its description. > http { > limit_speed_zone one $server_port 10m; > server { > listen 8080; > server_name localhost; > location / { > root /opt/case/web/www; > index index.html index.htm index.php; > limit_speed one 10k; > } > } > } > > The uper setting can limit the speed to 10K per IP. I don't see anything there which says "per IP". It looks like what is above will limit the speed per server_port, which is one of the things you wanted. Does it not work for you? What does it do? > and then ,i try the HttpLimitConnModule: That can limit the number of connections, not the speed directly. > http { > limit_conn_zone $server_port zone=addr:10m; Here you *define* this zone, but you don't have any limit_conn directive to *use* the zone, so you have no limit on the number of connections. > The uper setting can limit the speed to 20K per connetction.and if a IP open > 5 thread for conn,then ,the IP can download 100K/s from my nginx. Yes, that's what limit_rate is expected to do. > the nginx.conf of my Nginx with some error? The third-party module config looks like it should be right, and should do what you want. The stock module config won't do what you want. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Apr 5 08:38:36 2013 From: nginx-forum at nginx.us (philipp) Date: Fri, 05 Apr 2013 04:38:36 -0400 Subject: limit proxy_next_upstream Message-ID: <1b813e8f051e09e845a3f2b07e8a76b1.NginxMailingListEnglish@forum.nginx.org> Is it possible to limit the amount of upstreams asked? I have four upstreams defined and it makes no sense to ask all of them. 
If two of them timeout or error there is possibly something wrong with the request and asking another node doesn't help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238124,238124#msg-238124 From nginx-forum at nginx.us Fri Apr 5 08:44:11 2013 From: nginx-forum at nginx.us (philipp) Date: Fri, 05 Apr 2013 04:44:11 -0400 Subject: inconsistent upstream_addr log Message-ID: My log format looks like this:

log_format vcombined '$host $remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for" '
                     '$ssl_cipher $request_time $gzip_ratio '
                     '$upstream_addr $upstream_response_time $geoip_country_code';

I get these logs:

www.example.de 192.168.1.1 - - [04/Apr/2013:13:41:58 +0200] "GET /de/test/ HTTP/1.1" 504 383 "-" "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 Firefox/20.0" "-" RC4-SHA 120.045 - 10.6.10.10:81, 10.6.10.11:81, 10.6.10.13:81, 10.6.10.12:81 30.003, 30.000, 30.002, 30.001 DE
www.example.de 192.168.1.1 - - [04/Apr/2013:13:41:58 +0200] "GET /de/test/ HTTP/1.1" 504 383 "-" "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 Firefox/20.0" "-" RC4-SHA 120.045 - 10.6.10.10:81, 10.6.10.11:81, 10.6.10.13:81, upstreamgroupname, 10.6.10.12:81 30.003, 30.000, 30.002, -, 30.001 DE
www.example.de 192.168.1.1 - - [04/Apr/2013:13:41:58 +0200] "GET /de/test/ HTTP/1.1" 504 383 "-" "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 Firefox/20.0" "-" RC4-SHA 120.045 - 10.6.10.10:81, 10.6.10.11:81, 10.6.10.13:81, 10.6.10.12:81, upstreamgroupname 30.003, 30.000, 30.002, 30.001, - DE
www.example.de 192.168.1.1 - - [04/Apr/2013:13:41:58 +0200] "GET /de/test/ HTTP/1.1" 504 383 "-" "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 Firefox/20.0" "-" RC4-SHA 120.045 - upstreamgroupname - DE

I can understand the last and first log line. But what is happening in between? Why is the upstreamgroupname at the end or in the middle?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238125,238125#msg-238125 From nginx-forum at nginx.us Fri Apr 5 09:38:22 2013 From: nginx-forum at nginx.us (andrea.mandolo) Date: Fri, 05 Apr 2013 05:38:22 -0400 Subject: Rewrite "break" directive - a strange behavior Message-ID: Hi, I'd like to report a strange behaviour of REWRITE "break" directives inside a "location" block, when it is used a SET directive subsequently. Now, i quote a little example, with a basic Nginx configuration that simulate the issue. ##### /etc/nginx/nginx.conf ########## worker_processes auto; pid /var/run/nginx.pid; events { worker_connections 2048; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; upstream media_server { server 127.0.0.1:1935; ### Our media server} } include /etc/nginx/sites-enabled/example.conf; } ############## END ########################## Example server block: ##### /etc/nginx/sites-enabled/example.conf ########## server { listen 0.0.0.0:80; server_name _; set $cache_max_age "600"; set $cache_crossdomain "2"; location ~* "/crossdomain\.xml$" { rewrite ^/pippo/(.*) /$1 break; set $cache_max_age "$cache_crossdomain"; proxy_pass http://media_server; } add_header Test-Cache-Control "max-age=$cache_max_age"; } ############### END ######################### I expect the response to a request ( performed via WGET for example ) to "http://localhost/pippo/crossdomain.xml" contains the HEADER "Test-Cache-Control: max-age=2" Instead, i get a wrong answer with HEADER "Test-Cache-Control: max-age=600", as if the variable "$cache_max_age" is not re-setted with the new value "2". This is a WGET example. ######### START ############################# [root ~]# wget -S -O - http://localhost/pippo/crossdomain.xml --2013-04-04 09:05:11-- http://localhost/pippo/crossdomain.xml Resolving localhost (localhost)... 
127.0.0.1 Connecting to localhost (localhost)|127.0.0.1|:80... connected. HTTP request sent, awaiting response... HTTP/1.1 200 OK Server: nginx/1.2.6 Date: Thu, 04 Apr 2013 09:05:11 GMT Content-Type: text/xml Content-Length: 250 Connection: keep-alive Cache-Control: no-cache Test-Cache-Control: max-age=600 Length: 250 [text/xml] Saving to: `STDOUT' 0% [ ] 0 --.-K/s 100%[=====================================================================================================================================================================>] 250 --.-K/s in 0s 2013-04-04 09:05:11 (21.1 MB/s) - written to stdout [250/250] ######### END ######################## I tried to move the SET directive before REWRITE "break" and everything works. ####### START ########## location ~* "/crossdomain\.xml$" { set $cache_max_age "$cache_crossdomain"; rewrite ^/pippo/(.*) /$1 break; proxy_pass http://media_server; } ###### END ########### Example request, after moving the SET directive ( "Test-Cache-Control: max-age=2" is NOW correct ) ########## START ######################### [root ~]# wget -S -O - http://localhost/pippo/crossdomain.xml --2013-04-04 09:12:37-- http://localhost/pippo/crossdomain.xml Resolving localhost (localhost)... 127.0.0.1 Connecting to localhost (localhost)|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 
HTTP/1.1 200 OK Server: nginx/1.2.6 Date: Thu, 04 Apr 2013 09:12:37 GMT Content-Type: text/xml Content-Length: 250 Connection: keep-alive Cache-Control: no-cache Test-Cache-Control: max-age=2 Length: 250 [text/xml] Saving to: `STDOUT' 0% [ ] 0 --.-K/s 100%[=====================================================================================================================================================================>] 250 --.-K/s in 0s 2013-04-04 09:12:37 (15.9 MB/s) - written to stdout [250/250] ############# END ###################### I searched inside the official documentation: - REWRITE "break" descriptions say: -- http://wiki.nginx.org/HttpRewriteModule#rewrite --- "completes processing of current rewrite directives and non-rewrite processing continues within the current location block only." -- http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite --- "stops processing the current set of ngx_http_rewrite_module directives." I haven't found anything that justifies this behaviour. Maybe, the set directive is considerated an "ngx_http_rewrite_module directive" ? or, is this a potential issue ? Thanks in advance for your support !! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238128,238128#msg-238128 From nginx-forum at nginx.us Fri Apr 5 09:42:49 2013 From: nginx-forum at nginx.us (andrea.mandolo) Date: Fri, 05 Apr 2013 05:42:49 -0400 Subject: Rewrite "break" directive - a strange behavior In-Reply-To: References: Message-ID: <245423814c19346f96e4fc1ed75d932c.NginxMailingListEnglish@forum.nginx.org> Sorry, i forgot to post Nginx version and build details. 
NGINX_VERSION="1.2.6" ##### CONFIGURE command used to build ############### ./configure --conf-path=/etc/nginx/nginx.conf --prefix=/etc/nginx --error-log-path=/var/log/nginx/error.log \ --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy \ --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \ --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --without-mail_pop3_module \ --without-mail_imap_module --without-mail_smtp_module --with-debug --with-rtsig_module --with-file-aio \ --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_random_index_module \ --with-http_secure_link_module --with-http_dav_module --with-http_gzip_static_module --with-http_random_index_module \ --add-module=modules/lua-nginx-module \ --add-module=modules/nginx-push-stream-module --add-module=modules/chunkin-nginx-module \ --add-module=modules/nginx-upload-progress-module --add-module=modules/substitutions4nginx-read-only \ --add-module=modules/echo-nginx-module --add-module=modules/ngx_devel_kit --add-module=modules/headers-more-nginx-module \ --add-module=modules/nginx_upload_module-2.2.0 --with-pcre=modules/pcre-8.32 --with-zlib=modules/zlib-1.2.7 \ --with-http_ssl_module --with-openssl=modules/openssl-1.0.1c --with-http_xslt_module --with-ipv6 \ --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl \ --with-cc-opt=-Wno-error #################################################### Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238128,238129#msg-238129 From mdounin at mdounin.ru Fri Apr 5 09:58:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Apr 2013 13:58:11 +0400 Subject: Rewrite "break" directive - a strange behavior In-Reply-To: References: Message-ID: <20130405095811.GU62550@mdounin.ru> Hello! 
On Fri, Apr 05, 2013 at 05:38:22AM -0400, andrea.mandolo wrote:

> Hi,
>
> I'd like to report a strange behaviour of REWRITE "break" directives inside
> a "location" block, when it is used a SET directive subsequently.
>
> Now, i quote a little example, with a basic Nginx configuration that
> simulate the issue.

[...]

> server {
> listen 0.0.0.0:80;
> server_name _;
>
> set $cache_max_age "600";
> set $cache_crossdomain "2";
>
> location ~* "/crossdomain\.xml$" {
> rewrite ^/pippo/(.*) /$1 break;
> set $cache_max_age "$cache_crossdomain";
> proxy_pass http://media_server;
> }
>
> add_header Test-Cache-Control "max-age=$cache_max_age";
> }
> ############### END #########################
>
> I expect the response to a request ( performed via WGET for example ) to
> "http://localhost/pippo/crossdomain.xml"
> contains the HEADER "Test-Cache-Control: max-age=2"

This is a wrong expectation. As "rewrite ... break" stops processing of rewrite module directives, and "set" is a rewrite module directive, the

set $cache_max_age "$cache_crossdomain";

is never executed due to the break. The configured add_header uses the previously computed value of the $cache_max_age variable, i.e. "600". The confusion likely comes from the fact that rewrite module directives are imperative, in contrast to other parts of the nginx config, which is declarative. Reading the docs here (in particular, the preface and internal implementation sections) should be helpful to understand how it works: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html

> Instead, i get a wrong answer with HEADER "Test-Cache-Control:
> max-age=600",
> as if the variable "$cache_max_age" is not re-setted with the new value
> "2".

This is expected behaviour. If you want "set ..." to be executed, you have two basic options:

1) Don't use "rewrite ... break" but use "break" after rewrite module directives, i.e.

rewrite ...
set ...
break;
proxy_pass ...

2) Just switch order of "rewrite" and "break" directives in your config:

set ...
rewrite ... break;
proxy_pass ...

[...]

> I searched inside the official documentation:
> - REWRITE "break" descriptions say:
> -- http://wiki.nginx.org/HttpRewriteModule#rewrite
> --- "completes processing of current rewrite directives and non-rewrite
> processing continues within the current location block only."
> -- http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite
> --- "stops processing the current set of ngx_http_rewrite_module
> directives."
>
> I haven't found anything that justifies this behaviour.
>
> Maybe, the set directive is considerated an "ngx_http_rewrite_module
> directive" ?
> or, is this a potential issue ?

The "set" directive _is_ a rewrite module directive. http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#set

-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Apr 5 10:12:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Apr 2013 14:12:37 +0400 Subject: limit proxy_next_upstream In-Reply-To: <1b813e8f051e09e845a3f2b07e8a76b1.NginxMailingListEnglish@forum.nginx.org> References: <1b813e8f051e09e845a3f2b07e8a76b1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130405101237.GW62550@mdounin.ru> Hello!

On Fri, Apr 05, 2013 at 04:38:36AM -0400, philipp wrote:

> Is it possible to limit the amount of upstreams asked? I have four upstreams
> defined and it makes no sense to ask all of them. If two of them timeout or
> error there is possible something wrong with the request and asking another
> node doesn't help.

No, as of now only switching off proxy_next_upstream completely is available.

On the other hand, with proxy_next_upstream switched off you may still configure retries to additional (or other) upstream servers via the error_page directive, using a configuration similar to the last example at http://nginx.org/r/error_page.
Something like this should generate no more than two requests regardless of number of upstream servers configured: location / { error_page 502 504 = @fallback; proxy_pass http://backend; proxy_next_upstream off; } location @fallback { proxy_pass http://backend; proxy_next_upstream off; } -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Apr 5 10:22:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Apr 2013 14:22:27 +0400 Subject: inconsistent upstream_addr log In-Reply-To: References: Message-ID: <20130405102227.GX62550@mdounin.ru> Hello! On Fri, Apr 05, 2013 at 04:44:11AM -0400, philipp wrote: > My log format looks like this > > log_format vcombined '$host $remote_addr - $remote_user [$time_local] > "$request" ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for" ' > '$ssl_cipher $request_time $gzip_ratio ' > '$upstream_addr $upstream_response_time > $geoip_country_code'; > > > I get these logs > > www.example.de 192.168.1.1 - - [04/Apr/2013:13:41:58 +0200] "GET /de/test/ > HTTP/1.1" 504 383 "-" "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 > Firefox/20.0" "-" RC4-SHA 120.045 - 10.6.10.10:81, 10.6.10.11:81, > 10.6.10.13:81, 10.6.10.12:81 30.003, 30.000, 30.002, 30.001 DE > www.example.de 192.168.1.1 - - [04/Apr/2013:13:41:58 +0200] "GET /de/test/ > HTTP/1.1" 504 383 "-" "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 > Firefox/20.0" "-" RC4-SHA 120.045 - 10.6.10.10:81, 10.6.10.11:81, > 10.6.10.13:81, upstreamgroupname, 10.6.10.12:81 30.003, 30.000, 30.002, -, > 30.001 DE > www.example.de 192.168.1.1 - - [04/Apr/2013:13:41:58 +0200] "GET /de/test/ > HTTP/1.1" 504 383 "-" "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 > Firefox/20.0" "-" RC4-SHA 120.045 - 10.6.10.10:81, 10.6.10.11:81, > 10.6.10.13:81, 10.6.10.12:81, upstreamgroupname 30.003, 30.000, 30.002, > 30.001, - DE > www.example.de 192.168.1.1 - - [04/Apr/2013:13:41:58 +0200] "GET /de/test/ > HTTP/1.1" 
504 383 "-" "Mozilla/5.0 (Windows NT 6.1; rv:20.0) Gecko/20100101 > Firefox/20.0" "-" RC4-SHA 120.045 - upstreamgroupname - DE > > I can understand the last and first log line. But what is happening in > between? Why is the upstreamgroupname and the end or in the middle? This means all servers during a server lookup were considered down (a "no live upstreams" error should appear in the logs). If this happens, nginx will mark all upstream servers as up, and will retry a request if allowed by proxy_next_upstream. -- Maxim Dounin http://nginx.org/en/donation.html From vbart at nginx.com Fri Apr 5 10:56:00 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 5 Apr 2013 14:56:00 +0400 Subject: SPDY + proxy cache static content failures In-Reply-To: References: <201304040220.38123.vbart@nginx.com> Message-ID: <201304051456.00233.vbart@nginx.com> On Thursday 04 April 2013 02:43:15 spdyg wrote: > No there's nothing in the error log. Access log shows 304's for all > requests that failed. > Could you provide a debug log for the issue? Here is the guide: http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Apr 5 12:35:06 2013 From: nginx-forum at nginx.us (andrea.mandolo) Date: Fri, 05 Apr 2013 08:35:06 -0400 Subject: Rewrite "break" directive - a strange behavior In-Reply-To: <20130405095811.GU62550@mdounin.ru> References: <20130405095811.GU62550@mdounin.ru> Message-ID: Thank you very much for the immediate answer !! :) Just reading this documentation ("http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite") I suspected that "set" is a rewrite module's directive. But, from this other documentation ("http://wiki.nginx.org/HttpRewriteModule#rewrite") I was a little confused. The first documentation (http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite) is certainly more helpful to understand this behaviour. Thanks again!
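[To recap the rewrite/break thread in one block: every rewrite-module directive ("set", "rewrite") must run before the point where rewrite processing stops. A sketch of the working ordering, reusing the upstream name from the configuration posted earlier in the thread:

location ~* "/crossdomain\.xml$" {
    # Rewrite-module directives execute in order during the rewrite phase:
    set     $cache_max_age "$cache_crossdomain";   # executes
    rewrite ^/pippo/(.*) /$1 break;                # "break" stops rewrite-module processing here
    # A "set" placed after the break flag above would never execute.
    proxy_pass http://media_server;
}
]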
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238128,238140#msg-238140 From mrvisser at gmail.com Fri Apr 5 12:34:04 2013 From: mrvisser at gmail.com (Branden Visser) Date: Fri, 5 Apr 2013 08:34:04 -0400 Subject: "writev() failed (134: Transport endpoint is not connected)" when upstream down In-Reply-To: References: Message-ID: Also, here is the log, nginx was compiled with --with-debug and I set: error_log debug; There are rate-limiting warning messages in there, though if I disable the rate-limiting, the issue persists, and the only difference in the debug log is that there are no rate-limiting warnings. Thanks, Branden On Thu, Apr 4, 2013 at 6:24 PM, Branden Visser wrote: > Hello, I've found that when there are upstream servers unavailable in > my upstream group, applying a little bit of load on the server (i.e., > just myself browsing around quickly, 2-3 req/s max) results in the > following errors even for upstream servers that are available and > well: > > 2013/04/04 22:02:21 [error] 4211#0: *2898 writev() failed (134: > Transport endpoint is not connected) while sending request to > upstream, client: 184.94.54.70, server: , request: "GET /api/ui/skin > HTTP/1.1", upstream: "http://10.112.5.119:2001/api/ui/skin", host: > "mysite.org", referrer: "http://mysite.org/search" > > In this particular example, I have 4 upstreams, 3 servers are shut > down (all except 10.112.5.119). If I comment out the 3 other upstream > servers, I cannot reproduce this error. 
> > Running SmartOS (Joyent cloud) > > $ nginx -v > nginx version: nginx/1.3.14 > > These are things I tried to no avail: > > * I used to have keepalive 64 on the upstream, I removed it > * Nginx used to run as a non-privileged user, I switched it to root > (prctl reports that privileged users should have 65,000 nofiles > allowed) > * I used to have worker_processes set to 5, I increased it to 16 > * The upstream server configuration used to not have max_fails *or* > max_timeout, I added those in trying to limit the amount of times > nginx tried to access the downed upstream servers > * I used to have the proxy_connect_timeout unspecified so it should > have defaulted to 60s, I tried setting it to 1s > * I tried commenting out all the rate-limiting directives > > The URLs I'm hitting in my tests are all those for the "tenantworkers" upstream. > > Any idea? I would think I probably have a resource limit issue, or an > issue with the back-end server, but it just doesn't make sense that > everything is OK after I comment out the downed upstreams. My concern > is that the system will crumble under real load when even 1 upstream > becomes unavailable. > > Thanks, > Branden -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-error.log Type: application/octet-stream Size: 2326261 bytes Desc: not available URL: From yaoweibin at gmail.com Fri Apr 5 13:59:29 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Fri, 5 Apr 2013 21:59:29 +0800 Subject: limit proxy_next_upstream In-Reply-To: <20130405101237.GW62550@mdounin.ru> References: <1b813e8f051e09e845a3f2b07e8a76b1.NginxMailingListEnglish@forum.nginx.org> <20130405101237.GW62550@mdounin.ru> Message-ID: We have the similar request. If we have dozens of servers in a same upstream block, I don't want to retry all of them. One side effect is it will increase the failure count with all of the backend servers. 
After several times, all of the servers will be marked down for a while and all of the requests will be replied with 502. We also need the retry mechanism and don't want to disable it. I think it would be very nice if the proxy_next_upstream directive had a configurable number of tries. 2013/4/5 Maxim Dounin > Hello! > > On Fri, Apr 05, 2013 at 04:38:36AM -0400, philipp wrote: > > > Is it possible to limit the amount of upstreams asked? I have four > upstreams > > defined and it makes no sense to ask all of them. If two of them timeout > or > > error there is possible something wrong with the request and asking > another > > node doesn't help. > > No, as of now only switching off proxy_next_upstream completely is > available. > > On the other hand, with switched of proxy_next_upstream you may > still configure retries to additional (or other) upstream servers > via error_page directive, using a configuration similar to last > example at http://nginx.org/r/error_page. > > Something like this should generate no more than two requests > regardless of number of upstream servers configured: > > location / { > error_page 502 504 = @fallback; > proxy_pass http://backend; > proxy_next_upstream off; > } > > location @fallback { > proxy_pass http://backend; > proxy_next_upstream off; > } > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaoweibin at gmail.com Fri Apr 5 14:04:36 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Fri, 5 Apr 2013 22:04:36 +0800 Subject: How can I limit the total speed of a port or domain name? In-Reply-To: References: <20130404083919.GD21631@craic.sysops.org> Message-ID: How do you test the limit_speed module? It works in my test box. Thanks.
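[Side note for readers of this thread: with the third-party limit_speed module, the zone key decides what shares the limit. A sketch, assuming the module is compiled in; only the comments differ from the configuration posted earlier in the thread:

http {
    # $server_port: one shared budget per listen port (a *total* limit for
    # that port); use $binary_remote_addr instead for a per-client-IP limit.
    limit_speed_zone one $server_port 10m;

    server {
        listen      8080;
        server_name localhost;

        location / {
            root  /opt/case/web/www;
            index index.html index.htm index.php;
            # all clients hitting port 8080 together stay under 10k/s
            limit_speed one 10k;
        }
    }
}
]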
2013/4/5 tssungeng > centos5.5 + nginx-1.3.14 > > I use the limit_speed_zone > (https://github.com/yaoweibin/nginx_limit_speed_module),and set the > nginx.conf: > > http { > limit_speed_zone one $server_port 10m; > server { > listen 8080; > server_name localhost; > location / { > root /opt/case/web/www; > index index.html index.htm index.php; > limit_speed one 10k; > } > } > } > > The uper setting can limit the speed to 10K per IP. > > and then ,i try the HttpLimitConnModule: > > http { > limit_conn_zone $server_port zone=addr:10m; > server { > listen 8080; > server_name localhost; > location / { > root /opt/case/web/www; > index index.html index.htm index.php; > limit_rate 20k; > } > } > } > > The uper setting can limit the speed to 20K per connetction.and if a IP > open > 5 thread for conn,then ,the IP can download 100K/s from my nginx. > > the nginx.conf of my Nginx with some error? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,238074,238119#msg-238119 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 5 14:13:24 2013 From: nginx-forum at nginx.us (ankurs) Date: Fri, 05 Apr 2013 10:13:24 -0400 Subject: nginx keeps crashing In-Reply-To: <9f3499fc22dff97d2d7515041db66a58.NginxMailingListEnglish@forum.nginx.org> References: <9f3499fc22dff97d2d7515041db66a58.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6c00c4f3d0c9158331459afcfbae2c5f.NginxMailingListEnglish@forum.nginx.org> Hello, Thanks Maxim for looking into my issue. I recompiled nginx with no 3rd party module & debugging enabled. 
[root at localhost sbin]# ./nginx -V nginx version: nginx/1.2.8 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) configure arguments: --with-http_stub_status_module --with-http_perl_module --with-http_flv_module --with-http_mp4_module --with-debug there is no nginx.core file found in /home/core/ , thats the path specified in nginx.conf but i still keep seeing *** glibc detected *** nginx: worker process: double free or corruption (!prev): 0x0000000008915030 *** ======= Backtrace: ========= /lib64/libc.so.6(+0x760e6)[0x7f10da7370e6] /lib64/libc.so.6(+0x78c13)[0x7f10da739c13] /lib64/libc.so.6(fclose+0x14d)[0x7f10da72774d] nginx: worker process[0x46db59] nginx: worker process[0x46e578] nginx: worker process[0x46b2f7] nginx: worker process(ngx_http_core_content_phase+0x2c)[0x430734] nginx: worker process(ngx_http_core_run_phases+0x23)[0x42b4d3] nginx: worker process(ngx_http_handler+0xd6)[0x42b5cd] nginx: worker process(ngx_http_internal_redirect+0x132)[0x42f7b7] nginx: worker process(ngx_http_perl_handle_request+0x1a0)[0x468423] nginx: worker process[0x468492] nginx: worker process(ngx_http_core_content_phase+0x2c)[0x430734] nginx: worker process(ngx_http_core_run_phases+0x23)[0x42b4d3] nginx: worker process(ngx_http_handler+0xd6)[0x42b5cd] nginx: worker process[0x434bf6] nginx: worker process[0x4351da] nginx: worker process[0x435711] nginx: worker process[0x432fbf] nginx: worker process(ngx_event_process_posted+0x36)[0x42139a] nginx: worker process(ngx_process_events_and_timers+0x115)[0x42126a] nginx: worker process[0x426e63] nginx: worker process(ngx_spawn_process+0x43d)[0x4257b4] nginx: worker process(ngx_master_process_cycle+0x42b)[0x4275ff] nginx: worker process(main+0xa0b)[0x40d012] /lib64/libc.so.6(__libc_start_main+0xfd)[0x7f10da6dfcdd] nginx: worker process[0x40b829] ======= Memory map: ======== 00400000-00492000 r-xp 00000000 41:13 2362864 /usr/local/nginx/sbin/nginx.old 00691000-006a1000 rw-p 00091000 41:13 2362864 /usr/local/nginx/sbin/nginx.old 
006a1000-006b0000 rw-p 00000000 00:00 0 01909000-01a2f000 rw-p 00000000 00:00 0 01a2f000-01b32000 rw-p 00000000 00:00 0 01b32000-129d5000 rw-p 00000000 00:00 0 7f10cc043000-7f10d2443000 rw-s 00000000 00:04 1313229410 /dev/zero (deleted) 7f10d2443000-7f10d8843000 rw-s 00000000 00:04 1313229409 /dev/zero (deleted) 7f10d9a24000-7f10d9a3a000 r-xp 00000000 41:13 1310732 /lib64/libgcc_s-4.4.7-20120601.so.1 7f10d9a3a000-7f10d9c39000 ---p 00016000 41:13 1310732 /lib64/libgcc_s-4.4.7-20120601.so.1 7f10d9c39000-7f10d9c3a000 rw-p 00015000 41:13 1310732 /lib64/libgcc_s-4.4.7-20120601.so.1 7f10d9c43000-7f10d9c4f000 r-xp 00000000 41:13 1311021 /lib64/libnss_files-2.12.so 7f10d9c4f000-7f10d9e4f000 ---p 0000c000 41:13 1311021 /lib64/libnss_files-2.12.so 7f10d9e4f000-7f10d9e50000 r--p 0000c000 41:13 1311021 /lib64/libnss_files-2.12.so 7f10d9e50000-7f10d9e51000 rw-p 0000d000 41:13 1311021 /lib64/libnss_files-2.12.so 7f10d9e51000-7f10d9e54000 r-xp 00000000 41:13 2757508 /usr/lib64/perl5/auto/MIME/Base64/Base64.so 7f10d9e54000-7f10da053000 ---p 00003000 41:13 2757508 /usr/lib64/perl5/auto/MIME/Base64/Base64.so 7f10da053000-7f10da054000 rw-p 00002000 41:13 2757508 /usr/lib64/perl5/auto/MIME/Base64/Base64.so 7f10da054000-7f10da058000 r-xp 00000000 41:13 2625506 /usr/lib64/perl5/auto/Digest/MD5/MD5.so 7f10da058000-7f10da257000 ---p 00004000 41:13 2625506 /usr/lib64/perl5/auto/Digest/MD5/MD5.so 7f10da257000-7f10da258000 rw-p 00003000 41:13 2625506 /usr/lib64/perl5/auto/Digest/MD5/MD5.so 7f10da258000-7f10da25f000 r-xp 00000000 41:13 2361574 /usr/local/lib64/perl5/auto/nginx/nginx.so (deleted) 7f10da25f000-7f10da45e000 ---p 00007000 41:13 2361574 /usr/local/lib64/perl5/auto/nginx/nginx.so (deleted) 7f10da45e000-7f10da45f000 rw-p 00006000 41:13 2361574 /usr/local/lib64/perl5/auto/nginx/nginx.so (deleted) 7f10da45f000-7f10da4bc000 r-xp 00000000 41:13 1310726 /lib64/libfreebl3.so 7f10da4bc000-7f10da6bb000 ---p 0005d000 41:13 1310726 /lib64/libfreebl3.so 7f10da6bb000-7f10da6bc000 r--p 0005c000 
41:13 1310726 /lib64/libfreebl3.so 7f10da6bc000-7f10da6bd000 rw-p 0005d000 41:13 1310726 /lib64/libfreebl3.so 7f10da6bd000-7f10da6c1000 rw-p 00000000 00:00 0 7f10da6c1000-7f10da84b000 r-xp 00000000 41:13 1310730 /lib64/libc-2.12.so 7f10da84b000-7f10daa4a000 ---p 0018a000 41:13 1310730 /lib64/libc-2.12.so 7f10daa4a000-7f10daa4e000 r--p 00189000 41:13 1310730 /lib64/libc-2.12.so 7f10daa4e000-7f10daa4f000 rw-p 0018d000 41:13 1310730 /lib64/libc-2.12.so 7f10daa4f000-7f10daa54000 rw-p 00000000 00:00 0 7f10daa54000-7f10daa56000 r-xp 00000000 41:13 1311077 /lib64/libutil-2.12.so 7f10daa56000-7f10dac55000 ---p 00002000 41:13 1311077 /lib64/libutil-2.12.so 7f10dac55000-7f10dac56000 r--p 00001000 41:13 1311077 /lib64/libutil-2.12.so 7f10dac56000-7f10dac57000 rw-p 00002000 41:13 1311077 /lib64/libutil-2.12.so 7f10dac57000-7f10dacda000 r-xp 00000000 41:13 1310761 /lib64/libm-2.12.so 7f10dacda000-7f10daed9000 ---p 00083000 41:13 1310761 /lib64/libm-2.12.so 7f10daed9000-7f10daeda000 r--p 00082000 41:13 1310761 /lib64/libm-2.12.so 7f10daeda000-7f10daedb000 rw-p 00083000 41:13 1310761 /lib64/libm-2.12.so 7f10daedb000-7f10daedd000 r-xp 00000000 41:13 1310757 /lib64/libdl-2.12.so 7f10daedd000-7f10db0dd000 ---p 00002000 41:13 1310757 /lib64/libdl-2.12.so 7f10db0dd000-7f10db0de000 r--p 00002000 41:13 1310757 /lib64/libdl-2.12.so 7f10db0de000-7f10db0df000 rw-p 00003000 41:13 1310757 /lib64/libdl-2.12.so 7f10db0df000-7f10db0f5000 r-xp 00000000 41:13 1310798 /lib64/libnsl-2.12.so 7f10db0f5000-7f10db2f4000 ---p 00016000 41:13 1310798 /lib64/libnsl-2.12.so 7f10db2f4000-7f10db2f5000 r--p 00015000 41:13 1310798 /lib64/libnsl-2.12.so 7f10db2f5000-7f10db2f6000 rw-p 00016000 41:13 1310798 /lib64/libnsl-2.12.so 7f10db2f6000-7f10db2f8000 rw-p 00000000 00:00 0 7f10db2f8000-7f10db30e000 r-xp 00000000 41:13 1311069 /lib64/libresolv-2.12.so 7f10db30e000-7f10db50e000 ---p 00016000 41:13 1311069 /lib64/libresolv-2.12.so 7f10db50e000-7f10db50f000 r--p 00016000 41:13 1311069 /lib64/libresolv-2.12.so 
7f10db50f000-7f10db510000 rw-p 00017000 41:13 1311069 /lib64/libresolv-2.12.so 7f10db510000-7f10db512000 rw-p 00000000 00:00 0 7f10db512000-7f10db674000 r-xp 00000000 41:13 2359842 /usr/lib64/perl5/CORE/libperl.so 7f10db674000-7f10db874000 ---p 00162000 41:13 2359842 /usr/lib64/perl5/CORE/libperl.so 7f10db874000-7f10db87d000 rw-p 00162000 41:13 2359842 /usr/lib64/perl5/CORE/libperl.so 7f10db87d000-7f10db892000 r-xp 00000000 41:13 1310783 /lib64/libz.so.1.2.3 7f10db892000-7f10dba91000 ---p 00015000 41:13 1310783 /lib64/libz.so.1.2.3 7f10dba91000-7f10dba92000 r--p 00014000 41:13 1310783 /lib64/libz.so.1.2.3 7f10dba92000-7f10dba93000 rw-p 00015000 41:13 1310783 /lib64/libz.so.1.2.3 7f10dba93000-7f10dbc07000 r-xp 00000000 41:13 2361229 /usr/lib64/libcrypto.so.1.0.0 7f10dbc07000-7f10dbe06000 ---p 00174000 41:13 2361229 /usr/lib64/libcrypto.so.1.0.0 7f10dbe06000-7f10dbe1f000 r--p 00173000 41:13 2361229 /usr/lib64/libcrypto.so.1.0.0 7f10dbe1f000-7f10dbe29000 rw-p 0018c000 41:13 2361229 /usr/lib64/libcrypto.so.1.0.0 7f10dbe29000-7f10dbe2d000 rw-p 00000000 00:00 0 7f10dbe2d000-7f10dbe59000 r-xp 00000000 41:13 1310821 /lib64/libpcre.so.0.0.1 7f10dbe59000-7f10dc058000 ---p 0002c000 41:13 1310821 /lib64/libpcre.so.0.0.1 7f10dc058000-7f10dc059000 rw-p 0002b000 41:13 1310821 /lib64/libpcre.so.0.0.1 7f10dc059000-7f10dc060000 r-xp 00000000 41:13 1310749 /lib64/libcrypt-2.12.so 7f10dc060000-7f10dc260000 ---p 00007000 41:13 1310749 /lib64/libcrypt-2.12.so 7f10dc260000-7f10dc261000 r--p 00007000 41:13 1310749 /lib64/libcrypt-2.12.so 7f10dc261000-7f10dc262000 rw-p 00008000 41:13 1310749 /lib64/libcrypt-2.12.so 7f10dc262000-7f10dc290000 rw-p 00000000 00:00 0 7f10dc290000-7f10dc2a7000 r-xp 00000000 41:13 1310754 /lib64/libpthread-2.12.so 7f10dc2a7000-7f10dc4a7000 ---p 00017000 41:13 1310754 /lib64/libpthread-2.12.so 7f10dc4a7000-7f10dc4a8000 r--p 00017000 41:13 1310754 /lib64/libpthread-2.12.so 7f10dc4a8000-7f10dc4a9000 rw-p 00018000 41:13 1310754 /lib64/libpthread-2.12.so 
7f10dc4a9000-7f10dc4ad000 rw-p 00000000 00:00 0 7f10dc4ad000-7f10dc4cd000 r-xp 00000000 41:13 1310723 /lib64/ld-2.12.so 7f10dc4cd000-7f10dc6ba000 rw-p 00000000 00:00 0 7f10dc6ba000-7f10dc6c2000 rw-p 00000000 00:00 0 7f10dc6c9000-7f10dc6ca000 rw-p 00000000 00:00 0 7f10dc6ca000-7f10dc6cb000 rw-s 00000000 00:04 1292546255 /dev/zero (deleted) 7f10dc6cb000-7f10dc6cc000 rw-p 00000000 00:00 0 7f10dc6cc000-7f10dc6cd000 r--p 0001f000 41:13 1310723 /lib64/ld-2.12.so 7f10dc6cd000-7f10dc6ce000 rw-p 00020000 41:13 1310723 /lib64/ld-2.12.so 7f10dc6ce000-7f10dc6cf000 rw-p 00000000 00:00 0 7fffe48ca000-7fffe48df000 rw-p 00000000 00:00 0 [stack] 7fffe494c000-7fffe494d000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] 2013/04/05 16:14:15 [notice] 1641#0: signal 17 (SIGCHLD) received 2013/04/05 16:14:15 [alert] 1641#0: worker process 16572 exited on signal 6 2013/04/05 16:14:15 [notice] 1641#0: start worker process 16621 2013/04/05 16:14:15 [notice] 1641#0: signal 29 (SIGIO) received 2013/04/05 16:15:01 [notice] 1641#0: signal 10 (SIGUSR1) received, reopening logs 2013/04/05 16:15:01 [notice] 1641#0: reopening logs 2013/04/05 16:15:01 [notice] 16621#0: signal 10 (SIGUSR1) received, reopening logs 2013/04/05 16:15:01 [notice] 16151#0: signal 10 (SIGUSR1) received, reopening logs 2013/04/05 16:15:01 [notice] 16159#0: signal 10 (SIGUSR1) received, reopening logs 2013/04/05 16:15:01 [notice] 13549#0: signal 10 (SIGUSR1) received, reopening logs 2013/04/05 16:15:01 [notice] 14649#0: signal 10 (SIGUSR1) received, reopening logs 2013/04/05 16:15:01 [notice] 16573#0: signal 10 (SIGUSR1) received, reopening logs 2013/04/05 16:15:01 [notice] 13549#0: reopening logs 2013/04/05 16:15:01 [notice] 14657#0: signal 10 (SIGUSR1) received, reopening logs 2013/04/05 16:15:01 [notice] 16573#0: reopening logs 2013/04/05 16:15:01 [notice] 16151#0: reopening logs 2013/04/05 16:15:01 [notice] 16573#0: reopening logs 2013/04/05 16:15:01 [notice] 14168#0: 
signal 10 (SIGUSR1) received, reopening logs Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238109,238144#msg-238144 From luky-37 at hotmail.com Fri Apr 5 18:02:10 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 5 Apr 2013 20:02:10 +0200 Subject: nginx keeps crashing In-Reply-To: <6c00c4f3d0c9158331459afcfbae2c5f.NginxMailingListEnglish@forum.nginx.org> References: <9f3499fc22dff97d2d7515041db66a58.NginxMailingListEnglish@forum.nginx.org>, <6c00c4f3d0c9158331459afcfbae2c5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: > there is no nginx.core file found in /home/core/ , thats the path specified > in nginx.conf Make sure: - /home/core/ is writable (chmod a+w) - ulimit is configured correctly - fs.suid_dumpable is ok? You can read more about it here: http://www.cyberciti.biz/tips/linux-core-dumps.html From mdounin at mdounin.ru Fri Apr 5 19:47:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Apr 2013 23:47:17 +0400 Subject: nginx keeps crashing In-Reply-To: <6c00c4f3d0c9158331459afcfbae2c5f.NginxMailingListEnglish@forum.nginx.org> References: <9f3499fc22dff97d2d7515041db66a58.NginxMailingListEnglish@forum.nginx.org> <6c00c4f3d0c9158331459afcfbae2c5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130405194717.GZ62550@mdounin.ru> Hello! On Fri, Apr 05, 2013 at 10:13:24AM -0400, ankurs wrote: > Hello, > > Thanks Maxim for looking into my issue. > > I recompiled nginx with no 3rd party module & debugging enabled. > > [root at localhost sbin]# ./nginx -V > nginx version: nginx/1.2.8 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > configure arguments: --with-http_stub_status_module --with-http_perl_module > --with-http_flv_module --with-http_mp4_module --with-debug > > there is no nginx.core file found in /home/core/ , thats the path specified > in nginx.conf > > but i still keep seeing [...] 
> ======= Memory map: ======== > 00400000-00492000 r-xp 00000000 41:13 2362864 > /usr/local/nginx/sbin/nginx.old > 00691000-006a1000 rw-p 00091000 41:13 2362864 > /usr/local/nginx/sbin/nginx.old It looks like you forgot to restart (or upgrade binary on the fly), and old nginx with 3rd party modules is still running. -- Maxim Dounin http://nginx.org/en/donation.html From reallfqq-nginx at yahoo.fr Fri Apr 5 20:17:09 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 5 Apr 2013 16:17:09 -0400 Subject: IPv4 & IPv6 Message-ID: Hello, I noticed in an article dedicated to the subject (Fr)that to allow Nginx to listen both on IPv4 and IPv6 interfaces simultaneously, you needed to set sysctl with the following configuration (otherwise Nginx listening binds conflict one with each other): net.ipv6.bindv6only = 1 However, this has conflicting and unwanted effects on other applications such as Java. Is there any other way to allow Nginx ot listen on both interface types without mandatory system configuration? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Apr 5 21:26:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 6 Apr 2013 01:26:43 +0400 Subject: limit proxy_next_upstream In-Reply-To: References: <1b813e8f051e09e845a3f2b07e8a76b1.NginxMailingListEnglish@forum.nginx.org> <20130405101237.GW62550@mdounin.ru> Message-ID: <20130405212643.GB62550@mdounin.ru> Hello! On Fri, Apr 05, 2013 at 09:59:29PM +0800, Weibin Yao wrote: > We have the similar request. If we have dozens of servers in a same > upstream block, I don't want to retry all of them. One side effect is it > will increase the failure count with all of the backend servers. After > several times, all of the servers will be marked down for a while and all > of the requests will be replied with 502. > > We also need the retry mechnism and don't want to diable it. 
I think if > there is a configurable tries time with the direcitve proxy_next_upstream, > it will be very nice. This is somewhere in TODO, and likely will be implemented soon. As usual, the most serious problem is a name for the directive (proxy_next_tries?). :) Additional similar proposal discussed here includes limiting total time spent in retries before giving up (proxy_next_time?). This will allow to try many servers as soon as they, e.g., return RST quickly, but to only try one or two servers if they fail to answer in configured proxy_connect_timeout/proxy_read_timeout. > > > 2013/4/5 Maxim Dounin > > > Hello! > > > > On Fri, Apr 05, 2013 at 04:38:36AM -0400, philipp wrote: > > > > > Is it possible to limit the amount of upstreams asked? I have four > > upstreams > > > defined and it makes no sense to ask all of them. If two of them timeout > > or > > > error there is possible something wrong with the request and asking > > another > > > node doesn't help. > > > > No, as of now only switching off proxy_next_upstream completely is > > available. > > > > On the other hand, with switched of proxy_next_upstream you may > > still configure retries to additional (or other) upstream servers > > via error_page directive, using a configuration similar to last > > example at http://nginx.org/r/error_page. 
> > > > Something like this should generate no more than two requests > > regardless of number of upstream servers configured: > > > > location / { > > error_page 502 504 = @fallback; > > proxy_pass http://backend; > > proxy_next_upstream off; > > } > > > > location @fallback { > > proxy_pass http://backend; > > proxy_next_upstream off; > > } > > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Apr 5 21:43:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 6 Apr 2013 01:43:31 +0400 Subject: IPv4 & IPv6 In-Reply-To: References: Message-ID: <20130405214331.GE62550@mdounin.ru> Hello! On Fri, Apr 05, 2013 at 04:17:09PM -0400, B.R. wrote: > Hello, > > I noticed in an article dedicated to the subject > (Fr)that > to allow Nginx to listen both on IPv4 and IPv6 interfaces > simultaneously, you needed to set sysctl with the following configuration > (otherwise Nginx listening binds conflict one with each other): > > net.ipv6.bindv6only = 1 > > However, this has conflicting and unwanted effects on other applications > such as Java. > > Is there any other way to allow Nginx ot listen on both interface types > without mandatory system configuration? There is "ipv6only" parameter of the "listen" directive, which allows to listen on ipv6 and ipv4 sockets reliably and simulteniously regardless of the system configuration. Configuration like listen 80; listen [::]:80 ipv6only=on; will do the trick. As of nginx 1.3.x, the ipv6only=on is used by default and there is no need to specify it explicitly. 
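Written out as a complete server block, the dual-stack setup Maxim describes would look something like this — a minimal sketch, where server_name and root are illustrative placeholders, not values from the thread:

```nginx
# Dual-stack listener: one IPv4 socket plus one IPv6-only socket,
# independent of the system-wide net.ipv6.bindv6only setting.
server {
    listen 80;                   # IPv4 wildcard (*:80)
    listen [::]:80 ipv6only=on;  # IPv6 socket; ipv6only=on is the default since 1.3.x
    server_name example.com;     # placeholder
    root /var/www/example;       # placeholder
}
```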
See http://nginx.org/r/listen for more details. -- Maxim Dounin http://nginx.org/en/donation.html From mellon at fugue.com Fri Apr 5 21:46:51 2013 From: mellon at fugue.com (Ted Lemon) Date: Fri, 5 Apr 2013 17:46:51 -0400 Subject: IPv4 & IPv6 In-Reply-To: References: Message-ID: On Apr 5, 2013, at 4:17 PM, "B.R." wrote: > Is there any other way to allow Nginx to listen on both interface types without mandatory system configuration? I just have "listen *:80" in my configurations, and it works fine over IPv4 and IPv6. -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Fri Apr 5 21:49:49 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 5 Apr 2013 23:49:49 +0200 Subject: IPv4 & IPv6 In-Reply-To: References: Message-ID: Everything you need to know: http://nginx.org/en/docs/http/ngx_http_core_module.html#listen From reallfqq-nginx at yahoo.fr Fri Apr 5 22:07:23 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 5 Apr 2013 18:07:23 -0400 Subject: IPv4 & IPv6 In-Reply-To: References: Message-ID: Hello, @Maxim I tried the duplicate configuration entries: listen 80; listen [::]:80 ipv6only=on; I get the following error: nginx: [emerg] duplicate listen options for [::]:80 in /etc/nginx/conf.d/***.conf:3 @Ted I tested your solution but, as I expected, nginx is only listening on IPv4 interfaces after restart and not IPv6 ones anymore. --- *B. R.* On Fri, Apr 5, 2013 at 5:49 PM, Lukas Tribus wrote: > Everything you need to know: > > http://nginx.org/en/docs/http/ngx_http_core_module.html#listen > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Fri Apr 5 22:21:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 6 Apr 2013 02:21:28 +0400 Subject: IPv4 & IPv6 In-Reply-To: References: Message-ID: <20130405222128.GG62550@mdounin.ru> Hello! On Fri, Apr 05, 2013 at 06:07:23PM -0400, B.R. wrote: > Hello, > > @Maxim > I tried the duplicate configuration entries: > listen 80; > listen [::]:80 ipv6only=on; > > I get the following error: > nginx: [emerg] duplicate listen options for [::]:80 in > /etc/nginx/conf.d/***.conf:3 If you have multiple virtual server{}s with the same listening sockets used, you have to specify listening options in a single listen directive only. That is, add "ipv6only=on" to a listen directive in the first/default server in your configuration. This will do the trick. -- Maxim Dounin http://nginx.org/en/donation.html From mellon at fugue.com Fri Apr 5 22:32:05 2013 From: mellon at fugue.com (Ted Lemon) Date: Fri, 5 Apr 2013 18:32:05 -0400 Subject: IPv4 & IPv6 In-Reply-To: References: Message-ID: On Apr 5, 2013, at 6:07 PM, B.R. wrote: > @Ted > I tested your solution but, as I expected, nginx is only listening on IPv4 interfaces after restart and not IPv6 ones anymore. Weird. I have this: listen 443 ssl; listen [::]:443 ipv6only=on ssl; But aside from that, I don't have any references to ipv6 in the configuration, and it is listening on [::]:80. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Apr 6 00:02:11 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 5 Apr 2013 20:02:11 -0400 Subject: IPv4 & IPv6 In-Reply-To: <20130405222128.GG62550@mdounin.ru> References: <20130405222128.GG62550@mdounin.ru> Message-ID: I have indeed several virtual servers. I have a specific one which serves different content whether a client connects in HTTP or HTTPS (basically the HTTP content provides directions for the HTTPS setup).
I also have one virtual server which I want listening on IPv4 only, not IPv6. That's why I prefer managing the listen directives in virtual servers rather than in the 'http' directive block. Is there no other mean than using global 'listen' directives? --- *B. R.* On Fri, Apr 5, 2013 at 6:21 PM, Maxim Dounin wrote: > Hello! > > On Fri, Apr 05, 2013 at 06:07:23PM -0400, B.R. wrote: > > > Hello, > > > > @Maxim > > I tried the duplicate configuration entries: > > listen 80; > > listen [::]:80 ipv6only=on; > > > > I has the following error: > > nginx: [emerg] duplicate listen options for [::]:80 in > > /etc/nginx/conf.d/***.conf:3 > > If you have multiple virtual server{}s with the same listening > sockets used, you have to specify listening options in a single listen > directive only. > > That is, add "ipv6only=on" to a listen directive in first/default > server in your configuration. This will do the trick. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Apr 6 00:48:01 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 5 Apr 2013 20:48:01 -0400 Subject: IPv4 & IPv6 In-Reply-To: References: <20130405222128.GG62550@mdounin.ru> Message-ID: Hmm... @Maxim I guess I haven't understood your piece of advice, since 'listen' can only be used in 'server' directive... What is it you wanted me to try, again? :oD --- *B. R.* On Fri, Apr 5, 2013 at 8:02 PM, B.R. wrote: > I have indeed several virtual servers. > > I have a specific one which serves different content whether a client > connects in HTTP or HTTPS (basically the HTTP content provides directions > for the HTTPS setup). > I also have one virtual server which I want listening on IPv4 only, not > IPv6. 
> > That's why I prefer managing the listen directives in virtual servers > rather than in the 'http' directive block. > > Is there no other mean than using global 'listen' directives? > --- > *B. R.* > > > On Fri, Apr 5, 2013 at 6:21 PM, Maxim Dounin wrote: > >> Hello! >> >> On Fri, Apr 05, 2013 at 06:07:23PM -0400, B.R. wrote: >> >> > Hello, >> > >> > @Maxim >> > I tried the duplicate configuration entries: >> > listen 80; >> > listen [::]:80 ipv6only=on; >> > >> > I has the following error: >> > nginx: [emerg] duplicate listen options for [::]:80 in >> > /etc/nginx/conf.d/***.conf:3 >> >> If you have multiple virtual server{}s with the same listening >> sockets used, you have to specify listening options in a single listen >> directive only. >> >> That is, add "ipv6only=on" to a listen directive in first/default >> server in your configuration. This will do the trick. >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaoweibin at gmail.com Sat Apr 6 03:52:43 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Sat, 6 Apr 2013 11:52:43 +0800 Subject: limit proxy_next_upstream In-Reply-To: <20130405212643.GB62550@mdounin.ru> References: <1b813e8f051e09e845a3f2b07e8a76b1.NginxMailingListEnglish@forum.nginx.org> <20130405101237.GW62550@mdounin.ru> <20130405212643.GB62550@mdounin.ru> Message-ID: I agree. The directive name and format are always the diffcult parts.[?] I thought we could add a new parameter to the proxy_next_upstream directive. The individual directive is OK for me. Limit the retry total time is great. It could eliminate some very long timeout responses. 2013/4/6 Maxim Dounin > Hello! > > On Fri, Apr 05, 2013 at 09:59:29PM +0800, Weibin Yao wrote: > > > We have the similar request. 
If we have dozens of servers in a same > > upstream block, I don't want to retry all of them. One side effect is it > > will increase the failure count with all of the backend servers. After > > several times, all of the servers will be marked down for a while and all > > of the requests will be replied with 502. > > > > We also need the retry mechnism and don't want to diable it. I think if > > there is a configurable tries time with the direcitve > proxy_next_upstream, > > it will be very nice. > > This is somewhere in TODO, and likely will be implemented soon. > As usual, the most serious problem is a name for the directive > (proxy_next_tries?). :) > > Additional similar proposal discussed here includes limiting total > time spent in retries before giving up (proxy_next_time?). This > will allow to try many servers as soon as they, e.g., return RST > quickly, but to only try one or two servers if they fail to answer > in configured proxy_connect_timeout/proxy_read_timeout. > > > > > > > 2013/4/5 Maxim Dounin > > > > > Hello! > > > > > > On Fri, Apr 05, 2013 at 04:38:36AM -0400, philipp wrote: > > > > > > > Is it possible to limit the amount of upstreams asked? I have four > > > upstreams > > > > defined and it makes no sense to ask all of them. If two of them > timeout > > > or > > > > error there is possible something wrong with the request and asking > > > another > > > > node doesn't help. > > > > > > No, as of now only switching off proxy_next_upstream completely is > > > available. > > > > > > On the other hand, with switched of proxy_next_upstream you may > > > still configure retries to additional (or other) upstream servers > > > via error_page directive, using a configuration similar to last > > > example at http://nginx.org/r/error_page. 
> > > > > > Something like this should generate no more than two requests > > > regardless of number of upstream servers configured: > > > > > > location / { > > > error_page 502 504 = @fallback; > > > proxy_pass http://backend; > > > proxy_next_upstream off; > > > } > > > > > > location @fallback { > > > proxy_pass http://backend; > > > proxy_next_upstream off; > > > } > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/en/donation.html > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > -- > > Weibin Yao > > Developer @ Server Platform Team of Taobao > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 330.gif Type: image/gif Size: 96 bytes Desc: not available URL: From agentzh at gmail.com Sat Apr 6 05:15:18 2013 From: agentzh at gmail.com (agentzh) Date: Fri, 5 Apr 2013 22:15:18 -0700 Subject: [openresty-en] Re: [ANN] ngx_openresty devel version 1.2.7.3 released In-Reply-To: <6f1fcae4-3cb6-422e-8418-ce6d698bb9ca@googlegroups.com> References: <6f1fcae4-3cb6-422e-8418-ce6d698bb9ca@googlegroups.com> Message-ID: Hello! 
On Fri, Apr 5, 2013 at 7:02 PM, Bearnard Hibbins wrote: > * Known issues: > "Server" header still says 1.2.7.1 but is actually 1.2.7.3 I cannot reproduce it on my side: $ curl -i localhost/lua HTTP/1.1 200 OK Server: ngx_openresty/1.2.7.3 Date: Sat, 06 Apr 2013 05:13:54 GMT Content-Type: application/octet-stream Transfer-Encoding: chunked Connection: keep-alive LuaJIT 2.0.1 Are you sure you're using the new nginx executable? Please note that "HUP reload" will not update the nginx executable. > lua-cjson is 2.1.0 and not 1.0.3 (not sure if thats a real issue) I doubt. Regards, -agentzh From reallfqq-nginx at yahoo.fr Sat Apr 6 06:25:54 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 6 Apr 2013 02:25:54 -0400 Subject: IPv4 & IPv6 In-Reply-To: References: <20130405222128.GG62550@mdounin.ru> Message-ID: Hello, It seems I solved the problem... It was indeed by reading a little more carefully the doc http://wiki.nginx.org/HttpCoreModule#listen, thanks @Lukas! ;o) The '*:80' syntax is used for IPv4 listening, I don't understand why it works as-is for you Ted. Maybe Maxim will be of a better help on that case. It is said that the IPv6 syntax will make Nginx listen for the 6to4 IP address syntax, making the websites reachable through IPv4, even if no specific IPv4 binding exist for the listening sockets. Using: listen [::]:80; I have: $ sudo ss -lnp|grep nginx 0 128 :::80 :::* users:(("nginx",***,11),("nginx",***,11)) 0 128 :::443 :::* users:(("nginx",***,12),("nginx",***,12)) You shall *not* have 2 'listen' directive if you did not separate you IPv6 and IPv4 stacks (with the sysctl net.ipv6.bindv6only directive set to 1). I had that configuration before, but the sysctl configuration has an impact on the whole system and also on Java which, when listening on IPv6, seems not to be reachable through 6to4 syntax. 
I still need to confirm my websites are accessible in the same fashion with IPv4 or IPv6, but I guess my trouble comes from bad IPv6 routing, not from the webserver. Thanks for your input, everyone! --- *B. R.* On Fri, Apr 5, 2013 at 8:48 PM, B.R. wrote: > Hmm... > > @Maxim > I guess I haven't understood your piece of advice, since 'listen' can only > be used in 'server' directive... > What is it you wanted me to try, again? :oD > --- > *B. R.* > > > On Fri, Apr 5, 2013 at 8:02 PM, B.R. wrote: > >> I have indeed several virtual servers. >> >> I have a specific one which serves different content whether a client >> connects in HTTP or HTTPS (basically the HTTP content provides directions >> for the HTTPS setup). >> I also have one virtual server which I want listening on IPv4 only, not >> IPv6. >> >> That's why I prefer managing the listen directives in virtual servers >> rather than in the 'http' directive block. >> >> Is there no other mean than using global 'listen' directives? >> --- >> *B. R.* >> >> >> On Fri, Apr 5, 2013 at 6:21 PM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Fri, Apr 05, 2013 at 06:07:23PM -0400, B.R. wrote: >>> >>> > Hello, >>> > >>> > @Maxim >>> > I tried the duplicate configuration entries: >>> > listen 80; >>> > listen [::]:80 ipv6only=on; >>> > >>> > I has the following error: >>> > nginx: [emerg] duplicate listen options for [::]:80 in >>> > /etc/nginx/conf.d/***.conf:3 >>> >>> If you have multiple virtual server{}s with the same listening >>> sockets used, you have to specify listening options in a single listen >>> directive only. >>> >>> That is, add "ipv6only=on" to a listen directive in first/default >>> server in your configuration. This will do the trick. 
>>> >>> -- >>> Maxim Dounin >>> http://nginx.org/en/donation.html >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Apr 6 10:39:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 6 Apr 2013 14:39:05 +0400 Subject: IPv4 & IPv6 In-Reply-To: References: <20130405222128.GG62550@mdounin.ru> Message-ID: <20130406103905.GI62550@mdounin.ru> Hello! On Sat, Apr 06, 2013 at 02:25:54AM -0400, B.R. wrote: > Hello, > > It seems I solved the problem... > It was indeed by reading a little more carefully the doc > http://wiki.nginx.org/HttpCoreModule#listen, thanks @Lukas! ;o) > > The '*:80' syntax is used for IPv4 listening, I don't understand why it > works as-is for you Ted. Maybe Maxim will be of a better help on that case. > > It is said that the IPv6 syntax will make Nginx listen for the 6to4 IP > address syntax, making the websites reachable through IPv4, even if no > specific IPv4 binding exist for the listening sockets. > Using: > listen [::]:80; > > I have: > $ sudo ss -lnp|grep nginx > 0 128 :::80 > :::* users:(("nginx",***,11),("nginx",***,11)) > 0 128 :::443 > :::* users:(("nginx",***,12),("nginx",***,12)) > > You shall *not* have 2 'listen' directives if you did not separate your IPv6 > and IPv4 stacks (with the sysctl net.ipv6.bindv6only directive set to 1). This is the wrong approach and it will no longer work for you after a 1.3.x upgrade. As I already suggested, use listen 80; listen [::]:80 ipv6only=on; instead as a portable solution, which doesn't depend on a system configuration. (In 1.3.x, the "ipv6only=on" part can be removed as it's now the default.)
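When several virtual servers share the same listening sockets (B.R.'s case), the socket-level options may appear on only one listen directive per address:port — the first/default server — while the other servers use a bare listen. A minimal sketch, with hypothetical server names:

```nginx
# Socket-level options (ipv6only, backlog, ...) go on exactly one
# listen directive per address:port, typically the default server.
server {
    listen 80 default_server;
    listen [::]:80 ipv6only=on default_server;
    server_name _;
}

# Further virtual servers reuse the same sockets without
# repeating the options, avoiding the "duplicate listen options" error.
server {
    listen 80;
    listen [::]:80;
    server_name www.example.com;  # placeholder
}
```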
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sat Apr 6 12:55:52 2013 From: nginx-forum at nginx.us (Larry) Date: Sat, 06 Apr 2013 08:55:52 -0400 Subject: Reverse proxy and wireshark Message-ID: <146f040149d79d1d7b03e05f21be5242.NginxMailingListEnglish@forum.nginx.org> Hello, I am suddenly worrying about something simple : I have a box that send some traffic with proxy_pass to get files from another of my box faking the url. Hence acting as a reverse proxy. All the connections are ssl covered. Right. But is the whole reverse proxy broken if one listen with wireshark to the traffic of that proxy server ? Will it tell in the clear that I get the file from https://xxx.xxx.xxx.xxx$uri ? Any hope to prevent that ? I don't want people to be able to know my other boxes ips. My boxes are all over europe, cannot change this. Is there an option in Nginx that would help there ? Thanks ! Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238162,238162#msg-238162 From rkearsley at blueyonder.co.uk Sat Apr 6 13:19:55 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Sat, 06 Apr 2013 14:19:55 +0100 Subject: Reverse proxy and wireshark In-Reply-To: <146f040149d79d1d7b03e05f21be5242.NginxMailingListEnglish@forum.nginx.org> References: <146f040149d79d1d7b03e05f21be5242.NginxMailingListEnglish@forum.nginx.org> Message-ID: <516020FB.4040905@blueyonder.co.uk> If you run wireshark on your main box, you will be able to see the ips it connects to (but not the urls because of https). However they would need to be logged into your box to run wireshark and at this point they could just run a netstat command to find the ips it is connected to. If you mean can the network operator find these ips? They can use tools like netflow/sflow on their switches and routers to find these ips (which is totally out of your control) There's no way to prevent this.. 
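Larry's setup, as described, can be sketched like this (all names and paths are hypothetical). As Richard's point implies, TLS on the upstream connection hides the URL and payload from a sniffer, but the backend's IP address is always visible in the packet headers:

```nginx
server {
    listen 443 ssl;
    server_name front.example.com;                 # placeholder
    ssl_certificate     /etc/nginx/ssl/front.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/front.key;

    location / {
        # The upstream host never appears in the response the client sees,
        # but anyone capturing this box's outbound traffic still sees the
        # destination IP at the IP layer -- TLS cannot conceal that.
        proxy_pass https://backend.internal.example;  # placeholder backend
    }
}
```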
On 06/04/13 13:55, Larry wrote: > Hello, > > I am suddenly worrying about something simple : > > I have a box that send some traffic with proxy_pass to get files from > another of my box faking the url. Hence acting as a reverse proxy. > All the connections are ssl covered. > > Right. > > But is the whole reverse proxy broken if one listen with wireshark to the > traffic of that proxy server ? > Will it tell in the clear that I get the file from > https://xxx.xxx.xxx.xxx$uri ? > > Any hope to prevent that ? I don't want people to be able to know my other > boxes ips. > > My boxes are all over europe, cannot change this. > > Is there an option in Nginx that would help there ? > > Thanks ! > > Larry > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238162,238162#msg-238162 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat Apr 6 14:01:12 2013 From: nginx-forum at nginx.us (Larry) Date: Sat, 06 Apr 2013 10:01:12 -0400 Subject: Reverse proxy and wireshark In-Reply-To: <516020FB.4040905@blueyonder.co.uk> References: <516020FB.4040905@blueyonder.co.uk> Message-ID: <3da2c15ac4deccef890007cef1d6dfd1.NginxMailingListEnglish@forum.nginx.org> My concern is that a hacker is able to know my other ips over europe. My host is not a problem. The real deal is the outgoing packets I don't want external people to know where they are going to. It would defeat the whole purpose of reverse proxy.. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238162,238164#msg-238164 From rkearsley at blueyonder.co.uk Sat Apr 6 14:07:27 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Sat, 06 Apr 2013 15:07:27 +0100 Subject: Reverse proxy and wireshark In-Reply-To: <3da2c15ac4deccef890007cef1d6dfd1.NginxMailingListEnglish@forum.nginx.org> References: <516020FB.4040905@blueyonder.co.uk> <3da2c15ac4deccef890007cef1d6dfd1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51602C1F.2070701@blueyonder.co.uk> any hacker will need to be inside your server or have some administration over the network to find those ips On 06/04/13 15:01, Larry wrote: > My concern is that a hacker is able to know my other ips over europe. > > My host is not a problem. The real deal is the outgoing packets I don't want > external people to know where they are going to. > > It would defeat the whole purpose of reverse proxy.. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238162,238164#msg-238164 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat Apr 6 14:27:17 2013 From: nginx-forum at nginx.us (Larry) Date: Sat, 06 Apr 2013 10:27:17 -0400 Subject: Reverse proxy and wireshark In-Reply-To: <51602C1F.2070701@blueyonder.co.uk> References: <51602C1F.2070701@blueyonder.co.uk> Message-ID: <94354f5d0e12d28a7d9eb3a91f039b77.NginxMailingListEnglish@forum.nginx.org> Reassuring but everywhere on the web, you can see wireshark sniffing in/out packet to any server. Hence, they are not connected to the server to sniff packets. That is why I started worrying actually ! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238162,238166#msg-238166 From contact at jpluscplusm.com Sat Apr 6 14:33:22 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 6 Apr 2013 15:33:22 +0100 Subject: Reverse proxy and wireshark In-Reply-To: <94354f5d0e12d28a7d9eb3a91f039b77.NginxMailingListEnglish@forum.nginx.org> References: <51602C1F.2070701@blueyonder.co.uk> <94354f5d0e12d28a7d9eb3a91f039b77.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 6 April 2013 15:27, Larry wrote: > Reassuring but everywhere on the web, you can see wireshark sniffing in/out > packets to any server. No you can't. > Hence, they are not connected to the server to sniff packets. Your conclusion is wrong as it is based on incorrect information. > That is why I started worrying actually ! I would suggest you started worrying because you don't understand the threat model you're trying to mitigate. Please do some more reading before continuing this thread. Jonathan From reallfqq-nginx at yahoo.fr Sat Apr 6 14:52:39 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 6 Apr 2013 10:52:39 -0400 Subject: IPv4 & IPv6 In-Reply-To: <20130406103905.GI62550@mdounin.ru> References: <20130405222128.GG62550@mdounin.ru> <20130406103905.GI62550@mdounin.ru> Message-ID: But as I noticed earlier, these configuration directives conflict with each other across multiple virtual servers... That's a huge step backwards. Having to specify them only once across every configuration file is counter-intuitive. Why isn't nginx able to summarize all the needs for listening sockets across configuration files before attempting to open them? Having to define those listening directives in a 'generic default server' is awkward and looks ugly. --- *B. R.* On Sat, Apr 6, 2013 at 6:39 AM, Maxim Dounin wrote: > Hello! > > On Sat, Apr 06, 2013 at 02:25:54AM -0400, B.R. wrote: > > > Hello, > > > > It seems I solved the problem...
> > It was indeed by reading a little more carefully the doc > > http://wiki.nginx.org/HttpCoreModule#listen, thanks @Lukas! ;o) > > > > The '*:80' syntax is used for IPv4 listening, I don't understand why it > > works as-is for you Ted. Maybe Maxim will be of a better help on that > case. > > > > It is said that the IPv6 syntax will make Nginx listen for the 6to4 IP > > address syntax, making the websites reachable through IPv4, even if no > > specific IPv4 binding exist for the listening sockets. > > Using: > > listen [::]:80; > > > > I have: > > $ sudo ss -lnp|grep nginx > > 0 128 :::80 > > :::* users:(("nginx",***,11),("nginx",***,11)) > > 0 128 :::443 > > :::* users:(("nginx",***,12),("nginx",***,12)) > > > > You shall *not* have 2 'listen' directive if you did not separate you > IPv6 > > and IPv4 stacks (with the sysctl net.ipv6.bindv6only directive set to 1). > > This is wrong aproach and it will no longer work for you after > 1.3.x upgrade. As I already suggested, use > > listen 80; > listen [::]:80 ipv6only=on; > > instead as a portable solution, which doesn't depend on a system > configuration. (In 1.3.x, the "ipv6only=on" part can be removed > as it's now the default.) > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Apr 6 15:01:58 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 6 Apr 2013 11:01:58 -0400 Subject: IPv4 & IPv6 In-Reply-To: References: <20130405222128.GG62550@mdounin.ru> <20130406103905.GI62550@mdounin.ru> Message-ID: Add-on: Besides, as I explained earlier, having generic 'listen' directives implies some difficulties. 
For example, I am using 2 virtual servers to serve content for the same server_name, one listening on port 80, the other on port 443, allowing me to serve content for HTTP and HTTPS in different fashions. Using a generic 'listen' directive breaks that system and I'm stuck. What would be an acceptable solution? Thanks, --- *B. R.* On Sat, Apr 6, 2013 at 10:52 AM, B.R. wrote: > But as I noticed earlier, these configuration directives conflict with > each other across multiple virtual servers... > That's a huge step backwards. > > Having to specify them only once across every configuration file is > counter-intuitive. > Why isn't nginx able to summarize all the needs for listening sockets > across configuration files before attempting to open them? > Having to define those listening directives in a 'generic default server' > is awkward and looks ugly. > > --- > *B. R.* > > > On Sat, Apr 6, 2013 at 6:39 AM, Maxim Dounin wrote: > >> Hello! >> >> On Sat, Apr 06, 2013 at 02:25:54AM -0400, B.R. wrote: >> >> > Hello, >> > >> > It seems I solved the problem... >> > It was indeed by reading a little more carefully the doc >> > http://wiki.nginx.org/HttpCoreModule#listen, thanks @Lukas! ;o) >> > >> > The '*:80' syntax is used for IPv4 listening, I don't understand why it >> > works as-is for you Ted. Maybe Maxim will be of a better help on that >> case. >> > >> > It is said that the IPv6 syntax will make Nginx listen for the 6to4 IP >> > address syntax, making the websites reachable through IPv4, even if no >> > specific IPv4 binding exist for the listening sockets. >> > Using: >> > listen [::]:80; >> > >> > I have: >> > $ sudo ss -lnp|grep nginx >> > 0 128 :::80 >> > :::* users:(("nginx",***,11),("nginx",***,11)) >> > 0 128 :::443 >> > :::* users:(("nginx",***,12),("nginx",***,12)) >> > >> > You shall *not* have 2 'listen' directives if you did not separate your >> IPv6 >> > and IPv4 stacks (with the sysctl net.ipv6.bindv6only directive set to >> 1).
>> >> This is wrong aproach and it will no longer work for you after >> 1.3.x upgrade. As I already suggested, use >> >> listen 80; >> listen [::]:80 ipv6only=on; >> >> instead as a portable solution, which doesn't depend on a system >> configuration. (In 1.3.x, the "ipv6only=on" part can be removed >> as it's now the default.) >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Sat Apr 6 15:23:42 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 6 Apr 2013 19:23:42 +0400 Subject: IPv4 & IPv6 In-Reply-To: References: <20130405222128.GG62550@mdounin.ru> <20130406103905.GI62550@mdounin.ru> Message-ID: server { listen 80; listen [::]:80 ipv6only=on; server_name one; ... } server { listen 443 ssl; listen [::]:443 ssl ipv6only=on; server_name one; ... } -- Igor Sysoev http://nginx.com/services.html On Apr 6, 2013, at 19:01 , B.R. wrote: > Add-on: > > Besides, as I explained earlier, having generic 'listen' directives implies some difficulties. > > For example, I am using 2 virtual servers to serve content for the same server_name, one listening on port 80, the other on port 443, allowing me to serve cotnent for HTTP and HTTPS in different fashions. > Using generic 'listen' directive breaks that system and I'm stuck. > > What would be an acceptable solution? > Thanks, > --- > B. R. > > > On Sat, Apr 6, 2013 at 10:52 AM, B.R. wrote: > But as I noticed earlier, these configuration directives conflict with each other across multiple virtual servers... > That's a huge step backwards. > > H?aving to specify them only once across every configuration file is counter-intuitive. > Why isn't nginx able to ?summarize all the needs for listening sockets across configuraiton files before attempting to open them? 
> Having to define those listening directives in a 'generic default server' is awkward and looks ugly. > > --- > B. R. > > > On Sat, Apr 6, 2013 at 6:39 AM, Maxim Dounin wrote: > Hello! > > On Sat, Apr 06, 2013 at 02:25:54AM -0400, B.R. wrote: > > > Hello, > > > > It seems I solved the problem... > > It was indeed by reading a little more carefully the doc > > http://wiki.nginx.org/HttpCoreModule#listen, thanks @Lukas! ;o) > > > > The '*:80' syntax is used for IPv4 listening, I don't understand why it > > works as-is for you Ted. Maybe Maxim will be of a better help on that case. > > > > It is said that the IPv6 syntax will make Nginx listen for the 6to4 IP > > address syntax, making the websites reachable through IPv4, even if no > > specific IPv4 binding exist for the listening sockets. > > Using: > > listen [::]:80; > > > > I have: > > $ sudo ss -lnp|grep nginx > > 0 128 :::80 > > :::* users:(("nginx",***,11),("nginx",***,11)) > > 0 128 :::443 > > :::* users:(("nginx",***,12),("nginx",***,12)) > > > > You shall *not* have 2 'listen' directive if you did not separate you IPv6 > > and IPv4 stacks (with the sysctl net.ipv6.bindv6only directive set to 1). > > This is wrong aproach and it will no longer work for you after > 1.3.x upgrade. As I already suggested, use > > listen 80; > listen [::]:80 ipv6only=on; > > instead as a portable solution, which doesn't depend on a system > configuration. (In 1.3.x, the "ipv6only=on" part can be removed > as it's now the default.) > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
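The advice exchanged in this thread boils down to the following sketch (server names hypothetical): give each address:port its socket-level options (such as ipv6only=on) in exactly one listen directive, and have every other virtual server repeat only the bare address:port.

```nginx
# Hedged sketch, not a drop-in config: hypothetical server names.
# Socket options like ipv6only=on may appear only once per address:port.
server {
    listen 80;
    listen [::]:80 ipv6only=on;   # options set here, once
    server_name one.example.com;
}

server {
    listen 80;
    listen [::]:80;               # bare address:port everywhere else
    server_name two.example.com;
}
```

On nginx 1.3.4 and later the ipv6only=on part can be dropped, as it became the default.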
URL: From typlo at me.com Sat Apr 6 15:34:45 2013 From: typlo at me.com (Typlo) Date: Sat, 06 Apr 2013 15:34:45 +0000 (GMT) Subject: Location support for multiple URLs Message-ID: Hello, I would like to use the FastCGI cache feature of nginx for my web application, but I need to use it only for a set of URLs. I would like to use it for the following locations: http://domain.com/index.php?act=detail&ID=[ANY ID HERE] Example: http://domain.com/index.php?act=detail&id=o2Zimg And so on. What should I place in the location directive to cache only those URLs? I can't figure it out on the nginx wiki. Also, I would like to replace the Cache-Control and Pragma headers set by my PHP application; can I use the add_header directive? Or would I have to add a 3rd party module, like more_http_headers? I use nginx from a PPA (Ubuntu), so for adding more_http_headers I would have to build it :/ Greetings from Antarctica. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Apr 6 18:44:31 2013 From: nginx-forum at nginx.us (Larry) Date: Sat, 06 Apr 2013 14:44:31 -0400 Subject: Reverse proxy and wireshark In-Reply-To: <146f040149d79d1d7b03e05f21be5242.NginxMailingListEnglish@forum.nginx.org> References: <146f040149d79d1d7b03e05f21be5242.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4c99c5d76492f9226eed4d15ff7f08f0.NginxMailingListEnglish@forum.nginx.org> Thank you both; I admit I started worrying on the basis of wrong information/comprehension. Now it is ok, and I can keep up my nginx config with the x-accel variables. Thanks again, and sincerely sorry I bothered you for such a thing. Regards, Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238162,238173#msg-238173 From reallfqq-nginx at yahoo.fr Sat Apr 6 20:05:22 2013 From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 6 Apr 2013 16:05:22 -0400 Subject: IPv4 & IPv6 In-Reply-To: References: <20130405222128.GG62550@mdounin.ru> <20130406103905.GI62550@mdounin.ru> Message-ID: That's exactly what I tried first, and if there are multiple servers listening to same ports, I get the following error: nginx: [emerg] duplicate listen options for [::]:80 in /etc/nginx/conf.d/***.conf:3 See follow-up messages or (through the forum archive on this subject): http://forum.nginx.org/read.php?2,238147,238152#msg-238152 I have the feeling we entered a loop back to the top of the subject... Let me summarize : *Attempt #1* listen 80; listen [::]:80 ipv6only=on; Only works if a *single* server contains those directives. Produces error 'nginx: [emerg] duplicate listen options for [::]:80 in /etc/nginx/conf.d/***.conf:3' if several servers use them to listen to the same ports. *Attempt #2* listen [::]:80; 'Wrong approach' said Maxim, since it won't work in 1.4 stable due to changes in the 1.3 dev branch. However, in 1.2, that's the only working way I have to decide on per-server level which protocol and ports they need to listen to. I'm stuck. --- *B. R.* On Sat, Apr 6, 2013 at 11:23 AM, Igor Sysoev wrote: > server { > listen 80; > listen [::]:80 ipv6only=on; > server_name one; > ... > } > > server { > listen 443 ssl; > listen [::]:443 ssl ipv6only=on; > server_name one; > ... > } > > > -- > Igor Sysoev > http://nginx.com/services.html > > On Apr 6, 2013, at 19:01 , B.R. wrote: > > Add-on: > > Besides, as I explained earlier, having generic 'listen' directives > implies some difficulties. > > For example, I am using 2 virtual servers to serve content for the same > server_name, one listening on port 80, the other on port 443, allowing me > to serve cotnent for HTTP and HTTPS in different fashions. > Using generic 'listen' directive breaks that system and I'm stuck. > > What would be an acceptable solution? > Thanks, > --- > *B. R.* > > > On Sat, Apr 6, 2013 at 10:52 AM, B.R. 
wrote: > >> But as I noticed earlier, these configuration directives conflict with >> each other across multiple virtual servers... >> That's a huge step backwards. >> >> H >> ?aving to specify them only once across every configuration file is >> counter-intuitive. >> Why isn't nginx able to ?summarize all the needs for listening sockets >> across configuraiton files before attempting to open them? >> Having to define those listening directives in a 'generic default server' >> is awkward and looks ugly. >> >> --- >> *B. R.* >> >> >> On Sat, Apr 6, 2013 at 6:39 AM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Sat, Apr 06, 2013 at 02:25:54AM -0400, B.R. wrote: >>> >>> > Hello, >>> > >>> > It seems I solved the problem... >>> > It was indeed by reading a little more carefully the doc >>> > http://wiki.nginx.org/HttpCoreModule#listen, thanks @Lukas! ;o) >>> > >>> > The '*:80' syntax is used for IPv4 listening, I don't understand why it >>> > works as-is for you Ted. Maybe Maxim will be of a better help on that >>> case. >>> > >>> > It is said that the IPv6 syntax will make Nginx listen for the 6to4 IP >>> > address syntax, making the websites reachable through IPv4, even if no >>> > specific IPv4 binding exist for the listening sockets. >>> > Using: >>> > listen [::]:80; >>> > >>> > I have: >>> > $ sudo ss -lnp|grep nginx >>> > 0 128 :::80 >>> > :::* users:(("nginx",***,11),("nginx",***,11)) >>> > 0 128 :::443 >>> > :::* users:(("nginx",***,12),("nginx",***,12)) >>> > >>> > You shall *not* have 2 'listen' directive if you did not separate you >>> IPv6 >>> > and IPv4 stacks (with the sysctl net.ipv6.bindv6only directive set to >>> 1). >>> >>> This is wrong aproach and it will no longer work for you after >>> 1.3.x upgrade. As I already suggested, use >>> >>> listen 80; >>> listen [::]:80 ipv6only=on; >>> >>> instead as a portable solution, which doesn't depend on a system >>> configuration. (In 1.3.x, the "ipv6only=on" part can be removed >>> as it's now the default.) 
>>> >>> -- >>> Maxim Dounin >>> http://nginx.org/en/donation.html >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Sat Apr 6 20:44:22 2013 From: jim at ohlste.in (Jim Ohlstein) Date: Sat, 06 Apr 2013 16:44:22 -0400 Subject: IPv4 & IPv6 In-Reply-To: References: <20130405222128.GG62550@mdounin.ru> <20130406103905.GI62550@mdounin.ru> Message-ID: <51608926.3080001@ohlste.in> On 4/6/13 4:05 PM, B.R. wrote: > That's exactly what I tried first, and if there are multiple servers > listening to same ports, I get the following error: > nginx: [emerg] duplicate listen options for [::]:80 in > /etc/nginx/conf.d/***.conf:3 > See follow-up messages or (through the forum archive on this subject): > http://forum.nginx.org/read.php?2,238147,238152#msg-238152 Please do not top post just because that's where the insertion point is in your email client. It makes following, and, more importantly, later searching through a thread, rather difficult. Think of it this way - you don't shit in your pants just because that's where your asshole is. It's annoying enough that you use HTML mail. Plain text is preferable. > > I have the feeling we entered a loop back to the top of the subject... > > Let me summarize : > *Attempt #1* > listen 80; > listen [::]:80 ipv6only=on; > > Only works if a /*single*/ server contains those directives. 
> Produces error 'nginx: [emerg] duplicate listen options for [::]:80 in > /etc/nginx/conf.d/***.conf:3' if several servers use them to listen to > the same ports. > > *Attempt #2* > listen [::]:80; > > 'Wrong approach' said Maxim, since it won't work in 1.4 stable due to > changes in the 1.3 dev branch. > However, in 1.2, that's the only working way I have to decide on > per-server level which protocol and ports they need to listen to. > > I'm stuck. Why not try to use specific IPv6 addresses in your vhost config files? This may be a band-aid for your system. It also works because it's what we do. Most hosts allocate a large block of IPv6 addresses at a time. We get a /48 or a /64 at a time, depending on the provider. That's hundreds of thousands of addresses. So we give each vhost a unique IPv6, even if we're not using SSL. For instance, I host the nginx forum. The first few lines of the config file are: server { listen [2001:49f0:1018::5]:80; listen 80; server_name forum.nginx.org; ... } For a vhost with both standard and ssl ports, we use something like this: server { listen 80; listen ip.v4.add.ress:443 ssl; listen [2001:49f0:1018::a]:80; listen [2001:49f0:1018::a]:443 ssl; server_name domain.tld; ... } This way there is no competition for what is listening on IPv6 addresses and you should not see that error. I've been doing it this way for at least a couple of years now, certainly before 1.0.x series came out. It may not be the "recommended" approach but it's what I had to do to get it back then. One caveat - I am using FreeBSD and IPv6 may be implemented differently in GNU/Linux. > --- > *B. R.* > > > On Sat, Apr 6, 2013 at 11:23 AM, Igor Sysoev > wrote: > > server { > listen 80; > listen [::]:80 ipv6only=on; > server_name one; > ... > } > > server { > listen 443 ssl; > listen [::]:443 ssl ipv6only=on; > server_name one; > ... > } > > > -- > Igor Sysoev > http://nginx.com/services.html > > On Apr 6, 2013, at 19:01 , B.R. 
wrote: > >> Add-on: >> >> Besides, as I explained earlier, having generic 'listen' >> directives implies some difficulties. >> >> For example, I am using 2 virtual servers to serve content for the >> same server_name, one listening on port 80, the other on port 443, >> allowing me to serve cotnent for HTTP and HTTPS in different fashions. >> Using generic 'listen' directive breaks that system and I'm stuck. >> >> What would be an acceptable solution? >> Thanks, >> --- >> *B. R.* >> >> >> On Sat, Apr 6, 2013 at 10:52 AM, B.R. > > wrote: >> >> But as I noticed earlier, these configuration directives >> conflict with each other across multiple virtual servers... >> That's a huge step backwards. >> >> H >> ?aving to specify them only once across every configuration >> file is counter-intuitive. >> Why isn't nginx able to ?summarize all the needs for listening >> sockets across configuraiton files before attempting to open them? >> Having to define those listening directives in a 'generic >> default server' is awkward and looks ugly. >> >> --- >> *B. R.* >> >> >> On Sat, Apr 6, 2013 at 6:39 AM, Maxim Dounin >> > wrote: >> >> Hello! >> >> On Sat, Apr 06, 2013 at 02:25:54AM -0400, B.R. wrote: >> >> > Hello, >> > >> > It seems I solved the problem... >> > It was indeed by reading a little more carefully the doc >> > http://wiki.nginx.org/HttpCoreModule#listen, thanks >> @Lukas! ;o) >> > >> > The '*:80' syntax is used for IPv4 listening, I don't >> understand why it >> > works as-is for you Ted. Maybe Maxim will be of a better >> help on that case. >> > >> > It is said that the IPv6 syntax will make Nginx listen >> for the 6to4 IP >> > address syntax, making the websites reachable through >> IPv4, even if no >> > specific IPv4 binding exist for the listening sockets. 
>> > Using: >> > listen [::]:80; >> > >> > I have: >> > $ sudo ss -lnp|grep nginx >> > 0 128 :::80 >> > :::* users:(("nginx",***,11),("nginx",***,11)) >> > 0 128 :::443 >> > :::* users:(("nginx",***,12),("nginx",***,12)) >> > >> > You shall *not* have 2 'listen' directive if you did not >> separate you IPv6 >> > and IPv4 stacks (with the sysctl net.ipv6.bindv6only >> directive set to 1). >> >> This is wrong aproach and it will no longer work for you after >> 1.3.x upgrade. As I already suggested, use >> >> listen 80; >> listen [::]:80 ipv6only=on; >> >> instead as a portable solution, which doesn't depend on a >> system >> configuration. (In 1.3.x, the "ipv6only=on" part can be >> removed >> as it's now the default.) >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> > -- Jim Ohlstein From mdounin at mdounin.ru Sat Apr 6 21:25:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 7 Apr 2013 01:25:13 +0400 Subject: IPv4 & IPv6 In-Reply-To: References: <20130405222128.GG62550@mdounin.ru> <20130406103905.GI62550@mdounin.ru> Message-ID: <20130406212513.GJ62550@mdounin.ru> Hello! On Sat, Apr 06, 2013 at 04:05:22PM -0400, B.R. wrote: > That's exactly what I tried first, and if there are multiple servers > listening to same ports, I get the following error: > nginx: [emerg] duplicate listen options for [::]:80 in > /etc/nginx/conf.d/***.conf:3 You've already been told to only specify listen options once. That is, you should write server { listen [::]:80 ipv6only=on; ... } server { listen [::]:80; ... } instead of server { listen [::]:80 ipv6only=on; ... } server { listen [::]:80 ipv6only=on; ... } in your config. -- Maxim Dounin http://nginx.org/en/donation.html From reallfqq-nginx at yahoo.fr Sat Apr 6 21:43:51 2013 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Sat, 6 Apr 2013 17:43:51 -0400 Subject: IPv4 & IPv6 In-Reply-To: <20130406212513.GJ62550@mdounin.ru> References: <20130405222128.GG62550@mdounin.ru> <20130406103905.GI62550@mdounin.ru> <20130406212513.GJ62550@mdounin.ru> Message-ID: Thanks Maxim ! There was a misunderstanding there, I thought I shouldn't use the whole directive, I didn't get that only the 'ipv6only=on' part was not to be repeated amongst servers. Works great ('of course it does!' ;o)). Thanks for the help again, --- *B. R.* On Sat, Apr 6, 2013 at 5:25 PM, Maxim Dounin wrote: > Hello! > > On Sat, Apr 06, 2013 at 04:05:22PM -0400, B.R. wrote: > > > That's exactly what I tried first, and if there are multiple servers > > listening to same ports, I get the following error: > > nginx: [emerg] duplicate listen options for [::]:80 in > > /etc/nginx/conf.d/***.conf:3 > > You've already been told to only specify listen options once. > That is, you should write > > server { > listen [::]:80 ipv6only=on; > ... > } > > server { > listen [::]:80; > ... > } > > instead of > > server { > listen [::]:80 ipv6only=on; > ... > } > > server { > listen [::]:80 ipv6only=on; > ... > } > > in your config. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Sun Apr 7 06:38:25 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Sun, 7 Apr 2013 10:38:25 +0400 Subject: IPv4 & IPv6 In-Reply-To: <51608926.3080001@ohlste.in> References: <20130405222128.GG62550@mdounin.ru> <20130406103905.GI62550@mdounin.ru> <51608926.3080001@ohlste.in> Message-ID: <2C5D090D-C924-47F6-BB86-A0939948CF01@sysoev.ru> On Apr 7, 2013, at 0:44 , Jim Ohlstein wrote: > This way there is no competition for what is listening on IPv6 addresses > and you should not see that error. 
I've been doing it this way for at > least a couple of years now, certainly before 1.0.x series came out. It > may not be the "recommended" approach but it's what I had to do to get > it back then. One caveat - I am using FreeBSD and IPv6 may be > implemented differently in GNU/Linux. On FreeBSD dual stack sockets are disabled by default: >sysctl net.inet6.ip6.v6only net.inet6.ip6.v6only: 1 while on Linux they are enabled: >cat /proc/sys/net/ipv6/bindv6only 0 -- Igor Sysoev http://nginx.com/services.html From ianevans at digitalhit.com Mon Apr 8 07:46:12 2013 From: ianevans at digitalhit.com (Ian M. Evans) Date: Mon, 8 Apr 2013 03:46:12 -0400 Subject: nginx rpm and configure arguments Message-ID: <4c008149ed1e06367e00e75013486299.squirrel@www.digitalhit.com> I began thinking about using Puppet to automate deployment and that led me to thinking about using rpms to make keeping up with new releases easier. Since I've always compiled nginx from source, I was curious how one specifies configure arguments like --with-http_ssl_module and --with-http_gzip_static_module when using rpms. Color me curious... From sb at waeme.net Mon Apr 8 09:08:07 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Mon, 8 Apr 2013 13:08:07 +0400 Subject: nginx rpm and configure arguments In-Reply-To: <4c008149ed1e06367e00e75013486299.squirrel@www.digitalhit.com> References: <4c008149ed1e06367e00e75013486299.squirrel@www.digitalhit.com> Message-ID: <2E6F7EEE-EE07-450C-8C11-467B2366E72A@waeme.net> On 8 Apr2013, at 11:46 , Ian M. Evans wrote: > I began thinking about using Puppet to automate deployment and that led me > to thinking about using rpms to make keeping up with new releases easier. > > Since I've always compiled nginx from source, I was curious how one > specifies configure arguments like --with-http_ssl_module and > --with-http_gzip_static_module when using rpms. 
You could download the srpm (http://nginx.org/packages/rhel/6/SRPMS/nginx-1.2.8-1.el6.ngx.src.rpm), install it, and look at nginx.spec as an example. From nginx-forum at nginx.us Mon Apr 8 12:16:29 2013 From: nginx-forum at nginx.us (spdyg) Date: Mon, 08 Apr 2013 08:16:29 -0400 Subject: SPDY + proxy cache static content failures In-Reply-To: <201304051456.00233.vbart@nginx.com> References: <201304051456.00233.vbart@nginx.com> Message-ID: Thanks Valentin. I have emailed you (off-list) some debug logs / screenshots. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233497,238195#msg-238195 From nginx-forum at nginx.us Mon Apr 8 13:36:22 2013 From: nginx-forum at nginx.us (Kenneth Yeh) Date: Mon, 08 Apr 2013 09:36:22 -0400 Subject: can nginx listen two different port with one nginx? Message-ID: <5b3c6f50b196dd0c296280b1f58169b8.NginxMailingListEnglish@forum.nginx.org> Hi, I am a new user of nginx, and wondering if nginx can listen on two different ports within one nginx process? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238197,238197#msg-238197 From contact at jpluscplusm.com Mon Apr 8 13:52:54 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 8 Apr 2013 14:52:54 +0100 Subject: can nginx listen two different port with one nginx? In-Reply-To: <5b3c6f50b196dd0c296280b1f58169b8.NginxMailingListEnglish@forum.nginx.org> References: <5b3c6f50b196dd0c296280b1f58169b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: http://nginx.org/en/docs/http/ngx_http_core_module.html#listen On 8 April 2013 14:36, Kenneth Yeh wrote: > Hi, I am a new user of nginx, and wondering if nginx can listen on two different > ports within one nginx process?
> > Thanks > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238197,238197#msg-238197 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Jonathan Matthews // Oxford, London, UK This is not an request for representation; I require explicit prior approval of each & every 3rd party contact you initiate regarding my services/availability. By replying to this email, you agree to this. http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Mon Apr 8 15:46:55 2013 From: nginx-forum at nginx.us (gwinans) Date: Mon, 08 Apr 2013 11:46:55 -0400 Subject: proxy_cache_path's max_size being violated/ignored Message-ID: <8ab1dae7b936c8f234033b508f296cba.NginxMailingListEnglish@forum.nginx.org> Hi folks, We setup some nginx instances a few months ago in order to serve up/cache video content. Recently, we've noticed that there is *severe* cache overrun -- on the order of 1TB + per cache directory. It seems like proxy_cache_path is entirely ignoring the max_size value specified in the config. Here's the config as it stands: http://pastie.org/private/aoc4nwyrdagtiwrmgmtjma The actual cache size on-disk is ~1.1-1.4TB rather than the expected 320GB. I'm at a loss after having asked for days on end in the #nginx channel on freenode. Did I pooch the config? Is the nginx cache loader unable to keep up with the incoming data rate? Is the max_size value arbitrarily ignored for some reason? 
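The pastie above is not reproduced in the archive; for orientation, the directive under discussion has the following general shape (paths, zone names and sizes here are illustrative guesses, not the poster's actual values):

```nginx
# Illustrative sketch only -- not the poster's pastie config.
# max_size caps the cache on disk; it is enforced by the cache manager
# process, which periodically evicts least-recently-used items, so the
# on-disk size can exceed max_size between its passes.
proxy_cache_path /var/cache/nginx/video levels=1:2
                 keys_zone=video_cache:512m max_size=320g inactive=7d;

server {
    listen 80;
    location / {
        proxy_cache video_cache;
        proxy_pass http://video_backend;   # hypothetical upstream
    }
}
```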
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238200,238200#msg-238200 From mdounin at mdounin.ru Mon Apr 8 16:22:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Apr 2013 20:22:22 +0400 Subject: proxy_cache_path's max_size being violated/ignored In-Reply-To: <8ab1dae7b936c8f234033b508f296cba.NginxMailingListEnglish@forum.nginx.org> References: <8ab1dae7b936c8f234033b508f296cba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130408162222.GQ62550@mdounin.ru> Hello! On Mon, Apr 08, 2013 at 11:46:55AM -0400, gwinans wrote: > Hi folks, > > We setup some nginx instances a few months ago in order to serve up/cache > video content. Recently, we've noticed that there is *severe* cache overrun > -- on the order of 1TB + per cache directory. > > It seems like proxy_cache_path is entirely ignoring the max_size value > specified in the config. > > Here's the config as it stands: > > http://pastie.org/private/aoc4nwyrdagtiwrmgmtjma > > The actual cache size on-disk is ~1.1-1.4TB rather than the expected 320GB. > I'm at a loss after having asked for days on end in the #nginx channel on > freenode. > > Did I pooch the config? Is the nginx cache loader unable to keep up with the > incoming data rate? Is the max_size value arbitrarily ignored for some > reason? Is cache manager process running? What's in logs? What "nginx -V" shows? -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Apr 8 16:28:46 2013 From: nginx-forum at nginx.us (gwinans) Date: Mon, 08 Apr 2013 12:28:46 -0400 Subject: proxy_cache_path's max_size being violated/ignored In-Reply-To: <20130408162222.GQ62550@mdounin.ru> References: <20130408162222.GQ62550@mdounin.ru> Message-ID: <3caa9364c97c1099b66f7616a8d6dd27.NginxMailingListEnglish@forum.nginx.org> The cache manager process is, indeed, running. I even did a brief strace on it to be sure it was actually working. There is nothing in the logs that indicates an issue. 
In fact, the only thing in the logs is expected http requests. Output of nginx -V: [root at nginx01-dpl ~]# nginx -V nginx version: nginx/1.2.8 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-file-aio --with-mail_ssl_module --with-ipv6 --add-module=/builddir/build/BUILD/nginx-1.2.8/ngx_cache_purge-2.0 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=-Wl,-E Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238200,238202#msg-238202 From mdounin at mdounin.ru Mon Apr 8 17:08:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Apr 2013 21:08:41 +0400 Subject: proxy_cache_path's max_size being violated/ignored In-Reply-To: <3caa9364c97c1099b66f7616a8d6dd27.NginxMailingListEnglish@forum.nginx.org> References: <20130408162222.GQ62550@mdounin.ru> <3caa9364c97c1099b66f7616a8d6dd27.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130408170840.GS62550@mdounin.ru> Hello! 
On Mon, Apr 08, 2013 at 12:28:46PM -0400, gwinans wrote: > The cache manager process is, indeed, running. I even did a brief strace on > it to be sure it was actually working. There is nothing in the logs that > indicates an issue. In fact, the only thing in the logs is expected http > requests. > > Output of nginx -V: > > [root at nginx01-dpl ~]# nginx -V > nginx version: nginx/1.2.8 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > TLS SNI support enabled > configure arguments: --user=nginx --group=nginx --prefix=/usr/share/nginx > --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/tmp/client_body > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid > --lock-path=/var/lock/subsys/nginx --with-http_ssl_module > --with-http_realip_module --with-http_addition_module > --with-http_xslt_module --with-http_image_filter_module > --with-http_geoip_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_degradation_module --with-http_stub_status_module > --with-http_perl_module --with-mail --with-file-aio --with-mail_ssl_module > --with-ipv6 > --add-module=/builddir/build/BUILD/nginx-1.2.8/ngx_cache_purge-2.0 > --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions > -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' > --with-ld-opt=-Wl,-E Have you tried without the cache purge module compiled in? 
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Apr 8 17:16:34 2013 From: nginx-forum at nginx.us (gwinans) Date: Mon, 08 Apr 2013 13:16:34 -0400 Subject: proxy_cache_path's max_size being violated/ignored In-Reply-To: <20130408170840.GS62550@mdounin.ru> References: <20130408170840.GS62550@mdounin.ru> Message-ID: <7f4ff5e755b24aa806086eccb0ac99d4.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > ** snip** > Have you tried without the cache purge module compiled in? > > -- > Maxim Dounin I have not -- this is actually an RPM-build from the Atomic Repo. Wouldn't removing said purge module disable the ability for it to clear items? Or does this module have some other purpose? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238200,238204#msg-238204 From john at disqus.com Mon Apr 8 17:37:59 2013 From: john at disqus.com (John Watson) Date: Mon, 8 Apr 2013 10:37:59 -0700 Subject: If-Modified-Since proxy cache optimization Message-ID: I noticed there was a patch submitted couple years ago that was never accepted and last year a post saying to expect it around Jan 2013. I understand SPDY probably pushed the roadmap back a bit. So I was wondering if there is a new estimate for when to expect this 1.3.x roadmap item to be completed? Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Apr 8 18:42:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Apr 2013 22:42:02 +0400 Subject: proxy_cache_path's max_size being violated/ignored In-Reply-To: <7f4ff5e755b24aa806086eccb0ac99d4.NginxMailingListEnglish@forum.nginx.org> References: <20130408170840.GS62550@mdounin.ru> <7f4ff5e755b24aa806086eccb0ac99d4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130408184202.GV62550@mdounin.ru> Hello! 
On Mon, Apr 08, 2013 at 01:16:34PM -0400, gwinans wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > ** snip** > > > Have you tried without the cache purge module compiled in? > > > > -- > > Maxim Dounin > > I have not -- this is actually an RPM-build from the Atomic Repo. Wouldn't > removing said purge module disable the ability for it to clear items? Or > does this module have some other purpose? The purge module is a 3rd party module to allow selective removal of cache items via HTTP requests. It is not required for normal operation of the nginx cache. (On the other hand, it may interfere with normal cache operations and may cause the problems you observe, much like any other 3rd party module. That's why we usually ask people to reproduce the issue without 3rd party modules/patches as one of the first investigation steps.) -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Apr 8 18:47:50 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Apr 2013 22:47:50 +0400 Subject: If-Modified-Since proxy cache optimization In-Reply-To: References: Message-ID: <20130408184750.GW62550@mdounin.ru> Hello! On Mon, Apr 08, 2013 at 10:37:59AM -0700, John Watson wrote: > I noticed there was a patch submitted couple years ago that was never > accepted and last year a post saying to expect it around Jan 2013. I > understand SPDY probably pushed the roadmap back a bit. So I was wondering > if there is a new estimate for when to expect this 1.3.x roadmap item to be > completed? As 1.4.0 is expected to appear soon and we are more or less in feature freeze for 1.3.x already, the said functionality will be reconsidered for 1.5.x. An optimistic forecast is something near May.
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Apr 8 18:54:55 2013 From: nginx-forum at nginx.us (gwinans) Date: Mon, 08 Apr 2013 14:54:55 -0400 Subject: proxy_cache_path's max_size being violated/ignored In-Reply-To: <20130408184202.GV62550@mdounin.ru> References: <20130408184202.GV62550@mdounin.ru> Message-ID: <75c48cd9d977abf6753fa7f9148aba06.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! ** snip** > (On the other hand, it may iterfere with normal cache operations > and may cause problems you observe, much like any other 3rd party > module. That's why we usually ask people to reproduce the issue > without 3rd party modules/patches as one of first investigation > steps.) > > -- > Maxim Dounin Got it. I've made a build minus that module and have rolled it to production. Will keep an eye on it and see how it works out -- if at all. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238200,238209#msg-238209 From nginx-forum at nginx.us Tue Apr 9 07:00:40 2013 From: nginx-forum at nginx.us (F21) Date: Tue, 09 Apr 2013 03:00:40 -0400 Subject: HEAD requests with PHP-FPM Message-ID: I would like to be able to support HEAD requests. For the most part, HEAD works well with static files. However, if I am trying to use a HEAD request against a PHP file, it is missing the Content-Length or the Transfer-Encoding header. PHP-FPM is connected to nginx 1.2.8 using FCGI. I also have gzip enabled. 
user www-user; worker_processes 1; error_log logs/error.log info; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; client_max_body_size 10M; sendfile on; keepalive_timeout 65; gzip on; gzip_types text/plain text/css application/json application/javascript application/x-javascript text/javascript; server { server_name test.com; root /web; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass unix:/run/php/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } listen 80 default_server; error_page 500 502 503 504 /50x.html; } } If I use a GET, everything works as expected: $ curl -XGET http://test.com/phpinfo.php -I HTTP/1.1 200 OK Date: Tue, 09 Apr 2013 06:58:50 GMT Content-Type: text/html Transfer-Encoding: chunked <------------------- Connection: keep-alive But, if I use a HEAD, Transfer-Encoding is missing: $ curl -XHEAD http://test.com/phpinfo.php -I HTTP/1.1 200 OK Date: Tue, 09 Apr 2013 06:59:24 GMT Content-Type: text/html Connection: keep-alive What's the reason for nginx not including the Transfer-Encoding header? Is there any way to turn it on? Interestingly, if I try to set the Transfer-Encoding header inside the PHP script, it is still removed by nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238218,238218#msg-238218 From eswar7028 at gmail.com Tue Apr 9 12:35:45 2013 From: eswar7028 at gmail.com (ESWAR RAO) Date: Tue, 9 Apr 2013 08:35:45 -0400 Subject: Reg. automatic failover Message-ID: Hi All, I observed that automatic failover behaviour is not working with my nginx server. # /opt/nginx/sbin/nginx -v nginx version: nginx/1.2.8 The nginx server is running with the following conf: upstream lb_get { server localhost:8031 ; server localhost:8032 ; } server { listen 8081; ..................... } location / { proxy_pass http://lb_get; .............
} I have two servers running as: $ nc -kl 8031 $ nc -kl 8032 From another machine: # curl 'http://192.168.2.94:8081' $ nc -kl 8031 GET / HTTP/1.0 Host: 192.168.2.94 X-Real-IP: 192.168.2.52 X-Forwarded-For: 192.168.2.52 Connection: close User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 Accept: */* But if I kill this server, the curl client is also getting killed and it's not getting transferred to the other server listening at 8032. Can anyone provide me some insight on this issue? Thanks Eswar -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 9 12:59:54 2013 From: nginx-forum at nginx.us (gadh) Date: Tue, 09 Apr 2013 08:59:54 -0400 Subject: nginx + my module crashes only when ignore client abort = on In-Reply-To: <26eafeb963c981a50298d8b8a45aa8fc.NginxMailingListEnglish@forum.nginx.org> References: <26eafeb963c981a50298d8b8a45aa8fc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0c0108f9578fa425b4eefd40e2f12707.NginxMailingListEnglish@forum.nginx.org> After a few additions to the code - in totally irrelevant places - the error returns, so it did not help. Now I try to create a new upstream handler so I can use it instead of the subrequest model. I described the model I work with in the first post above. Let me add this: in my first tests of the upstream - I cannot get to the backend server at all - I just get the upstream response and nginx passes it directly to my filters and then to the client. My needs are different - after I receive the upstream response, I need it to go to the backend, and then I'm going to inject some of the data I received from the upstream into the backend response and only then to the client. Can you tell me if creating a new upstream module (my examples are proxy/memcached) can suit my needs here?
Thanks a lot, Gad Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237362,238226#msg-238226 From nginx-forum at nginx.us Tue Apr 9 14:04:04 2013 From: nginx-forum at nginx.us (andor.toth) Date: Tue, 09 Apr 2013 10:04:04 -0400 Subject: try_files for alias In-Reply-To: References: Message-ID: <74501226c596394ae32e674ab80b2216.NginxMailingListEnglish@forum.nginx.org> Tip: I overcame the alias+try_files problem by using symlinks instead of aliases. Best, Andor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2991,238228#msg-238228 From nginx-forum at nginx.us Tue Apr 9 21:24:12 2013 From: nginx-forum at nginx.us (gwinans) Date: Tue, 09 Apr 2013 17:24:12 -0400 Subject: proxy_cache_path's max_size being violated/ignored In-Reply-To: <75c48cd9d977abf6753fa7f9148aba06.NginxMailingListEnglish@forum.nginx.org> References: <20130408184202.GV62550@mdounin.ru> <75c48cd9d977abf6753fa7f9148aba06.NginxMailingListEnglish@forum.nginx.org> Message-ID: So, I've run without this third party cache purge module since I mentioned a custom build. The server is still chewing through disk space and IO remains pegged to the wall (mix of the cache loader and new cache items). The cache loader seems unable to keep up -- I've never seen it actually delete anything. Through watching strace, it's just loading items. Does it not work on the queue until all items are loaded? Is there anything I can do to try to alleviate this problem? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238200,238244#msg-238244 From mdounin at mdounin.ru Tue Apr 9 23:58:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Apr 2013 03:58:01 +0400 Subject: proxy_cache_path's max_size being violated/ignored In-Reply-To: References: <20130408184202.GV62550@mdounin.ru> <75c48cd9d977abf6753fa7f9148aba06.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130409235801.GF62550@mdounin.ru> Hello!
On Tue, Apr 09, 2013 at 05:24:12PM -0400, gwinans wrote: > So, I've run without this third party cache purge module since I mentioned a > custom build. > > The server is still chewing through disk space and IO remains pegged to the > wall (mix of the cache loader and new cache items). The cache loader seems > unable to keep up -- I've never seen it actually delete anything. Through > watching strace, it's just loading items. Does it not work on the queue > until all items are loaded? Cache _loader_ never deletes anything. It loads information about the cache from disk. If it's running, it means that information about the cache wasn't yet loaded and nginx can't maintain configured max_size as it don't know current cache size (yet). In contrast, cache _manager_ maintains max_size and deletes inactive cache items. It will not be able to maintain max_size till the cache is loaded, see above. > Is there anything I can do to try to alleviate this problem? If you have problems with IO, most trivial solution to try is to reduce IO with proxy_cache_min_uses, see http://nginx.org/r/proxy_cache_min_uses. -- Maxim Dounin http://nginx.org/en/donation.html From aweber at comcast.net Wed Apr 10 01:08:05 2013 From: aweber at comcast.net (AJ Weber) Date: Tue, 09 Apr 2013 21:08:05 -0400 Subject: nginx-1.3.15 In-Reply-To: References: <20130326132948.GO62550@mdounin.ru> Message-ID: <5164BB75.6080105@comcast.net> I followed the instructions for adding the "mainline" repo to my yum config, ran a clean but I still only find 1.2.8 available for install (CentOS6 x64). What might I be doing wrong? -AJ On 4/3/2013 9:08 AM, Sergey Budnevitch wrote: > Hello > > We've added new repository with pre-build linux packages for nginx 1.3.*. 
> Documentation/instruction: http://nginx.org/en/linux_packages.html#mainline > The only differences in nginx configure options from stable packages > are gunzip module in all distributions and spdy module in Ubuntu 12.04 and > 12.10 where openssl 1.0.1 is available. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From agentzh at gmail.com Wed Apr 10 01:14:16 2013 From: agentzh at gmail.com (agentzh) Date: Tue, 9 Apr 2013 18:14:16 -0700 Subject: [ANN] ngx_openresty devel version 1.2.7.5 released Message-ID: Hi folks! I am happy to announce that the new development version of ngx_openresty, 1.2.7.5, is now released: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (devel) release, 1.2.7.3: * upgraded EchoNginxModule to 0.45. * bugfix: $echo_client_request_headers would return the first part of the request body when request body was read before reading this variable. * bugfix: $echo_client_request_headers might not work properly in a subrequest. * upgraded DrizzleNginxModule to 0.1.5. * bugfix: compilation errors occurred with nginx 1.3.15. thanks Karl Blessing for reporting this issue. * docs: fixed a typo in the sample code for $drizzle_thread_id reported by Lanfeng/yyQiuye. * docs: documented the config syntax for db passwords with special chars in them. * upgraded LuaNginxModule to 0.7.20. * feature: now we allow the "0" time argument in ngx.sleep(). * feature: ngx.location.capture and ngx.location.capture_multi now return a lua table with the boolean field "truncated", which indicates whether the subrequest response body is truncated. * bugfix: request hung when rewrite cycled in ngx.req.set_uri(uri, true) instead of throwing out an error log message and a 500 page properly. thanks Calin Don for the report. 
* bugfix: assignment to ngx.status did not take effect when the response status line had already been generated (by ngx_proxy or others). thanks eqiuno for reporting this issue. * bugfix: ngx.req.raw_header() would return the first part of the request body when request body was read before the call. thanks Matthieu Tourne for reporting this issue. * bugfix: ngx.req.raw_header() might not work properly in a subrequest. * bugfix: we would override the subrequest response status code later when error happens. * bugfix: the debug log message "lua set uri jump to " generated by ngx.req.set_uri(uri, true) was wrong for "" was the old URI. * upgraded LuaRestyMySQLLibrary to 0.13. * bugfix: 64-bit integer values in the MySQL packets (like last insert ids) could not be properly parsed due to the lack of support for 64-bit integers in LuaJIT's standard "bit" module. thanks Azure Wang for the patch implementing a temporary workaround. * docs: various typo fixes from Tor Hveem and doledoletree. * upgraded LuaRestyMemcachedLibrary to 0.11. * feature: added new method "touch" for the new Memcached command "touch". thanks merlin for the patch. * updated the upstream_truncation patch for the Nginx core. * bugfix: chunked upstream response bodies were treated as 502. thanks Andy Yuan for the report. * bugfix: request response status was changed to 502 after response header was sent in case of data truncation. * bugfix: the "last buf" (i.e., bufs with "last_buf" or "last_in_chain" set) should not be sent downstream in case of upstream data truncation. * updated the dtrace patch for the Nginx core. * feature: made the stap function "ngx_chain_dump()" print out info about the "last_buf" and "last_in_chain" flags in bufs and removed the old "" notation in the output. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002007 OpenResty (aka. 
ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster to ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! From sb at waeme.net Wed Apr 10 09:29:19 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Wed, 10 Apr 2013 13:29:19 +0400 Subject: nginx-1.3.15 In-Reply-To: <5164BB75.6080105@comcast.net> References: <20130326132948.GO62550@mdounin.ru> <5164BB75.6080105@comcast.net> Message-ID: <42955005-19CF-49EE-984B-99F02C7E94DB@waeme.net> On 10 Apr 2013, at 05:08 , AJ Weber wrote: > I followed the instructions for adding the "mainline" repo to my yum config, ran a clean but I still only find 1.2.8 available for install (CentOS6 x64). > > What might I be doing wrong? Please show the result of yum repolist -v nginx From nginx-forum at nginx.us Wed Apr 10 20:57:33 2013 From: nginx-forum at nginx.us (abstein2) Date: Wed, 10 Apr 2013 16:57:33 -0400 Subject: Status Code 001 In Logs Message-ID: I can't find documentation anywhere on what it means when nginx shows 001 as the value of $status in the access_log. I currently use nginx as a reverse proxy and I get this error when uploading large files (2+ MB, though my client_max_body_size is 4 MB). Also worth noting, the following values per my log files: $upstream_status: - $upstream_response_time: - $request_completion: Right now, the nginx server sits behind a load balancer, so it goes: Request -> Load Balancer -> Nginx -> Origin. Anyone have any ideas what the 001 code is supposed to represent?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238267,238267#msg-238267 From aweber at comcast.net Thu Apr 11 01:43:30 2013 From: aweber at comcast.net (AJ Weber) Date: Wed, 10 Apr 2013 21:43:30 -0400 Subject: nginx-1.3.15 In-Reply-To: <42955005-19CF-49EE-984B-99F02C7E94DB@waeme.net> References: <20130326132948.GO62550@mdounin.ru> <5164BB75.6080105@comcast.net> <42955005-19CF-49EE-984B-99F02C7E94DB@waeme.net> Message-ID: <51661542.5030409@comcast.net> yum repolist -v nginx Loading "fastestmirror" plugin Config time: 0.019 Yum Version: 3.2.29 Loading mirror speeds from cached hostfile * base: centosp4.centos.org * extras: centosc5.centos.org * updates: centosy3.centos.org Setting up Package Sacks pkgsack time: 0.015 Repo-id : nginx Repo-name : nginx repo Repo-status : enabled Repo-revision: 1364910404 Repo-updated : Tue Apr 2 13:46:45 2013 Repo-pkgs : 37 Repo-size : 26 M Repo-baseurl : http://nginx.org/packages/mainline/centos/6/x86_64/ Repo-expire : 21600 second(s) (last: Thu Apr 11 01:41:29 2013) repolist: 37 On 4/10/2013 5:29 AM, Sergey Budnevitch wrote: > On 10 Apr2013, at 05:08 , AJ Weber wrote: > >> I followed the instructions for adding the "mainline" repo to my yum config, ran a clean but I still only find 1.2.8 available for install (CentOS6 x64). >> >> What might I be doing wrong? > Please show result of > yum repolist -v nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Apr 11 03:47:24 2013 From: nginx-forum at nginx.us (mengqy) Date: Wed, 10 Apr 2013 23:47:24 -0400 Subject: configure --prefix with "~" Message-ID: nginx 1.3.15, ubuntu 12.04, bash: $ ./configure --prefix=~/tools/webserver/install ... $ make && make install --> a dir named '~' is created within current dir, but `~` should be interpreted as `$HOME` instead, right? 
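For what it's worth, the portable spelling is to use $HOME explicitly, since POSIX shells perform tilde expansion only at the beginning of an unquoted word; a minimal sketch (the path is the one from the report):

```shell
# Tilde expansion does not reliably apply inside --prefix=~/... in
# strictly POSIX shells, so build the path from $HOME instead:
prefix="$HOME/tools/webserver/install"
printf '%s\n' "$prefix"
```

Then `./configure --prefix="$prefix"` receives an absolute path, already expanded before configure ever sees it.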
Thanks to the protection from gvfs, `rm -rf ~` did not cause a disaster (it should be `rm -rf \~`) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238270,238270#msg-238270 From nginx-forum at nginx.us Thu Apr 11 03:52:32 2013 From: nginx-forum at nginx.us (mengqy) Date: Wed, 10 Apr 2013 23:52:32 -0400 Subject: nginx 1.2: static file truncated with HTTP status code 200 In-Reply-To: <20120516141529.GD31671@mdounin.ru> References: <20120516141529.GD31671@mdounin.ru> Message-ID: Sorry for the late thanks to all of you hackers, great job :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,226506,238271#msg-238271 From steve at greengecko.co.nz Thu Apr 11 06:18:19 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 11 Apr 2013 18:18:19 +1200 Subject: auth_basic and file uploads. Message-ID: <1365661099.5406.12.camel@steve-new> Hi Folks, I've got a Magento site under development, and just want it to be password protected until it goes live. No problem I thought... add in the auth_basic/auth_basic_user_file entries to the location / block. However, when I do that, I get a password request for the upload... 2013/04/11 05:12:40 [error] 9866#0: *31 no user/password was provided for basic authentication, client: <my IP>, server: example.com, request: "POST /index.php/admin/catalog_product_gallery/upload/key/ HTTP/1.1", host: "example.com" If I enclose the auth_basic/auth_basic_user_file entries in a limit_except POST block, then I can't log in, even though it then works perfectly if I'm already logged in! Any pointers?? Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/x-pkcs7-signature Size: 6189 bytes Desc: not available URL: From nginx-forum at nginx.us Thu Apr 11 08:15:02 2013 From: nginx-forum at nginx.us (winniethepooh) Date: Thu, 11 Apr 2013 04:15:02 -0400 Subject: Dropbox as upsream server Message-ID: <67c81ec7b37938d3d16407724a11a9c1.NginxMailingListEnglish@forum.nginx.org> I'm trying to use the Dropbox public folder and my nginx server as upstream servers to serve static files. I'm hoping someone can point me in the right direction or tell me what I'm doing wrong. So: my.domain.net/u/#########/*.bz2 serves from http://dl.dropbox.com/u/#########/*.bz2 "twice" as much as my.domain.net/u/#########/*.bz2 It seems when the server uses the x.x.x.x:80 it loops and hits the proxy_pass directive over and over resulting in a 500 error. Any help is appreciated! Here is what my configuration looks like: upstream backend { server x.x.x.x:80 weight=1 max_fails=2; server dl.dropbox.com:80 weight=2 max_fails=2; } server { listen 80; server_name my.domain.net www.my.domain.net; root /path/to/my/root/; location /u/#########/ { autoindex on; autoindex_exact_size off; } location ~* ^.+\.(bz2|bsp|nav)$ { proxy_pass http://backend; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238280,238280#msg-238280 From mdounin at mdounin.ru Thu Apr 11 11:00:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Apr 2013 15:00:27 +0400 Subject: auth_basic and file uploads. In-Reply-To: <1365661099.5406.12.camel@steve-new> References: <1365661099.5406.12.camel@steve-new> Message-ID: <20130411110026.GI62550@mdounin.ru> Hello! On Thu, Apr 11, 2013 at 06:18:19PM +1200, Steve Holdoway wrote: > Hi Folks, > > I've got a magento site under development, and just want it to be > password protected until it goes live. No problem I thought... > > add in the auth_basic/auth_basic_user_file entries to the location / > block.
> > However, when I do that, I get a password request for the upload... > > 2013/04/11 05:12:40 [error] 9866#0: *31 no user/password was provided > for basic authentication, client: <my IP>, server: example.com, request: > "POST /index.php/admin/catalog_product_gallery/upload/key/ key> HTTP/1.1", host: "example.com" > > > If I enclose the auth_basic/auth_basic_user_file entries in a > limit_except POST block, then I can't log in, even though it then works > perfectly if I'm already logged in! > > > Any pointers?? If your browser sees a password request only on file uploads, it may not be able to handle the 401 (Unauthorized) response correctly and retry the request with authentication. I would expect this to be very similar to 413 (Request Entity Too Large) handling by browsers, as explicitly mentioned in the docs: http://nginx.org/r/client_max_body_size The obvious solution is to require authentication before the upload. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Apr 11 11:05:25 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Apr 2013 15:05:25 +0400 Subject: Dropbox as upsream server In-Reply-To: <67c81ec7b37938d3d16407724a11a9c1.NginxMailingListEnglish@forum.nginx.org> References: <67c81ec7b37938d3d16407724a11a9c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130411110525.GJ62550@mdounin.ru> Hello! On Thu, Apr 11, 2013 at 04:15:02AM -0400, winniethepooh wrote: > I'm trying to use the Dropbox public folder and my nginx server as a > upstream server to server static files. I'm hoping someone can point me in > the right direction or tell me what I'm doing wrong. > > So: > my.domain.net/u/#########/*.bz2 serves from > http://dl.dropbox.com/u/#########/*.bz2 "twice" as much as > my.domain.net/u/#########/*.bz2 > > It seems when the server uses the x.x.x.x:80 it loops and hits the > proxy_pass directive over and over resulting in a 500 error. > > Any help is appreciated!
> > > Here is what my configuration looks like: > upstream backend { > server x.x.x.x:80 weight=1 max_fails=2; > server dl.dropbox.com:80 weight=2 max_fails=2; > } It's not clear why you added "x.x.x.x" to the upstream block if it's not an upstream but the same server. Obvious solution would be to remove it. If you try to implement "check the local disk, and if the file isn't there - proxy to dropbox" logic, correct solution would be to use something similar to an example provided at http://nginx.org/r/error_page: location / { error_page 404 = @fallback; } location @fallback { proxy_pass http://backend; } -- Maxim Dounin http://nginx.org/en/donation.html From ru at nginx.com Thu Apr 11 12:19:55 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 11 Apr 2013 16:19:55 +0400 Subject: configure --prefix with "~" In-Reply-To: References: Message-ID: <20130411121955.GF75157@lo0.su> On Wed, Apr 10, 2013 at 11:47:24PM -0400, mengqy wrote: > nginx 1.3.15, ubuntu 12.04, bash: > > $ ./configure --prefix=~/tools/webserver/install ... > $ make && make install > > --> a dir named '~' is created within current dir, but `~` should be > interpreted as `$HOME` instead, right? > > Thanks to the protection from gvfs, `rm -rf ~` did not cause a disaster (it > should be `rm -rf \~`) `~' is interpreted by your shell. Normally shell interprets it as "tilde prefix" only at the beginning of a word. From mdounin at mdounin.ru Thu Apr 11 12:27:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Apr 2013 16:27:10 +0400 Subject: Status Code 001 In Logs In-Reply-To: References: Message-ID: <20130411122710.GO62550@mdounin.ru> Hello! On Wed, Apr 10, 2013 at 04:57:33PM -0400, abstein2 wrote: > I can't find documention anywhere on what it means when nginx shows 001 as > the value of $status in the access_log. > > I currently use nginx as a reverse proxy and I get this error when uploading > large files (2+ MB though my client_max_body_size is 4 MB) . 
> > Also worth noting, the follow values per my log files: > > $upstream_status: - > $upstream_response_time: - > $request_completion: > > Right now, the nginx server sits behind a load balancer, so it goes: Request > -> Load Balancer -> Nginx -> Origin. > > Anyone have any ideas what the 001 code is supposed to represent? There is no special "001" value used by nginx. It might be something received from an upstream, but $upstream_status suggests it's not. You may try providing "nginx -V" output and a debug log for a deeper investigation, see http://wiki.nginx.org/Debugging. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Apr 11 15:08:47 2013 From: nginx-forum at nginx.us (winniethepooh) Date: Thu, 11 Apr 2013 11:08:47 -0400 Subject: Dropbox as upsream server In-Reply-To: <20130411110525.GJ62550@mdounin.ru> References: <20130411110525.GJ62550@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Apr 11, 2013 at 04:15:02AM -0400, winniethepooh wrote: > > It's not clear why you added "x.x.x.x" to the upstream block if > it's not an upstream but the same server. Obvious solution would > be to remove it. > > If you try to implement "check the local disk, and if the file > isn't there - proxy to dropbox" logic, correct solution would be > to use something similar to an example provided at > http://nginx.org/r/error_page: > > location / { > error_page 404 = @fallback; > } > > location @fallback { > proxy_pass http://backend; > } > > -- > Maxim Dounin > http://nginx.org/en/donation.html Hey, thanks for your reply! I have the same files stored both at /path/to/my/root/* and at http://dl.dropbox.com/u/#########/*. The same folder structure and everything. What I want is to "load balance" between dropbox and the same nginx site (x.x.x.x) so both places can serve files.
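One sketch of such a setup keeps the local copies on a second listen socket and balances the public port across both origins. Everything below (addresses, ports, paths, and the Host handling) is an assumption to adapt, not a verified configuration:

```nginx
# Local copies are served on 8080; port 80 balances between the
# local server and Dropbox (weights as in the original post).
upstream mirrors {
    server 127.0.0.1:8080 weight=1 max_fails=2;    # this machine's files
    server dl.dropbox.com:80 weight=2 max_fails=2; # Dropbox public folder
}

server {
    listen 8080;                 # internal origin for the local files
    root /path/to/my/root/;
}

server {
    listen 80;
    server_name my.domain.net www.my.domain.net;

    location ~* ^.+\.(bz2|bsp|nav)$ {
        proxy_pass http://mirrors;
        # Dropbox selects content by Host, so don't forward $http_host:
        proxy_set_header Host dl.dropbox.com;
    }
}
```

Because the local origin listens on its own port, a request proxied back to it can no longer re-match the proxying location, which avoids the loop described above.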
Is the only way to do this to split up the proxy as one server block on one port (80 for example) and the actual server block hosting the files in another server block on a different server port (8080 for example)? Or can I modify the originally posted setup to do what I want? Thanks again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238280,238296#msg-238296 From sb at waeme.net Thu Apr 11 15:10:45 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Thu, 11 Apr 2013 19:10:45 +0400 Subject: nginx-1.3.15 In-Reply-To: <51661542.5030409@comcast.net> References: <20130326132948.GO62550@mdounin.ru> <5164BB75.6080105@comcast.net> <42955005-19CF-49EE-984B-99F02C7E94DB@waeme.net> <51661542.5030409@comcast.net> Message-ID: <9891B4CF-4A7C-44C9-B418-7FC962C1F3DB@waeme.net> On 11 Apr2013, at 05:43 , AJ Weber wrote: > yum repolist -v nginx > Loading "fastestmirror" plugin > Config time: 0.019 > Yum Version: 3.2.29 > Loading mirror speeds from cached hostfile > * base: centosp4.centos.org > * extras: centosc5.centos.org > * updates: centosy3.centos.org > Setting up Package Sacks > pkgsack time: 0.015 > Repo-id : nginx > Repo-name : nginx repo > Repo-status : enabled > Repo-revision: 1364910404 > Repo-updated : Tue Apr 2 13:46:45 2013 > Repo-pkgs : 37 > Repo-size : 26 M > Repo-baseurl : http://nginx.org/packages/mainline/centos/6/x86_64/ > Repo-expire : 21600 second(s) (last: Thu Apr 11 01:41:29 2013) > > repolist: 37 Please run: yum clean metadata and try to install package once again. > > > On 4/10/2013 5:29 AM, Sergey Budnevitch wrote: >> On 10 Apr2013, at 05:08 , AJ Weber wrote: >> >>> I followed the instructions for adding the "mainline" repo to my yum config, ran a clean but I still only find 1.2.8 available for install (CentOS6 x64). >>> >>> What might I be doing wrong? 
>> Please show result of >> yum repolist -v nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Apr 11 15:45:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Apr 2013 19:45:11 +0400 Subject: Dropbox as upsream server In-Reply-To: References: <20130411110525.GJ62550@mdounin.ru> Message-ID: <20130411154510.GY62550@mdounin.ru> Hello! On Thu, Apr 11, 2013 at 11:08:47AM -0400, winniethepooh wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Thu, Apr 11, 2013 at 04:15:02AM -0400, winniethepooh wrote: > > > > It's not clear why you added "x.x.x.x" to the upstream block if > > it's not an upstream but the same server. Obvious solution would > > be to remove it. > > > > If you try to implement "check the local disk, and if the file > > isn't there - proxy to dropbox" logic, correct solution would be > > to use something similar to an example provided at > > http://nginx.org/r/error_page: > > > > location / { > > error_page 404 = @fallback; > > } > > > > location @fallback { > > proxy_pass http://backend; > > } > > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > Hey, thanks for your reply! I have the same files stored both at > /path/to/my/root/* and at http://dl.dropbox.com/u/#########/*. The same > folder structure and everything. > > What I want is to "load balance" between dropbox and the same nginx site > (x.x.x.x) so both places can serve files. Ok, so you are trying to save disk bandwidth at the cost of network bandwidth.
> Is the only way to do this to split up the proxy as one server block on one > port (80 for example) and the actual server block hosting the files in > another server block on a different server port (8080 for example)? > > Or can I modify the originally posted setup to do what I want? Using distinct server blocks is the simplest option. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Apr 11 17:11:52 2013 From: nginx-forum at nginx.us (jayesh_kapoor) Date: Thu, 11 Apr 2013 13:11:52 -0400 Subject: Is there any harm if the map directive in nginx is repeated for the same variable name? Message-ID: <441ec48fda8d107b4979e4644ba4eeab.NginxMailingListEnglish@forum.nginx.org> When I try it out, the definition that comes later seems to take effect. The question is, are there unintended consequences of doing this? http { map $http_host $a { hostnames; default 1; example.com 1; *.example.com 2; } map $http_host $a { hostnames; default 3; example.com 3; *.example.com 4; } server { server_name example.com *.example.com; location / { echo $a; } } } Now with this configuration if I try: curl http://example.com 3 curl http://www.example.com 4 Background: We are using this to provide an override for a map in an optional include file. So right after the map is defined, we have an include directive for *_override_map.conf. If this file exists and provides an alternate definition for the same map, then that's what gets used instead of the original map definition. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238301,238301#msg-238301 From semackenzie at gmail.com Fri Apr 12 10:54:26 2013 From: semackenzie at gmail.com (Scott E.
MacKenzie) Date: Fri, 12 Apr 2013 18:54:26 +0800 Subject: large_client_header_buffers directive is accepted, but ignored Message-ID: large_client_header_buffers directive is accepted, but ignored nginx version: nginx/1.0.14 We are trying to pass a file through nginx but it dies at the same line level no matter what changes we make to nginx.conf or if we adjust the server block in the vhost as indicated in http://nginx.2469901.n2.nabble.com/large-client-header-buffers-directive-is-accepted-in-server-but-ignored-td5435750.html large_client_header_buffers 4 32k; We use a dual host model where Server A is a proxy_pass for Server B which runs our PHP code. Both servers use the same nginx.conf and large_client_header_buffers 4 32k; option. 414 error no matter what changes are made. Any ideas? Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Apr 12 12:55:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Apr 2013 16:55:32 +0400 Subject: large_client_header_buffers directive is accepted, but ignored In-Reply-To: References: Message-ID: <20130412125532.GH62550@mdounin.ru> Hello! On Fri, Apr 12, 2013 at 06:54:26PM +0800, Scott E. MacKenzie wrote: > large_client_header_buffers directive is accepted, but ignored > nginx version: nginx/1.0.14 > > We are trying to pass a file through nginx but it dies at the same line > level no matter what changes we make to nginx.conf or if we adjust the > server block in the vhost as indicated in > http://nginx.2469901.n2.nabble.com/large-client-header-buffers-directive-is-accepted-in-server-but-ignored-td5435750.html > > large_client_header_buffers 4 32k; > > We use a dual host model where Server A is a proxy_pass for Server B which > runs our PHP code. Both servers use the same nginx.conf and > large_client_header_buffers 4 32k; option. > > 414 error no matter what changes are made. Any ideas? 
Make sure to adjust the directive in a default server as well if there is more than one virtual host configured for the listen socket in question. -- Maxim Dounin http://nginx.org/en/donation.html From semackenzie at gmail.com Fri Apr 12 13:57:38 2013 From: semackenzie at gmail.com (Scott) Date: Fri, 12 Apr 2013 06:57:38 -0700 (PDT) Subject: large_client_header_buffers directive is accepted, but ignored In-Reply-To: <20130412125532.GH62550@mdounin.ru> References: <20130412125532.GH62550@mdounin.ru> Message-ID: <1365775058707-7584739.post@n2.nabble.com> Thanks for the reply Maxim. We are performing a rather large data update via our api using GET and have hit this error. The changes to both nginx.conf and to the vhosts make no difference and the process dies with 414 at the same line of the update. After some more research we came across a post on browser URL limits. Do you think that we are hitting the maximum limit for GET and that is the reason why the adjustments are not making any difference? *Reference link:* http://stackoverflow.com/questions/417142/what-is-the-maximum-length-of-a-url-in-different-browsers Thoughts? -- View this message in context: http://nginx.2469901.n2.nabble.com/large-client-header-buffers-directive-is-accepted-but-ignored-tp7584737p7584739.html Sent from the nginx mailing list archive at Nabble.com. From mdounin at mdounin.ru Fri Apr 12 14:24:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Apr 2013 18:24:20 +0400 Subject: large_client_header_buffers directive is accepted, but ignored In-Reply-To: <1365775058707-7584739.post@n2.nabble.com> References: <20130412125532.GH62550@mdounin.ru> <1365775058707-7584739.post@n2.nabble.com> Message-ID: <20130412142420.GJ62550@mdounin.ru> Hello! On Fri, Apr 12, 2013 at 06:57:38AM -0700, Scott wrote: > Thanks for the reply Maxim. > > We are performing a rather large data update via our api using GET and have > hit this error.
The changes to both nginx.conf and to the vhosts make no > difference and the process dies with 414 at the same line of the update. > > After some more research we came across a post on browser URL limits. Do you > think that we are hitting the maximum limit for GET and that is the reason > why the adjustments are not making any difference? > > *Reference link:* > http://stackoverflow.com/questions/417142/what-is-the-maximum-length-of-a-url-in-different-browsers > > Thoughts? Looking into nginx logs will allow you to know exactly. -- Maxim Dounin http://nginx.org/en/donation.html From semackenzie at gmail.com Fri Apr 12 14:34:07 2013 From: semackenzie at gmail.com (Scott E. MacKenzie) Date: Fri, 12 Apr 2013 22:34:07 +0800 Subject: large_client_header_buffers directive is accepted, but ignored In-Reply-To: <20130412142420.GJ62550@mdounin.ru> References: <20130412125532.GH62550@mdounin.ru> <1365775058707-7584739.post@n2.nabble.com> <20130412142420.GJ62550@mdounin.ru> Message-ID: Hi Maxim, The logs (even with debug enabled) do not produce any evidence to isolate this to the browser hitting a maximum count.
However, on the other hand > there is no evidence of large_client_header_buffers being reached either. > Any other thought on isolating the issue beyond changing the GET to POST? First of all you should check if request actually hits nginx, it should be as easy as looking into access log for 414. -- Maxim Dounin http://nginx.org/en/donation.html From semackenzie at gmail.com Fri Apr 12 15:15:13 2013 From: semackenzie at gmail.com (Scott E. MacKenzie) Date: Fri, 12 Apr 2013 23:15:13 +0800 Subject: large_client_header_buffers directive is accepted, but ignored In-Reply-To: <20130412150537.GO62550@mdounin.ru> References: <20130412125532.GH62550@mdounin.ru> <1365775058707-7584739.post@n2.nabble.com> <20130412142420.GJ62550@mdounin.ru> <20130412150537.GO62550@mdounin.ru> Message-ID: Hi Maxim, Yes, did that before posting to the list and no error 414 is logged. I have not come across this before and hence the question to the list. Based on the lack of anything logged it does look to be a hard limit being reached in the browser, but again, I thought that I would post to the list to ensure that we have not overlooked anything. Thanks for the feedback. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Apr 12 15:45:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Apr 2013 19:45:52 +0400 Subject: large_client_header_buffers directive is accepted, but ignored In-Reply-To: References: <20130412125532.GH62550@mdounin.ru> <1365775058707-7584739.post@n2.nabble.com> <20130412142420.GJ62550@mdounin.ru> <20130412150537.GO62550@mdounin.ru> Message-ID: <20130412154551.GP62550@mdounin.ru> Hello! On Fri, Apr 12, 2013 at 11:15:13PM +0800, Scott E. MacKenzie wrote: > Hi Maxim, > > Yes, did that before posting to the list and no error 414 is logged. I > have not come across this before and hence the question to the list. 
Based > on the lack of anything logged it does look to be a hard limit being > reached in the browser, but again, I thought that I would post to the list > to ensure that we have not overlooked anything. If you are unsure, you can always look at what happens on the wire with tcpdump/wireshark/whatever. -- Maxim Dounin http://nginx.org/en/donation.html From semackenzie at gmail.com Fri Apr 12 16:22:41 2013 From: semackenzie at gmail.com (Scott E. MacKenzie) Date: Sat, 13 Apr 2013 00:22:41 +0800 Subject: large_client_header_buffers directive is accepted, but ignored In-Reply-To: <20130412154551.GP62550@mdounin.ru> References: <20130412125532.GH62550@mdounin.ru> <1365775058707-7584739.post@n2.nabble.com> <20130412142420.GJ62550@mdounin.ru> <20130412150537.GO62550@mdounin.ru> <20130412154551.GP62550@mdounin.ru> Message-ID: Yes, thanks. Packet level debugging would be the next step. Just thought someone on the list may have run into this hard limit before. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 12 16:23:53 2013 From: nginx-forum at nginx.us (cruise) Date: Fri, 12 Apr 2013 12:23:53 -0400 Subject: nginx reverse proxy 502 bad gateway error Message-ID: <5177d7294758f5ddad66eda8fca92326.NginxMailingListEnglish@forum.nginx.org> Hello, I am getting 502 bad gateway error while trying to setup nginx server as reverse proxy server with caching enabled. Both servers are on seperate machines, another server having apache web server. Given below are my config files, please help me. nginx.conf --------------------------------------------------------------------------------------------------------------------------------------- user www www; worker_processes 1; error_log /home/wwwlogs/nginx_error.log crit; pid /usr/local/nginx/logs/nginx.pid; #Specifies the value for maximum file descriptors that can be opened by this process. 
worker_rlimit_nofile 51200; events { use epoll; worker_connections 51200; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 128; client_header_buffer_size 32k; large_client_header_buffers 4 32k; client_max_body_size 50m; sendfile on; tcp_nopush on; keepalive_timeout 60; tcp_nodelay on; fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; fastcgi_read_timeout 300; fastcgi_buffer_size 32k; fastcgi_buffers 8 16k; fastcgi_busy_buffers_size 64k; fastcgi_temp_file_write_size 256k; gzip on; gzip_min_length 1k; gzip_buffers 4 16k; gzip_http_version 1.0; gzip_comp_level 2; gzip_types text/plain application/x-javascript text/css application/xml; gzip_vary on; #limit_zone crawler $binary_remote_addr 10m; #log format log_format access '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" $http_x_forwarded_for'; include vhost/*.conf; } --------------------------------------------------------------------------------------------------------------------------------------- www.example.com.conf --------------------------------------------------------------------------------------------------------------------------------------- proxy_cache_path /usr/local/nginx/proxy levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m; server { listen 80; server_name www.example.com; access_log off; error_log off; location / { proxy_pass http://1.2.3.4:80; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_cache one; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } server { listen 80; server_name example.com; rewrite ^/(.*) 
http://www.example.com/$1 permanent; } --------------------------------------------------------------------------------------------------------------------------------------- Please note www.example.com is replaced with actual domain, and ip 1.2.3.4 is replaced with actual ip address in the original configuration file. Additional Information : I have done the lnmp installation using the script available here : http://www.ruchirablog.com/lnmp-v08-complete-nginx-auto-installer/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238322,238322#msg-238322 From mdounin at mdounin.ru Fri Apr 12 16:34:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Apr 2013 20:34:49 +0400 Subject: nginx reverse proxy 502 bad gateway error In-Reply-To: <5177d7294758f5ddad66eda8fca92326.NginxMailingListEnglish@forum.nginx.org> References: <5177d7294758f5ddad66eda8fca92326.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130412163448.GQ62550@mdounin.ru> Hello! On Fri, Apr 12, 2013 at 12:23:53PM -0400, cruise wrote: > Hello, > > I am getting 502 bad gateway error while trying to setup nginx server as > reverse proxy server with caching enabled. Both servers are on seperate > machines, another server having apache web server. > > Given below are my config files, please help me. > > nginx.conf > --------------------------------------------------------------------------------------------------------------------------------------- > user www www; > > worker_processes 1; > > error_log /home/wwwlogs/nginx_error.log crit; First of all, try the following: 1) Configure you error log at some sensible level, e.g. "notice". 2) Reproduce the issue and look into error log. If after the above steps you still won't be able to resolve the issue yourself, try asking here again with error log information included. 
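In terms of the nginx.conf posted above, the first of those steps amounts to a one-line change (log path taken from the posted config; "notice" is one reasonable level):

```nginx
# was: error_log /home/wwwlogs/nginx_error.log crit;
# "crit" suppresses the [error]-level records (e.g. upstream connect or
# read failures) that usually explain a 502; "notice" makes them visible
error_log /home/wwwlogs/nginx_error.log notice;
```

After reloading nginx and reproducing the 502, the relevant entries should appear tagged [error] and name the upstream address (1.2.3.4:80 in the sanitized config).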
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri Apr 12 17:12:26 2013 From: nginx-forum at nginx.us (cruise) Date: Fri, 12 Apr 2013 13:12:26 -0400 Subject: nginx reverse proxy 502 bad gateway error In-Reply-To: <20130412163448.GQ62550@mdounin.ru> References: <20130412163448.GQ62550@mdounin.ru> Message-ID: <1a49240eaff9bfe3f4d44ac9596f3ce5.NginxMailingListEnglish@forum.nginx.org> Log file details. 2013/04/13 00:03:29 [alert] 10931#0: open socket #13 left in connection 3 2013/04/13 00:03:29 [alert] 10931#0: open socket #22 left in connection 14 2013/04/13 00:03:29 [alert] 10931#0: open socket #23 left in connection 15 2013/04/13 00:03:29 [alert] 10931#0: open socket #27 left in connection 19 2013/04/13 00:03:29 [alert] 10931#0: aborting 2013/04/13 01:07:53 [notice] 11471#0: using the "epoll" event method 2013/04/13 01:07:53 [notice] 11471#0: start worker processes 2013/04/13 01:07:53 [notice] 11471#0: start worker process 12080 2013/04/13 01:07:53 [notice] 11471#0: start cache manager process 12081 2013/04/13 01:07:53 [notice] 11471#0: signal 17 (SIGCHLD) received 2013/04/13 01:07:53 [notice] 11471#0: cache manager process 11474 exited with co de 0 2013/04/13 01:07:53 [notice] 11471#0: signal 29 (SIGIO) received 2013/04/13 01:07:56 [notice] 11471#0: signal 17 (SIGCHLD) received 2013/04/13 01:07:56 [notice] 11471#0: worker process 11473 exited with code 0 2013/04/13 01:07:56 [notice] 11471#0: signal 29 (SIGIO) received 2013/04/13 01:08:01 [notice] 11471#0: signal 15 (SIGTERM) received, exiting 2013/04/13 01:08:01 [notice] 12081#0: exiting 2013/04/13 01:08:01 [notice] 12080#0: exiting 2013/04/13 01:08:01 [notice] 12080#0: exit 2013/04/13 01:08:01 [notice] 11471#0: signal 17 (SIGCHLD) received 2013/04/13 01:08:01 [notice] 11471#0: cache manager process 12081 exited with co de 0 2013/04/13 01:08:01 [notice] 11471#0: signal 29 (SIGIO) received 2013/04/13 01:08:01 [notice] 11471#0: signal 17 (SIGCHLD) received 2013/04/13 01:08:01 [notice] 
11471#0: worker process 12080 exited with code 0 2013/04/13 01:08:01 [notice] 11471#0: exit 2013/04/13 01:08:06 [notice] 12128#0: using the "epoll" event method 2013/04/13 01:08:06 [notice] 12128#0: nginx/1.0.15 2013/04/13 01:08:06 [notice] 12128#0: built by gcc 4.4.6 20120305 (Red Hat 4.4.6 -4) (GCC) 2013/04/13 01:08:06 [notice] 12128#0: OS: Linux 2.6.32-042stab055.10 2013/04/13 01:08:06 [notice] 12128#0: getrlimit(RLIMIT_NOFILE): 65535:65535 2013/04/13 01:08:06 [notice] 12129#0: start worker processes 2013/04/13 01:08:06 [notice] 12129#0: start worker process 12131 2013/04/13 01:08:06 [notice] 12129#0: start cache manager process 12132 2013/04/13 01:08:06 [notice] 12129#0: start cache loader process 12133 2013/04/13 01:09:06 [notice] 12133#0: http file cache: /usr/local/nginx/proxy 0. 000M, bsize: 4096 2013/04/13 01:09:06 [notice] 12129#0: signal 17 (SIGCHLD) received 2013/04/13 01:09:06 [notice] 12129#0: cache loader process 12133 exited with cod e 0 2013/04/13 01:09:06 [notice] 12129#0: signal 29 (SIGIO) received Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238322,238325#msg-238325 From nginx-forum at nginx.us Fri Apr 12 17:22:14 2013 From: nginx-forum at nginx.us (Shohreh) Date: Fri, 12 Apr 2013 13:22:14 -0400 Subject: Cross-compiling Nginx for ARM? Message-ID: <0abe1535de1799656fd915af452cea73.NginxMailingListEnglish@forum.nginx.org> Hello, I'd like to run Nginx on the SheevaPlug (http://en.wikipedia.org/wiki/SheevaPlug), which is based on Marvell's Kirkwood 88F6281 (ARM9E/ARMv5TE). I have a couple of newbie questions: 1. The archives of the mailing list (http://forum.nginx.org/read.php?2,227934,228360#msg-228360) include a reference to those ports based on older versions: http://packages.debian.org/search?arch=arm&keywords=nginx Based on 1.2.1-2.2 http://nginx.org/packages/debian/pool/nginx/n/nginx/ Based on 1.2.8-1 Currently (www.nginx.org/en/CHANGES), the latest Nginx source is 1.3.15. Why are those ports based on older code? 
Because it's not just a matter of recompiling, but it also requires making changes to the source code so Nginx can run on ARM? 2. I tried cross-compiling Nginx on a PC running Ubuntu with Marvell's cross-compiler (www.plugcomputer.org/downloads/plug-basic/), but it fails right from the start: ==================== ~/nginx-1.2.6# ./configure --with-cc=/root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc checking for OS + Linux 3.5.0-17-generic i686 checking for C compiler ... not found ./configure: error: C compiler /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc is not found ~/nginx-1.2.6# ll /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc -rwxr-xr-x 1 fred fred 180000 Feb 26 2008 /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc* ==================== How should I set things up so that I can try cross-compiling Nginx? Thanks for any help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238312,238312#msg-238312 From mdounin at mdounin.ru Fri Apr 12 17:45:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 Apr 2013 21:45:59 +0400 Subject: Cross-compiling Nginx for ARM? In-Reply-To: <0abe1535de1799656fd915af452cea73.NginxMailingListEnglish@forum.nginx.org> References: <0abe1535de1799656fd915af452cea73.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130412174559.GR62550@mdounin.ru> Hello! On Fri, Apr 12, 2013 at 01:22:14PM -0400, Shohreh wrote: > Hello, > > I'd like to run Nginx on the SheevaPlug > (http://en.wikipedia.org/wiki/SheevaPlug), which is based on Marvell's > Kirkwood 88F6281 (ARM9E/ARMv5TE). > > I have a couple of newbie questions: > > 1. 
The archives of the mailing list > (http://forum.nginx.org/read.php?2,227934,228360#msg-228360) include a > reference to those ports based on older versions: > > http://packages.debian.org/search?arch=arm&keywords=nginx > Based on 1.2.1-2.2 > > http://nginx.org/packages/debian/pool/nginx/n/nginx/ > Based on 1.2.8-1 > > Currently (www.nginx.org/en/CHANGES), the latest Nginx source is 1.3.15. Why > are those ports based on older code? Because it's not just a matter of > recompiling, but it also requires making changes to the source code so Nginx > can run on ARM? Latest stable is 1.2.8, and that's what you see in the official repository. Latest mainline is 1.3.15, it is available in a separate repository, see here: http://nginx.org/en/linux_packages.html#mainline > 2. I tried cross-compiling Nginx on a PC running Ubuntu with Marvell's > cross-compiler (www.plugcomputer.org/downloads/plug-basic/), but it fails > right from the start: > > ==================== > ~/nginx-1.2.6# ./configure > --with-cc=/root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc > > checking for OS > + Linux 3.5.0-17-generic i686 > checking for C compiler ... not found > > ./configure: error: C compiler > /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc is not found > > ~/nginx-1.2.6# ll /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc > -rwxr-xr-x 1 fred fred 180000 Feb 26 2008 > /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc* > ==================== > > How should I set things up so that I can try cross-compiling Nginx? We don't really support cross-compilation in our configure/build system, but there were reports of success with relatively few tweaks to configure. You may try to look into ./objs/autoconf.err for more information on what fails during configure. -- Maxim Dounin http://nginx.org/en/donation.html From djczaski at gmail.com Fri Apr 12 18:50:48 2013 From: djczaski at gmail.com (djczaski at gmail.com) Date: Fri, 12 Apr 2013 14:50:48 -0400 Subject: Cross-compiling Nginx for ARM? 
In-Reply-To: <20130412174559.GR62550@mdounin.ru> References: <0abe1535de1799656fd915af452cea73.NginxMailingListEnglish@forum.nginx.org> <20130412174559.GR62550@mdounin.ru> Message-ID: <967BDB2F-7D4C-4B7C-A1A2-C5DE31BBED34@gmail.com> I've compiled for ARM A8. The biggest issue was configuring to use the system OpenSSL and pcre libraries. Other than that, no problems. On Apr 12, 2013, at 1:45 PM, Maxim Dounin wrote: > Hello! > > On Fri, Apr 12, 2013 at 01:22:14PM -0400, Shohreh wrote: > >> Hello, >> >> I'd like to run Nginx on the SheevaPlug >> (http://en.wikipedia.org/wiki/SheevaPlug), which is based on Marvell's >> Kirkwood 88F6281 (ARM9E/ARMv5TE). >> >> I have a couple of newbie questions: >> >> 1. The archives of the mailing list >> (http://forum.nginx.org/read.php?2,227934,228360#msg-228360) include a >> reference to those ports based on older versions: >> >> http://packages.debian.org/search?arch=arm&keywords=nginx >> Based on 1.2.1-2.2 >> >> http://nginx.org/packages/debian/pool/nginx/n/nginx/ >> Based on 1.2.8-1 >> >> Currently (www.nginx.org/en/CHANGES), the latest Nginx source is 1.3.15. Why >> are those ports based on older code? Because it's not just a matter of >> recompiling, but it also requires making changes to the source code so Nginx >> can run on ARM? > > Latest stable is 1.2.8, and that's what you see in the official > repository. Latest mainline is 1.3.15, it is available in a > separate repository, see here: > > http://nginx.org/en/linux_packages.html#mainline > >> 2. I tried cross-compiling Nginx on a PC running Ubuntu with Marvell's >> cross-compiler (www.plugcomputer.org/downloads/plug-basic/), but it fails >> right from the start: >> >> ==================== >> ~/nginx-1.2.6# ./configure >> --with-cc=/root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc >> >> checking for OS >> + Linux 3.5.0-17-generic i686 >> checking for C compiler ...
not found >> >> ./configure: error: C compiler >> /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc is not found >> >> ~/nginx-1.2.6# ll /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc >> -rwxr-xr-x 1 fred fred 180000 Feb 26 2008 >> /root/LinuxHost/gcc/arm-none-linux-gnueabi/bin/gcc* >> ==================== >> >> How should I set things up so that I can try cross-compiling Nginx? > > We don't really support cross-compilation in our configure/build > system, but there were reports of success with relatively few > tweaks to configure. You may try to look into ./objs/autoconf.err > for more information on what fails during configure. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Apr 12 19:11:19 2013 From: nginx-forum at nginx.us (cruise) Date: Fri, 12 Apr 2013 15:11:19 -0400 Subject: nginx reverse proxy 502 bad gateway error In-Reply-To: <1a49240eaff9bfe3f4d44ac9596f3ce5.NginxMailingListEnglish@forum.nginx.org> References: <20130412163448.GQ62550@mdounin.ru> <1a49240eaff9bfe3f4d44ac9596f3ce5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <06025d8f36731f837bd351945aed3dbc.NginxMailingListEnglish@forum.nginx.org> 2013/04/13 01:09:06 [notice] 12129#0: signal 29 (SIGIO) received 2013/04/13 01:18:53 [error] 12131#0: *19 open() "/home/wwwroot/tag/psd-to-html5-volusion-themes" failed (2: No such file or directory), client: 66.249.73.10, server: www.surilasafar.com, request: "GET /tag/psd-to-html5-volusion-themes HTTP/1.1", host: "psd-html5.devangsolanki.com" 2013/04/13 01:20:12 [error] 12131#0: *26 open() "/home/wwwroot/favicon.ico" failed (2: No such file or directory), client: 66.249.84.70, server: www.surilasafar.com, request: "GET /favicon.ico HTTP/1.1", host: "prestashop-development.devangsolanki.com" 2013/04/13 01:24:46 [error] 12131#0: *42 open() "/home/wwwroot/2012/12" 
failed (2: No such file or directory), client: 66.249.73.106, server: www.surilasafar.com, request: "GET /2012/12 HTTP/1.1", host: "volusion.devangsolanki.com" 2013/04/13 01:26:29 [error] 12131#0: *47 "/home/wwwroot/tag/html5-website-templates/feed/index.html" is not found (2: No such file or directory), client: 66.249.75.36, server: www.surilasafar.com, request: "GET /tag/html5-website-templates/feed/ HTTP/1.1", host: "html5.devangsolanki.com" 2013/04/13 01:26:38 [error] 12131#0: *48 open() "/home/wwwroot/tag/prestashop-free-modules/feed" failed (2: No such file or directory), client: 66.249.74.70, server: www.surilasafar.com, request: "GET /tag/prestashop-free-modules/feed HTTP/1.1", host: "prestashop-development.devangsolanki.com" 2013/04/13 01:36:29 [error] 12131#0: *59 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 60.36.84.1, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.0", host: "psd-html5.devangsolanki.com" 2013/04/13 01:40:49 [error] 12131#0: *76 open() "/home/wwwroot/tag/volusion-template-development-2" failed (2: No such file or directory), client: 66.249.73.106, server: www.surilasafar.com, request: "GET /tag/volusion-template-development-2 HTTP/1.1", host: "volusion.devangsolanki.com" 2013/04/13 01:49:39 [error] 12131#0: *82 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 66.249.75.207, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "design.devangsolanki.com" 2013/04/13 01:49:39 [error] 12131#0: *82 "/home/wwwroot/2013/02/22/hello-world/index.html" is not found (2: No such file or directory), client: 66.249.75.207, server: www.surilasafar.com, request: "GET /2013/02/22/hello-world/ HTTP/1.1", host: "design.devangsolanki.com" 2013/04/13 01:52:06 [error] 12131#0: *83 open() "/home/wwwroot/tag/html5-website-developer/feed" failed (2: No such file or directory), client: 141.8.147.4, server: www.surilasafar.com, request: "GET 
/tag/html5-website-developer/feed HTTP/1.1", host: "hire-html5-developer.devangsolanki.com" 2013/04/13 01:53:08 [error] 12131#0: *87 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 65.55.213.243, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "volusion.devangsolanki.com" 2013/04/13 01:53:08 [error] 12131#0: *88 open() "/home/wwwroot/volusion-sitemap.xml" failed (2: No such file or directory), client: 65.55.213.243, server: www.surilasafar.com, request: "GET /volusion-sitemap.xml HTTP/1.1", host: "volusion.devangsolanki.com" 2013/04/13 01:55:43 [error] 12131#0: *95 open() "/home/wwwroot/tag/prestashop-mobile-theme/feed" failed (2: No such file or directory), client: 66.249.74.70, server: www.surilasafar.com, request: "GET /tag/prestashop-mobile-theme/feed HTTP/1.1", host: "prestashop-development.devangsolanki.com" 2013/04/13 01:58:13 [error] 12131#0: *96 "/home/wwwroot/tag/html5-mobile-development/feed/index.html" is not found (2: No such file or directory), client: 66.249.75.36, server: www.surilasafar.com, request: "GET /tag/html5-mobile-development/feed/ HTTP/1.1", host: "html5.devangsolanki.com" 2013/04/13 02:06:30 [error] 12131#0: *105 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 65.55.215.46, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "portfolio.devangsolanki.com" 2013/04/13 02:06:30 [error] 12131#0: *106 open() "/home/wwwroot/wp-content/plugins/wp-postratings/images/stars/rating_on.gif" failed (2: No such file or directory), client: 65.55.215.46, server: www.surilasafar.com, request: "GET /wp-content/plugins/wp-postratings/images/stars/rating_on.gif HTTP/1.1", host: "portfolio.devangsolanki.com" 2013/04/13 02:09:27 [error] 12131#0: *119 open() "/home/wwwroot/classified-listings.html" failed (2: No such file or directory), client: 59.58.157.243, server: www.surilasafar.com, request: "GET /classified-listings.html HTTP/1.1", host: 
"classifieds.devangsolanki.com" 2013/04/13 02:09:27 [error] 12131#0: *119 "/home/wwwroot/classified-listings.html/trackback/index.html" is not found (2: No such file or directory), client: 59.58.157.243, server: www.surilasafar.com, request: "POST /classified-listings.html/trackback/ HTTP/1.1", host: "classifieds.devangsolanki.com", referrer: "http://classifieds.devangsolanki.com/classified-listings.html" 2013/04/13 02:12:14 [error] 12131#0: *122 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 66.249.73.161, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "directory.devangsolanki.com" 2013/04/13 02:20:06 [error] 12131#0: *155 "/home/wwwroot/tag/html5-templates-for-free/feed/index.html" is not found (2: No such file or directory), client: 66.249.75.36, server: www.surilasafar.com, request: "GET /tag/html5-templates-for-free/feed/ HTTP/1.1", host: "html5.devangsolanki.com" 2013/04/13 02:21:59 [error] 12131#0: *156 open() "/home/wwwroot/html5-psd-xhtml-html-css3-services.html" failed (2: No such file or directory), client: 46.105.52.235, server: www.surilasafar.com, request: "GET /html5-psd-xhtml-html-css3-services.html HTTP/1.0", host: "psd-html5.devangsolanki.com", referrer: "http://psd-html5.devangsolanki.com/html5-psd-xhtml-html-css3-services.html" 2013/04/13 02:23:47 [error] 12131#0: *173 "/home/wwwroot/2013/02/22/hello-world/index.html" is not found (2: No such file or directory), client: 66.249.75.44, server: www.surilasafar.com, request: "GET /2013/02/22/hello-world/ HTTP/1.1", host: "dotnetnuke.devangsolanki.com" 2013/04/13 02:26:48 [error] 12131#0: *181 open() "/home/wwwroot/tag/prestashop-search-module/feed" failed (2: No such file or directory), client: 66.249.74.70, server: www.surilasafar.com, request: "GET /tag/prestashop-search-module/feed HTTP/1.1", host: "prestashop-development.devangsolanki.com" 2013/04/13 02:31:55 [error] 12131#0: *187 open() "/home/wwwroot/robots.txt" failed (2: No such file 
or directory), client: 199.30.16.32, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "ns1.devangsolanki.com" 2013/04/13 02:31:55 [error] 12131#0: *188 open() "/home/wwwroot/wp-content/uploads/sites/11/2013/02/europe-travel-consultants-dnn-design-development-germany-320x200.jpg" failed (2: No such file or directory), client: 199.30.16.32, server: www.surilasafar.com, request: "GET /wp-content/uploads/sites/11/2013/02/europe-travel-consultants-dnn-design-development-germany-320x200.jpg HTTP/1.1", host: "ns1.devangsolanki.com" 2013/04/13 02:34:01 [error] 12131#0: *201 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 66.249.73.234, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "hire-prestashop-developer.devangsolanki.com" 2013/04/13 02:34:01 [error] 12131#0: *201 open() "/home/wwwroot/author/devangsolanki" failed (2: No such file or directory), client: 66.249.73.234, server: www.surilasafar.com, request: "GET /author/devangsolanki HTTP/1.1", host: "hire-prestashop-developer.devangsolanki.com" 2013/04/13 02:35:30 [error] 12131#0: *206 open() "/home/wwwroot/volusion-search-engine-optimization-seo-experts.html+"Notify+me+of+new+posts+by+email"+generate+online+form+html&ct=clnk" failed (2: No such file or directory), client: 69.175.78.132, server: www.surilasafar.com, request: "GET /volusion-search-engine-optimization-seo-experts.html+"Notify+me+of+new+posts+by+email"+generate+online+form+html&ct=clnk HTTP/1.0", host: "volusion.devangsolanki.com", referrer: "http://volusion.devangsolanki.com/" 2013/04/13 02:40:18 [error] 12131#0: *218 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 66.249.75.195, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "ns2.devangsolanki.com" 2013/04/13 02:40:18 [error] 12131#0: *218 open() "/home/wwwroot/wp-content/themes/akita/framework/frontend/assets/images/shortcodes/icons/clock.png" failed (2: No such file 
or directory), client: 66.249.75.195, server: www.surilasafar.com, request: "GET /wp-content/themes/akita/framework/frontend/assets/images/shortcodes/icons/clock.png HTTP/1.1", host: "ns2.devangsolanki.com" 2013/04/13 02:42:47 [error] 12131#0: *237 open() "/home/wwwroot/wp-content/plugins/wordpress-23-related-posts-plugin/static/thumbs/20.jpg" failed (2: No such file or directory), client: 66.249.75.195, server: www.surilasafar.com, request: "GET /wp-content/plugins/wordpress-23-related-posts-plugin/static/thumbs/20.jpg HTTP/1.1", host: "ns2.devangsolanki.com" 2013/04/13 02:43:47 [error] 12131#0: *244 open() "/home/wwwroot/dotnetnuke-html5-responsive-skins-design-development.html" failed (2: No such file or directory), client: 70.194.131.69, server: www.surilasafar.com, request: "GET /dotnetnuke-html5-responsive-skins-design-development.html HTTP/1.1", host: "portfolio.devangsolanki.com" 2013/04/13 02:48:59 [error] 12131#0: *254 "/home/wwwroot/tag/html5-responsive-template/feed/index.html" is not found (2: No such file or directory), client: 66.249.75.36, server: www.surilasafar.com, request: "GET /tag/html5-responsive-template/feed/ HTTP/1.1", host: "html5.devangsolanki.com" 2013/04/13 02:49:10 [notice] 12129#0: signal 1 (SIGHUP) received, reconfiguring 2013/04/13 02:49:10 [notice] 12129#0: reconfiguring 2013/04/13 02:49:10 [notice] 12129#0: using the "epoll" event method 2013/04/13 02:49:10 [notice] 12129#0: start worker processes 2013/04/13 02:49:10 [notice] 12129#0: start worker process 12788 2013/04/13 02:49:10 [notice] 12129#0: start cache manager process 12789 2013/04/13 02:49:10 [notice] 12131#0: gracefully shutting down 2013/04/13 02:49:10 [notice] 12131#0: exiting 2013/04/13 02:49:10 [notice] 12131#0: exit 2013/04/13 02:49:10 [notice] 12132#0: exiting 2013/04/13 02:49:10 [notice] 12129#0: signal 17 (SIGCHLD) received 2013/04/13 02:49:10 [notice] 12129#0: cache manager process 12132 exited with code 0 2013/04/13 02:49:10 [notice] 12129#0: signal 29 (SIGIO) 
received 2013/04/13 02:49:10 [notice] 12129#0: signal 17 (SIGCHLD) received 2013/04/13 02:49:10 [notice] 12129#0: worker process 12131 exited with code 0 2013/04/13 02:49:10 [notice] 12129#0: signal 29 (SIGIO) received 2013/04/13 02:49:17 [notice] 12129#0: signal 15 (SIGTERM) received, exiting 2013/04/13 02:49:17 [notice] 12789#0: exiting 2013/04/13 02:49:17 [notice] 12788#0: exiting 2013/04/13 02:49:17 [notice] 12788#0: exit 2013/04/13 02:49:17 [notice] 12129#0: signal 17 (SIGCHLD) received 2013/04/13 02:49:17 [notice] 12129#0: worker process 12788 exited with code 0 2013/04/13 02:49:17 [notice] 12129#0: cache manager process 12789 exited with code 0 2013/04/13 02:49:17 [notice] 12129#0: signal 14 (SIGALRM) received 2013/04/13 02:49:17 [notice] 12129#0: exit 2013/04/13 02:49:21 [notice] 12832#0: using the "epoll" event method 2013/04/13 02:49:21 [notice] 12832#0: nginx/1.0.15 2013/04/13 02:49:21 [notice] 12832#0: built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) 2013/04/13 02:49:21 [notice] 12832#0: OS: Linux 2.6.32-042stab055.10 2013/04/13 02:49:21 [notice] 12832#0: getrlimit(RLIMIT_NOFILE): 65535:65535 2013/04/13 02:49:21 [notice] 12833#0: start worker processes 2013/04/13 02:49:21 [notice] 12833#0: start worker process 12835 2013/04/13 02:49:21 [notice] 12833#0: start cache manager process 12836 2013/04/13 02:49:21 [notice] 12833#0: start cache loader process 12837 2013/04/13 02:50:21 [notice] 12837#0: http file cache: /usr/local/nginx/proxy 0.000M, bsize: 4096 2013/04/13 02:50:21 [notice] 12833#0: signal 17 (SIGCHLD) received 2013/04/13 02:50:21 [notice] 12833#0: cache loader process 12837 exited with code 0 2013/04/13 02:50:21 [notice] 12833#0: signal 29 (SIGIO) received 2013/04/13 02:50:39 [error] 12835#0: *31 open() "/home/wwwroot/tag/website-design-companies/feed" failed (2: No such file or directory), client: 66.249.74.70, server: www.surilasafar.com, request: "GET /tag/website-design-companies/feed HTTP/1.1", host: 
"prestashop-development.devangsolanki.com" 2013/04/13 02:53:39 [error] 12835#0: *43 open() "/home/wwwroot/classified-listings.html" failed (2: No such file or directory), client: 60.169.77.110, server: www.surilasafar.com, request: "GET /classified-listings.html HTTP/1.1", host: "classifieds.devangsolanki.com" 2013/04/13 02:53:40 [error] 12835#0: *43 "/home/wwwroot/classified-listings.html/trackback/index.html" is not found (2: No such file or directory), client: 60.169.77.110, server: www.surilasafar.com, request: "POST /classified-listings.html/trackback/ HTTP/1.1", host: "classifieds.devangsolanki.com", referrer: "http://classifieds.devangsolanki.com/classified-listings.html" 2013/04/13 02:54:47 [notice] 12833#0: signal 1 (SIGHUP) received, reconfiguring 2013/04/13 02:54:47 [notice] 12833#0: reconfiguring 2013/04/13 02:54:47 [notice] 12833#0: using the "epoll" event method 2013/04/13 02:54:47 [notice] 12833#0: start worker processes 2013/04/13 02:54:47 [notice] 12833#0: start worker process 13046 2013/04/13 02:54:47 [notice] 12833#0: start cache manager process 13047 2013/04/13 02:54:47 [notice] 12836#0: exiting 2013/04/13 02:54:47 [notice] 12835#0: gracefully shutting down 2013/04/13 02:54:47 [notice] 12833#0: signal 17 (SIGCHLD) received 2013/04/13 02:54:47 [notice] 12833#0: cache manager process 12836 exited with code 0 2013/04/13 02:54:47 [notice] 12833#0: signal 29 (SIGIO) received 2013/04/13 02:54:49 [error] 13046#0: *47 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 65.55.213.243, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "website-design.devangsolanki.com" 2013/04/13 02:54:50 [error] 13046#0: *48 open() "/home/wwwroot/sitemap.xml" failed (2: No such file or directory), client: 65.55.213.243, server: www.surilasafar.com, request: "GET /sitemap.xml HTTP/1.1", host: "website-design.devangsolanki.com" 2013/04/13 02:54:54 [notice] 12833#0: signal 15 (SIGTERM) received, exiting 2013/04/13 02:54:54 
[notice] 13046#0: exiting 2013/04/13 02:54:54 [notice] 13047#0: exiting 2013/04/13 02:54:54 [notice] 13046#0: exit 2013/04/13 02:54:54 [notice] 12833#0: signal 17 (SIGCHLD) received 2013/04/13 02:54:54 [notice] 12833#0: cache manager process 13047 exited with code 0 2013/04/13 02:54:54 [notice] 12833#0: signal 29 (SIGIO) received 2013/04/13 02:54:54 [notice] 12833#0: signal 17 (SIGCHLD) received 2013/04/13 02:54:54 [notice] 12833#0: worker process 13046 exited with code 0 2013/04/13 02:54:54 [notice] 12833#0: signal 29 (SIGIO) received 2013/04/13 02:54:54 [notice] 12835#0: exiting 2013/04/13 02:54:54 [alert] 12835#0: open socket #13 left in connection 5 2013/04/13 02:54:54 [alert] 12835#0: open socket #11 left in connection 8 2013/04/13 02:54:54 [alert] 12835#0: open socket #12 left in connection 10 2013/04/13 02:54:54 [alert] 12835#0: aborting 2013/04/13 02:54:54 [notice] 12835#0: exit 2013/04/13 02:54:54 [notice] 12833#0: signal 17 (SIGCHLD) received 2013/04/13 02:54:54 [notice] 12833#0: worker process 12835 exited with code 0 2013/04/13 02:54:54 [notice] 12833#0: exit 2013/04/13 02:54:59 [notice] 13097#0: using the "epoll" event method 2013/04/13 02:54:59 [notice] 13097#0: nginx/1.0.15 2013/04/13 02:54:59 [notice] 13097#0: built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) 2013/04/13 02:54:59 [notice] 13097#0: OS: Linux 2.6.32-042stab055.10 2013/04/13 02:54:59 [notice] 13097#0: getrlimit(RLIMIT_NOFILE): 65535:65535 2013/04/13 02:54:59 [notice] 13098#0: start worker processes 2013/04/13 02:54:59 [notice] 13098#0: start worker process 13099 2013/04/13 02:54:59 [notice] 13098#0: start cache manager process 13101 2013/04/13 02:54:59 [notice] 13098#0: start cache loader process 13102 2013/04/13 02:55:59 [notice] 13102#0: http file cache: /usr/local/nginx/proxy 0.000M, bsize: 4096 2013/04/13 02:55:59 [notice] 13098#0: signal 17 (SIGCHLD) received 2013/04/13 02:55:59 [notice] 13098#0: cache loader process 13102 exited with code 0 2013/04/13 02:55:59 [notice] 13098#0: 
signal 29 (SIGIO) received 2013/04/13 02:56:31 [error] 13099#0: *22 open() "/home/wwwroot/robots.txt" failed (2: No such file or directory), client: 157.55.32.162, server: www.surilasafar.com, request: "GET /robots.txt HTTP/1.1", host: "prestashop-development.devangsolanki.com" 2013/04/13 02:58:08 [error] 13099#0: *27 "/home/wwwroot/sample-page/index.html" is not found (2: No such file or directory), client: 66.249.75.181, server: www.surilasafar.com, request: "GET /sample-page/ HTTP/1.1", host: "dnn.devangsolanki.com" 2013/04/13 03:00:16 [error] 13099#0: *35 open() "/home/wwwroot/feed" failed (2: No such file or directory), client: 157.55.32.162, server: www.surilasafar.com, request: "GET /feed HTTP/1.1", host: "prestashop-development.devangsolanki.com" 2013/04/13 03:01:56 [error] 13099#0: *39 open() "/home/wwwroot/tag/developer/feed" failed (2: No such file or directory), client: 141.8.147.4, server: www.surilasafar.com, request: "GET /tag/developer/feed HTTP/1.1", host: "hire-html5-developer.devangsolanki.com" 2013/04/13 03:02:58 [notice] 13098#0: signal 1 (SIGHUP) received, reconfiguring 2013/04/13 03:02:58 [notice] 13098#0: reconfiguring 2013/04/13 03:02:58 [notice] 13098#0: using the "epoll" event method 2013/04/13 03:02:58 [notice] 13098#0: start worker processes 2013/04/13 03:02:58 [notice] 13098#0: start worker process 13320 2013/04/13 03:02:58 [notice] 13098#0: start cache manager process 13321 2013/04/13 03:02:58 [notice] 13101#0: exiting 2013/04/13 03:02:58 [notice] 13099#0: gracefully shutting down 2013/04/13 03:02:58 [notice] 13099#0: exiting 2013/04/13 03:02:58 [notice] 13099#0: exit 2013/04/13 03:02:58 [notice] 13098#0: signal 17 (SIGCHLD) received 2013/04/13 03:02:58 [notice] 13098#0: worker process 13099 exited with code 0 2013/04/13 03:02:58 [notice] 13098#0: cache manager process 13101 exited with code 0 2013/04/13 03:02:58 [notice] 13098#0: signal 29 (SIGIO) received 2013/04/13 03:02:58 [notice] 13098#0: signal 17 (SIGCHLD) received 2013/04/13 
03:03:05 [notice] 13098#0: signal 15 (SIGTERM) received, exiting 2013/04/13 03:03:05 [notice] 13320#0: exiting 2013/04/13 03:03:05 [notice] 13321#0: exiting 2013/04/13 03:03:05 [notice] 13320#0: exit 2013/04/13 03:03:05 [notice] 13098#0: signal 17 (SIGCHLD) received 2013/04/13 03:03:05 [notice] 13098#0: worker process 13320 exited with code 0 2013/04/13 03:03:05 [notice] 13098#0: signal 29 (SIGIO) received 2013/04/13 03:03:05 [notice] 13098#0: signal 17 (SIGCHLD) received 2013/04/13 03:03:05 [notice] 13098#0: cache manager process 13321 exited with code 0 2013/04/13 03:03:05 [notice] 13098#0: exit 2013/04/13 03:03:12 [notice] 13373#0: using the "epoll" event method 2013/04/13 03:03:12 [notice] 13373#0: nginx/1.0.15 2013/04/13 03:03:12 [notice] 13373#0: built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) 2013/04/13 03:03:12 [notice] 13373#0: OS: Linux 2.6.32-042stab055.10 2013/04/13 03:03:12 [notice] 13373#0: getrlimit(RLIMIT_NOFILE): 65535:65535 2013/04/13 03:03:12 [notice] 13374#0: start worker processes 2013/04/13 03:03:12 [notice] 13374#0: start worker process 13375 2013/04/13 03:03:12 [notice] 13374#0: start cache manager process 13377 2013/04/13 03:03:12 [notice] 13374#0: start cache loader process 13378 2013/04/13 03:04:07 [error] 13375#0: *13 open() "/home/wwwroot/article/example-website-article-1" failed (2: No such file or directory), client: 173.232.20.190, server: www.surilasafar.com, request: "GET /article/example-website-article-1 HTTP/1.0", host: "directory.devangsolanki.com", referrer: "http://directory.devangsolanki.com/article/example-website-article-1" 2013/04/13 03:04:12 [notice] 13378#0: http file cache: /usr/local/nginx/proxy 0.000M, bsize: 4096 2013/04/13 03:04:12 [notice] 13374#0: signal 17 (SIGCHLD) received 2013/04/13 03:04:12 [notice] 13374#0: cache loader process 13378 exited with code 0 2013/04/13 03:04:12 [notice] 13374#0: signal 29 (SIGIO) received Any help friends ??? 
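One pattern worth noting in the log above: requests for many different hostnames (classifieds.devangsolanki.com, ns1.devangsolanki.com, html5.devangsolanki.com, and so on) are all being matched by the `server www.surilasafar.com` block and resolved against the single root /home/wwwroot, which is where the "No such file or directory" errors come from. If those subdomains are meant to serve different content, one server block per hostname is the usual structure. A minimal sketch under that assumption — the per-site roots below are hypothetical and would need to match the real layout:

```nginx
# Sketch only: one server block per hostname, each with its own root.
# Hostnames are taken from the log; the per-site roots are hypothetical.
server {
    listen      80;
    server_name classifieds.devangsolanki.com;
    root        /home/wwwroot/classifieds;
}

server {
    listen      80;
    server_name html5.devangsolanki.com;
    root        /home/wwwroot/html5;
}

# Explicit default server, so requests for unknown hosts do not fall
# into whichever block happens to be defined first.
server {
    listen      80 default_server;
    server_name www.surilasafar.com;
    root        /home/wwwroot;
}
```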
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238322,238329#msg-238329 From nginx-forum at nginx.us Fri Apr 12 20:44:34 2013 From: nginx-forum at nginx.us (mrtn) Date: Fri, 12 Apr 2013 16:44:34 -0400 Subject: limit_conn_zone and limit_conn behavior in 1.2.8 Message-ID: In my config, within the http section, I have: limit_conn_zone $binary_remote_addr zone=addr:10m; limit_conn addr 3; which I interpret as setting up a 10-megabyte memory zone for keeping the state of connected IPs and restricting each IP to 3 concurrent connections at a time. If the zone is exhausted or the per-IP limit is breached, Nginx will return a 503 response. I had these settings in place before upgrading to 1.2.8. During testing today, I found several 503 responses due to these settings, for example: '2013/04/12 15:38:48 [error] 5888#0: *352 limiting connections by zone "addr", client: 127.0.0.1, server: static.mysite.com, request: "GET /js/jquery.reject.min.js HTTP/1.1", host: "static.mysite.com", referrer: "https://www.mysite.com/blah/blah?var=blah" The client IP is 127.0.0.1 here because Nginx is behind an HAProxy. The test I did was launching several requests for JavaScript files served by Nginx. Some of these requests returned 200 OK, but a number of them failed with 503s. The same test with Nginx 1.2.7 did not result in any 503 responses. Any idea what might have caused this? Is it because of the HAProxy in front of Nginx? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238331,238331#msg-238331 From ix8675874 at sent.at Fri Apr 12 21:34:23 2013 From: ix8675874 at sent.at (ix8675874 at sent.at) Date: Fri, 12 Apr 2013 14:34:23 -0700 Subject: How to set up nginx as a 2-factor authentication portal that becomes transparent once auth'd? Message-ID: <1365802463.29714.140661217057058.408B66FC@webmail.messagingengine.com> Hi, I just started with a small company that's got a bunch of web apps being served up from a bunch of different web servers.
Some are 'appliances', most are Apache. It's a mess of an infrastructure -- slow and . My long term plan is to convert to one lighter weight platform with commercial support available. Although I haven't used it myself for anything in production yet, after a bunch of reading and some fooling around on my own, I'm 99% sure it's going to be Nginx. In the short term -- like the boss wants it yesterday! -- I need to put everything behind two factor authentication and enable SSL. Right now, every web app is directly exposed to the web with single-factor auth over http://. In principle, I think I can solve this in one nginx instance. Setting nginx up to listen on one IP, and serve up separate SSL certificates for each web app is brilliantly easy in nginx! Works perfectly. So that part's basically done. The auth piece has me scratching my head -- and I hope somebody here can provide some guidance. What I want to do is have all access to the webapps FIRST go through a two factor authentication webpage in nginx. The two factors I need are (1) a simple password known to the user, and (2) a GoogleAuthenticator-generated token/passcode. ONLY on correct & timely entry of both do I want the user passed through to the webapp on one of those servers I mentioned. But once they do, the 'authentication site' should become transparent and not interfere at all with the session, etc. I'm not sure how to: (1) implement Google Authenticator integration in Nginx. I've looked for something built-in, or some plugin, which would be fantastic. But I haven't found anything reliable yet. (2) make sure that after authentication is OK, everything is transparent to & from the webapps behind the nginx instance. Is this proxying? I'm pretty sure I need to pass some sort of variables, but is there some setting that bundles up everything so it's fully transparent? Are there any built-in ways -- and better yet, good tutorials! -- that exist already for these?
I doubt I've thought up anything new here, so I'm hoping someone's already posted some know-how. THanks a bunch for any help! Dave From laursen at oxygen.net Fri Apr 12 21:36:52 2013 From: laursen at oxygen.net (Lasse Laursen) Date: Fri, 12 Apr 2013 23:36:52 +0200 Subject: How to set up nginx as a 2-factor authentication portal that becomes transparent once auth'd? In-Reply-To: <1365802463.29714.140661217057058.408B66FC@webmail.messagingengine.com> References: <1365802463.29714.140661217057058.408B66FC@webmail.messagingengine.com> Message-ID: <32D868FD-FF77-4FFB-AB33-680AF73B71DE@oxygen.net> Have a look at roboo and work backwards from that? Sent from my iPhone On 12/04/2013, at 23.34, ix8675874 at sent.at wrote: > Hi, > > I just started with a small company that's got a bunch of web apps being > served up from a bunch of different web servers. Some are 'appliances', > most are Apache. > > It's a mess of an infrastrucutre -- slow and . My long term plan is to > convert to one lighter weight platform with commercial support > available. Although I haven't used it myself for anything in production > yet, after a bunch of reading and some fooling around on my own, I'm 99% > sure it's going to be Nginx. > > In the short term -- like the boss wants it yesterday! -- I need to put > everything behind two factor authentication and enable SSL. Right now, > every web app is directly exposed to the web with single-factor auth > over http://. > > In principle, I think I can solve this in one nginx instance. Setting > nginx up to listen on one IP, and serve up separate SSL certificates for > each web app is brilliantly easy in nginx! Works perfectly. SO that > part's basically done. > > The auth piece has me scratching my head -- and I hope somebody here can > provide some guidance. > > What I want to do is have all access to the webapps FIRST go through a > two factor authentication webpage in nginx. 
The two factors I need are > (1) a simple password known to the user, and (2) a > GoogleAuthenticator-generated token/passcode. > > ONLY on correct & timely enter of both do I want the user passed through > to the webapp on one of those servers I mentioned. But once they do, > the 'authentication site' should become trabsparent and not interfere at > all with the session, etc. > > I'm not sure how to: > > (1) implement Google AUthenticator integration in Nginx. I've looked > for something built-in, or some plugin, which would be fantastic. But > I've haven't found anything reliable yet. > (2) make sure that after Authentication is OK to make everything > transparent to & from the webapps behind the nginx instance. Is this > proxying? I'm pretty sure I need to pass some sort of variables, but is > there some setting that bundles up everything so it's fully transparent? > > Are there any built-in ways -- and better yet, good tutorials! -- that > exist alrady for these? I doubt I've thought up anything new here, so > I'm hoping someone's already posted some know-how. > > THanks a bunch for any help! > > > Dave > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From andrew at nginx.com Fri Apr 12 22:29:22 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Fri, 12 Apr 2013 15:29:22 -0700 Subject: How to set up nginx as a 2-factor authentication portal that becomes transparent once auth'd? In-Reply-To: <1365802463.29714.140661217057058.408B66FC@webmail.messagingengine.com> References: <1365802463.29714.140661217057058.408B66FC@webmail.messagingengine.com> Message-ID: Hi Dave, On Apr 12, 2013, at 2:34 PM, ix8675874 at sent.at wrote: > Hi, > > I just started with a small company that's got a bunch of web apps being > served up from a bunch of different web servers. Some are 'appliances', > most are Apache. > > It's a mess of an infrastrucutre -- slow and . 
My long term plan is to > convert to one lighter weight platform with commercial support > available. Although I haven't used it myself for anything in production > yet, after a bunch of reading and some fooling around on my own, I'm 99% > sure it's going to be Nginx. > > In the short term -- like the boss wants it yesterday! -- I need to put > everything behind two factor authentication and enable SSL. Right now, > every web app is directly exposed to the web with single-factor auth > over http://. > > In principle, I think I can solve this in one nginx instance. Setting > nginx up to listen on one IP, and serve up separate SSL certificates for > each web app is brilliantly easy in nginx! Works perfectly. SO that > part's basically done. > > The auth piece has me scratching my head -- and I hope somebody here can > provide some guidance. > > What I want to do is have all access to the webapps FIRST go through a > two factor authentication webpage in nginx. The two factors I need are > (1) a simple password known to the user, and (2) a > GoogleAuthenticator-generated token/passcode. > > ONLY on correct & timely enter of both do I want the user passed through > to the webapp on one of those servers I mentioned. But once they do, > the 'authentication site' should become trabsparent and not interfere at > all with the session, etc. > > I'm not sure how to: > > (1) implement Google AUthenticator integration in Nginx. I've looked > for something built-in, or some plugin, which would be fantastic. But > I've haven't found anything reliable yet. > (2) make sure that after Authentication is OK to make everything > transparent to & from the webapps behind the nginx instance. Is this > proxying? I'm pretty sure I need to pass some sort of variables, but is > there some setting that bundles up everything so it's fully transparent? > > Are there any built-in ways -- and better yet, good tutorials! -- that > exist alrady for these? 
I doubt I've thought up anything new here, so > I'm hoping someone's already posted some know-how. There's an http request authentication module by one of nginx core developers here: http://mdounin.ru/hg/ngx_http_auth_request_module/file/a29d74804ff1/README And have you checked Lua-module for nginx by agentzh (Yichun Zhang) ? http://wiki.nginx.org/HttpLuaModule http://seatgeek.com/blog/dev/oauth-support-for-nginx-with-lua https://gist.github.com/josegonzalez/4196901 etc. :) > THanks a bunch for any help! > > > Dave > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From ix8675874 at sent.at Fri Apr 12 23:01:59 2013 From: ix8675874 at sent.at (ix8675874 at sent.at) Date: Fri, 12 Apr 2013 16:01:59 -0700 Subject: How to set up nginx as a 2-factor authentication portal that becomes transparent once auth'd? In-Reply-To: References: <1365802463.29714.140661217057058.408B66FC@webmail.messagingengine.com> Message-ID: <1365807719.11946.140661217079830.54641F7E@webmail.messagingengine.com> Hi Andrew, On Fri, Apr 12, 2013, at 03:29 PM, Andrew Alexeev wrote: > There's an http request authentication module by one of nginx core > developers here: > > http://mdounin.ru/hg/ngx_http_auth_request_module/file/a29d74804ff1/README That one takes care of the known password I guess. > And have you checked Lua-module for nginx by agentzh (Yichun Zhang) ? > > http://wiki.nginx.org/HttpLuaModule > http://seatgeek.com/blog/dev/oauth-support-for-nginx-with-lua > https://gist.github.com/josegonzalez/4196901 Nope not yet. I'm not even sure what Lua does specifically. But the Oauth support looks like it's in the right direction. So, I guess I'll do some reading. Ideally I'd love to find something this simple - https://code.google.com/p/google-authenticator-apache-module/, but for nginx. Seems like it'd be generally useful for any / all websites. I'll read up AND keep looking. Thanks a bunch. 
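For reference, the auth_request module Andrew linked works by issuing an internal subrequest for every incoming request and only letting the request proceed when that subrequest returns a 2xx status (401/403 from the subrequest are passed back to the client). A minimal sketch of how it could gate a proxied app — the upstream name and the verifier at 127.0.0.1:8000 are hypothetical, and the actual 2FA logic (password plus TOTP check, session cookie issuance) would live in that small verifier app, not in nginx itself:

```nginx
# Sketch: gate a proxied app behind an authentication subrequest.
# "app_backend" and the verifier address are hypothetical placeholders.
location / {
    auth_request /auth;                        # must return 2xx before proxying
    proxy_pass   http://app_backend;
    proxy_set_header Host      $host;
    proxy_set_header X-Real-IP $remote_addr;
}

location = /auth {
    internal;                                   # not reachable from outside
    proxy_pass http://127.0.0.1:8000/verify;    # checks password + TOTP/session
    proxy_pass_request_body off;                # headers/cookies are enough
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

Once the verifier accepts a session cookie, the auth check is invisible to the user and the proxied app, which is roughly the "transparent after auth" behaviour asked about above.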
Dave From shahzaib.cb at gmail.com Sat Apr 13 12:46:30 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 13 Apr 2013 17:46:30 +0500 Subject: Buffering stopped after 3minutes Message-ID: Hello, We are running a video stream website and using nginx(1.2.1) for streaming. Whenever a user plays a video it starts smoothly, but buffering stops at the midpoint and the user has to seek again to keep the video playing. I am sorry for my bad English. It's an urgent issue, that is why I am texting crazy. I have already checked for file-descriptor errors but found nothing in the logs, though there is a suspicious message "[notice] 57324#0: signal process started" Help will be highly appreciated. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Sat Apr 13 14:16:52 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sat, 13 Apr 2013 16:16:52 +0200 Subject: Buffering stopped after 3minutes In-Reply-To: References: Message-ID: Hi! > We are running a video stream website and using nginx(1.2.1) for > streaming. Get nginx 1.2.8. There are at least 2 bugfixes regarding mp4 streaming between 1.2.1 and the latest stable version. From shahzaib.cb at gmail.com Sat Apr 13 14:24:33 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 13 Apr 2013 19:24:33 +0500 Subject: Buffering stopped after 3minutes In-Reply-To: References: Message-ID: We are facing issues regarding flv, and somehow things are fixed and smooth and I haven't done anything :(. But I want to prevent these issues in the future. The source I used to download nginx is http://nginx.org/download/ .
Please check my nginx.conf file, maybe there are some wrong settings there :-

user nginx;
worker_processes 8;
worker_rlimit_nofile 300000; #2 filehandlers for each connection
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

events {
worker_connections 6000;
use epoll;
}
http {
include mime.types;
default_type application/octet-stream;
client_body_buffer_size 128K;
sendfile_max_chunk 128k;
client_header_buffer_size 256k;
large_client_header_buffers 4 256k;
output_buffers 1 512k;
server_tokens off; #Conceals nginx version
#access_log logs/access.log main;
access_log off;
sendfile off;
ignore_invalid_headers on;
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
keepalive_timeout 0;
reset_timedout_connection on;
etc etc
}

These are the particular settings in my nginx.conf file. Please let me know if you want the rest of the config. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Apr 14 02:35:47 2013 From: nginx-forum at nginx.us (mastah) Date: Sat, 13 Apr 2013 22:35:47 -0400 Subject: Reverse proxy : preserving base url Message-ID: <4fb48516f941e1448988a8f6118b45f1.NginxMailingListEnglish@forum.nginx.org> Hi. I've been struggling for about 2 days to create a reverse proxy with nginx. Here is my situation: I have a subdomain, which we will call sub.example.com. I have an internal application at 127.0.0.1:9090/app. What I would like: I would like to proxy sub.example.com to 127.0.0.1:9090/app, and I want to keep my URL as sub.example.com. So far I've made countless tries without success. The optimal situation would be that when I go to http://sub.example.com/ I get the content from http://127.0.0.1:9090/app but without any change in the subdomain URL.
So for example if the internal app has http://127.0.0.1:9090/app/magical.css, my subdomain should serve it as http://sub.example.com/magical.css (without app/). Thanks in advance to anyone who is willing to help me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238350,238350#msg-238350 From nginx-forum at nginx.us Sun Apr 14 13:22:53 2013 From: nginx-forum at nginx.us (gadh) Date: Sun, 14 Apr 2013 09:22:53 -0400 Subject: using upstream module that won't override the original request Message-ID: <1406db3951ce673239a9754b87d62fd4.NginxMailingListEnglish@forum.nginx.org> I'm trying to hold a client request to the backend (not go to the server yet), issue an upstream request to another server, then parse the upstream response, store it in my orig. request ctx, only then go to the backend server (original request), and in the output filter inject some of the ctx data I stored before into the backend response. I tried to do all that with a subrequest, but then Maxim told me that the subrequest was not designed for that - hence no support for my method. Now if I use upstream instead - it overrides the original request and sends its own response to the client. So how can I achieve my goal? Ref: see Maxim's response to a similar thread here: http://web.archiveorange.com/archive/v/yKUXMonlhsVD7yQtg8xL see my new subrequest method & discussion here: http://forum.nginx.org/read.php?2,237362 (nginx core crashes when using my method + flag of 'ignore client abort'=on and no solution from nginx developers for that) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238355,238355#msg-238355 From shahzaib.cb at gmail.com Sun Apr 14 14:21:51 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 14 Apr 2013 19:21:51 +0500 Subject: Buffering stopped after 3minutes In-Reply-To: References: Message-ID: The issue has been resolved. It was monit that was causing the issue by closing the nginx connection during buffering.
I changed the version of monit from 4.1* to 5.1* and there ain't no error in logs regarding "signal process started" . Thanks On Sat, Apr 13, 2013 at 7:24 PM, shahzaib shahzaib wrote: > We are facing issues regarding flv and somehow things are fix and smooth > and i haven't done anything :(. But i want to prevent these issues for the > future. The source i used to download nginx is http://nginx.org/download/. Please check my nginx.conf file may be there's some wrong settings there > :- > > user nginx; > worker_processes 8; > worker_rlimit_nofile 300000; #2 filehandlers for each connection > #error_log logs/error.log; > #error_log logs/error.log notice; > #error_log logs/error.log info; > > #pid logs/nginx.pid; > > > events { > worker_connections 6000; > use epoll; > } > http { > include mime.types; > default_type application/octet-stream; > client_body_buffer_size 128K; > sendfile_max_chunk 128k; > client_header_buffer_size 256k; > large_client_header_buffers 4 256k; > output_buffers 1 512k; > server_tokens off; #Conceals nginx version > #access_log logs/access.log main; > access_log off; > sendfile off; > ignore_invalid_headers on; > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; > keepalive_timeout 0; > reset_timedout_connection on; > > etc etc > > } > > These are the particular settings in my nginx.conf file. Please let me > know if you want the rest of config. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Sun Apr 14 20:21:32 2013 From: agentzh at gmail.com (agentzh) Date: Sun, 14 Apr 2013 13:21:32 -0700 Subject: using upstream module that won't override the original request In-Reply-To: <1406db3951ce673239a9754b87d62fd4.NginxMailingListEnglish@forum.nginx.org> References: <1406db3951ce673239a9754b87d62fd4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! 
On Sun, Apr 14, 2013 at 6:22 AM, gadh wrote: > I'm trying to hold a client request to the backend (not go to the server yet), > issue an upstream request to another server, then parse the upstream > response, store it in my orig. request ctx, only then go to the backend server > (original request), and in the output filter inject some of the ctx data I > stored before into the backend response. > > I tried to do all that with a subrequest, but then Maxim told me that the > subrequest was not designed for that - hence no support for my method. > Now if I use upstream instead - it overrides the original request and sends > its own response to the client. > So how can I achieve my goal? I don't understand what you're saying completely, but you may find the subrequest API provided by ngx_lua fits your needs: http://wiki.nginx.org/HttpLuaModule#ngx.location.capture And also its cosocket API: http://wiki.nginx.org/HttpLuaModule#ngx.socket.tcp I think you can just write a little Lua to do your task without writing your own nginx C module. Well, just a suggestion. Best regards, -agentzh From nginx-forum at nginx.us Sun Apr 14 23:06:22 2013 From: nginx-forum at nginx.us (mrtn) Date: Sun, 14 Apr 2013 19:06:22 -0400 Subject: limit_conn_zone and limit_conn behavior in 1.2.8 In-Reply-To: References: Message-ID: So, I've found out more about my situation. Apparently, the limit_conn_zone stuff I'm doing on Nginx applies only to HAProxy, which is in front of my Nginx. I guess I have two options: 1. Use HAProxy (instead of Nginx) for request/connection limiting. 2. Limit requests/connections on Nginx based on X-Forwarded-For instead of the IP of HAProxy (which is always going to be 127.0.0.1). Which do you guys think would work better? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238331,238366#msg-238366 From ianevans at digitalhit.com Mon Apr 15 02:50:15 2013 From: ianevans at digitalhit.com (Ian M.
Evans) Date: Sun, 14 Apr 2013 22:50:15 -0400 Subject: nginx tips for localhost development Message-ID: As I'm about to launch some major additions to my site (and launch another site) I've been starting to get my act together in regards to development, e.g. finally starting to use git, getting ready for a responsive redesign, etc. I've realized that it would probably be wise to develop locally by running a localhost server in a linux VM on my laptop and desktop. Can anyone give me any tips (or point me to a tutorial/article) that would show me how to set up nginx for localhost testing? Is it possible for me to create a setup in nginx so both www.example.com and www.mynewsiteexample.com would be served by the localhost nginx? Thanks for any advice! From lists at ruby-forum.com Mon Apr 15 07:14:05 2013 From: lists at ruby-forum.com (ESWAR R.) Date: Mon, 15 Apr 2013 09:14:05 +0200 Subject: Reg. automatic failover In-Reply-To: References: Message-ID: <60c85883674786f96080927474a8fe59@ruby-forum.com> Can anyone provide me some insight on this issue. -- Posted via http://www.ruby-forum.com/. From contact at jpluscplusm.com Mon Apr 15 10:59:27 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 15 Apr 2013 11:59:27 +0100 Subject: Reg. automatic failover In-Reply-To: References: Message-ID: On 9 April 2013 13:35, ESWAR RAO wrote: > Hi All, > > I observed that automatic failover behaviour is not working with my nginx > server. What you describe is exactly what I would expect. No HTTP-compliant proxy would shunt an active connection, mid-request, over to another backend once a TCP connection has been established, and any traffic has been passed to it. The proxy has no way of knowing if the backend has taken some non-idempotent action based on as much of the request as managed to be communicated. Would you like it if your bank re-dispatched "pay this bill" requests to multiple backends just because one of them fell over? ;-) What are your expectations here? 
Be clear! Jonathan From nginx-forum at nginx.us Mon Apr 15 13:16:50 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 15 Apr 2013 09:16:50 -0400 Subject: nginx tips for localhost development In-Reply-To: References: Message-ID: you might want to play with your /etc/hosts. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238368,238372#msg-238372 From francis at daoine.org Mon Apr 15 17:49:22 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 15 Apr 2013 18:49:22 +0100 Subject: nginx tips for localhost development In-Reply-To: References: Message-ID: <20130415174922.GB16160@craic.sysops.org> On Sun, Apr 14, 2013 at 10:50:15PM -0400, Ian M. Evans wrote: Hi there, > I've realized that it would probably be wise to develop locally by running > a localhost server in a linux VM on my laptop and desktop. > > Can anyone give me any tips (or point me to a tutorial/article) that would > show me how to set up nginx for localhost testing? Is it possible for me > to create a setup in nginx so both www.example.com and > www.mynewsiteexample.com would be served by the localhost nginx? On the nginx side, there should be approximately nothing special to do. The nginx.conf that works on your production server can be put onto your development server; "listen" directives which specify ip addresses may need to be changed, and file names may need to be changed if you have a different layout. But other than that, not much about nginx cares. The main thing you will need, if you use multiple server{} blocks with different server_name directives, is to make sure that whatever client you are testing with (== web browser) resolves the names that you use to be an address that your test nginx listens on. /etc/hosts is probably the simplest way to arrange that. If you're comfortable testing using "curl", you don't even need that -- just add a suitable "-H Host:" argument to each command. 
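As a concrete illustration of the advice above, a minimal pair of name-based test vhosts might look like this (the host names are the OP's examples; the listen address and root paths are placeholders):

```nginx
# /etc/hosts on the development machine:
#   127.0.0.1   www.example.com www.mynewsiteexample.com

http {
    server {
        listen       127.0.0.1:80;
        server_name  www.example.com;
        root         /srv/dev/example;           # placeholder path
    }

    server {
        listen       127.0.0.1:80;
        server_name  www.mynewsiteexample.com;
        root         /srv/dev/mynewsiteexample;  # placeholder path
    }
}
```

With the /etc/hosts entries in place, a browser on the same machine resolves both names to the local nginx; without them, `curl -H "Host: www.example.com" http://127.0.0.1/` exercises the same name-based server selection.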
f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Apr 15 18:05:28 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 15 Apr 2013 19:05:28 +0100 Subject: Reverse proxy : preserving base url In-Reply-To: <4fb48516f941e1448988a8f6118b45f1.NginxMailingListEnglish@forum.nginx.org> References: <4fb48516f941e1448988a8f6118b45f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130415180528.GC16160@craic.sysops.org> On Sat, Apr 13, 2013 at 10:35:47PM -0400, mastah wrote: Hi there, > I've a sub domain which we will call : sub.example.com > I've an internal application at 127.0.0.1:9090/app > > What I would like : > I would like to proxy sub.example.com to 127.0.0.1:9090/app and i want to > keep my URL as sub.example.com > So far I've done countless try without success. There are two distinct parts to this. You have the http headers, and you have the body (which will be html or css or jpg or something else). nginx can reasonably be expected to adjust the response http headers. nginx can't reasonably be expected to adjust the response body. Unless you put in special configuration, nginx with proxy_pass will probably adjust the response headers for you. You can try to adjust the body. You will have to match every string that your client browser will interpret as a url, and make sure that it works when the browser considers it relative to the non-/app/ url that it started with. > The optimal situation would be that when I do : > http://sub.example.com/ I get the content from http://127.0.0.1:9090/app > but without any change in the sub domain URL. > > So for example if the internal app has > http://127.0.0.1:9090/app/magical.css, my sub domain should be > http://sub.example.com/magical.css (without app/) The cleanest way to achieve this is for you to configure your application either (a) to believe that it is installed at "/", not at "/app/"; or (b) to never create any links that start with "/". 
In each of those cases, the browser should interpret relative links correctly without nginx having to modify any body content. (If you use option (b) and create links like "../../../app/other/place", then that will break this. So don't do that.) f -- Francis Daly francis at daoine.org From rkearsley at blueyonder.co.uk Mon Apr 15 19:08:20 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 15 Apr 2013 20:08:20 +0100 Subject: open_file_cache Message-ID: <516C5024.2070802@blueyonder.co.uk> Hi Is the max value specified in `open_file_cache` on a per-worker basis? e.g. if I set it to 20,000, will it cache 80,000 open fds with 4 workers? Thanks From mdounin at mdounin.ru Mon Apr 15 19:59:07 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Apr 2013 23:59:07 +0400 Subject: open_file_cache In-Reply-To: <516C5024.2070802@blueyonder.co.uk> References: <516C5024.2070802@blueyonder.co.uk> Message-ID: <20130415195907.GJ92338@mdounin.ru> Hello! On Mon, Apr 15, 2013 at 08:08:20PM +0100, Richard Kearsley wrote: > Hi > Is the max value specified in `open_file_cache` on a per-worker basis? > e.g. if I set it to 20,000, will it cache 80,000 open fds with 4 workers? Yes, it's per-worker. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Apr 15 21:35:27 2013 From: nginx-forum at nginx.us (jaychris) Date: Mon, 15 Apr 2013 17:35:27 -0400 Subject: handling x_forwarded_proto with Nginx as backend to HAproxy Message-ID: <319aa0242e0827d414c4a3fa45ee668c.NginxMailingListEnglish@forum.nginx.org> Running Nginx 1.2.7 behind HAproxy, with SSL being terminated on HAproxy.
I was using the X_FORWARDED_PROTO header to make some decisions on the backend when I was using Apache, but it doesn't look like Nginx is handling the header currently: client sent invalid header line: "X_FORWARDED_PROTO: http" while reading client request headers, I found an old thread that seemed to address this, with Igor suggesting to use: http { map $http_x_forwarded_proto $my_https { default off; https on; } server { ... location { ...; fastcgi_param HTTPS $my_https; } } but this doesn't seem to be carrying the header forward. Maybe it's a lack of understanding on my part or maybe the way the issue is handled has changed in the years since that post (HERE: http://osdir.com/ml/nginx/2010-05/msg00101.html), but if anyone has a suggestion I would appreciate it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238384,238384#msg-238384 From francis at daoine.org Mon Apr 15 21:42:30 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 15 Apr 2013 22:42:30 +0100 Subject: handling x_forwarded_proto with Nginx as backend to HAproxy In-Reply-To: <319aa0242e0827d414c4a3fa45ee668c.NginxMailingListEnglish@forum.nginx.org> References: <319aa0242e0827d414c4a3fa45ee668c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130415214230.GD16160@craic.sysops.org> On Mon, Apr 15, 2013 at 05:35:27PM -0400, jaychris wrote: Hi there, > I was using the X_FORWARDED_PROTO header to make some decisions on the > backend when I was using Apache, but it doesn't look like Nginx is handling > the header currently: > > client sent invalid header line: "X_FORWARDED_PROTO: http" while reading > client request headers, "_" is not a valid character in a http header. 
http://nginx.org/r/ignore_invalid_headers or maybe http://nginx.org/r/underscores_in_headers f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Apr 15 22:03:24 2013 From: nginx-forum at nginx.us (jaychris) Date: Mon, 15 Apr 2013 18:03:24 -0400 Subject: handling x_forwarded_proto with Nginx as backend to HAproxy In-Reply-To: <20130415214230.GD16160@craic.sysops.org> References: <20130415214230.GD16160@craic.sysops.org> Message-ID: Thanks! Setting "underscores_in_headers on;" fixed the issue and X_FORWARDED_PROTO is being carried forward now. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238384,238387#msg-238387 From mdounin at mdounin.ru Mon Apr 15 22:06:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Apr 2013 02:06:30 +0400 Subject: handling x_forwarded_proto with Nginx as backend to HAproxy In-Reply-To: <20130415214230.GD16160@craic.sysops.org> References: <319aa0242e0827d414c4a3fa45ee668c.NginxMailingListEnglish@forum.nginx.org> <20130415214230.GD16160@craic.sysops.org> Message-ID: <20130415220629.GN92338@mdounin.ru> Hello! On Mon, Apr 15, 2013 at 10:42:30PM +0100, Francis Daly wrote: > On Mon, Apr 15, 2013 at 05:35:27PM -0400, jaychris wrote: > > Hi there, > > > I was using the X_FORWARDED_PROTO header to make some decisions on the > > backend when I was using Apache, but it doesn't look like Nginx is handling > > the header currently: > > > > client sent invalid header line: "X_FORWARDED_PROTO: http" while reading > > client request headers, > > "_" is not a valid character in a http header. > > http://nginx.org/r/ignore_invalid_headers or maybe > http://nginx.org/r/underscores_in_headers Strictly speaking, "_" isn't invalid, but it's not something nginx allows by default due to security problems it might create - as it's indistinguishable from "-" in CGI-like headers representation. 
It is possible to allow the header in question using the directives mentioned, but a better solution would be to use X-Forwarded-Proto rather than X_FORWARDED_PROTO. -- Maxim Dounin http://nginx.org/en/donation.html From r1ch+nginx at teamliquid.net Mon Apr 15 22:18:04 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 15 Apr 2013 18:18:04 -0400 Subject: Possible to have a limit_req "nodelay burst" option? Message-ID: Hello, I'm using the limit_req directive to control the rate at which my backends are hit with requests. Typically a backend will generate a page and the client will not request anything for a short while, so a rate of 1 per second works well. Sometimes, however, a backend will return an HTTP redirect, and then the client must wait for a one second delay on the request to the redirected page. I'd like to avoid this if possible to avoid the slow feeling when users click on redirected links. The nodelay option looked like it would work at first glance, but this bypasses the delay completely for all requests up to the burst, so it's still possible for the backend to be hit with many requests at once. Ideally I would like to have a "nodelay burst" option to control how many of the burst requests are processed without delay, which I could set to 2 in my situation, while still delaying any further requests beyond that. Another idea I had was to have the backend send a special header similar to how X-Accel-Redirect works, e.g. X-Limit-Req: 0, to avoid counting a single request towards the rate limit for purposes of redirects and similar situations. Any other thoughts on how something like this could work? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Apr 15 22:23:02 2013 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 15 Apr 2013 18:23:02 -0400 Subject: Cross-compiling Nginx for ARM?
In-Reply-To: <967BDB2F-7D4C-4B7C-A1A2-C5DE31BBED34@gmail.com> References: <967BDB2F-7D4C-4B7C-A1A2-C5DE31BBED34@gmail.com> Message-ID: djczaski Wrote: ------------------------------------------------------- > I've compiled for ARM A8. The biggest issue was configuring to use the system OpenSSL and pcre libraries. other than that , no problems. Thanks for the input. By any chance, did you write a tutorial that I could use to try and compile it for that other ARM processor? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238312,238390#msg-238390 From mdounin at mdounin.ru Mon Apr 15 22:38:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Apr 2013 02:38:03 +0400 Subject: Possible to have a limit_req "nodelay burst" option? In-Reply-To: References: Message-ID: <20130415223803.GO92338@mdounin.ru> Hello! On Mon, Apr 15, 2013 at 06:18:04PM -0400, Richard Stanway wrote: > Hello, > I'm using the limit_req directive to control the rate at which my backends > are hit with requests. Typically a backend will generate a page and the > client will not request anything for a short while, so a rate of 1 per > second works well. Sometimes however a backend will return a HTTP redirect, > and then the client must wait for a one second delay on the request to the > redirected page. I'd like to avoid this if possible to avoid the slow > feeling when users click on redirected links. > > The nodelay option looked like it would work at first glance, but this > bypasses the delay completely for all requests up to the burst, so it's > still possible for the backend to be hit with many requests at once. > Ideally I would like to have a "nodelay burst" option to control how many > of the burst requests are processed without delay which I could set to 2 in > my situation, while still delaying any further requests beyond that. ... 
and next time you'll notice that site feels slow on a page which uses 2 redirects in a row, or includes an image/css from a backend, or user just clicks links fast enough. I would recommend just using "limit_req ... nodelay" unless you are really sure you need a delay in a particular case. -- Maxim Dounin http://nginx.org/en/donation.html From r1ch+nginx at teamliquid.net Tue Apr 16 00:08:50 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 15 Apr 2013 20:08:50 -0400 Subject: Possible to have a limit_req "nodelay burst" option? In-Reply-To: <20130415223803.GO92338@mdounin.ru> References: <20130415223803.GO92338@mdounin.ru> Message-ID: On Mon, Apr 15, 2013 at 6:38 PM, Maxim Dounin wrote: > > Hello! > > On Mon, Apr 15, 2013 at 06:18:04PM -0400, Richard Stanway wrote: > > > Hello, > > I'm using the limit_req directive to control the rate at which my backends > > are hit with requests. Typically a backend will generate a page and the > > client will not request anything for a short while, so a rate of 1 per > > second works well. Sometimes however a backend will return a HTTP redirect, > > and then the client must wait for a one second delay on the request to the > > redirected page. I'd like to avoid this if possible to avoid the slow > > feeling when users click on redirected links. > > > > The nodelay option looked like it would work at first glance, but this > > bypasses the delay completely for all requests up to the burst, so it's > > still possible for the backend to be hit with many requests at once. > > Ideally I would like to have a "nodelay burst" option to control how many > > of the burst requests are processed without delay which I could set to 2 in > > my situation, while still delaying any further requests beyond that. > > ... and next time you'll notice that site feels slow on a page > which uses 2 redirects in a row, or includes an image/css from a > backend, or user just clicks links fast enough. 
> > I would recommend just using "limit_req ... nodelay" unless you > are really sure you need a delay in a particular case. Thanks for the reply. I'll try to explain my situation a little better: the delay is mainly there to prevent crazy scripts / spambots / etc from making too many fast requests to the backend and tying up the "expensive" processes. Images and CSS are served from a separate backend that isn't subject to rate limiting, and if the main backend determines a redirect is needed, it will guarantee a redirect to a final URL with no further intermediate redirects. Currently I'm using a burst of 10 since showing a 503 to users is a bad experience; in the event they click links really fast I'd prefer them to just think the server is a little busy. I was hoping to remove this delay on redirects or at least the first couple of "fast clicks", while still causing spambots / etc to be subjected to the 1 req/s delay so the backend is not suddenly hit with up to 10 requests at once. For now I'm going to try using rewrite rules to catch the most commonly redirected paths and pass them to a non-limited backend, but I'd really like to see this feature if possible. Regards, Richard From perusio at gmail.com Tue Apr 16 07:51:34 2013 From: perusio at gmail.com (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Tue, 16 Apr 2013 09:51:34 +0200 Subject: Least connections and multi_accept In-Reply-To: References: Message-ID: Recently Ruslan, while replying to someone, stated that the internal counters for the least connections load balancing algorithm are *per worker* and consequently, unless there is a high load, it might not add up that in fact the algorithm logic is "respected" while considering all workers. Is this still true if multi_accept is off? Or does this directive have no bearing on the least connections algorithm? Thanks, -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Tue Apr 16 08:12:40 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 16 Apr 2013 09:12:40 +0100 Subject: handling x_forwarded_proto with Nginx as backend to HAproxy In-Reply-To: <20130415220629.GN92338@mdounin.ru> References: <319aa0242e0827d414c4a3fa45ee668c.NginxMailingListEnglish@forum.nginx.org> <20130415214230.GD16160@craic.sysops.org> <20130415220629.GN92338@mdounin.ru> Message-ID: <20130416081240.GE16160@craic.sysops.org> On Tue, Apr 16, 2013 at 02:06:30AM +0400, Maxim Dounin wrote: > On Mon, Apr 15, 2013 at 10:42:30PM +0100, Francis Daly wrote: > > On Mon, Apr 15, 2013 at 05:35:27PM -0400, jaychris wrote: Hi there, > > > client sent invalid header line: "X_FORWARDED_PROTO: http" while reading > > > client request headers, > > > > "_" is not a valid character in a http header. > Strictly speaking, "_" isn't invalid, but it's not something nginx > allows by default due to security problems it might create - as > it's indistinguishable from "-" in CGI-like headers > representation. Oh, thanks for the correction. I learn something new every day. (RFC 2616 and its definition of "token", which allows 78 characters, if I count right. I don't know if there's a beyond-ASCII update to extend that, but it shouldn't restrict it further.) Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Apr 16 11:50:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Apr 2013 15:50:26 +0400 Subject: Least connections and multi_accept In-Reply-To: References: Message-ID: <20130416115026.GT92338@mdounin.ru> Hello! On Tue, Apr 16, 2013 at 09:51:34AM +0200, António P. P. Almeida wrote: > Recently Ruslan, while replying to someone stated that the internal > counters for the least connections load balancing algorithm are *per > worker* and consequently unless there is a high load it might not add up > that in fact the algorithm logic is "respected" while considering all > workers.
Is this still true if multi_accept is off? Or does this directive > have no bearing on the least connections algorithm? While multi_accept might affect connection distribution between worker processes, it is otherwise unrelated. -- Maxim Dounin http://nginx.org/en/donation.html From duanemulder at rattyshack.ca Tue Apr 16 12:07:05 2013 From: duanemulder at rattyshack.ca (duanemulder at rattyshack.ca) Date: Tue, 16 Apr 2013 12:07:05 Subject: Least connections and multi_accept In-Reply-To: <20130416115026.GT92338@mdounin.ru> References: <20130416115026.GT92338@mdounin.ru> Message-ID: <20130416120705.80A4F250075@homiemail-a18.g.dreamhost.com> An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 16 14:21:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Apr 2013 18:21:03 +0400 Subject: nginx-1.3.16 Message-ID: <20130416142103.GY92338@mdounin.ru> Changes with nginx 1.3.16 16 Apr 2013 *) Bugfix: a segmentation fault might occur in a worker process if subrequests were used; the bug had appeared in 1.3.9. *) Bugfix: the "tcp_nodelay" directive caused an error if a WebSocket connection was proxied into a unix domain socket. *) Bugfix: the $upstream_response_length variable has an incorrect value "0" if buffering was not used. Thanks to Piotr Sikora. *) Bugfix: in the eventport and /dev/poll methods. -- Maxim Dounin http://nginx.org/en/donation.html From david at styleflare.com Tue Apr 16 14:47:17 2013 From: david at styleflare.com (David | StyleFlare) Date: Tue, 16 Apr 2013 10:47:17 -0400 Subject: Rewrite rule for all domains. Message-ID: <516D6475.8090104@styleflare.com> Pardon me if I missed this in the docs... My issue is that I want to rewrite every domain and not create a server block for each. I am trying to rewrite every domain that's pointing to Nginx from www.server.com to server.com Currently I am doing uwsgi_pass unix://$host.sock but if the host name has the www prefix then my socket is not found...
Currently I am using the default nginx.conf file with only one line added, uwsgi_pass. Thanks in advance for any help. From contact at jpluscplusm.com Tue Apr 16 15:07:26 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 16 Apr 2013 16:07:26 +0100 Subject: Rewrite rule for all domains. In-Reply-To: <516D6475.8090104@styleflare.com> References: <516D6475.8090104@styleflare.com> Message-ID: On 16 April 2013 15:47, David | StyleFlare wrote: > Pardon me if I missed this in the docs... > > My issue is that I want to rewrite every domain and not create a server > block for each. > > I am trying to rewrite every domain that's pointing to Nginx > > from www.server.com to server.com Have a single separate server block do it for you: server { listen 80; server_name ~^(www\.)(?<domain>.+)$; rewrite ^ $scheme://$domain$uri$is_args$args? permanent; } (written but not tested; YMMV!) Jonathan From david at styleflare.com Tue Apr 16 15:15:15 2013 From: david at styleflare.com (David | StyleFlare) Date: Tue, 16 Apr 2013 11:15:15 -0400 Subject: Rewrite rule for all domains. In-Reply-To: References: <516D6475.8090104@styleflare.com> Message-ID: <516D6B03.6050402@styleflare.com> Thanks, I will test it. Appreciate it. On 4/16/13 11:07 AM, Jonathan Matthews wrote: > On 16 April 2013 15:47, David | StyleFlare wrote: >> Pardon me if I missed this in the docs... >> >> My issue is that I want to rewrite every domain and not create a server >> block for each. >> >> I am trying to rewrite every domain that's pointing to Nginx >> >> from www.server.com to server.com > Have a single separate server block do it for you: > > server { > listen 80; > server_name ~^(www\.)(?<domain>.+)$; > rewrite ^ $scheme://$domain$uri$is_args$args? permanent; > } > > (written but not tested; YMMV!)
> Jonathan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From appa at perusio.net Tue Apr 16 15:18:50 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Tue, 16 Apr 2013 17:18:50 +0200 Subject: Rewrite rule for all domains. In-Reply-To: <516D6475.8090104@styleflare.com> References: <516D6475.8090104@styleflare.com> Message-ID: Adding to Jonathan's suggestion with a twist: 1. Use a map directive at the http level: map $host $rewrite_domain { default 0; ~^www\.(?<domain>.*)$ $domain; } 2. Create a default server while leaving all vhosts using only the base domain. server { listen 80 default_server; if ($rewrite_domain) { return 301 $scheme://$rewrite_domain$request_uri; } } ----appa On Tue, Apr 16, 2013 at 4:47 PM, David | StyleFlare wrote: > Pardon me if I missed this in the docs... > > My issue is that I want to rewrite every domain and not create a server > block for each. > > I am trying to rewrite every domain that's pointing to Nginx > > from www.server.com to server.com > > Currently I am doing uwsgi_pass unix://$host.sock > > but if the host name has the www prefix then my socket is not found... > > Currently I am using the default nginx.conf file with only one line added > uwsgi_pass > > Thanks in advance for any help. > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 16 16:04:32 2013 From: nginx-forum at nginx.us (abstein2) Date: Tue, 16 Apr 2013 12:04:32 -0400 Subject: Status Code 001 In Logs In-Reply-To: <20130411122710.GO62550@mdounin.ru> References: <20130411122710.GO62550@mdounin.ru> Message-ID: <104e72e65ace91864cc5c9aab7288bc3.NginxMailingListEnglish@forum.nginx.org> Sorry for the delay.
I do think part of the issue is tied to the load balancer since that connection is timing out (we set it very low for testing purposes), but the load balancer terminating the connection doesn't explain why nginx is returning a 001 status code, since the connection from the nginx box to the origin box shouldn't be affected by that. nginx -V: nginx version: nginx/1.2.3 built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1) TLS SNI support enabled configure arguments: --prefix=/usr --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --with-http_ssl_module --with-http_gzip_static_module --error-log-path=/home/logs/error.log --pid-path=/etc/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --with-http_geoip_module --with-http_addition_module --with-http_sub_module --with-http_stub_status_module --with-http_ssl_module --with-debug --with-http_realip_module --with-http_perl_module --add-module=/etc/nginx/yaoweibin-nginx-eval-module-706056b/ --add-module=/etc/nginx/yaoweibin-nginx_http_recaptcha_module-5d8f6b2/ --add-module=/etc/nginx/yaoweibin-nginx_secure_cookie_module-bcc69e5/ --add-module=/etc/nginx/ngx_http_redis-0.3.5/ --add-module=/etc/nginx/ngx_http_substitutions_filter_module/ --add-module=/etc/nginx/ngx_cache_purge-1.5/ --add-module=/etc/nginx/agentzh-redis2-nginx-module-1d79e5b/ --with-cc-opt=-Wno-error Debug Log: 2013/04/16 15:38:37 [debug] 28029#0: *629969 rewrite phase: 3 2013/04/16 15:38:37 [debug] 28029#0: *629969 rewrite phase: 4 2013/04/16 15:38:37 [debug] 28029#0: *629969 post rewrite phase: 5 2013/04/16 15:38:37 [debug] 28029#0: *629969 generic phase: 6 2013/04/16 15:38:37 [debug] 28029#0: *629969 realip: "69.0.0.0" 2013/04/16 15:38:37 [debug] 28029#0: *629969 generic phase: 7 2013/04/16 15:38:37 [debug] 28029#0: *629969 generic phase: 8 2013/04/16 15:38:37 [debug] 28029#0: *629969 access phase: 9 2013/04/16 15:38:37 [debug] 28029#0: *629969 access phase: 10 2013/04/16 15:38:37 [debug] 28029#0: *629969 post access phase: 11 2013/04/16 15:38:37 [debug] 
28029#0: *629969 try files phase: 12 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl handler 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "a" 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "v" 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "i" 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "r" 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "b" 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "upstream_http_content_type" 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "upstream_status" 2013/04/16 15:38:37 [debug] 28029#0: *629969 call_sv: 1 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl handler done: 1 2013/04/16 15:38:37 [debug] 28029#0: *629969 http finalize request: 1, "/panel/?" a:1, c:2 2013/04/16 15:38:37 [debug] 28029#0: *629969 http terminate request count:2 2013/04/16 15:38:37 [debug] 28029#0: *629969 http terminate cleanup count:2 blk:0 2013/04/16 15:38:37 [debug] 28029#0: *629969 http finalize request: -4, "/panel/?" a:1, c:2 2013/04/16 15:38:37 [debug] 28029#0: *629969 http request count:2 blk:0 2013/04/16 15:38:37 [debug] 28029#0: *629969 http posted request: "/panel/?" 
2013/04/16 15:38:37 [debug] 28029#0: *629969 http terminate handler count:1 2013/04/16 15:38:37 [debug] 28029#0: *629969 http request count:1 blk:0 2013/04/16 15:38:37 [debug] 28029#0: *629969 http close request 2013/04/16 15:38:37 [debug] 28029#0: *629969 http log handler 2013/04/16 15:38:37 [debug] 28029#0: *629969 run cleanup: 000000000DA332C8 2013/04/16 15:38:37 [debug] 28029#0: *629969 file cleanup: fd:18 2013/04/16 15:38:37 [debug] 28029#0: *629969 run cleanup: 00000000045C8C60 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 000000000F2C6570 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 00000000045C8210, unused: 0 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 000000000A2BB580, unused: 1 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 000000000DA32A90, unused: 0 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 00000000034DEB30, unused: 2901 2013/04/16 15:38:37 [debug] 28029#0: *629969 close http connection: 16 2013/04/16 15:38:37 [debug] 28029#0: *629969 reusable connection: 0 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 000000000DCE5A40 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 00000000038A6F70 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 00000000028DF7D0, unused: 8 2013/04/16 15:38:37 [debug] 28029#0: *629969 free: 0000000004433C10, unused: 115 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238267,238416#msg-238416 From mdounin at mdounin.ru Tue Apr 16 16:22:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Apr 2013 20:22:55 +0400 Subject: Status Code 001 In Logs In-Reply-To: <104e72e65ace91864cc5c9aab7288bc3.NginxMailingListEnglish@forum.nginx.org> References: <20130411122710.GO62550@mdounin.ru> <104e72e65ace91864cc5c9aab7288bc3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130416162255.GC92338@mdounin.ru> Hello! On Tue, Apr 16, 2013 at 12:04:32PM -0400, abstein2 wrote: > Sorry for the delay. 
> I do think part of the issue is tied to the load balancer since that connection is timing out (we set it very low for testing purposes), but the load balancer terminating the connection doesn't explain why nginx is returning a 001 status code, since the connection from the nginx box to the origin box shouldn't be affected by that.

[...]

> 2013/04/16 15:38:37 [debug] 28029#0: *629969 access phase: 9
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 access phase: 10
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 post access phase: 11
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 try files phase: 12
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl handler
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "a"
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "v"
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "i"
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "r"
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "b"
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "upstream_http_content_type"
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl variable: "upstream_status"
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 call_sv: 1
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 perl handler done: 1
> 2013/04/16 15:38:37 [debug] 28029#0: *629969 http finalize request: 1, "/panel/?" a:1, c:2

As per debug log, the request in question was finalized from a perl handler with status code 1. Likely just a bug in your perl code and it's meant to return 200.
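For reference, a minimal embedded-perl handler of the kind being discussed; the package name and the location are hypothetical, not taken from the poster's configuration. In nginx's perl module, the handler's return value is the status nginx finalizes the request with, so if the last evaluated expression is a bare 1 and there is no explicit return, the access log ends up showing "001":

```perl
# Hypothetical Panel.pm for nginx's embedded perl module, loaded via
# "perl_modules"/"perl_require" and mapped with
# "location /panel/ { perl Panel::handler; }".
package Panel;

use nginx;  # exports OK, DECLINED and the HTTP_* status constants

sub handler {
    my $r = shift;

    $r->send_http_header("text/html");
    return OK if $r->header_only;

    $r->print("panel\n");

    return OK;  # explicit status; a bare "1" falling out of the sub
                # instead would surface as status "001" in the log
}

1;
__END__
```

This is only a sketch of the convention; the same explicit-return rule applies to handlers run via post_action, which is where the thread ends up below.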
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Apr 16 19:39:35 2013 From: nginx-forum at nginx.us (abstein2) Date: Tue, 16 Apr 2013 15:39:35 -0400 Subject: Status Code 001 In Logs In-Reply-To: <20130416162255.GC92338@mdounin.ru> References: <20130416162255.GC92338@mdounin.ru> Message-ID: <6c49a1ca02303b457b3cf9a6f9b33654.NginxMailingListEnglish@forum.nginx.org> Based on your post, I actually dug a little bit deeper, because there was nowhere in my Perl I could find that returned 1. After disabling most of the Perl, I was getting 499 errors, which made sense: the client was closing the connection. It looks like part of the issue is that attached to the location I have a post_action that runs another Perl method. This Perl method doesn't have a return value and, in its place, nginx is just using 001. When adding a return code to this method, it takes over the $status variable in the nginx log file. In general, having the method return $r->variable('status') seems to properly emulate exactly which codes the proxy is actually returning to the browser. The one exception to this (that I can find), though, is if the nginx return code was 499. Then I just see 009 in my log files. Is there anything better in nginx I can use than $status for getting the status code returned to the browser? Alternatively, is there anything I can do so that the log file doesn't read the status code from the post_action and instead just reads it from the initial location? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238267,238418#msg-238418 From nginx-forum at nginx.us Tue Apr 16 20:28:20 2013 From: nginx-forum at nginx.us (jeff7091) Date: Tue, 16 Apr 2013 16:28:20 -0400 Subject: Any equivalent of mod_dumpio Message-ID: Is there anything like Apache's mod_dumpio? Any way to get a full running trace of requests & responses, including the body data? Would such a thing be useful to anyone besides me?
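If no ready-made equivalent of mod_dumpio turns up, one conceivable route for the response-body half of this is a custom output-body filter. The sketch below is only illustrative — the module name is invented, and the usual nginx module boilerplate (the ngx_http_module_t and ngx_module_t structs registering the init function as a postconfiguration hook) is omitted. It logs the size of each buffer passing through the filter chain and hands the chain on unchanged; capturing request (POST) bodies would need a separate request-body hook.

```c
/* Sketch of an nginx response-body filter (all names hypothetical). */
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* saved pointer to the next filter in the chain */
static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
ngx_http_dumpio_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    ngx_chain_t  *cl;

    for (cl = in; cl; cl = cl->next) {
        /* log the size of each outgoing buffer; dumping the bytes
         * themselves would iterate cl->buf->pos .. cl->buf->last */
        ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
                      "dumpio: %O bytes in output buffer",
                      ngx_buf_size(cl->buf));
    }

    /* pass the chain through untouched */
    return ngx_http_next_body_filter(r, in);
}

static ngx_int_t
ngx_http_dumpio_filter_init(ngx_conf_t *cf)
{
    /* splice this filter into the top of the body filter chain */
    ngx_http_next_body_filter = ngx_http_top_body_filter;
    ngx_http_top_body_filter = ngx_http_dumpio_body_filter;

    return NGX_OK;
}
```

This is the standard chaining pattern nginx filter modules use; whether it is the right place for a full dump tool is exactly the question raised later in the thread.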
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238419,238419#msg-238419 From kworthington at gmail.com Wed Apr 17 02:12:56 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 16 Apr 2013 22:12:56 -0400 Subject: nginx-1.3.16 In-Reply-To: <20130416142103.GY92338@mdounin.ru> References: <20130416142103.GY92338@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.16 For Windows http://goo.gl/w5Nam (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Apr 16, 2013 at 10:21 AM, Maxim Dounin wrote:

> Changes with nginx 1.3.16                                        16 Apr 2013
>
> *) Bugfix: a segmentation fault might occur in a worker process if subrequests were used; the bug had appeared in 1.3.9.
>
> *) Bugfix: the "tcp_nodelay" directive caused an error if a WebSocket connection was proxied into a unix domain socket.
>
> *) Bugfix: the $upstream_response_length variable had an incorrect value "0" if buffering was not used. Thanks to Piotr Sikora.
>
> *) Bugfix: in the eventport and /dev/poll methods.
>
> -- Maxim Dounin http://nginx.org/en/donation.html
>
> _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From xanto at egaming.ro Wed Apr 17 06:35:33 2013 From: xanto at egaming.ro (Mike) Date: Wed, 17 Apr 2013 09:35:33 +0300 Subject: Nginx connection close module Message-ID: Hello, I'm sending you this email after some hours of struggling on google and nginx-related pages/articles/wikis looking for an nginx plugin for tcp disconnect/reset of a client. My plan is to make a reverse mail (imap/pop3) proxy using nginx. I need an nginx module to close/terminate a particular client connection in cases where, for example, the account password was changed. MUAs like thunderbird/outlook tend to keep some persistent connections with the server (5 by default). Now, the question is: do you know of any module for nginx which could help me implement such a thing? Thank you in advance, *Mike.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From edigarov at qarea.com Wed Apr 17 08:02:10 2013 From: edigarov at qarea.com (Gregory Edigarov) Date: Wed, 17 Apr 2013 11:02:10 +0300 Subject: Any equivalent of mod_dumpio In-Reply-To: References: Message-ID: <516E5702.3050508@qarea.com> On 04/16/2013 11:28 PM, jeff7091 wrote: > Is there anything like Apache's mod_dumpio? Any way to get a full running > trace of request & responses, including the body data? Would such a thing be > useful to anyone besides me? Does the debug log (http://nginx.org/en/docs/debugging_log.html) suffice? -- With best regards, Gregory Edigarov From nginx-forum at nginx.us Wed Apr 17 08:42:10 2013 From: nginx-forum at nginx.us (mamoos1) Date: Wed, 17 Apr 2013 04:42:10 -0400 Subject: Dynamic upstream configuration Message-ID: Hi, I have nginx configured as a reverse proxy using proxy_pass. This is a dynamic reverse proxy: it fetches content for users in accordance with the Host header in the requests. The problem I encountered is that the backends are normally SSL, and every request that goes out has to re-handshake (since there is no keep-alive between nginx and the backend servers).
I know that I can configure keep-alive with upstream - but that requires me to know upfront which servers will be used (which I don't, since I dynamically fetch content according to headers). Is there a way to configure the proxy to keep connections with backend servers alive dynamically, or for some time? Thanks, Roy. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238424,238424#msg-238424 From wmark+nginx at hurrikane.de Wed Apr 17 13:31:40 2013 From: wmark+nginx at hurrikane.de (W-Mark Kubacki) Date: Wed, 17 Apr 2013 15:31:40 +0200 Subject: Cross-compiling Nginx for ARM? In-Reply-To: References: <967BDB2F-7D4C-4B7C-A1A2-C5DE31BBED34@gmail.com> Message-ID: 2013/4/16 Shohreh: > djczaski Wrote: > > Thanks for the input. By any chance, did you write a tutorial that I could > use to try and compile it for that other ARM processor? Here you go: [1] http://mark.ossdl.de/en/2009/09/nginx-on-sheevaplug.html You don't need to patch Nginx anymore and can skip step 7. I've run a Gentoo binhost for the ARM architecture, compatible with the Sheevaplug's Kirkwood 88F6281 "Feroceon". Some binaries might work on Ubuntu, though I've switched to Gentoo: [2] http://binhost.ossdl.de/ARM/armv5tel-softfloat-linux-gnueabi/ (see www-servers there; "Packages" is a plaintext file which lists the contents of the binhost) More: [3] http://mark.ossdl.de/en/2009/09/gentoo-on-the-sheevaplug.html [4] http://mark.ossdl.de/en/2009/09/network-booting-linux-on-the-sheevaplug.html [5] http://mark.ossdl.de/en/2009/09/cross-compiling-for-the-sheevaplug-kernel-distcc.html [6] http://mark.ossdl.de/en/2009/10/sheevaplug-kernel-and-gentoo-binhost.html Links to git.ossdl.de don't work, but you can download my modified kernel, get its ".config" and compile your own. Most patches (excluding the one for SATA on the SheevaPlug) have already been integrated into Linux.
[7] http://mark.ossdl.de/en/2010/04/howto-extend-the-sheevaplug-by-esata.html If I were you I would go for a Mikrotik Routerboard (the RB951G-2HnD is excellent except for its lack of 5GHz wifi). Those are MIPS machines, though. ;-) -- Mark From nginx-forum at nginx.us Wed Apr 17 16:33:34 2013 From: nginx-forum at nginx.us (jeff7091) Date: Wed, 17 Apr 2013 12:33:34 -0400 Subject: Any equivalent of mod_dumpio In-Reply-To: <516E5702.3050508@qarea.com> References: <516E5702.3050508@qarea.com> Message-ID: <13842d1850c391bb350e01b5b378a933.NginxMailingListEnglish@forum.nginx.org> Just tried that out. Quite a nice dump, but I don't see the body in the output. I'm tracing ActiveSync via a proxy, wanting to glean the WBXML in the body, kinda like using a proxy as a sniffer. If I have to develop this, is the debug feature the right place? I could see creating a filter module, maybe? Note that I also want to be able to capture & display POST data going toward the server. Advice appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238419,238434#msg-238434 From nbari at inbox.im Wed Apr 17 21:26:58 2013 From: nbari at inbox.im (Nicolas de Bari Embriz Garcia Rojas) Date: Wed, 17 Apr 2013 22:26:58 +0100 Subject: resumable uploads using PUT method Message-ID: <9CAF14F1-3160-4524-A6DC-805AFAE8BD83@inbox.im> Hi, is there a way to use the PUT method instead of POST when using the upload module (http://www.grid.net.ru/nginx/resumable_uploads.en.html)? Regards. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Thu Apr 18 04:55:16 2013 From: nginx-forum at nginx.us (davidjb) Date: Thu, 18 Apr 2013 00:55:16 -0400 Subject: Subrequests: returning response to client Message-ID: <1d70f5d28572c99486a453d8c6e38e8e.NginxMailingListEnglish@forum.nginx.org> I'm currently looking to extend the 'Auth Request' (http://mdounin.ru/hg/ngx_http_auth_request_module/) Nginx add-on module and have the module be able to conform (at least wherever possible) to the FastCGI "Authorizer" specification. The full specification is at http://www.fastcgi.com/drupal/node/22#S6.3 - the idea being the configured authorizer should be hit with a sub-request, and if a 200 is returned, then access is allowed, with some manipulation of special headers. This is fine, and I've successfully extended the existing auth-request code. However, for any other response aside from 200, the Authorizer specification states that the server must send the response status, headers, and content back to the HTTP client. In my specific FastCGI authorizer, it sends various 301/302 redirects to do single sign on. This is where I've gotten stuck. So far, within the ngx_http_auth_request_handler function, I've managed to have this mostly working using this:

    ngx_http_request_t *sr;
    ...
    if (authorizer) {
        sr = ctx->subrequest;
        r->headers_out = sr->headers_out;
        return ctx->status;
    }

which results in sending the subrequest's headers and status back to the client. I'm unsure if it is sensible to replace headers in this fashion - eg performance or memory wise - so could someone please comment on this? It does work, though. However, all that said, the subrequest's response body is the missing piece of the puzzle. I did figure out that the ``sr`` above defaults to ``sr->header_only = 1``, so it is ignoring the response body.
However, turning this off results in the subrequest returning the body response to the client before main response headers can be sent -- exactly as per the discussion here: http://forum.nginx.org/read.php?2,222427,222543 . Curiously, only the sub-request response body is sent to the client, not the headers as well. If this behaviour for the subrequest could be changed to send its headers as well, then this would solve my problem. One way or another, I'd like to get the subrequest's response to the client as the whole response. Is this possible, and if so, how? Thanks in advance! -- David Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238444,238444#msg-238444 From agentzh at gmail.com Thu Apr 18 06:21:21 2013 From: agentzh at gmail.com (agentzh) Date: Wed, 17 Apr 2013 23:21:21 -0700 Subject: [ANN] ngx_openresty stable version 1.2.7.6 released Message-ID: Hello! I am happy to announce that the new stable version of ngx_openresty, 1.2.7.6, is now released: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (devel) release, 1.2.7.5:

* upgraded LuaNginxModule to 0.7.21.
* bugfix: boolean values in an array table were rejected with the exception "attempt to use boolean as query arg value" while encoding a Lua (hash) table as URL arguments. thanks Calin Don for reporting this issue.
* bugfix: ngx.req.raw_header() would return an empty string value when the default header buffer ("c->buffer") can hold the request line but not the whole header. thanks KDr2 for reporting this issue.
* upgraded EncryptedSessionNginxModule to 0.03.
* refactor: fixed typos in the source code: replacing "3des" with "aes"; thanks Edgar Liu for reporting this issue.
* upgraded IconvNginxModule to 0.10.
* bugfix: failed to build on Solaris with the bogus error message "ngx_devel_kit is required to build ngx_iconv; please put it before ngx_iconv".
The HTML version of the change log with some useful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002007 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From nginx-forum at nginx.us Thu Apr 18 08:37:37 2013 From: nginx-forum at nginx.us (HajoLOcke) Date: Thu, 18 Apr 2013 04:37:37 -0400 Subject: HealthCheck in Baseconfiguration Message-ID: Hello, I use nginx 1.0.12 and 1.2.4. I know there is a special healthcheck module, but I want to know whether nginx does a kind of healthcheck in its basic configuration, without the special HttpHealthcheckModule. Is nginx able to check and/or weight its backends in a simple base configuration? Thanks, Hajo Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238450,238450#msg-238450 From andrew at nginx.com Thu Apr 18 08:42:15 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Thu, 18 Apr 2013 12:42:15 +0400 Subject: HealthCheck in Baseconfiguration In-Reply-To: References: Message-ID: <69331B8A-C2F8-4DC7-B8D4-3302CAFCA249@nginx.com> On Apr 18, 2013, at 12:37 PM, "HajoLOcke" wrote: > Hello, > > i use nginx 1.0.12 and 1.2.4 I know there is a special healthcheckmodule. But i want to know if nginx ist doing a kind of healtcheck in its basic Configuration without the special HttpHealthcheckModule. Is nginx able to check and/or weight his backend in a simple base Configuration? There has always been a mechanism to check backend state in nginx's load balancing code. However, that's not "out-of-band" health checking, as you pointed out.
It's more like an integrated or "in-band" synchronous check. Rationale for that is that nginx was developed and mostly used across high load environments, so an asynchronous health check running every few minutes doesn't really help when you've got thousands of requests per second per backend. We always check if a particular request has failed, and you've got means to configure the restoration logic. http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server > Thanks, > Hajo > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238450,238450#msg-238450 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Thu Apr 18 08:55:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Apr 2013 12:55:13 +0400 Subject: Subrequests: returning response to client In-Reply-To: <1d70f5d28572c99486a453d8c6e38e8e.NginxMailingListEnglish@forum.nginx.org> References: <1d70f5d28572c99486a453d8c6e38e8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130418085513.GL92338@mdounin.ru> Hello! On Thu, Apr 18, 2013 at 12:55:16AM -0400, davidjb wrote: > I'm currently looking to extend the 'Auth Request' > (http://mdounin.ru/hg/ngx_http_auth_request_module/) Nginx add-on module and > have the module be able to conform (at least wherever possible) to the > FastCGI "Authorizer" specification. The full specification is at > http://www.fastcgi.com/drupal/node/22#S6.3 - the idea being the configured > authorizer should be hit with a sub-request, and if a 200 is returned, then > access allowed, with some manipulation of special headers. This is fine and > I've successfully written extended the existing auth-request code. > > However, for any other response aside from 200, the Authorizer specification > states that the server must send the response status, headers, and content > back to the HTTP client.
> In my specific FastCGI authorizer, it sends various 301/302 redirects to do single sign on. This is where I've gotten stuck.
>
> So far, within the ngx_http_auth_request_handler function, I've managed to have this mostly working using this:
>
>     ngx_http_request_t *sr;
>     ...
>     if (authorizer) {
>         sr = ctx->subrequest;
>         r->headers_out = sr->headers_out;
>         return ctx->status;
>     }
>
> which results in sending the subrequest's headers and status back to the client. I'm unsure if this is sensible in replacing headers in this fashion - eg performance or memory wise - so could someone please comment on this? It does work though.

While this might work, I wouldn't recommend relying on it. This way headers are processed within subrequest context, and then again within main request context. This might potentially result in headers being not in sync with filter module contexts, resulting in an invalid response. A safer approach would be to copy only specific headers. (On the other hand, I don't think anything bad would happen with standard filter modules, as most of them just ignore subrequests.)

> However, all that said, the subrequest's response body is the missing piece of the puzzle. I did figure out that the ``sr`` above defaults to ``sr->header_only = 1``, so it is ignoring the response body. However, turning this off results in the subrequest returning the body response to the client before main response headers can be sent -- exactly as per the discussion here: http://forum.nginx.org/read.php?2,222427,222543 . Curiously, only the sub-request response body is sent to the client, not the headers as well. If this behaviour for the subrequest could be changed to send its headers as well, then this would solve my problem.
>
> One way or another, I'd like to get the subrequest's response to the client as the whole response.
What you are trying to do should be possible with NGX_HTTP_SUBREQUEST_IN_MEMORY - this way the subrequest body will be available in memory instead of being sent to the client. But it's not currently supported for fastcgi. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Apr 18 09:27:18 2013 From: nginx-forum at nginx.us (double) Date: Thu, 18 Apr 2013 05:27:18 -0400 Subject: Is it possible that nginx will not buffer the client body? In-Reply-To: References: Message-ID: <79927bd67b3b04510beaa2f85eeab336.NginxMailingListEnglish@forum.nginx.org> Hmm, this patch is still not in nginx-1.3.16. It would be great to have this feature in vanilla nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234926,238461#msg-238461 From nginx-forum at nginx.us Thu Apr 18 09:39:45 2013 From: nginx-forum at nginx.us (HajoLOcke) Date: Thu, 18 Apr 2013 05:39:45 -0400 Subject: HealthCheck in Baseconfiguration In-Reply-To: <69331B8A-C2F8-4DC7-B8D4-3302CAFCA249@nginx.com> References: <69331B8A-C2F8-4DC7-B8D4-3302CAFCA249@nginx.com> Message-ID: Hello, thanks for your quick response. I tried the HttpHealthcheckModule 2 years ago. Back then this module only worked in combination with round_robin and not ip_hash. Has this situation changed? Maybe I don't need this module, but I want to do some tests with it. Thanks, Hajo Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238450,238465#msg-238465 From crazynuxer at gmail.com Thu Apr 18 09:40:37 2013 From: crazynuxer at gmail.com (crazynuxer) Date: Thu, 18 Apr 2013 16:40:37 +0700 Subject: listen backlog Message-ID: Dear All, I have several vhosts with nginx. When I set backlog on one vhost, I get the error nginx: [emerg] duplicate listen options for 0.0.0.0:80 in My question is: when we configure 1 vhost with the backlog option, will it be applied to all vhosts or not?
and if not, how do I configure the backlog option? I set backlog in my vhost: listen *:80 backlog=65535; thanks, Rizki -- -everything will be okay in the end, if it's not okay, it's not the end -fall down seven times, it means we must be able to stand up eight times http://rizki.us crazynuxer at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Thu Apr 18 09:45:29 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 18 Apr 2013 13:45:29 +0400 Subject: listen backlog In-Reply-To: References: Message-ID: On Apr 18, 2013, at 13:40 , crazynuxer wrote: > Dear All, > > I have some vhost with nginx , when I set backlog at some vhost, I get error > > nginx: [emerg] duplicate listen options for 0.0.0.0:80 in > > my question is. When we configure 1 vhost with backlog option , it will be applied to all vhost or not? Yes, it is applied to all vhosts listening on the same port. This is a property of the port. -- Igor Sysoev http://nginx.com/services.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From rizki at detik.com Thu Apr 18 10:04:14 2013 From: rizki at detik.com (crazynuxer) Date: Thu, 18 Apr 2013 17:04:14 +0700 Subject: listen backlog In-Reply-To: References: Message-ID: thanks, Rizki On Thu, Apr 18, 2013 at 4:45 PM, Igor Sysoev wrote: > On Apr 18, 2013, at 13:40 , crazynuxer wrote: > > Dear All, > > I have some vhost with nginx , when I set backlog at some vhost, I get > error > > nginx: [emerg] duplicate listen options for 0.0.0.0:80 in > > my question is. When we configure 1 vhost with backlog option , it will be > applied to all vhost or not? > > > Yes, it is applied to all vhosts listening on the same port. This is > a property of the port.
> > -- > Igor Sysoev > http://nginx.com/services.html > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- -make a habit of bottom-posting and deleting what is unnecessary -everything will be okay in the end, if it's not okay, it's not the end -fall down seven times, it means we must be able to stand up eight times -If I Fail, I try again and again and again. Cause I Believe, there is GAIN in aGAIN!" -simpati: 08119625241 http://rizki.us crazynuxer at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 18 10:52:50 2013 From: nginx-forum at nginx.us (akam) Date: Thu, 18 Apr 2013 06:52:50 -0400 Subject: Exchange / Outlook - RPC Method and Error 405 In-Reply-To: <95767d411dbb22e16b4aa57668f8d49a.NginxMailingListEnglish@forum.nginx.org> References: <20130228152003.GJ81985@mdounin.ru> <95767d411dbb22e16b4aa57668f8d49a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <67e0b3cc9375b95618925e22ea133f5f.NginxMailingListEnglish@forum.nginx.org> So, does anyone know whether this feature will be in future betas or releases? (I tried to use haproxy, but it doesn't work :( can you please share your config?) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236709,238473#msg-238473 From nginx-forum at nginx.us Thu Apr 18 11:07:55 2013 From: nginx-forum at nginx.us (gmor) Date: Thu, 18 Apr 2013 07:07:55 -0400 Subject: Exchange / Outlook - RPC Method and Error 405 In-Reply-To: <67e0b3cc9375b95618925e22ea133f5f.NginxMailingListEnglish@forum.nginx.org> References: <20130228152003.GJ81985@mdounin.ru> <95767d411dbb22e16b4aa57668f8d49a.NginxMailingListEnglish@forum.nginx.org> <67e0b3cc9375b95618925e22ea133f5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1859b9e4c764feb722c9339815b290cb.NginxMailingListEnglish@forum.nginx.org> Hi, Happy to share my config. This is based on HAProxy Version 1.5-Dev17.
It's by no means perfect, but it's working for us at the moment:

global
    # Default Maximum Number of Connections. Used to set ulimit -n
    maxconn 20000
    # Run as a Daemon Service in the Background
    daemon
    # Define the Number of Processor Cores - Not Essential
    #nbproc 2
    # Allows Turning Off of Kernel TCP Splicing - Not Essential
    #nosplice
    # Logging Setting. Local to Local Syslog and Control from There
    log 127.0.0.1 daemon
    log-send-hostname
    log-tag haproxy
    # Define a UNIX Socket so that you can Admin the Service interactively
    stats socket /usr/local/sbin/haproxy-socket level admin

defaults
    # Do Not Log Connections with No Requests
    option dontlognull
    # Force Clients to try and Reconnect to an Alternative Server if one is Down
    option redispatch
    # Ensure that Streaming HTTP Works Correctly - Vital for Outlook Anywhere
    option http-no-delay
    # Enable Continuous Stats for Long Running Connections
    option contstats
    # Log All HTTP Data
    option httplog
    # Log Requests and Responses as Fast as Possible
    option logasap
    # Set Logging to the Setting in Global
    log global
    # Define the Method of Load Balancing - source = Source IP Hash
    balance source
    # Client Inactivity Timeout
    #timeout client 900s
    timeout client 3600s
    # Server Inactivity Timeout
    #timeout server 900s
    timeout server 3600s
    # Maximum Time a Request is Queued on the Load Balancer
    timeout queue 30s
    # Other Timeouts - Need Investigating
    timeout connect 5s
    timeout http-keep-alive 1s
    timeout http-request 15s
    timeout tarpit 1m
    # Define the Default Server Checking Behaviour - 10 seconds, 3 Missed Checks is Failure, 2 Successful Checks Bring a Server Back
    default-server inter 10s fall 3 rise 2

userlist stats-auth
    # User / Password for Admin Access to Stats Page
    group stats-admin users admin
    user admin password [Removed]
    # User / Password for Monitor Access to Stats Page
    group stats-readonly users monitor
    user monitor password [Removed]

listen stats
    # Define the Mode
    mode http
    # Bind to an IP Address/Port
    bind 10.2.1.1:8080
    # Define ACLs to be Used in the Stats Authentication Process
    acl AUTH-readonly http_auth_group(stats-auth) stats-readonly
    acl AUTH-admin http_auth_group(stats-auth) stats-admin
    acl net-allowed src 10.3.1.8/29 10.4.1.8/29
    # Enable Various Stats Features
    stats enable
    stats show-desc Load Balancer for Exchange
    stats uri /
    stats refresh 10s
    # Enable Stats Auth
    stats http-request auth unless AUTH-admin OR AUTH-readonly
    stats admin if AUTH-admin
    # Block Access Unless in the Allow Network Range
    block unless net-allowed

frontend ft_exchange
    # Define the Mode
    mode http
    # Define the Maximum Number of Connections for the Frontend
    maxconn 8000
    # Bind to an IP Address/Port, Select SSL and specify the Certificate
    # The Ciphers option for SSL can be Added: ciphers
    bind 10.2.1.1:443 ssl crt /etc/ssl/crt.domain.com.pem ciphers TLSv1+SSLv3+HIGH:!aNULL:!eNULL
    # Define a List of Accepted ACLs for Future use
    acl all-exchange path_beg -i /autodiscover /owa /oab /ews /public /microsoft-server-activesync /rpc
    acl root url_len 1
    acl autodiscover path_beg -i /autodiscover
    acl owa path_beg -i /owa
    acl oab path_beg -i /oab
    acl ews path_beg -i /ews
    acl public path_beg -i /public
    acl activesync path_beg -i /microsoft-server-activesync
    acl outlook-anywhere path_beg -i /rpc
    # Block All Requests Except Those to Exchange Virtual Directories
    block unless all-exchange OR root
    # Redirect if the URL is a Single Character, which can only mean /
    redirect location /owa if root
    # Capture the User-Agent Header, so that it is Added to the Log
    capture request header User-Agent len 50
    capture request header Content-Length len 120
    capture response header Content-Length len 120
    # Define Which Set of Backend Servers to Use
    default_backend bk_exchange_all

backend bk_exchange_all
    # Define the Mode
    mode http
    # Define the Overall Maximum Number of Connections for the Backend
    fullconn 8000
    # Define the Backend Servers
    server exchange01 10.1.1.1:80 check
    server exchange02 10.1.1.2:80 check

(IP addresses and names have been changed to
protect the innocent). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236709,238474#msg-238474 From nginx-forum at nginx.us Thu Apr 18 11:46:34 2013 From: nginx-forum at nginx.us (akam) Date: Thu, 18 Apr 2013 07:46:34 -0400 Subject: Exchange / Outlook - RPC Method and Error 405 In-Reply-To: <1859b9e4c764feb722c9339815b290cb.NginxMailingListEnglish@forum.nginx.org> References: <20130228152003.GJ81985@mdounin.ru> <95767d411dbb22e16b4aa57668f8d49a.NginxMailingListEnglish@forum.nginx.org> <67e0b3cc9375b95618925e22ea133f5f.NginxMailingListEnglish@forum.nginx.org> <1859b9e4c764feb722c9339815b290cb.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thank you, I'll try to modify it, because it doesn't work for me either :) Apr 18 15:39:21 localhost haproxy[32134]: 217.118.93.107:25372 [18/Apr/2013:15:39:21.100] ft_exchange~ ft_exchange/ 129/-1/-1/-1/+129 302 +101 - - PR-- 0/0/0/0/0 0/0 "GET / HTTP/1.1" By the way, do you use haproxy only for exchange? Do your backends use http, not https? gmor Wrote: -------------------------------------------------------

> Hi,
>
> bind 10.2.1.1:443 ssl crt /etc/ssl/crt.domain.com.pem ciphers TLSv1+SSLv3+HIGH:!aNULL:!eNULL
>
> backend bk_exchange_all
>
> # Define the Backend Servers
> server exchange01 10.1.1.1:80 check
> server exchange02 10.1.1.2:80 check

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236709,238475#msg-238475 From kristofer at cybernetik.net Thu Apr 18 16:19:09 2013 From: kristofer at cybernetik.net (kristofer at cybernetik.net) Date: Thu, 18 Apr 2013 11:19:09 -0500 (CDT) Subject: Proxy cache In-Reply-To: <1104667803.589849.1366301793664.JavaMail.root@cybernetik.net> Message-ID: <466987793.590058.1366301949093.JavaMail.root@cybernetik.net> Hello, I am using nginx as a reverse proxy to cache content for an application. Requests to the application are expensive, so I would like to set up caching so that if the file exists in nginx, it won't even bother querying the backend server.
I can't seem to figure out what I am missing. This is how I am set up:

location /download {
    index index.html index.htm;
    proxy_pass http://x.x.x.x/download;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host application.domain.com;
    proxy_set_header Accept-Encoding "";
    proxy_ignore_headers Set-Cookie X-Accel-Expires Expires Cache-Control;
    if_modified_since off;
    add_header X-Cache-Status $upstream_cache_status;

    proxy_cache_valid 200 24h;
    expires 168h;
    proxy_cache staticfilecache;
}

proxy_cache_path /var/www/nginxcache/ levels=1:1:2 keys_zone=staticfilecache:2000m inactive=10800m;
proxy_cache_key "$scheme$host$request_uri$cookie_user";

So for all requests to /download, I want it to serve strictly from the cache. I do not want it to query the proxy_pass location at all (not even for last modified time) if the file exists in the local cache. I just want it to serve the cached copy and be done. Is this possible? -------------- next part -------------- An HTML attachment was scrubbed... URL: From hems.inlet at gmail.com Thu Apr 18 17:04:54 2013 From: hems.inlet at gmail.com (henrique matias) Date: Thu, 18 Apr 2013 18:04:54 +0100 Subject: Proxy cache In-Reply-To: <466987793.590058.1366301949093.JavaMail.root@cybernetik.net> References: <1104667803.589849.1366301793664.JavaMail.root@cybernetik.net> <466987793.590058.1366301949093.JavaMail.root@cybernetik.net> Message-ID: I'm completely sure it's possible; I'm just not an nginx specialist, so I might not point you in the best direction. But as far as I understand, in my little time together with this beautiful thing called nginx, you should have a look at this: http://wiki.nginx.org/HttpCoreModule#root Let me know the results you achieve, since I'll soon be running into the same problem :P On 18 April 2013 17:19, wrote: > Hello, > > I am using nginx as a reverse proxy to cache content for an application.
> Requests to the application are expensive, so I would like to set up > caching so that if the file exists in nginx, it won't even bother querying > the backend server. > > I can't seem to figure out what I am missing. > > This is how I am set up: > > location /download { > index index.html index.htm; > proxy_pass http://x.x.x.x/download; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host application.domain.com; > proxy_set_header Accept-Encoding ""; > proxy_ignore_headers Set-Cookie X-Accel-Expires Expires Cache-Control; > if_modified_since off; > add_header X-Cache-Status $upstream_cache_status; > > proxy_cache_valid 200 24h; > expires 168h; > proxy_cache staticfilecache; > } > > proxy_cache_path /var/www/nginxcache/ levels=1:1:2 > keys_zone=staticfilecache:2000m inactive=10800m; > proxy_cache_key "$scheme$host$request_uri$cookie_user"; > > So for all requests to /download, I want it to serve strictly from the > cache. I do not want it to query the proxy_pass location at all (not even > for last modified time) if the file exists in the local cache. I just want > it to serve the cached copy and be done. > > Is this possible? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedrich.locke at gmail.com Thu Apr 18 19:02:20 2013 From: friedrich.locke at gmail.com (Friedrich Locke) Date: Thu, 18 Apr 2013 16:02:20 -0300 Subject: kerberos Message-ID: I wonder if nginx supports kerberos in the same sense as apache does! I mean: with apache i may specify if i will want SSO or password (by retrieving the password from the kerberos database). The configuration options are: KrbMethodNegotiate off/on KrbMethodK5Passwd off/on These above are the directives for SSO or password (fetched from kdc database). 
Does nginx support it the same way too? Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedrich.locke at gmail.com Thu Apr 18 19:04:19 2013 From: friedrich.locke at gmail.com (Friedrich Locke) Date: Thu, 18 Apr 2013 16:04:19 -0300 Subject: php supports Message-ID: Hi folks, I wonder if I can use PHP with LDAP support in nginx? Is the kadm5 PECL extension supported too with PHP under nginx? Thanks once more. Best regards, Gustavo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Thu Apr 18 19:22:54 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 18 Apr 2013 20:22:54 +0100 Subject: php supports In-Reply-To: References: Message-ID: On 18 April 2013 20:04, Friedrich Locke wrote: > Hi folks, > > I wonder if I can use PHP with LDAP support in nginx? > Is the kadm5 PECL extension supported too with PHP under nginx? Nginx proxies requests to PHP, which runs separately from Nginx. PHP is not embedded within Nginx like it is with Apache. This means that Nginx plays no part in deciding which modules/classes/etc PHP can use. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From miguelmclara at gmail.com Thu Apr 18 19:47:39 2013 From: miguelmclara at gmail.com (miguelmclara at gmail.com) Date: Thu, 18 Apr 2013 20:47:39 +0100 Subject: php supports In-Reply-To: Message-ID: <20130418194739.4087931.13260.147@gmail.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From djczaski at gmail.com Thu Apr 18 20:15:23 2013 From: djczaski at gmail.com (djczaski) Date: Thu, 18 Apr 2013 16:15:23 -0400 Subject: websocket backend In-Reply-To: <5143B2CB.4080204@consbio.org> References: <5143B2CB.4080204@consbio.org> Message-ID: This looked interesting but I ran into two issues.
I need to be able to filter publishes on a connection basis. Also, the GPLv3 license is not suitable for my project. On Fri, Mar 15, 2013 at 7:46 PM, Nikolas Stevenson-Molnar < nik.molnar at consbio.org> wrote: > I haven't tried it yet, but nginx-push-stream-module looks good: > https://github.com/wandenberg/nginx-push-stream-module > > _Nik > > On 3/15/2013 4:24 PM, djczaski wrote: > > What are the best options for websocket backends? I'm working with an > > embedded platform so I'm somewhat restricted to something lightweight > > in C/C++. The things I found so far are: > > > > libwebsockets: http://git.warmcat.com/cgi-bin/cgit/libwebsockets/ > > poco: http://www.appinf.com/docs/poco/Poco.Net.WebSocket.html > > > > Another option would be to handle it right in Openresty/ngx_lua. I > > see there was some discussion about this a little while back: > > > > https://github.com/chaoslawful/lua-nginx-module/issues/165 > > > > What are the best options? > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moreda at allenta.com Fri Apr 19 23:18:40 2013 From: moreda at allenta.com (Roberto Moreda) Date: Sat, 20 Apr 2013 01:18:40 +0200 Subject: OCSP validation of client certificates Message-ID: <15ECDC00-DBB8-4473-BFFF-DA9AACBBE52C@allenta.com> Hi, Is someone working in OCSP validation of client certificates? 
I just want to know if someone is interested or working already before trying to do it myself :-) Cheers, Roberto From nginx-list at puzzled.xs4all.nl Sat Apr 20 13:41:30 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Sat, 20 Apr 2013 15:41:30 +0200 Subject: OCSP validation of client certificates In-Reply-To: <15ECDC00-DBB8-4473-BFFF-DA9AACBBE52C@allenta.com> References: <15ECDC00-DBB8-4473-BFFF-DA9AACBBE52C@allenta.com> Message-ID: <51729B0A.50907@puzzled.xs4all.nl> Hi Roberto, On 04/20/2013 01:18 AM, Roberto Moreda wrote: > Hi, > > Is someone working in OCSP validation of client certificates? > I just want to know if someone is interested or working already before trying to do it myself :-) OCSP is mentioned in the documentation of the ngx_http_ssl_module module. See for example the ssl_stapling config option: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_stapling Regards, Patrick From joerg.kastning at synaxon.de Sun Apr 21 06:29:58 2013 From: joerg.kastning at synaxon.de (Jörg Kastning) Date: Sun, 21 Apr 2013 08:29:58 +0200 Subject: How to use nginx as loadbalancer for different webserver? Message-ID: Hello all. I'm new to this mailing list as well as to nginx. I set up nginx to run as a loadbalancer by just adding the following lines to my /etc/nginx/nginx_conf. This config works fine. http { upstream loadbalancer1 { server 192.168.0.1:80; server 192.168.0.2:80; } server { listen 80; server_name www.example.com example.com; location / { proxy_pass http://loadbalancer1; } } Please note that example.com isn't my real domain. So that's not the error. ;-) Now I tried to add a second site which is hosted on two different webservers. My configuration in /etc/nginx/nginx_conf looks like the following now.
http { upstream loadbalancer1 { server 192.168.0.1:80; server 192.168.0.2:80; } server { listen 80; server_name www.example.com example.com; location / { proxy_pass http://loadbalancer1; } } upstream loadbalancer2 { server 192.168.0.3:80; server 192.168.0.4:80; } server { listen 80; server_name www.anyway.com anyway.com; location / { proxy_pass http://loadbalancer2; } }} But if I try to get www.anyway.com it didn't work and I got a request timeout. Could somebody please tell me, what's wrong with my configuration? Best Regards Joerg -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Sun Apr 21 06:39:16 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sun, 21 Apr 2013 18:39:16 +1200 Subject: How to use nginx as loadbalancer for different webserver? In-Reply-To: References: Message-ID: <1366526356.8420.73.camel@steve-new> Try telnetting to port 80 on 192.168.0.3 and .4 to check they're listening, and that www.anyway.com resolves correctly... On Sun, 2013-04-21 at 08:29 +0200, J?rg Kastning wrote: > Hello all. > > > I'm new to this maillinglist as to nginx as well. I setup a nginx to > run as a loadbalancer with just adding the following lines to > my /etc/nginx/nginx_conf. This config works fine. > > http { > upstream loadbalancer1 { > server 192.168.0.1:80; > server 192.168.0.2:80; > } > > server { > listen 80; > server_name www.example.com example.com; > location / { > proxy_pass http://loadbalancer1; > } > } > > Please note, that example.com isn't my real domain. So that's not the error. ;-) > > > > Now I tried to add a second site which is hosted on two different > webservers. My configuration in /etc/nginx/nginx_conf looks like the > following now. 
> > http { > upstream loadbalancer1 { > server 192.168.0.1:80; > server 192.168.0.2:80; > } > > server { > listen 80; > server_name www.example.com example.com; > location / { > proxy_pass http://loadbalancer1; > } > } > > upstream loadbalancer2 { > server 192.168.0.3:80; > server 192.168.0.4:80; > } > > server { > listen 80; > server_name www.anyway.com anyway.com; > location / { > proxy_pass http://loadbalancer2; > } > } > } > > But if I try to get www.anyway.com it didn't work and I got a request timeout. Could somebody please tell me, what's wrong with my configuration? > > > > > Best Regards > Joerg > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa From nginx-forum at nginx.us Sun Apr 21 08:21:40 2013 From: nginx-forum at nginx.us (mex) Date: Sun, 21 Apr 2013 04:21:40 -0400 Subject: How to use nginx as loadbalancer for different webserver? In-Reply-To: References: Message-ID: can you make sure that anyway.com ist reachable? sounds like the path to your front-lb is somehow not working. i think i dont need to ask for an nginx-restart after config-changes? if you upstream-config is messy or your upstream-servers are unreachable you should usually see a: 502 Bad Gateway regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238509,238511#msg-238511 From nginx-forum at nginx.us Sun Apr 21 13:31:52 2013 From: nginx-forum at nginx.us (gadh) Date: Sun, 21 Apr 2013 09:31:52 -0400 Subject: which version to use in production - 1.2.x or 1.3.x Message-ID: i know that 1.3.x is development version, but is it stable enough to be used in production? 
as stated here: http://forum.nginx.org/read.php?2,221377,221390#msg-221390 Tell me if I figured this right: if the 1.2.x branch is based on 1.2.0, then its basic functionality is about a year old, and the main changes in it are bug fixes and a few features taken from 1.3 after thorough testing? My main goal of course is to use the latest stable version, and I saw many bug fixes in 1.3 that did not enter the stable 1.2 branch (such as the subrequest fixes in 1.3.16). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238512,238512#msg-238512 From paulnpace at gmail.com Sun Apr 21 19:50:39 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Sun, 21 Apr 2013 12:50:39 -0700 Subject: Why does nginx work at the server IP address only with default root location? Message-ID: I have set up a server on Rackspace using Ubuntu 12.04 and the nginx stable PPA. Using the default root location of /usr/share/nginx/html the index.html file is displayed when I call the public IP address of the server. If I change the root location to my own /var/www/example.com/public the index.html file is not displayed. Output of ll on /var/www/example.com/public: drwxrwsr-x 2 www-data www-data 4096 Apr 21 04:13 ./ drwxrwsr-x 7 www-data www-data 4096 Apr 21 03:55 ../ -rw-rw-r-- 1 www-data www-data 624 Apr 21 04:17 index.html This is the only change I make, and I get the failure, which I don't expect. What am I doing wrong? From steve at greengecko.co.nz Sun Apr 21 20:09:34 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 22 Apr 2013 08:09:34 +1200 Subject: Why does nginx work at the server IP address only with default root location? In-Reply-To: References: Message-ID: <92BEE86D-B2CB-47CC-A5F1-43D7F4E17B94@greengecko.co.nz> At a guess, /var or /var/www isn't readable by www-data Steve On 22/04/2013, at 7:50 AM, "Paul N. Pace" wrote: > I have set up a server on Rackspace using Ubuntu 12.04 and the nginx stable PPA.
> > Using the default root location of /usr/share/nginx/html the > index.html file is displayed when I call the public IP address of the > server. > > If I change the root location to my own /var/www/example.com/public > the index.html file is not displayed. > > Output of ll on /var/www/example.com/public: > > drwxrwsr-x 2 www-data www-data 4096 Apr 21 04:13 ./ > drwxrwsr-x 7 www-data www-data 4096 Apr 21 03:55 ../ > -rw-rw-r-- 1 www-data www-data 624 Apr 21 04:17 index.html > > This is the only change I make and I get the failure, but I don't > expect it. What am I doing wrong? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From paulnpace at gmail.com Sun Apr 21 20:14:35 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Sun, 21 Apr 2013 13:14:35 -0700 Subject: Why does nginx work at the server IP address only with default root location? In-Reply-To: <92BEE86D-B2CB-47CC-A5F1-43D7F4E17B94@greengecko.co.nz> References: <92BEE86D-B2CB-47CC-A5F1-43D7F4E17B94@greengecko.co.nz> Message-ID: Steve, you are a Linux genius, and I am but a humble plebe, forever in your debt. On Sun, Apr 21, 2013 at 1:09 PM, Steve Holdoway wrote: > At a guess, /var or /var/www isn't readable by www-data > > Steve > > On 22/04/2013, at 7:50 AM, "Paul N. Pace" wrote: > >> I have set up a server on Rackspace using Ubuntu 12.04 and the nginx stable PPA. >> >> Using the default root location of /usr/share/nginx/html the >> index.html file is displayed when I call the public IP address of the >> server. >> >> If I change the root location to my own /var/www/example.com/public >> the index.html file is not displayed. 
>> >> Output of ll on /var/www/example.com/public: >> >> drwxrwsr-x 2 www-data www-data 4096 Apr 21 04:13 ./ >> drwxrwsr-x 7 www-data www-data 4096 Apr 21 03:55 ../ >> -rw-rw-r-- 1 www-data www-data 624 Apr 21 04:17 index.html >> >> This is the only change I make and I get the failure, but I don't >> expect it. What am I doing wrong? >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Sun Apr 21 21:21:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Apr 2013 01:21:08 +0400 Subject: OCSP validation of client certificates In-Reply-To: <15ECDC00-DBB8-4473-BFFF-DA9AACBBE52C@allenta.com> References: <15ECDC00-DBB8-4473-BFFF-DA9AACBBE52C@allenta.com> Message-ID: <20130421212108.GB92338@mdounin.ru> Hello! On Sat, Apr 20, 2013 at 01:18:40AM +0200, Roberto Moreda wrote: > Is someone working in OCSP validation of client certificates? > I just want to know if someone is interested or working already > before trying to do it myself :-) AFAIK, nobody is working on it. Code in OCSP Stapling support is likely to be partially reusable though. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Apr 22 04:17:31 2013 From: nginx-forum at nginx.us (davidjb) Date: Mon, 22 Apr 2013 00:17:31 -0400 Subject: Subrequests: returning response to client In-Reply-To: <20130418085513.GL92338@mdounin.ru> References: <20130418085513.GL92338@mdounin.ru> Message-ID: <23df26809b17e463eae29ea4278e8c05.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > What you are trying to do should be possible with > NGX_HTTP_SUBREQUEST_IN_MEMORY - this way subrequest body will be > available in memory instead of being sent to a client. 
But it's > not currently supported for fastcgi. Thanks for the clarification and your reply. I've taken to ignoring request and response bodies for now since in my use case, the auth_request backend really only needs to respond with redirections for Single Sign On. Is the thinking that this may eventually be supported by other modules like FastCGI/uWSGI or is it related to some technical issue? Thanks, David Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238444,238522#msg-238522 From nginx-forum at nginx.us Mon Apr 22 04:35:51 2013 From: nginx-forum at nginx.us (davidjb) Date: Mon, 22 Apr 2013 00:35:51 -0400 Subject: Feature extension to auth_request module: FastCGI authorizer Message-ID: I've written an additional feature into the Auth Request module (from http://mdounin.ru/hg/ngx_http_auth_request_module/) that allows a user to control the behaviour of the auth_request in such a way that it can act as a FastCGI authorizer. This patch that I have written allows the user to specify the flag "authorizer=on" against a call to "auth_request" (eg "auth_request /my-auth authorizer=on;") and the auth request module will behave as per the authorizer specification (http://www.fastcgi.com/drupal/node/22#S6.3). There is one (potentially significant) caveat for now is that request/response bodies are not passed to the authorizer or back to the client respectively - assistance on this would be greatly appreciated. However, as it stands at present, the authorizer mode is able to correctly handle situations where only the headers are utilised -- eg the Shibboleth SSO FastCGI authorizer which relies on redirection and cookies and never a response/request body. This satisfies at least what I need it for at present and authentication works successfully. I'd like to see about whether this can be included within the main module itself at http://mdounin.ru/hg/ngx_http_auth_request_module, as I know this will be useful to more than just me. 
For example, see the various posts and questions surrounding this: https://www.google.com/search?q=fastcgi+authorizer+nginx . The latest version of my module lives at: https://bitbucket.org/davidjb/ngx_http_auth_request_module and the one main diff is located at: https://bitbucket.org/davidjb/ngx_http_auth_request_module/commits/3d865a718d3e34e4e353962ccc71c588a806db31/raw/ Comments are more than welcome. Thanks, David Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238523,238523#msg-238523 From joerg.kastning at synaxon.de Mon Apr 22 06:37:51 2013 From: joerg.kastning at synaxon.de (Jörg Kastning) Date: Mon, 22 Apr 2013 08:37:51 +0200 Subject: How to use nginx as loadbalancer for different webserver? In-Reply-To: References: Message-ID: Hi. I found the mistake. The firewall policy was configured, but the host object it used was wrong: it contained the wrong IP addresses for my webservers. I changed this and can reach my webservers now. Thanks for your help. Mit freundlichen Grüßen Jörg Kastning IT-Systemadministrator SYNAXON AG Falkenstraße 31 33758 Schloß Holte-Stukenbrock Fon: +49(0)5207 9299-282 Fax +49(0)5207 9299-296 mailto: joerg.kastning at synaxon.de XMPP: joerg.kastning at jabber.synaxon.de Vorstand: Frank Roebers (Vorsitzender), Andreas Wenninger, Mark Schröder Aufsichtsratsvorsitzender: Dr. Günter Lewald Handelsregister Bielefeld HRB 36014 Weitere Infos unter: http://www.synaxon.de 2013/4/21 mex > can you make sure that anyway.com ist reachable? > > sounds like the path to your front-lb is somehow > not working. > > i think i dont need to ask for an nginx-restart after config-changes?
> > if you upstream-config is messy or your upstream-servers > are unreachable you should usually see a: > 502 Bad Gateway > > > regards, > > > mex > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,238509,238511#msg-238511 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vini.gupta03 at gmail.com Mon Apr 22 07:30:15 2013 From: vini.gupta03 at gmail.com (Vini Gupta) Date: Mon, 22 Apr 2013 13:00:15 +0530 Subject: 1.3 stable version release Message-ID: Hi, I wanted to use the static etag support module for nginx. I found a module written by mikewest here . But it doesn't work. It's also pretty old (4 years old). I recently saw the 1.3.3 change-list; it supports the static-etag feature, but I am not sure how reliable it is for production use. Any idea when a stable version with this feature will be released? Should I go ahead and use this version? Is there any major concern? Please give suggestions. Thanks Cheers !!! Vini -------------- next part -------------- An HTML attachment was scrubbed... URL: From joerg.kastning at synaxon.de Mon Apr 22 07:52:19 2013 From: joerg.kastning at synaxon.de (Jörg Kastning) Date: Mon, 22 Apr 2013 09:52:19 +0200 Subject: Trouble migrating from pound to nginx Message-ID: Hi. I'm having some trouble migrating from the loadbalancer pound to nginx. Nginx should run as a loadbalancer for our web applications, which are hosted on Apache webservers in our LAN. The basic configuration looks like this: lb(nginx) -----> two webservers(apache) -----> one or more application servers(glassfish) The old configuration is the same; we only used pound instead of nginx. But for some reason the web application isn't delivered, and I get the Apache default site instead. I posted my configuration to pastebin.com.
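Seeing Apache's default site through a proxy is most often a sign that the Host header is not being forwarded, so Apache cannot match its name-based virtual hosts. A minimal sketch; the backend address is assumed:

```nginx
location / {
    # Assumed backend address; the important part is forwarding the
    # client's original Host header so Apache's name-based virtual
    # hosts can match the request.
    proxy_pass http://192.168.0.10:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```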
Please note that I anonymized hostnames, domainnames and ips. - nginx.conf - http://pastebin.com/iej9RQx1 - vhost config - http://pastebin.com/zVUPWX8A Thanks in advance for helping me. Best Regards, Joerg -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Apr 22 08:08:17 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 22 Apr 2013 09:08:17 +0100 Subject: Trouble migrating from pound to nginx In-Reply-To: References: Message-ID: I'm on a sufficiently dreadful connection that I can't examine your configs on pastebin. Having said that, I strongly suspect you're not propagating a Host header from NginX to your Apache servers. Fix that up, and I think you'll have more success. Regards, Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From defan at nginx.com Mon Apr 22 08:20:55 2013 From: defan at nginx.com (Andrei Belov) Date: Mon, 22 Apr 2013 12:20:55 +0400 Subject: "writev() failed (134: Transport endpoint is not connected)" when upstream down In-Reply-To: References: Message-ID: Branden, On Apr 5, 2013, at 2:24 , Branden Visser wrote: > Hello, I've found that when there are upstream servers unavailable in > my upstream group, applying a little bit of load on the server (i.e., > just myself browsing around quickly, 2-3 req/s max) results in the > following errors even for upstream servers that are available and > well: > > 2013/04/04 22:02:21 [error] 4211#0: *2898 writev() failed (134: > Transport endpoint is not connected) while sending request to > upstream, client: 184.94.54.70, server: , request: "GET /api/ui/skin > HTTP/1.1", upstream: "http://10.112.5.119:2001/api/ui/skin", host: > "mysite.org", referrer: "http://mysite.org/search" > > In this particular example, I have 4 upstreams, 3 servers are shut > down (all except 10.112.5.119). If I comment out the 3 other upstream > servers, I cannot reproduce this error. 
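For reference, nginx's upstream failover is controlled per peer by the max_fails and fail_timeout parameters (there is no max_timeout parameter), and a peer known to be offline can be taken out of rotation explicitly. A sketch with assumed addresses:

```nginx
upstream tenantworkers {
    # After max_fails failed attempts within fail_timeout, nginx stops
    # trying this peer for fail_timeout seconds before probing again.
    server 10.112.5.119:2001 max_fails=3 fail_timeout=30s;
    # "down" removes a peer from rotation entirely, avoiding the cost
    # of retrying hosts that are known to be shut down.
    server 10.112.5.120:2001 down;
    server 10.112.5.121:2001 down;
    server 10.112.5.122:2001 down;
}
```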
> Running SmartOS (Joyent cloud) > > $ nginx -v > nginx version: nginx/1.3.14 > > These are things I tried to no avail: > > * I used to have keepalive 64 on the upstream, I removed it > * Nginx used to run as a non-privileged user, I switched it to root > (prctl reports that privileged users should have 65,000 nofiles > allowed) > * I used to have worker_processes set to 5, I increased it to 16 > * The upstream server configuration used to not have max_fails *or* > max_timeout, I added those in trying to limit the amount of times > nginx tried to access the downed upstream servers > * I used to have the proxy_connect_timeout unspecified so it should > have defaulted to 60s, I tried setting it to 1s > * I tried commenting out all the rate-limiting directives > > The URLs I'm hitting in my tests are all those for the "tenantworkers" upstream. > > Any idea? I would think I probably have a resource limit issue, or an > issue with the back-end server, but it just doesn't make sense that > everything is OK after I comment out the downed upstreams. My concern > is that the system will crumble under real load when even 1 upstream > becomes unavailable. > > Thanks, > Branden Thanks for reporting this! There was actually a bug in the /dev/poll event method; the fix is included in nginx 1.3.16. From nginx-forum at nginx.us Mon Apr 22 08:38:25 2013 From: nginx-forum at nginx.us (gadh) Date: Mon, 22 Apr 2013 04:38:25 -0400 Subject: how to debug memory leak/grow Message-ID: <7e010c5cdacc7d1301c3b19ffae73848.NginxMailingListEnglish@forum.nginx.org> In my nginx, compiled with my modules, I see that under everyday usage (it's on a web site; I cannot reproduce this in my lab) the memory usage of nginx grows all the time. It has many open connections (the total number of connections is high but stays roughly the same over time), and after a week or so it consumes about 7 GB of RAM, so I have to reload its processes (8 cores = 8 nginx workers). Can somebody help me on how to debug this?
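One nginx-side aid for this kind of investigation, assuming the binary was built with --with-debug: debug-level logging writes per-request pool allocations to the error log, and debug_connection restricts the very verbose output to a single test client. A sketch with a hypothetical client address:

```nginx
# Requires an nginx built with --with-debug; the address is hypothetical.
error_log /var/log/nginx/error.log debug;

events {
    worker_connections 1024;
    # Emit debug output only for connections from this client:
    debug_connection 192.0.2.1;
}
```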
Which tools do you use? Valgrind did not help, even after applying the patches to the nginx core for debugging with it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238529,238529#msg-238529 From joerg.kastning at synaxon.de Mon Apr 22 09:31:02 2013 From: joerg.kastning at synaxon.de (Jörg Kastning) Date: Mon, 22 Apr 2013 11:31:02 +0200 Subject: Trouble migrating from pound to nginx In-Reply-To: References: Message-ID: You're right. I added the following line to my location block: proxy_set_header Host "my.forced.hostname"; Now I get the expected site. Thanks for your support. 2013/4/22 Jonathan Matthews > I'm on a sufficiently dreadful connection that I can't examine your > configs on pastebin. Having said that, I strongly suspect you're not > propagating a Host header from NginX to your Apache servers. Fix that > up, and I think you'll have more success. > > Regards, > Jonathan > > -- > Jonathan Matthews > Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrvisser at gmail.com Mon Apr 22 10:23:43 2013 From: mrvisser at gmail.com (Branden Visser) Date: Mon, 22 Apr 2013 06:23:43 -0400 Subject: "writev() failed (134: Transport endpoint is not connected)" when upstream down In-Reply-To: References: Message-ID: Thanks Andrei, much appreciated!
Cheers, Branden On Mon, Apr 22, 2013 at 4:20 AM, Andrei Belov wrote: > Branden, > > On Apr 5, 2013, at 2:24 , Branden Visser wrote: > >> Hello, I've found that when there are upstream servers unavailable in >> my upstream group, applying a little bit of load on the server (i.e., >> just myself browsing around quickly, 2-3 req/s max) results in the >> following errors even for upstream servers that are available and >> well: >> >> 2013/04/04 22:02:21 [error] 4211#0: *2898 writev() failed (134: >> Transport endpoint is not connected) while sending request to >> upstream, client: 184.94.54.70, server: , request: "GET /api/ui/skin >> HTTP/1.1", upstream: "http://10.112.5.119:2001/api/ui/skin", host: >> "mysite.org", referrer: "http://mysite.org/search" >> >> In this particular example, I have 4 upstreams, 3 servers are shut >> down (all except 10.112.5.119). If I comment out the 3 other upstream >> servers, I cannot reproduce this error. >> >> Running SmartOS (Joyent cloud) >> >> $ nginx -v >> nginx version: nginx/1.3.14 >> >> These are things I tried to no avail: >> >> * I used to have keepalive 64 on the upstream, I removed it >> * Nginx used to run as a non-privileged user, I switched it to root >> (prctl reports that privileged users should have 65,000 nofiles >> allowed) >> * I used to have worker_processes set to 5, I increased it to 16 >> * The upstream server configuration used to not have max_fails *or* >> max_timeout, I added those in trying to limit the amount of times >> nginx tried to access the downed upstream servers >> * I used to have the proxy_connect_timeout unspecified so it should >> have defaulted to 60s, I tried setting it to 1s >> * I tried commenting out all the rate-limiting directives >> >> The URLs I'm hitting in my tests are all those for the "tenantworkers" upstream. >> >> Any idea? 
I would think I probably have a resource limit issue, or an >> issue with the back-end server, but it just doesn't make sense that >> everything is OK after I comment out the downed upstreams. My concern >> is that the system will crumble under real load when even 1 upstream >> becomes unavailable. >> >> Thanks, >> Branden > > Thanks for reporting this! > > There was actually a bug in /dev/poll event method, fix included in nginx 1.3.16. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From dale.gallagher at gmail.com Mon Apr 22 10:23:59 2013 From: dale.gallagher at gmail.com (Dale Gallagher) Date: Mon, 22 Apr 2013 12:23:59 +0200 Subject: auth_request and auth_request_set confusion ... Message-ID: Hi I'd appreciate it if someone could enlighten me as to why the following isn't working as expected. I'm trying to make the proxying to php dynamic - in other words, depending on the authenticated user, requests will be proxied to that user's PHP socket. Both the login and auth locations are proxied to a Perl Dancer app. 
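Condensed into nginx terms, the pattern described above looks roughly like the sketch below (ports and socket paths are assumed; note that nginx's set directive takes no "=" sign):

```nginx
location = /auth {
    internal;
    proxy_pass http://127.0.0.1:3001;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}

location /protected {
    set $auth_user "none";   # default value; "set $x = 'y';" is invalid
    auth_request /auth;
    # Copy a header from the auth subrequest's response into a variable:
    auth_request_set $auth_user $upstream_http_x_auth_user;

    location ~* \.php {
        fastcgi_pass unix:/srv/web/$auth_user/sock/php.sock;
        include fastcgi_params;
    }
}
```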
Here's the auth app's /auth route: get '/auth' => sub { if (session('user') && session('time')) { my $time_now = time; if ($time_now - session('time') < config->{'session_timeout'}) { session 'time' => $time_now; header 'X-Auth-User' => session('user'); status 'ok'; } else { header 'X-Error-Page' => '/login/session_expired'; status 'forbidden'; } } else { header 'X-Error-Page' => '/login/not_authorised'; status 'forbidden'; } }; nginx.conf snippet: location /login { expires -1; proxy_set_header Host $host; proxy_pass http://127.0.0.1:3000; proxy_redirect http://$host https://$host; } location /auth { internal; expires -1; proxy_set_header Host $host; proxy_pass http://127.0.0.1:3001; proxy_pass_request_body off; proxy_redirect http://$host https://$host; proxy_set_header Content-Length ""; } location /protected { error_page 401 403 $error_page; expires -1; set $auth_user = 'none'; auth_request /auth; auth_request_set $error_page $upstream_http_x_error_page; auth_request_set $auth_user $upstream_http_x_auth_user; location ~* \.php { fastcgi_pass unix:/srv/web/$auth_user/sock/php-5.3.22.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param FILES_ROOT /srv/web/$auth_user/site; include fastcgi_params; } } The error_page works, when the Dancer app returns forbidden, but no matter what I've tried to use the X-Auth_User header on the /auth app returning 200, I can't seem to coax nginx into passing it onto anything, be it a rewrite, or the above listed \.php location stanza. Any pointers would be appreciated. Thanks Dale -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Apr 22 10:27:43 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 22 Apr 2013 11:27:43 +0100 Subject: Nginx wiki points to 1.2.6 as being latest stable Message-ID: As per subject, http://wiki.nginx.org/Install#Source_Releases points towards 1.2.6. 
I don't personally have the ability to update the wiki (I'm not sure if non @nginx.org people ever do?) Cheers, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From joerg.kastning at synaxon.de Mon Apr 22 13:17:32 2013 From: joerg.kastning at synaxon.de (Jörg Kastning) Date: Mon, 22 Apr 2013 15:17:32 +0200 Subject: Issue with my proxy configuration Message-ID: Hello. In our LAN I can reach my web application with a URL like http:// :8081/AppName. I am trying to configure nginx to forward requests from the WAN side to this server. All firewall policies needed are configured, and the DNS entry for access from the WAN is set and reachable. I tried the following configuration in /etc/nginx/nginx.conf:

    server {
        listen 80;
        server_name host.domainname.de;
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    server {
        listen 443;
        server_name host.domainname.de;

        ssl on;
        ssl_certificate domainname.de.pem;
        ssl_certificate_key domainname.de.key;

        ssl_session_timeout 5m;

        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://:8081/AppName;
        }
    }

With this configuration the web application is available when I use the URL http://host.domainname.de/AppName. But I want to access the web app with the URL http://host.domainname.de. Is this possible? Regards, Joerg -------------- next part -------------- An HTML attachment was scrubbed... URL: From pr1 at pr1.ru Mon Apr 22 14:47:55 2013 From: pr1 at pr1.ru (Andrey Feldman) Date: Mon, 22 Apr 2013 18:47:55 +0400 Subject: Issue with my proxy configuration In-Reply-To: References: Message-ID: Hi. Try something like:

    location / {
        proxy_pass http://:8081/AppName/;
    }

Or

    location / {
        proxy_pass http://:8081/AppName/$uri;
    }

On Mon, Apr 22, 2013 at 5:17 PM, Jörg Kastning wrote: > Hello. > > In our lan I can reach my webapplication with an url like http:// > :8081/AppName.
I try to configure nginx to forward requests > from wan site to this server. All firewall policies needed are configured > and the dns entry for access from wan are set and reachable. > > I tried the following configuration in /etc/nginx/nginx.conf: > > server { > listen 80; > server_name host.domainname.de; > rewrite ^ https://$server_name$request_uri? permanent; > } > server { > listen 443; > server_name host.domainname.de; > > ssl on; > ssl_certificate domainname.de.pem; > ssl_certificate_key domainname.de.key; > > ssl_session_timeout 5m; > > ssl_protocols SSLv3 TLSv1; > ssl_ciphers > ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; > ssl_prefer_server_ciphers on; > location / { > proxy_pass http://:8081/AppName; > } > } > > With this configuration the webapplication ist available when I use the > url http://host.domainname.de/AppName. But I want to access the webapp > with the url http://host.domainname.de. > Is this possible? > > Regards, > Joerg > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- -- Andrey Feldman -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Apr 22 16:14:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Apr 2013 20:14:47 +0400 Subject: Subrequests: returning response to client In-Reply-To: <23df26809b17e463eae29ea4278e8c05.NginxMailingListEnglish@forum.nginx.org> References: <20130418085513.GL92338@mdounin.ru> <23df26809b17e463eae29ea4278e8c05.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130422161447.GC92338@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 12:17:31AM -0400, davidjb wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > > What you are trying to do should be possible with > > NGX_HTTP_SUBREQUEST_IN_MEMORY - this way subrequest body will be > > available in memory instead of being sent to a client. 
But it's > > not currently supported for fastcgi. > > Thanks for the clarification and your reply. I've taken to ignoring request > and response bodies for now since in my use case, the auth_request backend > really only needs to respond with redirections for Single Sign On. > > Is the thinking that this may eventually be supported by other modules like > FastCGI/uWSGI or is it related to some technical issue? The subrequest in memory functionality needs unbuffered handling in the upstream module, which is not implemented for FastCGI. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Apr 22 16:22:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Apr 2013 20:22:36 +0400 Subject: which version to use in production - 1.2.x or 1.3.x In-Reply-To: References: Message-ID: <20130422162236.GD92338@mdounin.ru> Hello! On Sun, Apr 21, 2013 at 09:31:52AM -0400, gadh wrote: > i know that 1.3.x is development version, but is it stable enough to be used > in production? as stated here: > http://forum.nginx.org/read.php?2,221377,221390#msg-221390 > > Tell me if i figured that right : if the 1.2.x is based on 1.2.0 - then its > basic functionality is about 1 year old, and the main changes in it are bug > fixes and few features taken from 1.3 after thorough testing ? > > My main goal of course is to use the latest stable version , and i saw many > bug fixes in 1.3 that did not enter the stable 1.2 branch (such as in 1.3.16 > - subrequest fixes) Generally both development aka mainline and stable versions are ok for production. Development might need a bit more attention on upgrades though, especially if you use 3rd party modules - as it occasionally introduces various changes (including API ones). 
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Apr 22 16:39:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Apr 2013 20:39:26 +0400 Subject: Feature extension to auth_request module: FastCGI authorizer In-Reply-To: References: Message-ID: <20130422163926.GE92338@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 12:35:51AM -0400, davidjb wrote: > I've written an additional feature into the Auth Request module (from > http://mdounin.ru/hg/ngx_http_auth_request_module/) that allows a user to > control the behaviour of the auth_request in such a way that it can act as a > FastCGI authorizer. This patch that I have written allows the user to > specify the flag "authorizer=on" against a call to "auth_request" (eg > "auth_request /my-auth authorizer=on;") and the auth request module will > behave as per the authorizer specification > (http://www.fastcgi.com/drupal/node/22#S6.3). > > There is one (potentially significant) caveat for now is that > request/response bodies are not passed to the authorizer or back to the > client respectively - assistance on this would be greatly appreciated. > However, as it stands at present, the authorizer mode is able to correctly > handle situations where only the headers are utilised -- eg the Shibboleth > SSO FastCGI authorizer which relies on redirection and cookies and never a > response/request body. This satisfies at least what I need it for at > present and authentication works successfully. > > I'd like to see about whether this can be included within the main module > itself at http://mdounin.ru/hg/ngx_http_auth_request_module, as I know this > will be useful to more than just me. For example, see the various posts and > questions surrounding this: > https://www.google.com/search?q=fastcgi+authorizer+nginx . 
> > The latest version of my module lives at: > https://bitbucket.org/davidjb/ngx_http_auth_request_module > > and the one main diff is located at: > https://bitbucket.org/davidjb/ngx_http_auth_request_module/commits/3d865a718d3e34e4e353962ccc71c588a806db31/raw/ > > Comments are more than welcome. To me it doesn't look like what you do actually matches the FastCGI Authorizer specification, even if we ignore the fact that the body isn't handled properly and that authorizer mode isn't advertised to FastCGI. Most of the code in the patch seems to be dedicated to special processing of Variable-* headers. But they don't seem to do what they are expected to do as per the FastCGI spec - with your code the "Variable-AUTH_METHOD" header returned by an authorizer will result in an "AUTH_METHOD" header being passed to the application, i.e. it will be available in the HTTP_AUTH_METHOD variable in subsequent FastCGI requests - instead of the AUTH_METHOD variable as per the FastCGI spec. Please also note that it's a bad idea to try to modify input headers - this is not something modules are expected to do, and it will result in a segmentation fault if you try to do it in a subrequest. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Apr 22 17:16:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Apr 2013 21:16:26 +0400 Subject: 1.3 stable version release In-Reply-To: References: Message-ID: <20130422171626.GG92338@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 01:00:15PM +0530, Vini Gupta wrote: > Hi, > > I wanted to use the static etag support module for nginx. I found a module > written by mikewest here . > But it doesn't work. Also its pretty old (4 years old). > I recently saw 1.3.3 change-list. It supports static-etag feature. But I am > not sure how reliable it is to use it for production. Any idea by when the > stable version of 1.3.3 be released? Should I go ahead using this version? > Is there any major concern?
1.3.x releases are believed to be ok for production, see http://mailman.nginx.org/pipermail/nginx/2013-April/038616.html for more details. 1.4.0, a stable release based on the 1.3.x branch, is expected to appear soon though. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Apr 22 17:27:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Apr 2013 21:27:37 +0400 Subject: how to debug memory leak/grow In-Reply-To: <7e010c5cdacc7d1301c3b19ffae73848.NginxMailingListEnglish@forum.nginx.org> References: <7e010c5cdacc7d1301c3b19ffae73848.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130422172737.GI92338@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 04:38:25AM -0400, gadh wrote: > in my nginx, compiled with my modules, i see that under every day usage (its > on a web site , i cannot reproduce this in my lab) the memory usage of nginx > grows all the time, it has many open connections (but the total number of > connections is high but stays roughly the same over time) and aftert a week > or so it consumes about 7GB of ram so i have to reload its processes (8 > cores = 8 nginx workers). > > can somebody help me on how to debug this ? which tools you use ? valgrind > did not help, even after applying the patches to nginx core for debugging > with it. Some tips about tracing socket leaks can be found here: http://wiki.nginx.org/Debugging Memory leaks generally don't happen in nginx by themselves, as it uses memory pools bound to requests for most of the allocations. -- Maxim Dounin http://nginx.org/en/donation.html From tdgh2323 at hotmail.com Mon Apr 22 18:30:41 2013 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Mon, 22 Apr 2013 18:30:41 +0000 Subject: nginx eating all RAM, log files? Message-ID: I have two nginx instances (nginx/1.0.15) on a 4GB RAM machine. Each instance runs fewer than 25 requests as reported with stub_status on; The problem is that once nginx is started from scratch it starts eating (or reserving?) RAM progressively as reported by free -m, up to the point where it leaves only 2-5MB free, and then it doesn't go past that. In about 7 days it has eaten everything. Nginx doesn't crash, nor does it touch swap, but I definitely feel it compromises system resources, to the point where I am concerned that in a production spike the server would have no room. This is the exact same issue on 4 other nginx machines I have. In order to start over again I must kill -QUIT the PIDs, delete the log file (which is smaller than 800MB) and then it all goes back to the cycle. Please indicate what pieces of information I can supply to debug this issue. Thanks Joseph -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelmclara at gmail.com Mon Apr 22 18:45:25 2013 From: miguelmclara at gmail.com (Miguel Clara) Date: Mon, 22 Apr 2013 19:45:25 +0100 Subject: nginx eating all RAM, log files? In-Reply-To: References: Message-ID: Before anything, I suggest you upgrade your nginx install... you're running nginx/1.0.15, and the latest stable version is 1.2.8! It might be some bug that was fixed in the meantime! On Mon, Apr 22, 2013 at 7:30 PM, Joseph Cabezas wrote: > I have two nginx instances (nginx/1.0.15) on a 4GB RAM machine. Each > instance runs fewer than 25 requests as reported with stub_status on; > > The problem is that once nginx is started from scratch it starts eating > (or reserving?) RAM progressively as reported by free -m, up to the point > where it leaves only 2-5mb left and then it doesnt go past that. In about 7 > days it has eating everything. Nginx doesnt crash, nor does it touch swap > but I definately feel it compromises system resources to the point iam > concerned as in a production spike its wise to think the server has no > room. > > This is the exact same issue on 4 other nginx machines I have.
> > In order to start over again I must kill -QUIT the PIDs , delete the log > file (which is smaller then 800mb) and then all goes back to the cycle. > > Please indicate me what pieces of information I can supply to debug this > issue. > > Thanks > Joseph > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkearsley at blueyonder.co.uk Mon Apr 22 19:32:31 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 22 Apr 2013 20:32:31 +0100 Subject: nginx eating all RAM, log files? In-Reply-To: References: Message-ID: <5175904F.3060703@blueyonder.co.uk> Hi Are you sure it's not the linux file/buffer cache that's using all your ram? (does ps/top show nginx or the worker processes using it directly?) Linux and most/all other unix variants will fill up unused ram with cached versions of the most recently used files so they don't have to be read from disk each time... it's completely normal and expected behaviour :) On 22/04/13 19:30, Joseph Cabezas wrote: > I have two nginx instances (nginx/1.0.15) on a 4GB RAM machine. Each > instance runs fewer than 25 requests as reported with stub_status on; > > The problem is that once nginx is started from scratch it starts > eating (or reserving?) RAM progressively as reported by free -m, up to > the point where it leaves only 2-5mb left and then it doesnt go past > that. In about 7 days it has eating everything. Nginx doesnt crash, > nor does it touch swap but I definately feel it compromises system > resources to the point iam concerned as in a production spike its wise > to think the server has no room. > > This is the exact same issue on 4 other nginx machines I have. > > In order to start over again I must kill -QUIT the PIDs , delete the > log file (which is smaller then 800mb) and then all goes back to the > cycle. 
> > Please indicate me what pieces of information I can supply to debug > this issue. > > Thanks > Joseph > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Apr 22 19:33:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 22 Apr 2013 23:33:35 +0400 Subject: nginx eating all RAM, log files? In-Reply-To: References: Message-ID: <20130422193335.GM92338@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 06:30:41PM +0000, Joseph Cabezas wrote: > I have two nginx instances (nginx/1.0.15) on a 4GB RAM machine. > Each instance runs fewer than 25 requests as reported with > stub_status on; > > The problem is that once nginx is started from scratch it starts > eating (or reserving?) RAM progressively as reported by free -m, > up to the point where it leaves only 2-5mb left and then it > doesnt go past that. In about 7 days it has eating everything. > Nginx doesnt crash, nor does it touch swap but I definately feel > it compromises system resources to the point iam concerned as in > a production spike its wise to think the server has no room. > > This is the exact same issue on 4 other nginx machines I have. > > In order to start over again I must kill -QUIT the PIDs , delete > the log file (which is smaller then 800mb) and then all goes > back to the cycle. > > Please indicate me what pieces of information I can supply to > debug this issue. As you are referring to "free -m" output, most likely this link will help: http://www.linuxatemyram.com/ In short: don't panic, you RAM is fine. -- Maxim Dounin http://nginx.org/en/donation.html From tdgh2323 at hotmail.com Mon Apr 22 20:03:56 2013 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Mon, 22 Apr 2013 20:03:56 +0000 Subject: nginx eating all RAM, log files? 
In-Reply-To: <5175904F.3060703@blueyonder.co.uk> References: , <5175904F.3060703@blueyonder.co.uk> Message-ID: Hello! This is the output of ps; I am having a hard time interpreting the memory usage, perhaps you can help?

    USER    PID   %CPU %MEM    VSZ   RSS TTY    STAT START TIME COMMAND
    root    18951  0.0  0.1 195892  6340 ?      Ss   Apr16 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/modified/nginx-consolidated-5.conf
    root    18977  0.0  0.1 197600  5188 ?      Ss   Apr16 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/modified/nginx-consolidated-8.conf
    nginx   18978  0.0  0.2 200096  9392 ?      S    Apr16 5:02 nginx: worker process
    nginx   18979  0.0  0.1 197604  5768 ?      S    Apr16 0:03 nginx: cache manager process
    nginx   23921  0.0  0.2 199892  9148 ?      S    Apr21 1:28 nginx: worker process
    nginx   23922  0.0  0.1 195896  4844 ?      S    Apr21 0:00 nginx: cache manager process
    root    24894  0.0  0.0 103244   832 pts/0  S+   14:00 0:00 grep nginx

    [root]# free -m
                 total       used       free     shared    buffers     cached
    Mem:          3631       1760       1871          0        307       1212
    -/+ buffers/cache:        240       3391
    Swap:         2047          0       2047

I have a cache explicitly set in nginx.conf. My site is not that big; the cache I have doesn't get past 15MB. Please let me know if this is suitable or if I should report differently. Thanks Joseph Date: Mon, 22 Apr 2013 20:32:31 +0100 From: rkearsley at blueyonder.co.uk To: nginx at nginx.org Subject: Re: nginx eating all RAM, log files? Hi Are you sure it's not the linux file/buffer cache that's using all your ram? (does ps/top show nginx or the worker processes using it directly?) Linux and most/all other unix variants will fill up unused ram with cached versions of the most recently used files so they don't have to be read from disk each time... it's completely normal and expected behaviour :) On 22/04/13 19:30, Joseph Cabezas wrote: I have two nginx instances (nginx/1.0.15) on a 4GB RAM machine.
Each instance runs fewer than 25 requests as reported with stub_status on; The problem is that once nginx is started from scratch it starts eating (or reserving?) RAM progressively as reported by free -m, up to the point where it leaves only 2-5mb left and then it doesnt go past that. In about 7 days it has eating everything. Nginx doesnt crash, nor does it touch swap but I definately feel it compromises system resources to the point iam concerned as in a production spike its wise to think the server has no room. This is the exact same issue on 4 other nginx machines I have. In order to start over again I must kill -QUIT the PIDs , delete the log file (which is smaller then 800mb) and then all goes back to the cycle. Please indicate me what pieces of information I can supply to debug this issue. Thanks Joseph _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdgh2323 at hotmail.com Mon Apr 22 20:07:46 2013 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Mon, 22 Apr 2013 20:07:46 +0000 Subject: nginx eating all RAM, log files? In-Reply-To: <20130422193335.GM92338@mdounin.ru> References: , <20130422193335.GM92338@mdounin.ru> Message-ID: Maxim, Thank you...!! Regards, Josephan/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Apr 22 20:29:20 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 22 Apr 2013 16:29:20 -0400 Subject: nginx eating all RAM, log files? 
In-Reply-To: References: Message-ID: <054caa1e12101c3af6ef0f3c3d73c197.NginxMailingListEnglish@forum.nginx.org> man ps | grep RSS man ps | grep VSZ when using a tool like top: real_free = free + cached you might want to try htop for a better continuous display your ram is fine :) regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238547,238556#msg-238556 From mdounin at mdounin.ru Mon Apr 22 20:40:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Apr 2013 00:40:13 +0400 Subject: Nginx wiki points to 1.2.6 as being latest stable In-Reply-To: References: Message-ID: <20130422204013.GP92338@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 11:27:43AM +0100, Jonathan Matthews wrote: > As per subject, http://wiki.nginx.org/Install#Source_Releases points > towards 1.2.6. > > I don't personally have the ability to update the wiki (I'm not sure > if non @nginx.org people ever do?) Yep, the page looks outdated. Cliff, could you please give me a right to edit protected pages on the wiki, including the Install one? Wiki user is MaximDounin. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Apr 22 21:02:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Apr 2013 01:02:21 +0400 Subject: auth_request and auth_request_set confusion ... In-Reply-To: References: Message-ID: <20130422210221.GQ92338@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 12:23:59PM +0200, Dale Gallagher wrote: > Hi > > I'd appreciate it if someone could enlighten me as to why the following > isn't working as expected. I'm trying to make the proxying to php dynamic - > in other words, depending on the authenticated user, requests will be > proxied to that user's PHP socket. > > Both the login and auth locations are proxied to a Perl Dancer app. 
> > Here's the auth app's /auth route: > > get '/auth' => sub { > if (session('user') && session('time')) { > my $time_now = time; > if ($time_now - session('time') < config->{'session_timeout'}) { > session 'time' => $time_now; > header 'X-Auth-User' => session('user'); > status 'ok'; > } > else { > header 'X-Error-Page' => '/login/session_expired'; > status 'forbidden'; > } > } > else { > header 'X-Error-Page' => '/login/not_authorised'; > status 'forbidden'; > } > }; > > nginx.conf snippet: > > location /login { > expires -1; > proxy_set_header Host $host; > proxy_pass http://127.0.0.1:3000; > proxy_redirect http://$host https://$host; > } > > location /auth { > internal; > expires -1; > proxy_set_header Host $host; > proxy_pass http://127.0.0.1:3001; > proxy_pass_request_body off; > proxy_redirect http://$host https://$host; > proxy_set_header Content-Length ""; > } > > location /protected { > error_page 401 403 $error_page; > expires -1; > set $auth_user = 'none'; > auth_request /auth; > auth_request_set $error_page $upstream_http_x_error_page; > auth_request_set $auth_user $upstream_http_x_auth_user; > > location ~* \.php { > fastcgi_pass unix:/srv/web/$auth_user/sock/php-5.3.22.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_param FILES_ROOT /srv/web/$auth_user/site; > include fastcgi_params; > } > } > > The error_page works, when the Dancer app returns forbidden, but no matter > what I've tried to use the X-Auth_User header on the /auth app returning > 200, I can't seem to coax nginx into passing it onto anything, be it a > rewrite, or the above listed \.php location stanza. > > Any pointers would be appreciated. Could you please show debug log? See http://nginx.org/en/docs/debugging_log.html for more information. 
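One detail worth double-checking in the configuration quoted above, independent of the debug log: nginx's "set" directive takes exactly two arguments (variable, then value), with no "=" sign, so the line "set $auth_user = 'none';" as posted would be rejected at configuration load. It may simply be a transcription slip, but a minimal sketch of the intended form (location and header names taken from the snippet above) would be:

```nginx
location /protected {
    # "set" syntax is: set $variable value;  -- the "=" in the posted
    # snippet fails the config test with:
    #   invalid number of arguments in "set" directive
    set              $auth_user none;

    auth_request     /auth;
    # the default above is replaced when /auth returns an X-Auth-User header
    auth_request_set $auth_user $upstream_http_x_auth_user;
}
```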
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Apr 22 22:45:55 2013 From: nginx-forum at nginx.us (lpr) Date: Mon, 22 Apr 2013 18:45:55 -0400 Subject: Emulate SSI 'exec cmd' with nginx Message-ID: Dear all Trying to move my pages from Apache to nginx (1.2.1 on Debian stable with backports), I run into the problem of having used SSI's 'exec cmd' for more than a decade quite intensively. What is the best and easiest way to emulate 'exec cmd' with nginx? For example, in my footers I make use of dynamically change between ENglish and GErman with a shell script as easy as with setlanguage.sh as echo "Deutsch" When I try using , the script is executed. However, instead of just adding the link, nginx includes the German web-page fully. Is there an easy way to get the same functionality with nginx? Thanks for any hint. Best regards Lukas Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238561,238561#msg-238561 From cliff at develix.com Tue Apr 23 02:10:33 2013 From: cliff at develix.com (Cliff Wells) Date: Mon, 22 Apr 2013 19:10:33 -0700 Subject: Nginx wiki points to 1.2.6 as being latest stable In-Reply-To: <20130422204013.GP92338@mdounin.ru> References: <20130422204013.GP92338@mdounin.ru> Message-ID: <1366683033.30215.0.camel@portable-evil> Will do. Also, I'm going to be moving to a new colo shortly, so we'll need to update wiki.nginx.org to point to a new address. Would you be the one to do this? Cliff On Tue, 2013-04-23 at 00:40 +0400, Maxim Dounin wrote: > Hello! > > On Mon, Apr 22, 2013 at 11:27:43AM +0100, Jonathan Matthews wrote: > > > As per subject, http://wiki.nginx.org/Install#Source_Releases points > > towards 1.2.6. > > > > I don't personally have the ability to update the wiki (I'm not sure > > if non @nginx.org people ever do?) > > Yep, the page looks outdated. > > Cliff, could you please give me a right to edit protected pages on > the wiki, including the Install one? Wiki user is MaximDounin. 
> From tdgh2323 at hotmail.com Tue Apr 23 02:47:46 2013 From: tdgh2323 at hotmail.com (Joseph Cabezas) Date: Tue, 23 Apr 2013 02:47:46 +0000 Subject: nginx cache, worth it to put it on a ramdisk? Message-ID: Hello, I have a reverse proxy with a working cache set up for static content. As traffic increases, I am wondering if it is worth it to put the cache on a ramdisk, or whether the normal file caching/buffering in RAM by the Linux kernel would be sufficient? The size of the cache is 30MB. Thanks Joseph -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 23 05:58:08 2013 From: nginx-forum at nginx.us (mex) Date: Tue, 23 Apr 2013 01:58:08 -0400 Subject: nginx cache, worth it to put it on a ramdisk? In-Reply-To: References: Message-ID: <6f1462de95e29d951c3ebac0a7a31c42.NginxMailingListEnglish@forum.nginx.org> I made some benchmarks lately and it looks like it doesn't matter for smaller caches, since OS caching is smart enough. If you really want to know, just test it yourself. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238562,238565#msg-238565 From ianevans at digitalhit.com Tue Apr 23 06:11:38 2013 From: ianevans at digitalhit.com (Ian M. Evans) Date: Tue, 23 Apr 2013 02:11:38 -0400 Subject: nginx tips for localhost development In-Reply-To: <20130415174922.GB16160@craic.sysops.org> References: <20130415174922.GB16160@craic.sysops.org> Message-ID: <958f29170ecb812c1c0b3785fc62793c.squirrel@www.digitalhit.com> On Mon, April 15, 2013 1:49 pm, Francis Daly wrote: > On the nginx side, there should be approximately nothing special to do. > > The nginx.conf that works on your production server can be put onto your > development server; "listen" directives which specify ip addresses may > need to be changed, and file names may need to be changed if you have > a different layout. [snip] Thanks for the advice.
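For anyone who does want to run the ramdisk-vs-page-cache comparison mex suggests in the thread above, a minimal sketch of the tmpfs variant follows; the mount point, zone name, ports, and backend address here are all hypothetical, not taken from the poster's setup:

```nginx
# Assumes a small tmpfs has been mounted first, e.g.:
#   mount -t tmpfs -o size=64m tmpfs /var/cache/nginx-ram
# Point the cache at it, then benchmark against an identical
# proxy_cache_path on disk and compare.
proxy_cache_path /var/cache/nginx-ram levels=1:2 keys_zone=ramcache:10m
                 max_size=30m inactive=60m;

server {
    listen 8080;
    location / {
        proxy_cache ramcache;
        proxy_pass  http://127.0.0.1:8081;
    }
}
```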
From mdounin at mdounin.ru Tue Apr 23 06:18:04 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Apr 2013 10:18:04 +0400 Subject: Nginx wiki points to 1.2.6 as being latest stable In-Reply-To: <1366683033.30215.0.camel@portable-evil> References: <20130422204013.GP92338@mdounin.ru> <1366683033.30215.0.camel@portable-evil> Message-ID: <20130423061804.GS92338@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 07:10:33PM -0700, Cliff Wells wrote: > Will do. Thanx. > Also, I'm going to be moving to a new colo shortly, so we'll > need to update wiki.nginx.org to point to a new address. Would you be > the one to do this? Better person is Sergey Budnevitch , cc'd. > > Cliff > > On Tue, 2013-04-23 at 00:40 +0400, Maxim Dounin wrote: > > Hello! > > > > On Mon, Apr 22, 2013 at 11:27:43AM +0100, Jonathan Matthews wrote: > > > > > As per subject, http://wiki.nginx.org/Install#Source_Releases points > > > towards 1.2.6. > > > > > > I don't personally have the ability to update the wiki (I'm not sure > > > if non @nginx.org people ever do?) > > > > Yep, the page looks outdated. > > > > Cliff, could you please give me a right to edit protected pages on > > the wiki, including the Install one? Wiki user is MaximDounin. > > > > > -- Maxim Dounin http://nginx.org/en/donation.html From eswar7028 at gmail.com Tue Apr 23 06:29:22 2013 From: eswar7028 at gmail.com (ESWAR RAO) Date: Tue, 23 Apr 2013 11:59:22 +0530 Subject: Reg. nginx_tcp_proxy_module Message-ID: Hi All, I have a below setup: netcat client (nc localhost 8081) =====>nginx server(8081) with tcp_proxy module=====>2 netcat servers(8031 and 8032) $ nc localhost 8081 biiiiiiiiiiiiiiiiii $ nc -lk 8031 biiiiiiiiiiiiiiiiii $ nc -lk 8032 If I kill the process $ nc -lk 8031, the client is also getting killed. I expect the nginx server would read 0 bytes upon closing the connection to 8031 server and it detects the server failover and it establishes the connection to 8032 server so that the clients won't experience any downtime. 
Since the client establishes connection to only nginx at 8081, it should be shielded from failover of 8031 server. Can anyone please help me if my understanding is wrong ?? Thanks Eswar Rao -------------- next part -------------- An HTML attachment was scrubbed... URL: From joerg.kastning at synaxon.de Tue Apr 23 06:31:12 2013 From: joerg.kastning at synaxon.de (=?ISO-8859-1?Q?J=F6rg_Kastning?=) Date: Tue, 23 Apr 2013 08:31:12 +0200 Subject: Issue with my proxy configuration In-Reply-To: References: Message-ID: Hi. I tried both and it gets a little bit better. But now I get a 500-error when trying to access http://host.domainname.de. The url returned is http://host.domainname.de/AppName/main.aspx?svid=7. At first I thought the Webserver is broken but I still can access the Webapplication via http://:8081/AppName from my LAN. Regards Joerg 2013/4/22 Andrey Feldman > Hi. > Try something like: > location / { > proxy_pass http://:8081/AppName/; > } > > Or > > location / { > proxy_pass http://:8081/AppName/$uri; > } > > > On Mon, Apr 22, 2013 at 5:17 PM, J?rg Kastning wrote: > >> Hello. >> >> In our lan I can reach my webapplication with an url like http:// >> :8081/AppName. I try to configure nginx to forward requests >> from wan site to this server. All firewall policies needed are configured >> and the dns entry for access from wan are set and reachable. >> >> I tried the following configuration in /etc/nginx/nginx.conf: >> >> server { >> listen 80; >> server_name host.domainname.de; >> rewrite ^ https://$server_name$request_uri? 
permanent; >> } >> server { >> listen 443; >> server_name host.domainname.de; >> >> ssl on; >> ssl_certificate domainname.de.pem; >> ssl_certificate_key domainname.de.key; >> >> ssl_session_timeout 5m; >> >> ssl_protocols SSLv3 TLSv1; >> ssl_ciphers >> ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; >> ssl_prefer_server_ciphers on; >> location / { >> proxy_pass http://:8081/AppName; >> } >> } >> >> With this configuration the webapplication ist available when I use the >> url http://host.domainname.de/AppName. But I want to access the webapp >> with the url http://host.domainname.de. >> Is this possible? >> >> Regards, >> Joerg >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > -- > Andrey Feldman > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 23 06:41:17 2013 From: nginx-forum at nginx.us (mex) Date: Tue, 23 Apr 2013 02:41:17 -0400 Subject: Issue with my proxy configuration In-Reply-To: References: Message-ID: <669493d9b179ab8234920e41167d11bb.NginxMailingListEnglish@forum.nginx.org> is your Appserver hostname-aware? your error 500 should come from your appserver; check your logfiles on that part http://wiki.nginx.org/HttpProxyModule Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238537,238570#msg-238570 From mdounin at mdounin.ru Tue Apr 23 07:04:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Apr 2013 11:04:36 +0400 Subject: Reg. nginx_tcp_proxy_module In-Reply-To: References: Message-ID: <20130423070436.GU92338@mdounin.ru> Hello! 
On Tue, Apr 23, 2013 at 11:59:22AM +0530, ESWAR RAO wrote: > Hi All, > > I have a below setup: > > netcat client (nc localhost 8081) =====>nginx server(8081) with tcp_proxy > module=====>2 netcat servers(8031 and 8032) > > $ nc localhost 8081 > biiiiiiiiiiiiiiiiii > > $ nc -lk 8031 > biiiiiiiiiiiiiiiiii > > $ nc -lk 8032 > > > If I kill the process $ nc -lk 8031, the client is also getting killed. > > I expect the nginx server would read 0 bytes upon closing the connection to > 8031 server and it detects the server failover and it establishes the > connection to 8032 server so that the clients won't experience any > downtime. Since the client establishes connection to only nginx at 8081, it > should be shielded from failover of 8031 server. > > Can anyone please help me if my understanding is wrong ?? While I'm not really familiar with the 3rd party tcp_proxy module, "reestablishing" the connection would be really unexpected behaviour - the module doesn't know anything about the protocol and can't assume there is no state associated with a connection on the client and/or on the server. That is, just dropping the connection looks correct. What the module can do is select a working server at connection establishment. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Apr 23 08:12:14 2013 From: nginx-forum at nginx.us (gadh) Date: Tue, 23 Apr 2013 04:12:14 -0400 Subject: which version to use in production - 1.2.x or 1.3.x In-Reply-To: <20130422162236.GD92338@mdounin.ru> References: <20130422162236.GD92338@mdounin.ru> Message-ID: <62ddc74c4ae92a34f5b228e68129bb2c.NginxMailingListEnglish@forum.nginx.org> Thanks, where can I see the API changes between 1.2.x and 1.3.x ?
i currently use subrequest in 1.2.8 and in 1.3.16 the filter module is not called and the browser waits forever (so it does not go to backend also) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238512,238573#msg-238573 From mdounin at mdounin.ru Tue Apr 23 14:14:56 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Apr 2013 18:14:56 +0400 Subject: which version to use in production - 1.2.x or 1.3.x In-Reply-To: <62ddc74c4ae92a34f5b228e68129bb2c.NginxMailingListEnglish@forum.nginx.org> References: <20130422162236.GD92338@mdounin.ru> <62ddc74c4ae92a34f5b228e68129bb2c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130423141453.GB95730@mdounin.ru> Hello!
On Tue, Apr 23, 2013 at 04:12:14AM -0400, gadh wrote: > thanks, where can i see the API changes between 1.2.x and 1.3.x ? i > currently use subrequest in 1.2.8 and in 1.3.16 the filter module is not > called and the browser waits forever (so it does not go to backend also) All commits can be seen in the nginx source code repository. Commit logs usually indicate API changes done, either explicitly or implicitly. Note though that not everything you can do with nginx internal structures is a part of the API. And in many cases "it now waits forever" problems are the result of code that worked in previous versions by chance, or due to some implementation details which were changed in new versions. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Apr 23 14:23:14 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Apr 2013 18:23:14 +0400 Subject: Emulate SSI 'exec cmd' with nginx In-Reply-To: References: Message-ID: <20130423142314.GC95730@mdounin.ru> Hello! On Mon, Apr 22, 2013 at 06:45:55PM -0400, lpr wrote: > Dear all > > Trying to move my pages from Apache to nginx (1.2.1 on Debian stable with > backports), I ran into the problem of having used SSI's 'exec cmd' quite > intensively for more than a decade. > > What is the best and easiest way to emulate 'exec cmd' with nginx? > > For example, in my footers I make use of dynamically changing between ENglish > and GErman with a shell script as easy as > > > > with setlanguage.sh as > > echo "Deutsch" > > When I try using , the script is > executed. However, instead of just adding the link, nginx includes the > German web-page fully. > > Is there an easy way to get the same functionality with nginx? There is no "exec" SSI command support in nginx. In this particular case I would recommend using if with regular expression and echo commands instead. Something like this should work: /GE/">Deutsch (Untested.)
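[The archive's HTML scrubber ate the SSI markup in both the question and the suggestion above. A hedged reconstruction of the kind of if/else construct Maxim is describing, using nginx's ssi module syntax - the /EN/ and /GE/ path prefixes are assumptions based on the "ENglish"/"GErman" wording, and this sketch is as untested as the original:

```html
<!--# if expr="$document_uri = /^\/EN\//" -->
<a href="/GE/">Deutsch</a>
<!--# else -->
<a href="/EN/">English</a>
<!--# endif -->
```

This only works in a location with "ssi on;" set, and unlike 'exec cmd' it cannot run an arbitrary shell script - the logic has to be expressed with the variables nginx already has.]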
-- Maxim Dounin http://nginx.org/en/donation.html From ryan.parrish at corelogicllc.com Tue Apr 23 15:05:58 2013 From: ryan.parrish at corelogicllc.com (Ryan Parrish) Date: Tue, 23 Apr 2013 11:05:58 -0400 Subject: s-maxage not being honored with proxy_cache Message-ID: I have a slow backend application that I'm using nginx to provide the authentication and caching for. It's been working great however there is one nagging issue that I cannot seem to resolve, when the backend app sets a s-maxage and a maxage Cache-Control, nginx only seems to honor the maxage and expires the cache with its value. An example response from the backend is like this... Cache-Control: max-age=60, s-maxage=3600, public, must-revalidate My idea here is I only want the client to cache this data for a short amount of time before checking in with me it see if it's still valid. The data usually wont be changing that often so I want nginx to cache it for an hour, but in the event it does I use the excellent nginx-cache-purge script in my backend app to invalidate the cache and the next time a client checks in (after 60 seconds) they will get the new data. However in all my testing and usage I will only get a cache HIT for 60 seconds after the first request to a resource, after 60 seconds it will be EXPIRED then it will go to the backend again. Am I missing something in the Cache-Control that is causing this behavior? -- Ryan Parrish -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 23 20:20:06 2013 From: nginx-forum at nginx.us (duv) Date: Tue, 23 Apr 2013 16:20:06 -0400 Subject: new directive: proxy_next_tries N Message-ID: <9c90c15eecef60badeaf8bf7a5fc7c75.NginxMailingListEnglish@forum.nginx.org> Short description: Will attempt only N upstreams and then fail with last error I didn't commit this to any trunk/branch, This is my first patch so I'm not sure how it goes. 
I submitted a gist of it to: https://gist.github.com/shai-d/5446961 Hope you will find it useful. Shai Duvdevani. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238589,238589#msg-238589 From nginx-forum at nginx.us Tue Apr 23 23:23:26 2013 From: nginx-forum at nginx.us (davidjb) Date: Tue, 23 Apr 2013 19:23:26 -0400 Subject: Feature extension to auth_request module: FastCGI authorizer In-Reply-To: <20130422163926.GE92338@mdounin.ru> References: <20130422163926.GE92338@mdounin.ru> Message-ID: <429e2bbe0f5ec82a2eac739080c7bce1.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > For me it doesn't looks like what you do actually matches FastCGI > Authorizer specification. Even if we ignore the fact that body > isn't handled properly, and authorizer mode isn't advertized to > FastCGI. > > Most of the code in the patch seems to be dedicated to special > processing of Variable-* headers. But they don't seem to do what > they are expected to do as per FastCGI spec - with your code the > "Variable-AUTH_METHOD" header returned by an authorizer will > result in "AUTH_METHOD" header being passed to the application, > i.e. it will be available in HTTP_AUTH_METHOD variable in > subsequent FastCGI requests - instead of AUTH_METHOD variable as > per FastCGI spec. It's still very much a work in progress (fwiw, I started using Nginx last week). On another read of the FastCGI specification, I do agree that your interpretation is right - I was interpreting part of the specification without understanding the rest of the definitions. So, in that regard it could certainly be improved. However, if strictly adhering to the FastCGI spec, this would thus force the backend application to be FastCGI as well -- and this is why my code does what it does. 
The authorisation technology (Shibboleth) I'm working with needs to inject user-related variables into the request going to a backend application, and for ease of use/performance, I don't want to have to re-route via a FastCGI application. So perhaps on balance, this functionality may well be better suited to its own add-on module. > > Please also note that it's bad idea to try to modify input headers - > this is not something expected to be done by modules, and will > result in a segmentation fault if you'll try to do it in a > subrequest. Okay, but what of a module like "Headers more" -- which allows you to manipulate any headers, incoming or outgoing. Should something like this not exist for Nginx, or is it just considered 'bad practice'? Either way, I'd be curious about both the code I've written, and also as I'm relying on the "Headers more" module to drop certain request headers. As for the code I've written, the input headers are being modified after the subrequest has been completed, and this appears to succeed. So no seg faults so far. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238523,238591#msg-238591 From nginx-forum at nginx.us Wed Apr 24 10:03:49 2013 From: nginx-forum at nginx.us (mex) Date: Wed, 24 Apr 2013 06:03:49 -0400 Subject: Proxy cache In-Reply-To: References: Message-ID: <288f0e4a95588a95e4fd9db76055e187.NginxMailingListEnglish@forum.nginx.org> > Is this possible? Yes. Depending on your setup it could be worth trying to use nginx as a static server for your download files, esp. if you run your proxy_pass location on the same server. The OS cache can be as fast as a RAM cache via tmpfs.
depending on the amount of files in /download and filesize it can be useful to tweak your setup (buffers, sendfile etc) http://wiki.nginx.org/HttpProxyModule#proxy_cache Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238485,238595#msg-238595 From mdounin at mdounin.ru Wed Apr 24 10:37:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Apr 2013 14:37:55 +0400 Subject: Feature extension to auth_request module: FastCGI authorizer In-Reply-To: <429e2bbe0f5ec82a2eac739080c7bce1.NginxMailingListEnglish@forum.nginx.org> References: <20130422163926.GE92338@mdounin.ru> <429e2bbe0f5ec82a2eac739080c7bce1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130424103755.GB10443@mdounin.ru> Hello! On Tue, Apr 23, 2013 at 07:23:26PM -0400, davidjb wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > For me it doesn't looks like what you do actually matches FastCGI > > Authorizer specification. Even if we ignore the fact that body > > isn't handled properly, and authorizer mode isn't advertized to > > FastCGI. > > > > Most of the code in the patch seems to be dedicated to special > > processing of Variable-* headers. But they don't seem to do what > > they are expected to do as per FastCGI spec - with your code the > > "Variable-AUTH_METHOD" header returned by an authorizer will > > result in "AUTH_METHOD" header being passed to the application, > > i.e. it will be available in HTTP_AUTH_METHOD variable in > > subsequent FastCGI requests - instead of AUTH_METHOD variable as > > per FastCGI spec. > > It's still very much a work in progress (fwiw, I started using Nginx last > week). On another read of the FastCGI specification, I do agree that your > interpretation is right - I was interpreting part of the specification > without understanding the rest of the definitions. So, in that regard it > could certainly be improved. 
> > However, if strictly adhering to the FastCGI spec, this would thus force the > backend application to be FastCGI as well -- and this is why my code does > what it does. The authorisation technology (Shibboleth) I'm working with > needs to inject user-related variables into the request going to a backend > application, and for ease of use/performance, I don't want to have to > re-route via a FastCGI application. > > So perhaps on balance, this functionality may well be better suited to its > own add-on module. Note that if you just need to pass some variables you know about - it can be easily done with auth_request_set and fastcgi_param directives. > > Please also note that it's bad idea to try to modify input headers - > > this is not something expected to be done by modules, and will > > result in a segmentation fault if you'll try to do it in a > > subrequest. > > Okay, but what of a module like "Headers more" -- which allows you to > manipulate any headers, incoming or outgoing. Should something like this > not exist for Nginx or is it just considered 'bad practice'? Either way, > I'd be curious for both the code I've written, and also as I'm relying on > the "Headers more" module to drop certain request headers. This is considered bad and unsupported by the nginx core. You may take a look at the headers more module for a number of quirks it uses to make this work. The recommended approach is to use variables and appropriate backend protocol module directives (proxy_set_header, fastcgi_param, ...) to pass them to a backend.
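[A minimal sketch of the auth_request_set/fastcgi_param approach described above. The location names, the X-Auth-User response header, and the AUTH_USER parameter are all hypothetical - the point is only the plumbing: capture a header from the auth subrequest's response into a variable, then pass the variable to the backend.

```nginx
location /app/ {
    auth_request /auth;
    # copy a header set by the auth subrequest's response into a variable
    auth_request_set $auth_user $upstream_http_x_auth_user;

    # hand it to the backend explicitly (fastcgi_param here;
    # proxy_set_header works the same way for an HTTP backend)
    include fastcgi_params;
    fastcgi_param AUTH_USER $auth_user;
    fastcgi_pass 127.0.0.1:9000;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:8080/check;
    # the auth subrequest should not carry the client body
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}
```

This keeps header manipulation out of the input headers entirely, which is why it doesn't run into the segfault problem discussed above.]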
An example of similar functionality in nginx core: proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header http://nginx.org/en/docs/http/ngx_http_proxy_module.html#variables -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Apr 24 11:41:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Apr 2013 15:41:59 +0400 Subject: s-maxage not being honored with proxy_cache In-Reply-To: References: Message-ID: <20130424114159.GD10443@mdounin.ru> Hello! On Tue, Apr 23, 2013 at 11:05:58AM -0400, Ryan Parrish wrote: > I have a slow backend application that I'm using nginx to provide > the authentication and caching for. It's been working great however there > is one nagging issue that I cannot seem to resolve, when the backend app > sets a s-maxage and a maxage Cache-Control, nginx only seems to honor the > maxage and expires the cache with its value. > > An example response from the backend is like this... > > Cache-Control: max-age=60, s-maxage=3600, public, must-revalidate > My idea here is I only want the client to cache this data for a short > amount of time before checking in with me it see if it's still valid. The > data usually wont be changing that often so I want nginx to cache it for an > hour, but in the event it does I use the excellent nginx-cache-purge script > in my backend app to invalidate the cache and the next time a client checks > in (after 60 seconds) they will get the new data. Note: you will not be able to purge shared caches outside of your control, so this might not work as you expect. > However in all my testing and usage I will only get a cache HIT for 60 > seconds after the first request to a resource, after 60 seconds it will be > EXPIRED then it will go to the backend again. Am I missing something in > the Cache-Control that is causing this behavior? As of now nginx doesn't handle s-maxage.
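[An illustrative backend response for this situation, assuming the backend can emit extra headers: X-Accel-Expires tells nginx's proxy cache how long to keep the response, and nginx hides X-Accel-* headers from clients by default, so the short max-age still governs browsers.

```http
HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=60, public, must-revalidate
X-Accel-Expires: 3600
```

With this pair, clients revalidate after 60 seconds while the nginx cache holds the entry for an hour, which matches the split that s-maxage was meant to express.]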
Trivial solution is to use X-Accel-Expires to specify the expiration time for your nginx cache. This should also better match the use case you've described (as it will only ask your nginx cache to cache longer, not all shared caches in the world). -- Maxim Dounin http://nginx.org/en/donation.html From ryan.parrish at corelogicllc.com Wed Apr 24 13:51:20 2013 From: ryan.parrish at corelogicllc.com (Ryan Parrish) Date: Wed, 24 Apr 2013 09:51:20 -0400 Subject: s-maxage not being honored with proxy_cache In-Reply-To: <20130424114159.GD10443@mdounin.ru> References: <20130424114159.GD10443@mdounin.ru> Message-ID: On Wed, Apr 24, 2013 at 7:41 AM, Maxim Dounin wrote: > Hello! > > On Tue, Apr 23, 2013 at 11:05:58AM -0400, Ryan Parrish wrote: > > > I have a slow backend application that I'm using nginx to provide > > the authentication and caching for.
It's been working great however > there > > is one nagging issue that I cannot seem to resolve, when the backend app > > sets a s-maxage and a maxage Cache-Control, nginx only seems to honor the > > maxage and expires the cache with its value. > > > > An example response from the backend is like this... > > > > Cache-Control: max-age=60, s-maxage=3600, public, must-revalidate > > My idea here is I only want the client to cache this data for a short > > amount of time before checking in with me it see if it's still valid. > The > > data usually wont be changing that often so I want nginx to cache it for > an > > hour, but in the event it does I use the excellent nginx-cache-purge > script > > in my backend app to invalidate the cache and the next time a client > checks > > in (after 60 seconds) they will get the new data. > > Note: you will not be able to purge shared chaches outside of your > control, so this might not work as you expect. > > > However in all my testing and usage I will only get a cache HIT for 60 > > seconds after the first request to a resource, after 60 seconds it will > be > > EXPIRED then it will go to the backend again. Am I missing something in > > the Cache-Control that is causing this behavior? > > As of now nginx doesn't handle s-maxage. > > Trivial solution is to use X-Accel-Expires to specifi expiration > time for your nginx cache. This should also better match a use > case you've described (as it will only ask your nginx cache to > cache longer, not all shared caches in the world). > > That worked perfectly, thank you! -- -- Ryan Parrish Chief Technologist, Member CoreLogic LLC M: (408)966-4673 www.corelogicllc.com solutions for the extended enterprise -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Apr 24 14:19:39 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Apr 2013 18:19:39 +0400 Subject: nginx-1.4.0 Message-ID: <20130424141939.GJ10443@mdounin.ru> Changes with nginx 1.4.0 24 Apr 2013 *) Bugfix: nginx could not be built with the ngx_http_perl_module if the --with-openssl option was used; the bug had appeared in 1.3.16. *) Bugfix: in a request body handling in the ngx_http_perl_module; the bug had appeared in 1.3.9. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Apr 24 15:08:11 2013 From: nginx-forum at nginx.us (motto) Date: Wed, 24 Apr 2013 11:08:11 -0400 Subject: upload filename with double quote failure Message-ID: <47d4610dfc1f2805b1a7a9d7148f406c.NginxMailingListEnglish@forum.nginx.org> Problem uploading a file named test"try.txt (with a double quote inside the file name) under the Firefox browser. It passes the file content to the backend as the file name. {"note": [""], "upfile": ["erthtyjt\nrthy\ntrh\nrt\nh\nrt\nh\nrt\nh\nrth\nr\nth\nr\nth\n\nt\n\n"]} Here is the debug output from nginx. http://pastebin.com/P6nkNNj3 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238613,238613#msg-238613 From jaderhs5 at gmail.com Wed Apr 24 15:37:28 2013 From: jaderhs5 at gmail.com (Jader H. Silva) Date: Wed, 24 Apr 2013 12:37:28 -0300 Subject: SMTP proxy and insert header Message-ID: Hello. I'd like to configure nginx as an SMTP proxy and insert a custom SMTP header (e.g. "X-My-Script: bla") in the message before sending it to the backend. Is there any way to configure nginx to insert a custom header in an email message? Jader H. Silva -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Wed Apr 24 15:45:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Apr 2013 19:45:16 +0400 Subject: upload filename with double quote failure In-Reply-To: <47d4610dfc1f2805b1a7a9d7148f406c.NginxMailingListEnglish@forum.nginx.org> References: <47d4610dfc1f2805b1a7a9d7148f406c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130424154516.GP10443@mdounin.ru> Hello! On Wed, Apr 24, 2013 at 11:08:11AM -0400, motto wrote: > Problem uploading file name - test"try.txt (with double quotes inside file > name) under firefox browser. > It pass to backend file content as a file name. > > {"note": [""], "upfile": > ["erthtyjt\nrthy\ntrh\nrt\nh\nrt\nh\nrt\nh\nrth\nr\nth\nr\nth\n\nt\n\n"]} > > here is debug from nginx. > http://pastebin.com/P6nkNNj3 It doesn't look like there is anything wrong in nginx. Try looking into what happens on your backend. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Apr 24 15:57:03 2013 From: nginx-forum at nginx.us (motto) Date: Wed, 24 Apr 2013 11:57:03 -0400 Subject: upload filename with double quote failure In-Reply-To: <20130424154516.GP10443@mdounin.ru> References: <20130424154516.GP10443@mdounin.ru> Message-ID: <79be6ffc21f864c762692c8aaa8ce1a8.NginxMailingListEnglish@forum.nginx.org> as for nginx backend for debug purpose I use python-tornado script, which will just give back params, which nginx pass to it. so here it is: upload under Chrome: {u'upfile.path': ['/usr/local/apps/Opus/temp/upload/6/0009495446'], u'upfile.size': ['51'], u'upfile.name': ['test%22try.txt'], u'note': [''], u'upfile.md5': ['5a089ad5ea93048b0d492d53626bf76b'], u'upfile.content_type': ['text/plain']} upload under Firefox: {u'note': [''], u'upfile': ['erthtyjt\nrthy\ntrh\nrt\nh\nrt\nh\nrt\nh\nrth\nr\nth\nr\nth\n\nt\n\n']} Chrome does character encoding, but Firefox - just backspace double quote sign. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238613,238618#msg-238618 From mdounin at mdounin.ru Wed Apr 24 16:42:20 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Apr 2013 20:42:20 +0400 Subject: SMTP proxy and insert header In-Reply-To: References: Message-ID: <20130424164220.GR10443@mdounin.ru> Hello! On Wed, Apr 24, 2013 at 12:37:28PM -0300, Jader H. Silva wrote: > Hello. > I'd like to configure nginx as a SMTP proxy and to insert a custom SMTP > header (e.g. "X-My-Script: bla") in the message before sending it to > backend. > > Is there any way to configure nginx to insert a custom header in an email > message? No. Messages aren't processed by nginx smtp proxy, it instead authenticates a user and passed the connection to a backend server. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Apr 24 17:36:36 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Apr 2013 21:36:36 +0400 Subject: upload filename with double quote failure In-Reply-To: <79be6ffc21f864c762692c8aaa8ce1a8.NginxMailingListEnglish@forum.nginx.org> References: <20130424154516.GP10443@mdounin.ru> <79be6ffc21f864c762692c8aaa8ce1a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130424173636.GU10443@mdounin.ru> Hello! On Wed, Apr 24, 2013 at 11:57:03AM -0400, motto wrote: > as for nginx backend for debug purpose I use python-tornado script, which > will just give back params, which nginx pass to it. > so here it is: > upload under Chrome: > {u'upfile.path': ['/usr/local/apps/Opus/temp/upload/6/0009495446'], > u'upfile.size': ['51'], u'upfile.name': ['test%22try.txt'], u'note': [''], > u'upfile.md5': ['5a089ad5ea93048b0d492d53626bf76b'], u'upfile.content_type': > ['text/plain']} > upload under Firefox: > {u'note': [''], u'upfile': > ['erthtyjt\nrthy\ntrh\nrt\nh\nrt\nh\nrt\nh\nrth\nr\nth\nr\nth\n\nt\n\n']} > > Chrome does character encoding, but Firefox - just backspace double quote > sign. 
What Firefox does is actually correct per RFC, as handling of multipart/form-data forms should be done per MIME encoding rules. See http://tools.ietf.org/html/rfc2388 for more details. Though anyway it's not related to nginx - it doesn't do anything with a request body provided by a browser, it just passes it as is to a backend. -- Maxim Dounin http://nginx.org/en/donation.html From kworthington at gmail.com Thu Apr 25 00:46:48 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 24 Apr 2013 20:46:48 -0400 Subject: nginx-1.4.0 In-Reply-To: <20130424141939.GJ10443@mdounin.ru> References: <20130424141939.GJ10443@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.4.0 for Windows http://goo.gl/Lkmr1 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Wed, Apr 24, 2013 at 10:19 AM, Maxim Dounin wrote: > Changes with nginx 1.4.0 24 Apr > 2013 > > *) Bugfix: nginx could not be built with the ngx_http_perl_module if > the > --with-openssl option was used; the bug had appeared in 1.3.16. > > *) Bugfix: in a request body handling in the ngx_http_perl_module; the > bug had appeared in 1.3.9. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 25 06:58:40 2013 From: nginx-forum at nginx.us (TECK) Date: Thu, 25 Apr 2013 02:58:40 -0400 Subject: Full WebDAV support? 
Message-ID: <2d35f7e9f774d0106807d5b8945eca9b.NginxMailingListEnglish@forum.nginx.org> Hi everyone, I was wondering if there are any plans to add full WebDAV support to the native nginx module? Right now I use ngx_http_dav_ext_module.c as a complement, and I'm wondering why the devs do not add the missing commands. Regards, Floren Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238626,238626#msg-238626 From nginx-forum at nginx.us Thu Apr 25 07:06:36 2013 From: nginx-forum at nginx.us (motto) Date: Thu, 25 Apr 2013 03:06:36 -0400 Subject: upload filename with double quote failure In-Reply-To: <20130424173636.GU10443@mdounin.ru> References: <20130424173636.GU10443@mdounin.ru> Message-ID: <829f18e0bf69b8259b9af6ed70496d48.NginxMailingListEnglish@forum.nginx.org> Sorry to be annoying, here is part of the nginx config: # Upload form should be submitted to this location location /upload_test { upload_pass @test; upload_store /usr/local/apps/Opus/temp/upload 1; upload_store_access user:r; upload_set_form_field $upload_field_name.name "$upload_file_name"; upload_set_form_field $upload_field_name.content_type "$upload_content_type"; upload_set_form_field $upload_field_name.path "$upload_tmp_path"; upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5"; upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size"; upload_pass_form_field "^(.*)"; #upload_cleanup 400 404 499 500-505; upload_cleanup 404; } # Pass altered request body to a backend location @test { proxy_pass http://localhost:8080; } In the Firefox upload case, as you can see above, only the "upfile" parameter is available, while upfile.path, upfile.name and the others are missing. That cannot be related to the backend, as I used the same backend while uploading with the same form under both Chrome and Firefox in the above example. Thank you.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238613,238627#msg-238627 From joerg.kastning at synaxon.de Thu Apr 25 07:08:39 2013 From: joerg.kastning at synaxon.de (=?ISO-8859-1?Q?J=F6rg_Kastning?=) Date: Thu, 25 Apr 2013 09:08:39 +0200 Subject: Include additional files Message-ID: Hi. I have some trouble cleaning up my /etc/nginx/nginx.conf. I have several upstream and server blocks in this file and it becomes a mess. So I tried to write each configuration in a separate file, saved in /etc/nginx/conf.d/loki.conf for example, and include them in nginx.conf. I include the file with the following entry in nginx.conf: include /etc/nginx/conf.d/loki.conf; But when reloading/restarting nginx I get the following error: [emerg] 15941#0: duplicate upstream "loadbalancer" in /etc/nginx/conf.d/loki.conf:1 I looked around in the Wiki but found no hint. How can I store my configurations in separate files and include them? Greetz Joerg -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 25 07:15:45 2013 From: nginx-forum at nginx.us (motto) Date: Thu, 25 Apr 2013 03:15:45 -0400 Subject: Include additional files In-Reply-To: References: Message-ID: <907bd723c8fff2b3fc5f6cfdfac48610.NginxMailingListEnglish@forum.nginx.org> The upstream name should be unique across the entire config; you could define it in nginx.conf and then reference it by name from your different includes,
like: upstream loadbalancer { server 192.168.0.1:8080; server 192.168.0.2:8080; server 192.168.0.3:8080; server 192.168.0.4:8080; } then, in a certain location (in an included file, for example): proxy_pass http://loadbalancer; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238628,238629#msg-238629 From mdounin at mdounin.ru Thu Apr 25 10:42:23 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Apr 2013 14:42:23 +0400 Subject: upload filename with double quote failure In-Reply-To: <829f18e0bf69b8259b9af6ed70496d48.NginxMailingListEnglish@forum.nginx.org> References: <20130424173636.GU10443@mdounin.ru> <829f18e0bf69b8259b9af6ed70496d48.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130425104223.GB10443@mdounin.ru> Hello! On Thu, Apr 25, 2013 at 03:06:36AM -0400, motto wrote: > Sorry to be annoying, here is part of the nginx config: > > # Upload form should be submitted to this location > location /upload_test { > upload_pass @test; > upload_store /usr/local/apps/Opus/temp/upload 1; > upload_store_access user:r; > upload_set_form_field $upload_field_name.name "$upload_file_name"; > upload_set_form_field $upload_field_name.content_type > "$upload_content_type"; > upload_set_form_field $upload_field_name.path "$upload_tmp_path"; > upload_aggregate_form_field "$upload_field_name.md5" > "$upload_file_md5"; > upload_aggregate_form_field "$upload_field_name.size" > "$upload_file_size"; > upload_pass_form_field "^(.*)"; > #upload_cleanup 400 404 499 500-505; > upload_cleanup 404; > } > > # Pass altered request body to a backend > location @test { > proxy_pass http://localhost:8080; > } > > In the Firefox upload case, as you can see above, only the "upfile" parameter > is available, while upfile.path, upfile.name and the others are missing. > That cannot be related to the backend, as I used the same backend while > uploading with the same form under both Chrome and Firefox in the example above. > Thank you. Ah, ok, I understand now.
You are using upload module by Valery Kholodkov (http://grid.net.ru/nginx/upload.en.html). It indeed changes the request body and might introduce the problem. You may want to make sure you are able to reproduce the problem with latest version of the upload module and report the problem to the module author. -- Maxim Dounin http://nginx.org/en/donation.html From jefftk at google.com Thu Apr 25 21:02:00 2013 From: jefftk at google.com (Jeff Kaufman) Date: Thu, 25 Apr 2013 17:02:00 -0400 Subject: Announcing ngx_pagespeed beta 1.5.27.1 Message-ID: The Nginx version of PageSpeed is now ready for production use. You can see it in action with examples on our demonstration site: http://ngxpagespeed.com Read more in our official announcement: http://googledevelopers.blogspot.com/2013/04/speed-up-your-sites-with-pagespeed-for.html Get started with ngx_pagespeed or upgrade to beta with the installation guide: https://github.com/pagespeed/ngx_pagespeed#readme Join our mailing lists to keep up with changes in ngx_pagespeed: https://groups.google.com/forum/#!forum/ngx-pagespeed-discuss https://groups.google.com/forum/#!forum/ngx-pagespeed-announce Thanks to everyone who helped with getting us to beta, especially Otto van der Schaff, Chai Zhenhua, Weibin Yao, Junmin Xiong, and Ben Noordhuis. Jeff Kaufman Google From cnst++ at FreeBSD.org Thu Apr 25 21:43:56 2013 From: cnst++ at FreeBSD.org (Constantine A. Murenin) Date: Thu, 25 Apr 2013 14:43:56 -0700 Subject: Announcing ngx_pagespeed beta 1.5.27.1 In-Reply-To: References: Message-ID: <5179A39C.3050602@FreeBSD.org> But the web-site appears to be down; tested from two distinct locations. 
Cns# http_ping -count 4 -interval 1 http://ngxpagespeed.com/; date http://ngxpagespeed.com/: timed out http://ngxpagespeed.com/: timed out http://ngxpagespeed.com/: timed out http://ngxpagespeed.com/: timed out --- http://ngxpagespeed.com/ http_ping statistics --- 4 fetches started, 0 completed (0%), 0 failures (0%), 4 timeouts (100%) Thu Apr 25 14:41:29 PDT 2013 Cns# traceroute ngxpagespeed.com; date traceroute to ngxpagespeed.com (67.207.141.173), 64 hops max, 40 byte packets 1 static.33.203.4.46.clients.your-server.de (46.4.203.33) 0.712 ms 3.311 ms 0.507 ms 2 hos-tr2.juniper1.rz13.hetzner.de (213.239.224.33) 0.243 ms hos-tr1.juniper1.rz13.hetzner.de (213.239.224.1) 0.240 ms hos-tr4.juniper2.rz13.hetzner.de (213.239.224.97) 0.242 ms 3 hos-bb2.juniper4.rz2.hetzner.de (213.239.240.138) 2.868 ms 2.846 ms 2.847 ms 4 r1nue1.core.init7.net (77.109.135.101) 2.913 ms 10.420 ms 3.176 ms 5 r1ams1.core.init7.net (77.109.140.25) 25.4 ms 25.49 ms 25.43 ms 6 er1.ams1.nl.above.net (195.69.144.122) 25.284 ms 25.262 ms r1fra2.core.init7.net (77.109.140.49) 17.540 ms 7 xe-1-2-0.mpr1.fra4.de.above.net (80.81.194.26) 6.691 ms 6.735 ms 6.733 ms 8 xe-5-3-0.cr2.lga5.us.above.net (64.125.25.57) 93.279 ms 93.273 ms 99.8 ms 9 xe-3-0-0.cr2.ord2.us.above.net (64.125.31.74) 115.35 ms so-2-1-0.mpr2.lga5.us.above.net (64.125.31.182) 92.991 ms 93.40 ms 10 xe-2-3-0.cr2.ord2.us.above.net (64.125.24.30) 114.972 ms 114.513 ms 114.564 ms 11 xe-0-0-0.cr1.ord2.us.above.net (64.125.28.233) 114.73 ms 117.839 ms 114.441 ms 12 64.124.65.218.allocated.above.net (64.124.65.218) 116.262 ms 116.354 ms 116.227 ms 13 coreb.ord1.rackspace.net (184.106.126.142) 115.435 ms corea.ord1.rackspace.net (184.106.126.140) 115.231 ms coreb.ord1.rackspace.net (184.106.126.142) 115.219 ms 14 core1-CoreA.ord1.rackspace.net (184.106.126.125) 115.323 ms core1-CoreB.ord1.rackspace.net (184.106.126.129) 117.682 ms corea.ord1.rackspace.net (184.106.126.140) 114.820 ms 15 core1-CoreB.ord1.rackspace.net (184.106.126.129) 117.579 
ms core1-CoreA.ord1.rackspace.net (184.106.126.125) 114.855 ms core1-CoreB.ord1.rackspace.net (184.106.126.129) 117.336 ms 16 184.106.126.69 (184.106.126.69) 115.970 ms 116.285 ms 115.802 ms 17 67-207-141-173.static.cloud-ips.com (67.207.141.173) 115.950 ms !C 115.894 ms !C 115.999 ms !C Thu Apr 25 14:41:44 PDT 2013 C. On 2013-04-25 14:02, Jeff Kaufman wrote: > The Nginx version of PageSpeed is now ready for production use. You > can see it in action with examples on our demonstration site: > > http://ngxpagespeed.com > > Read more in our official announcement: > > http://googledevelopers.blogspot.com/2013/04/speed-up-your-sites-with-pagespeed-for.html > > Get started with ngx_pagespeed or upgrade to beta with the installation guide: > > https://github.com/pagespeed/ngx_pagespeed#readme > > Join our mailing lists to keep up with changes in ngx_pagespeed: > > https://groups.google.com/forum/#!forum/ngx-pagespeed-discuss > https://groups.google.com/forum/#!forum/ngx-pagespeed-announce > > Thanks to everyone who helped with getting us to beta, especially Otto > van der Schaff, Chai Zhenhua, Weibin Yao, Junmin Xiong, and Ben > Noordhuis. > > Jeff Kaufman > Google > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From jefftk at google.com Fri Apr 26 01:46:02 2013 From: jefftk at google.com (Jeff Kaufman) Date: Thu, 25 Apr 2013 21:46:02 -0400 Subject: Announcing ngx_pagespeed beta 1.5.27.1 In-Reply-To: <5179A39C.3050602@FreeBSD.org> References: <5179A39C.3050602@FreeBSD.org> Message-ID: Shoot. Fixed. On Thu, Apr 25, 2013 at 5:43 PM, Constantine A. Murenin wrote: > But the web-site appears to be down; tested from two distinct locations. 
> > Cns# http_ping -count 4 -interval 1 http://ngxpagespeed.com/; date > http://ngxpagespeed.com/: timed out > http://ngxpagespeed.com/: timed out > http://ngxpagespeed.com/: timed out > http://ngxpagespeed.com/: timed out > > --- http://ngxpagespeed.com/ http_ping statistics --- > 4 fetches started, 0 completed (0%), 0 failures (0%), 4 timeouts (100%) > Thu Apr 25 14:41:29 PDT 2013 > Cns# traceroute ngxpagespeed.com; date > traceroute to ngxpagespeed.com (67.207.141.173), 64 hops max, 40 byte > packets > 1 static.33.203.4.46.clients.your-server.de (46.4.203.33) 0.712 ms 3.311 > ms 0.507 ms > 2 hos-tr2.juniper1.rz13.hetzner.de (213.239.224.33) 0.243 ms > hos-tr1.juniper1.rz13.hetzner.de (213.239.224.1) 0.240 ms > hos-tr4.juniper2.rz13.hetzner.de (213.239.224.97) 0.242 ms > 3 hos-bb2.juniper4.rz2.hetzner.de (213.239.240.138) 2.868 ms 2.846 ms > 2.847 ms > 4 r1nue1.core.init7.net (77.109.135.101) 2.913 ms 10.420 ms 3.176 ms > 5 r1ams1.core.init7.net (77.109.140.25) 25.4 ms 25.49 ms 25.43 ms > 6 er1.ams1.nl.above.net (195.69.144.122) 25.284 ms 25.262 ms > r1fra2.core.init7.net (77.109.140.49) 17.540 ms > 7 xe-1-2-0.mpr1.fra4.de.above.net (80.81.194.26) 6.691 ms 6.735 ms > 6.733 ms > 8 xe-5-3-0.cr2.lga5.us.above.net (64.125.25.57) 93.279 ms 93.273 ms > 99.8 ms > 9 xe-3-0-0.cr2.ord2.us.above.net (64.125.31.74) 115.35 ms > so-2-1-0.mpr2.lga5.us.above.net (64.125.31.182) 92.991 ms 93.40 ms > 10 xe-2-3-0.cr2.ord2.us.above.net (64.125.24.30) 114.972 ms 114.513 ms > 114.564 ms > 11 xe-0-0-0.cr1.ord2.us.above.net (64.125.28.233) 114.73 ms 117.839 ms > 114.441 ms > 12 64.124.65.218.allocated.above.net (64.124.65.218) 116.262 ms 116.354 ms > 116.227 ms > 13 coreb.ord1.rackspace.net (184.106.126.142) 115.435 ms > corea.ord1.rackspace.net (184.106.126.140) 115.231 ms > coreb.ord1.rackspace.net (184.106.126.142) 115.219 ms > 14 core1-CoreA.ord1.rackspace.net (184.106.126.125) 115.323 ms > core1-CoreB.ord1.rackspace.net (184.106.126.129) 117.682 ms > corea.ord1.rackspace.net 
(184.106.126.140) 114.820 ms > 15 core1-CoreB.ord1.rackspace.net (184.106.126.129) 117.579 ms > core1-CoreA.ord1.rackspace.net (184.106.126.125) 114.855 ms > core1-CoreB.ord1.rackspace.net (184.106.126.129) 117.336 ms > 16 184.106.126.69 (184.106.126.69) 115.970 ms 116.285 ms 115.802 ms > 17 67-207-141-173.static.cloud-ips.com (67.207.141.173) 115.950 ms !C > 115.894 ms !C 115.999 ms !C > Thu Apr 25 14:41:44 PDT 2013 > > C. > > > On 2013-04-25 14:02, Jeff Kaufman wrote: >> >> The Nginx version of PageSpeed is now ready for production use. You >> can see it in action with examples on our demonstration site: >> >> http://ngxpagespeed.com >> >> Read more in our official announcement: >> >> >> http://googledevelopers.blogspot.com/2013/04/speed-up-your-sites-with-pagespeed-for.html >> >> Get started with ngx_pagespeed or upgrade to beta with the installation >> guide: >> >> https://github.com/pagespeed/ngx_pagespeed#readme >> >> Join our mailing lists to keep up with changes in ngx_pagespeed: >> >> https://groups.google.com/forum/#!forum/ngx-pagespeed-discuss >> https://groups.google.com/forum/#!forum/ngx-pagespeed-announce >> >> Thanks to everyone who helped with getting us to beta, especially Otto >> van der Schaff, Chai Zhenhua, Weibin Yao, Junmin Xiong, and Ben >> Noordhuis. >> >> Jeff Kaufman >> Google >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > From joerg.kastning at synaxon.de Fri Apr 26 05:47:13 2013 From: joerg.kastning at synaxon.de (=?ISO-8859-1?Q?J=F6rg_Kastning?=) Date: Fri, 26 Apr 2013 07:47:13 +0200 Subject: Include additional files In-Reply-To: <907bd723c8fff2b3fc5f6cfdfac48610.NginxMailingListEnglish@forum.nginx.org> References: <907bd723c8fff2b3fc5f6cfdfac48610.NginxMailingListEnglish@forum.nginx.org> Message-ID: Ok. So the upstream block has to be in the nginx.conf. I thought I could this one export to a separate file, too. 
I was wondering why I still got the error message after I deleted the upstream block in nginx.conf and had it only in my included file. So the upstream configuration must be in nginx.conf? 2013/4/25 motto > The upstream name must be unique across the entire config; you can define it in > nginx.conf and then reference it by name from your different includes. > like: > upstream loadbalancer { > server 192.168.0.1:8080; > server 192.168.0.2:8080; > server 192.168.0.3:8080; > server 192.168.0.4:8080; > } > > then, in a certain location (in an included file, for example): > proxy_pass http://loadbalancer; > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,238628,238629#msg-238629 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 26 06:05:06 2013 From: nginx-forum at nginx.us (mex) Date: Fri, 26 Apr 2013 02:05:06 -0400 Subject: Include additional files In-Reply-To: References: Message-ID: <7de7e90d2109a080c2bdb354c0951bee.NginxMailingListEnglish@forum.nginx.org> > Ok. So the upstream block has to be in the nginx.conf. I thought I > could > this one export to a separate file, too. Yes, you can (include your upstream config and any other part). You just need to place it in the right context, e.g. inside an http { ... } block and not inside a server { ... } block: http://wiki.nginx.org/HttpUpstreamModule#upstream > > I was wondering why I still got the error message after I deleted the > upstream block in nginx.conf and had it only in my included file. So > the > upstream configuration must be in nginx.conf? Every upstream config across all of your server parts gets a **unique** name. What does $ grep -Rn "loadbalancer" /etc/nginx/* give you back? It might be just a typo.
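To illustrate the split being discussed, a minimal layout might look like the following (file names and addresses are only examples, not taken from the thread): the upstream block is defined exactly once at http level, and the included per-site files only reference it by name.

```nginx
# /etc/nginx/nginx.conf
http {
    # defined once for the whole configuration; a second definition
    # anywhere (including an included file) triggers the
    # "duplicate upstream" error
    upstream loadbalancer {
        server 192.168.0.1:8080;
        server 192.168.0.2:8080;
    }

    # included files contain only server { } blocks
    include /etc/nginx/conf.d/*.conf;
}
```

```nginx
# /etc/nginx/conf.d/loki.conf
server {
    listen 80;
    server_name loki.example.com;

    location / {
        # reference the upstream by name; no second definition here
        proxy_pass http://loadbalancer;
    }
}
```

Moving the upstream block itself into an included file also works, as long as that file is included at http level and the block appears only once.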
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238628,238655#msg-238655 From nginx-forum at nginx.us Fri Apr 26 06:15:59 2013 From: nginx-forum at nginx.us (George) Date: Fri, 26 Apr 2013 02:15:59 -0400 Subject: Announcing ngx_pagespeed beta 1.5.27.1 In-Reply-To: <5179A39C.3050602@FreeBSD.org> References: <5179A39C.3050602@FreeBSD.org> Message-ID: <41cf312404b56ca915c94870b2914a19.NginxMailingListEnglish@forum.nginx.org> Yup, http://ngxpagespeed.com/ isn't accessible at all from my end either. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238650,238656#msg-238656 From maxim at nginx.com Fri Apr 26 08:28:41 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 26 Apr 2013 12:28:41 +0400 Subject: recent nginx security issue announce Message-ID: <517A3AB9.70807@nginx.com> Hello, On behalf of the nginx team I want to let the community know that we are aware of the recent security announcement[*] and are working on the issue. We will share our conclusions when we get more details about its nature and impact. * http://www.securityfocus.com/archive/1/526439/30/0/threaded -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/services.html From appa at perusio.net Fri Apr 26 09:13:10 2013 From: appa at perusio.net (António P. P. Almeida) Date: Fri, 26 Apr 2013 11:13:10 +0200 Subject: recent nginx security issue announce In-Reply-To: <517A3AB9.70807@nginx.com> References: <517A3AB9.70807@nginx.com> Message-ID: It seems that they don't know the meaning of responsible disclosure. They should have given you some time before going public. Unfortunately there are plenty of drama queens in the IT security field. All responsible disclosure be gone, for I want to have the attribution: first post is all that matters. It builds cred among the "customer base".
----appa On Fri, Apr 26, 2013 at 10:28 AM, Maxim Konovalov wrote: > Hello, > > On behalf of the nginx team I want to let the community know that we > are aware of the recent security announcement[*] and are working on the > issue. We will share our conclusions when we get more details about its > nature and impact. > > * http://www.securityfocus.com/archive/1/526439/30/0/threaded > > -- > Maxim Konovalov > +7 (910) 4293178 > http://nginx.com/services.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at necoro.eu Fri Apr 26 09:50:23 2013 From: lists at necoro.eu (René Neumann) Date: Fri, 26 Apr 2013 11:50:23 +0200 Subject: Config file templating Message-ID: <517A4DDF.5060006@necoro.eu> Hi, I'm in the middle of converting from lighttpd to nginx. There I have one small config snippet for each 'subsite' (it's personal only, so lots of different unrelated stuff). Unfortunately, quite often a snippet contains both a location part that should go into the main server block and one server block. With nginx, this is not possible as-is. Example -- file "sites/foo": ----------------------8<--------------------- location /foo { # do something } server { include "listen"; server_name foo.example.com; return 301 http://example.com/foo$request_uri; } ---------------------->8--------------------- Does anyone have a working solution (probably some template approach) which would allow including the snippets in two places while ignoring 'the wrong part'? So that it might look like: nginx.conf: ----------------------8<--------------------- # ...
server { include "listen"; server_name example.com; include "snippets/*"; } include "snippets/*"; ---------------------->8--------------------- sites/foo: -----------------------8<-------------------- # if server location /foo { # do something } # endif # if main server { include "listen"; server_name foo.example.com; return 301 http://example.com/foo$request_uri; } # endif ---------------------->8--------------------- Thanks, René From davide.damico at contactlab.com Fri Apr 26 12:04:09 2013 From: davide.damico at contactlab.com (Davide D'Amico) Date: Fri, 26 Apr 2013 14:04:09 +0200 Subject: Integer overflow in ngx_http_close_connection Message-ID: <517A6D39.1000401@contactlab.com> Cfr. http://www.securityfocus.com/archive/1/526439/30/0/threaded Is the 1.4.x release affected? Thanks, d. From luky-37 at hotmail.com Fri Apr 26 12:12:41 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 26 Apr 2013 14:12:41 +0200 Subject: Integer overflow in ngx_http_close_connection In-Reply-To: <517A6D39.1000401@contactlab.com> References: <517A6D39.1000401@contactlab.com> Message-ID: Hi! > Cfr. http://www.securityfocus.com/archive/1/526439/30/0/threaded > > Is the 1.4.x release affected? I guess. Please see the "recent nginx security issue announce" thread. Cheers, Lukas From andrew at nginx.com Fri Apr 26 12:15:04 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Fri, 26 Apr 2013 16:15:04 +0400 Subject: Integer overflow in ngx_http_close_connection In-Reply-To: References: <517A6D39.1000401@contactlab.com> Message-ID: On Apr 26, 2013, at 4:12 PM, Lukas Tribus wrote: > Hi! > >> Cfr. http://www.securityfocus.com/archive/1/526439/30/0/threaded >> >> Is the 1.4.x release affected? > > I guess. Please see the "recent nginx security issue announce" thread. We are still investigating. So far we can't confirm it's a full disclosure.
> Cheers, > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From alvaro at eragoo.com Fri Apr 26 18:40:06 2013 From: alvaro at eragoo.com (Alvaro Mantilla Gimenez) Date: Fri, 26 Apr 2013 12:40:06 -0600 Subject: Nginx as Reverse Proxy Cache of fcgi django app in separate server Message-ID: Hi, I've been looking on Internet about this but seems all the examples available are for a proxy conf or fcgi conf. Not both. This is my scenario: I have three servers. The first one run only nginx (and it should be the entry point for my websites) and the other two servers run django apps. Those django apps have been launched as fastcgi applications and listen on some ports. For example: /usr/bin/python /var/www/app/manage.py runfcgi method=threaded host=server_ip port=1111 My intention is to run nginx as a reverse proxy for caching some pages (created through django templates). I set the nginx configuration on this way: upstream app { ip_hash; server server_ip1:1111; server server_ip2:1111; } location / { include fastcgi_params; fastcgi_pass app; fastcgi_split_path_info ^()(.*)$; } This way the application works (however I am not sure if I am reaching both servers). But, If I change the configuration to this: location / { proxy_pass http://app; } Then nginx shows an error and I can't see the django app. Any idea? Thanks in advance!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 26 21:10:03 2013 From: nginx-forum at nginx.us (Fleshgrinder) Date: Fri, 26 Apr 2013 17:10:03 -0400 Subject: nginx openssl compilation problem Message-ID: <8bc0f86aeffb9db6aa10a4f9ca8bd524.NginxMailingListEnglish@forum.nginx.org> Hello, I'm desparately trying to compile the latest nginx with the latest OpenSSL. 
In short I'm grabbing the latest nginx tar.gz (1.4.0 but had the same problem with 1.3.16) and the latest OpenSSL tar.gz (1.0.1e but have the same problem with 1.0.1d) extract them and want to compile them. Everything wents smooth until the following point: /usr/lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crt1.o: In function `_start': (.text+0x20): undefined reference to `main' collect2: error: ld returned 1 exit status make[4]: *** [link_app.] Error 1 make[4]: Leaving directory `/tmp/openssl-1.0.1e/test' make[3]: *** [md2test] Error 2 make[3]: Leaving directory `/tmp/openssl-1.0.1e/test' make[2]: *** [build_tests] Error 1 make[2]: Leaving directory `/tmp/openssl-1.0.1e' make[1]: *** [/tmp/openssl-1.0.1e/.openssl/include/openssl/ssl.h] Error 2 make[1]: Leaving directory `/tmp/nginx-1.4.0' make: *** [build] Error 2 My ./configure line looks like the following: ./configure --prefix=/usr/local --sbin-path=/usr/local/sbin --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/dev/shm/nginx/client-body/tmp --http-fastcgi-temp-path=/dev/shm/nginx/fastcgi/tmp --with-pcre=/tmp/pcre-8.32 --with-openssl=/tmp/openssl-1.0.1e --with-zlib=/tmp/zlib --with-cc-opt='-O3 -m64' --with-ld-opt='-m64' --with-ipv6 --with-http_gzip_static_module --with-http_ssl_module --with-http_spdy_module --with-md5=/tmp/openssl-1.0.1e --with-md5-asm --with-sha1=/tmp/openssl-1.0.1e --with-sha1-asm --with-pcre-jit --without-http_autoindex_module --without-http_auth_basic_module --without-http_browser_module --without-http_geo_module --without-http_limit_conn_module --without-http_limit_req_module --without-http_map_module --without-http_memcached_module --without-http_proxy_module --without-http_referer_module --without-http_scgi_module --without-http_split_clients_module --without-http_ssi_module --without-http_upstream_ip_hash_module 
--without-http_userid_module --without-http_uwsgi_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --add-module=/tmp/nginx-upload-progress-module --add-module=/tmp/nginx-upstream-fair Of course the PCRE, Zlib, nginx-upload-progress-module and nginx-upstream-fair sources are in place and working just fine. Some more info on the environment: gcc (Debian 4.7.2-5) 4.7.2 cpp (Debian 4.7.2-5) 4.7.2 pcre 8.32 zlib and additional modules from github master I know that this is some gcc linker problem and I tryed several -lxxx options or leaving the -O3 / -m64 options, nothing seems to help and I hope somebody can point me in the right direction. Many thanks in advance! Richard Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238671,238671#msg-238671 From agentzh at gmail.com Sat Apr 27 01:46:40 2013 From: agentzh at gmail.com (agentzh) Date: Fri, 26 Apr 2013 18:46:40 -0700 Subject: [ANN] ngx_openresty devel version 1.2.8.1 released Message-ID: Hello guys! I am excited to announce that the new development version of ngx_openresty, 1.2.8.1, is now released: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this release happen! Below is the complete change log for this release, as compared to the last (stable) release, 1.2.7.6: * upgraded the Nginx core to 1.2.8. * see for changes. * upgraded LuaNginxModule to 0.8.1. * feature: implemented the new timer API: the ngx.timer.at Lua function and two configure directives lua_max_pending_timers and lua_max_running_timers. thanks Matthieu Tourne for requesting this feature. * feature: added the "U" regex option to the ngx.re API to mean enabling the UTF-8 matching mode but disabling UTF-8 validity check on the subject strings. thanks Lance Li for the patch. * bugfix: setting ngx.header.etag could not affect other things reading the "ETag" response header (like the etag directive introduced in Nginx 1.3.3+). 
thanks Brian Akins for the patch. * bugfix: when lua_http10_buffering is on, for HTTP 1.0 requests, ngx.exit(N) would always trigger the Nginx's own error pages when N >= 300. thanks Matthieu Tourne for reporting this issue. * bugfix: modifying the "Cookie" request headers via ngx.req.set_header or ngx.req.clear_header did not update the Nginx internal data structure, "r->headers_in.cookies", at the same time, which might cause issues when reading variables $cookie_COOKIE, for example. thanks Matthieu Tourne for the patch. * bugfix: modifying the "Via" request header with ngx.req.set_header or ngx.req.clear_header did not update the special field "r->headers_in.via" when the ngx_gzip module was enabled. * bugfix: modifying the "X-Real-IP" request header with ngx.req.set_header or ngx.req.clear_header did not update the special field "r->headers_in.x_real_ip" when the ngx_realip module was enabled. thanks Matthieu Tourne for the patch. * bugfix: modifying the "Connection" request header via ngx.req.set_header or ngx.req.clear_header did not update the special internal field in the Nginx core, "r->headers_in.connection_type". Thanks Matthieu Tourne for the patch. * bugfix: modifying the "User-Agent" request header via ngx.req.set_header or ngx.req.clear_header did not update those special internal flags in the Nginx core, like "r->headers_in.msie6" and "r->headers_in.opera". Thanks Matthieu Tourne for the patch. * bugfix: fixed several places in the header API where we should return "NGX_ERROR" instead of "NGX_HTTP_INTERNAL_SERVER_ERROR". * upgraded SrcacheNginxModule to 0.20. * bugfix: use of C global variables at the configuration phase would cause troubles when "HUP" reload failed. * upgraded HeadersMoreNginxModule to 0.20. 
* bugfix: modifying the "Cookie" request headers via more_set_input_headers or more_clear_input_headers did not update the Nginx internal data structure, "r->headers_in.cookies", at the same time, which might cause issues when reading variable $cookie_COOKIE, for example. * bugfix: modifying the "Via" request header via more_set_input_headers or more_clear_input_headers did not update the special internal field in the Nginx core, "r->headers_in.via", when the ngx_gzip module was enabled. * bugfix: modifying the "X-Real-IP" request header via more_set_input_headers or more_clear_input_headers did not update the special internal field in the Nginx core, "r->headers_in.x_real_ip", when the ngx_realip module was enabled. * bugfix: modifying the "Connection" request header via more_set_input_headers or more_clear_input_headers did not update the special internal field in the Nginx core, "r->headers_in.connection_type". * bugfix: modifying the "User-Agent" request header via more_set_input_headers or more_clear_input_headers did not update those special internal flags in the Nginx core, like "r->headers_in.msie6" and "r->headers_in.opera". * bugfix: fixed places where we should return "NGX_ERROR" instead of "NGX_HTTP_INTERNAL_SERVER_ERROR". * feature: always enable debuginfo in the bundled LuaJIT 2.0.1 build and Lua 5.1.5 build to support Nginx Systemtap Toolkit. * bugfix: no longer pass "-O0" to gcc when the "--with-debug" configure option is specified because gcc often generates bogus DWARF info when optimization is turned off. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002008 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. 
See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster to ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From nginx-forum at nginx.us Sat Apr 27 08:19:05 2013 From: nginx-forum at nginx.us (asmith) Date: Sat, 27 Apr 2013 04:19:05 -0400 Subject: Simple forward proxy Message-ID: Hello, I have nginx 1.3 on my ubuntu 10.04 and there are some websites run by it. I'd like to set up a forward proxy on a specific port so that I could use it in my browser's network options. I've done this so far: server { listen my.vps.ip.address:54321; location / { resolver 8.8.8.8; proxy_pass $scheme://$http_host$uri$is_args$args; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } It works fine for http websites, but for all https websites I get "connection was reset". Any tweak for this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238675,238675#msg-238675 From francis at daoine.org Sat Apr 27 08:58:31 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 27 Apr 2013 09:58:31 +0100 Subject: Nginx as Reverse Proxy Cache of fcgi django app in separate server In-Reply-To: References: Message-ID: <20130427085831.GC27406@craic.sysops.org> On Fri, Apr 26, 2013 at 12:40:06PM -0600, Alvaro Mantilla Gimenez wrote: Hi there, > I've been looking on Internet about this but seems all the examples > available are for a proxy conf or fcgi conf. Not both. In nginx, each request is handled in one location block. Only the configuration appropriate to that location block is used. You can use proxy_pass in one location block, and fastcgi_pass in another. Perhaps the examples you found didn't want to overcomplicate things? > This is my scenario: I have three servers.
The first one run only nginx > (and it should be the entry point for my websites) and the other two > servers run django apps. > > Those django apps have been launched as fastcgi applications and listen > on some ports. For example: So: your upstream/backend servers speak fastcgi, and do not speak http or https. In nginx you will want the fastcgi-related directives. > My intention is to run nginx as a reverse proxy for caching some pages http://nginx.org/r/fastcgi_cache and things nearby. > upstream app { > ip_hash; > server server_ip1:1111; > server server_ip2:1111; > } > > location / { > include fastcgi_params; > fastcgi_pass app; > fastcgi_split_path_info ^()(.*)$; > } > > This way the application works nginx speaks the fastcgi protocol to the upstream servers, and they respond. I'm not quite sure what the fastcgi_split_path_info line is doing, but you say the config works, so that's good. You have no mention of caching here, which your question said you wanted. But you can add it, per the documentation. > (however I am not sure if I am reaching both servers). What do the logs say? You can enhance nginx logging; or you can look at the logs for each upstream server; or you can look at network traffic involving each upstream server. Or you can add a different response from each server, so that you can tell directly which was used. > But, If I change the configuration to this: > > location / { > proxy_pass http://app; > } > > Then nginx shows an error and I can't see the django app. If your django app isn't responding to http queries, then asking nginx to speak http to it isn't going to work. Possibly the error (either shown, or in the logs) explains what is failing. > Any idea? You have a working configuration. You change it, and have a non-working configuration. Change it back, and it should work again. 
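As a rough sketch of the caching pointed to above (the zone name, cache path, key, and validity times here are invented for illustration, not taken from the thread), fastcgi_cache can be layered onto the working fastcgi_pass setup along these lines:

```nginx
# http context: declare a cache zone (path and sizing are illustrative)
fastcgi_cache_path /var/cache/nginx/django levels=1:2
                   keys_zone=django_cache:10m max_size=1g inactive=60m;

upstream app {
    ip_hash;
    server server_ip1:1111;
    server server_ip2:1111;
}

server {
    location / {
        include fastcgi_params;
        fastcgi_pass app;

        # cache successful responses briefly; tune to the application
        fastcgi_cache django_cache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 10m;
    }
}
```

Note that caching every page of a Django site is usually wrong for logged-in users; restricting caching to selected locations, or bypassing it when a session cookie is present, is the more common pattern.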
f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Apr 27 09:19:04 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 27 Apr 2013 10:19:04 +0100 Subject: Simple forward proxy In-Reply-To: References: Message-ID: <20130427091904.GD27406@craic.sysops.org> On Sat, Apr 27, 2013 at 04:19:05AM -0400, asmith wrote: Hi there, > I have nginx 1.3 on my ubuntu 10.04 and there are some websites run by > it. I'd like to set up a forward proxy on a specific port so that I could use > it in my browser's network options. nginx is not a forward proxy. It does do some of the same things that a forward http proxy does, so if you have a controlled environment and are interested in experimenting to see which parts of "being an http proxy" the current nginx code does reliably, then your design choices and patches to make nginx become an http proxy might be interesting, provided they don't affect its primary purpose. But nginx does not handle the http CONNECT method, which is usually the way clients expect proxying of https to be done. > I've done this so far: > It works fine for http websites, but for all https websites I get > "connection was reset". Logs should show you more details, if you care. But unless you're planning on writing code to use yourself, you don't need to care. > Any tweak for this? Use a proxy server, not nginx. f -- Francis Daly francis at daoine.org From cnst++ at FreeBSD.org Sat Apr 27 17:41:06 2013 From: cnst++ at FreeBSD.org (Constantine A. Murenin) Date: Sat, 27 Apr 2013 10:41:06 -0700 Subject: adding header/footer to gzip'ed html files Message-ID: <517C0DB2.2060907@FreeBSD.org> Hello, I'm trying to see ways in which OpenGrok could be optimised with nginx. One of the ideas I have is using nginx to serve the /xref/ pages, instead of them going through OpenGrok each time.
OpenGrok (the indexer) pre-generates the body of the /xref/ pages, and stores the resulting html as .gz files, but those files don't have any header/footer, and require to be presented within "
" and "
", which OpenGrok (the webapp) then adds on the fly. Would it be possible to use `add_before_body` and `add_after_body` (http://nginx.org/docs/http/ngx_http_addition_module.html), together with `gzip_static always` (http://nginx.org/docs/http/ngx_http_gzip_static_module.html), together with `gunzip on` (http://nginx.org/docs/http/ngx_http_gunzip_module.html), to replace passing /xref/ to OpenGrok (the webapp)? Technically, gzip / deflate is a stream encoding, so, supposedly, there'd be no need to decode and re-encode the .gz files, but some special handling will probably still have to be performed nonetheless. I presume a scenario as above would not currently work (but I might as well be wrong); however, does this sound like something that's potentially interesting, and not overly difficult and complicated to fix up? Or would it be simpler to amend all the /xref/ pages for all of them to redundantly include the needed header and footer? Cheers, Constantine. From nginx-forum at nginx.us Sat Apr 27 19:59:59 2013 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 27 Apr 2013 15:59:59 -0400 Subject: nginx-1.4.0 In-Reply-To: <20130424141939.GJ10443@mdounin.ru> References: <20130424141939.GJ10443@mdounin.ru> Message-ID: <44c2370a8e0951e19601b15db621c57d.NginxMailingListEnglish@forum.nginx.org> Hello Maxim, Can you tell us the status with the branches ? Is 1.3 now the new stable ? (what is then the status of 1.2 ?) Is 1.4 development ? Should all 1.2 users upgrade to 1.3 ? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238606,238681#msg-238681 From jim at ohlste.in Sat Apr 27 20:03:51 2013 From: jim at ohlste.in (Jim Ohlstein) Date: Sat, 27 Apr 2013 16:03:51 -0400 Subject: nginx-1.4.0 In-Reply-To: <44c2370a8e0951e19601b15db621c57d.NginxMailingListEnglish@forum.nginx.org> References: <20130424141939.GJ10443@mdounin.ru> <44c2370a8e0951e19601b15db621c57d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <37689803-8C4D-4555-99A7-5906332FAF50@ohlste.in> On Apr 27, 2013, at 3:59 PM, "itpp2012" wrote: > Hello Maxim, > > Can you tell us the status with the branches ? > Is 1.3 now the new stable ? (what is then the status of 1.2 ?) > Is 1.4 development ? > > Should all 1.2 users upgrade to 1.3 ? > > http://nginx.org/ Jim Ohlstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From christoph at christoph-egger.org Sun Apr 28 11:05:37 2013 From: christoph at christoph-egger.org (Christoph Egger) Date: Sun, 28 Apr 2013 13:05:37 +0200 Subject: basic_auth for parts of uwsgi Message-ID: <87a9oi9apq.fsf@hepworth.siccegge.de> Hi! I have the following problem: I'm running a uwsgi application using nginx on /. I would like to add authentication for /foo/ and /bar/. However neither > location / { > include uwsgi_params; > uwsgi_pass unix:/run/uwsgi/app/something/socket; > } > > location /foo/ { > auth_basic "LOGIN"; > auth_basic_user_file "/tmp/test/"; > } > > location /bar/ { > auth_basic "LOGIN"; > auth_basic_user_file "/tmp/test/"; > } nor > location / { > include uwsgi_params; > uwsgi_pass unix:/run/uwsgi/app/something/socket;] > > location /foo/ { > auth_basic "LOGIN"; > auth_basic_user_file "/tmp/test/foo"; > } > > location /bar/ { > auth_basic "LOGIN"; > auth_basic_user_file "/tmp/test/bar"; > } > } Seem to pass /foo/ and /bar/ to the wsgi socket and I can't find a solution on the interwebz. 
Christoph -- 9FED 5C6C E206 B70A 5857 70CA 9655 22B9 D49A E731 Debian Developer | Lisp Hacker | CaCert Assurer From francis at daoine.org Sun Apr 28 11:20:29 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 28 Apr 2013 12:20:29 +0100 Subject: basic_auth for parts of uwsgi In-Reply-To: <87a9oi9apq.fsf@hepworth.siccegge.de> References: <87a9oi9apq.fsf@hepworth.siccegge.de> Message-ID: <20130428112029.GE27406@craic.sysops.org> On Sun, Apr 28, 2013 at 01:05:37PM +0200, Christoph Egger wrote: Hi there, > I'm running a uwsgi application using nginx on /. I would like to add > authentication for /foo/ and /bar/. One request is handled in one location. In the one location that handles the request "/foo/something", you want to have both "auth_basic" and "uwsgi_pass". In the one location that handles the request "/not-foo/something", you want to have "uwsgi_pass" but not "auth_basic". f -- Francis Daly francis at daoine.org From saint42 at gmail.com Sun Apr 28 18:42:08 2013 From: saint42 at gmail.com (Simon Templar) Date: Sun, 28 Apr 2013 14:42:08 -0400 Subject: nginx Digest, Vol 42, Issue 54 In-Reply-To: References: Message-ID: <517D6D80.8060101@gmail.com> unsubscribe On 4/28/13 8:00 AM, nginx-request at nginx.org wrote: > Send nginx mailing list submissions to > nginx at nginx.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mailman.nginx.org/mailman/listinfo/nginx > or, via email, send a message with subject or body 'help' to > nginx-request at nginx.org > > You can reach the person managing the list at > nginx-owner at nginx.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of nginx digest..." > > > Today's Topics: > > 1. adding header/footer to gzip'ed html files > (Constantine A. Murenin) > 2. Re: nginx-1.4.0 (itpp2012) > 3. Re: nginx-1.4.0 (Jim Ohlstein) > 4. basic_auth for parts of uwsgi (Christoph Egger) > 5. 
Re: basic_auth for parts of uwsgi (Francis Daly)
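[Editor's note] Francis's point elsewhere in this thread — one request is handled in one location, so each protected location needs its own uwsgi_pass alongside auth_basic — might look like the sketch below. The socket path is taken from the posts above; note also that auth_basic_user_file should name a password file, not a directory like "/tmp/test/" (the path below is an assumption):

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/app/something/socket;
}

location /foo/ {
    auth_basic "LOGIN";
    auth_basic_user_file /etc/nginx/htpasswd;  # a file, not a directory
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/app/something/socket;
}

location /bar/ {
    auth_basic "LOGIN";
    auth_basic_user_file /etc/nginx/htpasswd;
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/app/something/socket;
}
```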
> > Christoph > From nginx-forum at nginx.us Sun Apr 28 19:13:07 2013 From: nginx-forum at nginx.us (maanas) Date: Sun, 28 Apr 2013 15:13:07 -0400 Subject: Get request_body in set_by_lua directive In-Reply-To: References: Message-ID: <16d0a20eb1f56554081d66c604baf5b0.NginxMailingListEnglish@forum.nginx.org> Thanks it help me as well. I was trying to read request post body in set_by_lua directive Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223876,238690#msg-238690 From agentzh at gmail.com Mon Apr 29 03:09:13 2013 From: agentzh at gmail.com (agentzh) Date: Sun, 28 Apr 2013 20:09:13 -0700 Subject: Get request_body in set_by_lua directive In-Reply-To: <16d0a20eb1f56554081d66c604baf5b0.NginxMailingListEnglish@forum.nginx.org> References: <16d0a20eb1f56554081d66c604baf5b0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Sun, Apr 28, 2013 at 12:13 PM, maanas wrote: > Thanks it help me as well. I was trying to read request post body in > set_by_lua directive > No, the request body is not read yet at the phase where set_by_lua (and those ngx_rewrite directives) runs. Try using the rewrite_by_lua directive instead where you can call ngx.req.read_body() in Lua to explicitly read the request body (otherwise the body is still not read): http://wiki.nginx.org/HttpLuaModule#ngx.req.read_body But note that by default, rewrite_by_lua always runs after those ngx_rewrite directives like "set" or "if". If you want the other way around, just turn on the rewrite_by_lua_no_postpone directive: http://wiki.nginx.org/HttpLuaModule#rewrite_by_lua_no_postpone Best regards, -agentzh From nginx-forum at nginx.us Mon Apr 29 04:42:15 2013 From: nginx-forum at nginx.us (rahmanusta) Date: Mon, 29 Apr 2013 00:42:15 -0400 Subject: Wordpress RSS pages dont work with Nginx Message-ID: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> I had a wordpress blog and was working on Apache. I migrated the blog to Nginx + php-fpm. 
But i have a problem with this. My blog has RSS with example.com/feed URL , and i could see the feeds with paged like this example -> http://www.kodcu.com/feed/?paged=45. But in Nginx, this paged RSS urls dont work with my config. /feed and /feed/?paged=X URLs shows top 10 content. My nginx.conf same as below. How can i handle this problem? user root root; worker_processes 2; pid /var/run/nginx.pid; events { worker_connections 1024; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 2; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; error_log /var/log/nginx/error.log; access_log off; gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## index index.php index.html index.htm; ## See here: http://wiki.nginx.org/WordPress server { server_name example.com www.example.com; root /var/www/example.com; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location / { # This is cool because no php is touched for static content try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires max; log_not_found off; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238692,238692#msg-238692 From katmai at keptprivate.com Mon Apr 29 06:15:42 2013 From: katmai at keptprivate.com (Stefanita Rares Dumitrescu) Date: Mon, 29 Apr 2013 08:15:42 +0200 Subject: Nginx security issue? 
In-Reply-To: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> References: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <517E100E.6040405@keptprivate.com> Hi guys This was posted today: http://blog.solidshellsecurity.com/2013/04/29/nginx-ngx_http_close_connection-function-integer-overflow-exploit-patch/?utm_source=feedly Has this been patched? I don't think i found anything about it in the changelogs From eswar7028 at gmail.com Mon Apr 29 06:27:55 2013 From: eswar7028 at gmail.com (ESWAR RAO) Date: Mon, 29 Apr 2013 11:57:55 +0530 Subject: load balancing according to url Message-ID: Hi All, Can anyone please help me with the below requirement. Host machine contains a plugin and it communicates with a plugin handler running on backend servers and nginx is used to load balance the requests. host machine(plugin) ===== >nginx as load balancer =====>3 backend servers which hosts plugin handler I need to load balance the requests based on customer-id field in host machine. # curl ' http://localhost:8031/test1/test2/test3/customer/123456789999999999/......./ customer-id: 123456789999999999 customer-id changes with customers. Since the requests come from same machine, I can't use ip_hash or cookie based load balancing technique. My requirement is to load balance according to customer id and if same request comes the same customer it should go to same earlier server which served the request. I am planning to extract the customer-id in nginx configuration file and add them on the fly in the config file and compare the ids using "map" directive. But unable to know which server served the requests to customer-id: map $customer_id $sticky_backend { default bad_gateway; ; ; } if ( $request_uri ~ ^/(.*)/(customer)/(.*?)/(.*)/ ) { set $customer_id $3; Thanks Eswar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Apr 29 06:57:26 2013 From: nginx-forum at nginx.us (Sylvia) Date: Mon, 29 Apr 2013 02:57:26 -0400 Subject: Wordpress RSS pages dont work with Nginx In-Reply-To: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> References: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <547e717ca6c799735a5438430e3f0d63.NginxMailingListEnglish@forum.nginx.org> Hi. Are you using Wp-Super-Cache or similar plugin? Check either - 1) disable caching for is_feed 2) Don't cache pages with GET parameters. (?x=y at the end of a url) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238692,238695#msg-238695 From nginx-forum at nginx.us Mon Apr 29 07:18:35 2013 From: nginx-forum at nginx.us (rahmanusta) Date: Mon, 29 Apr 2013 03:18:35 -0400 Subject: Wordpress RSS pages dont work with Nginx In-Reply-To: <547e717ca6c799735a5438430e3f0d63.NginxMailingListEnglish@forum.nginx.org> References: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> <547e717ca6c799735a5438430e3f0d63.NginxMailingListEnglish@forum.nginx.org> Message-ID: <124640c3c72c912bdd641d75d7a985a0.NginxMailingListEnglish@forum.nginx.org> I've removed the WP-Super cache. How can I do 2)? I don't know how to do it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238692,238698#msg-238698 From nginx-forum at nginx.us Mon Apr 29 07:26:46 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 29 Apr 2013 03:26:46 -0400 Subject: load balancing according to url In-Reply-To: References: Message-ID: <64008c92ac060bf5e47b29ac32f19f04.NginxMailingListEnglish@forum.nginx.org> there's a sticky-module (3rd-party), maybe this works out of the box for you.
https://code.google.com/p/nginx-sticky-module/wiki/Documentation Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238694,238700#msg-238700 From appa at perusio.net Mon Apr 29 09:01:34 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Mon, 29 Apr 2013 11:01:34 +0200 Subject: load balancing according to url In-Reply-To: References: Message-ID: You chain two map directives. Like this: map $uri $customer_id { ~/customer/(?[^/]+)/.*$ $cust_id; } map $customer_id $sticky_backend { default bad_gateway; ; ; } ----appa On Mon, Apr 29, 2013 at 8:27 AM, ESWAR RAO wrote: > Hi All, > > Can anyone please help me with the below requirement. > > Host machine contains a plugin and it communicates with a plugin handler > running on backend servers and nginx is used to load balance the requests. > > host machine(plugin) ===== >nginx as load balancer =====>3 backend servers > which hosts plugin handler > > I need to load balance the requests based on customer-id field in host > machine. > # curl ' > http://localhost:8031/test1/test2/test3/customer/123456789999999999/......./ > > customer-id: 123456789999999999 > customer-id changes with customers. > > Since the requests come from same machine, I can't use ip_hash or cookie > based load balancing technique. > > My requirement is to load balance according to customer id and if same > request comes the same customer it should go to same earlier server which > served the request. > > > I am planning to extract the customer-id in nginx configuration file and > add them on the fly in the config file and compare the ids using "map" > directive. 
But unable to know which server served the requests to > customer-id: > > map $customer_id $sticky_backend { > default bad_gateway; > ; > ; > } > if ( $request_uri ~ ^/(.*)/(customer)/(.*?)/(.*)/ ) { > set $customer_id $3; > > Thanks > Eswar > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Apr 29 09:37:55 2013 From: nginx-forum at nginx.us (Fleshgrinder) Date: Mon, 29 Apr 2013 05:37:55 -0400 Subject: nginx openssl compilation problem In-Reply-To: <8bc0f86aeffb9db6aa10a4f9ca8bd524.NginxMailingListEnglish@forum.nginx.org> References: <8bc0f86aeffb9db6aa10a4f9ca8bd524.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0fc3a917ec1e259a6b9d7836343fe7c8.NginxMailingListEnglish@forum.nginx.org> Finally I was able to compile nginx and of course I'd like to share this with you. Seems like the order of the configure options was the problem. I used the following configure argument order and it compiled without any problems. 
nginx version: nginx/1.4.0 built by gcc 4.7.2 (Debian 4.7.2-5) TLS SNI support enabled configure arguments: --user=www-data --group=www-data --prefix=/usr/local --sbin-path=/usr/local/sbin --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/dev/shm/nginx/client-body/tmp --http-fastcgi-temp-path=/dev/shm/nginx/fastcgi/tmp --with-ipv6 --with-http_gzip_static_module --with-http_ssl_module --with-http_spdy_module --with-openssl=/usr/local/src/nginx/openssl-1.0.1e --with-md5=/usr/local/src/nginx/openssl-1.0.1e --with-md5-asm --with-sha1=/usr/local/src/nginx/openssl-1.0.1e --with-sha1-asm --with-pcre=/usr/local/src/nginx/pcre-8.32 --with-pcre-jit --with-zlib=/usr/local/src/nginx/zlib --without-http_autoindex_module --without-http_auth_basic_module --without-http_browser_module --without-http_geo_module --without-http_limit_conn_module --without-http_limit_req_module --without-http_map_module --without-http_memcached_module --without-http_proxy_module --without-http_referer_module --without-http_scgi_module --without-http_split_clients_module --without-http_ssi_module --without-http_upstream_ip_hash_module --without-http_userid_module --without-http_uwsgi_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --add-module=/usr/local/src/nginx/nginx-upload-progress-module --add-module=/usr/local/src/nginx/nginx-upstream-fair Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238671,238705#msg-238705 From eswar7028 at gmail.com Mon Apr 29 11:03:57 2013 From: eswar7028 at gmail.com (ESWAR RAO) Date: Mon, 29 Apr 2013 16:33:57 +0530 Subject: load balancing according to url In-Reply-To: References: Message-ID: Hi Antonio, Thanks for the response. I am unable to understand your solution. As I said, I am unable to know which server served the requests to respective customer-id. 
So I cant write the below map directive. map $customer_id $sticky_backend { default bad_gateway; ; ; } Thanks Eswar On Mon, Apr 29, 2013 at 2:31 PM, Ant?nio P. P. Almeida wrote: > You chain two map directives. Like this: > > map $uri $customer_id { > ~/customer/(?[^/]+)/.*$ $cust_id; > > } > > map $customer_id $sticky_backend { > default bad_gateway; > ; > ; > } > > > ----appa > > > > On Mon, Apr 29, 2013 at 8:27 AM, ESWAR RAO wrote: > >> Hi All, >> >> Can anyone please help me with the below requirement. >> >> Host machine contains a plugin and it communicates with a plugin handler >> running on backend servers and nginx is used to load balance the requests. >> >> host machine(plugin) ===== >nginx as load balancer =====>3 backend >> servers which hosts plugin handler >> >> I need to load balance the requests based on customer-id field in host >> machine. >> # curl ' >> http://localhost:8031/test1/test2/test3/customer/123456789999999999/......./ >> >> customer-id: 123456789999999999 >> customer-id changes with customers. >> >> Since the requests come from same machine, I can't use ip_hash or cookie >> based load balancing technique. >> >> My requirement is to load balance according to customer id and if same >> request comes the same customer it should go to same earlier server which >> served the request. >> >> >> I am planning to extract the customer-id in nginx configuration file and >> add them on the fly in the config file and compare the ids using "map" >> directive. 
But unable to know which server served the requests to >> customer-id: >> >> map $customer_id $sticky_backend { >> default bad_gateway; >> ; >> ; >> } >> if ( $request_uri ~ ^/(.*)/(customer)/(.*?)/(.*)/ ) { >> set $customer_id $3; >> >> Thanks >> Eswar >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eswar7028 at gmail.com Mon Apr 29 11:08:21 2013 From: eswar7028 at gmail.com (ESWAR RAO) Date: Mon, 29 Apr 2013 16:38:21 +0530 Subject: load balancing according to url In-Reply-To: <64008c92ac060bf5e47b29ac32f19f04.NginxMailingListEnglish@forum.nginx.org> References: <64008c92ac060bf5e47b29ac32f19f04.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Mex, Thanks for the response. But I don't think the sticky-module is going to help in my case where the load balancing is to be done based on a particular filed in the URL. Thanks Eswar On Mon, Apr 29, 2013 at 12:56 PM, mex wrote: > there's a sticky-module (3rd-party), maybe this works out of the box > for you. > > https://code.google.com/p/nginx-sticky-module/wiki/Documentation > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,238694,238700#msg-238700 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pr1 at pr1.ru Mon Apr 29 11:27:51 2013 From: pr1 at pr1.ru (Andrey Feldman) Date: Mon, 29 Apr 2013 15:27:51 +0400 Subject: load balancing according to url In-Reply-To: References: Message-ID: Hi. Emm, maybe you want to make some kind of sharding by the value of your cookie? 
upstream backend { server server1; server server2; hash $customer_id;} On Mon, Apr 29, 2013 at 3:03 PM, ESWAR RAO wrote: > Hi Antonio, > > Thanks for the response. > > I am unable to understand your solution. > > As I said, I am unable to know which server served the requests to > respective customer-id. > So I cant write the below map directive. > map $customer_id $sticky_backend { > default bad_gateway; > ; > ; > } > > Thanks > Eswar > > > On Mon, Apr 29, 2013 at 2:31 PM, Ant?nio P. P. Almeida wrote: > >> You chain two map directives. Like this: >> >> map $uri $customer_id { >> ~/customer/(?[^/]+)/.*$ $cust_id; >> >> } >> >> map $customer_id $sticky_backend { >> default bad_gateway; >> ; >> ; >> } >> >> >> ----appa >> >> >> >> On Mon, Apr 29, 2013 at 8:27 AM, ESWAR RAO wrote: >> >>> Hi All, >>> >>> Can anyone please help me with the below requirement. >>> >>> Host machine contains a plugin and it communicates with a plugin handler >>> running on backend servers and nginx is used to load balance the requests. >>> >>> host machine(plugin) ===== >nginx as load balancer =====>3 backend >>> servers which hosts plugin handler >>> >>> I need to load balance the requests based on customer-id field in host >>> machine. >>> # curl ' >>> http://localhost:8031/test1/test2/test3/customer/123456789999999999/......./ >>> >>> customer-id: 123456789999999999 >>> customer-id changes with customers. >>> >>> Since the requests come from same machine, I can't use ip_hash or cookie >>> based load balancing technique. >>> >>> My requirement is to load balance according to customer id and if same >>> request comes the same customer it should go to same earlier server which >>> served the request. >>> >>> >>> I am planning to extract the customer-id in nginx configuration file and >>> add them on the fly in the config file and compare the ids using "map" >>> directive. 
But unable to know which server served the requests to >>> customer-id: >>> >>> map $customer_id $sticky_backend { >>> default bad_gateway; >>> ; >>> ; >>> } >>> if ( $request_uri ~ ^/(.*)/(customer)/(.*?)/(.*)/ ) { >>> set $customer_id $3; >>> >>> Thanks >>> Eswar >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- -- Andrey Feldman -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Apr 29 12:31:55 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Apr 2013 16:31:55 +0400 Subject: about alleged security issue Message-ID: <20130429123154.GV10443@mdounin.ru> Hello! Recently a report appeared alleging an integer overflow vulnerability in nginx, claiming remote code execution impact. We've carefully investigated the issue, and cannot confirm the alleged vulnerability exists. Taking this opportunity to remind: if you think you've found a security issue in nginx - it's a good idea to report it to security-alert at nginx.org, as listed at the nginx security advisories page here: http://nginx.org/en/security_advisories.html -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Apr 29 12:35:49 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Apr 2013 16:35:49 +0400 Subject: Nginx security issue? In-Reply-To: <517E100E.6040405@keptprivate.com> References: <0cdee76385203afab5b8af53d54424c1.NginxMailingListEnglish@forum.nginx.org> <517E100E.6040405@keptprivate.com> Message-ID: <20130429123549.GX10443@mdounin.ru> Hello! 
On Mon, Apr 29, 2013 at 08:15:42AM +0200, Stefanita Rares Dumitrescu wrote: > Hi guys > > This was posted today: > > http://blog.solidshellsecurity.com/2013/04/29/nginx-ngx_http_close_connection-function-integer-overflow-exploit-patch/?utm_source=feedly > > Has this been patched? I don't think i found anything about it in > the changelogs http://mailman.nginx.org/pipermail/nginx/2013-April/038701.html -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Apr 29 16:44:33 2013 From: nginx-forum at nginx.us (Rickey) Date: Mon, 29 Apr 2013 12:44:33 -0400 Subject: Online Jobs and Entertainment Message-ID: http://kholyar.blogspot.com/ Open It And Enjoy :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238720,238720#msg-238720 From paulnpace at gmail.com Mon Apr 29 17:26:58 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Mon, 29 Apr 2013 17:26:58 +0000 Subject: Online Jobs and Entertainment Message-ID: <1824837040-1367256419-cardhu_decombobulator_blackberry.rim.net-700656164-@b26.c8.bise6.blackberry> Speaking of which, what do you guys use for a spam filter? I've been thinking about setting up mailman. I'm surprised at how little spam I've seen here given how popular nginx is. (I realize this gem came from a forum post). ------Original Message------ From: Rickey Sender: nginx-bounces at nginx.org To: nginx at nginx.org ReplyTo: nginx at nginx.org Subject: Online Jobs and Entertainment Sent: Apr 29, 2013 9:44 AM http://kholyar.blogspot.com/ Open It And Enjoy :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238720,238720#msg-238720 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From hems.inlet at gmail.com Mon Apr 29 21:02:35 2013 From: hems.inlet at gmail.com (henrique matias) Date: Mon, 29 Apr 2013 22:02:35 +0100 Subject: Converting subdomain to path component without redirect ? 
Message-ID: Hello guys, Am having trouble setting up my nginx.config to transparently proxy the subdomains and domains to the same app, but with different "path components" appended to the $uri example: mydomain.it/PATH should return ~> mydomain.com/it/PATH using regexp: (www\.)?mydomain\.(it|jp|es|de) to return my http://app_server/$2/$request_uri My idea is to proxy the "localised server_name" to the "default server_name" without letting the user know ( no browser redirect ). This is my "working" nginx.config ( without the rukes ): http://pastebin.com/v9pcVR4e This is my last unsuccessful attempt: http://pastebin.com/bZZA30zC any input is highly appreciated thanks a lot, peace -------------- next part -------------- An HTML attachment was scrubbed... URL: From kingsley at internode.com.au Tue Apr 30 00:04:44 2013 From: kingsley at internode.com.au (Kingsley Foreman) Date: Tue, 30 Apr 2013 00:04:44 +0000 Subject: Online Jobs and Entertainment In-Reply-To: <1824837040-1367256419-cardhu_decombobulator_blackberry.rim.net-700656164-@b26.c8.bise6.blackberry> References: <1824837040-1367256419-cardhu_decombobulator_blackberry.rim.net-700656164-@b26.c8.bise6.blackberry> Message-ID: <6FA1B8C1117F444AB55F3E36E9314EC862CC4CAE@EXCHMBX-ADL2-01.staff.internode.com.au> Ironports :) Kingsley -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Paul N. Pace Sent: Tuesday, 30 April 2013 2:57 AM To: nginx at nginx.org Subject: Re: Online Jobs and Entertainment Speaking of which, what do you guys use for a spam filter? I've been thinking about setting up mailman. I'm surprised at how little spam I've seen here given how popular nginx is. (I realize this gem came from a forum post). 
------Original Message------
From: Rickey
Sender: nginx-bounces at nginx.org
To: nginx at nginx.org
ReplyTo: nginx at nginx.org
Subject: Online Jobs and Entertainment
Sent: Apr 29, 2013 9:44 AM

http://kholyar.blogspot.com/ Open It And Enjoy :)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238720,238720#msg-238720
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From tseveendorj at gmail.com  Tue Apr 30 02:36:54 2013
From: tseveendorj at gmail.com (tseveendorj)
Date: Tue, 30 Apr 2013 10:36:54 +0800
Subject: rewrite difficulty
Message-ID: <517F2E46.3040000@gmail.com>

Hello,

I have difficulty converting an Apache-like rewrite to nginx. This is my config file of a virtualhost on nginx: http://pastebin.com/HTtKXnFy

My installed php script should have the following rewrite: http://pastebin.com/M2h3uAt3

Currently any requested php file displays its source in the browser. How could I migrate?

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us  Tue Apr 30 10:14:54 2013
From: nginx-forum at nginx.us (Rancor)
Date: Tue, 30 Apr 2013 06:14:54 -0400
Subject: Howto set geoip_country for IPv4 and IPv6 databases?
In-Reply-To: <50F5824A.300@puzzled.xs4all.nl>
References: <50F5824A.300@puzzled.xs4all.nl>
Message-ID: <6250e28d45f771da50af2c818b307695.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm trying to get this to work with the current nginx package 1.4.0 from dotdeb.org (using --with-ipv6) on a Debian squeeze system.
When downloading the GeoIP IPv6 binary from:

http://dev.maxmind.com/geoip/geolite

and changing:

geoip_country /etc/nginx/GeoIP.dat;

to:

geoip_country /etc/nginx/GeoIPv6.dat;

in my nginx.conf, I'm getting this message after reload:

nginx: [emerg] invalid GeoIP database "/etc/nginx/GeoIPv6.dat" type:12 in /etc/nginx/nginx.conf:47

Any hints what's wrong here? Thanks in advance for a reply.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235108,238735#msg-238735

From hems.inlet at gmail.com  Tue Apr 30 11:08:09 2013
From: hems.inlet at gmail.com (henrique matias)
Date: Tue, 30 Apr 2013 12:08:09 +0100
Subject: Converting subdomain to path component without redirect ?
In-Reply-To:
References:
Message-ID:

*i meant: This is my "working" nginx.config ( without the rewrite rules ):

On 29 April 2013 22:02, henrique matias wrote:
> Hello guys,
>
> Am having trouble setting up my nginx.config to transparently proxy the
> subdomains and domains to the same app, but with different "path
> components" appended to the $uri
>
> example:
> mydomain.it/PATH should return ~> mydomain.com/it/PATH
>
> using regexp:
> (www\.)?mydomain\.(it|jp|es|de) to return my
> http://app_server/$2/$request_uri
>
> My idea is to proxy the "localised server_name" to the "default
> server_name" without letting the user know ( no browser redirect ).
>
> This is my "working" nginx.config ( without the rukes ):
> http://pastebin.com/v9pcVR4e
>
> This is my last unsuccessful attempt: http://pastebin.com/bZZA30zC
>
> any input is highly appreciated
>
> thanks a lot,
> peace
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us  Tue Apr 30 12:03:34 2013
From: nginx-forum at nginx.us (lpr)
Date: Tue, 30 Apr 2013 08:03:34 -0400
Subject: Emulate SSI 'exec cmd' with nginx
In-Reply-To: <20130423142314.GC95730@mdounin.ru>
References: <20130423142314.GC95730@mdounin.ru>
Message-ID: <68afd4e5250f5d1fe50fac3a5b1a1e15.NginxMailingListEnglish@forum.nginx.org>

Thank you! This solved my problem.

Best regards
Lukas

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238561,238739#msg-238739

From ru at nginx.com  Tue Apr 30 12:34:08 2013
From: ru at nginx.com (Ruslan Ermilov)
Date: Tue, 30 Apr 2013 16:34:08 +0400
Subject: Howto set geoip_country for IPv4 and IPv6 databases?
In-Reply-To: <6250e28d45f771da50af2c818b307695.NginxMailingListEnglish@forum.nginx.org>
References: <50F5824A.300@puzzled.xs4all.nl> <6250e28d45f771da50af2c818b307695.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130430123408.GH19561@lo0.su>

On Tue, Apr 30, 2013 at 06:14:54AM -0400, Rancor wrote:
> Hi,
>
> i'm trying to get this to work with the current NginX package 1.4.0 from
> dotdeb.org (using --with-ipv6) on a debian squeeze system. When downloading
> the GeoIP ipv6 binary from:
>
> http://dev.maxmind.com/geoip/geolite
>
> and changing:
>
> geoip_country /etc/nginx/GeoIP.dat;
>
> to:
>
> geoip_country /etc/nginx/GeoIPv6.dat;
>
> in my nginx.conf i'm getting this message after reload:
>
> nginx: [emerg] invalid GeoIP database "/etc/nginx/GeoIPv6.dat" type:12 in
> /etc/nginx/nginx.conf:47
>
> Any hints what's wrong here? Thanks in advance for a reply.

This happens if nginx is built without IPv6 support:

$ nginx -p . -c x.conf -t
nginx: [emerg] invalid GeoIP database "GeoIPv6.dat" type:12 in ./x.conf:9
nginx: configuration file ./x.conf test failed
$ sed -ne9p x.conf
geoip_country GeoIPv6.dat;

If OTOH nginx is built with proper IPv6 support:

$ nginx -p . -c x.conf -t
nginx: the configuration file ./x.conf syntax is ok
nginx: configuration file ./x.conf test is successful

Make sure your nginx is built with IPv6 support.

From nginx-forum at nginx.us  Tue Apr 30 13:03:20 2013
From: nginx-forum at nginx.us (Rancor)
Date: Tue, 30 Apr 2013 09:03:20 -0400
Subject: Howto set geoip_country for IPv4 and IPv6 databases?
In-Reply-To: <20130430123408.GH19561@lo0.su>
References: <20130430123408.GH19561@lo0.su>
Message-ID:

Hey,

thanks for your reply. The packages from dotdeb.org are built with IPv6 support. When I run:

nginx -V

the output contains:

--with-ipv6

Additionally, netstat -npl shows this output:

tcp6 0 0 :::80  :::* LISTEN 4942/nginx
tcp6 0 0 :::443 :::* LISTEN 4942/nginx

so this nginx should run with IPv6?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235108,238743#msg-238743

From contact at jpluscplusm.com  Tue Apr 30 14:01:14 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Tue, 30 Apr 2013 15:01:14 +0100
Subject: rewrite difficulty
In-Reply-To: <517F2E46.3040000@gmail.com>
References: <517F2E46.3040000@gmail.com>
Message-ID:

On 30 April 2013 03:36, tseveendorj wrote:
> Hello,
>
> I have difficulty to convert apache like rewrite to nginx. This is my config
> file of virtualhost on nginx. http://pastebin.com/HTtKXnFy

OMFG. You win today's prize for "Nginx config I am least likely even to /try/ and change". Congrats! ;-)

> My installed php script should have following rewrite
> http://pastebin.com/M2h3uAt3
>
> Currently any requested php code displayed it's source on browser. How could
> I migrate ?

You need to start small. Learn how Nginx does its thing in one small area and, when you've understood that, move on to the next. At the moment you have literally picked up your apache config and dumped it into Nginx's config syntax. You are unlikely to succeed if you don't learn how to work *with* Nginx, instead of trying just to make it behave like Apache.
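As a concrete illustration of "starting small": the symptom in the question (PHP files served as plain text) usually means requests never reach a FastCGI backend at all. A minimal, idiomatic nginx server block for a PHP application might look like the sketch below. This is not taken from the poster's config; the server name, document root, and php-fpm socket path are placeholders.

```nginx
server {
    listen 80;
    server_name example.com;              # placeholder
    root /var/www/example;                # placeholder document root
    index index.php index.html;

    location / {
        # Try the file, then a directory, then hand off to the front controller.
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        # Without a fastcgi_pass, nginx would serve .php files as static text.
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;   # placeholder socket
    }
}
```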
This may not be the "here's your config; I fixed it" reply you were looking for, but it's the best I can give you. Your Nginx config is /horrible/, and I'm not going to spend my time deciphering it! :-)

Have a *really* good read of http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite and http://wiki.nginx.org/HttpRewriteModule. They'd be good places to start ...

J
--
Jonathan Matthews // Oxford, London, UK
http://www.jpluscplusm.com/contact.html

From nginx-forum at nginx.us  Tue Apr 30 14:03:43 2013
From: nginx-forum at nginx.us (mamoos1)
Date: Tue, 30 Apr 2013 10:03:43 -0400
Subject: Dynamic upstream configuration
In-Reply-To:
References:
Message-ID: <04e2b7c12c33edea64a2b2b33de5621b.NginxMailingListEnglish@forum.nginx.org>

A shame that no one has a solution for this... It's a really big performance hit whenever backend servers are https and nginx simply renegotiates SSL for each request.

Is there any plan to support this? Some sort of backend connection pooling with SSL?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238424,238746#msg-238746

From nginx-forum at nginx.us  Tue Apr 30 14:25:34 2013
From: nginx-forum at nginx.us (mrtn)
Date: Tue, 30 Apr 2013 10:25:34 -0400
Subject: proxy_pass only if a file exists
Message-ID: <8f2c959228348af3f80d4af8db3599a4.NginxMailingListEnglish@forum.nginx.org>

I need to make sure a file actually exists before proxy_pass-ing the request to an upstream server. I don't serve existing files directly using Nginx because there is some application-specific logic I need to perform on the application server for such requests.

I've looked at try_files, but it seems like it will serve the file straightaway once it is found, which is not what I want here. Another way is to use if (!-f $request_filename), but as mentioned here: http://wiki.nginx.org/Pitfalls#Check_IF_File_Exists, it's not a terrible way to check the existence of a file.

Is there a feasible yet efficient way?
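One way the "proxy only when the file exists" pattern above is sometimes expressed: guard a proxy_pass with the rewrite module's -f test and fall through to a 404. This is only a sketch (the upstream name and document root are hypothetical), and the Pitfalls page's general warnings about "if" still apply; a bare file test guarding a URI-less proxy_pass is, however, one of the combinations that behaves predictably.

```nginx
upstream app_backend {                # hypothetical upstream
    server 127.0.0.1:8080;
}

server {
    root /var/www/files;              # hypothetical document root

    location / {
        # Proxy to the application only when the file exists on disk;
        # the application then applies its own logic before serving it.
        if (-f $request_filename) {
            proxy_pass http://app_backend;
        }
        return 404;
    }
}
```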
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238747,238747#msg-238747

From appa at perusio.net  Tue Apr 30 15:18:54 2013
From: appa at perusio.net (António P. P. Almeida)
Date: Tue, 30 Apr 2013 17:18:54 +0200
Subject: Dynamic upstream configuration
In-Reply-To: <04e2b7c12c33edea64a2b2b33de5621b.NginxMailingListEnglish@forum.nginx.org>
References: <04e2b7c12c33edea64a2b2b33de5621b.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

No, it does not, unless you haven't configured a shared SSL session cache:

http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache

I'm assuming that the upstream servers are Nginx also. If not, adapt to the appropriate server's session cache setup.

----appa

On Tue, Apr 30, 2013 at 4:03 PM, mamoos1 wrote:
> A shame that no one has a solution for this...
> It's a really big performance hit whenever backend servers are https and
> nginx simply renegotiates SSL for each request.
>
> Is there any plan to support this? Some sort of backend connection pooling
> with SSL?
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,238424,238746#msg-238746
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us  Tue Apr 30 16:10:37 2013
From: nginx-forum at nginx.us (hoffmabc)
Date: Tue, 30 Apr 2013 12:10:37 -0400
Subject: nginx 1.2.1 won't start with large CRL
In-Reply-To: <20120621141801.GL31671@mdounin.ru>
References: <20120621141801.GL31671@mdounin.ru>
Message-ID: <162138540cdc078b7ed95e39fbec3fde.NginxMailingListEnglish@forum.nginx.org>

The CRL cannot get any smaller, as it's a DoD CRL. Also, adding more memory does not solve the problem. It seems to have an issue with starting the server, period, with a large CRL.
[alert] 19759#0: fork() failed while spawning "worker process" (12: Cannot allocate memory)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227763,238749#msg-238749

From nginx-forum at nginx.us  Tue Apr 30 16:44:44 2013
From: nginx-forum at nginx.us (mrtn)
Date: Tue, 30 Apr 2013 12:44:44 -0400
Subject: proxy_pass only if a file exists
In-Reply-To: <8f2c959228348af3f80d4af8db3599a4.NginxMailingListEnglish@forum.nginx.org>
References: <8f2c959228348af3f80d4af8db3599a4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <64df3c6e6147c440b12522d1f53f3c4a.NginxMailingListEnglish@forum.nginx.org>

Sorry, I meant to say that the (!-f $request_filename) check IS a terrible way to check the existence of the file, as suggested by the documentation.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238747,238750#msg-238750

From dave at daveb.net  Tue Apr 30 16:45:09 2013
From: dave at daveb.net (Dave Bailey)
Date: Tue, 30 Apr 2013 09:45:09 -0700
Subject: nginx http poller module
Message-ID:

Hi,

I've written an nginx module that allows the user to configure nginx worker processes to make detached, repeated "polling" HTTP requests. The request endpoint, method, URI, headers, and body are configurable, and it's also possible to register a set of callbacks to process the response status line, headers, body, and finalization.

The idea is that if you have a multiple-worker setup, it may sometimes be useful to have each worker polling some service with its own status, or to request dynamic configuration updates, etc. In some scenarios, this may be a desirable alternative to using shared memory to synchronize state between workers.
Module source: https://github.com/dbcode/nginx-poller-module

Example configuration (the README has more details):

http {
    poller config {
        endpoint http://config;
        method GET;
        header Host config;
        header User-Agent nginx;
        uri $my_config_uri;
        interval $my_config_interval;
    }
}

Thanks to Piotr Sikora for ngx_supervisord, which showed how to build the detached request without an incoming connection.

-dave
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ru at nginx.com  Tue Apr 30 21:17:09 2013
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 1 May 2013 01:17:09 +0400
Subject: Howto set geoip_country for IPv4 and IPv6 databases?
In-Reply-To:
References: <20130430123408.GH19561@lo0.su>
Message-ID: <20130430211709.GM19561@lo0.su>

On Tue, Apr 30, 2013 at 09:03:20AM -0400, Rancor wrote:
> Hey,
>
> thanks for your reply. The packages of dotdeb.org are build with IPv6
> support. When i'm using:
>
> nginx -V
>
> the output contains:
>
> --with-ipv6
>
> Additional netstat -npl shows this output:
>
> tcp6 0 0 :::80  :::* LISTEN 4942/nginx
> tcp6 0 0 :::443 :::* LISTEN 4942/nginx
>
> so this NginX should run with IPv6?

nginx detects the IPv6 support in libgeoip by trying to compile the following code snippet:

#include <stdio.h>
#include <GeoIP.h>

int main(void)
{
    printf("%d\n", GEOIP_CITY_EDITION_REV0_V6);
    return (0);
}

Does it compile OK on your system?

From nginx-forum at nginx.us  Tue Apr 30 21:49:44 2013
From: nginx-forum at nginx.us (nikandriko)
Date: Tue, 30 Apr 2013 17:49:44 -0400
Subject: 504 Gateway Time-out media temple
In-Reply-To: <8a47b859908153926e0b34a8aafdb160.NginxMailingListEnglish@forum.nginx.org>
References: <20121002151157.GL40452@mdounin.ru> <8a47b859908153926e0b34a8aafdb160.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <118d8ff028eb008257dd31625e98d5bf.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am also a Media Temple customer, and its knowledgebase has not been updated with this issue although it is very frequent from what I see.
Thank you very much. I changed the nginx.conf file, adding the lines you mentioned, and then restarted. It also helped me a lot to find the nginx restart command in the blog post you mentioned:

kill -HUP `ps -ef | grep nginx | grep master | awk {'print $2'}`

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231318,238756#msg-238756

From nginx-forum at nginx.us  Tue Apr 30 23:25:22 2013
From: nginx-forum at nginx.us (nauger)
Date: Tue, 30 Apr 2013 19:25:22 -0400
Subject: limit_req and IP white listing on 0.8.55
Message-ID: <2a17a81a7c814883063cc8a7aab4cdf7.NginxMailingListEnglish@forum.nginx.org>

Hello!

I've followed this reference:
http://forum.nginx.org/read.php?2,228956,228961#msg-228961

to produce the following config:

http {
    geo $public_vs_our_networks {
        default 1;
        127.0.0.1/32 0;
        ... my networks ...
    }

    map $public_vs_our_networks $limit_public {
        1 $binary_remote_addr;
        0 "";
    }

    limit_req_zone $limit_public zone=public_facing_network:10m rate=40r/m;
    ...
    server {
        ...
        location / {
            ...
            limit_req zone=public_facing_network burst=5 nodelay;
            ...
            proxy_pass http://my_upstream;
        }
    }
}

Unfortunately, my error logs quickly filled up with clients who were incorrectly rate limited. It was as if this configuration created one bucket for ALL the public-facing clients, as opposed to individually bucketing each public client by their $binary_remote_addr.

Please advise on what I might be missing. Thanks for your help!

-Nick

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238757,238757#msg-238757
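One thing worth checking in the thread above: rate=40r/m is enforced by nginx as roughly one request per 1.5 seconds, not as a lump of 40 requests per minute, so even a single well-behaved client fetching a page with several subresources can blow past burst=5 without any bucket-sharing bug. The sketch below is a simplified back-of-the-envelope model of the limit_req token bucket with "nodelay" (real nginx uses millisecond accounting in shared memory, so details may differ), illustrating how quickly simultaneous requests from one IP get rejected at these settings.

```python
def rejected(arrival_times, rate_per_min=40, burst=5):
    """Count rejections in a simplified limit_req "nodelay" model.

    Tokens drain at one request per (60 / rate_per_min) seconds; each
    arriving request adds 1 to the excess, and a request whose excess
    would exceed `burst` is rejected (stored state left unchanged).
    """
    interval = 60.0 / rate_per_min   # seconds "paid for" by each request
    excess = 0.0
    last = None
    rejects = 0
    for t in arrival_times:
        elapsed = 0.0 if last is None else t - last
        new_excess = max(excess - elapsed / interval, 0.0) + 1.0
        if new_excess > burst:
            rejects += 1             # rejected; bucket state not updated
        else:
            excess = new_excess
            last = t
    return rejects

# 10 simultaneous requests at rate=40r/m, burst=5:
print(rejected([0.0] * 10))          # → 5
# The same 10 requests spaced exactly 1.5 s apart are never limited:
print(rejected([i * 1.5 for i in range(10)]))   # → 0
```

In other words, logs "quickly filling up" at 40r/m is consistent with the configuration working as documented, just with a much lower effective rate than the name suggests.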