From pete at nextengine.com Fri Feb 1 00:21:00 2008 From: pete at nextengine.com (Pete DeLaurentis) Date: Thu, 31 Jan 2008 13:21:00 -0800 Subject: Internet Explorer Downloads Fail In-Reply-To: References: Message-ID: <2A6DA551-5B55-464F-851E-50A5C13D7337@nextengine.com> Hi guys, I'm using X-Accel-Redirect to serve files from nginx. This works for Firefox and Safari, however... When users on Internet Explorer try to download these files, it fails to ever start downloading, and Internet Explorer eventually times out. I'm guessing there is some special setting I need to put in my nginx configuration file. Can you help point me in the right direction? Thanks for your help, Pete DeLaurentis From fairwinds at eastlink.ca Fri Feb 1 01:31:48 2008 From: fairwinds at eastlink.ca (David Pratt) Date: Thu, 31 Jan 2008 18:31:48 -0400 Subject: Fair Proxy Balancer In-Reply-To: <20080131182905.GD21638@vadmin.megiteam.pl> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <47A2057C.6040107@eastlink.ca> <20080131182905.GD21638@vadmin.megiteam.pl> Message-ID: <47A24C54.7060208@eastlink.ca> Hi Grzegorz. This gives me a much better idea of what to expect. Thank you for this. I am curious whether you have done anything in the way of comparing the effectiveness of the fair proxy balancer to other balancing schemes like haproxy or lvm. Speed is a big factor for deployments, so I am hoping speed will be good with the simplicity that this option presents. Many thanks. Regards, David Grzegorz Nosek wrote: > On Thu, Jan 31, 2008 at 01:29:32PM -0400, David Pratt wrote: >> Hi Grzegorz. I appreciate your explanation. 
It would be more convenient >> to compile as an option since I am using an automated build process. If >> it is self contained, can you foresee any problems building with most >> current 0.5.x branch or is this strictly 0.6.x? Also, what is the >> request threshold that triggers the round robin issue that I >> am aware of. Many thanks. > > The module works with 0.5.x as well as 0.6.x (if it doesn't work for > you, please mail me with a bug report). > > There's no threshold per se, it's just that the original load balancer > directs requests strictly round robin, i.e. 0-1-2-3-0-1-2-3 etc. This > ensures that every backend gets the same number of requests. > > upstream_fair always starts from backend 0 and works its way up until it > finds an idle peer (more or less). If your load effectively uses a > single backend at one time, it'll always be backend 0. If it uses the > power of two backends, they'll be 0 and 1 etc. Thus the first backend > will always have the most requests served, the second one will have more > than the third etc. > > Best regards, > Grzegorz Nosek > > From john at interserver.net Fri Feb 1 17:34:54 2008 From: john at interserver.net (John Quaglieri) Date: Fri, 01 Feb 2008 09:34:54 -0500 Subject: url changed in proxy_pass Message-ID: <47A32E0E.5020204@interserver.net> I have nginx installed on port 80 and apache is installed on port 81. I proxy php to apache and have nginx handle all the static and image content. On my config I have an if statement similar to this: if (!-f $request_filename) { break; proxy_pass http://domain.com:81; } I've noticed that if this is used the url gets rewritten to http://domain.com:81/file Is there a way to have it not add :81 into the url? 
Thanks John Quaglieri From is at rambler-co.ru Fri Feb 1 17:44:49 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 1 Feb 2008 17:44:49 +0300 Subject: url changed in proxy_pass In-Reply-To: <47A32E0E.5020204@interserver.net> References: <47A32E0E.5020204@interserver.net> Message-ID: <20080201144449.GH29098@rambler-co.ru> On Fri, Feb 01, 2008 at 09:34:54AM -0500, John Quaglieri wrote: > I have nginx installed on port 80 and apache is installed on port 81. I > proxy php to apache and have nginx handle all the static and image content. > > On my config I have an if statment similar to this: > > if (!-f $request_filename) { > break; > proxy_pass http://domain.com:81; > } > > I've noticed that if this is used the url gets rewritten to > http://domain.com:81/file > > Is there a way to have it not add in :81 into the url? Where do you see these URLs: inside HTML or redirects ? -- Igor Sysoev http://sysoev.ru/en/ From john at interserver.net Fri Feb 1 18:00:15 2008 From: john at interserver.net (John Quaglieri) Date: Fri, 01 Feb 2008 10:00:15 -0500 Subject: url changed in proxy_pass In-Reply-To: <20080201144449.GH29098@rambler-co.ru> References: <47A32E0E.5020204@interserver.net> <20080201144449.GH29098@rambler-co.ru> Message-ID: <47A333FF.7000907@interserver.net> It changes in the address bar, in the HTML they remain the same. Igor Sysoev wrote: > On Fri, Feb 01, 2008 at 09:34:54AM -0500, John Quaglieri wrote: > >> I have nginx installed on port 80 and apache is installed on port 81. I >> proxy php to apache and have nginx handle all the static and image content. >> >> On my config I have an if statment similar to this: >> >> if (!-f $request_filename) { >> break; >> proxy_pass http://domain.com:81; >> } >> >> I've noticed that if this is used the url gets rewritten to >> http://domain.com:81/file >> >> Is there a way to have it not add in :81 into the url? > > Where do you see these URLs: inside HTML or redirects ? 
> > From is at rambler-co.ru Fri Feb 1 18:19:44 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 1 Feb 2008 18:19:44 +0300 Subject: url changed in proxy_pass In-Reply-To: <47A333FF.7000907@interserver.net> References: <47A32E0E.5020204@interserver.net> <20080201144449.GH29098@rambler-co.ru> <47A333FF.7000907@interserver.net> Message-ID: <20080201151944.GK29098@rambler-co.ru> On Fri, Feb 01, 2008 at 10:00:15AM -0500, John Quaglieri wrote: > It changes in the address bar, in the HTML they remain the same. It seems it's redirect. Are you sure you use domain.com:81 in proxy_pass, but not 127.0.0.1, etc ? You may try if (!-f $request_filename) { break; proxy_pass http://domain.com:81; } proxy_redirect http://domain.com:81 http://domain.com; Also the debug log will help. > Igor Sysoev wrote: > >On Fri, Feb 01, 2008 at 09:34:54AM -0500, John Quaglieri wrote: > > > >>I have nginx installed on port 80 and apache is installed on port 81. I > >>proxy php to apache and have nginx handle all the static and image > >>content. > >> > >>On my config I have an if statment similar to this: > >> > >>if (!-f $request_filename) { > >> break; > >> proxy_pass http://domain.com:81; > >> } > >> > >>I've noticed that if this is used the url gets rewritten to > >>http://domain.com:81/file > >> > >>Is there a way to have it not add in :81 into the url? > > > >Where do you see these URLs: inside HTML or redirects ? > > > > > -- Igor Sysoev http://sysoev.ru/en/ From john at interserver.net Fri Feb 1 18:42:57 2008 From: john at interserver.net (John Quaglieri) Date: Fri, 01 Feb 2008 10:42:57 -0500 Subject: url changed in proxy_pass In-Reply-To: <20080201151944.GK29098@rambler-co.ru> References: <47A32E0E.5020204@interserver.net> <20080201144449.GH29098@rambler-co.ru> <47A333FF.7000907@interserver.net> <20080201151944.GK29098@rambler-co.ru> Message-ID: <47A33E01.7080802@interserver.net> Thanks Igor. The proxy_redirect has solved this. 
Igor Sysoev wrote: > On Fri, Feb 01, 2008 at 10:00:15AM -0500, John Quaglieri wrote: > >> It changes in the address bar, in the HTML they remain the same. > > It seems it's a redirect. > Are you sure you use domain.com:81 in proxy_pass, but not 127.0.0.1, etc ? > > You may try > > if (!-f $request_filename) { > break; > proxy_pass http://domain.com:81; > } > > proxy_redirect http://domain.com:81 http://domain.com; > > Also the debug log will help. > >> Igor Sysoev wrote: >>> On Fri, Feb 01, 2008 at 09:34:54AM -0500, John Quaglieri wrote: >>> >>>> I have nginx installed on port 80 and apache is installed on port 81. I >>>> proxy php to apache and have nginx handle all the static and image >>>> content. >>>> >>>> On my config I have an if statement similar to this: >>>> >>>> if (!-f $request_filename) { >>>> break; >>>> proxy_pass http://domain.com:81; >>>> } >>>> >>>> I've noticed that if this is used the url gets rewritten to >>>> http://domain.com:81/file >>>> >>>> Is there a way to have it not add in :81 into the url? >>> Where do you see these URLs: inside HTML or redirects ? >>> >>> > From is at rambler-co.ru Fri Feb 1 19:00:13 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 1 Feb 2008 19:00:13 +0300 Subject: url changed in proxy_pass In-Reply-To: <47A33E01.7080802@interserver.net> References: <47A32E0E.5020204@interserver.net> <20080201144449.GH29098@rambler-co.ru> <47A333FF.7000907@interserver.net> <20080201151944.GK29098@rambler-co.ru> <47A33E01.7080802@interserver.net> Message-ID: <20080201160013.GN29098@rambler-co.ru> On Fri, Feb 01, 2008 at 10:42:57AM -0500, John Quaglieri wrote: > Thanks Igor. The proxy_redirect has solved this. nginx should do it by default if you set the same domain name in proxy_pass that the backend returns in its redirect. > Igor Sysoev wrote: > >On Fri, Feb 01, 2008 at 10:00:15AM -0500, John Quaglieri wrote: > > > >>It changes in the address bar, in the HTML they remain the same. > > > >It seems it's a redirect. 
> >Are you sure you use domain.com:81 in proxy_pass, but not 127.0.0.1, etc ? > > > >You may try > > > > if (!-f $request_filename) { > > break; > > proxy_pass http://domain.com:81; > > } > > > > proxy_redirect http://domain.com:81 http://domain.com; > > > >Also the debug log will help. > > > >>Igor Sysoev wrote: > >>>On Fri, Feb 01, 2008 at 09:34:54AM -0500, John Quaglieri wrote: > >>> > >>>>I have nginx installed on port 80 and apache is installed on port 81. I > >>>>proxy php to apache and have nginx handle all the static and image > >>>>content. > >>>> > >>>>On my config I have an if statement similar to this: > >>>> > >>>>if (!-f $request_filename) { > >>>> break; > >>>> proxy_pass http://domain.com:81; > >>>> } > >>>> > >>>>I've noticed that if this is used the url gets rewritten to > >>>>http://domain.com:81/file > >>>> > >>>>Is there a way to have it not add in :81 into the url? > >>>Where do you see these URLs: inside HTML or redirects ? > >>> > >>> > > > -- Igor Sysoev http://sysoev.ru/en/ From casey.rayman at d2sc.com Fri Feb 1 21:47:05 2008 From: casey.rayman at d2sc.com (Casey Rayman) Date: Fri, 1 Feb 2008 12:47:05 -0600 Subject: use proxy store with url parameters Message-ID: <25548499-D240-4C13-A160-5B4650BC7B25@d2sc.com> I have been trying to get proxy_store to work with a url of the form /resource?resourceId=XXX where XXX is a number. This url actually returns a graphic from a database which does not change often. Is it possible to have proxy_store work in this case where the XXX is the only part of the URL which is any different? My most recent attempt: location /resource { root /var/www/data/fetch; error_page 404 = /fetch$args; } location ^~ /fetch/ { internal; proxy_pass http://10.0.3.197:8246; proxy_store /var/www/data/fetch$args; proxy_store_access user:rw group:rw all:r; } Regards, Casey -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at ruby-forum.com Fri Feb 1 23:36:10 2008 From: lists at ruby-forum.com (Alexander Daniel) Date: Fri, 1 Feb 2008 21:36:10 +0100 Subject: (99: Cannot assign requested address) In-Reply-To: <56CC3204-02E9-49B3-9E88-50FCC6E2F02D@lovelysystems.com> References: <56CC3204-02E9-49B3-9E88-50FCC6E2F02D@lovelysystems.com> Message-ID: <0659e703aa2c5912d7b0c2dc5b417f38@ruby-forum.com> hello jodok, many thanks for your answer. i have set the option -vv for the memcached but there are no error entries in the logfile. this is the last log entry: <7 connection closed. <7 new client connection <7 get website:page:/de/database/1 >7 sending key website:page:/de/database/1 >7 END <7 connection closed. last night i read some interesting articles. when i run the netstat -ntc command, i get hundreds of TIME_WAIT entries. so i changed the /etc/sysctl.conf file. i changed the following entry from net.ipv4.ip_local_port_range = 32768 61000 to net.ipv4.ip_local_port_range = 1024 65000 now my benchmark test works fine. are there any other problems when i run debian with this large port range? sincerely alexander -- Posted via http://www.ruby-forum.com/. From is at rambler-co.ru Sat Feb 2 00:00:09 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Sat, 2 Feb 2008 00:00:09 +0300 Subject: (99: Cannot assign requested address) In-Reply-To: <0659e703aa2c5912d7b0c2dc5b417f38@ruby-forum.com> References: <56CC3204-02E9-49B3-9E88-50FCC6E2F02D@lovelysystems.com> <0659e703aa2c5912d7b0c2dc5b417f38@ruby-forum.com> Message-ID: <20080201210009.GA36916@rambler-co.ru> On Fri, Feb 01, 2008 at 09:36:10PM +0100, Alexander Daniel wrote: > many thanks for your answer. i have set the option -vv for the memcached > but there are no error entries in the logfile. this is the last log entry: > > <7 connection closed. > <7 new client connection > <7 get website:page:/de/database/1 > >7 sending key website:page:/de/database/1 > >7 END > <7 connection closed. > > > last night i read some interesting articles. 
when i run the > > netstat -ntc > > command, i get hundreds of TIME_WAIT entries. so > i changed the /etc/sysctl.conf file. i changed the following > entry from > > net.ipv4.ip_local_port_range = 32768 61000 > > to > > net.ipv4.ip_local_port_range = 1024 65000 > > now my benchmark test works fine. are there any other problems > when i run debian with this large port range? No, there should be no problems. You may set 1024-65535. -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Sat Feb 2 00:12:26 2008 From: lists at ruby-forum.com (Ian Neubert) Date: Fri, 1 Feb 2008 22:12:26 +0100 Subject: More than one condition in an IF statement? Message-ID: <2df31a1cc396370cc61328500694627e@ruby-forum.com> Hello All, I can't seem to figure out how to put more than one condition in an IF statement. Is this possible? I thought this would work: if ($request_uri ~* \.(jpg|jpeg|gif|png|css)$) { expires 1d; } if ($is_args) { expires max; break; } But it sets all URIs with a query string to expire max, instead of just those ending in (jpg|jpeg|gif|png|css). Any ideas? Thanks in advance for your help. -- Posted via http://www.ruby-forum.com/. From eliott at cactuswax.net Sat Feb 2 00:19:29 2008 From: eliott at cactuswax.net (eliott) Date: Fri, 1 Feb 2008 13:19:29 -0800 Subject: More than one condition in an IF statement? In-Reply-To: <2df31a1cc396370cc61328500694627e@ruby-forum.com> References: <2df31a1cc396370cc61328500694627e@ruby-forum.com> Message-ID: <428d921d0802011319n19210098v5c9cba32fe39b566@mail.gmail.com> On 2/1/08, Ian Neubert wrote: > Hello All, > > I can't seem to figure out how to put more than one condition in an IF > statement. Is this possible? > > I thought this would work: > if ($request_uri ~* \.(jpg|jpeg|gif|png|css)$) { > expires 1d; > } > if ($is_args) { > expires max; > break; > } > > But it sets all URIs with a query string to expire max, instead of just > those ending in (jpg|jpeg|gif|png|css). > > Any ideas? Thanks in advance for your help. 
you could either adjust your if statement logic, or set a variable in the first if, and test for it later. From lists at ruby-forum.com Sat Feb 2 00:22:27 2008 From: lists at ruby-forum.com (Alexander Daniel) Date: Fri, 1 Feb 2008 22:22:27 +0100 Subject: (99: Cannot assign requested address) In-Reply-To: <20080201210009.GA36916@rambler-co.ru> References: <56CC3204-02E9-49B3-9E88-50FCC6E2F02D@lovelysystems.com> <0659e703aa2c5912d7b0c2dc5b417f38@ruby-forum.com> <20080201210009.GA36916@rambler-co.ru> Message-ID: <01da5efb47b60c368e47920f24127937@ruby-forum.com> > No, there should be no problems. > You may set 1024-65535. Thanks a lot Igor, Alexander -- Posted via http://www.ruby-forum.com/. From robert at exoweb.net Sat Feb 2 06:04:19 2008 From: robert at exoweb.net (Robert Bunting) Date: Sat, 02 Feb 2008 11:04:19 +0800 Subject: allow/deny for a single location, with other location handler Message-ID: <47A3DDB3.4090702@exoweb.net> Hi, I'm using nginx to proxy through to apache, with a simple location / { proxy_pass ... } However, there is one location (/account/sync_profile/) which I'd like to restrict to just one IP address. If I add a location for that address, location /account/sync_profile/ { allow 59.150.40.29; deny all; } then of course it doesn't get handled by the proxy. I can't put an if ($uri = /account/sync_profile) { allow 59.150.40.29; deny all; } inside my main location, since allow, deny won't work there. I can't use a multi-level if; I suppose one solution would be to include the proxy config into the restricted location as well, but this seems unnecessarily verbose. Any ideas? thanks, robert. 
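[One way to sketch the "include the proxy config" idea Robert mentions: nginx's include directive is a plain textual inclusion, so the shared proxy settings can live in a small file that both locations pull in, keeping the duplication to one line each. The file name proxy.inc, the backend address, and the listen port below are illustrative placeholders, not values from this thread:

```nginx
# conf/proxy.inc (hypothetical shared file) would contain:
#     proxy_pass       http://127.0.0.1:81;
#     proxy_set_header Host $http_host;

server {
    listen 80;

    # restricted endpoint: one allowed client IP, then the same proxying
    location /account/sync_profile/ {
        allow 59.150.40.29;
        deny  all;
        include conf/proxy.inc;
    }

    # everything else is proxied without restriction
    location / {
        include conf/proxy.inc;
    }
}
```

Because include is textual, the proxy_pass inside the included file behaves exactly as if it were written out in each location block.]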
From mobiledreamers at gmail.com Sat Feb 2 10:20:18 2008 From: mobiledreamers at gmail.com (mobiledreamers at gmail.com) Date: Fri, 1 Feb 2008 23:20:18 -0800 Subject: Fair Proxy Balancer In-Reply-To: <47A24C54.7060208@eastlink.ca> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <47A2057C.6040107@eastlink.ca> <20080131182905.GD21638@vadmin.megiteam.pl> <47A24C54.7060208@eastlink.ca> Message-ID: yes, it would be great if the fair proxy balancer is added to the nginx trunk On Jan 31, 2008 2:31 PM, David Pratt wrote: > Hi Grzegorz. This gives me a much better idea of what to expect. Thank > you for this. I am curious whether you have done anything in the way > of comparing the effectiveness of the fair proxy balancer to other > balancing schemes like haproxy or lvm. Speed is a big factor for > deployments so hoping speed will be good with the simplicity that this > option presents. Many thanks. > > Regards, > David > > Grzegorz Nosek wrote: > > On Thu, Jan 31, 2008 at 01:29:32PM -0400, David Pratt wrote: > >> Hi Grzegorz. I appreciate your explanation. It would be more convenient > >> to compile as an option since I am using an automated build process. If > >> it is self contained, can you foresee any problems building with most > >> current 0.5.x branch or is this strictly 0.6.x? Also, what is the > >> request threshold that triggers the round robin issue that I > >> am aware of. Many thanks. > > > > The module works with 0.5.x as well as 0.6.x (if it doesn't work for > > you, please mail me with a bug report). > > > > There's no threshold per se, it's just that the original load balancer > > directs requests strictly round robin, i.e. 0-1-2-3-0-1-2-3 etc. 
This > > ensures that every backend gets the same number of requests. > > > > upstream_fair always starts from backend 0 and works its way up until it > > finds an idle peer (more or less). If your load effectively uses a > > single backend at one time, it'll always be backend 0. If it uses the > > power of two backends, they'll be 0 and 1 etc. Thus the first backend > > will always have the most requests served, the second one will have more > > than the third etc. > > > > Best regards, > > Grzegorz Nosek > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eden at mojiti.com Sat Feb 2 10:28:51 2008 From: eden at mojiti.com (Eden Li) Date: Sat, 2 Feb 2008 15:28:51 +0800 Subject: use proxy store with url parameters In-Reply-To: <25548499-D240-4C13-A160-5B4650BC7B25@d2sc.com> References: <25548499-D240-4C13-A160-5B4650BC7B25@d2sc.com> Message-ID: <61F5207A-9EB4-4CFB-9E84-8E447B09331D@mojiti.com> Try replacing `/fetch/$args` in your `proxy_store` statement with ` $request_uri`. It should already be rewritten from the `error_page` statement. e.g. - proxy_store /var/www/data/fetch$args; + proxy_store /var/www/data$request_uri; On Feb 2, 2008, at 2:47 AM, Casey Rayman wrote: > I have been trying to get proxy_store to work with a url of the > form /resource?resourceId=XXX where XXX is a number. This url > actually returns a graphic from a database which does not change > often. Is is possible to have proxy store work in this case where > the XXX is the only part of the URL which is any different? > > My most recent attempt: > > location /resource { > root /var/www/data/fetch; > error_page 404 = /fetch$args; > } > location ^~ /fetch/ { > internal; > proxy_pass http://10.0.3.197:8246; > proxy_store /var/www/data/fetch$args; > proxy_store_access user:rw group:rw all:r; > } > > > Regards, > Casey From mfarroyo at nexustech.com.ph Sat Feb 2 18:15:37 2008 From: mfarroyo at nexustech.com.ph (Mario F. 
Arroyo) Date: Sat, 2 Feb 2008 23:15:37 +0800 Subject: Difficulty in Proxying for MS Exchange Web Access using NGINX Message-ID: <2EAD38AFFF955D42A54A38E33724DF9208BA9F@emailserver5.int.nexustech.com.ph> Hi Igor, Please allow me to thank you for your efforts in coming up with and maintaining such a wonderful piece of software! It's fast, stable and light on resources ... truly an amazing piece of work! I am running the nginx-0.6.25 software on Ubuntu Server 7.10. I was able to set up the web services and proxy them to web sites running PHP-based applications. Finally, I was able to proxy http traffic inside https using nginx. However, I am having difficulty doing the same for the MS Web Service for Exchange. The weird thing is ... if I connect to http://ns1.nexustech.com.ph/exchange/ ... everything works! Here are the pertinent config entries: server { listen 80; server_name ns1.nexustech.com.ph; access_log /var/log/nginx/access_http.log main; location / { proxy_pass http://192.168.0.135; proxy_set_header Host $http_host; } } But if I were to connect using https://ns1.nexustech.com.ph/exchange/ .... I get connected and everything, but some functions do not work properly ... in fact, what seems to happen is that it tries to keep connecting back to the http port ... I have tried enabling proxy_redirect but I can't seem to get the right redirection ... Anyway, here is the https section: server { listen 443; server_name ns1.nexustech.com.ph; access_log /var/log/nginx/access_https.log main; ssl on; ssl_certificate /etc/ssl/certs/cert.crt; ssl_certificate_key /etc/ssl/private/cert.key; location / { proxy_pass http://192.168.0.135; proxy_set_header Host $http_host; } And here are some log entries from the access_https.log: 202.175.215.131 - - [02/Feb/2008:22:52:17 +0800] GET /exchange? 
HTTP/1.1 "401" 83 "-" "RPT-HTTPClient/0.3-3E" "-" And from the access_http.log: 124.6.189.254 - mario [02/Feb/2008:23:08:02 +0800] POST /exchange/mfarroyo/Drafts/Difficulty%20in%20Proxying%20for%20MS%20Exchange%20Web%20Access%20using%20NGINX.EML HTTP/1.1 "302" 0 "http://ns1.nexustech.com.ph/exchange/mfarroyo/Drafts/Difficulty%20in%20Proxying%20for%20MS%20Exchange%20Web%20Access%20using%20NGINX.EML?Cmd=edit" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.11) Gecko/20071204 Ubuntu/7.10 (gutsy) Firefox/2.0.0.11" "-" 124.6.189.254 - mario [02/Feb/2008:23:08:03 +0800] GET /exchange/mfarroyo/Drafts/Difficulty%20in%20Proxying%20for%20MS%20Exchange%20Web%20Access%20using%20NGINX.EML?Cmd=edit HTTP/1.1 "200" 3212 "http://ns1.nexustech.com.ph/exchange/mfarroyo/Drafts/Difficulty%20in%20Proxying%20for%20MS%20Exchange%20Web%20Access%20using%20NGINX.EML?Cmd=edit" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.11) Gecko/20071204 Ubuntu/7.10 (gutsy) Firefox/2.0.0.11" "-" /var/log/nginx/access_http.log (END) Thanks in advance for your help! Mario -------------- next part -------------- An HTML attachment was scrubbed... URL: From propanbutan at gmx.net Sat Feb 2 20:19:36 2008 From: propanbutan at gmx.net (propanbutan) Date: Sat, 2 Feb 2008 18:19:36 +0100 Subject: allow/deny for a single location, with other location handler References: <47A3DDB3.4090702@exoweb.net> Message-ID: <20080202181936.4b816240.propanbutan@gmx.net> Robert Bunting wrote: > Any ideas? location /account/sync_profile/ { allow 59.150.40.29; deny all; proxy_pass .. } location / { proxy_pass .. } From dseifert at searchspark.com Sun Feb 3 02:18:27 2008 From: dseifert at searchspark.com (Douglas A. Seifert) Date: Sat, 02 Feb 2008 15:18:27 -0800 Subject: Custom 503 Error Page Message-ID: <1201994307.3555.11.camel@localhost> I am trying to use a test for the existence of a file to return a error page with a 503 Temporarily Unavailable response code. My configuration is below. The problem is that it does not work. 
I can see the custom error page, but the HTTP status code is 200, not the desired 503. If I change the if directive to this: if (-f $document_root/system/maintenance.html) { rewrite ^(.*)$ /system/maintenance.html; # No last return 503; } I start getting a 503 HTTP status code, but the content is not my custom error page, but rather the default 503 response compiled into the nginx server. Am I doing something terribly wrong? I would really like to see my custom page with a real 503 HTTP status code. Thanks for any help, Douglas A. Seifert nginx.conf: -------------------------------------------------- # user and group to run as #user www www; # number of nginx workers worker_processes 6; # pid of nginx master process pid /usr/local/www/nginx.pid; # Number of worker connections. 1024 is a good default events { worker_connections 1024; } # start the http module where we config http access. http { # pull in mime-types. You can break out your config # into as many include's as you want to make it cleaner include /usr/local/nginx/conf/mime.types; # set a default type for the rare situation that # nothing matches from the mimie-type include default_type application/octet-stream; # configure log format log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # main access log access_log /usr/local/www/log/nginx_access.log main; # main error log error_log /usr/local/www/log/nginx_error.log debug; # no sendfile on OSX sendfile on; # These are good default values. tcp_nopush on; tcp_nodelay off; # output compression saves bandwidth gzip on; gzip_http_version 1.0; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; server { # port to listen on. 
Can also be set to an IP:PORT listen *:8080; # Set the max size for file uploads to 50Mb client_max_body_size 50M; # sets the domain[s] that this vhost server requests for server_name .foo.com *; # doc root root /usr/local/www/test; # vhost specific access log access_log /usr/local/www/log/nginx.vhost.access.log main; # this rewrites all the requests to the maintenance.html # page if it exists in the doc root. This is for capistrano's # disable web task if (-f $document_root/system/maintenance.html) { rewrite ^(.*)$ /system/maintenance.html last; return 503; } location / { root /usr/local/www/test; } error_page 500 502 504 /500.html; error_page 503 /503.html; } } ----------------------------------------------------------------- From mdounin at mdounin.ru Sun Feb 3 03:37:11 2008 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Feb 2008 03:37:11 +0300 Subject: Custom 503 Error Page In-Reply-To: <1201994307.3555.11.camel@localhost> References: <1201994307.3555.11.camel@localhost> Message-ID: <20080203003711.GN75203@mdounin.ru> Hello! On Sat, Feb 02, 2008 at 03:18:27PM -0800, Douglas A. Seifert wrote: >I am trying to use a test for the existence of a file to return a error >page with a 503 Temporarily Unavailable response code. My configuration >is below. The problem is that it does not work. I can see the custom >error page, but the HTTP status code is 200, not the desired 503. > >If I change the if directive to this: > > if (-f $document_root/system/maintenance.html) { > rewrite ^(.*)$ /system/maintenance.html; # No last > return 503; > } > >I start getting a 503 HTTP status code, but the content is not my custom >error page, but rather the default 503 response compiled into the nginx >server. > >Am I doing something terribly wrong? I would really like to see my >custom page with a real 503 HTTP status code. If you want to use custom response for 503 error, you should write error_page 503 /system/maintenance.html; in your config. 
Maxim Dounin From dseifert at searchspark.com Sun Feb 3 04:08:29 2008 From: dseifert at searchspark.com (Douglas A. Seifert) Date: Sat, 02 Feb 2008 17:08:29 -0800 Subject: Custom 503 Error Page In-Reply-To: <20080203003711.GN75203@mdounin.ru> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> Message-ID: <1202000909.3555.13.camel@localhost> > >I start getting a 503 HTTP status code, but the content is not my custom > >error page, but rather the default 503 response compiled into the nginx > >server. > > > >Am I doing something terribly wrong? I would really like to see my > >custom page with a real 503 HTTP status code. > > If you want to use custom response for 503 error, you should write > > error_page 503 /system/maintenance.html; > > in your config. > Thanks for the response. Unfortunately, however, that has no effect. I still see the compiled in 503 content. -Doug Seifert From eden at mojiti.com Sun Feb 3 04:18:18 2008 From: eden at mojiti.com (Eden Li) Date: Sun, 3 Feb 2008 09:18:18 +0800 Subject: allow/deny for a single location, with other location handler In-Reply-To: <47A3DDB3.4090702@exoweb.net> References: <47A3DDB3.4090702@exoweb.net> Message-ID: <93D6C9EB-C4B3-4FF6-9EE4-248EE3ED0BA0@mojiti.com> proxy_* config declarations in the server block trickle down to location blocks. eg: proxy_set_header X-Forwarded-For $http_x_client_ip; .. location / { proxy_pass ...; } # (1) location /account/sync_profile/ { allow 59.150.40.29; deny all; proxy_pass ...; } # (2) both (1) and (2) will pass along the X-Client-IP header On Feb 2, 2008, at 11:04 AM, Robert Bunting wrote: > Hi, > > I'm using nginx to proxy through to apache, with a simple > > location / { > proxy_pass ... > } > > However, there is one location ( /account/sync_profile/) which I'd > like to restrict to just one IP address. 
> > If I add a location for that address, > location /account/sync_profile/ { > allow 59.150.40.29; > deny all; > } > > then of course it doesn't get handled by the proxy. > > I can't put an > if ($uri = /account/sync_profile) { allow 59.150.40.29; deny all; } > inside my main location, since allow, deny won't work there. > > I can't use a multi-level if; > > I suppose one solution would be to include the proxy config into the > restricted location as well, but this seems like unnecessarily > verbose.. > > Any ideas? > > thanks, > robert. > > > > From mdounin at mdounin.ru Sun Feb 3 04:20:56 2008 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Feb 2008 04:20:56 +0300 Subject: Custom 503 Error Page In-Reply-To: <1202000909.3555.13.camel@localhost> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> Message-ID: <20080203012056.GA55257@mdounin.ru> Hello! On Sat, Feb 02, 2008 at 05:08:29PM -0800, Douglas A. Seifert wrote: > >> >I start getting a 503 HTTP status code, but the content is not my custom >> >error page, but rather the default 503 response compiled into the nginx >> >server. >> > >> >Am I doing something terribly wrong? I would really like to see my >> >custom page with a real 503 HTTP status code. >> >> If you want to use custom response for 503 error, you should write >> >> error_page 503 /system/maintenance.html; >> >> in your config. >> > >Thanks for the response. Unfortunately, however, that has no effect. I still see the compiled in 503 content. Probably because you have error_page 503 redefined later in your config to /503.html. 
Maxim Dounin From eden at mojiti.com Sun Feb 3 04:23:07 2008 From: eden at mojiti.com (Eden Li) Date: Sun, 3 Feb 2008 09:23:07 +0800 Subject: Custom 503 Error Page In-Reply-To: <1201994307.3555.11.camel@localhost> References: <1201994307.3555.11.camel@localhost> Message-ID: Try: if (-f $document_root/system/maintenance.html) { rewrite ^(.*)$ @maintenance last; } location = @maintenance { error_page 503 /system/maintenance.html; return 503; } On Feb 3, 2008, at 7:18 AM, Douglas A. Seifert wrote: > I am trying to use a test for the existence of a file to return a > error > page with a 503 Temporarily Unavailable response code. My > configuration > is below. The problem is that it does not work. I can see the custom > error page, but the HTTP status code is 200, not the desired 503. > > If I change the if directive to this: > > if (-f $document_root/system/maintenance.html) { > rewrite ^(.*)$ /system/maintenance.html; # No last > return 503; > } > > I start getting a 503 HTTP status code, but the content is not my > custom > error page, but rather the default 503 response compiled into the > nginx > server. > > Am I doing something terribly wrong? I would really like to see my > custom page with a real 503 HTTP status code. > > Thanks for any help, > Douglas A. Seifert > > > nginx.conf: > -------------------------------------------------- > # user and group to run as > #user www www; > > # number of nginx workers > worker_processes 6; > > # pid of nginx master process > pid /usr/local/www/nginx.pid; > > # Number of worker connections. 1024 is a good default > events { > worker_connections 1024; > } > > # start the http module where we config http access. > http { > # pull in mime-types. 
You can break out your config > # into as many include's as you want to make it cleaner > include /usr/local/nginx/conf/mime.types; > > # set a default type for the rare situation that > # nothing matches from the mimie-type include > default_type application/octet-stream; > > # configure log format > log_format main '$remote_addr - $remote_user [$time_local] ' > '"$request" $status $body_bytes_sent "$http_referer" > ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > # main access log > access_log /usr/local/www/log/nginx_access.log main; > > # main error log > error_log /usr/local/www/log/nginx_error.log debug; > > # no sendfile on OSX > sendfile on; > > # These are good default values. > tcp_nopush on; > tcp_nodelay off; > # output compression saves bandwidth > gzip on; > gzip_http_version 1.0; > gzip_comp_level 2; > gzip_proxied any; > gzip_types text/plain text/html text/css application/x- > javascript > text/xml application/xml application/xml+rss text/javascript; > > server { > > # port to listen on. Can also be set to an IP:PORT > listen *:8080; > > # Set the max size for file uploads to 50Mb > client_max_body_size 50M; > > # sets the domain[s] that this vhost server requests for > server_name .foo.com *; > > # doc root > root /usr/local/www/test; > > # vhost specific access log > access_log /usr/local/www/log/nginx.vhost.access.log main; > > # this rewrites all the requests to the maintenance.html > # page if it exists in the doc root. This is for capistrano's > # disable web task > if (-f $document_root/system/maintenance.html) { > rewrite ^(.*)$ /system/maintenance.html last; > return 503; > } > > location / { > root /usr/local/www/test; > } > > error_page 500 502 504 /500.html; > error_page 503 /503.html; > } > > } > ----------------------------------------------------------------- > From dseifert at searchspark.com Sun Feb 3 05:15:08 2008 From: dseifert at searchspark.com (Douglas A. 
Seifert) Date: Sat, 02 Feb 2008 18:15:08 -0800 Subject: Custom 503 Error Page In-Reply-To: References: <1201994307.3555.11.camel@localhost> Message-ID: <1202004908.3555.15.camel@localhost> On Sun, 2008-02-03 at 09:23 +0800, Eden Li wrote: > Try: > > if (-f $document_root/system/maintenance.html) { > rewrite ^(.*)$ @maintenance last; > } > > location = @maintenance { > error_page 503 /system/maintenance.html; > return 503; > } Unfortunately, I get the same result: a 503 with the compiled in 503 content. Thanks, Doug Seifert From dseifert at searchspark.com Sun Feb 3 05:16:15 2008 From: dseifert at searchspark.com (Douglas A. Seifert) Date: Sat, 02 Feb 2008 18:16:15 -0800 Subject: Custom 503 Error Page In-Reply-To: <20080203012056.GA55257@mdounin.ru> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> Message-ID: <1202004975.3555.18.camel@localhost> On Sun, 2008-02-03 at 04:20 +0300, Maxim Dounin wrote: > Hello! > > On Sat, Feb 02, 2008 at 05:08:29PM -0800, Douglas A. Seifert wrote: > > > > >> >I start getting a 503 HTTP status code, but the content is not my custom > >> >error page, but rather the default 503 response compiled into the nginx > >> >server. > >> > > >> >Am I doing something terribly wrong? I would really like to see my > >> >custom page with a real 503 HTTP status code. > >> > >> If you want to use custom response for 503 error, you should write > >> > >> error_page 503 /system/maintenance.html; > >> > >> in your config. > >> > > > >Thanks for the response. Unfortunately, however, that has no effect. I still see the compiled in 503 content. > > Probably because you have error_page 503 redefined later in your > config to /503.html. > Maxim, Thanks for trying, but it doesn't matter where in the config the error_page directive is placed, the result is the same: a 503 response with the compiled in 503 content. 
Thanks, Doug Seifert From mdounin at mdounin.ru Sun Feb 3 05:34:01 2008 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Feb 2008 05:34:01 +0300 Subject: Custom 503 Error Page In-Reply-To: <1202004975.3555.18.camel@localhost> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> Message-ID: <20080203023401.GB55257@mdounin.ru> Hello! On Sat, Feb 02, 2008 at 06:16:15PM -0800, Douglas A. Seifert wrote: > >On Sun, 2008-02-03 at 04:20 +0300, Maxim Dounin wrote: >> Hello! >> >> On Sat, Feb 02, 2008 at 05:08:29PM -0800, Douglas A. Seifert wrote: >> >> > >> >> >I start getting a 503 HTTP status code, but the content is not my custom >> >> >error page, but rather the default 503 response compiled into the nginx >> >> >server. >> >> > >> >> >Am I doing something terribly wrong? I would really like to see my >> >> >custom page with a real 503 HTTP status code. >> >> >> >> If you want to use custom response for 503 error, you should write >> >> >> >> error_page 503 /system/maintenance.html; >> >> >> >> in your config. >> >> >> > >> >Thanks for the response. Unfortunately, however, that has no effect. I still see the compiled in 503 content. >> >> Probably because you have error_page 503 redefined later in your >> config to /503.html. >> >Maxim, >Thanks for trying, but it doesn't matter where in the config the >error_page directive is placed, the result is the same: a 503 response >with the compiled in 503 content. Just another quick note: due to some implementation weirdness of ngx_http_rewrite_module, it may be required to define error_page _before_ the if/return block. Try something like this: error_page 503 /system/maintenance.html; if (-f ...)
{ return 503; } Maxim Dounin From eden at mojiti.com Sun Feb 3 05:47:39 2008 From: eden at mojiti.com (Eden Li) Date: Sun, 3 Feb 2008 10:47:39 +0800 Subject: Custom 503 Error Page In-Reply-To: <20080203023401.GB55257@mdounin.ru> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> <20080203023401.GB55257@mdounin.ru> Message-ID: Hmm... that doesn't seem to work either. Is it possible that `return xxx;` always generates the internal response? The only thing that would achieve the desired result in this case is proxying to some blackhole which would cause the 503 to be caught and rewritten according to the error_page directive. On Feb 3, 2008, at 10:34 AM, Maxim Dounin wrote: > Just another quick note: due to some implementation wierdness of > ngx_http_rewrite_module, it may be required to define error_page > _before_ if/return block. Try something like this: > > error_page 503 /system/maintenance.html; > > if (-f ...) { > return 503; > } From robert at exoweb.net Sun Feb 3 05:50:48 2008 From: robert at exoweb.net (Robert Bunting) Date: Sun, 03 Feb 2008 10:50:48 +0800 Subject: allow/deny for a single location, with other location handler In-Reply-To: <93D6C9EB-C4B3-4FF6-9EE4-248EE3ED0BA0@mojiti.com> References: <47A3DDB3.4090702@exoweb.net> <93D6C9EB-C4B3-4FF6-9EE4-248EE3ED0BA0@mojiti.com> Message-ID: <47A52C08.9070507@exoweb.net> Thanks Eden, I guess that way will allow me to put the minimum amount of config in each location directive, at least. So for the record, I ended up with: server { ... proxy_buffers ...; proxy_set_header...; ... other global proxy options; location / { proxy_pass http://bella; } location /special_url/ { allow 59.150.40.29; deny all; proxy_pass http://bella; } location /static/ { ... } } Not so bad, as long as I don't have too many of these special per-location rules. thanks, robert. 
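[Editorial note: a fleshed-out sketch of the pattern above, using the upstream name and address from the thread; the upstream definition itself is hypothetical. It works because proxy_* settings declared at server level are inherited by every location, so only the access rules and proxy_pass need repeating:]

```nginx
upstream bella {
    server 127.0.0.1:8081;
}

server {
    listen 80;

    # inherited by every location below
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;

    location / {
        proxy_pass http://bella;
    }

    # same backend, reachable from a single address only
    location /special_url/ {
        allow 59.150.40.29;
        deny all;
        proxy_pass http://bella;
    }
}
```
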
Eden Li wrote: > proxy_* config declarations in the server block trickle down to location > blocks. > > eg: > > proxy_set_header X-Forwarded-For $http_x_client_ip; > .. > location / { proxy_pass ...; } # (1) > location /account/sync_profile/ { allow 59.150.40.29; deny all; > proxy_pass ...; } # (2) > > both (1) and (2) will pass along the X-Client-IP header > > On Feb 2, 2008, at 11:04 AM, Robert Bunting wrote: > >> Hi, >> >> I'm using nginx to proxy through to apache, with a simple >> >> location / { >> proxy_pass ... >> } >> >> However, there is one location ( /account/sync_profile/) which I'd >> like to restrict to just one IP address. >> >> If I add a location for that address, >> location /account/sync_profile/ { >> allow 59.150.40.29; >> deny all; >> } >> >> then of course it doesn't get handled by the proxy. >> >> I can't put an >> if ($uri = /account/sync_profile) { allow 59.150.40.29; deny all; } >> inside my main location, since allow, deny won't work there. >> >> I can't use a multi-level if; >> >> I suppose one solution would be to include the proxy config into the >> restricted location as well, but this seems like unnecessarily verbose.. >> >> Any ideas? >> >> thanks, >> robert. >> >> >> >> > > > From mdounin at mdounin.ru Sun Feb 3 06:15:39 2008 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 Feb 2008 06:15:39 +0300 Subject: Custom 503 Error Page In-Reply-To: References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> <20080203023401.GB55257@mdounin.ru> Message-ID: <20080203031539.GC55257@mdounin.ru> Hello! On Sun, Feb 03, 2008 at 10:47:39AM +0800, Eden Li wrote: > Hmm... that doesn't seem to work either. Is it possible that `return xxx;` > always generates the internal response? 
> > The only thing that would achieve the desired result in this case is > proxying to some blackhole which would cause the 503 to be caught and > rewritten according to the error_page directive. Oops. Sorry, I missed. The real problem is that with suggested configuration (i.e. always return 503 at server level) there is no way to reach /system/maintenance.html file for nginx. So, it tries to get /system/maintenance.html for error body, and gets yet another 503. So it has to return hardcoded content. The only solution is to allow nginx to access /system/maintenance.html somehow. Something like this: error_page 503 /system/maintenance.html; location / { if (-f ...) { return 503; } } location /system/maintenance.html { # allow requests here - do not return 503 } The if{} block should be in all locations where access should be disallowed, but not for /system/maintenance.html itself. Maxim Dounin > > On Feb 3, 2008, at 10:34 AM, Maxim Dounin wrote: >> Just another quick note: due to some implementation wierdness of >> ngx_http_rewrite_module, it may be required to define error_page _before_ >> if/return block. Try something like this: >> >> error_page 503 /system/maintenance.html; >> >> if (-f ...) { >> return 503; >> } > From dseifert at searchspark.com Sun Feb 3 07:02:53 2008 From: dseifert at searchspark.com (Douglas A. Seifert) Date: Sat, 02 Feb 2008 20:02:53 -0800 Subject: Custom 503 Error Page In-Reply-To: <20080203031539.GC55257@mdounin.ru> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> <20080203023401.GB55257@mdounin.ru> <20080203031539.GC55257@mdounin.ru> Message-ID: <1202011373.3555.22.camel@localhost> > > Hmm... that doesn't seem to work either. Is it possible that `return xxx;` > > always generates the internal response? 
> > > > The only thing that would achieve the desired result in this case is > > proxying to some blackhole which would cause the 503 to be caught and > > rewritten according to the error_page directive. > > Oops. Sorry, I missed. The real problem is that with suggested > configuration (i.e. always return 503 at server level) there > is no way to reach /system/maintenance.html file for nginx. > > So, it tries to get /system/maintenance.html for error body, and > gets yet another 503. So it has to return hardcoded content. > > The only solution is to allow nginx to access > /system/maintenance.html somehow. Something like this: > > error_page 503 /system/maintenance.html; > > location / { > if (-f ...) { > return 503; > } > } > > location /system/maintenance.html { > # allow requests here - do not return 503 > } > > The if{} block should be in all locations where access should be > disallowed, but not for /system/maintenance.html itself. > > Maxim Dounin > Doing it that way gets me the custom content, but with a 200 OK response. :( I'm afraid the only way I will be able to do this is to recompile nginx with the default 503 content set to what I want it to be (that's my current workaround). Or dig up my extremely rusty C skills and try to figure out the bug, if any. Thanks, Doug Seifert From atmos at atmos.org Sun Feb 3 09:40:02 2008 From: atmos at atmos.org (Corey Donohoe) Date: Sat, 2 Feb 2008 23:40:02 -0700 Subject: Custom 503 Error Page In-Reply-To: <1202011373.3555.22.camel@localhost> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> <20080203023401.GB55257@mdounin.ru> <20080203031539.GC55257@mdounin.ru> <1202011373.3555.22.camel@localhost> Message-ID: On 2/2/08, Douglas A. Seifert wrote: > > > Hmm... that doesn't seem to work either. Is it possible that `return xxx;` > > > always generates the internal response? 
> > > > > > The only thing that would achieve the desired result in this case is > > > proxying to some blackhole which would cause the 503 to be caught and > > > rewritten according to the error_page directive. > > > > Oops. Sorry, I missed. The real problem is that with suggested > > configuration (i.e. always return 503 at server level) there > > is no way to reach /system/maintenance.html file for nginx. > > > > So, it tries to get /system/maintenance.html for error body, and > > gets yet another 503. So it has to return hardcoded content. > > > > The only solution is to allow nginx to access > > /system/maintenance.html somehow. Something like this: > > > > error_page 503 /system/maintenance.html; > > > > location / { > > if (-f ...) { > > return 503; > > } > > } > > > > location /system/maintenance.html { > > # allow requests here - do not return 503 > > } > > > > The if{} block should be in all locations where access should be > > disallowed, but not for /system/maintenance.html itself. > > > > Maxim Dounin > > > > Doing it that way gets me the custom content, but with a 200 OK > response. :( I'm afraid the only way I will be able to do this is to > recompile nginx with the default 503 content set to what I want it to be > (that's my current workaround). Or dig up my extremely rusty C skills > and try to figure out the bug, if any. > > Thanks, > Doug Seifert > > Hey Doug, We do something like http://forum.engineyard.com/forums/3/topics/22 at Engine Yard. I did it earlier this week for a client. Is this approach any different than the approaches you've tried thusfar? 
-- Corey Donohoe http://www.atmos.org/ From is at rambler-co.ru Sun Feb 3 09:40:40 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Sun, 3 Feb 2008 09:40:40 +0300 Subject: Custom 503 Error Page In-Reply-To: <1202011373.3555.22.camel@localhost> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> <20080203023401.GB55257@mdounin.ru> <20080203031539.GC55257@mdounin.ru> <1202011373.3555.22.camel@localhost> Message-ID: <20080203064040.GA66125@rambler-co.ru> On Sat, Feb 02, 2008 at 08:02:53PM -0800, Douglas A. Seifert wrote: > > > Hmm... that doesn't seem to work either. Is it possible that `return xxx;` > > > always generates the internal response? > > > > > > The only thing that would achieve the desired result in this case is > > > proxying to some blackhole which would cause the 503 to be caught and > > > rewritten according to the error_page directive. > > > > Oops. Sorry, I missed. The real problem is that with suggested > > configuration (i.e. always return 503 at server level) there > > is no way to reach /system/maintenance.html file for nginx. > > > > So, it tries to get /system/maintenance.html for error body, and > > gets yet another 503. So it has to return hardcoded content. > > > > The only solution is to allow nginx to access > > /system/maintenance.html somehow. Something like this: > > > > error_page 503 /system/maintenance.html; > > > > location / { > > if (-f ...) { > > return 503; > > } > > } > > > > location /system/maintenance.html { > > # allow requests here - do not return 503 > > } > > > > The if{} block should be in all locations where access should be > > disallowed, but not for /system/maintenance.html itself. > > > > Maxim Dounin > > > > Doing it that way gets me the custom content, but with a 200 OK > response. 
:( I'm afraid the only way I will be able to do this is to > recompile nginx with the default 503 content set to what I want it to be > (that's my current workaround). Or dig up my extremely rusty C skills > and try to figure out the bug, if any. The way suggested by Maxim should work. Are you sure that you do not use error_page 503 = /system/maintenance.html; instead of error_page 503 /system/maintenance.html; ? -- Igor Sysoev http://sysoev.ru/en/ From dseifert at searchspark.com Sun Feb 3 18:51:51 2008 From: dseifert at searchspark.com (Douglas A. Seifert) Date: Sun, 03 Feb 2008 07:51:51 -0800 Subject: Custom 503 Error Page In-Reply-To: <20080203064040.GA66125@rambler-co.ru> References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> <20080203023401.GB55257@mdounin.ru> <20080203031539.GC55257@mdounin.ru> <1202011373.3555.22.camel@localhost> <20080203064040.GA66125@rambler-co.ru> Message-ID: <1202053911.3555.27.camel@localhost> > > > > Hmm... that doesn't seem to work either. Is it possible that `return xxx;` > > > > always generates the internal response? > > > > > > > > The only thing that would achieve the desired result in this case is > > > > proxying to some blackhole which would cause the 503 to be caught and > > > > rewritten according to the error_page directive. > > > > > > Oops. Sorry, I missed. The real problem is that with suggested > > > configuration (i.e. always return 503 at server level) there > > > is no way to reach /system/maintenance.html file for nginx. > > > > > > So, it tries to get /system/maintenance.html for error body, and > > > gets yet another 503. So it has to return hardcoded content. > > > > > > The only solution is to allow nginx to access > > > /system/maintenance.html somehow. Something like this: > > > > > > error_page 503 /system/maintenance.html; > > > > > > location / { > > > if (-f ...)
{ > > > return 503; > > > } > > > } > > > > > > location /system/maintenance.html { > > > # allow requests here - do not return 503 > > > } > > > > > > The if{} block should be in all locations where access should be > > > disallowed, but not for /system/maintenance.html itself. > > > > > > Maxim Dounin > > > > > > > Doing it that way gets me the custom content, but with a 200 OK > > response. :( I'm afraid the only way I will be able to do this is to > > recompile nginx with the default 503 content set to what I want it to be > > (that's my current workaround). Or dig up my extremely rusty C skills > > and try to figure out the bug, if any. > > The way suggested by Maxim should work. Are you sure that yo do not use > > error_page 503 = /system/maintenance.html; > > instead of > > error_page 503 /system/maintenance.html; > I had forgotten to pull the rewrite directive out of the if: I had if (-f ...) { rewrite ^(.*)$ /system/maintenance.html last; return 503; } Taking the rewrite directive out fixed it. For those interested, the final config that does what I want is below. I imagine a more complicated config will be harder to deal with because I will have to make sure the check for the maintenance page stays out of the top level of the config. test.conf ---------------------------------------- # user and group to run as #user www www; # number of nginx workers worker_processes 6; # pid of nginx master process pid /usr/local/www/nginx.pid; # Number of worker connections. 1024 is a good default events { worker_connections 1024; } # start the http module where we config http access. http { # pull in mime-types. 
You can break out your config # into as many include's as you want to make it cleaner include /usr/local/nginx/conf/mime.types; # set a default type for the rare situation that # nothing matches from the mimie-type include default_type application/octet-stream; # configure log format log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # main access log access_log /usr/local/www/log/nginx_access.log main; # main error log error_log /usr/local/www/log/nginx_error.log debug; # no sendfile on OSX sendfile on; # These are good default values. tcp_nopush on; tcp_nodelay off; # output compression saves bandwidth gzip on; gzip_http_version 1.0; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; server { # port to listen on. Can also be set to an IP:PORT listen *:8080; # Set the max size for file uploads to 50Mb client_max_body_size 50M; # sets the domain[s] that this vhost server requests for server_name .foo.com *; # doc root root /usr/local/www/test; # vhost specific access log access_log /usr/local/www/log/nginx.vhost.access.log main; # this rewrites all the requests to the maintenance.html # page if it exists in the doc root. This is for capistrano's # disable web task error_page 500 502 504 /500.html; error_page 503 /system/maintenance.html; location /system/maintenance.html { # Allow requests } location / { if (-f $document_root/system/maintenance.html) { return 503; } } } } ----------------------------------------------------- From dseifert at searchspark.com Sun Feb 3 18:58:46 2008 From: dseifert at searchspark.com (Douglas A. 
Seifert) Date: Sun, 03 Feb 2008 07:58:46 -0800 Subject: Custom 503 Error Page In-Reply-To: References: <1201994307.3555.11.camel@localhost> <20080203003711.GN75203@mdounin.ru> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> <20080203023401.GB55257@mdounin.ru> <20080203031539.GC55257@mdounin.ru> <1202011373.3555.22.camel@localhost> Message-ID: <1202054326.3555.31.camel@localhost> > > > > Hmm... that doesn't seem to work either. Is it possible that `return xxx;` > > > > always generates the internal response? > > > > > > > > The only thing that would achieve the desired result in this case is > > > > proxying to some blackhole which would cause the 503 to be caught and > > > > rewritten according to the error_page directive. > > > > > > Oops. Sorry, I missed. The real problem is that with suggested > > > configuration (i.e. always return 503 at server level) there > > > is no way to reach /system/maintenance.html file for nginx. > > > > > > So, it tries to get /system/maintenance.html for error body, and > > > gets yet another 503. So it has to return hardcoded content. > > > > > > The only solution is to allow nginx to access > > > /system/maintenance.html somehow. Something like this: > > > > > > error_page 503 /system/maintenance.html; > > > > > > location / { > > > if (-f ...) { > > > return 503; > > > } > > > } > > > > > > location /system/maintenance.html { > > > # allow requests here - do not return 503 > > > } > > > > > > The if{} block should be in all locations where access should be > > > disallowed, but not for /system/maintenance.html itself. > > > > > > Maxim Dounin > > > > > > > Doing it that way gets me the custom content, but with a 200 OK > > response. :( I'm afraid the only way I will be able to do this is to > > recompile nginx with the default 503 content set to what I want it to be > > (that's my current workaround). 
Or dig up my extremely rusty C skills > > and try to figure out the bug, if any. > > > > Thanks, > > Doug Seifert > > > > > Hey Doug, > > We do something like http://forum.engineyard.com/forums/3/topics/22 at > Engine Yard. I did it earlier this week for a client. Is this > approach any different than the approaches you've tried thusfar? > Corey, I tried to duplicate the config on the forum post you cited, but I must be doing something wrong because I get into an infinite redirect loop. Attached is what I tried ... test.conf ---------------------- # number of nginx workers worker_processes 6; # pid of nginx master process pid /usr/local/www/nginx.pid; # Number of worker connections. 1024 is a good default events { worker_connections 1024; } # start the http module where we config http access. http { # pull in mime-types. You can break out your config # into as many include's as you want to make it cleaner include /usr/local/nginx/conf/mime.types; # set a default type for the rare situation that # nothing matches from the mimie-type include default_type application/octet-stream; # configure log format log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # main access log access_log /usr/local/www/log/nginx_access.log main; # main error log error_log /usr/local/www/log/nginx_error.log debug; # no sendfile on OSX sendfile on; # These are good default values. tcp_nopush on; tcp_nodelay off; # output compression saves bandwidth gzip on; gzip_http_version 1.0; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; server { # port to listen on. 
Can also be set to an IP:PORT listen *:8080; # Set the max size for file uploads to 50Mb client_max_body_size 50M; # sets the domain[s] that this vhost server requests for server_name .foo.com *; # doc root root /usr/local/www/test; # vhost specific access log access_log /usr/local/www/log/nginx.vhost.access.log main; # this rewrites all the requests to the maintenance.html # page if it exists in the doc root. This is for capistrano's # disable web task error_page 500 502 504 /500.html; error_page 503 @503; location @503 { rewrite ^(.*)$ /system/maintenance.html break; } location /system/maintenance.html { # pass } location / { if (-f $document_root/system/maintenance.html) { return 503; } } } } --------------------------- From atmos at atmos.org Sun Feb 3 21:00:02 2008 From: atmos at atmos.org (Corey Donohoe) Date: Sun, 3 Feb 2008 11:00:02 -0700 Subject: Custom 503 Error Page In-Reply-To: <1202054326.3555.31.camel@localhost> References: <1201994307.3555.11.camel@localhost> <1202000909.3555.13.camel@localhost> <20080203012056.GA55257@mdounin.ru> <1202004975.3555.18.camel@localhost> <20080203023401.GB55257@mdounin.ru> <20080203031539.GC55257@mdounin.ru> <1202011373.3555.22.camel@localhost> <1202054326.3555.31.camel@localhost> Message-ID: On 2/3/08, Douglas A. Seifert wrote: > > test.conf > ---------------------- > # number of nginx workers > worker_processes 6; > > # pid of nginx master process > pid /usr/local/www/nginx.pid; > > # Number of worker connections. 1024 is a good default > events { > worker_connections 1024; > } > > # start the http module where we config http access. > http { > # pull in mime-types. 
You can break out your config > # into as many include's as you want to make it cleaner > include /usr/local/nginx/conf/mime.types; > > # set a default type for the rare situation that > # nothing matches from the mimie-type include > default_type application/octet-stream; > > # configure log format > log_format main '$remote_addr - $remote_user [$time_local] ' > '"$request" $status $body_bytes_sent "$http_referer" > ' > '"$http_user_agent" "$http_x_forwarded_for"'; > > # main access log > access_log /usr/local/www/log/nginx_access.log main; > > # main error log > error_log /usr/local/www/log/nginx_error.log debug; > > # no sendfile on OSX > sendfile on; > > # These are good default values. > tcp_nopush on; > tcp_nodelay off; > # output compression saves bandwidth > gzip on; > gzip_http_version 1.0; > gzip_comp_level 2; > gzip_proxied any; > gzip_types text/plain text/html text/css application/x-javascript > text/xml application/xml application/xml+rss text/javascript; > > server { > > # port to listen on. Can also be set to an IP:PORT > listen *:8080; > > # Set the max size for file uploads to 50Mb > client_max_body_size 50M; > > # sets the domain[s] that this vhost server requests for > server_name .foo.com *; > > # doc root > root /usr/local/www/test; > > # vhost specific access log > access_log /usr/local/www/log/nginx.vhost.access.log main; > > # this rewrites all the requests to the maintenance.html > # page if it exists in the doc root. This is for capistrano's > # disable web task > error_page 500 502 504 /500.html; > error_page 503 @503; > location @503 { > rewrite ^(.*)$ /system/maintenance.html break; > } > location /system/maintenance.html { > # pass > } > > location / { > if (-f $document_root/system/maintenance.html) { > return 503; > } > } > > } > > } > --------------------------- > > > Here's my server block. My ordering is slightly different. 
http://pastie.caboo.se/private/dmnepj3m2zxsnxhrqsuhcq Here's the shell output from curl before and after the maintenance page goes up, and it appears to be delivering the content as expected. http://pastie.caboo.se/private/rpjtmssmf8jfyblhp4rpfq -- Corey Donohoe http://www.atmos.org/ From carloscm at gmail.com Mon Feb 4 00:39:34 2008 From: carloscm at gmail.com (Carlos) Date: Sun, 3 Feb 2008 22:39:34 +0100 Subject: Support for POST client body sent with Transfer-Encoding: chunked Message-ID: <1c9d46d70802031339w19847c8dg71af993235a4eed5@mail.gmail.com> In the process of switching from Apache 2 to an all-nginx setup for front-end servers I have found an nginx (mis-)feature that makes it impossible to service certain clients. Basically nginx will refuse to serve POST requests that don't specify a Content-Length parameter and use Transfer-Encoding: chunked for the POST client body. A 411 error will be served every time such a request is received. Setting up nginx as a proxy doesn't work either, the same 411 error is served. It appears nginx is buffering the POST body and applies the same limiting rules even for proxy_pass-ed requests. I've looked in the code and it appears to be an explicit limitation (ngx_http_process_request_header() function in ngx_http_request.c), and the changelog mentions it as an actual "bugfix" (nginx 0.3.12). I'm not an expert in technolegalese but it appears that TE:chunked POST bodies are an HTTP 1.1 feature and there are actual products sending such POST bodies, and I am pretty sure other servers besides Apache 2 support it too. Is there any technical explanation for nginx not supporting it, or at least explicitly disabling it? Does anybody know a work-around for it? I guess the correct answer is "add the feature and send a patch", but asking first to not reinvent any wheel won't hurt :) (The clients in question are J2ME phones.
I can do my own HTTP client for the few of them that expose TCP sockets to applications, but the majority only allow high-level "HTTP connections" that don't expose any socket internals or details about how the actual request is formatted. Not all of them are doing TE:chunked POST but the ones that do are very significant, for example the official Sun J2ME emulator.) -- Carlos From dseifert at searchspark.com Mon Feb 4 00:36:25 2008 From: dseifert at searchspark.com (Doug Seifert) Date: Sun, 3 Feb 2008 16:36:25 -0500 Subject: Custom 503 Error Page References: <1201994307.3555.11.camel@localhost><1202000909.3555.13.camel@localhost><20080203012056.GA55257@mdounin.ru><1202004975.3555.18.camel@localhost><20080203023401.GB55257@mdounin.ru><20080203031539.GC55257@mdounin.ru><1202011373.3555.22.camel@localhost><1202054326.3555.31.camel@localhost> Message-ID: > Here's my server block. My ordering is slightly different. > http://pastie.caboo.se/private/dmnepj3m2zxsnxhrqsuhcq > > Here's the shell output from curl before and after the maintenance > page goes up, and it appears to be delivering the content as expected. > http://pastie.caboo.se/private/rpjtmssmf8jfyblhp4rpfq That did the trick. Thanks a million. I think Ezra should update his default nginx configuration to match this. 503s at maintenance time are very much desired for SEO ... -Doug Seifert From arlenecc at incesoft.com Mon Feb 4 17:38:59 2008 From: arlenecc at incesoft.com (arlene chen) Date: Mon, 04 Feb 2008 22:38:59 +0800 Subject: can upstream be used for memcachedmodule? Message-ID: <47A72383.4090804@incesoft.com> Hi All, I tried to use upstream to define a set of memcached servers so that I could use it in the memcached_pass setting, but it does not work. Does anyone know how to do this? If I cannot set it up and use it just like in proxy_pass, then how can I set up more than one memcached server for the memcached module? memcached_next_upstream needs an upstream with two or more servers, but how can I do this? 
-- best regards, arlenecc WARNINGS : this email may be confidential and/or privileged. Only the intended recipient may access or use it. If you are not the intended recipient, please delete this email and notify us promptly. We use virus scanning software but disclaim any liability for viruses or any other contents of this message or any attachment. From is at rambler-co.ru Mon Feb 4 18:02:32 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 4 Feb 2008 18:02:32 +0300 Subject: can upstream be used for memcachedmodule? In-Reply-To: <47A72383.4090804@incesoft.com> References: <47A72383.4090804@incesoft.com> Message-ID: <20080204150232.GB88672@rambler-co.ru> On Mon, Feb 04, 2008 at 10:38:59PM +0800, arlene chen wrote: > I tried to use upstream to define a set of memcached servers so that I > could use it in the memcached_pass setting, but it does not work. Does > anyone know how to do this? If I cannot set it up and use it just like > in proxy_pass, then how can I set up more than one memcached server for > the memcached module? memcached_next_upstream needs an upstream with > two or more servers, but how can I do this? ngx_http_memcached_module should work with several upstreams. -- Igor Sysoev http://sysoev.ru/en/ From postmaster at softsearch.ru Mon Feb 4 18:30:11 2008 From: postmaster at softsearch.ru (Michael Monashev) Date: Mon, 4 Feb 2008 18:30:11 +0300 Subject: can upstream be used for memcachedmodule? In-Reply-To: <47A72383.4090804@incesoft.com> References: <47A72383.4090804@incesoft.com> Message-ID: <811566244.20080204183011@softsearch.ru> Hi, arlene. > I tried to use upstream to define a set of memcached servers so that I > could use it in the memcached_pass setting, but it does not work. Does > anyone know how to do this? If I cannot set it up and use it just like > in proxy_pass, then how can I set up more than one memcached server for > the memcached module? memcached_next_upstream needs an upstream with > two or more servers, but how can I do this? 
You can try ngx_http_upstream_memcached_hash_module - a load balancer that distributes requests among the memcached servers in a cluster specified with the memcached_pass directive. http://openhack.ru/nginx-patched -- With best regards, Michael Monashev, SoftSearch.ru mailto:postmaster at softsearch.ru ICQ# 166233339 http://michael.mindmix.ru/ From arlenecc at incesoft.com Mon Feb 4 19:06:59 2008 From: arlenecc at incesoft.com (arlene chen) Date: Tue, 05 Feb 2008 00:06:59 +0800 Subject: can upstream be used for memcachedmodule? In-Reply-To: <20080204150232.GB88672@rambler-co.ru> References: <47A72383.4090804@incesoft.com> <20080204150232.GB88672@rambler-co.ru> Message-ID: <47A73823.7010302@incesoft.com> Igor Sysoev wrote: > On Mon, Feb 04, 2008 at 10:38:59PM +0800, arlene chen wrote: > > >> I tried to use upstream to define a set of memcached servers so that I >> could use it in the memcached_pass setting, but it does not work. Does >> anyone know how to do this? If I cannot set it up and use it just like >> in proxy_pass, then how can I set up more than one memcached server for >> the memcached module? memcached_next_upstream needs an upstream with >> two or more servers, but how can I do this? >> > > ngx_http_memcached_module should work with several upstreams. > > > But it is not supported by the default nginx module, is it? -- best regards, arlenecc WARNINGS : this email may be confidential and/or privileged. Only the intended recipient may access or use it. If you are not the intended recipient, please delete this email and notify us promptly. We use virus scanning software but disclaim any liability for viruses or any other contents of this message or any attachment. From arlenecc at incesoft.com Mon Feb 4 19:09:41 2008 From: arlenecc at incesoft.com (arlene chen) Date: Tue, 05 Feb 2008 00:09:41 +0800 Subject: can upstream be used for memcachedmodule? 
In-Reply-To: <811566244.20080204183011@softsearch.ru> References: <47A72383.4090804@incesoft.com> <811566244.20080204183011@softsearch.ru> Message-ID: <47A738C5.7020605@incesoft.com> Michael Monashev wrote: > Hi, arlene. > > >> I tried to use upstream to define a set of memcached servers so that I >> could use it in the memcached_pass setting, but it does not work. Does >> anyone know how to do this? If I cannot set it up and use it just like >> in proxy_pass, then how can I set up more than one memcached server for >> the memcached module? memcached_next_upstream needs an upstream with >> two or more servers, but how can I do this? >> > > You can try ngx_http_upstream_memcached_hash_module - a load balancer that > distributes requests among the memcached servers in a cluster specified > with the memcached_pass directive. http://openhack.ru/nginx-patched > > > Oh, great module! That is just what I want! Thanks! -- best regards, arlenecc WARNINGS : this email may be confidential and/or privileged. Only the intended recipient may access or use it. If you are not the intended recipient, please delete this email and notify us promptly. We use virus scanning software but disclaim any liability for viruses or any other contents of this message or any attachment. From arlenecc at incesoft.com Mon Feb 4 19:18:07 2008 From: arlenecc at incesoft.com (arlene chen) Date: Tue, 05 Feb 2008 00:18:07 +0800 Subject: when can nginx support cache objects in memory? Message-ID: <47A73ABF.9070200@incesoft.com> When will nginx support caching objects in memory? For now, the only way to cache objects with nginx is to use memcached, but the performance is not so good; perhaps only an in-memory cache can solve this problem. -- best regards, arlenecc WARNINGS : this email may be confidential and/or privileged. Only the intended recipient may access or use it. If you are not the intended recipient, please delete this email and notify us promptly. 
We use virus scanning software but disclaim any liability for viruses or any other contents of this message or any attachment. From tho.nguyen at intier.com Mon Feb 4 23:20:12 2008 From: tho.nguyen at intier.com (Tho Nguyen) Date: Mon, 4 Feb 2008 20:20:12 +0000 (UTC) Subject: nginx 0.5.35 ssl error References: <20080129202041.GB58654@rambler-co.ru> Message-ID: The patch seems to have resolved the SSL3_WRITE_PENDING:bad write retry error. The SSL3_READ_BYTES:reason(1000) error is still there. 2008/02/04 15:12:23 [crit] 11762#0: *22267 SSL_do_handshake() failed (SSL: error:140943E8:SSL routines:SSL3_READ_BYTES:reason(1000)) while reading client request line, client: 172.23.75.21, server: agile.seating-intier.com 2008/02/04 15:12:23 [crit] 11762#0: *22268 SSL_do_handshake() failed (SSL: error:140943E8:SSL routines:SSL3_READ_BYTES:reason(1000)) while reading client request line, client: 172.23.75.21, server: agile.seating-intier.com 2008/02/04 15:12:23 [crit] 11762#0: *22271 SSL_do_handshake() failed (SSL: error:140943E8:SSL routines:SSL3_READ_BYTES:reason(1000)) while reading client request line, client: 172.23.75.21, server: agile.seating-intier.com 2008/02/04 15:12:23 [crit] 11762#0: *22272 SSL_do_handshake() failed (SSL: error:140943E8:SSL routines:SSL3_READ_BYTES:reason(1000)) while reading client request line, client: 172.23.75.21, server: agile.seating-intier.com From ilya at fortehost.com Tue Feb 5 00:38:18 2008 From: ilya at fortehost.com (Ilya Grigorik) Date: Mon, 4 Feb 2008 21:38:18 +0000 (UTC) Subject: dynamic default_type Message-ID: I'm trying to integrate the Nginx memcached module as a front-end server for serving cached requests, but the problem is that the application type is defined dynamically as part of the query string. For example: GET /test?format=xml GET /test?format=json The requests get cached by Nginx and memcached, but on subsequent queries they are returned with the default 'application/octet-stream' type, which is not very useful. 
So I've got as far as creating a dynamic variable in my config: 90 location /test { 91 if ($args ~* format=json) { set $type "text/javascript"; } 92 if ($args ~* format=xml) { set $type "application/xml"; } 93 94 default_type $type; 95 ... 95 } The problem is, default_type does not seem to evaluate the parameter and returns the literal string "$type". Any ideas on how to get it to return the actual string it points to? I also tried breaking up the queries by location regex: 82 location ~* test.*format=xml { 83 default_type application/xml; 84 ... 88 } But the regex does not seem to pass or is invalid. Any tips would be appreciated.. From is at rambler-co.ru Tue Feb 5 14:36:05 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 5 Feb 2008 14:36:05 +0300 Subject: nginx 0.5.35 ssl error In-Reply-To: References: <20080129202041.GB58654@rambler-co.ru> Message-ID: <20080205113605.GC9713@rambler-co.ru> On Mon, Feb 04, 2008 at 08:20:12PM +0000, Tho Nguyen wrote: > The patch seems to have resolved the SSL3_WRITE_PENDING:bad write retry error. OK. > The SSL3_READ_BYTES:reason(1000) error is still there. 
> > 2008/02/04 15:12:23 [crit] 11762#0: *22267 SSL_do_handshake() failed (SSL: > error:140943E8:SSL routines:SSL3_READ_BYTES:reason(1000)) while reading client > request line, client: 172.23.75.21, server: agile.seating-intier.com > 2008/02/04 15:12:23 [crit] 11762#0: *22268 SSL_do_handshake() failed (SSL: > error:140943E8:SSL routines:SSL3_READ_BYTES:reason(1000)) while reading client > request line, client: 172.23.75.21, server: agile.seating-intier.com > 2008/02/04 15:12:23 [crit] 11762#0: *22271 SSL_do_handshake() failed (SSL: > error:140943E8:SSL routines:SSL3_READ_BYTES:reason(1000)) while reading client > request line, client: 172.23.75.21, server: agile.seating-intier.com > 2008/02/04 15:12:23 [crit] 11762#0: *22272 SSL_do_handshake() failed (SSL: > error:140943E8:SSL routines:SSL3_READ_BYTES:reason(1000)) while reading client > request line, client: 172.23.75.21, server: agile.seating-intier.com "reason(1000)" means that the peer sent a "close notify" alert during the SSL handshake. In the next 0.6.x version I will lower this and some other handshake errors to the info level. -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Tue Feb 5 15:14:14 2008 From: lists at ruby-forum.com (Joerg Diekmann) Date: Tue, 5 Feb 2008 13:14:14 +0100 Subject: nginx_uploadprogress_module v0.2 In-Reply-To: <1192106724.6278.190.camel@localhost.localdomain> References: <1191426585.17784.81.camel@localhost.localdomain> <470B753A.8070302@home.se> <1191938795.23529.39.camel@localhost.localdomain> <470C9998.90403@home.se> <1192022075.6278.95.camel@localhost.localdomain> <470E0F46.6090904@home.se> <1192106724.6278.190.camel@localhost.localdomain> Message-ID: Does this work for Internet Explorer? 
When I do this in my FF: http://www.mydomain.com/progress?X-Progress-ID=4a9d9c3264ccfabd2bce1aaf919cfbdd I get back - as expected - the following plain text: new Object({ 'state' : 'starting' }) But, if I do this in Internet Explorer: I get back an error alert box saying: Internet Explorer cannot download .... 9d9c3264ccfabd2bce1aaf919cfbdd from www.mydomain.com. Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later. When I look into my nginx log files, I can see that the request returned a 200 status. Any ideas? Any manual overrides I need to perform on IE so that it works? Thanks Joerg Brice Figureau wrote: > On Thu, 2007-10-11 at 13:55 +0200, Otto Bretz wrote: >> >> To try it out I used a copy of the lighttpd upload page[1]. Is that ok? >> I compiled the module with --with-debug as well and have attached the log of an >> upload attempt. > > The progress report handler doesn't yet support a case-insensitive progress > ID parameter (I know it sucks, but that's on my todo list for v0.3). > > Apply this patch to your upload.html (provided it is the same as the > file on the webpage you referenced): > > --- upload.html.orig 2007-10-11 14:21:26.000000000 +0200 > +++ upload.html 2007-10-11 14:21:36.000000000 +0200 > @@ -40,7 +40,7 @@ > if(!req) return; > > req.open("GET", "/progress", 1); > - req.setRequestHeader("X-Progress-Id", uuid); > + req.setRequestHeader("X-Progress-ID", uuid); > req.onreadystatechange = function () { > if (!req) { > window.clearTimeout(interval); > > > And then it will work fine. > > If you are testing locally (ie the server is on the same computer or LAN > as the client), you won't see the progress bar evolving, because the > bandwidth is too high. > If you try on a distant server with limited bandwidth it works fine, the > progress bar is slowly increasing. 
> If you want to see the progress evolving on localhost, I suggest you > reduce interval_msec in upload.html to something like 150 or 200, > otherwise the first /progress probe is sent after the upload has > finished :-) > > To test that the upload progress works fine, I'm using curl: > To initiate the upload: > curl -v -Ffilename=@file-to-upload --limit-rate 1k > "http://localhost/upload.html?X-Progress-ID=1234567890123456789019" > > And to see the probe in action: > curl -v -H'X-Progress-ID: 1234567890123456789019' > "http://localhost/progress" > > If you want to send me debug material, please send them privately and > not on the list, that might annoy people not interested in our > discussion... > > Anyway, let me know if that's working for you, and do not hesitate to > report any issues or improvements, -- Posted via http://www.ruby-forum.com/. From brice+nginx at daysofwonder.com Tue Feb 5 18:23:39 2008 From: brice+nginx at daysofwonder.com (Brice Figureau) Date: Tue, 05 Feb 2008 16:23:39 +0100 Subject: nginx_uploadprogress_module v0.2 In-Reply-To: References: <1191426585.17784.81.camel@localhost.localdomain> <470B753A.8070302@home.se> <1191938795.23529.39.camel@localhost.localdomain> <470C9998.90403@home.se> <1192022075.6278.95.camel@localhost.localdomain> <470E0F46.6090904@home.se> <1192106724.6278.190.camel@localhost.localdomain> Message-ID: <1202225019.27503.15.camel@localhost.localdomain> Hi, On Tue, 2008-02-05 at 13:14 +0100, Joerg Diekmann wrote: > Does this work for Internet Explorer? > > When I do this in my FF: > > http://www.mydomain.com/progress?X-Progress-ID=4a9d9c3264ccfabd2bce1aaf919cfbdd > > I get back - as expexted - the following plain text: > > new Object({ 'state' : 'starting' }) > > > But, if I do this in Internet Explorer: > I get back an error alert box saying: > > Internet Explorer cannot download .... 9d9c3264ccfabd2bce1aaf919cfbdd > from www.mydomain.com. > Internet Explorer was not able to open this Internet site. 
The requested > site is either unavailable or cannot be found. Please try again later. > > When I look into my nginx log files, I can see that the request returned > a 200 status. > > Any ideas? Any manual overrides I need to perform on IE so that it > works? I think that's because the upload progress module returns the response with a javascript mime-type. Internet Explorer, which always tries to be smarter than everyone, just says it can't do anything with it, and can't display it. I don't think it's an issue if you just programmatically (I mean from javascript) try to access the progress URL. I never tested the whole setup with IE, but if it doesn't work let me know, and I'll try to reproduce it and fix it (if necessary). Thanks, -- Brice Figureau From lists at ruby-forum.com Tue Feb 5 18:54:58 2008 From: lists at ruby-forum.com (Joerg Diekmann) Date: Tue, 5 Feb 2008 16:54:58 +0100 Subject: nginx_uploadprogress_module v0.2 In-Reply-To: <1202225019.27503.15.camel@localhost.localdomain> References: <1191426585.17784.81.camel@localhost.localdomain> <470B753A.8070302@home.se> <1191938795.23529.39.camel@localhost.localdomain> <470C9998.90403@home.se> <1192022075.6278.95.camel@localhost.localdomain> <470E0F46.6090904@home.se> <1192106724.6278.190.camel@localhost.localdomain> <1202225019.27503.15.camel@localhost.localdomain> Message-ID: <9b207192e339d5cc76d2ded54404deb2@ruby-forum.com> Hi Brice, I have set it up according to this article: http://blog.new-bamboo.co.uk/2007/11/23/upload-progress-with-nginx In it I call that URL using an AJAX method. What I get back from that are 403 errors(?) from nginx when using Internet Explorer; and then Internet Explorer freaks out and responds with a 500, but it works fine when I use FF or Safari. Any ideas on that? Joerg > > I don't think it's an issue if you just programmatically (I mean from > javascript) > try to access the progress URL. > -- Posted via http://www.ruby-forum.com/. 
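The progress payload quoted in this thread — new Object({ 'state' : 'starting' }) — is a JavaScript object literal rather than strict JSON, so any non-browser client polling the progress URL has to normalize it before parsing. A minimal sketch of that parsing (Python, purely illustrative; the 'received'/'size' fields of the 'uploading' state follow the module's README and are an assumption here):

```python
import json
import re

def parse_progress(body: str) -> dict:
    """Convert the upload-progress module's JavaScript payload, e.g.
    new Object({ 'state' : 'uploading', 'received' : 10, 'size' : 100 }),
    into a Python dict."""
    match = re.search(r"\{.*\}", body, re.S)
    if not match:
        raise ValueError("no object literal in response: %r" % body)
    # The payload uses single quotes; JSON requires double quotes.
    return json.loads(match.group(0).replace("'", '"'))

def percent_done(progress: dict) -> float:
    # 'received'/'size' are assumed present only in the 'uploading' state.
    if progress.get("state") != "uploading":
        return 0.0
    return 100.0 * progress["received"] / progress["size"]
```

A client would poll /progress with the X-Progress-ID header (as in Brice's curl example above) and feed each response body through parse_progress().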
From brice+nginx at daysofwonder.com Tue Feb 5 20:05:05 2008 From: brice+nginx at daysofwonder.com (Brice Figureau) Date: Tue, 05 Feb 2008 18:05:05 +0100 Subject: nginx_uploadprogress_module v0.2 In-Reply-To: <9b207192e339d5cc76d2ded54404deb2@ruby-forum.com> References: <1191426585.17784.81.camel@localhost.localdomain> <470B753A.8070302@home.se> <1191938795.23529.39.camel@localhost.localdomain> <470C9998.90403@home.se> <1192022075.6278.95.camel@localhost.localdomain> <470E0F46.6090904@home.se> <1192106724.6278.190.camel@localhost.localdomain> <1202225019.27503.15.camel@localhost.localdomain> <9b207192e339d5cc76d2ded54404deb2@ruby-forum.com> Message-ID: <1202231105.27503.22.camel@localhost.localdomain> On Tue, 2008-02-05 at 16:54 +0100, Joerg Diekmann wrote: > Hi Brice, > > I have set it up according to this article: > > http://blog.new-bamboo.co.uk/2007/11/23/upload-progress-with-nginx Oh, I wasn't aware of this blog article. > In it I call that URL using an AJAX method. What I get back from that > are 403 errors(?) from nginx when using Internet Explorer; and then > Internet Explorer freaks out and responds with a 500, but works fine > when I use FF or Safari. Please activate debug (compile nginx with --with-debug, and put the error log in debug mode) and send me privately the gzipped logs for one request. I'll try to understand what happens from the log. Also, if you are using exactly what is outlined in the article, you might want to use a somewhat simpler example (like the one I provide in the readme) to isolate javascript issues. Thanks, -- Brice Figureau From boethius at elitistjerks.com Wed Feb 6 01:56:09 2008 From: boethius at elitistjerks.com (Boethius) Date: Tue, 05 Feb 2008 17:56:09 -0500 Subject: rewrite rules In-Reply-To: <0F064497-C85B-46CB-BF86-EF8AC93B2941@dobrestrony.pl> References: <0F064497-C85B-46CB-BF86-EF8AC93B2941@dobrestrony.pl> Message-ID: <47A8E989.8070702@elitistjerks.com> I am attempting to use vBSEO with nginx as well. 
Your rules work great except image attachments currently aren't working. I believe vBSEO rewrites attachment URLs in Apache as well, but there aren't any applicable nginx rewrites active in the conf you posted. Do you have any idea what might get them working (are they working for you)? Jan Ślusarczyk wrote: >> > Thanks for all the tips. For a combination of Typo3 installation on root > (realurl rewriting to /index.php) and vbseo enhanced vbulletin in > /forum/ directory of a main site I've come up with the following: > > server { > listen 192.168.1.1:80; > server_name www.servername.tld; > access_log /var/log/nginx/www.servername.tld.access.log combined; > > root /var/www/hosts/www.servername.tld; > index index.php index.html index.htm; > > location ~ /\.ht { > deny all; > } > location /forum/ { > rewrite ^/forum/((urllist|sitemap_).*\.(xml|txt)(\.gz)?)$ > /forum/vbseo_sitemap/vbseo_getsitemap.php?sitemap=$1 last; > if ($request_filename ~ "\.php$" ) { > rewrite ^(.*)$ /forum/vbseo.php?vbseourl=$1 last; > } > if (!-e $request_filename) { > rewrite ^/forum/(.*)$ /forum/vbseo.php?vbseourl=$1 last; > } > } > location / { > if (!-e $request_filename) { > rewrite ^(.*)$ /index.php last; > } > } > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /var/www/hosts/www.servername.tld$fastcgi_script_name; > include /etc/nginx/fastcgi.conf; > } > } > > Does it make sense? Can it be better? Anything I should be aware of? > Thank you > Jan > -- Andrew Hunn (Boethius) boethius at elitistjerks.com From tho.nguyen at intier.com Wed Feb 6 02:16:24 2008 From: tho.nguyen at intier.com (Tho Nguyen) Date: Tue, 5 Feb 2008 23:16:24 +0000 (UTC) Subject: Problem uploading file Message-ID: I switched our apache reverse proxy over to nginx and I have a problem. From the web browser or the java client, I can attach files fine. But the majority of our usage is via a CAD client. 
A plugin that hooks into CAD software like Catia connects in and checks files in and out. They can check files out fine, just not check them in. There is nothing in the error log. Here are some lines from the access log when it happens. 172.23.65.131 - ifsuser [05/Feb/2008:07:18:57 -0500] POST /Filemgr/CheckInServlet? token=14475C3491031FEAAA6A823FC808E8508F17767ED3A78C540741E4235EFADA0ED57E099F&v ault=&filename=.xls&opcode=checkin HTTP/1.1 "411" 181 "-" "Jakarta Commons- HttpClient/3.0" "-" 172.23.65.131 - ifsuser [05/Feb/2008:07:18:57 -0500] POST /Filemgr/CheckInServlet? token=14475C3491031FEAAA6A823FC808E8508F17767ED3A78C540741E4235EFADA0ED57E099F&v ault=&filename=.xls&opcode=checkin HTTP/1.1 "411" 181 "-" "Jakarta Commons- HttpClient/3.0" "-" From rob at rascal.ca Wed Feb 6 03:50:45 2008 From: rob at rascal.ca (Rob Mitzel) Date: Tue, 5 Feb 2008 16:50:45 -0800 Subject: Fair Proxy Balancer In-Reply-To: <47A14008.1060409@eastlink.ca> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> Message-ID: <003e01c8685a$4f6a21c0$ee3e6540$@ca> Hey, sorry for the late response here, but I thought I should mention, we're using the fair proxy balancer on a rails site that averages over 100 million hits/month. Been using it for about a month now, and love it. Also, to Alexander, our programmer especially wanted me to thank you for coming up with that mongrel process title patch, that thing is awesome! :) -Rob. -----Original Message----- From: owner-nginx at sysoev.ru [mailto:owner-nginx at sysoev.ru] On Behalf Of David Pratt Sent: Wednesday, January 30, 2008 7:27 PM To: nginx at sysoev.ru Subject: Re: Fair Proxy Balancer Hi. It has been a while since the introduction of fair proxy balancer. How stable is it for production use? 
I was looking at potentially using haproxy or lvm but hoping the new balancer is stable enough since I don't want to unnecessarily complicate things with even more layers of software in the stack. Anyone using it for production that can comment. Regards, David Grzegorz Nosek wrote: > Hi, > > 2007/11/23, Alexander Staubo : >>> One question, how is the busy state determined? In case >>> of zeo each backend client can take some defined number of requests >>> in parallel, how is such a case handled? > > Should work out of the box, distributing the load equally. You may > wish to specify weight for each backend but if all are equal, this > should have no effect. > >> I have not studied the sources, but I expect it will pick the upstream >> with the fewest number of current pending requests; among upstreams >> with the same number of concurrent requests, the one picked is >> probably arbitrary. > > The scheduling logic looks like this: > - The backends are selected _mostly_ round-robin (i.e. if you get 1 > req/hour, they'll be serviced by successive backends) > - Idle (no requests currently serviced) backends have absolute > priority (an idle backend will be always chosen if available) > - Otherwise, the scheduler walks around the list of backends > (remembering where it finished last time) until the scheduler score > stops increasing. The highest scored backend is chosen (note: not all > backends are probed, or at least not always). 
> - The scheduler score is calculated roughly as follows (yes, it could > be cleaned up a little bit): > > score = (1 - nreq) * 1000 + last_active_delta; > if (score < 0) { > score /= current_weight; > } else { > score *= current_weight; > } > > nreq is the number of currently processed requests > last_active_delta is time since last request start _or_ stop (serviced > by this backend), in milliseconds > current_weight is a counter decreasing from the backend's weight to 1 > with every serviced request > > It has a few properties which (I hope) make it good: > - penalizing busy backends, with something like a pessimistic > estimate of request time > - rewarding backends which have been servicing a request for a long > time (statistically they should finish earlier) > - rewarding backends with higher weight more or less proportionally. > > Please give the module a try and report any issues you might find. > > Best regards, > Grzegorz Nosek > From rob at rascal.ca Wed Feb 6 03:53:31 2008 From: rob at rascal.ca (Rob Mitzel) Date: Tue, 5 Feb 2008 16:53:31 -0800 Subject: Fair Proxy Balancer In-Reply-To: <20080131164557.GC21638@vadmin.megiteam.pl> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> Message-ID: <004401c8685a$b2b6d110$18247330$@ca> Hey Grzegorz, First, actually thank YOU for coming up with the balancer. It's made my life much easier. And please, keep the round-robin behaviour as-is! I mean, it's a great way to tell if you're running too many mongrels and/or too many nginx connections. -Rob. 
-----Original Message----- From: owner-nginx at sysoev.ru [mailto:owner-nginx at sysoev.ru] On Behalf Of Grzegorz Nosek Sent: Thursday, January 31, 2008 8:46 AM To: nginx at sysoev.ru Subject: Re: Fair Proxy Balancer One known issue (again, waiting for the One Day) is that the round-robin part doesn't work too well. E.g. if your load is very low, all requests will go to the first backend. Anyway, I'll keep the current behaviour as an option as it may be useful in dimensioning your backend cluster (i.e. if the Nth backend has serviced no requests, N-1 should be enough). Best regards, Grzegorz Nosek From mdounin at mdounin.ru Wed Feb 6 05:16:35 2008 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 6 Feb 2008 05:16:35 +0300 Subject: dynamic default_type In-Reply-To: References: Message-ID: <20080206021635.GC68975@mdounin.ru> Hello! On Mon, Feb 04, 2008 at 09:38:18PM +0000, Ilya Grigorik wrote: >I'm trying to integrate Nginx memcached module as a front-end server for serving >cached requests, but the problem is, the application type is defined dynamically >as part of the query string.. For example: > >GET /test?format=xml >GET /test?format=json > >The requests get cached by Nginx and memcached, but on subsequent queries, they >are returned as default 'octetstream', which is not very useful.. So I've got as >far as creating a dynamic variable in my config: > > > 90 location /test { > 91 if ($args ~* format=json) { set $type "text/javascript"; } > 92 if ($args ~* format=xml) { set $type "application/xml"; } > 93 > 94 default_type $type; > 95 ... > 95 } > >The problem is, default_type does not seem to evaluate the parameter and returns >the literal string "$type". Any ideas on how to get it to return the actual >string it points to? This is expected, as default_type doesn't support variables. >I also tried breaking up the queries by location regex: > 82 location ~* test.*format=xml { > 83 default_type application/xml; > 84 ... 
> 88 } > >But the regex does not seem to pass or is invalid. This is expected too, as location matches the path only, not the query string. > >Any tips would be appreciated.. Try something like this: location /test { if ($args ~* "format=json") { rewrite ^ /test-json last; } if ($args ~* "format=xml") { rewrite ^ /test-xml last; } } location /test-json { default_type "text/javascript"; ... } location /test-xml { default_type "application/xml"; ... } Maxim Dounin From lists at ruby-forum.com Wed Feb 6 15:37:48 2008 From: lists at ruby-forum.com (Joerg Diekmann) Date: Wed, 6 Feb 2008 13:37:48 +0100 Subject: nginx_uploadprogress_module v0.2 In-Reply-To: <1202231105.27503.22.camel@localhost.localdomain> References: <1191426585.17784.81.camel@localhost.localdomain> <470B753A.8070302@home.se> <1191938795.23529.39.camel@localhost.localdomain> <470C9998.90403@home.se> <1192022075.6278.95.camel@localhost.localdomain> <470E0F46.6090904@home.se> <1192106724.6278.190.camel@localhost.localdomain> <1202225019.27503.15.camel@localhost.localdomain> <9b207192e339d5cc76d2ded54404deb2@ruby-forum.com> <1202231105.27503.22.camel@localhost.localdomain> Message-ID: <4dae9f182e81edc4f89b64cd262c39c3@ruby-forum.com> Hi Brice, thanks a lot for looking into it. I can't access the servers at this stage, but will be able to give you the log file tomorrow if that is ok. How do I get your email address? Joerg > Please activate debug (compile nginx with --with-debug, and put the > error log in debug mode) and send me privately the gzipped logs for one > request. > I'll try to understand what happens from the log. -- Posted via http://www.ruby-forum.com/. 
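Maxim's workaround above is needed because location matching never sees the query string; the routing that his two if/rewrite lines implement boils down to a small case-insensitive mapping, sketched here (Python, purely illustrative of the decision nginx ends up applying; the application/octet-stream fallthrough mirrors the poster's reported default and is an assumption):

```python
import re

# Mirrors: if ($args ~* "format=json") { rewrite ^ /test-json last; } etc.
ROUTES = [
    (re.compile(r"format=json", re.I), "/test-json", "text/javascript"),
    (re.compile(r"format=xml", re.I), "/test-xml", "application/xml"),
]

def route(args: str):
    """Return (internal_uri, content_type) for a raw query string."""
    for pattern, uri, ctype in ROUTES:
        if pattern.search(args):
            return uri, ctype
    # No format matched: the request stays in /test and the
    # location's default_type applies.
    return "/test", "application/octet-stream"
```

The `~*` modifier in the nginx config is what makes the match case-insensitive, hence re.I in the sketch.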
From lists at ruby-forum.com Wed Feb 6 16:59:16 2008 From: lists at ruby-forum.com (Joerg Diekmann) Date: Wed, 6 Feb 2008 14:59:16 +0100 Subject: nginx_uploadprogress_module v0.2 In-Reply-To: <1202305520.4324.12.camel@localhost.localdomain> References: <1191426585.17784.81.camel@localhost.localdomain> <470B753A.8070302@home.se> <1191938795.23529.39.camel@localhost.localdomain> <470C9998.90403@home.se> <1192022075.6278.95.camel@localhost.localdomain> <470E0F46.6090904@home.se> <1192106724.6278.190.camel@localhost.localdomain> <1202225019.27503.15.camel@localhost.localdomain> <9b207192e339d5cc76d2ded54404deb2@ruby-forum.com> <1202231105.27503.22.camel@localhost.localdomain> <4dae9f182e81edc4f89b64cd262c39c3@ruby-forum.com> <1202305520.4324.12.camel@localhost.localdomain> Message-ID: Ok cool - no worries. I'm using the forum, so I can't see your email address ... > Just reply to this e-mail :-) > I don't have lots of time this week to have a look to your problem, but > I'll try to find what's wrong early next week (or this week-end maybe). -- Posted via http://www.ruby-forum.com/. From fairwinds at eastlink.ca Wed Feb 6 17:07:08 2008 From: fairwinds at eastlink.ca (David Pratt) Date: Wed, 06 Feb 2008 10:07:08 -0400 Subject: Fair Proxy Balancer In-Reply-To: <004401c8685a$b2b6d110$18247330$@ca> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> Message-ID: <47A9BF0C.1030400@eastlink.ca> Hi Rob. This is encouraging news and I am working on a setup to incorporate this into my process. I would really like to hear if there has been any attempt to evaluate the fair proxy balancer in relation to other balancing schemes. 
From the standpoint of server resources, it is attractive and much simpler than the haproxy or lvm for setup. I realize speed is subject to all sorts of additional parameters but a comparison of the balancer with others would be quite interesting. Rob, can you elaborate a bit more on your mongrels situation. I do not use ruby but have a similar situation other types of backend servers. In the current scenario, the last server will always get less hits. Are you setting some sort of threshold to determine how many mongrels to run (or just starting up mongrels until the last is getting no hits). Many thanks. Regards, David Rob Mitzel wrote: > Hey Grzegorz, > > First, actually thank YOU for coming up with the balancer. It's made my > life much easier. And please, keep the round-robin behaviour as-is! I > mean, it's a great way to tell if you're running too many mongrels and/or > too many nginx connections. > > -Rob. > > > -----Original Message----- > From: owner-nginx at sysoev.ru [mailto:owner-nginx at sysoev.ru] On Behalf Of > Grzegorz Nosek > Sent: Thursday, January 31, 2008 8:46 AM > To: nginx at sysoev.ru > Subject: Re: Fair Proxy Balancer > > One known issue (again, waiting for the One Day) is that the round-robin > part doesn't work too well. E.g. if your load is very low, all requests > will go to the first backend. Anyway, I'll keep the current behaviour as > an option as it may be useful in dimensioning your backend cluster (i.e. > if the Nth backend has serviced no requests, N-1 should be enough). 
> > Best regards, > Grzegorz Nosek > > > > From grzegorz.nosek at gmail.com Wed Feb 6 18:41:19 2008 From: grzegorz.nosek at gmail.com (Grzegorz Nosek) Date: Wed, 6 Feb 2008 16:41:19 +0100 Subject: Fair Proxy Balancer In-Reply-To: <47A9BF0C.1030400@eastlink.ca> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> <47A9BF0C.1030400@eastlink.ca> Message-ID: <20080206154119.GC15884@vadmin.megiteam.pl> Hi, On Wed, Feb 06, 2008 at 10:07:08AM -0400, David Pratt wrote: > Hi Rob. This is encouraging news and I am working on a setup to > incorporate this into my process. I would really like to hear if there > has been any attempt to evaluate the fair proxy balancer in relation to > other balancing schemes. From the standpoint of server resources, it is > attractive and much simpler than the haproxy or lvm for setup. I realize > speed is subject to all sorts of additional parameters but a comparison > of the balancer with others would be quite interesting. (disclaimer: I wrote upstream_fair, I'm biased). No, I haven't compared haproxy or lvs (I assume that was what you meant). However, haproxy is a TCP forwarder which makes it uncomfortable at times. For example, even if your backends are down, connections to haproxy will succeed and the only thing haproxy can do is to reset your new connection (even though nginx has already happily sent the request). This is a bit different than a failed backend, which returns a system error (connection refused) or times out. Besides, AFAIK haproxy does not offer least-connection balancing. 
LVS, I cannot comment (haven't used it) but it has a wider choice of balancing algorithms, including weighted least-connection. If you have the resources to set it up (looks a bit hairy to me), it should perform very well. > Rob, can you elaborate a bit more on your mongrels situation. I do not > use ruby but have a similar situation other types of backend servers. In > the current scenario, the last server will always get less hits. Are you > setting some sort of threshold to determine how many mongrels to run (or > just starting up mongrels until the last is getting no hits). Many thanks. > Hmm, let me use your message to reply to Rob too :) > >First, actually thank YOU for coming up with the balancer. It's made my > >life much easier. And please, keep the round-robin behaviour as-is! I > >mean, it's a great way to tell if you're running too many mongrels and/or > >too many nginx connections. Unfortunately, pure WLC behaviour causes problems for mongrel as it apparently doesn't like to be slammed too hard (looks like it leaks memory but that's just a guess). In the newest snapshot I added (or rather fixed) the round-robin part. I'll make it configurable, but the default will probably be round-robin from now on. But yes, it is handy. 
Best regards, Grzegorz Nosek From fairwinds at eastlink.ca Wed Feb 6 21:44:31 2008 From: fairwinds at eastlink.ca (David Pratt) Date: Wed, 06 Feb 2008 14:44:31 -0400 Subject: Fair Proxy Balancer In-Reply-To: <20080206154119.GC15884@vadmin.megiteam.pl> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> <47A9BF0C.1030400@eastlink.ca> <20080206154119.GC15884@vadmin.megiteam.pl> Message-ID: <47AA000F.7030006@eastlink.ca> Hi. Both haproxy and lvs have setups that are more involved for sure. haproxy 1.3 has more balancing algorithms than 1.2. I have seen patches that provide least connection balancing for 1.2 also. lvs is what I believe to be 'the' mainstream balancer but needs to be compiled into the linux kernel - it as not as portable and simple as incorporating the fair proxy balancer as a result. Interested in Rob's experience to determine no of servers. Many thanks Grzegorz. Regards, David Grzegorz Nosek wrote: > Hi, > > On Wed, Feb 06, 2008 at 10:07:08AM -0400, David Pratt wrote: >> Hi Rob. This is encouraging news and I am working on a setup to >> incorporate this into my process. I would really like to hear if there >> has been any attempt to evaluate the fair proxy balancer in relation to >> other balancing schemes. From the standpoint of server resources, it is >> attractive and much simpler than the haproxy or lvm for setup. I realize >> speed is subject to all sorts of additional parameters but a comparison >> of the balancer with others would be quite interesting. > > (disclaimer: I wrote upstream_fair, I'm biased). > > No, I haven't compared haproxy or lvs (I assume that was what you > meant). 
However, haproxy is a TCP forwarder which makes it uncomfortable > at times. For example, even if your backends are down, connections to > haproxy will succeed and the only thing haproxy can do is to reset your > new connection (even though nginx has already happily sent the request). > This is a bit different than a failed backend, which returns a system > error (connection refused) or times out. Besides, AFAIK haproxy does not > offer least-connection balancing. > > LVS, I cannot comment (haven't used it) but it has a wider choice of > balancing algorithms, including weighted least-connection. If you have > the resources to set it up (looks a bit hairy to me), it should perform > very well. > >> Rob, can you elaborate a bit more on your mongrels situation. I do not >> use ruby but have a similar situation other types of backend servers. In >> the current scenario, the last server will always get less hits. Are you >> setting some sort of threshold to determine how many mongrels to run (or >> just starting up mongrels until the last is getting no hits). Many thanks. >> > > Hmm, let me use your message to reply to Rob too :) > >>> First, actually thank YOU for coming up with the balancer. It's made my >>> life much easier. And please, keep the round-robin behaviour as-is! I >>> mean, it's a great way to tell if you're running too many mongrels and/or >>> too many nginx connections. > > Unfortunately, pure WLC behaviour causes problems for mongrel as it > apparently doesn't like to be slammed too hard (looks like it leaks > memory but that's just a guess). > > In the newest snapshot I added (or rather fixed) the round-robin part. > I'll make it configurable, but the default will probably be round-robin > from now on. But yes, it is handy. 
> > Best regards, > Grzegorz Nosek > > From ezmobius at gmail.com Wed Feb 6 23:01:18 2008 From: ezmobius at gmail.com (Ezra Zygmuntowicz) Date: Wed, 6 Feb 2008 12:01:18 -0800 Subject: Fair Proxy Balancer In-Reply-To: <47AA000F.7030006@eastlink.ca> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> <47A9BF0C.1030400@eastlink.ca> <20080206154119.GC15884@vadmin.megiteam.pl> <47AA000F.7030006@eastlink.ca> Message-ID: <1C49A23F-361B-43BC-9215-F08725A35E1F@gmail.com> On Feb 6, 2008, at 10:44 AM, David Pratt wrote: > Hi. Both haproxy and lvs have setups that are more involved for > sure. haproxy 1.3 has more balancing algorithms than 1.2. I have > seen patches that provide least connection balancing for 1.2 also. > lvs is what I believe to be 'the' mainstream balancer but needs to > be compiled into the linux kernel - it as not as portable and simple > as incorporating the fair proxy balancer as a result. Interested in > Rob's experience to determine no of servers. Many thanks Grzegorz. > > Regards, > David Hey David- We're running the fair balancer on about 100 servers with good success. We had some issues with the fair balancer in lower load situations only sending requests to the first backend instead of doing a round robin when under lower load, this was causing the single backend to become overloaded. The latest version Grzegorz has just pushed to his git repo works much better in all the situations we have put it under. We run LVS at the edge of our clusters and have LVS balance to nginx on each VM with nginx doing fair balancing directly to the mongrels and it is working great. 
Far fewer moving parts than throwing haproxy into the mix. In my benchmarks, having haproxy between nginx and the mongrels was slower, since there was one more level of indirection. So having nginx serve static content and fair-balance to the backends is ideal for us.

Cheers-
- Ezra Zygmuntowicz
-- Founder & Software Architect
-- ezra at engineyard.com
-- EngineYard.com

From adrianperez at udc.es Thu Feb 7 06:17:34 2008
From: adrianperez at udc.es (Adrian Perez)
Date: Thu, 7 Feb 2008 04:17:34 +0100
Subject: beta testing for mod_wsgi
In-Reply-To: <479E11E7.6000700@libero.it>
References: <4780EBFD.3090806@libero.it> <20080112205047.7d73ba09@gila.local> <20080114034927.0e4a03d6@gila.local> <478B3AC4.20104@libero.it> <20080128032233.715e9588@gila.local> <479DB2C1.8010002@libero.it> <20080128171802.568dd4ab@gila.local> <479E11E7.6000700@libero.it>
Message-ID: <20080207041734.4daa8aa1@gila.local>

On Mon, 28 Jan 2008 18:33:27 +0100, Manlio Perillo wrote:

> Adrian Perez wrote:
> > On Mon, 28 Jan 2008 11:47:29 +0100, Manlio Perillo wrote:
> >
> >> The module name is built from the name of the location used,
> >> replacing '/' with '_'.
> >>
> >> [...]
> >>
> >> I have to admit that I have never thought about this
> >> possibility ;-).
> >
> > I see. Only if "auth_basic" and "auth_basic_user_file" could be used
> > inside an "if" context, we could work around this... I think WSGI
> > authentication middleware could be used as well, as long as the
> > middleware allows limiting only some URLs ;-)
> >
>
> You can set the middleware stack inside a specific location.

Just letting you know: I have been able to work around this by using the AccountManager plugin (http://trac-hacks.org/wiki/AccountManagerPlugin), which performs authentication by itself.
Now I only need to define trac.env_parent_dir and make a "wsgi_pass" to the Trac handler, without additional "location" directives for authentication ;-) > The WSGI implementation for Apache computes the md5 hash of the > script full path name, but maybe I can just use the path name, > replacing '/' with '_'. > > I have to think (unfortunately lately I have very little free time to > dedicate to the development of mod_wsgi). I think this is not an urgent issue, now I have found the aforementioned Trac plugin, hehe. Thanks again for your great work on mod_wsgi :) -- The Force will be with you, always. -- (Ben "Obi-Wan" Kenobi) From reynhout at quesera.com Thu Feb 7 08:23:32 2008 From: reynhout at quesera.com (Andrew Reynhout) Date: Thu, 7 Feb 2008 05:23:32 +0000 Subject: gzip compression of upstream proxied requests In-Reply-To: <004401c8685a$b2b6d110$18247330$@ca> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> Message-ID: <20080207052332.GC10959@quesera.com> Hi, I've run into a behavioral difference between different versions of nginx, but I'm not sure which behavior is expected... I need to proxy a request for a dynamically-assembled javascript file to a set of app servers, but want to serve static js files directly from nginx. I want to compress both proxied and unproxied js files. The config below works, but doesn't return a compressed file for the proxied request in all environments. Unproxied javascript requests are compressed in all cases. 
nginx-0.5.20 on FreeBSD-6.2: returns compressed files for all /js/ paths, including proxied
nginx-0.5.30 on Solaris 10: returns compressed files for local /js/ paths, but returns UNcompressed files for all upstream proxied /js/ paths
nginx-0.5.35 on Solaris 10: (same as nginx-0.5.30 on Solaris 10)

I couldn't find anything in the 0.5.20 - 0.5.30 release notes that looked like a relevant bugfix or intentional behavior change. I realize the environment diffs are substantial, but I'm also not sure from the docs whether this _should_ work. :) gzip_proxied seems to control a conditional on the headers of the UA request, and not be relevant to the gzip-or-not decision for the results of upstream requests before returning to the UA. If it hadn't worked in 0.5.20 either, I'd assume that was the whole story.

The best answer might be "Hmm, weird, but you should just cache the first request to /js/all.js and let nginx treat it as local thereafter anyway". Fair enough, but it adds some complexity elsewhere that I'd like to avoid for now.

Here are the relevant bits of the config file:

    ## handle static files directly
    location ~ ^/(js|css|themes)/ {
        root /path/to/docroot;
        gzip_types text/javascript text/css text/plain text/js application/x-javascript application/javascript;
        gzip_proxied any;

        if ( $uri ~ ^/(js|css)/ ) {
            gzip on;
        }

        ## force proxy for all.js, as it is ephemeral
        if ( $uri = "/js/all.js" ) {
            proxy_pass http://appservers;
            break;
        }
    }

Thank you for any insights!
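Andrew's intent above can also be expressed without the `if` blocks, since `gzip`, `gzip_proxied`, and `gzip_types` are all valid at `location` level and an exact-match location takes priority over a regex one. This is only a sketch of an equivalent layout (untested against the 0.5.x builds in question, and note it also gzips /themes/, unlike the original), not a fix for the Solaris behaviour:

```nginx
## static js/css/themes served directly, gzipped
location ~ ^/(js|css|themes)/ {
    root /path/to/docroot;
    gzip on;
    gzip_proxied any;
    gzip_types text/javascript text/css text/plain text/js
               application/x-javascript application/javascript;
}

## the ephemeral all.js is always proxied; the exact-match location
## wins over the regex above, so no $uri checks are needed
location = /js/all.js {
    gzip on;
    gzip_proxied any;
    gzip_types text/javascript application/x-javascript;
    proxy_pass http://appservers;
}
```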
Andrew -- reynhout at quesera.com From manlio_perillo at libero.it Thu Feb 7 13:49:49 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Thu, 07 Feb 2008 11:49:49 +0100 Subject: beta testing for mod_wsgi In-Reply-To: <20080207041734.4daa8aa1@gila.local> References: <4780EBFD.3090806@libero.it> <20080112205047.7d73ba09@gila.local> <20080114034927.0e4a03d6@gila.local> <478B3AC4.20104@libero.it> <20080128032233.715e9588@gila.local> <479DB2C1.8010002@libero.it> <20080128171802.568dd4ab@gila.local> <479E11E7.6000700@libero.it> <20080207041734.4daa8aa1@gila.local> Message-ID: <47AAE24D.3@libero.it> Adrian Perez ha scritto: > El Mon, 28 Jan 2008 18:33:27 +0100 > Manlio Perillo escribi?: > >> Adrian Perez ha scritto: >>> El Mon, 28 Jan 2008 11:47:29 +0100 >>> Manlio Perillo escribi?: >>> >>>> The module name is built from the name of the location used, >>>> replacing '/' with '_'. >>>> >>>> [...] >>>> >>>> I have to admit that I have never thought about this >>>> possibility ;-). >>> I see. Only if "auth_basic" and "auth_basic_user_file" could be used >>> inside an "if" context, we could workaround this... I think WSGI >>> authentication middleware could be used as well, as long as the >>> middleware allows to only limit some URLs ;-) >>> >> You can set the middleware stack inside a specific location. > > Just letting you know: I have been able of workaround this by using the > AccountManager plugin (http://trac-hacks.org/wiki/AccountManagerPlugin) > which performs authentication by itself. Now I only need to define > trac.env_parent_dir and make a "wsgi_pass" to the Trac handler, without > additional "location" directives for authentication ;-) > >> The WSGI implementation for Apache computes the md5 hash of the >> script full path name, but maybe I can just use the path name, >> replacing '/' with '_'. >> >> I have to think (unfortunately lately I have very little free time to >> dedicate to the development of mod_wsgi). 
> > I think this is not an urgent issue, now I have found the > aforementioned Trac plugin, hehe. Thanks again for your great work on > mod_wsgi :) > I agree that this is not urgent, but it is however a problem that should be solved. Thanks Manlio Perillo From batlogg at mac.com Thu Feb 7 23:23:10 2008 From: batlogg at mac.com (Jodok Batlogg) Date: Thu, 7 Feb 2008 21:23:10 +0100 Subject: wrong status code when proxying 404 errors Message-ID: <836E5400-1572-4DA9-B18C-FF9887479438@mac.com> hi, i'm trying to use an upstream proxy to generate a 404 error page. my config looks like that: location '/error/404' { rewrite ^/(.*) /news/404 break; proxy_pass http://backend_varnish; } error_page 404 = /error/404; unfortunately nginx returns a response header of 200 OK i also tried error_page 404 =404 /error/404; but that doesn't work as well. any ideas? thanks jodok From abdulrahman at advany.com Fri Feb 8 09:35:23 2008 From: abdulrahman at advany.com (Abdul-Rahman Advany) Date: Fri, 8 Feb 2008 07:35:23 +0100 Subject: wrong status code when proxying 404 errors In-Reply-To: <836E5400-1572-4DA9-B18C-FF9887479438@mac.com> References: <836E5400-1572-4DA9-B18C-FF9887479438@mac.com> Message-ID: <54899e710802072235u71c94e4fpde6e49b17f5e82d@mail.gmail.com> remove the = On 2/7/08, Jodok Batlogg wrote: > > hi, > > i'm trying to use an upstream proxy to generate a 404 error page. > my config looks like that: > > > location '/error/404' { > rewrite ^/(.*) /news/404 break; > proxy_pass http://backend_varnish; > } > > error_page 404 = /error/404; > > > unfortunately nginx returns a response header of 200 OK > > i also tried > > error_page 404 =404 /error/404; > > but that doesn't work as well. > any ideas? > > thanks > > jodok > > > -- Abdul-Rahman Advany IM: abdulrahman at advany.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From denis at gostats.ru Fri Feb 8 09:41:39 2008 From: denis at gostats.ru (Denis F. 
Latypoff)
Date: Fri, 8 Feb 2008 12:41:39 +0600
Subject: wrong status code when proxying 404 errors
In-Reply-To: <836E5400-1572-4DA9-B18C-FF9887479438@mac.com>
References: <836E5400-1572-4DA9-B18C-FF9887479438@mac.com>
Message-ID: <408344647.20080208124139@gostats.ru>

Hello Jodok,

Friday, February 8, 2008, 2:23:10 AM, you wrote:
> hi,
> i'm trying to use an upstream proxy to generate a 404 error page.
> my config looks like that:

- location '/error/404' {
+ location = /error/404 {
> rewrite ^/(.*) /news/404 break;
> proxy_pass http://backend_varnish;
> }
> error_page 404 = /error/404;
> unfortunately nginx returns a response header of 200 OK
> i also tried
> error_page 404 =404 /error/404;
> but that doesn't work as well.
> any ideas?
> thanks
> jodok

-- Best regards, Denis mailto:denis at gostats.ru

From is at rambler-co.ru Fri Feb 8 10:02:07 2008
From: is at rambler-co.ru (Igor Sysoev)
Date: Fri, 8 Feb 2008 10:02:07 +0300
Subject: wrong status code when proxying 404 errors
In-Reply-To: <836E5400-1572-4DA9-B18C-FF9887479438@mac.com>
References: <836E5400-1572-4DA9-B18C-FF9887479438@mac.com>
Message-ID: <20080208070207.GC83060@rambler-co.ru>

On Thu, Feb 07, 2008 at 09:23:10PM +0100, Jodok Batlogg wrote:
> i'm trying to use an upstream proxy to generate a 404 error page.
> my config looks like that:
>
> location '/error/404' {
>     rewrite ^/(.*) /news/404 break;
>     proxy_pass http://backend_varnish;
> }
>
> error_page 404 = /error/404;
>
> unfortunately nginx returns a response header of 200 OK
>
> i also tried
>
> error_page 404 =404 /error/404;
>
> but that doesn't work as well.
> any ideas?

First, some optimization:

    location = /error/404 {
        proxy_pass http://backend_varnish/news/404;
    }

Second, you should use either

    error_page 404 /error/404;

or

    error_page 404 =404 /error/404;

but not

    error_page 404 = /error/404;

unless http://backend_varnish/news/404 returns a 404 status. If the first two variants do not work, could you send a debug log?
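Putting Igor's two corrections together, the whole picture for Jodok's case would look something like this sketch (the upstream name comes from the thread; the rest of the server block is assumed):

```nginx
# exact-match location, proxying straight to the backend's 404 page
location = /error/404 {
    proxy_pass http://backend_varnish/news/404;
}

# keep the original 404 status while serving the proxied body:
error_page 404 /error/404;
# or force the status explicitly:
# error_page 404 =404 /error/404;
# (a bare "=" would instead take the status from the proxied
#  response, which is the 200 OK Jodok was seeing)
```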
-- Igor Sysoev http://sysoev.ru/en/ From fairwinds at eastlink.ca Fri Feb 8 16:49:59 2008 From: fairwinds at eastlink.ca (David Pratt) Date: Fri, 08 Feb 2008 09:49:59 -0400 Subject: Fair Proxy Balancer In-Reply-To: <1C49A23F-361B-43BC-9215-F08725A35E1F@gmail.com> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> <47A9BF0C.1030400@eastlink.ca> <20080206154119.GC15884@vadmin.megiteam.pl> <47AA000F.7030006@eastlink.ca> <1C49A23F-361B-43BC-9215-F08725A35E1F@gmail.com> Message-ID: <47AC5E07.5020306@eastlink.ca> Hi Ezra. Cool. The setup I am looking at is quite similar so great to hear it is doing the job well. Many thanks for sharing your experience. Regards David Ezra Zygmuntowicz wrote: > > On Feb 6, 2008, at 10:44 AM, David Pratt wrote: > >> Hi. Both haproxy and lvs have setups that are more involved for sure. >> haproxy 1.3 has more balancing algorithms than 1.2. I have seen >> patches that provide least connection balancing for 1.2 also. lvs is >> what I believe to be 'the' mainstream balancer but needs to be >> compiled into the linux kernel - it as not as portable and simple as >> incorporating the fair proxy balancer as a result. Interested in Rob's >> experience to determine no of servers. Many thanks Grzegorz. >> >> Regards, >> David > > > > Hey David- > > We're running the fair balancer on about 100 servers with good > success. We had some issues with the fair balancer in lower load > situations only sending requests to the first backend instead of doing a > round robin when under lower load, this was causing the single backend > to become overloaded. 
The latest version Grzegorz has just pushed to his > git repo works much better in all the situations we have put it under. > > We run LVS at the edge of our clusters and have LVS balance to nginx > on each VM with nginx doing fair balancing directly to the mongrels and > it is working great. Much fewer moving parts then throwing haproxy in > the mix. In my benchmarks having haproxy between nginx and the mongrels > was s lower since there was one more level of indirection. So having > nginx serving static content and fair balancing to the backends is ideal > for us. > > > Cheers- > - Ezra Zygmuntowicz > -- Founder & Software Architect > -- ezra at engineyard.com > -- EngineYard.com > > From alex at purefiction.net Fri Feb 8 18:29:27 2008 From: alex at purefiction.net (Alexander Staubo) Date: Fri, 8 Feb 2008 16:29:27 +0100 Subject: Fair Proxy Balancer In-Reply-To: <20080206154119.GC15884@vadmin.megiteam.pl> References: <2cc9d1ea0711221338q60704b41h8831453dade974df@mail.gmail.com> <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> <47A9BF0C.1030400@eastlink.ca> <20080206154119.GC15884@vadmin.megiteam.pl> Message-ID: <88daf38c0802080729y4412d743g9f7c1034fde46bc0@mail.gmail.com> On Feb 6, 2008 4:41 PM, Grzegorz Nosek wrote: > No, I haven't compared haproxy or lvs (I assume that was what you > meant). However, haproxy is a TCP forwarder which makes it uncomfortable > at times. For example, even if your backends are down, connections to > haproxy will succeed and the only thing haproxy can do is to reset your > new connection (even though nginx has already happily sent the request). Could you explain what you mean by "if your backends are down, connections to haproxy will succeed"? 
By backend, do you mean Nginx or, say, a Mongrel server? Alexander. From grzegorz.nosek at gmail.com Fri Feb 8 18:47:52 2008 From: grzegorz.nosek at gmail.com (Grzegorz Nosek) Date: Fri, 8 Feb 2008 16:47:52 +0100 Subject: Fair Proxy Balancer In-Reply-To: <88daf38c0802080729y4412d743g9f7c1034fde46bc0@mail.gmail.com> References: <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <121a28810711230438u5ee6b2c8i2733f77e64f44cd7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> <47A9BF0C.1030400@eastlink.ca> <20080206154119.GC15884@vadmin.megiteam.pl> <88daf38c0802080729y4412d743g9f7c1034fde46bc0@mail.gmail.com> Message-ID: <20080208154752.GD13807@vadmin.megiteam.pl> On Fri, Feb 08, 2008 at 04:29:27PM +0100, Alexander Staubo wrote: > On Feb 6, 2008 4:41 PM, Grzegorz Nosek wrote: > > No, I haven't compared haproxy or lvs (I assume that was what you > > meant). However, haproxy is a TCP forwarder which makes it uncomfortable > > at times. For example, even if your backends are down, connections to > > haproxy will succeed and the only thing haproxy can do is to reset your > > new connection (even though nginx has already happily sent the request). > > Could you explain what you mean by "if your backends are down, > connections to haproxy will succeed"? By backend, do you mean Nginx > or, say, a Mongrel server? Say you have: nginx --> haproxy --> (a bunch of backends) When nginx connects to haproxy, the connection will succeed (as haproxy is still alive) but when haproxy tries to connect to any backend (let's say they're all dead or a switch failed etc.), the connection (haproxy->backend) will eventually time out. However, from nginx's point of view, the connection succeeded (haproxy replied), so it sends the request which then times out or dies with a connection reset. 
If you had several haproxy instances, each fronting its own set of mongrels, you'd have just lost a request unnecessarily. Note, I don't know whether haproxy is smart enough to shut down its listening socket when all backends fail (it might be). I rather meant to say that a TCP forwarder must explicitly support such situations to handle them.

Best regards,
Grzegorz Nosek

From alex at purefiction.net Sat Feb 9 03:33:56 2008
From: alex at purefiction.net (Alexander Staubo)
Date: Sat, 9 Feb 2008 01:33:56 +0100
Subject: Fair Proxy Balancer
In-Reply-To: <20080208154752.GD13807@vadmin.megiteam.pl>
References: <88daf38c0711221507y3b15e8d5q5860d094b38e6ce7@mail.gmail.com> <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> <47A9BF0C.1030400@eastlink.ca> <20080206154119.GC15884@vadmin.megiteam.pl> <88daf38c0802080729y4412d743g9f7c1034fde46bc0@mail.gmail.com> <20080208154752.GD13807@vadmin.megiteam.pl>
Message-ID: <88daf38c0802081633w1c17f406l9c7c4c268ad84b62@mail.gmail.com>

On Feb 8, 2008 4:47 PM, Grzegorz Nosek wrote:
> When nginx connects to haproxy, the connection will succeed (as haproxy
> is still alive) but when haproxy tries to connect to any backend (let's
> say they're all dead or a switch failed etc.), the connection
> (haproxy->backend) will eventually time out. However, from nginx's point
> of view, the connection succeeded (haproxy replied), so it sends the
> request which then times out or dies with a connection reset.

I see. That's a valid point.

Actually, I never considered that one might use HAProxy behind Nginx in the first place. Rather, I assumed HAProxy would be more naturally placed *before* Nginx if one wanted, say, a more strictly layered setup where a dedicated proxy did the proxying and a web server such as Nginx did the static file serving.

Alexander.
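For concreteness, the topology favoured in this subthread is nginx balancing directly to the backends via upstream_fair. The addresses and upstream names below are hypothetical, and the `fair` directive comes from Grzegorz's third-party module, not stock nginx:

```nginx
upstream mongrels {
    fair;                    # upstream_fair least-busy balancing
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
    listen 80;

    location / {
        proxy_pass http://mongrels;
    }
}

# the alternative being compared would instead point nginx at a single
# haproxy listener, which then TCP-forwards to the mongrels:
# upstream via_haproxy {
#     server 127.0.0.1:9000;   # haproxy
# }
```

The failure mode Grzegorz describes applies to the commented-out variant: nginx's connect to haproxy succeeds even when every mongrel behind it is down.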
From grzegorz.nosek at gmail.com Sat Feb 9 12:14:02 2008 From: grzegorz.nosek at gmail.com (Grzegorz Nosek) Date: Sat, 9 Feb 2008 10:14:02 +0100 Subject: Fair Proxy Balancer In-Reply-To: <88daf38c0802081633w1c17f406l9c7c4c268ad84b62@mail.gmail.com> References: <47A14008.1060409@eastlink.ca> <88daf38c0801310251u60a4ee79n2707719f6fcf5daf@mail.gmail.com> <47A1CDB1.6090107@eastlink.ca> <20080131164557.GC21638@vadmin.megiteam.pl> <004401c8685a$b2b6d110$18247330$@ca> <47A9BF0C.1030400@eastlink.ca> <20080206154119.GC15884@vadmin.megiteam.pl> <88daf38c0802080729y4412d743g9f7c1034fde46bc0@mail.gmail.com> <20080208154752.GD13807@vadmin.megiteam.pl> <88daf38c0802081633w1c17f406l9c7c4c268ad84b62@mail.gmail.com> Message-ID: <20080209091401.GA19627@vadmin.megiteam.pl> On Sat, Feb 09, 2008 at 01:33:56AM +0100, Alexander Staubo wrote: > Actually, I never considered that one might use HAProxy behind Nginx > in the first place. Rather, I assumed HAProxy would be a more > naturally placed *before* Nginx, if one wanted, say, a more strictly > layered setup where a dedicated proxy did the proxying and a web > server such as Nginx did the static file serving. Yes, that would be more natural for me too. However, the question was to compare upstream_fair and haproxy. In order to compare them in any way, you'd have to put haproxy _behind_ nginx or set up haproxy in front of a cluster of nginx instances, each fronting one backend. In both of these setups, the nginx load balancer will be replaced by the one from haproxy (the difference is the number of nginx instances), so it would be possible to compare the two load balancers somehow. Best regards, Grzegorz Nosek From lists at ruby-forum.com Sat Feb 9 18:25:05 2008 From: lists at ruby-forum.com (Bbq Plate) Date: Sat, 9 Feb 2008 16:25:05 +0100 Subject: trouble viewing nginx rails app on LAN? Message-ID: Hi, I cant seem to view my rails app from another computer on my LAN. I have all ports open. 
Is there something in the nginx conf I must set?

thank you
-- Posted via http://www.ruby-forum.com/.

From ficovh at gmail.com Sat Feb 9 18:46:52 2008
From: ficovh at gmail.com (Francisco Valladolid)
Date: Sat, 9 Feb 2008 20:16:52 +0430
Subject: trouble viewing nginx rails app on LAN?
In-Reply-To:
References:
Message-ID: <40b34c120802090746k60cbcca4q1c0d10a4924cc31d@mail.gmail.com>

Hi. I don't think it's an nginx thing. You may have to check your firewall configuration.

Regards

On Feb 9, 2008 7:55 PM, Bbq Plate wrote:
> Hi,
> I cant seem to view my rails app from another computer on my LAN. I have
> all ports open. Is there something in the nginx conf I must set?
>
> thank you
> -- Posted via http://www.ruby-forum.com/.

-- Francisco Valladolid H. -- http://bsdguy.net - Jesus Christ follower.

From lists at ruby-forum.com Sat Feb 9 23:57:09 2008
From: lists at ruby-forum.com (Bbq Plate)
Date: Sat, 9 Feb 2008 21:57:09 +0100
Subject: trouble viewing nginx rails app on LAN?
In-Reply-To: <40b34c120802090746k60cbcca4q1c0d10a4924cc31d@mail.gmail.com>
References: <40b34c120802090746k60cbcca4q1c0d10a4924cc31d@mail.gmail.com>
Message-ID: <605a80d2914d1a263acb69921772a24d@ruby-forum.com>

It turns out I configured my conf file wrong. For the listen directive, I put localhost:port number as opposed to just the port number!

Francisco Valladolid wrote:
> Hi.
>
> I don't think it's an nginx thing. You may have to check your firewall
> configuration.
>
> Regards

-- Posted via http://www.ruby-forum.com/.

From mike.javorski at gmail.com Sun Feb 10 09:32:14 2008
From: mike.javorski at gmail.com (Mike Javorski)
Date: Sat, 9 Feb 2008 22:32:14 -0800
Subject: Issue w/ nginx in hybrid static/php load balancer scenario
Message-ID:

I have nginx set up as a load balancer in front of two machines running fastcgi/php, and nginx for static content.
The desired goal is to have all php pages (including the site index pages of .*/index.php) processed by the fastcgi/php upstream, and everything else provided by the static servers. The following is what I have and what I believe should have worked, but it appears to run all directory paths via the static rule, rather than the php rule which matches the index.

To sum up:
/ -- via static system (WRONG)
/index.php -- via fastcgi/php system (RIGHT)
/blah/ -- via static system (WRONG)
/blah/index.php -- via fastcgi/php system (RIGHT)

nginx version is 0.6.25. Help! :-)

tia,

- mike

My Config File (the relevant bits anyway):
---------------------------------------------------

http {
    upstream static-pool {
        server 192.168.7.40:80;
        server 192.168.7.41:80;
    }

    upstream php-fcgi-pool {
        server 192.168.7.40:7000;
        server 192.168.7.41:7000;
    }

    server {
        listen 80;
        root /website/htdocs;
        index index.php;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;

        location / {
            proxy_pass http://static-pool/website/htdocs/;
        }

        location ~ \.php$ {
            fastcgi_pass php-fcgi-pool;
        }
    }
}

From denis at gostats.ru Sun Feb 10 09:42:33 2008 From: denis at gostats.ru (Denis F. Latypoff) Date: Sun, 10 Feb 2008 12:42:33 +0600 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: References: Message-ID: <78130054.20080210124233@gostats.ru> Hello Mike, Sunday, February 10, 2008, 12:32:14 PM, you wrote: > I have nginx set up as a load balancer in front of two machines > running fastcgi/php, and nginx for static content. The desired goal is > to have all php pages (including the site index pages of .*/index.php) > processed by the fastcgi/php upstream, and everything else provided by > the static servers. The following is what I have and what I believe > should have worked, but it appears to run all directory paths via the > static rule, rather than the php rule which matches the index.
> To sum up:
> / -- via static system (WRONG)
> /index.php -- via fastcgi/php system (RIGHT)
> /blah/ -- via static system (WRONG)
> /blah/index.php -- via fastcgi/php system (RIGHT)
> nginx version is 0.6.25. Help! :-)
> tia,
> - mike
> My Config File (the relevant bits anyway):
> ---------------------------------------------------
> http {
>     upstream static-pool {
>         server 192.168.7.40:80;
>         server 192.168.7.41:80;
>     }
>     upstream php-fcgi-pool {
>         server 192.168.7.40:7000;
>         server 192.168.7.41:7000;
>     }
>     server {
>         listen 80;
>         root /website/htdocs;
>         index index.php;
>         fastcgi_index index.php;
>         include /etc/nginx/fastcgi_params;
>         location / {
>             proxy_pass http://static-pool/website/htdocs/;
>         }

+ location = / {
+     fastcgi_pass php-fcgi-pool;
+ }

+ location = /blah/ {
+     fastcgi_pass php-fcgi-pool;
+ }

>         location ~ \.php$ {
>             fastcgi_pass php-fcgi-pool;
>         }
>     }
> }

-- Best regards, Denis mailto:denis at gostats.ru

From mike.javorski at gmail.com Sun Feb 10 09:51:17 2008 From: mike.javorski at gmail.com (Mike Javorski) Date: Sat, 9 Feb 2008 22:51:17 -0800 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: <78130054.20080210124233@gostats.ru> References: <78130054.20080210124233@gostats.ru> Message-ID: Thanks Denis. The problem w/ your solution is that there are lots of directories. I don't want to have to create entries for each. I suppose I could do a regex for .*/$ but some of the directories have index.html instead of index.php, and I need to support that as well :-(. Any other options/suggestions? - mike On Feb 9, 2008 10:42 PM, Denis F. Latypoff wrote: > Hello Mike, > > Sunday, February 10, 2008, 12:32:14 PM, you wrote: > > I have nginx set up as a load balancer in front of two machines > > running fastcgi/php, and nginx for static content.
The desired goal is > > to have all php pages (including the site index pages of .*/index.php) > > processed by the fastcgi/php upstream, and everything else provided by > > the static servers. The following is what I have and what I believe > > should have worked, but it appears to run all directory paths via the > > static rule, rather than the php rule which matches the index. > > > To sum up: > > / -- via static system (WRONG) > > /index.php -- via fastcgi/php system (RIGHT) > > /blah/ -- via static system (WRONG) > > /blah/index.php -- via fastcgi/php system (RIGHT) > > > nginx version is 0.6.25, Help! :-) > > > tia, > > > - mike > > > My Config File (the relevent bits anyway): > > --------------------------------------------------- > > > http { > > upstream static-pool { > > server 192.168.7.40:80; > > server 192.168.7.41:80; > > } > > > upstream php-fcgi-pool { > > server 192.168.7.40:7000; > > server 192.168.7.41:7000; > > } > > > server { > > listen 80; > > root /website/htdocs; > > index index.php; > > fastcgi_index index.php; > > include /etc/nginx/fastcgi_params; > > > location / { > > proxy_pass http://static-pool/website/htdocs/; > > } > > + location = / { > + fastcgi_pass php-fcgi-pool; > + } > > + location = /blah/ { > + fastcgi_pass php-fcgi-pool; > + } > > > > location ~ \.php$ { > > fastcgi_pass php-fcgi-pool; > > } > > } > > } > > > > -- > Best regards, > Denis mailto:denis at gostats.ru > > > From parker at isohunt.com Sun Feb 10 10:32:02 2008 From: parker at isohunt.com (Allen Parker) Date: Sat, 09 Feb 2008 23:32:02 -0800 Subject: location style hostname matching? 
Message-ID: <47AEA872.1040301@isohunt.com> If my configuration (sample) looks like this: http { server { server_name _*; root /path/to/htdocs/$host; }} is there a way to match 'location' or perhaps another type of keyword to hostname for one particular vhost ie: location a.host.net { rewrite /blog(.+)$ /wordpress$1; } location b.host.net { root /other/filepath/to/$host; } What I'm looking for is something similar to lighttpd's $HTTP['host']. Thanks, Allen Parker From is at rambler-co.ru Sun Feb 10 10:50:20 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Sun, 10 Feb 2008 10:50:20 +0300 Subject: location style hostname matching? In-Reply-To: <47AEA872.1040301@isohunt.com> References: <47AEA872.1040301@isohunt.com> Message-ID: <20080210075020.GA33000@rambler-co.ru> On Sat, Feb 09, 2008 at 11:32:02PM -0800, Allen Parker wrote: > If my configuration (sample) looks like this: > > http { > server { > server_name _*; > root /path/to/htdocs/$host; > }} > > is there a way to match 'location' or perhaps another type of keyword to > hostname for one particular vhost ie: > > location a.host.net { > rewrite /blog(.+)$ /wordpress$1; > } > location b.host.net { > root /other/filepath/to/$host; > } > > What I'm looking for is something similar to lighttpd's $HTTP['host']. http { root /path/to/htdocs/$host; server { server_name _; # default server } server { server_name a.host.net; rewrite /blog(.+)$ /wordpress$1; } server { server_name b.host.net; root /other/filepath/to/b.host.net; } } -- Igor Sysoev http://sysoev.ru/en/ From hendrik.hardeman at hotmail.com Sun Feb 10 11:41:23 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Sun, 10 Feb 2008 14:11:23 +0530 Subject: Test message Message-ID: This is a test message. Please ignore. _________________________________________________________________ Post free property ads on Yello Classifieds now! 
www.yello.in http://ss1.richmedia.in/recurl.asp?pid=220 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hendrik.hardeman at hotmail.com Sun Feb 10 11:56:01 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Sun, 10 Feb 2008 14:26:01 +0530 Subject: Nginx feature request Message-ID: Hi all, Discovered Nginx a few weeks back. After some experimenting I have come to the conclusion that Nginx really is very very good. I have a small feature request: I have a set of static files (html and others) which I want to make available only from a certain date/time. Inspired by the rewrite module, this morning I thought of a simple method to control access to such files. My idea was to set the file creation/modification time of the static files in the future, i.e. the date/time from which they can be made available (e.g. I could use the touch tool to set the appropriate date/time). I could then do something like if (!-f $request_filename) {return 404;} if ($date_gmt < fct($request_filename)) {return 403;} Unfortunately, I haven't found a way to get at the file creation/modification time (please do let me know if I overlooked something). The 'fct' in my example is an imaginary function which returns the file creation time in the same format as $date_gmt (presumably unix epoch timestamp) An even better way to handle this would be to have a directive in the core module which disallows serving files which have a creation/modification time in the future. I could then use: location /ftc/ { filetime_check on; } to disallow serving of files with a filetime in the future for that location. Files would then automatically become available once the request time >= filetime. This way access to certain files could be controlled in a very straightforward and transparent way - and with a simple 'touch'. 
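The gating logic behind the proposed (hypothetical) `filetime_check` directive can be sketched outside nginx. A minimal Python illustration, where `embargoed` is an illustrative name and stands in for the check the directive would perform:

```python
import os
import tempfile
import time

def embargoed(path, now=None):
    """True if the file's modification time lies in the future,
    i.e. the proposed filetime_check would answer 403 instead of 200."""
    now = time.time() if now is None else now
    return os.stat(path).st_mtime > now

# Stage a file whose mtime is one hour in the future -- the role
# that 'touch' plays in the workflow described above.
fd, path = tempfile.mkstemp()
os.close(fd)
release = time.time() + 3600
os.utime(path, (release, release))

print(embargoed(path))                      # True: still embargoed -> 403
print(embargoed(path, time.time() + 7200))  # False: past release time -> 200
os.remove(path)
```

The comparison is a single `stat` per request against the clock, which matches the observation below that nginx already inspects the last modification time of files it serves.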
Though I'd definitely prefer a directive for the above purpose, access to file creation/modification time (through variable or function) could still be useful in the rewriting or ssi module. Anyone any other suggestions ? Thanks, Hendrik Hardeman _________________________________________________________________ Post ads for free - to sell, rent or even buy.www.yello.in http://ss1.richmedia.in/recurl.asp?pid=186 -------------- next part -------------- An HTML attachment was scrubbed... URL: From manlio_perillo at libero.it Sun Feb 10 13:21:11 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Sun, 10 Feb 2008 11:21:11 +0100 Subject: [ANN] use_x_forwared_host.patch Message-ID: <47AED017.2080008@libero.it> Hi. I have written a small patch for nginx that allows the use of the X-Forwarded-Host instead of the Host header. The rationale is that some hosting companies that use Apache as main web server do not set the correct Host header in the proxied request. With this patch nginx behind a proxy server is able to properly do virtual hosting. The patch adds a new configuration directive `use_x_forwarded_host` in the main configuration context. This directive is a flag and its default value is `off`. I would like to receive a review for this patch. Thanks Manlio Perillo -------------- next part -------------- A non-text attachment was scrubbed... Name: use_x_forwared_host.patch Type: text/x-patch Size: 2762 bytes Desc: not available URL: From 2bedros at gmail.com Sun Feb 10 19:54:59 2008 From: 2bedros at gmail.com (Bedros Hanounik) Date: Sun, 10 Feb 2008 08:54:59 -0800 Subject: Nginx feature request In-Reply-To: References: Message-ID: check out secdownload module in lighttpd http://trac.lighttpd.net/trac/wiki/Docs:ModSecDownload I've already made a feature request for something like that in nginx; but I believe the developers of nginx did not think building such a module is of high priority. 
lighttpd has a memory leak, and I'm not sure if they actually solved it (probably not). I personally prefer nginx, but I wish they build a module like secdownload. On Feb 10, 2008 12:56 AM, Hendrik Hardeman wrote: > Hi all, > > Discovered Nginx a few weeks back. After some experimenting I have come to > the conclusion that Nginx really is very very good. > > I have a small feature request: > > I have a set of static files (html and others) which I want to make > available only from a certain date/time. Inspired by the rewrite module, > this morning I thought of a simple method to control access to such files. > > My idea was to set the file creation/modification time of the static files > in the future, i.e. the date/time from which they can be made available ( > e.g. I could use the touch tool to set the appropriate date/time). I could > then do something like > > if (!-f $request_filename) {return 404;} > if ($date_gmt < fct($request_filename)) {return 403;} > > Unfortunately, I haven't found a way to get at the file > creation/modification time (please do let me know if I overlooked > something). The 'fct' in my example is an imaginary function which returns > the file creation time in the same format as $date_gmt (presumably unix > epoch timestamp) > > An even better way to handle this would be to have a directive in the core > module which disallows serving files which have a creation/modification time > in the future. I could then use: > > location /ftc/ { > filetime_check on; > } > > to disallow serving of files with a filetime in the future for that > location. Files would then automatically become available once the request > time >= filetime. This way access to certain files could be controlled in a > very straightforward and transparent way - and with a simple 'touch'. 
> > Though I'd definitely prefer a directive for the above purpose, access to > file creation/modification time (through variable or function) could still > be useful in the rewriting or ssi module. > > Anyone any other suggestions ? > > Thanks, > > Hendrik Hardeman > > ------------------------------ > Post free auto ads on Yello Classifieds now! Try it now! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manlio_perillo at libero.it Sun Feb 10 20:05:29 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Sun, 10 Feb 2008 18:05:29 +0100 Subject: Nginx feature request In-Reply-To: References: Message-ID: <47AF2ED9.1030206@libero.it> Hendrik Hardeman ha scritto: > Hi all, > > Discovered Nginx a few weeks back. After some experimenting I have come > to the conclusion that Nginx really is very very good. > > I have a small feature request: > > I have a set of static files (html and others) which I want to make > available only from a certain date/time. Inspired by the rewrite module, > this morning I thought of a simple method to control access to such files. > > My idea was to set the file creation/modification time of the static > files in the future, i.e. the date/time from which they can be made > available (e.g. I could use the touch tool to set the appropriate > date/time). I could then do something like > > if (!-f $request_filename) {return 404;} > if ($date_gmt < fct($request_filename)) {return 403;} > > Unfortunately, I haven't found a way to get at the file > creation/modification time (please do let me know if I overlooked > something). You can use the embedded Perl module. > [...] 
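Manlio's embedded-Perl suggestion might look roughly like the following untested sketch, using an inline handler with ngx_http_perl_module. The `/ftc/` location and the 403-on-future-mtime behavior come from Hendrik's proposal; the handler details are an assumption, not a tested recipe:

```
location /ftc/ {
    perl 'sub {
        my $r = shift;
        my $fn = $r->filename;
        return DECLINED unless -f $fn;                    # let nginx produce the 404
        return HTTP_FORBIDDEN if (stat $fn)[9] > time();  # mtime in the future
        $r->send_http_header;
        $r->sendfile($fn);
        return OK;
    }';
}
```

This keeps the check inside the nginx worker, at the cost of requiring nginx to be built with the Perl module.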
Manlio Perillo From hendrik.hardeman at hotmail.com Sun Feb 10 21:37:33 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Mon, 11 Feb 2008 00:07:33 +0530 Subject: Nginx feature request In-Reply-To: <47AF2ED9.1030206@libero.it> References: <47AF2ED9.1030206@libero.it> Message-ID: Manlio, Thanks for the suggestion, but when I propose this feature, it's exactly because I want to *avoid* something on top of Nginx (whether Perl, Python, PHP or anything else) ! My need is very simple. All I want is to be able to disallow Nginx to serve certain files based on filetime. This could be done in two ways: 1. A directive which is off by default but which can be switched on in any of the standard places - most likely place would be in 'location'. Let me tentatively name this directive 'filetime_check'. If the directive is on, nginx would not serve any files with a filetime later than current server/request time. Instead it would return a 403 error so that we can distinguish between 'file not found' and 'access not allowed'. You could then define something like: location /ftc/ {filetime_check on;} Any request for a file in location /ftc/ with a filetime later than request time (i.e. in the future) would then return 403. 2. Independent from the above, it might be useful to be able to access the filetime of requested file for defining rewriting rules or for ssi. This could either be through a variable or through a function: if ($date_gmt < fct($request_filename)) {do something, e.g. return 403;} Nginx already has to check the last modification time of requested files, so adding a directive as proposed in 1 shouldn't put a lot of extra burden on it. Shouldn't be too hard to implement this myself privately, but I thought that others might find this useful as well and in that case an official feature is probably better. 
Hendrik Hardeman > Date: Sun, 10 Feb 2008 18:05:29 +0100 > From: manlio_perillo at libero.it > To: nginx at sysoev.ru > Subject: Re: Nginx feature request > > Hendrik Hardeman ha scritto: > > Hi all, > > > > Discovered Nginx a few weeks back. After some experimenting I have come > > to the conclusion that Nginx really is very very good. > > > > I have a small feature request: > > > > I have a set of static files (html and others) which I want to make > > available only from a certain date/time. Inspired by the rewrite module, > > this morning I thought of a simple method to control access to such files. > > > > My idea was to set the file creation/modification time of the static > > files in the future, i.e. the date/time from which they can be made > > available (e.g. I could use the touch tool to set the appropriate > > date/time). I could then do something like > > > > if (!-f $request_filename) {return 404;} > > if ($date_gmt < fct($request_filename)) {return 403;} > > > > Unfortunately, I haven't found a way to get at the file > > creation/modification time (please do let me know if I overlooked > > something). > > You can use the embedded Perl module. > > > [...] > > > Manlio Perillo > _________________________________________________________________ Tried the new MSN Messenger? It?s cool! Download now. http://messenger.msn.com/Download/Default.aspx?mkt=en-in -------------- next part -------------- An HTML attachment was scrubbed... URL: From hendrik.hardeman at hotmail.com Sun Feb 10 21:55:08 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Mon, 11 Feb 2008 00:25:08 +0530 Subject: Nginx feature request In-Reply-To: References: Message-ID: Hi Bedros, The module you refer to is not exactly what I have requested. The feature I propose is very simple: a method / directive to disallow serving a file which has a filetime (last modified time) set in the future. E.g. 
if I set the last modified time for a certain file to 4 March 2008 14:30, then any request for the file before that time would result in a 403. For any request made after that time the file will be served as usual. Such a directive would make life a lot simpler for me since I have thousands of small static data files which should become available only from a certain time. I could pregenerate lots of files, place them in the corresponding directories (for which I switched on the directive filetime_check), set any wanted filetime (e.g. with touch) and Nginx would do all the hard :) work by returning 403 instead of the file whenever the request is made before the last modification time of the file. As simple as that ! Perl, databases, etc. are too much overhead and I would not want to use any of those for this purpose. Hendrik Hardeman Date: Sun, 10 Feb 2008 08:54:59 -0800 From: 2bedros at gmail.com To: nginx at sysoev.ru Subject: Re: Nginx feature request check out secdownload module in lighttpd http://trac.lighttpd.net/trac/wiki/Docs:ModSecDownload I've already made a feature request for something like that in nginx; but I believe the developers of nginx did not think building such a module is of high priority. lighttpd has a memory leak, and I'm not sure if they actually solved it (probably not). I personally prefer nginx, but I wish they build a module like secdownload. On Feb 10, 2008 12:56 AM, Hendrik Hardeman wrote: Hi all, Discovered Nginx a few weeks back. After some experimenting I have come to the conclusion that Nginx really is very very good. I have a small feature request: I have a set of static files (html and others) which I want to make available only from a certain date/time. Inspired by the rewrite module, this morning I thought of a simple method to control access to such files. My idea was to set the file creation/modification time of the static files in the future, i.e. the date/time from which they can be made available (e.g. 
I could use the touch tool to set the appropriate date/time). I could then do something like if (!-f $request_filename) {return 404;} if ($date_gmt < fct($request_filename)) {return 403;} Unfortunately, I haven't found a way to get at the file creation/modification time (please do let me know if I overlooked something). The 'fct' in my example is an imaginary function which returns the file creation time in the same format as $date_gmt (presumably unix epoch timestamp) An even better way to handle this would be to have a directive in the core module which disallows serving files which have a creation/modification time in the future. I could then use: location /ftc/ { filetime_check on; } to disallow serving of files with a filetime in the future for that location. Files would then automatically become available once the request time >= filetime. This way access to certain files could be controlled in a very straightforward and transparent way - and with a simple 'touch'. Though I'd definitely prefer a directive for the above purpose, access to file creation/modification time (through variable or function) could still be useful in the rewriting or ssi module. Anyone any other suggestions ? Thanks, Hendrik Hardeman Post free auto ads on Yello Classifieds now! Try it now! _________________________________________________________________ Post ads for free - to sell, rent or even buy.www.yello.in http://ss1.richmedia.in/recurl.asp?pid=186 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From manlio_perillo at libero.it Sun Feb 10 22:16:58 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Sun, 10 Feb 2008 20:16:58 +0100 Subject: Nginx feature request In-Reply-To: References: <47AF2ED9.1030206@libero.it> Message-ID: <47AF4DAA.7070600@libero.it> Hendrik Hardeman ha scritto: > Manlio, > > Thanks for the suggestion, but when I propose this feature, it's exactly > because I want to *avoid* something on top of Nginx (whether Perl, > Python, PHP or anything else) ! > Well, you need scripting support, so I don't understand why you should avoid using Perl. The nginx scripting module is very limited, and Igor has expressed intentions to improve it. Maybe a better language to embed in nginx is Lua, but this is another question. > My need is very simple. All I want is to be able to disallow Nginx to > serve certain files based on filetime. This could be done in two ways: > > 1. > A directive which is off by default but which can be switched on in any > of the standard places - most likely place would be in 'location'. Let > me tentatively name this directive 'filetime_check'. > This should be easy to implement with a post access handler module, for example. > 2. > Independent from the above, it might be useful to be able to access the > filetime of requested file for defining rewriting rules or for ssi. This > could either be through a variable or through a function: > > if ($date_gmt < fct($request_filename)) {do something, e.g. return 403;} > This too is easy to implement by patching the rewrite module, but nginx will not be able to do the comparison (it only supports boolean and regex operators). Better to add a condition that returns true if the last modification time is in the future. > [...]
Manlio Perillo From manlio_perillo at libero.it Sun Feb 10 22:35:22 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Sun, 10 Feb 2008 20:35:22 +0100 Subject: Nginx feature request In-Reply-To: <47AF4DAA.7070600@libero.it> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> Message-ID: <47AF51FA.9060200@libero.it> Manlio Perillo ha scritto: > [...] > >> 2. >> Independent from the above, it might be useful to be able to access >> the filetime of requested file for defining rewriting rules or for >> ssi. This could either be through a variable or through a function: >> >> if ($date_gmt < fct($request_filename)) {do something, e.g. return 403;} >> > > This too is easy to implement patching the rewrite module, but nginx > will not be able to do the comparison (it only support boolean and regex > opoerators). > A correction: this feature requires to patch the http script module too. Manlio Perillo From kupokomapa at gmail.com Sun Feb 10 23:14:36 2008 From: kupokomapa at gmail.com (Kiril Angov) Date: Sun, 10 Feb 2008 15:14:36 -0500 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: References: <78130054.20080210124233@gostats.ru> Message-ID: <13c357830802101214m118bce3cifd2d9f4c6884f9b2@mail.gmail.com> Why don't you put location ~ \.php$ { fastcgi_pass php-fcgi-pool; } before "location /" so that it can match first? Kupo On Feb 10, 2008 1:51 AM, Mike Javorski wrote: > Thanks Denis. The problem w/ your solution is there is lots of > directories. I don't want to have to create entries for each. I > suppose I could to a regex for .*/$ but some of the directories have > index.html instead of index.php, and I need to support that as well > :-(. > > Any other options/suggestions? > > - mike > > On Feb 9, 2008 10:42 PM, Denis F. 
Latypoff wrote: > > Hello Mike, > > > > > > Sunday, February 10, 2008, 12:32:14 PM, you wrote: > > > > > I have nginx set up as a load balancer in front of two machines > > > running fastcgi/php, and nginx for static content. The desired goal is > > > to have all php pages (including the site index pages of .*/index.php) > > > processed by the fastcgi/php upstream, and everything else provided by > > > the static servers. The following is what I have and what I believe > > > should have worked, but it appears to run all directory paths via the > > > static rule, rather than the php rule which matches the index. > > > > > To sum up: > > > / -- via static system (WRONG) > > > /index.php -- via fastcgi/php system (RIGHT) > > > /blah/ -- via static system (WRONG) > > > /blah/index.php -- via fastcgi/php system (RIGHT) > > > > > nginx version is 0.6.25, Help! :-) > > > > > tia, > > > > > - mike > > > > > My Config File (the relevent bits anyway): > > > --------------------------------------------------- > > > > > http { > > > upstream static-pool { > > > server 192.168.7.40:80; > > > server 192.168.7.41:80; > > > } > > > > > upstream php-fcgi-pool { > > > server 192.168.7.40:7000; > > > server 192.168.7.41:7000; > > > } > > > > > server { > > > listen 80; > > > root /website/htdocs; > > > index index.php; > > > fastcgi_index index.php; > > > include /etc/nginx/fastcgi_params; > > > > > location / { > > > proxy_pass http://static-pool/website/htdocs/; > > > } > > > > + location = / { > > + fastcgi_pass php-fcgi-pool; > > + } > > > > + location = /blah/ { > > + fastcgi_pass php-fcgi-pool; > > + } > > > > > > > location ~ \.php$ { > > > fastcgi_pass php-fcgi-pool; > > > } > > > } > > > } > > > > > > > > -- > > Best regards, > > Denis mailto:denis at gostats.ru > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hendrik.hardeman at hotmail.com Sun Feb 10 23:19:03 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Mon, 11 Feb 2008 01:49:03 +0530 Subject: Nginx feature request In-Reply-To: <47AF4DAA.7070600@libero.it> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> Message-ID: Manlio, Thanks a lot for your thoughts. I will look into your suggestions for implementing points 1 and/or 2 below. As for scripting, for this particular purpose (serving static files) I don't require any scripting support. So adding Perl, or any other language for that matter, on top would be overkill. To be frank, I'm personally also not a big fan of Perl. I'd prefer Python over Perl (I'm sorry about that :-). Embedded LUA ? anything else with the smallest possible overhead - could be nice though for some purposes. For dynamic content I myself use Nginx upstream to several purpose-written asynchronous servers, written in Python on top of the libevent module (http://www.monkey.org/~provos/libevent/). I use Libevent as the actual server engine (probably quite close in performance to Nginx) which takes care of receiving / sending. I don't even bother about FastCGI. Each Python server listens on a particular port (behind Nginx upstream), is written to handle particular requests and has only the Python code required for that to keep overhead as small as possible. I also use SSI wherever possible - I wish Nginx had a few more options in this area. By pregenerating static files with content that should become available only after a certain time I can exploit Nginx even better - that is, if one day it has the feature I propose in this thread ! Hope this provides a better insight into my reasoning for proposing a filetime check option / directive. 
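The deployment Hendrik describes above - standard nginx in front, purpose-built Python backends each listening on its own local port, static files served directly - might be wired up roughly like this (all names, ports, and paths are illustrative, not taken from his actual setup):

```
http {
    upstream search_backend { server 127.0.0.1:9001; }
    upstream feed_backend   { server 127.0.0.1:9002; }

    server {
        listen 80;
        root /var/www/htdocs;   # static content served by nginx itself

        location /search/ { proxy_pass http://search_backend; }
        location /feeds/  { proxy_pass http://feed_backend; }
    }
}
```

Each backend sees only the request types it was written for, which is what keeps the per-server Python code as small as he describes.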
Hendrik > Date: Sun, 10 Feb 2008 20:16:58 +0100 > From: manlio_perillo at libero.it > To: nginx at sysoev.ru > Subject: Re: Nginx feature request > > Hendrik Hardeman ha scritto: > > Manlio, > > > > Thanks for the suggestion, but when I propose this feature, it's exactly > > because I want to *avoid* something on top of Nginx (whether Perl, > > Python, PHP or anything else) ! > > > > Well, you need scripting support, so I don't understand why you should > avoid to use Perl. > > nginx scripting module is very limited, and Igor has expressed > intentions to improve it. > > Maybe a better language to embed in nginx is LUA, but this is another > question. > > > > My need is very simple. All I want is to be able to disallow Nginx to > > serve certain files based on filetime. This could be done in two ways: > > > > 1. > > A directive which is off by default but which can be switched on in any > > of the standard places - most likely place would be in 'location'. Let > > me tentatively name this directive 'filetime_check'. > > > > This should be easy to implement with a post access handler module, as > an example. > > > > 2. > > Independent from the above, it might be useful to be able to access the > > filetime of requested file for defining rewriting rules or for ssi. This > > could either be through a variable or through a function: > > > > if ($date_gmt < fct($request_filename)) {do something, e.g. return 403;} > > > > This too is easy to implement patching the rewrite module, but nginx > will not be able to do the comparison (it only support boolean and regex > opoerators). > > Better to add a condition that will return true if the last modification > time is in the future. > > > > [...] > > > > Manlio Perillo > _________________________________________________________________ Post free property ads on Yello Classifieds now! www.yello.in http://ss1.richmedia.in/recurl.asp?pid=219 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hendrik.hardeman at hotmail.com Sun Feb 10 23:24:26 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Mon, 11 Feb 2008 01:54:26 +0530 Subject: Nginx feature request In-Reply-To: <47AF51FA.9060200@libero.it> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF51FA.9060200@libero.it> Message-ID: Manlio, Thanks for the correction. Will try to look into this when I find some time. So far, I haven't even looked at any Nginx code, let alone tried to understand how it works. Anyway, option 1 (directive) seems to be the easiest to implement. So will look at that first. Hendrik > Date: Sun, 10 Feb 2008 20:35:22 +0100 > From: manlio_perillo at libero.it > To: nginx at sysoev.ru > Subject: Re: Nginx feature request > > Manlio Perillo ha scritto: > > [...] > > > >> 2. > >> Independent from the above, it might be useful to be able to access > >> the filetime of requested file for defining rewriting rules or for > >> ssi. This could either be through a variable or through a function: > >> > >> if ($date_gmt < fct($request_filename)) {do something, e.g. return 403;} > >> > > > > This too is easy to implement patching the rewrite module, but nginx > > will not be able to do the comparison (it only support boolean and regex > > opoerators). > > > > A correction: this feature requires to patch the http script module too. > > > Manlio Perillo > > > _________________________________________________________________ Post free property ads on Yello Classifieds now! www.yello.in http://ss1.richmedia.in/recurl.asp?pid=220 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From manlio_perillo at libero.it Sun Feb 10 23:32:22 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Sun, 10 Feb 2008 21:32:22 +0100 Subject: Nginx feature request In-Reply-To: References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> Message-ID: <47AF5F56.7000905@libero.it> Hendrik Hardeman ha scritto: > Manlio, > > Thanks a lot for your thoughts. I will look into your suggestions for > implementing points 1 and/or 2 below. > > As for scripting, for this particular purpose (serving static files) I > don't require any scripting support. So adding Perl, or any other > language for that matter, on top would be overkill. > > To be frank, I'm personally also not a big fan of Perl. I'd prefer > Python over Perl (I'm sorry about that :-). Well, I'm a Python programmer and the author of the WSGI module for nginx :). > Embedded LUA ? anything else with the smallest possible overhead - could > be nice though for some purposes. > > For dynamic content I myself use Nginx upstream to several > purpose-written asynchronous servers, written in Python on top of the > libevent module (http://www.monkey.org/~provos/libevent/). I use > Libevent as the actual server engine (probably quite close in > performance to Nginx) which takes care of receiving / sending. Interesting, but I prefer to use Python embedded in Nginx, since it is more robust. In future I hope to be able to add support for asynchronous Python applications. I think that it would be interesting to have some benchmarks of mod_wsgi against libevent based Python servers. > I don't > even bother about FastCGI. Each Python server listens on a particular > port (behind Nginx upstream), is written to handle particular requests > and has only the Python code required for that to keep overhead as small > as possible. > This is a Goog Thing! > I also use SSI wherever possible - I wish Nginx had a few more options > in this area. 
> > By pregenerating static files with content that should become available > only after a certain time I can exploit Nginx even better - that is, if > one day it has the feature I propose in this thread ! > > Hope this provides a better insight into my reasoning for proposing a > filetime check option / directive. > The problem is that this is a very specialized need, so you should implement it by yourself, or rent a coder that can work on the nginx codebase for you. > > Hendrik > > Manlio Perillo From eliott at cactuswax.net Mon Feb 11 00:13:51 2008 From: eliott at cactuswax.net (eliott) Date: Sun, 10 Feb 2008 13:13:51 -0800 Subject: Nginx feature request In-Reply-To: <47AF5F56.7000905@libero.it> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> Message-ID: <428d921d0802101313j28ff564ep7f950c9d37e6dc9a@mail.gmail.com> How about a cron job that runs every 5 minutes, and sets a file with a future mtime as non-readable, and a past mtime as readable? From hendrik.hardeman at hotmail.com Mon Feb 11 00:39:58 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Mon, 11 Feb 2008 03:09:58 +0530 Subject: Nginx feature request In-Reply-To: <47AF5F56.7000905@libero.it> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> Message-ID: Manlio, > Interesting, but I prefer to use Python embedded in Nginx, since it is > more robust. > In future I hope to be able to add support for asynchronous Python > applications. > I agree that embedded Python would be more robust, but then I would probably still use standard Nginx in front with perhaps one or more bare-bones Nginx servers upstream with embedded Python listening on particular ports. With embedded Python I don't see a need for some of the standard modules, e.g. the rewriting module. That's something the Nginx server in front could take care of. I'd rather see something like Libevent (i.e.
an asynchronous HTTP server engine) with embedded Python. The core of Nginx is probably similar to Libevent with its HTTP layer. Would be interesting to find out how Nginx core compares to Libevent with HTTP layer in terms of performance. A core Nginx engine with embedded Python would definitely interest me ! That's for sure more robust than Python on top of Libevent. Already binding Python to the latest version of Libevent was quite tricky. Both existing Python modules for libevent (pyevent / libevent-python) are outdated and I had to use the ctypes library to bind to Libevent. Wouldn't mind replacing Python->ctypes->libevent by an Nginx core with embedded Python. > > I think that it would be interesting to have some benchmarks of mod_wsgi > against libevent based Python servers. > > I believe it would be difficult to compare a more general purpose mod_wsgi with a purpose-written Python server based on libevent. I myself have no experience with wsgi and am not in a position to compare, but you can try. > > The problem is that this is a very specialized need, so you should > implement it by yourself, of rent a coder that can work on the nginx > codebase for you. > Will probably implement it myself, unless there's someone else out there who would be interested to give it a try. For the time being I have myself no time whatsoever, at least not in the next two months. Hendrik _________________________________________________________________ Tried the new MSN Messenger? It?s cool! Download now. http://messenger.msn.com/Download/Default.aspx?mkt=en-in -------------- next part -------------- An HTML attachment was scrubbed... 
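As background for the mod_wsgi comparison above: WSGI (PEP 333) is a small calling convention, not a framework. An application is a plain callable taking the request environ and a start_response function, which is the interface an embedding server invokes. A minimal self-contained example:

```python
def application(environ, start_response):
    """Minimal WSGI application: replies with the request path."""
    body = ("Hello from %s\n" % environ.get("PATH_INFO", "/")).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # iterable of byte strings
```

For quick testing without any front-end server, `wsgiref.simple_server` in the Python standard library can serve this callable directly.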
URL: From zellster at gmail.com Mon Feb 11 00:50:16 2008 From: zellster at gmail.com (Adam Zell) Date: Sun, 10 Feb 2008 13:50:16 -0800 Subject: Nginx feature request In-Reply-To: References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> Message-ID: <2cc9d1ea0802101350n53e87ff5y9baf490f21d8977f@mail.gmail.com> Just a quick blurb for libevent users: you may want to check out libev. >From http://software.schmorp.de/pkg/libev.html: "A full-featured and high-performance (see benchmark) event loop that is loosely modelled after libevent, but without its limitations and bugs. It is used, among others, in the GNU Virtual Private Ethernet and rxvt-unicodepackages, and in the Deliantra MORPG Server and Client." On Feb 10, 2008 1:39 PM, Hendrik Hardeman wrote: > Manlio, > > > Interesting, but I prefer to use Python embedded in Nginx, since it is > > more robust. > > In future I hope to be able to add support for asynchronous Python > > applications. > > > > I agree that embedded Python would be more robust, but then I would > probably still use standard Nginx in front with perhaps one or more > bare-bones Nginx servers upstream with embedded Python listening on > particular ports. With embedded Python I don't see a need for some of the > standard modules, e.g. the rewriting module. That's something the Nginx > server in front could take care of. I'd rather see something like Libevent ( > i.e. an asynchronous HTTP server engine) with embedded Python. The core of > Nginx is probably similar to Libevent with its HTTP layer. Would be > interesting to find out how Nginx core compares to Libevent with HTTP layer > in terms of performance. > > A core Nginx engine with embedded Python would definitely interest me ! > That's for sure more robust than Python on top of Libevent. Already binding > Python to the latest version of Libevent was quite tricky. 
Both existing > Python modules for libevent (pyevent / libevent-python) are outdated and I > had to use the ctypes library to bind to Libevent. Wouldn't mind replacing > Python->ctypes->libevent by an Nginx core with embedded Python. > > > > > I think that it would be interesting to have some benchmarks of mod_wsgi > > > against libevent based Python servers. > > > > > > I believe it would be difficult to compare a more general purpose mod_wsgi > with a purpose-written Python server based on libevent. I myself have no > experience with wsgi and am not in a position to compare, but you can try. > > > > > The problem is that this is a very specialized need, so you should > > implement it by yourself, of rent a coder that can work on the nginx > > codebase for you. > > > > Will probably implement it myself, unless there's someone else out there > who would be interested to give it a try. For the time being I have myself > no time whatsoever, at least not in the next two months. > > Hendrik > > ------------------------------ > Live the life in style with MSN Lifestyle. Check out! Try it now! > -- Adam zellster at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From manlio_perillo at libero.it Mon Feb 11 01:14:15 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Sun, 10 Feb 2008 23:14:15 +0100 Subject: Nginx feature request In-Reply-To: References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> Message-ID: <47AF7737.2050801@libero.it> Hendrik Hardeman ha scritto: > Manlio, > > > Interesting, but I prefer to use Python embedded in Nginx, since it is > > more robust. > > In future I hope to be able to add support for asynchronous Python > > applications. 
> > > > I agree that embedded Python would be more robust, but then I would > probably still use standard Nginx in front with perhaps one or more > bare-bones Nginx servers upstream with embedded Python listening on > particular ports. Yes, this is the recommended deployment. > With embedded Python I don't see a need for some of > the standard modules, e.g. the rewriting module. That's something the > Nginx server in front could take care of. Right. > I'd rather see something like > Libevent (i.e. an asynchronous HTTP server engine) with embedded Python. > The core of Nginx is probably similar to Libevent with its HTTP layer. > Would be interesting to find out how Nginx core compares to Libevent > with HTTP layer in terms of performance. > > A core Nginx engine with embedded Python would definitely interest me ! Then you should check http://hg.mperillo.ath.cx/nginx/mod_wsgi/! > [...] > > I believe it would be difficult to compare a more general purpose > mod_wsgi with a purpose-written Python server based on libevent. I > myself have no experience with wsgi and am not in a position to compare, > but you can try. > Not sure about this: WSGI is a low-level interface, well designed. I think it is better than any other special-purpose API. > > > > The problem is that this is a very specialized need, so you should > implement it by yourself, or rent a coder that can work on the nginx > codebase for you. > > Will probably implement it myself, unless there's someone else out there > who would be interested to give it a try. For the time being I have > myself no time whatsoever, at least not in the next two months. > I can write it, but not for free, sorry, since I don't need this feature and I too don't have much free time. It would take me only a few hours of coding, however. Also, you should give eliott's suggestion a chance. Removing the read permission bit from a file is a good solution, and with the help of cron it should resolve your problem.
> Hendrik > Manlio Perillo From hendrik.hardeman at hotmail.com Mon Feb 11 01:21:40 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Mon, 11 Feb 2008 03:51:40 +0530 Subject: Nginx feature request In-Reply-To: <428d921d0802101313j28ff564ep7f950c9d37e6dc9a@mail.gmail.com> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> <428d921d0802101313j28ff564ep7f950c9d37e6dc9a@mail.gmail.com> Message-ID: eliott, Thanks for the suggestion. The solution you offer seems workable for a smaller number of files, but I'm not sure how such a cron job would scale with lots of files (potentially thousands) added and deleted on a regular basis. Cron jobs also require cpu time and harddisk access - even more so if I'd want a finer resolution than 5 minutes. I'd rather rely entirely on the filesystem itself. Just create a file with the appropriate mtime (with a resolution of seconds rather than minutes) and then forget about it - as nginx could take care of the rest. Replicating / scaling would also be very straightforward. Well, just an idea. Is probably not very useful for anyone else, so I don't expect much support for my feature request :) Will probably end up implementing it myself. Hendrik > Date: Sun, 10 Feb 2008 13:13:51 -0800 > From: eliott at cactuswax.net > To: nginx at sysoev.ru > Subject: Re: Nginx feature request > > How about a cron job that runs every 5 minutes, and sets a file with a > future mtime as non readable, and a past mtime as readable? > _________________________________________________________________ Post free property ads on Yello Classifieds now! www.yello.in http://ss1.richmedia.in/recurl.asp?pid=219 -------------- next part -------------- An HTML attachment was scrubbed... 
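eliott's cron approach can be sketched in a few lines. The function below is a hypothetical illustration (the name is mine), meant to be run periodically: it toggles the world-read bit based on each file's mtime, and nginx's ordinary permission handling does the rest.

```python
import os
import stat
import time


def publish_by_mtime(root):
    """Clear the world-read bit on files whose mtime is still in the
    future; restore it once the mtime has passed."""
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if os.path.getmtime(path) > now:
                os.chmod(path, mode & ~stat.S_IROTH)  # not yet public
            else:
                os.chmod(path, mode | stat.S_IROTH)   # publishable
```

A crontab entry running this every five minutes gives the resolution eliott mentions; as Hendrik notes above, each run still has to walk the whole tree, which is the scaling drawback of the workaround.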
URL: From hendrik.hardeman at hotmail.com Mon Feb 11 01:44:17 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Mon, 11 Feb 2008 04:14:17 +0530 Subject: Nginx feature request In-Reply-To: <47AF7737.2050801@libero.it> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> <47AF7737.2050801@libero.it> Message-ID: Manlio, > > Then you should check http://hg.mperillo.ath.cx/nginx/mod_wsgi/! > Will check it out ! > Not sure about this, WSGI is a low level interface, well designed. > I think it is better then any other specific purpose API. > As I said, I have no experience with WSGI. Will try and see if I can compare with my existing libevent-based solution. > > Also, you should give the elliot suggestion a change. > Removing the read permission bit from a file is a good solution, and > with the help of cron it should resolve your problem. > Have replied to eliott's suggestion. As I mentioned there, the solution seems workable, but has its drawbacks. Is definitely an option for a temporary solution though. Hendrik _________________________________________________________________ Post free property ads on Yello Classifieds now! www.yello.in http://ss1.richmedia.in/recurl.asp?pid=221 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.javorski at gmail.com Mon Feb 11 02:41:18 2008 From: mike.javorski at gmail.com (Mike Javorski) Date: Sun, 10 Feb 2008 15:41:18 -0800 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: <13c357830802101214m118bce3cifd2d9f4c6884f9b2@mail.gmail.com> References: <78130054.20080210124233@gostats.ru> <13c357830802101214m118bce3cifd2d9f4c6884f9b2@mail.gmail.com> Message-ID: I tried that (iirc). The issue is that location / is processed first (since it's not a regex). but the regex doesn't seem to apply after the index file options are tacked on the end. 
On Feb 10, 2008 12:14 PM, Kiril Angov wrote: > Why don't you put > > location ~ \.php$ { > fastcgi_pass php-fcgi-pool; > } > > > before "location /" so that it can match first? > > Kupo > > > > On Feb 10, 2008 1:51 AM, Mike Javorski wrote: > > Thanks Denis. The problem w/ your solution is there is lots of > > directories. I don't want to have to create entries for each. I > > suppose I could to a regex for .*/$ but some of the directories have > > index.html instead of index.php, and I need to support that as well > > :-(. > > > > Any other options/suggestions? > > > > - mike > > > > > > > > > > On Feb 9, 2008 10:42 PM, Denis F. Latypoff wrote: > > > Hello Mike, > > > > > > > > > Sunday, February 10, 2008, 12:32:14 PM, you wrote: > > > > > > > I have nginx set up as a load balancer in front of two machines > > > > running fastcgi/php, and nginx for static content. The desired goal is > > > > to have all php pages (including the site index pages of .*/index.php) > > > > processed by the fastcgi/php upstream, and everything else provided by > > > > the static servers. The following is what I have and what I believe > > > > should have worked, but it appears to run all directory paths via the > > > > static rule, rather than the php rule which matches the index. > > > > > > > To sum up: > > > > / -- via static system (WRONG) > > > > /index.php -- via fastcgi/php system (RIGHT) > > > > /blah/ -- via static system (WRONG) > > > > /blah/index.php -- via fastcgi/php system (RIGHT) > > > > > > > nginx version is 0.6.25, Help! 
:-) > > > > > > > tia, > > > > > > > - mike > > > > > > > My Config File (the relevent bits anyway): > > > > --------------------------------------------------- > > > > > > > http { > > > > upstream static-pool { > > > > server 192.168.7.40:80; > > > > server 192.168.7.41:80; > > > > } > > > > > > > upstream php-fcgi-pool { > > > > server 192.168.7.40:7000; > > > > server 192.168.7.41:7000; > > > > } > > > > > > > server { > > > > listen 80; > > > > root /website/htdocs; > > > > index index.php; > > > > fastcgi_index index.php; > > > > include /etc/nginx/fastcgi_params; > > > > > > > location / { > > > > proxy_pass http://static-pool/website/htdocs/; > > > > } > > > > > > + location = / { > > > + fastcgi_pass php-fcgi-pool; > > > + } > > > > > > + location = /blah/ { > > > + fastcgi_pass php-fcgi-pool; > > > + } > > > > > > > > > > location ~ \.php$ { > > > > fastcgi_pass php-fcgi-pool; > > > > } > > > > } > > > > } > > > > > > > > > > > > -- > > > Best regards, > > > Denis mailto:denis at gostats.ru > > > > > > > > > > > > > > > From den.lists at gmail.com Mon Feb 11 06:28:09 2008 From: den.lists at gmail.com (Denis S. Filimonov) Date: Sun, 10 Feb 2008 22:28:09 -0500 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: References: <13c357830802101214m118bce3cifd2d9f4c6884f9b2@mail.gmail.com> Message-ID: <200802102228.09992.den.lists@gmail.com> I'd try something like the following: location / { if ($request_filename !~ "(\.php|/)$") { proxy_pass http://static-pool/website/htdocs/; } } set $dir_index index.php; location ~ /$ { set $index_php $root$uri$dir_index; if (-f $index_php) { fastcgi_pass php-fcgi-pool; break; } proxy_pass http://static-pool/website/htdocs/; } location ~ \.php$ { fastcgi_pass php-fcgi-pool; } On Sunday 10 February 2008 18:41:18 Mike Javorski wrote: > I tried that (iirc). The issue is that location / is processed first > (since it's not a regex). 
but the regex doesn't seem to apply after > the index file options are tacked on the end. > > On Feb 10, 2008 12:14 PM, Kiril Angov wrote: > > Why don't you put > > > > location ~ \.php$ { > > fastcgi_pass php-fcgi-pool; > > } > > > > > > before "location /" so that it can match first? > > > > Kupo > > > > On Feb 10, 2008 1:51 AM, Mike Javorski wrote: > > > Thanks Denis. The problem w/ your solution is there is lots of > > > directories. I don't want to have to create entries for each. I > > > suppose I could to a regex for .*/$ but some of the directories have > > > index.html instead of index.php, and I need to support that as well > > > > > > :-(. > > > > > > Any other options/suggestions? > > > > > > - mike > > > > > > On Feb 9, 2008 10:42 PM, Denis F. Latypoff wrote: > > > > Hello Mike, > > > > > > > > Sunday, February 10, 2008, 12:32:14 PM, you wrote: > > > > > I have nginx set up as a load balancer in front of two machines > > > > > running fastcgi/php, and nginx for static content. The desired goal > > > > > is to have all php pages (including the site index pages of > > > > > .*/index.php) processed by the fastcgi/php upstream, and everything > > > > > else provided by the static servers. The following is what I have > > > > > and what I believe should have worked, but it appears to run all > > > > > directory paths via the static rule, rather than the php rule which > > > > > matches the index. > > > > > > > > > > To sum up: > > > > > / -- via static system (WRONG) > > > > > /index.php -- via fastcgi/php system (RIGHT) > > > > > /blah/ -- via static system (WRONG) > > > > > /blah/index.php -- via fastcgi/php system (RIGHT) > > > > > > > > > > nginx version is 0.6.25, Help! 
:-) > > > > > > > > > > tia, > > > > > > > > > > - mike > > > > > > > > > > My Config File (the relevent bits anyway): > > > > > --------------------------------------------------- > > > > > > > > > > http { > > > > > upstream static-pool { > > > > > server 192.168.7.40:80; > > > > > server 192.168.7.41:80; > > > > > } > > > > > > > > > > upstream php-fcgi-pool { > > > > > server 192.168.7.40:7000; > > > > > server 192.168.7.41:7000; > > > > > } > > > > > > > > > > server { > > > > > listen 80; > > > > > root /website/htdocs; > > > > > index index.php; > > > > > fastcgi_index index.php; > > > > > include /etc/nginx/fastcgi_params; > > > > > > > > > > location / { > > > > > proxy_pass http://static-pool/website/htdocs/; > > > > > } > > > > > > > > + location = / { > > > > + fastcgi_pass php-fcgi-pool; > > > > + } > > > > > > > > + location = /blah/ { > > > > + fastcgi_pass php-fcgi-pool; > > > > + } > > > > > > > > > location ~ \.php$ { > > > > > fastcgi_pass php-fcgi-pool; > > > > > } > > > > > } > > > > > } > > > > > > > > -- > > > > Best regards, > > > > Denis mailto:denis at gostats.ru -- Denis. From is at rambler-co.ru Mon Feb 11 09:49:01 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 11 Feb 2008 09:49:01 +0300 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: References: Message-ID: <20080211064901.GA64410@rambler-co.ru> On Sat, Feb 09, 2008 at 10:32:14PM -0800, Mike Javorski wrote: > I have nginx set up as a load balancer in front of two machines > running fastcgi/php, and nginx for static content. The desired goal is > to have all php pages (including the site index pages of .*/index.php) > processed by the fastcgi/php upstream, and everything else provided by > the static servers. The following is what I have and what I believe > should have worked, but it appears to run all directory paths via the > static rule, rather than the php rule which matches the index. 
> > To sum up: > / -- via static system (WRONG) > /index.php -- via fastcgi/php system (RIGHT) > /blah/ -- via static system (WRONG) > /blah/index.php -- via fastcgi/php system (RIGHT) > > nginx version is 0.6.25, Help! :-) > > tia, > > - mike > > My Config File (the relevent bits anyway): > --------------------------------------------------- > > http { > upstream static-pool { > server 192.168.7.40:80; > server 192.168.7.41:80; > } > > upstream php-fcgi-pool { > server 192.168.7.40:7000; > server 192.168.7.41:7000; > } > > server { > listen 80; > root /website/htdocs; > index index.php; > fastcgi_index index.php; > include /etc/nginx/fastcgi_params; > > location / { > proxy_pass http://static-pool/website/htdocs/; > } + location ~ /$ { + fastcgi_pass php-fcgi-pool; + } Then all "..../" will be handled by php-fcgi-pool. > location ~ \.php$ { > fastcgi_pass php-fcgi-pool; > } > } > } -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Mon Feb 11 09:50:26 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 11 Feb 2008 09:50:26 +0300 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: <13c357830802101214m118bce3cifd2d9f4c6884f9b2@mail.gmail.com> References: <78130054.20080210124233@gostats.ru> <13c357830802101214m118bce3cifd2d9f4c6884f9b2@mail.gmail.com> Message-ID: <20080211065026.GB64410@rambler-co.ru> On Sun, Feb 10, 2008 at 03:14:36PM -0500, Kiril Angov wrote: > Why don't you put > > location ~ \.php$ { > fastcgi_pass php-fcgi-pool; > } > > before "location /" so that it can match first? No. See the processing order: http://wiki.codemongers.com/NginxHttpCoreModule#location > Kupo > > On Feb 10, 2008 1:51 AM, Mike Javorski wrote: > > > Thanks Denis. The problem w/ your solution is there is lots of > > directories. I don't want to have to create entries for each. I > > suppose I could to a regex for .*/$ but some of the directories have > > index.html instead of index.php, and I need to support that as well > > :-(. 
> > > > Any other options/suggestions? > > > > - mike > > > > On Feb 9, 2008 10:42 PM, Denis F. Latypoff wrote: > > > Hello Mike, > > > > > > > > > Sunday, February 10, 2008, 12:32:14 PM, you wrote: > > > > > > > I have nginx set up as a load balancer in front of two machines > > > > running fastcgi/php, and nginx for static content. The desired goal is > > > > to have all php pages (including the site index pages of .*/index.php) > > > > processed by the fastcgi/php upstream, and everything else provided by > > > > the static servers. The following is what I have and what I believe > > > > should have worked, but it appears to run all directory paths via the > > > > static rule, rather than the php rule which matches the index. > > > > > > > To sum up: > > > > / -- via static system (WRONG) > > > > /index.php -- via fastcgi/php system (RIGHT) > > > > /blah/ -- via static system (WRONG) > > > > /blah/index.php -- via fastcgi/php system (RIGHT) > > > > > > > nginx version is 0.6.25, Help! 
:-) > > > > > > > tia, > > > > > > > - mike > > > > > > > My Config File (the relevent bits anyway): > > > > --------------------------------------------------- > > > > > > > http { > > > > upstream static-pool { > > > > server 192.168.7.40:80; > > > > server 192.168.7.41:80; > > > > } > > > > > > > upstream php-fcgi-pool { > > > > server 192.168.7.40:7000; > > > > server 192.168.7.41:7000; > > > > } > > > > > > > server { > > > > listen 80; > > > > root /website/htdocs; > > > > index index.php; > > > > fastcgi_index index.php; > > > > include /etc/nginx/fastcgi_params; > > > > > > > location / { > > > > proxy_pass http://static-pool/website/htdocs/; > > > > } > > > > > > + location = / { > > > + fastcgi_pass php-fcgi-pool; > > > + } > > > > > > + location = /blah/ { > > > + fastcgi_pass php-fcgi-pool; > > > + } > > > > > > > > > > location ~ \.php$ { > > > > fastcgi_pass php-fcgi-pool; > > > > } > > > > } > > > > } > > > > > > > > > > > > -- > > > Best regards, > > > Denis mailto:denis at gostats.ru > > > > > > > > > > > > > -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Mon Feb 11 09:53:36 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 11 Feb 2008 09:53:36 +0300 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: <200802102228.09992.den.lists@gmail.com> References: <13c357830802101214m118bce3cifd2d9f4c6884f9b2@mail.gmail.com> <200802102228.09992.den.lists@gmail.com> Message-ID: <20080211065336.GC64410@rambler-co.ru> On Sun, Feb 10, 2008 at 10:28:09PM -0500, Denis S. 
Filimonov wrote: > I'd try something like the following: > > location / { > if ($request_filename !~ "(\.php|/)$") { > proxy_pass http://static-pool/website/htdocs/; > } > } > > set $dir_index index.php; > > location ~ /$ { > set $index_php $root$uri$dir_index; > if (-f $index_php) { > fastcgi_pass php-fcgi-pool; > break; > } > proxy_pass http://static-pool/website/htdocs/; > } > > location ~ \.php$ { > fastcgi_pass php-fcgi-pool; > } There is no need of such spaghetti configuration: all tests can be done via location regex and index directive. > On Sunday 10 February 2008 18:41:18 Mike Javorski wrote: > > I tried that (iirc). The issue is that location / is processed first > > (since it's not a regex). but the regex doesn't seem to apply after > > the index file options are tacked on the end. > > > > On Feb 10, 2008 12:14 PM, Kiril Angov wrote: > > > Why don't you put > > > > > > location ~ \.php$ { > > > fastcgi_pass php-fcgi-pool; > > > } > > > > > > > > > before "location /" so that it can match first? > > > > > > Kupo > > > > > > On Feb 10, 2008 1:51 AM, Mike Javorski wrote: > > > > Thanks Denis. The problem w/ your solution is there is lots of > > > > directories. I don't want to have to create entries for each. I > > > > suppose I could to a regex for .*/$ but some of the directories have > > > > index.html instead of index.php, and I need to support that as well > > > > > > > > :-(. > > > > > > > > Any other options/suggestions? > > > > > > > > - mike > > > > > > > > On Feb 9, 2008 10:42 PM, Denis F. Latypoff wrote: > > > > > Hello Mike, > > > > > > > > > > Sunday, February 10, 2008, 12:32:14 PM, you wrote: > > > > > > I have nginx set up as a load balancer in front of two machines > > > > > > running fastcgi/php, and nginx for static content. The desired goal > > > > > > is to have all php pages (including the site index pages of > > > > > > .*/index.php) processed by the fastcgi/php upstream, and everything > > > > > > else provided by the static servers. 
The following is what I have > > > > > > and what I believe should have worked, but it appears to run all > > > > > > directory paths via the static rule, rather than the php rule which > > > > > > matches the index. > > > > > > > > > > > > To sum up: > > > > > > / -- via static system (WRONG) > > > > > > /index.php -- via fastcgi/php system (RIGHT) > > > > > > /blah/ -- via static system (WRONG) > > > > > > /blah/index.php -- via fastcgi/php system (RIGHT) > > > > > > > > > > > > nginx version is 0.6.25, Help! :-) > > > > > > > > > > > > tia, > > > > > > > > > > > > - mike > > > > > > > > > > > > My Config File (the relevent bits anyway): > > > > > > --------------------------------------------------- > > > > > > > > > > > > http { > > > > > > upstream static-pool { > > > > > > server 192.168.7.40:80; > > > > > > server 192.168.7.41:80; > > > > > > } > > > > > > > > > > > > upstream php-fcgi-pool { > > > > > > server 192.168.7.40:7000; > > > > > > server 192.168.7.41:7000; > > > > > > } > > > > > > > > > > > > server { > > > > > > listen 80; > > > > > > root /website/htdocs; > > > > > > index index.php; > > > > > > fastcgi_index index.php; > > > > > > include /etc/nginx/fastcgi_params; > > > > > > > > > > > > location / { > > > > > > proxy_pass http://static-pool/website/htdocs/; > > > > > > } > > > > > > > > > > + location = / { > > > > > + fastcgi_pass php-fcgi-pool; > > > > > + } > > > > > > > > > > + location = /blah/ { > > > > > + fastcgi_pass php-fcgi-pool; > > > > > + } > > > > > > > > > > > location ~ \.php$ { > > > > > > fastcgi_pass php-fcgi-pool; > > > > > > } > > > > > > } > > > > > > } > > > > > > > > > > -- > > > > > Best regards, > > > > > Denis mailto:denis at gostats.ru > -- > Denis. > -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Mon Feb 11 11:39:45 2008 From: lists at ruby-forum.com (Bbq Plate) Date: Mon, 11 Feb 2008 09:39:45 +0100 Subject: static files @ 50 req a second. using nginx and 2 mongrels? 
Message-ID: hi, i have a static page about.html in my railsapp/public folder. when running a httperf, i am only getting around 50 requests a second. shouldnt it be a few hundred per second? -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Mon Feb 11 11:44:02 2008 From: lists at ruby-forum.com (Bbq Plate) Date: Mon, 11 Feb 2008 09:44:02 +0100 Subject: static files @ 50 req a second. using nginx and 2 mongre In-Reply-To: References: Message-ID: actually,,, when getting the 50 reqs a second, i am on a different computer on my LAN. however, when performing the httperf on the same computer as nginx, i am now getting 5000 req/second. -- Posted via http://www.ruby-forum.com/. From frank at openminds.be Mon Feb 11 11:45:43 2008 From: frank at openminds.be (Frank Louwers) Date: Mon, 11 Feb 2008 09:45:43 +0100 Subject: static files @ 50 req a second. using nginx and 2 mongrels? In-Reply-To: References: Message-ID: <68BE0249-944A-4B81-BD02-0EB5C6CA4EB7@openminds.be> On 11 Feb 2008, at 09:39, Bbq Plate wrote: > hi, > i have a static page about.html in my railsapp/public folder. when > running a httperf, i am only getting around 50 requests a second. > shouldnt it be a few hundred per second? Yes, it should be way higher. Do you have configured your nginx to serve the statics, or are the mongrels serving them? Mongrel isn't good at serving static conent, use nginx for that. Frank > > -- > Posted via http://www.ruby-forum.com/. > From is at rambler-co.ru Mon Feb 11 11:48:00 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 11 Feb 2008 11:48:00 +0300 Subject: static files @ 50 req a second. using nginx and 2 mongre In-Reply-To: References: Message-ID: <20080211084800.GG64410@rambler-co.ru> On Mon, Feb 11, 2008 at 09:44:02AM +0100, Bbq Plate wrote: > when getting the 50 reqs a second, i am on a different computer on my > LAN. however, when performing the httperf on the same computer as nginx, > i am now getting 5000 req/second. 
Probably you set tcp_nodelay off; It should be on (default). -- Igor Sysoev http://sysoev.ru/en/ From adam at digitalagemedia.net Mon Feb 11 11:48:45 2008 From: adam at digitalagemedia.net (Adam Michaels) Date: Mon, 11 Feb 2008 00:48:45 -0800 Subject: static files @ 50 req a second. using nginx and 2 mongrels? References: Message-ID: <01b701c86c8a$e98f7a00$7200a8c0@invasion> Make sure nginx is configured to serve the content? ----- Original Message ----- From: "Bbq Plate" To: Sent: Monday, February 11, 2008 12:39 AM Subject: static files @ 50 req a second. using nginx and 2 mongrels? > hi, > i have a static page about.html in my railsapp/public folder. when > running a httperf, i am only getting around 50 requests a second. > shouldnt it be a few hundred per second? > -- > Posted via http://www.ruby-forum.com/. > > > From foxx at freemail.gr Mon Feb 11 13:04:35 2008 From: foxx at freemail.gr (Athan Dimoy) Date: Mon, 11 Feb 2008 12:04:35 +0200 Subject: Location regex issue. Message-ID: I have a problem with regex in a location used to deny access to some Drupal directories and files. location ~ \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ { return 404; } First part up to OR operator (|) works fine but seems to ignore the ^ operator (right next to |). Trying to isolate the problem I used the following location ~ ^code-style\.pl$ { return 404; } Unfortunately code-style.pl was still accessible. Is ^ (begin of) operator supported in location context? Thanks, Athan From is at rambler-co.ru Mon Feb 11 13:18:50 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 11 Feb 2008 13:18:50 +0300 Subject: Location regex issue. In-Reply-To: References: Message-ID: <20080211101850.GA70984@rambler-co.ru> On Mon, Feb 11, 2008 at 12:04:35PM +0200, Athan Dimoy wrote: > I have a problem with regex in a location used to deny access to some > Drupal directories and files.
> > location ~ > \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ > { > return 404; > } > > First part up to OR operator (|) works fine but seems to ignore the ^ > operator (right next to |). > > Trying to isolate the problem I used the following > > location ~ ^code-style\.pl$ { > return 404; > } > > Unfortunately code-style.pl was still accessible. Is ^ (begin of) operator > supported in location context? Which beginning do you mean? The beginning of the URI, or the beginning of a directory level? "^" is the beginning of the whole text, and all URIs start with '/'. You probably need location ~ /code-style\.pl$ { -- Igor Sysoev http://sysoev.ru/en/ From foxx at freemail.gr Mon Feb 11 13:31:18 2008 From: foxx at freemail.gr (Athan Dimoy) Date: Mon, 11 Feb 2008 12:31:18 +0200 Subject: Location regex issue. In-Reply-To: <20080211101850.GA70984@rambler-co.ru> References: <20080211101850.GA70984@rambler-co.ru> Message-ID: "Igor Sysoev" wrote in message news:20080211101850.GA70984 at rambler-co.ru... > > Which beginning do you mean? The beginning of the URI, or the beginning > of a directory level? "^" is the beginning of the whole text, and all > URIs start with '/'. My mistake! You're right Igor, in that case ^ means the beginning of the whole text, not just the filename part. > You probably need > location ~ /code-style\.pl$ { That works. Thanks, Athan From lists at ruby-forum.com Mon Feb 11 17:40:34 2008 From: lists at ruby-forum.com (Bbq Plate) Date: Mon, 11 Feb 2008 15:40:34 +0100 Subject: static files @ 50 req a second. using nginx and 2 mongre In-Reply-To: <01b701c86c8a$e98f7a00$7200a8c0@invasion> References: <01b701c86c8a$e98f7a00$7200a8c0@invasion> Message-ID: <30ce412479fd0d4d62d9973c74a051f2@ruby-forum.com> hi, here is the config i am using. since i am getting such a large return for static requests (5000 a second) on my localhost, should that not indicate the conf is set to serve static by nginx?
when testing static content on localhost on apache, i was getting 500 requests a second. nginx is ridiculously quick! however, my other computer on the LAN connected via 802.11b, seems to get max rate of 50 req/sec on both nginx and apache. i guess my bandwidth is the bottleneck? i tried setting tcp_nodelay to on, however it doesn't change anything performing the httperf test. thanks for any help! ## # Basic config modified only slightly from http://brainspl.at/articles/2007/01/03/new-nginx-conf-with-optimizations # # Turns SSI on and uses locations as defined in install-nginx.sh script. # # See also http://topfunky.net/svn/shovel/nginx # # USE AT YOUR OWN RISK! # user and group to run as # user deploy deploy; # number of nginx workers worker_processes 1; # pid of nginx master process pid /usr/local/nginx/logs/nginx.pid; # Number of worker connections. 1024 is a good default events { worker_connections 1024; } # start the http module where we config http access. http { # pull in mime-types. You can break out your config # into as many include's as you want to make it cleaner include /usr/local/nginx/conf/mime.types; # set a default type for the rare situation that # nothing matches from the mime-type include default_type application/octet-stream; # configure log format log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # main access log access_log /usr/local/nginx/logs/nginx_access.log main; # main error log error_log /usr/local/nginx/logs/nginx_error.log debug; # no sendfile on OSX sendfile on; # These are good default values. tcp_nopush on; tcp_nodelay on; #off; # output compression saves bandwidth gzip on; gzip_http_version 1.0; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; # this is where you define your mongrel clusters.
# you need one of these blocks for each cluster # and each one needs its own name to refer to it later. # # Rename to mongrel_site1, mongrel_site2, etc if using # virtual hosts. upstream mongrel { server 127.0.0.1:8002; server 127.0.0.1:8003; server 127.0.0.1:7000; } # Copy this section on down and put into a separate file # if you want to organize your virtual hosts in files. # # Then include here with # # include /usr/local/nginx/conf/vhosts/my_subdomain.conf # # the server directive is nginx's virtual host directive. server { # port to listen on. Can also be set to an IP:PORT. listen 3050; # Set the max size for file uploads to 50Mb client_max_body_size 50M; # sets the domain[s] that this vhost server requests for # server_name www.[engineyard].com [engineyard].com; # doc root root /var/www/app3x/public; # vhost specific access log access_log /var/www/app3x/nginx.vhost.access.log main; # NOTE Uncomment and edit to redirect all subdomains back to domain.com # Useful for sending .net and .org variants back to your site. # if ($host !~ ^domain\.com$) { # rewrite ^.+ http://domain.com$uri permanent; # break; # } # this rewrites all the requests to the maintenance.html # page if it exists in the doc root. 
This is for capistrano's # disable web task if (-f $document_root/system/maintenance.html) { rewrite ^(.*)$ /system/maintenance.html last; break; } location / { # Uncomment to allow server side includes so nginx can # post-process Rails content ## ssi on; # needed to forward user's IP address to rails proxy_set_header X-Real-IP $remote_addr; # needed for HTTPS proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect false; proxy_max_temp_file_size 0; # If the file exists as a static file serve it directly without # running all the other rewite tests on it if (-f $request_filename) { break; } # check for index.html for directory index # if its there on the filesystem then rewite # the url to add /index.html to the end of it # and then break to send it to the next config rules. if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } # this is the meat of the rails page caching config # it adds .html to the end of the url and then checks # the filesystem for that file. If it exists, then we # rewite the url to have explicit .html on the end # and then send it on its way to the next config rule. # if there is no file on the fs then it sets all the # necessary headers and proxies to our upstream mongrels if (-f $request_filename.html) { rewrite (.*) $1.html break; } if (!-f $request_filename) { # Use other cluster name here if you are running multiple # virtual hosts. proxy_pass http://mongrel; break; } } error_page 500 502 503 504 /500.html; location = /500.html { root /var/www/app3x/public; } } # This server is setup for ssl. Uncomment if # you are using ssl as well as port 80. # server { # # port to listen on. 
Can also be set to an IP:PORT # listen 443; # # # Set the max size for file uploads to 50Mb # client_max_body_size 50M; # # # sets the domain[s] that this vhost server requests for # # server_name www.[engineyard].com [engineyard].com; # # # doc root # root /var/www/apps/mysite.com/current/public; # # # vhost specific access log # access_log /var/www/apps/mysite.com/shared/log/nginx.vhost.access.log main; # # # NOTE See also http://blog.imperialdune.com/2007/3/31/setting-up-godaddy-turbo-ssl-on-nginx # # if you are buying a GoDaddy SSL cert. # ssl on; # ssl_certificate /var/keys/domain.com.crt; # ssl_certificate_key /var/keys/domain.com.key; # # # NOTE Uncomment and edit to redirect all subdomains back to domain.com # # Useful for sending .net and .org variants back to your site. # if ($host !~ ^domain\.com$) { # rewrite ^.+ https://domain.com$uri permanent; # break; # } # # # this rewrites all the requests to the maintenance.html # # page if it exists in the doc root. This is for capistrano's # # disable web task # if (-f $document_root/system/maintenance.html) { # rewrite ^(.*)$ /system/maintenance.html last; # break; # } # # location / { # # needed to forward user's IP address to rails # proxy_set_header X-Real-IP $remote_addr; # # # needed for HTTPS # proxy_set_header X_FORWARDED_PROTO https; # # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # proxy_set_header Host $http_host; # proxy_redirect false; # proxy_max_temp_file_size 0; # # # If the file exists as a static file serve it directly without # # running all the other rewite tests on it # if (-f $request_filename) { # break; # } # # # check for index.html for directory index # # if its there on the filesystem then rewite # # the url to add /index.html to the end of it # # and then break to send it to the next config rules. 
# if (-f $request_filename/index.html) { # rewrite (.*) $1/index.html break; # } # # # this is the meat of the rails page caching config # # it adds .html to the end of the url and then checks # # the filesystem for that file. If it exists, then we # # rewite the url to have explicit .html on the end # # and then send it on its way to the next config rule. # # if there is no file on the fs then it sets all the # # necessary headers and proxies to our upstream mongrels # if (-f $request_filename.html) { # rewrite (.*) $1.html break; # } # # if (!-f $request_filename) { # proxy_pass http://mongrel; # break; # } # } # # error_page 500 502 503 504 /500.html; # location = /500.html { # root /var/www/apps/mysite.com/current/public; # } # } } -- Posted via http://www.ruby-forum.com/. From is at rambler-co.ru Mon Feb 11 18:04:42 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 11 Feb 2008 18:04:42 +0300 Subject: static files @ 50 req a second. using nginx and 2 mongre In-Reply-To: <30ce412479fd0d4d62d9973c74a051f2@ruby-forum.com> References: <01b701c86c8a$e98f7a00$7200a8c0@invasion> <30ce412479fd0d4d62d9973c74a051f2@ruby-forum.com> Message-ID: <20080211150442.GC70984@rambler-co.ru> On Mon, Feb 11, 2008 at 03:40:34PM +0100, Bbq Plate wrote: > hi, here is the config i am using. since i am getting such a large > return for static requests (5000 a second) on my localhost, should that > not indicate the conf is set to serve static by nginx? when testing > static content on localhost on apache, i was getting 500 requests a > second. nginx is ridiculously quick! > > however, my other computer on the LAN connected via 802.11b, seems to > get max rate of 50 req/sec on both nginx and apache. i guess my > bandwidth is the bottleneck? i tried setting tcpnodelay to on, however > it doesnt change anything performing the httperf test. Yes, 802.11b is 11Mbit/s network with big latencies. 
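A back-of-the-envelope check makes that answer concrete (the numbers below are illustrative assumptions, not measurements from this thread): even before counting latency, raw 802.11b bandwidth cannot carry many requests per second once responses are a few kilobytes each.

```python
# Rough sanity check: requests/s that raw 802.11b bandwidth can carry.
# Both numbers are assumptions for illustration, not measured values.
usable_bps = 6_000_000      # ~6 Mbit/s of real-world throughput on an 11 Mbit/s link
response_bits = 10_000 * 8  # assume ~10 KB per response, headers included

max_req_per_sec = usable_bps / response_bits
print(round(max_req_per_sec))  # 75 -- the same order of magnitude as the observed 50
```

With per-request round trips on a high-latency wireless link on top of this, the observed 50 req/sec is unsurprising, and no nginx tuning will change it.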
-- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Mon Feb 11 18:20:03 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 11 Feb 2008 18:20:03 +0300 Subject: nginx-0.6.26 Message-ID: <20080211152003.GE70984@rambler-co.ru> Changes with nginx 0.6.26 11 Feb 2008 *) Bugfix: the "proxy_store" and "fastcgi_store" directives did not check a response length. *) Bugfix: a segmentation fault occurred in worker process, if big value was used in a "expires" directive. Thanks to Joaquin Cuenca Abela. *) Bugfix: nginx incorrectly detected cache line size on Pentium 4. Thanks to Gena Makhomed. *) Bugfix: in proxied or FastCGI subrequests a client original method was used instead of the GET method. *) Bugfix: socket leak in HTTPS mode if deferred accept was used. Thanks to Ben Maurer. *) Bugfix: nginx issued the bogus error message "SSL_shutdown() failed (SSL: )"; bug appeared in 0.6.23. *) Bugfix: in HTTPS mode requests might fail with the "bad write retry" error; bug appeared in 0.6.23. -- Igor Sysoev http://sysoev.ru/en/ From ilya at fortehost.com Mon Feb 11 23:56:16 2008 From: ilya at fortehost.com (Ilya Grigorik) Date: Mon, 11 Feb 2008 20:56:16 +0000 (UTC) Subject: dynamic =?utf-8?b?ZGVmYXVsdF90eXBl?= References: <20080206021635.GC68975@mdounin.ru> Message-ID: Thanks Max. After some trial and error I got it all working: http://www.igvita.com/2008/02/11/nginx-and-memcached-a-400-boost/ Thanks for the tips. Cheers, Ilya From mike.javorski at gmail.com Tue Feb 12 00:56:07 2008 From: mike.javorski at gmail.com (Mike Javorski) Date: Mon, 11 Feb 2008 13:56:07 -0800 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: References: Message-ID: I've gotten things to work OK w/ the following. It means I can't have more than one type of directory index, but it's better than nothing. Thanks for the suggestions all, and Igor, thanks for nginx period :-). 
- mike server { listen 80; root /website/htdocs; index index.php; fastcgi_index index.php; include /etc/nginx/fastcgi_params; location / { proxy_pass http://static-pool/website/htdocs/; } location ~ (\.php|/)$ { fastcgi_pass php-fcgi-pool; } On Feb 9, 2008 10:32 PM, Mike Javorski wrote: > I have nginx set up as a load balancer in front of two machines > running fastcgi/php, and nginx for static content. The desired goal is > to have all php pages (including the site index pages of .*/index.php) > processed by the fastcgi/php upstream, and everything else provided by > the static servers. The following is what I have and what I believe > should have worked, but it appears to run all directory paths via the > static rule, rather than the php rule which matches the index. > > To sum up: > / -- via static system (WRONG) > /index.php -- via fastcgi/php system (RIGHT) > /blah/ -- via static system (WRONG) > /blah/index.php -- via fastcgi/php system (RIGHT) > > nginx version is 0.6.25, Help! :-) > > tia, > > - mike > > My Config File (the relevent bits anyway): > --------------------------------------------------- > > http { > upstream static-pool { > server 192.168.7.40:80; > server 192.168.7.41:80; > } > > upstream php-fcgi-pool { > server 192.168.7.40:7000; > server 192.168.7.41:7000; > } > > server { > listen 80; > root /website/htdocs; > index index.php; > fastcgi_index index.php; > include /etc/nginx/fastcgi_params; > > location / { > proxy_pass http://static-pool/website/htdocs/; > } > > location ~ \.php$ { > fastcgi_pass php-fcgi-pool; > } > } > } > From is at rambler-co.ru Tue Feb 12 01:09:16 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 12 Feb 2008 01:09:16 +0300 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: References: Message-ID: <20080211220916.GC84198@rambler-co.ru> On Mon, Feb 11, 2008 at 01:56:07PM -0800, Mike Javorski wrote: > I've gotten things to work OK w/ the following. 
It means I can't have > more than one type of directory index, but it's better than nothing. > Thanks for the suggestions all, and Igor, thanks for nginx period :-). For local static files nginx can try several index files. However, there is no way to learn existent index files on remote host. > - mike > > server { > listen 80; > root /website/htdocs; > index index.php; > fastcgi_index index.php; > include /etc/nginx/fastcgi_params; > > location / { > proxy_pass http://static-pool/website/htdocs/; > } > > location ~ (\.php|/)$ { > fastcgi_pass php-fcgi-pool; > } -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Tue Feb 12 01:15:23 2008 From: lists at ruby-forum.com (Bbq Plate) Date: Mon, 11 Feb 2008 23:15:23 +0100 Subject: static files @ 50 req a second. using nginx and 2 mongre In-Reply-To: <20080211150442.GC70984@rambler-co.ru> References: <01b701c86c8a$e98f7a00$7200a8c0@invasion> <30ce412479fd0d4d62d9973c74a051f2@ruby-forum.com> <20080211150442.GC70984@rambler-co.ru> Message-ID: thank you Igor! Igor Sysoev wrote: > On Mon, Feb 11, 2008 at 03:40:34PM +0100, Bbq Plate wrote: > >> hi, here is the config i am using. since i am getting such a large >> return for static requests (5000 a second) on my localhost, should that >> not indicate the conf is set to serve static by nginx? when testing >> static content on localhost on apache, i was getting 500 requests a >> second. nginx is ridiculously quick! >> >> however, my other computer on the LAN connected via 802.11b, seems to >> get max rate of 50 req/sec on both nginx and apache. i guess my >> bandwidth is the bottleneck? i tried setting tcpnodelay to on, however >> it doesnt change anything performing the httperf test. > > Yes, 802.11b is 11Mbit/s network with big latencies. -- Posted via http://www.ruby-forum.com/. From den.lists at gmail.com Tue Feb 12 01:18:25 2008 From: den.lists at gmail.com (Denis S. 
Filimonov) Date: Mon, 11 Feb 2008 17:18:25 -0500 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: <20080211065336.GC64410@rambler-co.ru> References: <200802102228.09992.den.lists@gmail.com> <20080211065336.GC64410@rambler-co.ru> Message-ID: <200802111718.25195.den.lists@gmail.com> On Monday 11 February 2008 01:53:36 Igor Sysoev wrote: > On Sun, Feb 10, 2008 at 10:28:09PM -0500, Denis S. Filimonov wrote: > > I'd try something like the following: > > > > location / { > > if ($request_filename !~ "(\.php|/)$") { > > proxy_pass http://static-pool/website/htdocs/; > > } > > } > > > > set $dir_index index.php; > > > > location ~ /$ { > > set $index_php $root$uri$dir_index; > > if (-f $index_php) { > > fastcgi_pass php-fcgi-pool; > > break; > > } > > proxy_pass http://static-pool/website/htdocs/; > > } > > > > location ~ \.php$ { > > fastcgi_pass php-fcgi-pool; > > } > > There is no need of such spaghetti configuration: > all tests can be done via location regex and index directive. > I believe the spaghetti is necessary to handle the following, I quote: --- I suppose I could to a regex for .*/$ but some of the directories have index.html instead of index.php, and I need to support that as well --- In the configuration above, .*/$ locations will be passed to FastCGI only if there is an index.php and passed to the proxy otherwise. -- Denis. From is at rambler-co.ru Tue Feb 12 01:24:46 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 12 Feb 2008 01:24:46 +0300 Subject: Issue w/ nginx in hybrid static/php load balancer scenario In-Reply-To: <200802111718.25195.den.lists@gmail.com> References: <200802102228.09992.den.lists@gmail.com> <20080211065336.GC64410@rambler-co.ru> <200802111718.25195.den.lists@gmail.com> Message-ID: <20080211222446.GD84198@rambler-co.ru> On Mon, Feb 11, 2008 at 05:18:25PM -0500, Denis S. 
Filimonov wrote: > On Monday 11 February 2008 01:53:36 Igor Sysoev wrote: > > On Sun, Feb 10, 2008 at 10:28:09PM -0500, Denis S. Filimonov wrote: > > > I'd try something like the following: > > > > > > location / { > > > if ($request_filename !~ "(\.php|/)$") { > > > proxy_pass http://static-pool/website/htdocs/; > > > } > > > } > > > > > > set $dir_index index.php; > > > > > > location ~ /$ { > > > set $index_php $root$uri$dir_index; > > > if (-f $index_php) { > > > fastcgi_pass php-fcgi-pool; > > > break; > > > } > > > proxy_pass http://static-pool/website/htdocs/; > > > } > > > > > > location ~ \.php$ { > > > fastcgi_pass php-fcgi-pool; > > > } > > > > There is no need of such spaghetti configuration: > > all tests can be done via location regex and index directive. > > > > I believe the spaghetti is necessary to handle the following, I quote: > --- > I suppose I could to a regex for .*/$ but some of the directories > have index.html instead of index.php, and I need to support that as well > --- > > In the configuration above, .*/$ locations will be passed to FastCGI only if > there is an index.php and passed to the proxy otherwise. If nginx has access to these index files, then index directive tries index.php and index.html and does internal redirect to index.php or index.html. The index.php will go to FastCGI. -- Igor Sysoev http://sysoev.ru/en/ From cliff at develix.com Tue Feb 12 03:11:14 2008 From: cliff at develix.com (Cliff Wells) Date: Mon, 11 Feb 2008 16:11:14 -0800 Subject: Nginx feature request In-Reply-To: <47AF7737.2050801@libero.it> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> <47AF7737.2050801@libero.it> Message-ID: <1202775074.4636.50.camel@portableevil.develix.com> On Sun, 2008-02-10 at 23:14 +0100, Manlio Perillo wrote: > Also, you should give the elliot suggestion a change. 
> Removing the read permission bit from a file is a good solution, and > with the help of cron it should resolve your problem. I think using "at" would scale better than a cron script if there are thousands of files. Regards, Cliff From hendrik.hardeman at hotmail.com Tue Feb 12 05:29:46 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Tue, 12 Feb 2008 07:59:46 +0530 Subject: Nginx feature request In-Reply-To: <1202775074.4636.50.camel@portableevil.develix.com> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> <47AF7737.2050801@libero.it> <1202775074.4636.50.camel@portableevil.develix.com> Message-ID: Cliff, Thanks for your valuable suggestion. Setting the read permission bit with "at" does look like a solution worth investigating. "at" does indeed seem better than cron here, but it still isn't as transparent as a future file modification time. E.g. checking or resetting the time at which a particular file should become accessible seems rather complicated with atq / atrm / at. While I'm still keen on setting up a filesystem solution with a directive in Nginx, the combination of eliott's and your suggestion seems to be the best interim solution. Regards, Hendrik > Subject: Re: Nginx feature request > From: cliff at develix.com > To: nginx at sysoev.ru > Date: Mon, 11 Feb 2008 16:11:14 -0800 > > > On Sun, 2008-02-10 at 23:14 +0100, Manlio Perillo wrote: > > > Also, you should give the elliot suggestion a change. > > Removing the read permission bit from a file is a good solution, and > > with the help of cron it should resolve your problem. > > I think using "at" would scale better than a cron script if there are > thousands of files. > > Regards, > Cliff > > _________________________________________________________________ Post free property ads on Yello Classifieds now! www.yello.in http://ss1.richmedia.in/recurl.asp?pid=221 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cliff at develix.com Tue Feb 12 06:11:02 2008 From: cliff at develix.com (Cliff Wells) Date: Mon, 11 Feb 2008 19:11:02 -0800 Subject: Nginx feature request In-Reply-To: References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> <47AF7737.2050801@libero.it> <1202775074.4636.50.camel@portableevil.develix.com> Message-ID: <1202785862.4636.57.camel@portableevil.develix.com> On Tue, 2008-02-12 at 07:59 +0530, Hendrik Hardeman wrote: > Setting the read permission bit with "at" does look like a solution > worth investigating. "at" does indeed seem better than cron here, but > it still isn't as transparent as a future file modification time. E.g. > checking or resetting the time at which a particular file should > become accessible seems rather complicated with atq / atrm / at. > > While I'm still keen on setting up a filesystem solution with a > directive in Nginx, the combination of eliott's and your suggestion > seems to be the best interim solution. One more option (assuming you're on Linux) would be to write a FUSE filesystem that exposes the files you want and hides those with a future timestamp. You could mount this over the top (or rather, alongside) of your real FS. This would probably be pretty simple (assuming you start with an existing FUSE FS and modify it to your needs) although I'm not sure how much of a performance impact you'd see.
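The visibility check such a layer would apply is tiny. A minimal sketch of the rule being discussed (a hypothetical helper, not an existing nginx or FUSE feature): a file counts as published only once its modification time is in the past.

```python
import os
import tempfile
import time

def is_published(path, now=None):
    """Treat a file as visible only once its mtime is in the past (sketch)."""
    now = time.time() if now is None else now
    return os.path.getmtime(path) <= now

# demo: a file stamped one hour into the future stays hidden until then
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
future = time.time() + 3600
os.utime(path, (future, future))   # set atime/mtime one hour ahead
print(is_published(path))          # False: still embargoed
os.utime(path, None)               # reset timestamps to "now"
print(is_published(path))          # True: ready to serve
os.remove(path)
```

A FUSE filesystem (or a server-side hook) would call a predicate like this from its lookup/readdir path and report ENOENT for embargoed files.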
Regards, Cliff From dd at davedash.com Tue Feb 12 10:40:46 2008 From: dd at davedash.com (Dave Dash) Date: Mon, 11 Feb 2008 23:40:46 -0800 Subject: Static Files not serving Message-ID: I have an nginx conf as such: server { listen 80; server_name onyxfoundation.org; access_log /var/log/nginx/onyx.access.log; error_log /var/log/nginx/onyx.error.log; location /static { root /var/www/django/onyx/static; access_log off; expires 30d; } location / { # host and port to fastcgi server fastcgi_pass 127.0.0.1:3000; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param QUERY_STRING $query_string; fastcgi_param SERVER_NAME $server_name; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param CONTENT_TYPE $content_type; fastcgi_pass_header Authorization; fastcgi_intercept_errors off; } } But none of my static assets are being served. They get a 404 error. Is there something I'm missing? I tried location ^~ /static as well and that didn't work. -d From is at rambler-co.ru Tue Feb 12 10:49:14 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 12 Feb 2008 10:49:14 +0300 Subject: Static Files not serving In-Reply-To: References: Message-ID: <20080212074914.GA12939@rambler-co.ru> On Mon, Feb 11, 2008 at 11:40:46PM -0800, Dave Dash wrote: > I have an nginx conf as such: > > > server { > listen 80; > server_name onyxfoundation.org; > access_log /var/log/nginx/onyx.access.log; > error_log /var/log/nginx/onyx.error.log; > > location /static { > root /var/www/django/onyx/static; Look in error_log.
Probably, you should do - root /var/www/django/onyx/static; + root /var/www/django/onyx; > access_log off; > expires 30d; > } > > location / { > # host and port to fastcgi server > fastcgi_pass 127.0.0.1:3000; > fastcgi_param PATH_INFO $fastcgi_script_name; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param QUERY_STRING $query_string; > fastcgi_param SERVER_NAME $server_name; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_PROTOCOL $server_protocol; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_pass_header Authorization; > fastcgi_intercept_errors off; > } > } > > > But none of my static assets are being served. They get a 404 > error. Is their something I'm missing, I tried location ^~ /static > as well and that didn't work. -- Igor Sysoev http://sysoev.ru/en/ From hendrik.hardeman at hotmail.com Tue Feb 12 13:13:43 2008 From: hendrik.hardeman at hotmail.com (Hendrik Hardeman) Date: Tue, 12 Feb 2008 15:43:43 +0530 Subject: Nginx feature request In-Reply-To: <1202785862.4636.57.camel@portableevil.develix.com> References: <47AF2ED9.1030206@libero.it> <47AF4DAA.7070600@libero.it> <47AF5F56.7000905@libero.it> <47AF7737.2050801@libero.it> <1202775074.4636.50.camel@portableevil.develix.com> <1202785862.4636.57.camel@portableevil.develix.com> Message-ID: Cliff, Thanks a lot for yet another interesting suggestion. > One more option (assuming you're on Linux) would be to write a FUSE > filesystem that exposes the files you want and hides those with a future > timestamp. You could mount this over the top (or rather, alongside) of > your real FS. Yes, I'm on debian. Had never looked into FUSE. Sounds like a really interesting option. If I could use FUSE to hide files with a future timestamp and automatically expose them at the appropriate time, then that would be an ideal solution since it would work with nginx and anything else that should or should not have access. 
> > This would probably be pretty simple (assuming you start with an > existing FUSE FS and modify it to your needs) although I'm not sure how much > of a performance impact you'd see. > There will obviously be some performance impact, but not a huge one I would imagine. I can't spare time for experiments with FUSE right now, but will definitely look at this promising option at a later stage. Regards, Hendrik _________________________________________________________________ Tried the new MSN Messenger? It's cool! Download now. http://messenger.msn.com/Download/Default.aspx?mkt=en-in -------------- next part -------------- An HTML attachment was scrubbed... URL: From mg174717 at students.mimuw.edu.pl Tue Feb 12 13:05:01 2008 From: mg174717 at students.mimuw.edu.pl (Marcin Gajda) Date: Tue, 12 Feb 2008 11:05:01 +0100 Subject: Determine when request is fully sent Message-ID: <20080212100501.GA26410@students.mimuw.edu.pl> Hi all! I need to write a module which will perform an action when the whole response body has been sent to the client (or after the client breaks the connection). It is very important to me to do it when the data are really sent, not only scheduled for sending. Is this possible using the nginx module framework? Which kind of module (handler, filter or balancer) should I use? I looked at the handler and filter module references but it seems that they just prepare/update the response body for sending. I would like to insert a callback after the real, physical send.
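The distinction being drawn here ("really sent" versus merely scheduled for sending) is easy to demonstrate outside nginx. A small sketch, not nginx code: a successful write to a socket only proves the kernel buffered the bytes, not that the peer received them.

```python
import socket

# A successful send() only means the kernel accepted the bytes into its
# buffer; it says nothing about delivery to the peer. Sketch, not nginx code.
a, b = socket.socketpair()
sent = a.send(b"x" * 1024)     # returns immediately once the kernel buffers the data
print("send() reported:", sent)

received = b""
while len(received) < sent:    # only now does the peer actually consume the bytes
    received += b.recv(4096)
print("peer read:", len(received))
a.close()
b.close()
```

This is why a user-space server can know at most that the kernel accepted the full response, not when the client's TCP stack acknowledged it.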
Best regards, -- Marcin Gajda ________________________ Linux registered user #300108 _______ Dieu me pardonnera - c'est son metier From is at rambler-co.ru Tue Feb 12 14:18:01 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 12 Feb 2008 14:18:01 +0300 Subject: Determine when request is fully sent In-Reply-To: <20080212100501.GA26410@students.mimuw.edu.pl> References: <20080212100501.GA26410@students.mimuw.edu.pl> Message-ID: <20080212111801.GB22193@rambler-co.ru> On Tue, Feb 12, 2008 at 11:05:01AM +0100, Marcin Gajda wrote: > I need to write a module which will perform an action when the whole response > body has been sent to the client (or after the client breaks the connection). > It is very important to me to do it when the data are really sent, not only > scheduled for sending. Is this possible using the nginx module framework? > Which kind of module (handler, filter or balancer) should I use? > > I looked at the handler and filter module references but it seems that they > just prepare/update the response body for sending. I would like to insert > a callback after the real, physical send. nginx may report only successful sending full response to a kernel, but it can not say whether response was sent by kernel to client. -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Tue Feb 12 17:19:33 2008 From: lists at ruby-forum.com (Pierre-olivier Poc) Date: Tue, 12 Feb 2008 15:19:33 +0100 Subject: Nginx proxy to swiftiply problems Message-ID: Hi, I'm trying to set up nginx to proxy requests to a swiftiply server. I am having some issues with the sub domains. Let me explain my setup : I have 2 servers (physical) : - NginxServer : runs nginx; gets incoming requests and forwards them to SwiftiplyServer (when necessary) - SwiftiplyServer : runs swiftiply with different swiftiplied mongrels applications. These applications are accessed with these uris ( application1.myswiftserver.com, application2.myswiftserver.com, etc.
) When I access my SwiftiplyServer directly, everything works fine; I get served the right application. But when I go through Nginx, I always get served the default application (the one swiftiply is configured to serve when it does not find any matching applications). It's as if nginx does not pass the "application1" part of the URI to swiftiply. Here is my nginx config for proxying:

upstream swiftiply {
    server myswiftserver.com:80;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect false;
        proxy_max_temp_file_size 0;
        proxy_pass http://swiftiply;
        break;
    }
}

Thank you -- Posted via http://www.ruby-forum.com/. From mg174717 at students.mimuw.edu.pl Tue Feb 12 17:48:20 2008 From: mg174717 at students.mimuw.edu.pl (Marcin Gajda) Date: Tue, 12 Feb 2008 15:48:20 +0100 Subject: Determine when request is fully sent In-Reply-To: <20080212111801.GB22193@rambler-co.ru> References: <20080212100501.GA26410@students.mimuw.edu.pl> <20080212111801.GB22193@rambler-co.ru> Message-ID: <20080212144820.GA15879@students.mimuw.edu.pl> On Tue, Feb 12, 2008 at 02:18:01PM +0300, Igor Sysoev wrote: > nginx may report only successful sending full response to a kernel, > but it can not say whether response was sent by kernel to client. Hm... It's a pity for me :( But perhaps the solution of sending the full response to the kernel could be sufficient for me. Could you tell me which module type I should use? Maybe a filter linked somewhere at the end?
Best regards, -- Marcin Gajda ________________________ Linux registered user #300108 _______ Dieu me pardonnera - c'est son metier From is at rambler-co.ru Tue Feb 12 18:12:57 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 12 Feb 2008 18:12:57 +0300 Subject: Determine when request is fully sent In-Reply-To: <20080212144820.GA15879@students.mimuw.edu.pl> References: <20080212100501.GA26410@students.mimuw.edu.pl> <20080212111801.GB22193@rambler-co.ru> <20080212144820.GA15879@students.mimuw.edu.pl> Message-ID: <20080212151257.GI22193@rambler-co.ru> On Tue, Feb 12, 2008 at 03:48:20PM +0100, Marcin Gajda wrote: > On Tue, Feb 12, 2008 at 02:18:01PM +0300, Igor Sysoev wrote: > > > nginx may report only successful sending full response to a kernel, > > but it can not say whether response was sent by kernel to client. > > Hm... It's a pity for me :( > > But perhaps the solution with sending full response to a kernel could be > sufficient for me. Could you write me, which module type should I use? > Maybe a filter linked somewhere at the end?

location / {
    post_action /post;
}

location = /post {
    proxy_pass ...;  # etc.
    # $request_completion here is "OK" if the request is complete,
    # or "" otherwise.
}

-- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Tue Feb 12 21:55:40 2008 From: lists at ruby-forum.com (Ian Neubert) Date: Tue, 12 Feb 2008 19:55:40 +0100 Subject: http_proxy using HTTP/1.1? Message-ID: Hello all, Is it possible to have the proxy module communicate with back-end servers using HTTP/1.1? The back-end servers that I am proxying, in this case, require the Host header as part of the HTTP/1.1 protocol. Thanks for any ideas you may have. -Ian Neubert -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Tue Feb 12 22:37:40 2008 From: lists at ruby-forum.com (Pierre-olivier Poc) Date: Tue, 12 Feb 2008 20:37:40 +0100 Subject: http_proxy using HTTP/1.1?
In-Reply-To: References: Message-ID: <8adce06161243b58d489e2bb02a8a993@ruby-forum.com> Hey Ian, I'm not sure, but I think I might be having the same issue. I have a swiftiply server as a backend that requires the Host header to select the right application to serve. It seems my backend server is not getting it, as I am always served the default application no matter what I write in the URL... If you find anything, let me know. -- Posted via http://www.ruby-forum.com/. From emmiller at gmail.com Tue Feb 12 22:46:05 2008 From: emmiller at gmail.com (Evan Miller) Date: Tue, 12 Feb 2008 11:46:05 -0800 Subject: http_proxy using HTTP/1.1? In-Reply-To: <8adce06161243b58d489e2bb02a8a993@ruby-forum.com> References: <8adce06161243b58d489e2bb02a8a993@ruby-forum.com> Message-ID: Pierre-olivier Poc wrote: > Hey Ian, > > I'm not sure but i think i might be having the same issue. I have a > swiftiply server as backend that requires host header to select the > right application to serve. It seems my backend server is not getting > these as I am alwasy served the default application no matter what i > write in the url... > > If you find anything, let me know. > > http://wiki.codemongers.com/NginxHttpProxyModule#proxy_set_header From lists at ruby-forum.com Tue Feb 12 23:01:59 2008 From: lists at ruby-forum.com (Ian Neubert) Date: Tue, 12 Feb 2008 21:01:59 +0100 Subject: http_proxy using HTTP/1.1? In-Reply-To: References: <8adce06161243b58d489e2bb02a8a993@ruby-forum.com> Message-ID: Evan Miller wrote: > http://wiki.codemongers.com/NginxHttpProxyModule#proxy_set_header This didn't seem to work for me. It's still sending "HTTP/1.0" in the request and that seems to kill the connection to the virtual server on Apache. -- Posted via http://www.ruby-forum.com/. From den.lists at gmail.com Tue Feb 12 23:36:34 2008 From: den.lists at gmail.com (Denis S. Filimonov) Date: Tue, 12 Feb 2008 15:36:34 -0500 Subject: http_proxy using HTTP/1.1?
In-Reply-To: References: Message-ID: <200802121536.35029.den.lists@gmail.com> On Tuesday 12 February 2008 15:01:59 Ian Neubert wrote: > Evan Miller wrote: > > http://wiki.codemongers.com/NginxHttpProxyModule#proxy_set_header > > This didn't seem to work for me. It's still sending "HTTP/1.0" in the > request and that seems to kill the connection to the virtual server on > Apache. http://wiki.codemongers.com/NginxHttpProxyModule ----- It is an HTTP/1.0 proxy without the ability for keep-alive requests yet. (As a result, backend connections are created and destroyed on every request.) ----- -- Denis. From lists at ruby-forum.com Wed Feb 13 00:15:58 2008 From: lists at ruby-forum.com (Pierre-olivier Poc) Date: Tue, 12 Feb 2008 22:15:58 +0100 Subject: http_proxy using HTTP/1.1? In-Reply-To: <200802121536.35029.den.lists@gmail.com> References: <8adce06161243b58d489e2bb02a8a993@ruby-forum.com> <200802121536.35029.den.lists@gmail.com> Message-ID: <37d4bd15970860f295724edcc1894627@ruby-forum.com> Found my problem. I was sending the wrong host (duh). Setting the Host header with $host would send my swiftiply server the host "blabla.mynginxserver.com", which is not recognized and so returns the default application. -- Posted via http://www.ruby-forum.com/. From yusufg at gmail.com Wed Feb 13 02:44:45 2008 From: yusufg at gmail.com (Yusuf Goolamabbas) Date: Wed, 13 Feb 2008 07:44:45 +0800 Subject: http_proxy using HTTP/1.1? In-Reply-To: References: <8adce06161243b58d489e2bb02a8a993@ruby-forum.com> Message-ID: If it's absolutely essential for you to have HTTP/1.1 behaviour between proxy and upstream, then give lighttpd 1.5 (out of svn) a try: svn checkout svn://svn.lighttpd.net/lighttpd/trunk/ http://trac.lighttpd.net/trac/wiki/Docs%3AModProxyCore On Feb 13, 2008 4:01 AM, Ian Neubert wrote: > Evan Miller wrote: > > http://wiki.codemongers.com/NginxHttpProxyModule#proxy_set_header > > This didn't seem to work for me.
It's still sending "HTTP/1.0" in the > request and that seems to kill the connection to the virtual server on > Apache. > > -- > Posted via http://www.ruby-forum.com/. > > From eggie5 at gmail.com Wed Feb 13 09:48:52 2008 From: eggie5 at gmail.com (Alex Egg) Date: Tue, 12 Feb 2008 22:48:52 -0800 Subject: help with my config Message-ID: <6f7401650802122248v4a60e3fch8e7a050e4d59a11d@mail.gmail.com> As soon as I add the expires section (at the bottom) accel-redirect for images stops working. Is it apparent why? My config: http://pastie.caboo.se/151398 Thanks, Alex From is at rambler-co.ru Wed Feb 13 12:32:34 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 13 Feb 2008 12:32:34 +0300 Subject: help with my config In-Reply-To: <6f7401650802122248v4a60e3fch8e7a050e4d59a11d@mail.gmail.com> References: <6f7401650802122248v4a60e3fch8e7a050e4d59a11d@mail.gmail.com> Message-ID: <20080213093234.GC64181@rambler-co.ru> On Tue, Feb 12, 2008 at 10:48:52PM -0800, Alex Egg wrote: > As soon as I add the expires section (at the bottom) accel-redirect > for images stops working. > > Is it apparent why? > > My config: > > http://pastie.caboo.se/151398 I have grown tired of this "if (-f $request_filename)" pattern. It has become the same ugly pattern as Apache's Auth directives being used only in .htaccess, with lots of RewriteRules to compensate for the inability of PHP programmers to work with URIs other than /index.php. You should use ^~ in "location ^~ /accounts":

location ^~ /accounts {
    internal;
    root /u/apps/asdf/shared;
}

And remove the useless "if (-f $request_filename)" in the static location:

location ~* \.(js|css|jpg|jpeg|gif|png|swf)$ {
    expires 1M;
}

-- Igor Sysoev http://sysoev.ru/en/ From armin at personifi.com Wed Feb 13 19:55:39 2008 From: armin at personifi.com (Armin Roehrl) Date: Wed, 13 Feb 2008 17:55:39 +0100 Subject: nginx compareable to lighttpd's mod_secdownload Message-ID: <47B3210B.9000901@personifi.com> Hi all, Is there something like mod_secdownload for nginx?
Basically I am streaming flash videos, but first need to do a check whether the user is allowed to see this video. I could do this authentication using another application that needs to parse the request string and send it off to a third-party server to get an OK or false back. Authentication per (flash-file, user) tuple has to happen only once. Thanks for your help, -Armin From is at rambler-co.ru Wed Feb 13 20:06:36 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 13 Feb 2008 20:06:36 +0300 Subject: nginx compareable to lighttpd's mod_secdownload In-Reply-To: <47B3210B.9000901@personifi.com> References: <47B3210B.9000901@personifi.com> Message-ID: <20080213170636.GH2212@rambler-co.ru> On Wed, Feb 13, 2008 at 05:55:39PM +0100, Armin Roehrl wrote: > Is there something like mod_secdownload for nginx? > > Basically I am streaming flash-videos, but first need todo > a check whether the user is allowed to see this video. > > I could do this authentication using another application > that needs to parse the request string and send it off > to a 3rd party-server to get an OK or false back. > Authentication per (flash-file, user)-tuple has to happen > only once. No, the recommended way is X-Accel-Redirect: http://wiki.codemongers.com/NginxXSendfile -- Igor Sysoev http://sysoev.ru/en/ From armin at personifi.com Wed Feb 13 20:16:35 2008 From: armin at personifi.com (Armin Roehrl) Date: Wed, 13 Feb 2008 18:16:35 +0100 Subject: nginx compareable to lighttpd's mod_secdownload In-Reply-To: <20080213170636.GH2212@rambler-co.ru> References: <47B3210B.9000901@personifi.com> <20080213170636.GH2212@rambler-co.ru> Message-ID: <47B325F3.10507@personifi.com> Thanks a lot. This is all I need. Brilliant. > On Wed, Feb 13, 2008 at 05:55:39PM +0100, Armin Roehrl wrote: > > >> Is there something like mod_secdownload for nginx? >> >> Basically I am streaming flash-videos, but first need todo >> a check whether the user is allowed to see this video.
>> >> I could do this authentication using another application >> that needs to parse the request string and send it off >> to a 3rd party-server to get an OK or false back. >> Authentication per (flash-file, user)-tuple has to happen >> only once. >> > > No, the recommended way is X-Accel-Redirect: > http://wiki.codemongers.com/NginxXSendfile > > > From eggie5 at gmail.com Wed Feb 13 21:21:54 2008 From: eggie5 at gmail.com (Alex Egg) Date: Wed, 13 Feb 2008 10:21:54 -0800 Subject: help with my config In-Reply-To: <20080213093234.GC64181@rambler-co.ru> References: <6f7401650802122248v4a60e3fch8e7a050e4d59a11d@mail.gmail.com> <20080213093234.GC64181@rambler-co.ru> Message-ID: <6f7401650802131021u1e3ea77by7fb5f8b054ab948c@mail.gmail.com> How does this look? I updated the '^~' to my location, but I moved the expires to a different section. How's this config? Alex On Feb 13, 2008 1:32 AM, Igor Sysoev wrote: > > On Tue, Feb 12, 2008 at 10:48:52PM -0800, Alex Egg wrote: > > > As soon as I add the expires section (at the bottom) accel-redirect > > for images stops working. > > > > Is it apparent why? > > > > My config: > > > > http://pastie.caboo.se/151398 > > I have tired of this "if (-f $request_filename)" pattern. > This became the same ugly pattern as Apache's Auth directives usage in > .htaccess only and lot of RewriteRule's to compencate inability > of PHP programmers to work with URIs different from /index.php. 
> > You should use ^~ in "location ^~ /accounts": > > location ^~ /accounts { > internal; > root /u/apps/asdf/shared; > } > > And remove the useless "if (-f $request_filename)" in static location: > > location ~* \.(js|css|jpg|jpeg|gif|png|swf)$ { > expires 1M; > } > > > -- > Igor Sysoev > http://sysoev.ru/en/ > > From eggie5 at gmail.com Wed Feb 13 21:22:10 2008 From: eggie5 at gmail.com (Alex Egg) Date: Wed, 13 Feb 2008 10:22:10 -0800 Subject: help with my config In-Reply-To: <6f7401650802131021u1e3ea77by7fb5f8b054ab948c@mail.gmail.com> References: <6f7401650802122248v4a60e3fch8e7a050e4d59a11d@mail.gmail.com> <20080213093234.GC64181@rambler-co.ru> <6f7401650802131021u1e3ea77by7fb5f8b054ab948c@mail.gmail.com> Message-ID: <6f7401650802131022u112a7755i65d1c924b660952b@mail.gmail.com> Here's the link: http://pastie.caboo.se/151627 On Feb 13, 2008 10:21 AM, Alex Egg wrote: > How does this look? > > I updated the '^~' to my location, but I moved the expires to a > different section. > > How's this config? > > Alex > > > On Feb 13, 2008 1:32 AM, Igor Sysoev wrote: > > > > On Tue, Feb 12, 2008 at 10:48:52PM -0800, Alex Egg wrote: > > > > > As soon as I add the expires section (at the bottom) accel-redirect > > > for images stops working. > > > > > > Is it apparent why? > > > > > > My config: > > > > > > http://pastie.caboo.se/151398 > > > > I have tired of this "if (-f $request_filename)" pattern. > > This became the same ugly pattern as Apache's Auth directives usage in > > .htaccess only and lot of RewriteRule's to compencate inability > > of PHP programmers to work with URIs different from /index.php. 
> > > > You should use ^~ in "location ^~ /accounts": > > > > location ^~ /accounts { > > internal; > > root /u/apps/asdf/shared; > > } > > > > And remove the useless "if (-f $request_filename)" in static location: > > > > location ~* \.(js|css|jpg|jpeg|gif|png|swf)$ { > > expires 1M; > > } > > > > > > -- > > Igor Sysoev > > http://sysoev.ru/en/ > > > > > From lists at ruby-forum.com Wed Feb 13 21:32:45 2008 From: lists at ruby-forum.com (Ian Neubert) Date: Wed, 13 Feb 2008 19:32:45 +0100 Subject: http_proxy using HTTP/1.1? In-Reply-To: References: Message-ID: <7349d983a4442b1fd9a7b6aefdfebdc2@ruby-forum.com> I actually found my problem. Headers are being sent to the proxied server that are interfering. Is it possible to tell nginx to not send certain headers to the proxied servers? Thanks. -Ian -- Posted via http://www.ruby-forum.com/. From roxis at list.ru Wed Feb 13 21:40:52 2008 From: roxis at list.ru (Roxis) Date: Wed, 13 Feb 2008 19:40:52 +0100 Subject: http_proxy using HTTP/1.1? In-Reply-To: <7349d983a4442b1fd9a7b6aefdfebdc2@ruby-forum.com> References: <7349d983a4442b1fd9a7b6aefdfebdc2@ruby-forum.com> Message-ID: <200802131940.52284.roxis@list.ru> On Wednesday 13 February 2008, Ian Neubert wrote: > Is it possible to tell nginx to not send certain headers to the proxied > servers? just set an empty string as header and nginx won't send it to back-end. for example: proxy_set_header Accept-Encoding ""; From lists at ruby-forum.com Wed Feb 13 21:41:50 2008 From: lists at ruby-forum.com (Ian Neubert) Date: Wed, 13 Feb 2008 19:41:50 +0100 Subject: http_proxy using HTTP/1.1? In-Reply-To: <7349d983a4442b1fd9a7b6aefdfebdc2@ruby-forum.com> References: <7349d983a4442b1fd9a7b6aefdfebdc2@ruby-forum.com> Message-ID: <48c01839a8e25de5b0784a961321f8c8@ruby-forum.com> Ian Neubert wrote: > I actually found my problem. Headers are being sent to the proxied > server that are interfering. > > Is it possible to tell nginx to not send certain headers to the proxied > servers? 
> > Thanks. > > -Ian That was easy. To stop a header from being sent to the proxied servers, just set the header to blank! For example: proxy_set_header Authorization ""; proxy_set_header User-Agent ""; HTH someone else. -Ian -- Posted via http://www.ruby-forum.com/. From is at rambler-co.ru Wed Feb 13 22:53:07 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 13 Feb 2008 22:53:07 +0300 Subject: help with my config In-Reply-To: <6f7401650802131022u112a7755i65d1c924b660952b@mail.gmail.com> References: <6f7401650802122248v4a60e3fch8e7a050e4d59a11d@mail.gmail.com> <20080213093234.GC64181@rambler-co.ru> <6f7401650802131021u1e3ea77by7fb5f8b054ab948c@mail.gmail.com> <6f7401650802131022u112a7755i65d1c924b660952b@mail.gmail.com> Message-ID: <20080213195307.GA9035@rambler-co.ru> On Wed, Feb 13, 2008 at 10:22:10AM -0800, Alex Egg wrote: > Here's the link: > > http://pastie.caboo.se/151627 I prefer this configuration:

server {
    ...
    root /u/apps/asdf/current/public;

    if (-f $document_root/system/maintenance.html) {
        rewrite ^(.*)$ /system/maintenance.html last;
    }

    if (-f $request_filename.html) {
        rewrite (.*) $1.html last;
    }

    location / {
        index index.html;
        error_page 404 = @mongrel;
    }

    location @mongrel {
        proxy_pass http://mongrel;
        proxy_set_header ...
        ...
    }

    location ^~ /accounts {
        internal;
        root /u/apps/asdf/shared;
    }

    location ~* \.(js|css|jpg|jpeg|gif|png|swf)$ {
        expires 1M;
    }

    error_page 500 502 503 504 /500.html;

    location = /500.html {
        root /u/apps/asdf/current/public;
    }
}

-- Igor Sysoev http://sysoev.ru/en/ From e98cuenc at gmail.com Wed Feb 13 23:16:52 2008 From: e98cuenc at gmail.com (Joaquin Cuenca Abela) Date: Wed, 13 Feb 2008 21:16:52 +0100 Subject: Richer status module Message-ID: <8b2ed8550802131216x1ab91120n9561fa7ba0b939be@mail.gmail.com> Hi, I'm interested in adding more stats to the stub_status module, like the number of 20x, 30x, 40x, 50x answers. The current statistics are quite low level, and I don't see how to add the kind of statistics that I'm interested in.
Can someone provide some guidance here? Any idea which files I should hack on? Thanks! -- Joaquin Cuenca Abela From manlio_perillo at libero.it Thu Feb 14 00:56:04 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Wed, 13 Feb 2008 22:56:04 +0100 Subject: Richer status module In-Reply-To: <8b2ed8550802131216x1ab91120n9561fa7ba0b939be@mail.gmail.com> References: <8b2ed8550802131216x1ab91120n9561fa7ba0b939be@mail.gmail.com> Message-ID: <47B36774.4070503@libero.it> Joaquin Cuenca Abela ha scritto: > Hi, > > I'm interested in adding more stats to the stub_status module, like > number of 20x, 30x, 40x, 50x answers. The current statistics are quite > low level, and I don't see how to add the kind of statistics that I'm > interested in. > > Can someone provide some guidance here? any idea on what files should I hack on? > > Thanks! > What about parsing the log files? Adding the statistics you want requires patching nginx, specifically the source file: src/http/ngx_http_header_filter_module.c Another, simpler solution (IMHO) is to add a custom module registered at the NGX_HTTP_LOG_PHASE phase. You need to write a new stub_status module, however. Manlio Perillo From lists at ruby-forum.com Thu Feb 14 02:14:36 2008 From: lists at ruby-forum.com (Bbq Plate) Date: Thu, 14 Feb 2008 00:14:36 +0100 Subject: Using X-Accel-Redirect for protected pictures? Message-ID: <56d3b2a76c91edbb9f9d57861d8b0168@ruby-forum.com> hi, i would like to only let friends of a user view a user's private photo album. can i use x-accel for this purpose? could something like this work?

def photos
  @photos = UserPhotos.get_all
end

photos.rhtml:

for pics in @photos
  @response.headers['X-Accel-Redirect'] = "/files/#{pics.public_filename}"
end

..... i don't think this would work. am i close? -- Posted via http://www.ruby-forum.com/.
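X-Accel-Redirect is a response header set once per request, so one action can hand off only a single file, not a whole album in a loop; each protected image needs its own request that the application authorizes before delegating the actual file transfer to nginx. A minimal sketch of that flow — the `photo_response` helper, the `:friends` check, and the `/files/` location are hypothetical illustrations, not from the thread:

```ruby
# Sketch: per-request authorization followed by an X-Accel-Redirect hand-off.
# In a real Rails controller you would set response.headers and render an
# empty body; here a plain method returns the status and headers instead.
def photo_response(viewer, owner, filename)
  # Only friends of the album owner may fetch the file.
  return { status: 403, headers: {} } unless owner[:friends].include?(viewer)

  # Empty body; nginx serves the file itself from a location that is marked
  # internal so clients cannot request it directly, e.g.:
  #   location /files/ { internal; root /path/to/protected; }
  { status: 200, headers: { "X-Accel-Redirect" => "/files/#{filename}" } }
end
```

The page then references each photo URL normally (for example in img tags); every hit runs the friendship check, and nginx streams the file only when the check passes.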
From lists at humanesoftware.com Thu Feb 14 13:12:35 2008 From: lists at humanesoftware.com (Mark Slater) Date: Thu, 14 Feb 2008 02:12:35 -0800 Subject: rewrite POST into GET? Message-ID: <62D553DE-0B71-43D1-A0B1-9CAF5BDB8024@humanesoftware.com> I'm working on a facebook application built using Rails. This is my first time deploying a Rails site, and I'm setting up Capistrano to do the heavy lifting. I've got it creating a "down for maintenance" file that I would like served to all facebook requests when I'm updating things, but facebook always sends a POST request. This causes Nginx to respond with a 405 and report "client sent invalid method..."; obviously you can't really POST to a static page. Is there a way I can re-direct POST requests to GET requests or force Nginx to return the static page regardless of the method used to access it? My backup plan is to deploy a second app on a different set of ports that always returns the "down for maintenance" message.... but it seems silly to run one app to report you're upgrading another. Thanks! Mark From kdemanawa at gmail.com Thu Feb 14 14:47:00 2008 From: kdemanawa at gmail.com (Kenneth Demanawa) Date: Thu, 14 Feb 2008 19:47:00 +0800 Subject: rewrite POST into GET? In-Reply-To: <62D553DE-0B71-43D1-A0B1-9CAF5BDB8024@humanesoftware.com> References: <62D553DE-0B71-43D1-A0B1-9CAF5BDB8024@humanesoftware.com> Message-ID: hi mark. maybe something like this:

location / {
    # ...
    if ($request_method = POST) {
        rewrite (.*) /system_maintenance.html last;
    }
    proxy_pass http://rails_app;
}

ciao :) On Thu, Feb 14, 2008 at 6:12 PM, Mark Slater wrote: > I'm working on a facebook application built using Rails. This is my > first time deploying a Rails site, and I'm setting up Capistrano to do > the heavy lifting. I've got it creating a "down for maintenance" file > that I would like served to all facebook requests when I'm updating > things, but facebook always sends a POST request.
This causes Nginx to > respond with a 405 and report "client sent invalid method..."; > obviously you can't really POST to a static page. > > Is there a way I can re-direct POST requests to GET requests or force > Nginx to return the static page regardless of the method used to > access it? My backup plan is to deploy a second app on a different set > of ports that always returns the "down for maintenance" message.... > but it seems silly to run one app to report you're upgrading another. > > Thanks! > > Mark > > From is at rambler-co.ru Thu Feb 14 17:01:22 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 14 Feb 2008 17:01:22 +0300 Subject: rewrite POST into GET? In-Reply-To: <62D553DE-0B71-43D1-A0B1-9CAF5BDB8024@humanesoftware.com> References: <62D553DE-0B71-43D1-A0B1-9CAF5BDB8024@humanesoftware.com> Message-ID: <20080214140122.GB27753@rambler-co.ru> On Thu, Feb 14, 2008 at 02:12:35AM -0800, Mark Slater wrote: > I'm working on a facebook application built using Rails. This is my > first time deploying a Rails site, and I'm setting up Capistrano to do > the heavy lifting. I've got it creating a "down for maintenance" file > that I would like served to all facebook requests when I'm updating > things, but facebook always sends a POST request. This causes Nginx to > respond with a 405 and report "client sent invalid method..."; > obviously you can't really POST to a static page. > > Is there a way I can re-direct POST requests to GET requests or force > Nginx to return the static page regardless of the method used to > access it? My backup plan is to deploy a second app on a different set > of ports that always returns the "down for maintenance" message.... > but it seems silly to run one app to report you're upgrading another. The attached patch adds the "post_to_static" directive:

location / {
    post_to_static on;
}

or

server {
    if ( maintenance ) {
        ...
break; post_to_static on; } -- Igor Sysoev http://sysoev.ru/en/ -------------- next part -------------- Index: src/http/modules/ngx_http_static_module.c =================================================================== --- src/http/modules/ngx_http_static_module.c (revision 1212) +++ src/http/modules/ngx_http_static_module.c (working copy) @@ -9,10 +9,33 @@ #include +typedef struct { + ngx_flag_t post_to_static; +} ngx_http_static_loc_conf_t; + + static ngx_int_t ngx_http_static_handler(ngx_http_request_t *r); +static void ngx_http_static_request(ngx_http_request_t *r); +static void *ngx_http_static_create_loc_conf(ngx_conf_t *cf); +static char *ngx_http_static_merge_loc_conf(ngx_conf_t *cf, void *parent, + void *child); static ngx_int_t ngx_http_static_init(ngx_conf_t *cf); +static ngx_command_t ngx_http_static_commands[] = { + + { ngx_string("post_to_static"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_SIF_CONF + |NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_static_loc_conf_t, post_to_static), + NULL }, + + ngx_null_command +}; + + ngx_http_module_t ngx_http_static_module_ctx = { NULL, /* preconfiguration */ ngx_http_static_init, /* postconfiguration */ @@ -23,15 +46,15 @@ NULL, /* create server configuration */ NULL, /* merge server configuration */ - NULL, /* create location configuration */ - NULL /* merge location configuration */ + ngx_http_static_create_loc_conf, /* create location configuration */ + ngx_http_static_merge_loc_conf /* merge location configuration */ }; ngx_module_t ngx_http_static_module = { NGX_MODULE_V1, &ngx_http_static_module_ctx, /* module context */ - NULL, /* module directives */ + ngx_http_static_commands, /* module directives */ NGX_HTTP_MODULE, /* module type */ NULL, /* init master */ NULL, /* init module */ @@ -47,38 +70,63 @@ static ngx_int_t ngx_http_static_handler(ngx_http_request_t *r) { - u_char *last, *location; - size_t root; - ngx_str_t 
path; - ngx_int_t rc; - ngx_uint_t level; - ngx_log_t *log; - ngx_buf_t *b; - ngx_chain_t out; - ngx_open_file_info_t of; - ngx_http_core_loc_conf_t *clcf; + ngx_int_t rc; + ngx_http_static_loc_conf_t *lcf; - if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { - return NGX_HTTP_NOT_ALLOWED; - } - if (r->uri.data[r->uri.len - 1] == '/') { return NGX_DECLINED; } - /* TODO: Win32 */ if (r->zero_in_uri) { return NGX_DECLINED; } - rc = ngx_http_discard_request_body(r); + if (r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD)) { - if (rc != NGX_OK) { - return rc; + rc = ngx_http_discard_request_body(r); + + if (rc != NGX_OK) { + return rc; + } + + ngx_http_static_request(r); + + return NGX_DONE; + + } else if (r->method & NGX_HTTP_POST) { + + lcf = ngx_http_get_module_loc_conf(r, ngx_http_static_module); + + if (lcf->post_to_static) { + + rc = ngx_http_read_client_request_body(r, ngx_http_static_request); + + if (rc >= NGX_HTTP_SPECIAL_RESPONSE) { + return rc; + } + + return NGX_DONE; + } } - log = r->connection->log; + return NGX_HTTP_NOT_ALLOWED; +} + +static void +ngx_http_static_request(ngx_http_request_t *r) +{ + u_char *last, *location; + size_t root; + ngx_str_t path; + ngx_int_t rc; + ngx_uint_t level; + ngx_log_t *log; + ngx_buf_t *b; + ngx_chain_t out; + ngx_open_file_info_t of; + ngx_http_core_loc_conf_t *clcf; + /* * ngx_http_map_uri_to_path() allocates memory for terminating '\0' * so we do not need to reserve memory for '/' for possible redirect @@ -86,11 +134,14 @@ last = ngx_http_map_uri_to_path(r, &path, &root, 0); if (last == NULL) { - return NGX_HTTP_INTERNAL_SERVER_ERROR; + ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); + return; } path.len = last - path.data; + log = r->connection->log; + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, log, 0, "http filename: \"%s\"", path.data); @@ -108,7 +159,8 @@ switch (of.err) { case 0: - return NGX_HTTP_INTERNAL_SERVER_ERROR; + ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); + return; case NGX_ENOENT: case 
NGX_ENOTDIR: @@ -136,7 +188,8 @@ ngx_open_file_n " \"%s\" failed", path.data); } - return rc; + ngx_http_finalize_request(r, rc); + return; } ngx_log_debug1(NGX_LOG_DEBUG_HTTP, log, 0, "http static fd: %d", of.fd); @@ -147,7 +200,8 @@ r->headers_out.location = ngx_palloc(r->pool, sizeof(ngx_table_elt_t)); if (r->headers_out.location == NULL) { - return NGX_HTTP_INTERNAL_SERVER_ERROR; + ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); + return; } if (!clcf->alias && clcf->root_lengths == NULL) { @@ -156,7 +210,8 @@ } else { location = ngx_palloc(r->pool, r->uri.len + 1); if (location == NULL) { - return NGX_HTTP_INTERNAL_SERVER_ERROR; + ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); + return; } last = ngx_copy(location, r->uri.data, r->uri.len); @@ -172,7 +227,8 @@ r->headers_out.location->value.len = r->uri.len + 1; r->headers_out.location->value.data = location; - return NGX_HTTP_MOVED_PERMANENTLY; + ngx_http_finalize_request(r, NGX_HTTP_MOVED_PERMANENTLY); + return; } #if !(NGX_WIN32) /* the not regular files are probably Unix specific */ @@ -181,7 +237,8 @@ ngx_log_error(NGX_LOG_CRIT, log, ngx_errno, "\"%s\" is not a regular file", path.data); - return NGX_HTTP_NOT_FOUND; + ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); + return; } #endif @@ -193,11 +250,13 @@ r->headers_out.last_modified_time = of.mtime; if (ngx_http_set_content_type(r) != NGX_OK) { - return NGX_HTTP_INTERNAL_SERVER_ERROR; + ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); + return; } if (r != r->main && of.size == 0) { - return ngx_http_send_header(r); + ngx_http_finalize_request(r, ngx_http_send_header(r)); + return; } r->allow_ranges = 1; @@ -206,18 +265,21 @@ b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t)); if (b == NULL) { - return NGX_HTTP_INTERNAL_SERVER_ERROR; + ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); + return; } b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t)); if (b->file == NULL) { - return 
NGX_HTTP_INTERNAL_SERVER_ERROR; + ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); + return; } rc = ngx_http_send_header(r); if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { - return rc; + ngx_http_finalize_request(r, rc); + return; } b->file_pos = 0; @@ -234,10 +296,38 @@ out.buf = b; out.next = NULL; - return ngx_http_output_filter(r, &out); + ngx_http_finalize_request(r, ngx_http_output_filter(r, &out)); } +static void * +ngx_http_static_create_loc_conf(ngx_conf_t *cf) +{ + ngx_http_static_loc_conf_t *lcf; + + lcf = ngx_palloc(cf->pool, sizeof(ngx_http_static_loc_conf_t)); + if (lcf == NULL) { + return NGX_CONF_ERROR; + } + + lcf->post_to_static = NGX_CONF_UNSET; + + return lcf; +} + + +static char * +ngx_http_static_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child) +{ + ngx_http_static_loc_conf_t *prev = parent; + ngx_http_static_loc_conf_t *conf = child; + + ngx_conf_merge_value(conf->post_to_static, prev->post_to_static, 0); + + return NGX_CONF_OK; +} + + static ngx_int_t ngx_http_static_init(ngx_conf_t *cf) { From rkmr.em at gmail.com Fri Feb 15 00:11:29 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Thu, 14 Feb 2008 13:11:29 -0800 Subject: phantom event ? Message-ID: i get these in my error logs.. what does this mean? is this serious? 2008/02/14 11:01:25 [alert] 18361#0: phantom event 0004 for closed and removed socket 12 2008/02/14 12:04:47 [alert] 18361#0: phantom event 0004 for closed and removed socket 13 2008/02/14 12:39:26 [alert] 18361#0: phantom event 0001 for closed and removed socket 11 From zeroguy at verizon.net Fri Feb 15 02:05:23 2008 From: zeroguy at verizon.net (Andrew Deason) Date: Thu, 14 Feb 2008 17:05:23 -0600 Subject: phantom event ? In-Reply-To: References: Message-ID: <20080214170523.457801a8.zeroguy@verizon.net> On Thu, 14 Feb 2008 13:11:29 -0800 "rkmr.em at gmail.com" wrote: > i get these in my error logs.. what does this mean? is this serious? 
> > 2008/02/14 11:01:25 [alert] 18361#0: phantom event 0004 for closed and > removed socket 12 > 2008/02/14 12:04:47 [alert] 18361#0: phantom event 0004 for closed and > removed socket 13 > 2008/02/14 12:39:26 [alert] 18361#0: phantom event 0001 for closed and > removed socket 11 See . Igor's provided configuration has worked fine. -- Andrew Deason zeroguy at verizon.net From rkmr.em at gmail.com Fri Feb 15 03:00:52 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Thu, 14 Feb 2008 16:00:52 -0800 Subject: phantom event ? In-Reply-To: <20080214170523.457801a8.zeroguy@verizon.net> References: <20080214170523.457801a8.zeroguy@verizon.net> Message-ID: so i have to add this: events { devpoll_events 1; } will this affect the performance? On Thu, Feb 14, 2008 at 3:05 PM, Andrew Deason wrote: > > On Thu, 14 Feb 2008 13:11:29 -0800 > "rkmr.em at gmail.com" wrote: > > > i get these in my error logs.. what does this mean? is this serious? > > > > 2008/02/14 11:01:25 [alert] 18361#0: phantom event 0004 for closed and > > removed socket 12 > > 2008/02/14 12:04:47 [alert] 18361#0: phantom event 0004 for closed and > > removed socket 13 > > 2008/02/14 12:39:26 [alert] 18361#0: phantom event 0001 for closed and > > removed socket 11 > > See . Igor's provided > configuration has worked fine. > > -- > Andrew Deason > zeroguy at verizon.net > > From lists at humanesoftware.com Fri Feb 15 04:13:57 2008 From: lists at humanesoftware.com (Mark Slater) Date: Thu, 14 Feb 2008 17:13:57 -0800 Subject: rewrite POST into GET? In-Reply-To: <20080214140122.GB27753@rambler-co.ru> References: <62D553DE-0B71-43D1-A0B1-9CAF5BDB8024@humanesoftware.com> <20080214140122.GB27753@rambler-co.ru> Message-ID: <01883D4E-5C37-4AEE-9D68-753F5AE35310@humanesoftware.com> Wow Igor, that was fast! Thank you! I downloaded the development version of nginx (I'd been using the previous stable version 0.5.35), and applied the patch. 
Unfortunately, when I started the new version of the server, the post_to_static didn't change the 405 result sent back to facebook. My configuration file looks like this: http { ... server { listen 8080; server_name localhost; # set the max size for file uploads to 50 MB. client_max_body_size 50M; #charset koi8-r; access_log logs/host.vhost.access.log main; root /usr/local/webapps/listage/current/public; if (-f $document_root/system/maintenance.html) { rewrite ^(.*)$ /system/maintenance.html last; break; post_to_static on; } location / { ... } } } Do I have that right? I tried it manually and got the same error: mark$ telnet localhost 8080 Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. POST / HTTP/1.0 Content-Length: 0 HTTP/1.1 405 Not Allowed Server: nginx/0.6.26 Date: Fri, 15 Feb 2008 01:12:06 GMT Content-Type: text/html Content-Length: 173 Connection: close 405 Not Allowed

405 Not Allowed


nginx/0.6.26
Connection closed by foreign host. Mark On Feb 14, 2008, at 6:01 AM, Igor Sysoev wrote: > On Thu, Feb 14, 2008 at 02:12:35AM -0800, Mark Slater wrote: > >> I'm working on a facebook application built using Rails. This is my >> first time deploying a Rails site, and I'm setting up Capistrano to >> do >> the heavy lifting. I've got it creating a "down for maintenance" file >> that I would like served to all facebook requests when I'm updating >> things, but facebook always sends a POST request. This causes Nginx >> to >> respond with a 405 and report "client sent invalid method..."; >> obviously you can't really POST to a static page. >> >> Is there a way I can re-direct POST requests to GET requests or force >> Nginx to return the static page regardless of the method used to >> access it? My backup plan is to deploy a second app on a different >> set >> of ports that always returns the "down for maintenance" message.... >> but it seems silly to run one app to report you're upgrading another. > > The attached patch adds the "post_to_static" directive: > > location / { > post_to_static on; > } > > or > > server { > > if ( maintaince ) { > > ... > break; > > post_to_static on; > } > > > -- > Igor Sysoev > http://sysoev.ru/en/ > From is at rambler-co.ru Fri Feb 15 09:29:14 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 15 Feb 2008 09:29:14 +0300 Subject: rewrite POST into GET? In-Reply-To: <01883D4E-5C37-4AEE-9D68-753F5AE35310@humanesoftware.com> References: <62D553DE-0B71-43D1-A0B1-9CAF5BDB8024@humanesoftware.com> <20080214140122.GB27753@rambler-co.ru> <01883D4E-5C37-4AEE-9D68-753F5AE35310@humanesoftware.com> Message-ID: <20080215062914.GA49201@rambler-co.ru> On Thu, Feb 14, 2008 at 05:13:57PM -0800, Mark Slater wrote: > Wow Igor, that was fast! Thank you! > > I downloaded the development version of nginx (I'd been using the > previous stable version 0.5.35), and applied the patch. 
Unfortunately, > when I started the new version of the server, the post_to_static > didn't change the 405 result sent back to facebook. > > My configuration file looks like this: > > http { > ... > server { > listen 8080; > server_name localhost; > > # set the max size for file uploads to 50 MB. > client_max_body_size 50M; > > #charset koi8-r; > > access_log logs/host.vhost.access.log main; > root /usr/local/webapps/listage/current/public; > > if (-f $document_root/system/maintenance.html) { > rewrite ^(.*)$ /system/maintenance.html last; > break; > post_to_static on; > } > > location / { > ... > } > } > } > > Do I have that right? I was wrong - the configuration should be changed to: server { ... if (-f $document_root/system/maintenance.html) { rewrite ^(.*)$ /system/maintenance.html break; break; } location = /system/maintenance.html { post_to_static on; } -- Igor Sysoev http://sysoev.ru/en/ From gabor at nekomancer.net Fri Feb 15 15:59:18 2008 From: gabor at nekomancer.net (=?ISO-8859-1?Q?G=E1bor_Farkas?=) Date: Fri, 15 Feb 2008 13:59:18 +0100 Subject: disable keepalive for internet-explorer? Message-ID: <47B58CA6.9040703@nekomancer.net> Hi, is there a way to do this in nginx: if the browser is internet-explorer (any version), don't do keepalive? i found the $msie variable, but the keepalive_timeout cannot be used in an "if()" construct.. so i do not know how to do it. thanks, gabor From tho.nguyen at intier.com Fri Feb 15 21:17:44 2008 From: tho.nguyen at intier.com (Tho Nguyen) Date: Fri, 15 Feb 2008 18:17:44 +0000 (UTC) Subject: Support for POST client body sent with Transfer-Encoding: chunked References: <1c9d46d70802031339w19847c8dg71af993235a4eed5@mail.gmail.com> Message-ID: Carlos, Did you find a work around for this issue? I'm having the same issue as well I believed. We have a system that uses a browser, java client, and a CAD client. The first two can upload files fine except the last one. 
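For reference, the framing at issue in this thread can be illustrated outside nginx. A minimal plain-Python sketch (not nginx code) of how an HTTP/1.1 chunked body is put on the wire — note that the total body length is only known once the terminating zero-length chunk arrives, which is exactly why a server that buffers by Content-Length struggles with it:

```python
def encode_chunked(parts):
    """Encode byte strings as an HTTP/1.1 chunked body.

    Each chunk is '<hex length>\\r\\n<data>\\r\\n'; a zero-length
    chunk terminates the body, so no Content-Length is needed.
    """
    out = b""
    for part in parts:
        out += b"%x\r\n" % len(part) + part + b"\r\n"
    return out + b"0\r\n\r\n"


def decode_chunked(body):
    """Decode a chunked body; the total size is only known after
    the terminating zero-length chunk has been seen."""
    data, pos = b"", 0
    while True:
        crlf = body.index(b"\r\n", pos)
        size = int(body[pos:crlf], 16)
        if size == 0:
            return data
        data += body[crlf + 2:crlf + 2 + size]
        pos = crlf + 2 + size + 2  # skip chunk data and its trailing CRLF


wire = encode_chunked([b"field=va", b"lue"])
assert decode_chunked(wire) == b"field=value"
```

Forcing the client down to HTTP/1.0, as suggested below, sidesteps this entirely, since chunked transfer coding does not exist in 1.0 and the client must send a Content-Length instead.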
From carloscm at gmail.com Fri Feb 15 21:54:57 2008 From: carloscm at gmail.com (Carlos) Date: Fri, 15 Feb 2008 19:54:57 +0100 Subject: Support for POST client body sent with Transfer-Encoding: chunked In-Reply-To: References: <1c9d46d70802031339w19847c8dg71af993235a4eed5@mail.gmail.com> Message-ID: <1c9d46d70802151054v156ccc60pdc57cc86b528f35d@mail.gmail.com> Hello Tho, Unfortunately I didn't find a work around. I just went and set up a dedicated Apache in a high port to serve the mobile clients (a solution that has its own problems since it is common for mobile carriers to block non-standard ports.) If you are dealing with a desktop client app you may have some control over its configuration. The problem comes mainly from HTTP 1.1 clients assuming the server has full support for HTTP 1.1 features, like chunked POST bodies. Try to force HTTP 1.0 in the client, that should disable chunked POST. I actually spent some time with the source of nginx. I was impressed with the extremely optimized approach to everything, from I/O to buffering to even string comparisons. It was also very tidy and readable. Disabling the 411 error check was easy enough but the buffering code for the POST body is written under the asumption that its length is fully known in advance (i.e. from the Content-Length header sent by the client.) Apparently it is legal for HTTP 1.1 clients to not send a Content-Length header and just rely on the auto-finalizing feature of the chunked encoding (sending a 0 length chunk.) The buffering code was not ready for this at all and would have required extensive modifications. Having 0 experience with the code it would take me way too long to come up with a working patch, so I decided to just go back to Apache for the time being. On Fri, Feb 15, 2008 at 7:17 PM, Tho Nguyen wrote: > Carlos, > > Did you find a work around for this issue? I'm having the same issue as well I > believed. We have a system that uses a browser, java client, and a CAD > client. 
The first two can upload files fine except the last one. > > > > From amd at urbanspoon.com Sun Feb 17 07:44:00 2008 From: amd at urbanspoon.com (Adam Doppelt) Date: Sat, 16 Feb 2008 20:44:00 -0800 Subject: URL encoding and other hackery Message-ID: <47B7BB90.8070006@urbanspoon.com> Hi. First, let me just say that I love nginx. Thanks for creating and maintaining it - we appreciate it. I am using nginx as the front end to a rails cluster. When rails generates a page I write the page to disk, where nginx can look for it later. I want to use something like this: if (-f $document_root/$uri) But I anticipate a few problems: 1) the uri might include ".." or similar hackery 2) the uri might include query parameters That leads to my questions: 1) Does nginx validate incoming uris? Will it strip out ".."? 2) Can I URL encode a variable? Thanks! Adam From alex at purefiction.net Sun Feb 17 17:54:18 2008 From: alex at purefiction.net (Alexander Staubo) Date: Sun, 17 Feb 2008 15:54:18 +0100 Subject: URL encoding and other hackery In-Reply-To: <47B7BB90.8070006@urbanspoon.com> References: <47B7BB90.8070006@urbanspoon.com> Message-ID: <88daf38c0802170654j20dea2ebs766fa50dd392f2f8@mail.gmail.com> On 2/17/08, Adam Doppelt wrote: > I am using nginx as the front end to a rails cluster. When rails > generates a page I write the page to disk, where nginx can look for it > later. I want to use something like this: We're doing the same thing. > if (-f $document_root/$uri) This works: if (-f $request_filename) { break; } > But I anticipate a few problems: > > 1) the uri might include ".." or similar hackery It's up to you to ensure that the saved file corresponds to the file name Nginx generates in $request_filename. Nginx will URL-decode the path, which means that http://example.com/buttons/a%2f..%2fbutton.png will resolve to $document_root/buttons/a/../button.png which is expanded to $document_root/buttons/button.png ...which is the file name that Rails' cache_page method will use. 
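The decode-then-collapse behaviour described above can be reproduced outside nginx. A rough Python sketch (the document root here is hypothetical, and this is an illustration of the principle, not nginx's actual normalization code):

```python
from posixpath import normpath
from urllib.parse import unquote

DOCROOT = "/var/www/public"  # hypothetical document root


def resolve(escaped_path):
    """Decode an escaped URL path and join it to the docroot,
    the way a naive static-file handler might."""
    return normpath(DOCROOT + unquote(escaped_path))


# The example from this thread: %2f decodes to '/', and the
# resulting '..' segment is collapsed away.
assert resolve("/buttons/a%2f..%2fbutton.png") == \
    "/var/www/public/buttons/button.png"

# An encoded '..' at the top level resolves outside the document
# root -- which is why the decoded path must always be checked.
assert resolve("/%2e%2e/secret") == "/var/www/secret"
```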
In other words it's possible to generate URLs which may end up outside your designated document root. The risk is not Nginx that could try to serve stuff beyond the document root, but that the Rails app might write its cache file in unexpected places. I don't know if cache_page validates the file name to ensure it's within $RAILS_ENV/public. It certainly ought to. There's also the risk that cache_page could overwrite *other* files within the public directory, of course, such as stuff in public/images. > 2) the uri might include query parameters If so you will need to create an Nginx variable to appends the query string to the end of the $request_filename. But a better option is to rely on Rails routes. Here's a typical route we use for rendering buttons: map.connect 'cache/button/:id', :controller => "theme", :action => 'button', :requirements => {:id => /.*/} This will map a URL such as this: http://example.com/cache/button/style=green;text=Click+me.png to a controller action as well as a nicely readable file name within our cache directory. In the controller we parse the file name and do the rendering: def button options = Button.parse_options(params[:id]) button = Button.new(options) ... data = button.render.to_blob send_data(data, :type => button.content_type, :disposition => "inline") cache_page(data) end Alexander. From is at rambler-co.ru Sun Feb 17 20:40:47 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Sun, 17 Feb 2008 20:40:47 +0300 Subject: URL encoding and other hackery In-Reply-To: <47B7BB90.8070006@urbanspoon.com> References: <47B7BB90.8070006@urbanspoon.com> Message-ID: <20080217174047.GA7304@rambler-co.ru> On Sat, Feb 16, 2008 at 08:44:00PM -0800, Adam Doppelt wrote: > Hi. First, let me just say that I love nginx. Thanks for creating and > maintaining it - we appreciate it. > > I am using nginx as the front end to a rails cluster. When rails > generates a page I write the page to disk, where nginx can look for it > later. 
I want to use something like this: > > if (-f $document_root/$uri) You should use $request_filename instead: - it's "$document_root$uri". However, $request_filename correctly handle "root" as "alias". > But I anticipate a few problems: > > 1) the uri might include ".." or similar hackery > 2) the uri might include query parameters $uri and $request_filename does not contains query parameters. The query parameters are available via $args or $query_string (the later is for compatibilty with Apache). > That leads to my questions: > > 1) Does nginx validate incoming uris? Will it strip out ".."? Yes, nginx processes various /./, /../ in clear and escaped form, and does not allow to to below URI's root. > 2) Can I URL encode a variable? I do not understand the question. -- Igor Sysoev http://sysoev.ru/en/ From manlio_perillo at libero.it Sun Feb 17 20:47:20 2008 From: manlio_perillo at libero.it (Manlio Perillo) Date: Sun, 17 Feb 2008 18:47:20 +0100 Subject: URL encoding and other hackery In-Reply-To: <20080217174047.GA7304@rambler-co.ru> References: <47B7BB90.8070006@urbanspoon.com> <20080217174047.GA7304@rambler-co.ru> Message-ID: <47B87328.3010506@libero.it> Igor Sysoev ha scritto: > [...] > $uri and $request_filename does not contains query parameters. > The query parameters are available via $args or $query_string (the later > is for compatibilty with Apache). > By the way, I have noted that the uri fragment is removed? Why? > [...] Thanks Manlio Perillo From is at rambler-co.ru Sun Feb 17 20:51:31 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Sun, 17 Feb 2008 20:51:31 +0300 Subject: URL encoding and other hackery In-Reply-To: <47B87328.3010506@libero.it> References: <47B7BB90.8070006@urbanspoon.com> <20080217174047.GA7304@rambler-co.ru> <47B87328.3010506@libero.it> Message-ID: <20080217175131.GC7304@rambler-co.ru> On Sun, Feb 17, 2008 at 06:47:20PM +0100, Manlio Perillo wrote: > Igor Sysoev ha scritto: > >[...] 
> >$uri and $request_filename does not contains query parameters. > >The query parameters are available via $args or $query_string (the later > >is for compatibilty with Apache). > > > > By the way, I have noted that the uri fragment is removed? Why? Do you mean "/uri#fragment" ? It is removed by browser, but not nginx. -- Igor Sysoev http://sysoev.ru/en/ From amd at urbanspoon.com Sun Feb 17 21:24:45 2008 From: amd at urbanspoon.com (Adam Doppelt) Date: Sun, 17 Feb 2008 10:24:45 -0800 Subject: URL encoding and other hackery In-Reply-To: <20080217174047.GA7304@rambler-co.ru> References: <47B7BB90.8070006@urbanspoon.com> <20080217174047.GA7304@rambler-co.ru> Message-ID: <47B87BED.8030902@urbanspoon.com> Igor Sysoev wrote: >> later. I want to use something like this: >> >> if (-f $document_root/$uri) >> > > You should use $request_filename instead: - it's "$document_root$uri". > However, $request_filename correctly handle "root" as "alias". > I'm looking for a flavor of $request_filename that includes query arguments, so that I can cache a version of the page that includes the arguments. At the very least, I need to differentiate between a version WITH arguments and a version WITHOUT arguments. I don't want those two requests to serve up the same page. This is very important for SEO purposes, to avoid duplicate content being shown at two different urls. I could use this: if (-f $document_root/$request_uri) But there are escaping issues. Perhaps nginx needs something like this: $request_filename_with_args >> 2) Can I URL encode a variable? >> > > I do not understand the question. > Another solution to my problem would be something like: set $path url_escape($request_uri) if (-f $document_root/$path) { ... Adam -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eden at mojiti.com Mon Feb 18 02:47:14 2008 From: eden at mojiti.com (Eden Li) Date: Mon, 18 Feb 2008 07:47:14 +0800 Subject: URL encoding and other hackery In-Reply-To: <47B87BED.8030902@urbanspoon.com> References: <47B7BB90.8070006@urbanspoon.com> <20080217174047.GA7304@rambler-co.ru> <47B87BED.8030902@urbanspoon.com> Message-ID: <8FB0BC97-DDD7-48E1-A100-CC03E7A7FE4B@mojiti.com> What escaping issues? $request_uri will already be uri-encoded, so you can use it directly on the file system. You just need to make sure whatever's behind nginx making those files have ordered the parameters correctly. for example, given a request $request_uri = /path/to/some.xml?a=%22&b= %23 $document_root$request_uri ==> /path/to/root/path/to/some.xml?a=%22&b= %23 this is a valid path in most file systems. (note, there's no slash between document_root and request_uri) On Feb 18, 2008, at 2:24 AM, Adam Doppelt wrote: > I could use this: > > if (-f $document_root/$request_uri) > > But there are escaping issues. Perhaps nginx needs something like > this: > > $request_filename_with_args From just.starting at gmail.com Mon Feb 18 11:48:29 2008 From: just.starting at gmail.com (just starting) Date: Mon, 18 Feb 2008 14:18:29 +0530 Subject: If file not found redirect to proxy... Message-ID: <3898fa730802180048t798b6194o4fd2e82032bccea8@mail.gmail.com> hi, I am using nginx to serve static files and proxying to jetty for dynamic pages. What I generally do is build the .war file, put it on jetty and also put the static files on nginx for nginx to serve. Now, can I do this, if some static file is not present in nginx, it will ask the jetty server for this file? I tried to proxy to jetty server if the requested file is not present, but it is not allowing me to put the proxy_pass line inside the if(filenotfound) block. Is there any other way to achieve that. Thanks, Rakesh. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From denis at gostats.ru Mon Feb 18 12:01:09 2008 From: denis at gostats.ru (Denis F. Latypoff) Date: Mon, 18 Feb 2008 15:01:09 +0600 Subject: If file not found redirect to proxy... In-Reply-To: <3898fa730802180048t798b6194o4fd2e82032bccea8@mail.gmail.com> References: <3898fa730802180048t798b6194o4fd2e82032bccea8@mail.gmail.com> Message-ID: <1325693575.20080218150109@gostats.ru> Hello just, Monday, February 18, 2008, 2:48:29 PM, you wrote: > hi, > I am using nginx to serve static files and proxying to jetty for dynamic > pages. > What I generally do is build the .war file, put it on jetty and also put the > static files on nginx for nginx to serve. > Now, can I do this, if some static file is not present in nginx, it will ask > the jetty server for this file? > I tried to proxy to jetty server if the requested file is not present, but > it is not allowing me to put the proxy_pass line inside the if(filenotfound) > block. > Is there any other way to achieve that. location / { error_page 404 = @backend; } location @backend { proxy_pass ...; } > Thanks, > Rakesh. -- Best regards, Denis mailto:denis at gostats.ru From igor at pokelondon.com Mon Feb 18 12:43:25 2008 From: igor at pokelondon.com (Igor Clark) Date: Mon, 18 Feb 2008 09:43:25 +0000 Subject: SSL cert choice In-Reply-To: <47B58CA6.9040703@nekomancer.net> References: <47B58CA6.9040703@nekomancer.net> Message-ID: Hi there A customer needs to buy their own SSL certificate because of legal requirements. We will run the site on nginx/0.5.35 on CentOS 5. They have provided the following options, presumably from the certificate vendor. I generated the CSR using openssl on the server. Are "Apache" or RedHat the best choices? 
Thanks, Igor > Microsoft > Netscape > Apache > iPlanet > Advanced Businesslink > AOL > Alteon > Aventail > BEA weblogic > C2Net Stronghold > Cacheflow > Compaq > Covalent > Domino > F5 > Hummingbirg > IBM > IBM HTTP > Ingrian networks > Intel > Lotus > MS Front page > MS Bisual InterDev 6.0 > Mirapoint > Nanoteq > Netscreen > Nokia > Novell > O'Reilly Website 2.5 or higher > Redhat > SilverStream Software > Sonic WALL > Tandem > Velocity Software > WebMethods > WEbsphere > Zeus -- Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com From jodok at lovelysystems.com Mon Feb 18 12:58:06 2008 From: jodok at lovelysystems.com (Jodok Batlogg) Date: Mon, 18 Feb 2008 10:58:06 +0100 Subject: SSL cert choice In-Reply-To: References: <47B58CA6.9040703@nekomancer.net> Message-ID: <10DA2504-D8F5-4AF2-85CF-FAE439CDB75E@lovelysystems.com> On 18.02.2008, at 10:43, Igor Clark wrote: > Hi there > > A customer needs to buy their own SSL certificate because of legal > requirements. > We will run the site on nginx/0.5.35 on CentOS 5. > They have provided the following options, presumably from the > certificate vendor. > I generated the CSR using openssl on the server. > Are "Apache" or RedHat the best choices? apache is fine jodok > > > Thanks, > Igor > >> Microsoft >> Netscape >> Apache >> iPlanet >> Advanced Businesslink >> AOL >> Alteon >> Aventail >> BEA weblogic >> C2Net Stronghold >> Cacheflow >> Compaq >> Covalent >> Domino >> F5 >> Hummingbirg >> IBM >> IBM HTTP >> Ingrian networks >> Intel >> Lotus >> MS Front page >> MS Bisual InterDev 6.0 >> Mirapoint >> Nanoteq >> Netscreen >> Nokia >> Novell >> O'Reilly Website 2.5 or higher >> Redhat >> SilverStream Software >> Sonic WALL >> Tandem >> Velocity Software >> WebMethods >> WEbsphere >> Zeus > > > -- > Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 > 7749 5355 // www.pokelondon.com > > > > > -- "Beautiful is better than ugly." 
-- The Zen of Python, by Tim Peters Jodok Batlogg, Lovely Systems GmbH Schmelzh?tterstra?e 26a, 6850 Dornbirn, Austria mobile: +43 676 5683591, phone: +43 5572 908060 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2454 bytes Desc: not available URL: From just.starting at gmail.com Mon Feb 18 13:20:04 2008 From: just.starting at gmail.com (just starting) Date: Mon, 18 Feb 2008 15:50:04 +0530 Subject: If file not found redirect to proxy... In-Reply-To: <1325693575.20080218150109@gostats.ru> References: <3898fa730802180048t798b6194o4fd2e82032bccea8@mail.gmail.com> <1325693575.20080218150109@gostats.ru> Message-ID: <3898fa730802180220w4c5deb30va10d80babad9711e@mail.gmail.com> Thanks, that did the job. On FeThb 18, 2008 2:31 PM, Denis F. Latypoff wrote: > Hello just, > > Monday, February 18, 2008, 2:48:29 PM, you wrote: > > > hi, > > > I am using nginx to serve static files and proxying to jetty for dynamic > > pages. > > > What I generally do is build the .war file, put it on jetty and also put > the > > static files on nginx for nginx to serve. > > > Now, can I do this, if some static file is not present in nginx, it will > ask > > the jetty server for this file? > > > I tried to proxy to jetty server if the requested file is not present, > but > > it is not allowing me to put the proxy_pass line inside the > if(filenotfound) > > block. > > > Is there any other way to achieve that. > > location / { > error_page 404 = @backend; > } > > location @backend { > proxy_pass ...; > } > > > Thanks, > > Rakesh. > > > > -- > Best regards, > Denis mailto:denis at gostats.ru > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From just.starting at gmail.com Mon Feb 18 13:26:04 2008 From: just.starting at gmail.com (just starting) Date: Mon, 18 Feb 2008 15:56:04 +0530 Subject: trailing / problem Message-ID: <3898fa730802180226u2dd83a21v5e2c52a1c47ad654@mail.gmail.com> hi, When I dont put a trailing '/', the testsite is redirecting me to localhost. conf file snippet: location /testsite/ { root /usr/local/nginx/html; index index.html view.html /view.html; error_page 404 = @jetty; } Now when I enter www.mysite.com/testsite/ it works fine, but when I try www.mysite.com/mysite it redirects me to http://localhost/testsite/ What is the problem here. If you cant come to any conclusion without looking at the conf file let me know, I will copy it. Thanks, Rakesh. -------------- next part -------------- An HTML attachment was scrubbed... URL: From denis at gostats.ru Mon Feb 18 13:31:50 2008 From: denis at gostats.ru (Denis F. Latypoff) Date: Mon, 18 Feb 2008 16:31:50 +0600 Subject: trailing / problem In-Reply-To: <3898fa730802180226u2dd83a21v5e2c52a1c47ad654@mail.gmail.com> References: <3898fa730802180226u2dd83a21v5e2c52a1c47ad654@mail.gmail.com> Message-ID: <76642237.20080218163150@gostats.ru> Hello just, Monday, February 18, 2008, 4:26:04 PM, you wrote: > hi, > When I dont put a trailing '/', the testsite is redirecting me to localhost. > conf file snippet: > location /testsite/ { > root /usr/local/nginx/html; > index index.html view.html /view.html; > error_page 404 = @jetty; > } > Now when I enter www.mysite.com/testsite/ it works fine, but when I try > www.mysite.com/mysite it redirects me to http://localhost/testsite/ > What is the problem here. > If you cant come to any conclusion without looking at the conf file let me > know, I will copy it. yes, it would be good if you copy it. > Thanks, > Rakesh. 
-- Best regards, Denis mailto:denis at gostats.ru From igor at pokelondon.com Mon Feb 18 13:43:34 2008 From: igor at pokelondon.com (Igor Clark) Date: Mon, 18 Feb 2008 10:43:34 +0000 Subject: SSL cert choice In-Reply-To: <10DA2504-D8F5-4AF2-85CF-FAE439CDB75E@lovelysystems.com> References: <47B58CA6.9040703@nekomancer.net> <10DA2504-D8F5-4AF2-85CF-FAE439CDB75E@lovelysystems.com> Message-ID: <3B5C80A0-7C15-41FE-9A4C-58560A34BCEE@pokelondon.com> Thanks very much! i On 18 Feb 2008, at 09:58, Jodok Batlogg wrote: > On 18.02.2008, at 10:43, Igor Clark wrote: > >> Hi there >> >> A customer needs to buy their own SSL certificate because of legal >> requirements. >> We will run the site on nginx/0.5.35 on CentOS 5. >> They have provided the following options, presumably from the >> certificate vendor. >> I generated the CSR using openssl on the server. >> Are "Apache" or RedHat the best choices? > > apache is fine > > jodok > >> >> >> Thanks, >> Igor >> >>> Microsoft >>> Netscape >>> Apache >>> iPlanet >>> Advanced Businesslink >>> AOL >>> Alteon >>> Aventail >>> BEA weblogic >>> C2Net Stronghold >>> Cacheflow >>> Compaq >>> Covalent >>> Domino >>> F5 >>> Hummingbirg >>> IBM >>> IBM HTTP >>> Ingrian networks >>> Intel >>> Lotus >>> MS Front page >>> MS Bisual InterDev 6.0 >>> Mirapoint >>> Nanoteq >>> Netscreen >>> Nokia >>> Novell >>> O'Reilly Website 2.5 or higher >>> Redhat >>> SilverStream Software >>> Sonic WALL >>> Tandem >>> Velocity Software >>> WebMethods >>> WEbsphere >>> Zeus >> >> >> -- >> Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 >> 7749 5355 // www.pokelondon.com >> >> >> >> >> > > -- > "Beautiful is better than ugly." 
> -- The Zen of Python, by Tim Peters > > Jodok Batlogg, Lovely Systems GmbH > Schmelzh?tterstra?e 26a, 6850 Dornbirn, Austria > mobile: +43 676 5683591, phone: +43 5572 908060 > -- Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com From just.starting at gmail.com Mon Feb 18 14:10:37 2008 From: just.starting at gmail.com (just starting) Date: Mon, 18 Feb 2008 16:40:37 +0530 Subject: trailing / problem In-Reply-To: <76642237.20080218163150@gostats.ru> References: <3898fa730802180226u2dd83a21v5e2c52a1c47ad654@mail.gmail.com> <76642237.20080218163150@gostats.ru> Message-ID: <3898fa730802180310k1f80b5aepe5b51674a78b09c3@mail.gmail.com> Here it is: If there is any other prob that anyone finds with this config, please let me know. Thanks, Rakesh. =====================START===================== #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] $request ' # '"$status" $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; gzip on; gzip_http_version 1.0; gzip_min_length 1100; gzip_comp_level 2; gzip_proxied any; gzip_vary on; gzip_types text/plain text/html text/css text/xml application/x-javascript application/pdf application/xml application/\ xml+rss text/javascript; gzip_disable "MSIE [1-6]\."; #gzip_buffers 128 8k; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html/mysite; index index.html index.htm view.html; error_page 404 = @jetty; } location /testsite/ { root /usr/local/nginx/html; index index.html view.html /view.html; error_page 404 = @jetty; } location @jetty { 
proxy_pass http://127.0.0.1:8080; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location ~* ^.+\.(jsp|json)*$ { proxy_pass http://127.0.0.1:8080; #proxy_redirect false; include /usr/local/nginx/conf/proxy.conf; } location ~ /bind$ { proxy_pass http://127.0.0.1:8080; include /usr/local/nginx/conf/proxy.conf; } location /testsite/images/store/ { proxy_pass http://127.0.0.1:8080; include /usr/local/nginx/conf/proxy.conf; } location ~ /\.ht { deny all; } } } ==========THE END========== On Feb 18, 2008 4:01 PM, Denis F. Latypoff wrote: > Hello just, > > Monday, February 18, 2008, 4:26:04 PM, you wrote: > > > hi, > > > When I dont put a trailing '/', the testsite is redirecting me to > localhost. > > > conf file snippet: > > location /testsite/ { > > root /usr/local/nginx/html; > > index index.html view.html /view.html; > > error_page 404 = @jetty; > > } > > > Now when I enter www.mysite.com/testsite/ it works fine, but when I try > > www.mysite.com/mysite it redirects me to http://localhost/testsite/ > > > What is the problem here. > > > If you cant come to any conclusion without looking at the conf file let > me > > know, I will copy it. > > yes, it would be good if you copy it. > > > Thanks, > > Rakesh. > > > -- > Best regards, > Denis mailto:denis at gostats.ru > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From denis at gostats.ru Mon Feb 18 14:21:33 2008 From: denis at gostats.ru (Denis F. 
Latypoff) Date: Mon, 18 Feb 2008 17:21:33 +0600 Subject: trailing / problem In-Reply-To: <3898fa730802180310k1f80b5aepe5b51674a78b09c3@mail.gmail.com> References: <3898fa730802180226u2dd83a21v5e2c52a1c47ad654@mail.gmail.com> <76642237.20080218163150@gostats.ru> <3898fa730802180310k1f80b5aepe5b51674a78b09c3@mail.gmail.com> Message-ID: <1705545573.20080218172133@gostats.ru> Hello just, Monday, February 18, 2008, 5:10:37 PM, you wrote: > Here it is: > If there is any other prob that anyone finds with this config, please let me > know. [...] > server { > listen 80; > server_name localhost; + server_name_id_redirect off; > #charset koi8-r; > #access_log logs/host.access.log main; [...] >> >> > Now when I enter www.mysite.com/testsite/ it works fine, but when I try >> > www.mysite.com/mysite it redirects me to http://localhost/testsite/ >> >> > What is the problem here. -- Best regards, Denis mailto:denis at gostats.ru From denis at gostats.ru Mon Feb 18 14:30:28 2008 From: denis at gostats.ru (Denis F. Latypoff) Date: Mon, 18 Feb 2008 17:30:28 +0600 Subject: trailing / problem In-Reply-To: <1705545573.20080218172133@gostats.ru> References: <3898fa730802180226u2dd83a21v5e2c52a1c47ad654@mail.gmail.com> <76642237.20080218163150@gostats.ru> <3898fa730802180310k1f80b5aepe5b51674a78b09c3@mail.gmail.com> <1705545573.20080218172133@gostats.ru> Message-ID: <1704371648.20080218173028@gostats.ru> Hello Denis, Monday, February 18, 2008, 5:21:33 PM, you wrote: > Hello just, > Monday, February 18, 2008, 5:10:37 PM, you wrote: >> Here it is: >> If there is any other prob that anyone finds with this config, please let me >> know. > [...] >> server { >> listen 80; >> server_name localhost; - server_name_id_redirect off; + server_name_in_redirect off; >> #charset koi8-r; >> #access_log logs/host.access.log main; > [...] 
>>> >>> > Now when I enter www.mysite.com/testsite/ it works fine, but when I try >>> > www.mysite.com/mysite it redirects me to http://localhost/testsite/ >>> >>> > What is the problem here. -- Best regards, Denis mailto:denis at gostats.ru From just.starting at gmail.com Mon Feb 18 15:13:26 2008 From: just.starting at gmail.com (just starting) Date: Mon, 18 Feb 2008 17:43:26 +0530 Subject: trailing / problem In-Reply-To: <1704371648.20080218173028@gostats.ru> References: <3898fa730802180226u2dd83a21v5e2c52a1c47ad654@mail.gmail.com> <76642237.20080218163150@gostats.ru> <3898fa730802180310k1f80b5aepe5b51674a78b09c3@mail.gmail.com> <1705545573.20080218172133@gostats.ru> <1704371648.20080218173028@gostats.ru> Message-ID: <3898fa730802180413y6f1bdf8ga76a47dfb9e5d5a1@mail.gmail.com> Thanks to both Dennis and Denis :) This is funny On Feb 18, 2008 5:00 PM, Denis F. Latypoff wrote: > Hello Denis, I was actually searching for the 2nd Denis and came to realise both are the same :P We have some really nice ppl here, helping each other at every stage. Thanks a lot guys, keep it up. Rakesh. > > > Monday, February 18, 2008, 5:21:33 PM, you wrote: > > > Hello just, > > > Monday, February 18, 2008, 5:10:37 PM, you wrote: > > >> Here it is: > > >> If there is any other prob that anyone finds with this config, please > let me > >> know. > > > [...] > > >> server { > >> listen 80; > >> server_name localhost; > > - server_name_id_redirect off; > + server_name_in_redirect off; > > >> #charset koi8-r; > >> #access_log logs/host.access.log main; > > > [...] > > >>> > >>> > Now when I enter www.mysite.com/testsite/ it works fine, but when I > try > >>> > www.mysite.com/mysite it redirects me to http://localhost/testsite/ > >>> > >>> > What is the problem here. > > > > > > -- > Best regards, > Denis mailto:denis at gostats.ru > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matteo.niccoli at staff.dada.net Tue Feb 19 12:37:36 2008 From: matteo.niccoli at staff.dada.net (Matteo Niccoli) Date: Tue, 19 Feb 2008 10:37:36 +0100 Subject: Rewrite migration from apache. Message-ID: <47BAA360.2050409@staff.dada.net> Hi, I'm migrating some sites from apache 2.0 to nginx. I have problems with this rewrite: RewriteRule ^([^/]{2,})/$ home.php?key=$1 I have changed to: rewrite ^([^/]{2,})/$ home.php?key=$1 last; but when I make a checkconfig, nginx output this: directive "rewrite" is not terminated by ";" in /etc/nginx/nginx.conf: Seems that {2,} is not accepted from nginx. Can anybody explain me how can I solve this? Thanks. From denis at gostats.ru Tue Feb 19 13:22:29 2008 From: denis at gostats.ru (Denis F. Latypoff) Date: Tue, 19 Feb 2008 16:22:29 +0600 Subject: Rewrite migration from apache. In-Reply-To: <47BAA360.2050409@staff.dada.net> References: <47BAA360.2050409@staff.dada.net> Message-ID: <1954392835.20080219162229@gostats.ru> Hello Matteo, Tuesday, February 19, 2008, 3:37:36 PM, you wrote: > Hi, > I'm migrating some sites from apache 2.0 to nginx. I have problems with > this rewrite: > RewriteRule ^([^/]{2,})/$ home.php?key=$1 > I have changed to: - rewrite ^([^/]{2,})/$ home.php?key=$1 last; + rewrite "^([^/]{2,})/$" home.php?key=$1 last; > but when I make a checkconfig, nginx output this: > directive "rewrite" is not terminated by ";" in /etc/nginx/nginx.conf: > Seems that {2,} is not accepted from nginx. Can anybody explain me > how can I solve this? the problem is in "{}" characters which are used by nginx as config block: location / { # <- begin block } # <- end block your config is: rewrite ^([^/]{ # <- begin block without proper terminating a directive by ";". > Thanks. 
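Putting the explanation above into a single line: quoting the pattern hides the braces of {2,} from the configuration parser, which otherwise treats { as the start of a configuration block. A sketch of the working directive (the leading slash on the replacement is added here as an assumption, matching the usual nginx rewrite form):

```nginx
# The quotes keep the parser from reading "{" as a config block.
rewrite  "^([^/]{2,})/$"  /home.php?key=$1  last;
```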
-- Best regards, Denis mailto:denis at gostats.ru From ba4an at ya.ru Sun Feb 17 23:34:46 2008 From: ba4an at ya.ru (Sergey Bochenkov) Date: Sun, 17 Feb 2008 20:34:46 +0000 (UTC) Subject: URL encoding and other hackery References: <47B7BB90.8070006@urbanspoon.com> <20080217174047.GA7304@rambler-co.ru> <47B87BED.8030902@urbanspoon.com> Message-ID: > Perhaps nginx needs something like this: > $request_filename_with_args Of course, you can just use: $request_filename$is_args$args without if () {} statement. :) From matteo.niccoli at staff.dada.net Tue Feb 19 13:38:50 2008 From: matteo.niccoli at staff.dada.net (Matteo Niccoli) Date: Tue, 19 Feb 2008 11:38:50 +0100 Subject: Rewrite migration from apache. In-Reply-To: <1954392835.20080219162229@gostats.ru> References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> Message-ID: <47BAB1BA.9060101@staff.dada.net> Denis F. Latypoff ha scritto: > the problem is in "{}" characters which are used by nginx as config block: > > location / { # <- begin block > } # <- end block > > your config is: rewrite ^([^/]{ # <- begin block without proper terminating a > directive by ";". Oh, thanks so much. Now it works! Bye. From igor at pokelondon.com Tue Feb 19 14:04:48 2008 From: igor at pokelondon.com (Igor Clark) Date: Tue, 19 Feb 2008 11:04:48 +0000 Subject: Location problems In-Reply-To: <47BAB1BA.9060101@staff.dada.net> References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> Message-ID: <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> Hi folks, I often have problems trying to use different locations without having to duplicate config. I think I must be thinking about it the wrong way! Basically I just want to make /admin/ password-protected, but inherit all the other config. 
So I tried this: location / { include /path/to/php.conf; # includes all fastcgi stuff and some rewrites location ~ /admin/.* { auth_basic "Restricted"; auth_basic_user_file /path/to/admin.htusers; } } But it doesn't work, so I tried this way which I've made work before: location / { include /path/to/php.conf; } location ~ /admin/.* { auth_basic "Restricted"; auth_basic_user_file /path/to/admin.htusers; include /path/to/php.conf; } But this doesn't work either, it includes the PHP file but doesn't do the auth, and there's no error in the log. I've tried various permutations on ~ /admin/.* too. What am I doing wrong? Many thanks, Igor -- Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com From denis at gostats.ru Tue Feb 19 14:23:46 2008 From: denis at gostats.ru (Denis F. Latypoff) Date: Tue, 19 Feb 2008 17:23:46 +0600 Subject: Location problems In-Reply-To: <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> Message-ID: <741757296.20080219172346@gostats.ru> Hello Igor, Tuesday, February 19, 2008, 5:04:48 PM, you wrote: > Hi folks, > I often have problems trying to use different locations without having > to duplicate config. > I think I must be thinking about it the wrong way! > Basically I just want to make /admin/ password-protected, but inherit > all the other config. 
> So I tried this: > location / { > include /path/to/php.conf; # includes all fastcgi stuff and some > rewrites > location ~ /admin/.* { > auth_basic "Restricted"; > auth_basic_user_file /path/to/admin.htusers; > } > } > But it doesn't work, so I tried this way which I've made work before: > location / { > include /path/to/php.conf; > } - location ~ /admin/.* { + location /admin { # not tested > auth_basic "Restricted"; > auth_basic_user_file /path/to/admin.htusers; > include /path/to/php.conf; > } > But this doesn't work either, it includes the PHP file but doesn't do > the auth, and there's no error in the log. I've tried various > permutations on ~ /admin/.* too. > What am I doing wrong? > Many thanks, > Igor > -- > Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 > 5355 // www.pokelondon.com -- Best regards, Denis mailto:denis at gostats.ru From rkmr.em at gmail.com Wed Feb 20 04:57:04 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Tue, 19 Feb 2008 17:57:04 -0800 Subject: help with location Message-ID: i want /userupload /userwebupload /uploadvideo etc, any url with 'upload' in it to get served by a specific backend, i can use fastcgi_pass for the backend part.. how do i specify location for this ? something like this? location / *upload* { root /home/mark/work/pop; fastcgi_pass backend_pop; thanks a lot From eden at mojiti.com Wed Feb 20 05:38:20 2008 From: eden at mojiti.com (Eden Li) Date: Wed, 20 Feb 2008 10:38:20 +0800 Subject: help with location In-Reply-To: References: Message-ID: <72D27B51-8754-4A5B-8ADC-430AFD0F8045@mojiti.com> Your answer is here: http://wiki.codemongers.com/NginxHttpCoreModule#location On Feb 20, 2008, at 9:57 AM, rkmr.em at gmail.com wrote: > i want > /userupload > /userwebupload > /uploadvideo > etc, > any url with 'upload' in it to get served by a specific backend, i can > use fastcgi_pass for the backend part.. how do i specify location for > this ? > something like this? 
> location / *upload* { > root /home/mark/work/pop; > fastcgi_pass backend_pop; > > > thanks a lot > From phill at theactivitypeople.co.uk Wed Feb 20 12:46:23 2008 From: phill at theactivitypeople.co.uk (Phillip B Oldham) Date: Wed, 20 Feb 2008 09:46:23 +0000 Subject: Multiple vhosts with wildcards? Message-ID: <47BBF6EF.2070600@theactivitypeople.co.uk> Hi At the moment we're using lighttpd, but its proving to be a little flaky with php-fcgi. I'd like to know whether its possible to get the following set-up working so I can replace lighttpd with nginx. We use a *lot* of wildcard domains. The subdomains correspond to a client, and they have their own areas: review.*.ourdomain.com mail.*.ourdomain.com dev.*.ourdomain.com *.ourdomain.com This is pretty trivial to set up in lighttpd: $HTTP["host"] =~ "^review\.(.*)\.ourdomain\.com" {} $HTTP["host"] =~ "^mail\.(.*)\.ourdomain\.com" {} $HTTP["host"] =~ "^dev\.(.*)\.ourdomain\.com" {} $HTTP["host"] =~ "^(.*)\.ourdomain\.com" {} Trying to set up something similar in nginx raises problems. This is my setup: server { server_name review.it.ourdomain.com review.*.ourdomain.com; ... } server { server_name dev.it.ourdomain.com dev.*.ourdomain.com; ... } but this just throws the following error: [emerg] 3320#0: invalid server name or wildcard "dev.*.ourdomain.com" Any idea how I can achieve the result I'm looking for? -- *Phillip B Oldham* The Activity People phill at theactivitypeople.co.uk ------------------------------------------------------------------------ *Policies* This e-mail and its attachments are intended for the above named recipient(s) only and may be confidential. If they have come to you in error, please reply to this e-mail and highlight the error. No action should be taken regarding content, nor must you copy or show them to anyone. 
This e-mail has been created in the knowledge that Internet e-mail is not a 100% secure communications medium, and we have taken steps to ensure that this e-mail and attachments are free from any virus. We must advise that in keeping with good computing practice the recipient should ensure they are completely virus free, and that you understand and observe the lack of security when e-mailing us. ------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: phill.vcf Type: text/x-vcard Size: 261 bytes Desc: not available URL: From martin.schoettler at email.de Wed Feb 20 13:29:09 2008 From: martin.schoettler at email.de (=?ISO-8859-15?Q?Martin_Sch=F6ttler?=) Date: Wed, 20 Feb 2008 11:29:09 +0100 Subject: Certificate issue on two IP-addresses with same port Message-ID: <47BC00F5.4020003@email.de> I need to run two web applications (Ruby on Rails) on one server over SSL. Both applications shall run on port 443 (other wise we get firewall problems). What I did: - I got two IP-addresses (85.214.47.37, 85.214.56.139) for the server. - I connected different domains to that addresses (gedis-intern.de -> 85.214.47.37, gedis-second.de -> 85.214.56.139 - I configured two vhosts for nginx to listen on - listen 443; server_name secure.gedis-intern.de; - listen 443; server_name secure.ticket-db.gedis-second.de; - I provided certificates for the two vhosts (ssl_certificate, ssl_certificate_key) When I connect to the second application (https://secure.ticket-db.gedis-second.de), then the *wrong certificate* is presented (that of the first one) to the client. Therefore the browser displays a warning, which confuses the user. (*That is the problem*) If I set port 444 for the second application, than everything works fine. But - as I said - port 444 is sometimes blocked by firewalls. Any help is appreciated! 
Thanks, Martin ______________________________ Martin Schoettler Herzogstandweg 21 D-82431 Kochel am See fon +49-(0) 88 51 - 92 31 54 fax +49-(0) 88 51 - 92 31 56 gsm +49-(0) 163 - 44 33 621 private +49-(0) 8851 - 7581 Skype: martin.schoettler ______________________________ From lists at humanesoftware.com Wed Feb 20 13:39:12 2008 From: lists at humanesoftware.com (Mark Slater) Date: Wed, 20 Feb 2008 02:39:12 -0800 Subject: rewrite POST into GET? In-Reply-To: <20080215062914.GA49201@rambler-co.ru> References: <62D553DE-0B71-43D1-A0B1-9CAF5BDB8024@humanesoftware.com> <20080214140122.GB27753@rambler-co.ru> <01883D4E-5C37-4AEE-9D68-753F5AE35310@humanesoftware.com> <20080215062914.GA49201@rambler-co.ru> Message-ID: Hey Igor, Again thank you for such a fast response. I tried the new configuration and it works perfectly! Mark On Feb 14, 2008, at 10:29 PM, Igor Sysoev wrote: > On Thu, Feb 14, 2008 at 05:13:57PM -0800, Mark Slater wrote: > >> Wow Igor, that was fast! Thank you! >> >> I downloaded the development version of nginx (I'd been using the >> previous stable version 0.5.35), and applied the patch. >> Unfortunately, >> when I started the new version of the server, the post_to_static >> didn't change the 405 result sent back to facebook. >> >> My configuration file looks like this: >> >> http { >> ... >> server { >> listen 8080; >> server_name localhost; >> >> # set the max size for file uploads to 50 MB. >> client_max_body_size 50M; >> >> #charset koi8-r; >> >> access_log logs/host.vhost.access.log main; >> root /usr/local/webapps/listage/current/public; >> >> if (-f $document_root/system/maintenance.html) { >> rewrite ^(.*)$ /system/maintenance.html last; >> break; >> post_to_static on; >> } >> >> location / { >> ... >> } >> } >> } >> >> Do I have that right? > > I was wrong - the configuration should be changed to: > > server { > > ... 
> > if (-f $document_root/system/maintenance.html) { > rewrite ^(.*)$ /system/maintenance.html break; > break; > } > > location = /system/maintenance.html { > post_to_static on; > } > > > -- > Igor Sysoev > http://sysoev.ru/en/ > From cliff at develix.com Wed Feb 20 13:58:15 2008 From: cliff at develix.com (Cliff Wells) Date: Wed, 20 Feb 2008 02:58:15 -0800 Subject: Certificate issue on two IP-addresses with same port In-Reply-To: <47BC00F5.4020003@email.de> References: <47BC00F5.4020003@email.de> Message-ID: <1203505095.13326.11.camel@portableevil.develix.com> On Wed, 2008-02-20 at 11:29 +0100, Martin Sch?ttler wrote: > - I connected different domains to that addresses (gedis-intern.de -> > 85.214.47.37, gedis-second.de -> 85.214.56.139 > - I configured two vhosts for nginx to listen on > - listen 443; server_name secure.gedis-intern.de; > - listen 443; server_name secure.ticket-db.gedis-second.de; You have to configure the address, otherwise they will both try to connect to all interfaces: server { server_name gedis-intern.de; listen 85.214.47.37:443; # ... } server { server_name gedis-second.de; listen 85.214.56.139:443; # ... } Regards, Cliff From is at rambler-co.ru Wed Feb 20 15:25:59 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 20 Feb 2008 15:25:59 +0300 Subject: Multiple vhosts with wildcards? In-Reply-To: <47BBF6EF.2070600@theactivitypeople.co.uk> References: <47BBF6EF.2070600@theactivitypeople.co.uk> Message-ID: <20080220122559.GB76459@rambler-co.ru> On Wed, Feb 20, 2008 at 09:46:23AM +0000, Phillip B Oldham wrote: > At the moment we're using lighttpd, but its proving to be a little flaky > with php-fcgi. I'd like to know whether its possible to get the > following set-up working so I can replace lighttpd with nginx. > > We use a *lot* of wildcard domains. 
The subdomains correspond to a > client, and they have their own areas: > > review.*.ourdomain.com > mail.*.ourdomain.com > dev.*.ourdomain.com > *.ourdomain.com > > This is pretty trivial to set up in lighttpd: > > $HTTP["host"] =~ "^review\.(.*)\.ourdomain\.com" {} > $HTTP["host"] =~ "^mail\.(.*)\.ourdomain\.com" {} > $HTTP["host"] =~ "^dev\.(.*)\.ourdomain\.com" {} > $HTTP["host"] =~ "^(.*)\.ourdomain\.com" {} > > Trying to set up something similar in nginx raises problems. This is my > setup: > > server { > server_name review.it.ourdomain.com review.*.ourdomain.com; > ... > } > > server { > server_name dev.it.ourdomain.com dev.*.ourdomain.com; > ... > } > > but this just throws the following error: > > [emerg] 3320#0: invalid server name or wildcard "dev.*.ourdomain.com" > > Any idea how I can achieve the result I'm looking for? > -- Use regex (note "~"): server { server_name review.it.ourdomain.com ~^review\..+\.ourdomain\.com$; ... } -- Igor Sysoev http://sysoev.ru/en/ From yusufg at gmail.com Wed Feb 20 16:39:11 2008 From: yusufg at gmail.com (Yusuf Goolamabbas) Date: Wed, 20 Feb 2008 21:39:11 +0800 Subject: Multiple vhosts with wildcards? In-Reply-To: <20080220122559.GB76459@rambler-co.ru> References: <47BBF6EF.2070600@theactivitypeople.co.uk> <20080220122559.GB76459@rambler-co.ru> Message-ID: > Use regex (note "~"): > > server { > server_name review.it.ourdomain.com ~^review\..+\.ourdomain\.com$; > ... > } > Cool, never knew that server_name supported regex. Didn't see that in the English wiki, just cross referencd it in the Russian docs From is at rambler-co.ru Wed Feb 20 16:44:18 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 20 Feb 2008 16:44:18 +0300 Subject: Multiple vhosts with wildcards? 
In-Reply-To: References: <47BBF6EF.2070600@theactivitypeople.co.uk> <20080220122559.GB76459@rambler-co.ru> Message-ID: <20080220134418.GE76459@rambler-co.ru> On Wed, Feb 20, 2008 at 09:39:11PM +0800, Yusuf Goolamabbas wrote: > > Use regex (note "~"): > > > > server { > > server_name review.it.ourdomain.com ~^review\..+\.ourdomain\.com$; > > ... > > } > > Cool, never knew that server_name supported regex. Didn't see that in > the English wiki, just cross referencd it in the Russian docs It had appeared in 0.6.7 and 0.5.33. -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Wed Feb 20 22:49:12 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 20:49:12 +0100 Subject: excessive RAM consumption - memory leak Message-ID: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> I am using Nginx 0.5.35 on a server that has a Xeon 5130 dual core, 4 GB of RAM, and 10,000 RPM HD. Nginx serves millions of large images per day. Sometimes tens of millions in a day. Over the past week the server has been experiencing record traffic. I am looking for a way to reduce the amount of RAM consumed by Nginx, but still deliver images at the same rate. The CPU cycles are at an acceptable level, but yesterday Nginx nearly consumed all 4 GB of RAM before I had to reboot the server under very heavy traffic. All this server does is serve images files like jpeg, etc. Some of my configuration is below: Since this is a dual core CPU I am using: worker_processes 2; worker_connections 12000; use epoll; # This is a RedHat Enterprise Server 4 I also have: gzip on; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 75 20; server_names_hash_bucket_size 128; I have tried reducing the keepalive_timeout to close the connection sooner, so that resources might be freed sooner, but it has no noticable effect. Can someone make some suggestions how I could handle the same traffic, but manage the RAM usage better. -- Posted via http://www.ruby-forum.com/. 
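An aside on the configuration quoted above: each of the 12000 worker_connections that sits idle in keep-alive holds its buffers for up to the full keepalive_timeout, so on a pure static-image host a much shorter timeout is a common first experiment. Illustrative values only, not a measured recommendation:

```nginx
# Close idle keep-alive connections after 10 seconds instead of 75,
# and advertise the same limit to clients, so per-connection
# resources are released sooner under heavy image traffic.
keepalive_timeout  10 10;
```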
From eliott at cactuswax.net Wed Feb 20 23:15:39 2008 From: eliott at cactuswax.net (eliott) Date: Wed, 20 Feb 2008 12:15:39 -0800 Subject: excessive RAM consumption - memory leak In-Reply-To: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> Message-ID: <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> On 2/20/08, Todd HG wrote: > I have tried reducing the keepalive_timeout to close the connection > sooner, so that resources might be freed sooner, but it has no noticeable > effect. > > Can someone make some suggestions how I could handle the same traffic, > but manage the RAM usage better. Wow. I would set keepalive to maybe 5 or 10, not 75. If you are serving that much traffic, I might even try turning keepalives off altogether. But if you already modified those values, and didn't see a change, then I don't know. You may try setting expires headers for your images, if they don't change very often (or at all). From dave at cheney.net Wed Feb 20 23:31:00 2008 From: dave at cheney.net (Dave Cheney) Date: Thu, 21 Feb 2008 07:31:00 +1100 Subject: excessive RAM consumption - memory leak In-Reply-To: <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> Message-ID: <6E3ABA69-E0AF-454C-AED6-1E2C0C9503F9@cheney.net> Hi Todd, I'm surprised you are finding memory usage a problem, we handle 2mm hits / day with nginx and memory usage is about 3.5 megabytes total. What revision of nginx are you running? Try pointing yslow at your host, and then follow its recommendations to improve response times (mainly to reduce origin hits). Cheers Dave On 21/02/2008, at 7:15 AM, eliott wrote: > On 2/20/08, Todd HG wrote: >> I have tried reducing the keepalive_timeout to close the connection >> sooner, so that resources might be freed sooner, but it has no >> noticeable >> effect. 
>> >> Can someone make some suggestions how I could handle the same >> traffic, >> but manage the RAM usage better. > > Wow. I would set keepalive to maybe 5 or 10, not 75. > If you are serving that much traffic, I might even try turning > keepalives off altogether. > > But if you already modified those values, and didn't see a change, > then I don't know. > You may try setting expires headers for your images, if they don't > change very often (or at all). > From is at rambler-co.ru Wed Feb 20 23:42:51 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 20 Feb 2008 23:42:51 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> Message-ID: <20080220204251.GI76459@rambler-co.ru> On Wed, Feb 20, 2008 at 08:49:12PM +0100, Todd HG wrote: > I am using Nginx 0.5.35 on a server that has a Xeon 5130 dual core, 4 GB > of RAM, and 10,000 RPM HD. Nginx serves millions of large images per > day. Sometimes tens of millions in a day. Over the past week the server > has been experiencing record traffic. I am looking for a way to reduce > the amount of RAM consumed by Nginx, but still deliver images at the > same rate. The CPU cycles are at an acceptable level, but yesterday > Nginx nearly consumed all 4 GB of RAM before I had to reboot the server > under very heavy traffic. All this server does is serve images files > like jpeg, etc. Some of my configuration is below: > > Since this is a dual core CPU I am using: > > worker_processes 2; > worker_connections 12000; > use epoll; # This is a RedHat Enterprise Server 4 > > I also have: > > gzip on; > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 75 20; > server_names_hash_bucket_size 128; > > I have tried reducing the keepalive_timeout to close the connection > sooner, so that resources might be freed sooner, but it has no noticable > effect. 
> > Can someone make some suggestions how I could handle the same traffic, > but manage the RAM usage better. Do you use the standard nginx without any external modules ? Does nginx serve static files only without any proxy, fastcgi, perl, etc. processing ? What does ps ax -o pid,ppid,%cpu,vsz,wchan,command|egrep '(nginx|PID)' show ? -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Thu Feb 21 00:22:52 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 22:22:52 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <20080220204251.GI76459@rambler-co.ru> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> Message-ID: <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> > Do you use the standard nginx without any external modules ? > Does nginx serve static files only without any proxy, fastcgi, perl, > etc. > processing ? > > What does > > ps ax -o pid,ppid,%cpu,vsz,wchan,command|egrep '(nginx|PID)' > > show ? Yes, I use standard Nginx without proxy, fastcgi, perl, ect. It is compiled and installed without any added modules. Nginx is only serving the static image files. ps ax -o pid,ppid,%cpu,vsz,wchan,command|egrep '(nginx|PID)' shows: PID PPID %CPU VSZ WCHAN COMMAND 9327 1 0.0 2376 rt_sig nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf 9328 9327 8.8 11192 - nginx: worker process 9329 9327 8.7 13200 - nginx: worker process 23660 23641 0.0 5136 pipe_w egrep (nginx|PID) I also have configured: client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; -- Posted via http://www.ruby-forum.com/. 
From lists at ruby-forum.com Thu Feb 21 00:31:57 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 22:31:57 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <6E3ABA69-E0AF-454C-AED6-1E2C0C9503F9@cheney.net> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <6E3ABA69-E0AF-454C-AED6-1E2C0C9503F9@cheney.net> Message-ID: <5cecf1a00da2e7b0e0303736dc3821ca@ruby-forum.com> Dave Cheney wrote: > Hi Todd, > > I'm supprised you are finding memory usage a problem, we handle 2mm > hits / day with nginx and memory usage is about 3.5 megabytes total. > > What revision of nginx are you running? > > Try pointing yslow at your host, and then follow its recommendations > to improve response times (mainly to reduce origin hits). > > Cheers > > Dave Could you post your config file, so that I might compare to my own? -- Posted via http://www.ruby-forum.com/. From is at rambler-co.ru Thu Feb 21 00:34:52 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 00:34:52 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> Message-ID: <20080220213452.GK76459@rambler-co.ru> On Wed, Feb 20, 2008 at 10:22:52PM +0100, Todd HG wrote: > > Do you use the standard nginx without any external modules ? > > Does nginx serve static files only without any proxy, fastcgi, perl, > > etc. > > processing ? > > > > What does > > > > ps ax -o pid,ppid,%cpu,vsz,wchan,command|egrep '(nginx|PID)' > > > > show ? > > Yes, I use standard Nginx without proxy, fastcgi, perl, ect. It is > compiled and installed without any added modules. Nginx is only serving > the static image files. 
> > ps ax -o pid,ppid,%cpu,vsz,wchan,command|egrep '(nginx|PID)' > > shows: > > PID PPID %CPU VSZ WCHAN COMMAND > 9327 1 0.0 2376 rt_sig nginx: master process > /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf > 9328 9327 8.8 11192 - nginx: worker process > 9329 9327 8.7 13200 - nginx: worker process > 23660 23641 0.0 5136 pipe_w egrep (nginx|PID) > > I also have configured: > > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; What gzip settings do you use ? -- Igor Sysoev http://sysoev.ru/en/ From dave at cheney.net Thu Feb 21 00:36:31 2008 From: dave at cheney.net (Dave Cheney) Date: Thu, 21 Feb 2008 08:36:31 +1100 Subject: excessive RAM consumption - memory leak In-Reply-To: <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> Message-ID: You should probably use the defaults for a server that only serves static images. If they client can't talk to you fast enough to send a small GET request, they probably won't be able to receive the response in a timely manner. Best to drop the quickly. Dave On 21/02/2008, at 8:22 AM, Todd HG wrote: > I also have configured: > > client_header_timeout 3m; > client_body_timeout 3m; > send_timeout 3m; From is at rambler-co.ru Thu Feb 21 00:41:20 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 00:41:20 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> Message-ID: <20080220214120.GL76459@rambler-co.ru> On Thu, Feb 21, 2008 at 08:36:31AM +1100, Dave Cheney wrote: > You should probably use the defaults for a server that only serves > static images. 
If they client can't talk to you fast enough to send a > small GET request, they probably won't be able to receive the response > in a timely manner. Best to drop the quickly. If nginx uses sendfile, it eats kernel memory, but not its own. So these timeouts should not affect on nginx memory usage. > On 21/02/2008, at 8:22 AM, Todd HG wrote: > > >I also have configured: > > > > client_header_timeout 3m; > > client_body_timeout 3m; > > send_timeout 3m; -- Igor Sysoev http://sysoev.ru/en/ From dave at cheney.net Thu Feb 21 00:43:13 2008 From: dave at cheney.net (Dave Cheney) Date: Thu, 21 Feb 2008 08:43:13 +1100 Subject: excessive RAM consumption - memory leak In-Reply-To: <5cecf1a00da2e7b0e0303736dc3821ca@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <6E3ABA69-E0AF-454C-AED6-1E2C0C9503F9@cheney.net> <5cecf1a00da2e7b0e0303736dc3821ca@ruby-forum.com> Message-ID: <57D69369-BA2C-4778-B016-986988DEC89A@cheney.net> Sure, please forward me your contact details, the config is spread out over many files so isn't appropriate to post here. Cheers Dave On 21/02/2008, at 8:31 AM, Todd HG wrote: > Dave Cheney wrote: >> Hi Todd, >> >> I'm supprised you are finding memory usage a problem, we handle 2mm >> hits / day with nginx and memory usage is about 3.5 megabytes total. >> >> What revision of nginx are you running? >> >> Try pointing yslow at your host, and then follow its recommendations >> to improve response times (mainly to reduce origin hits). >> >> Cheers >> >> Dave > > Could you post your config file, so that I might compare to my own? > -- > Posted via http://www.ruby-forum.com/. 
> From lists at ruby-forum.com Thu Feb 21 00:58:00 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 22:58:00 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <6E3ABA69-E0AF-454C-AED6-1E2C0C9503F9@cheney.net> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <6E3ABA69-E0AF-454C-AED6-1E2C0C9503F9@cheney.net> Message-ID: > Try pointing yslow at your host, and then follow its recommendations > to improve response times (mainly to reduce origin hits). Thanks for letting me know about Yslow. It's a great tool. For anyone who reads this later, Yslow requires Firebug to run in Firefox. http://www.getfirebug.com/ http://developer.yahoo.com/yslow/ -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Thu Feb 21 00:59:54 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 22:59:54 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <20080220213452.GK76459@rambler-co.ru> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> Message-ID: <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> Igor Sysoev wrote: > On Wed, Feb 20, 2008 at 10:22:52PM +0100, Todd HG wrote: > >> >> /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf >> 9328 9327 8.8 11192 - nginx: worker process >> 9329 9327 8.7 13200 - nginx: worker process >> 23660 23641 0.0 5136 pipe_w egrep (nginx|PID) >> >> I also have configured: >> >> client_header_timeout 3m; >> client_body_timeout 3m; >> send_timeout 3m; > > What gzip settings do you use ? 
gzip on; gzip_min_length 1100; gzip_buffers 4 8k; gzip_types image/jpg image/jpeg image/gif image/png text/plain text/xml application/xhtml+xml text/css application/xml image/svg+xml application/rss+xml application/atom_xml application/x-javascript application/x-httpd-php application/x-httpd-fastphp application/x-httpd-eruby text/html; gzip_comp_level 9; -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Thu Feb 21 01:07:46 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 23:07:46 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <57D69369-BA2C-4778-B016-986988DEC89A@cheney.net> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <6E3ABA69-E0AF-454C-AED6-1E2C0C9503F9@cheney.net> <5cecf1a00da2e7b0e0303736dc3821ca@ruby-forum.com> <57D69369-BA2C-4778-B016-986988DEC89A@cheney.net> Message-ID: <184a7b56ceeb97cf7e7401b90e138870@ruby-forum.com> Dave Cheney wrote: > Sure, please forward me your contact details, the config is spread out > over many files so isn't appropriate to post here. > > Cheers > > Dave I believe you can attach files when replying to these messages. I'm a bit wary of posting my email address on a bbs or chat forum. However, I would really like to see how you've configured your Nginx install. -- Posted via http://www.ruby-forum.com/. 
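A side note on the gzip_types list quoted above: JPEG, GIF and PNG are already compressed formats, so gzipping them spends CPU for little or no size reduction, and gzip_comp_level 9 is the most expensive setting for marginal extra savings. A leaner sketch for an image-heavy host (illustrative values, not from the thread):

```nginx
gzip             on;
gzip_min_length  1100;
gzip_buffers     4 8k;
# Compress text formats only; image formats are left out because
# they are already compressed.
gzip_types       text/plain text/css text/xml application/xml
                 application/xhtml+xml application/rss+xml
                 application/x-javascript;
# A moderate level; higher levels cost much more CPU per response.
gzip_comp_level  4;
```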
From jodok at lovelysystems.com Thu Feb 21 01:10:17 2008 From: jodok at lovelysystems.com (Jodok Batlogg) Date: Wed, 20 Feb 2008 23:10:17 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> Message-ID: <645F3183-3F9D-4A6F-B64B-52CADC41B46F@lovelysystems.com> On 20.02.2008, at 22:59, Todd HG wrote: > Igor Sysoev wrote: >> On Wed, Feb 20, 2008 at 10:22:52PM +0100, Todd HG wrote: >> >>> >>> /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf >>> 9328 9327 8.8 11192 - nginx: worker process >>> 9329 9327 8.7 13200 - nginx: worker process >>> 23660 23641 0.0 5136 pipe_w egrep (nginx|PID) >>> >>> I also have configured: >>> >>> client_header_timeout 3m; >>> client_body_timeout 3m; >>> send_timeout 3m; >> >> What gzip settings do you use ? > > gzip on; > gzip_min_length 1100; > gzip_buffers 4 8k; > gzip_types image/jpg image/jpeg image/gif image/png text/ > plain > text/xml application/xhtml+xml text/css application/xml image/svg+xml > application/rss+xml application/atom_xml application/x-javascript > application/x-httpd-php application/x-httpd-fastphp > application/x-httpd-eruby text/html; > gzip_comp_level 9; sorry, why do you compress jpeg, gif and png files? they are already compressed... double compression just uses cpu power and causes global warming :) jodok > > > -- > Posted via http://www.ruby-forum.com/. > -- "Beautiful is better than ugly." -- The Zen of Python, by Tim Peters Jodok Batlogg, Lovely Systems GmbH Schmelzhütterstraße 26a, 6850 Dornbirn, Austria mobile: +43 676 5683591, phone: +43 5572 908060 -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 2454 bytes Desc: not available URL: From dave at cheney.net Thu Feb 21 01:19:44 2008 From: dave at cheney.net (Dave Cheney) Date: Thu, 21 Feb 2008 09:19:44 +1100 Subject: excessive RAM consumption - memory leak In-Reply-To: <184a7b56ceeb97cf7e7401b90e138870@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <6E3ABA69-E0AF-454C-AED6-1E2C0C9503F9@cheney.net> <5cecf1a00da2e7b0e0303736dc3821ca@ruby-forum.com> <57D69369-BA2C-4778-B016-986988DEC89A@cheney.net> <184a7b56ceeb97cf7e7401b90e138870@ruby-forum.com> Message-ID: dave AT cheney DOT net On 21/02/2008, at 9:07 AM, Todd HG wrote: > Dave Cheney wrote: >> Sure, please forward me your contact details, the config is spread >> out >> over many files so isn't appropriate to post here. >> >> Cheers >> >> Dave > > I believe you can attach files when replying to these messages. I'm a > bit wary of posting my email address on a bbs or chat forum. > However, I > would really like to see how you've configured your Nginx install. > -- > Posted via http://www.ruby-forum.com/. > From lists at ruby-forum.com Thu Feb 21 01:21:17 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 23:21:17 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> Message-ID: <95316dd691d630244b27935945079d8e@ruby-forum.com> eliott wrote: > You may try setting expires headers for your images, if they don't > change very often (or at all). Do you have an example for setting the expire header. 
In the code example it shows: expires 24h; expires 0; expires -1; expires epoch; add_header Cache-Control private; server_tokens off; I'm not sure if I should be using only: expires 24h; add_header Cache-Control private; server_tokens off; or if I also need: expires 0; expires -1; expires epoch; -- Posted via http://www.ruby-forum.com/. From is at rambler-co.ru Thu Feb 21 01:22:08 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 01:22:08 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> Message-ID: <20080220222208.GO76459@rambler-co.ru> On Wed, Feb 20, 2008 at 10:59:54PM +0100, Todd HG wrote: > Igor Sysoev wrote: > > On Wed, Feb 20, 2008 at 10:22:52PM +0100, Todd HG wrote: > > > >> > >> /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf > >> 9328 9327 8.8 11192 - nginx: worker process > >> 9329 9327 8.7 13200 - nginx: worker process > >> 23660 23641 0.0 5136 pipe_w egrep (nginx|PID) > >> > >> I also have configured: > >> > >> client_header_timeout 3m; > >> client_body_timeout 3m; > >> send_timeout 3m; > > > > What gzip settings do you use ? > > gzip on; > gzip_min_length 1100; > gzip_buffers 4 8k; > gzip_types image/jpg image/jpeg image/gif image/png text/plain > text/xml application/xhtml+xml text/css application/xml image/svg+xml > application/rss+xml application/atom_xml application/x-javascript > application/x-httpd-php application/x-httpd-fastphp > application/x-httpd-eruby text/html; > gzip_comp_level 9; This is the cause of the memory and CPU consumption. You do not need to compress already-compressed jpegs/etc. If you serve images only, you should turn gzip off entirely (it is off by default).
As to other MIME types: 1) there are no such types as application/x-httpd-php application/x-httpd-fastphp application/x-httpd-eruby they probably exist as internal MIME-types inside Apache, but they are never shown to a client. 2) the following types application/xhtml+xml application/rss+xml application/atom_xml probably do not exist either. Keep the list as small as possible, because nginx iterates it sequentially. -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Thu Feb 21 01:23:26 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 23:23:26 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <645F3183-3F9D-4A6F-B64B-52CADC41B46F@lovelysystems.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> <645F3183-3F9D-4A6F-B64B-52CADC41B46F@lovelysystems.com> Message-ID: Jodok Batlogg wrote: > sorry, why do you compress jpeg, gif and png files? they are already > compressed... double compression just uses cpu power and causes global > warming :) > > jodok You make a great point. I had created a one-size-fits-all config, but I will comment out those files that don't need to be compressed. Thank you for pointing that out. -- Posted via http://www.ruby-forum.com/.
From dave at cheney.net Thu Feb 21 01:26:48 2008 From: dave at cheney.net (Dave Cheney) Date: Thu, 21 Feb 2008 09:26:48 +1100 Subject: excessive RAM consumption - memory leak In-Reply-To: <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> Message-ID: You might want to consider dropping your gzip ratio; local testing here showed little benefit past about 4. At level 9 you'll be using 4x the CPU for a tiny gain in compression, which is more than offset by the extra delay of over-compressing the pages. Also, as Jodok has just pointed out, there is little observable gain in compressing image/* mime types. [dave at crimson nginx]$ cat gzip.conf gzip on; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1100; gzip_comp_level 5; gzip_buffers 4 8k; gzip_types text/plain text/html text/xml text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/atom+xml; You could try the gzip_static module in the development branch to serve pre-compressed files. On 21/02/2008, at 8:59 AM, Todd HG wrote: > gzip on; > gzip_min_length 1100; > gzip_buffers 4 8k; > gzip_types image/jpg image/jpeg image/gif image/png text/ > plain > text/xml application/xhtml+xml text/css application/xml image/svg+xml > application/rss+xml application/atom_xml application/x-javascript > application/x-httpd-php application/x-httpd-fastphp > application/x-httpd-eruby text/html; > gzip_comp_level 9; From is at rambler-co.ru Thu Feb 21 01:27:19 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 01:27:19 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <20080220222208.GO76459@rambler-co.ru> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru>
<7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> <20080220222208.GO76459@rambler-co.ru> Message-ID: <20080220222719.GP76459@rambler-co.ru> On Thu, Feb 21, 2008 at 01:22:08AM +0300, Igor Sysoev wrote: > On Wed, Feb 20, 2008 at 10:59:54PM +0100, Todd HG wrote: > > > Igor Sysoev wrote: > > > On Wed, Feb 20, 2008 at 10:22:52PM +0100, Todd HG wrote: > > > > > >> > > >> /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf > > >> 9328 9327 8.8 11192 - nginx: worker process > > >> 9329 9327 8.7 13200 - nginx: worker process > > >> 23660 23641 0.0 5136 pipe_w egrep (nginx|PID) > > >> > > >> I also have configured: > > >> > > >> client_header_timeout 3m; > > >> client_body_timeout 3m; > > >> send_timeout 3m; > > > > > > What gzip settings do you use ? > > > > gzip on; > > gzip_min_length 1100; > > gzip_buffers 4 8k; > > gzip_types image/jpg image/jpeg image/gif image/png text/plain > > text/xml application/xhtml+xml text/css application/xml image/svg+xml > > application/rss+xml application/atom_xml application/x-javascript > > application/x-httpd-php application/x-httpd-fastphp > > application/x-httpd-eruby text/html; > > gzip_comp_level 9; > > This is cause of memory and CPU consumption. You do not need to compress > already compressed jpegs/etc. If you serve images only, you should turn > gzip off at all (it's default). > > As to other MIME types: > > 1) there are no such types as > application/x-httpd-php > application/x-httpd-fastphp > application/x-httpd-eruby > they probably exist as internal MIME-types inside Apache, but they are > never showed to a client. > > 2) the following types as > application/xhtml+xml > application/rss+xml > application/atom_xml > > probably do not exist too. > > Keep the list as small as possible, because nginx iterates it sequenctally. image/jpg does not exist too. 
By default nginx in conf/mime.types uses text/xml for xml and rss, so application/xml is a duplicate. -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Thu Feb 21 01:38:50 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 23:38:50 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> Message-ID: <15c96e2332588b825ad050f08ddbf1f1@ruby-forum.com> Dave Cheney wrote: > You could try the gzip_static module in the development branch to How do I compile the gzip_static module into Nginx? -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Thu Feb 21 01:42:22 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 23:42:22 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <20080220222719.GP76459@rambler-co.ru> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> <20080220222208.GO76459@rambler-co.ru> <20080220222719.GP76459@rambler-co.ru> Message-ID: <96116dde2e34cd8a33ed161f92543a03@ruby-forum.com> Igor Sysoev wrote: > On Thu, Feb 21, 2008 at 01:22:08AM +0300, Igor Sysoev wrote: > >> > >> >> > gzip_buffers 4 8k; >> >> application/xhtml+xml >> application/rss+xml >> application/atom_xml >> >> probably do not exist either. >> >> Keep the list as small as possible, because nginx iterates it sequentially. > > image/jpg does not exist either. > > By default nginx in conf/mime.types uses text/xml for xml and rss, so > application/xml is a duplicate. I've trimmed down gzip to the files I serve off the image server, such as css and js, but no longer include the image entries.
My list is now: gzip_types text/css application/x-javascript; and gzip_comp_level 5; -- Posted via http://www.ruby-forum.com/. From is at rambler-co.ru Thu Feb 21 01:45:18 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 01:45:18 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <15c96e2332588b825ad050f08ddbf1f1@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> <15c96e2332588b825ad050f08ddbf1f1@ruby-forum.com> Message-ID: <20080220224518.GQ76459@rambler-co.ru> On Wed, Feb 20, 2008 at 11:38:50PM +0100, Todd HG wrote: > Dave Cheney wrote: > > You could try the gzip_static module in the development branch to > > How do I compilet the gzip_static into Nginx? http://wiki.codemongers.com/NginxHttpGzipStaticModule -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Thu Feb 21 01:47:29 2008 From: lists at ruby-forum.com (Todd HG) Date: Wed, 20 Feb 2008 23:47:29 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> Message-ID: eliott wrote: > You may try setting expires headers for your images, if they don't > change very often (or at all). I have added, but I'm not altogether sure if this is optimum: http { expires 24h; add_header Cache-Control private; server_tokens off; -- Posted via http://www.ruby-forum.com/. 
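Pulling the thread's advice together, the trimmed gzip setup discussed above amounts to something like the following sketch (not a drop-in config; the type list is just the one from this thread):

```nginx
# Sketch of the trimmed gzip setup discussed in this thread.
# No image/* types: JPEG/GIF/PNG are already compressed.
gzip              on;
gzip_min_length   1100;    # skip tiny responses
gzip_buffers      4 8k;
gzip_comp_level   5;       # little benefit observed above ~4
gzip_types        text/css application/x-javascript;
# text/html is always compressed when gzip is on, so it need not be listed.
```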
From is at rambler-co.ru Thu Feb 21 01:49:01 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 01:49:01 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <96116dde2e34cd8a33ed161f92543a03@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220204251.GI76459@rambler-co.ru> <7dbb925fea95c29dede9accc861e82a0@ruby-forum.com> <20080220213452.GK76459@rambler-co.ru> <62a7101ee8bf005dd224cc9813800bff@ruby-forum.com> <20080220222208.GO76459@rambler-co.ru> <20080220222719.GP76459@rambler-co.ru> <96116dde2e34cd8a33ed161f92543a03@ruby-forum.com> Message-ID: <20080220224901.GR76459@rambler-co.ru> On Wed, Feb 20, 2008 at 11:42:22PM +0100, Todd HG wrote: > Igor Sysoev wrote: > > On Thu, Feb 21, 2008 at 01:22:08AM +0300, Igor Sysoev wrote: > > > >> > gzip_buffers 4 8k; > >> > >> application/xhtml+xml > >> application/rss+xml > >> application/atom_xml > >> > >> probably do not exist too. > >> > >> Keep the list as small as possible, because nginx iterates it sequenctally. > > > > image/jpg does not exist too. > > > > By default nginx in conf/mime.type uses text/xml for xml and rss, so > > application/xml is duplicate. > > I've trimmed down gzip to files I serve off the image server such as css > and js, but no longer include the image entries. My list is now: > > gzip_types text/css application/x-javascript; OK. > and > > gzip_comp_level 5; Actually "gzip_comp_level 1" is enough (this is default). But if you will use gzip_static_module, you should compress the files with "gzip -9". 
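For reference, the gzip_static approach Igor mentions amounts to a setup along these lines (the location and paths are hypothetical; the module must be built in with --with-http_gzip_static_module):

```nginx
# Hypothetical sketch: serve pre-compressed files with gzip_static.
# For a request to /static/site.css, nginx sends /var/www/static/site.css.gz
# if it exists and the client accepts gzip.
location /static/ {
    root        /var/www;
    gzip_static on;
}
```

The .gz files are produced ahead of time, e.g. `gzip -9c site.css > site.css.gz`, keeping the uncompressed original for clients that do not accept gzip.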
-- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Thu Feb 21 01:52:01 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 01:52:01 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> Message-ID: <20080220225201.GS76459@rambler-co.ru> On Wed, Feb 20, 2008 at 11:47:29PM +0100, Todd HG wrote: > eliott wrote: > > You may try setting expires headers for your images, if they don't > > change very often (or at all). > > I have added, but I'm not altogether sure if this is optimum: > > http { > expires 24h; > add_header Cache-Control private; > server_tokens off; If the images are public (not per client) you should set just expires 1M; to allow long caching in transit proxy caches. "server_tokens off" simply turns off the nginx version in the "Server" header: "Server: nginx" vs "Server: nginx/0.6.26". -- Igor Sysoev http://sysoev.ru/en/ From dave at cheney.net Thu Feb 21 01:55:09 2008 From: dave at cheney.net (Dave Cheney) Date: Thu, 21 Feb 2008 09:55:09 +1100 Subject: excessive RAM consumption - memory leak In-Reply-To: References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> Message-ID: Why are you setting the cache control to private on a public asset image?
On 21/02/2008, at 9:47 AM, Todd HG wrote: > http { > expires 24h; > add_header Cache-Control private; > server_tokens off; From eliott at cactuswax.net Thu Feb 21 01:56:50 2008 From: eliott at cactuswax.net (eliott) Date: Wed, 20 Feb 2008 14:56:50 -0800 Subject: excessive RAM consumption - memory leak In-Reply-To: <95316dd691d630244b27935945079d8e@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <95316dd691d630244b27935945079d8e@ruby-forum.com> Message-ID: <428d921d0802201456w7b30cca3xa25c6261a4be9563@mail.gmail.com> On 2/20/08, Todd HG wrote: > eliott wrote: > > You may try setting expires headers for your images, if they don't > > change very often (or at all). > > > Do you have an example for setting the expire header. In the code > example it shows: > > expires 24h; > expires 0; > expires -1; > expires epoch; > add_header Cache-Control private; > server_tokens off; > > I'm not sure if I should be using only: > > expires 24h; > add_header Cache-Control private; > server_tokens off; > > or if I also need: > > expires 0; > expires -1; > expires epoch; The expires stanza tells how long the client can cache the object, or tells a proxy how long it can cache it. So you only need one of them (or one per location match stanzas). Cache control private tells a proxy, for instance, that it should not cache the object, but that a browser on an endpoint workstation can cache it. http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1 If you wanted to let everyone cache an image forever (like maybe an image you never expect to change), you can set expires to max. This may reduce _some_ traffic to you, as it will allow for greater cacheability downstream. Take into consideration that clients may not fetch new objects if they change though.. I would put it into a stanza based on filetype, like specific to images that will never change. Hope some of that info helps. 
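As a concrete sketch of the per-filetype stanza eliott describes (paths and extensions are illustrative, not from the thread):

```nginx
# Long-lived caching for images that never change; browsers and
# intermediate proxies may cache them until the far-future Expires date.
location ~* \.(jpg|jpeg|gif|png)$ {
    root    /var/www/images;
    expires max;
}

# Everything else: cache for a day.
location / {
    root    /var/www;
    expires 24h;
}
```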
From lists at ruby-forum.com Thu Feb 21 03:22:10 2008 From: lists at ruby-forum.com (Todd HG) Date: Thu, 21 Feb 2008 01:22:10 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <20080220225201.GS76459@rambler-co.ru> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> Message-ID: <5a6f654a63690bd03000e2e2d300340d@ruby-forum.com> Igor Sysoev wrote: > On Wed, Feb 20, 2008 at 11:47:29PM +0100, Todd HG wrote: > I've set the cache to: expires 1M; add_header Cache-Control public; I've also added the following to the config: connection_pool_size 256; client_header_buffer_size 1k; large_client_header_buffers 4 4k; request_pool_size 4k; output_buffers 4 32k; postpone_output 1460; ignore_invalid_headers on; It appears the: connection_pool_size 256; ... caused a big drop in my RAM usage. Without this set, does the server just keep pooling connections? -- Posted via http://www.ruby-forum.com/. From rkmr.em at gmail.com Thu Feb 21 05:11:09 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Wed, 20 Feb 2008 18:11:09 -0800 Subject: help with location In-Reply-To: <72D27B51-8754-4A5B-8ADC-430AFD0F8045@mojiti.com> References: <72D27B51-8754-4A5B-8ADC-430AFD0F8045@mojiti.com> Message-ID: i tried to write a regex location that will work for any url containing 'upload', but it is hard. how do i write a location regex for a url that contains the word 'upload'? please help. thanks On Tue, Feb 19, 2008 at 6:38 PM, Eden Li wrote: > Your answer is here: http://wiki.codemongers.com/NginxHttpCoreModule#location > > > > On Feb 20, 2008, at 9:57 AM, rkmr.em at gmail.com wrote: > > > i want > > /userupload > > /userwebupload > > /uploadvideo > > etc, > > any url with 'upload' in it to get served by a specific backend, i can > > use fastcgi_pass for the backend part.. how do i specify location for > > this ? > > something like this?
> > location / *upload* { > > root /home/mark/work/pop; > > fastcgi_pass backend_pop; > > > > > > thanks a lot > > > > > From lists at ruby-forum.com Thu Feb 21 05:49:41 2008 From: lists at ruby-forum.com (Todd HG) Date: Thu, 21 Feb 2008 03:49:41 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <20080220225201.GS76459@rambler-co.ru> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> Message-ID: <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> Igor Sysoev wrote: > On Wed, Feb 20, 2008 at 11:47:29PM +0100, Is there anywhere I could read more about how Nginx uses connection_pool_size, stores connections, if it is in a cache in RAM or hard drive, and what else might be stored in RAM by Nginx? -- Posted via http://www.ruby-forum.com/. From martin.schoettler at email.de Thu Feb 21 08:16:45 2008 From: martin.schoettler at email.de (=?UTF-8?B?TWFydGluIFNjaMO2dHRsZXI=?=) Date: Thu, 21 Feb 2008 06:16:45 +0100 Subject: Certificate issue on two IP-addresses with same port In-Reply-To: <1203505095.13326.11.camel@portableevil.develix.com> References: <47BC00F5.4020003@email.de> <1203505095.13326.11.camel@portableevil.develix.com> Message-ID: <47BD093D.90703@email.de> Thank you Cliff. That did it! > listen 85.214.47.37:443; ... 
> listen 85.214.56.139:443; Best regards Martin ______________________________ Martin Schoettler Herzogstandweg 21 D-82431 Kochel am See fon +49-(0) 88 51 - 92 31 54 fax +49-(0) 88 51 - 92 31 56 gsm +49-(0) 163 - 44 33 621 private +49-(0) 8851 - 7581 Skype: martin.schoettler ______________________________ From redduck666 at gmail.com Thu Feb 21 09:08:51 2008 From: redduck666 at gmail.com (Almir Karic) Date: Thu, 21 Feb 2008 07:08:51 +0100 Subject: help with location In-Reply-To: References: <72D27B51-8754-4A5B-8ADC-430AFD0F8045@mojiti.com> Message-ID: On Thu, Feb 21, 2008 at 3:11 AM, rkmr.em at gmail.com wrote: > i tried to write a reg-ex for location that will for work for any url > containing' upload... it is hard > how to write a regex for location for url that contains the word: upload > pl. help > thanks .*upload.* -- error: one bad user found in front of screen From gabor at nekomancer.net Thu Feb 21 10:30:01 2008 From: gabor at nekomancer.net (=?ISO-8859-1?Q?G=E1bor_Farkas?=) Date: Thu, 21 Feb 2008 08:30:01 +0100 Subject: user with ssl-proxy, nginx problem Message-ID: <47BD2879.9060808@nekomancer.net> hi, i'm facing a strange problem here, maybe someone had experience with this before... i have an nginx server, which server some files using HTTPS, and http-basic-auth. because nginx had ssl-problems in the past, in the past we had this config: - we had an apache server, listening on port 443, that got the requests, did the ssl-handling, and then proxied the request to nginx. so nginx did not do any SSL-stuff. - and everything worked fine. but now we switched to an nginx-only solution, and starting to have problems with one user, who uses a https proxy. 
in the past, for him the file-download worked this way: - the java app requested the file, got a HTTP 401 (Unauthorized) response - so the java app requested the file again, but now it sent the necessary username/password, and got the file (and a HTTP 200) - and all was ok after we switched to the nginx-only solution, this is what happens: - the java app requests the file, gets a http 400 (not 401) - so the java app retries, and gets the http 400 again - this happens 5 times, and then the java app gives up in the nginx access log, i see 5 http 401 (Unauthorized) accesses, and i see that the client did not send the username/password. also, an additional detail: using a web-browser, the user is able to download from the server fine, even when using the https proxy. with this info, i would usually blame the java-app, but the strange thing is, that in the past with the apache+nginx config, it worked fine. i tried to migrate every setting from the apache-server to the nginx-server, even the ssl_cipher settings, but it did not help. any ideas? thanks, gabor From is at rambler-co.ru Thu Feb 21 10:36:58 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 10:36:58 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <5a6f654a63690bd03000e2e2d300340d@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <5a6f654a63690bd03000e2e2d300340d@ruby-forum.com> Message-ID: <20080221073658.GA830@rambler-co.ru> On Thu, Feb 21, 2008 at 01:22:10AM +0100, Todd HG wrote: > Igor Sysoev wrote: > > On Wed, Feb 20, 2008 at 11:47:29PM +0100, Todd HG wrote: > > > > I've set the cache to: > > expires 1M; > add_header Cache-Control public; You may omit "add_header Cache-Control public" at all. 
> I've also added the following to the config: > > connection_pool_size 256; > client_header_buffer_size 1k; > large_client_header_buffers 4 4k; > request_pool_size 4k; > output_buffers 4 32k; > postpone_output 1460; > > ignore_invalid_headers on; > > It appears the: > > connection_pool_size 256; > > ... caused a big drop in my RAM usage. This is the default setting. > Without this set, does the server just keep pooling connections? connection_pool_size sets the initial size of the per-connection memory pool. If a connection needs more memory, nginx allocates it in chunks of this size. You may omit these directives and use the default values of connection_pool_size client_header_buffer_size large_client_header_buffers request_pool_size output_buffers postpone_output ignore_invalid_headers -- Igor Sysoev http://sysoev.ru/en/ From jsquintz at gmail.com Thu Feb 21 11:51:31 2008 From: jsquintz at gmail.com (Jamie Quint) Date: Thu, 21 Feb 2008 00:51:31 -0800 Subject: error_page on 0.5.35 Message-ID: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> I have this under http in my nginx.conf file: error_page 404 /var/www/apps/myapp/templates/404.html; error_page 502 503 504 /var/www/apps/myapp/templates/500.html; I can access these at myapp.com/500.html and myapp.com/400.html but when I get a 502 gateway error instead of displaying the 500.html page I get the default nginx page. Any suggestions? Best, Jamie -------------- next part -------------- An HTML attachment was scrubbed...
URL: From is at rambler-co.ru Thu Feb 21 12:05:35 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 12:05:35 +0300 Subject: error_page on 0.5.35 In-Reply-To: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> References: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> Message-ID: <20080221090535.GF830@rambler-co.ru> On Thu, Feb 21, 2008 at 12:51:31AM -0800, Jamie Quint wrote: > I have this under http in my nginx.conf file: > error_page 404 /var/www/apps/myapp/templates/404.html; > error_page 502 503 504 /var/www/apps/myapp/templates/500.html; > > I can access these at myapp.com/500.html and myapp.com/400.html but when I > get a 502 gateway error instead of displaying the 500.html page I get the > default nginx page. Any suggestions? Do you have any error_page directives at server or location levels ? They override http level error_page's. -- Igor Sysoev http://sysoev.ru/en/ From roxis at list.ru Thu Feb 21 12:19:31 2008 From: roxis at list.ru (Roxis) Date: Thu, 21 Feb 2008 10:19:31 +0100 Subject: error_page on 0.5.35 In-Reply-To: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> References: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> Message-ID: <200802211019.31544.roxis@list.ru> On Thursday 21 February 2008, Jamie Quint wrote: > I have this under http in my nginx.conf file: > error_page 404 /var/www/apps/myapp/templates/404.html; > error_page 502 503 504 /var/www/apps/myapp/templates/500.html; > > I can access these at myapp.com/500.html and myapp.com/400.html but when I > get a 502 gateway error instead of displaying the 500.html page I get the > default nginx page. Any suggestions? 
you should specify URI or URL, but not full path error_page 404 /404.html; error_page 502 503 504 /500.html; From jsquintz at gmail.com Thu Feb 21 12:32:43 2008 From: jsquintz at gmail.com (Jamie Quint) Date: Thu, 21 Feb 2008 01:32:43 -0800 Subject: error_page on 0.5.35 In-Reply-To: <200802211019.31544.roxis@list.ru> References: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> <200802211019.31544.roxis@list.ru> Message-ID: <8e5825090802210132v47ff4b4ahdad945428f367880@mail.gmail.com> Igor: I do not, I only have those two. Roxis: I cant do this easily since I can't use a location directive because this is at the http level rather than the server level. On Thu, Feb 21, 2008 at 1:19 AM, Roxis wrote: > On Thursday 21 February 2008, Jamie Quint wrote: > > I have this under http in my nginx.conf file: > > error_page 404 /var/www/apps/myapp/templates/404.html; > > error_page 502 503 504 /var/www/apps/myapp/templates/500.html; > > > > I can access these at myapp.com/500.html and myapp.com/400.html but when > I > > get a 502 gateway error instead of displaying the 500.html page I get > the > > default nginx page. Any suggestions? > > you should specify URI or URL, but not full path > > error_page 404 /404.html; > error_page 502 503 504 /500.html; > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roxis at list.ru Thu Feb 21 12:46:56 2008 From: roxis at list.ru (Roxis) Date: Thu, 21 Feb 2008 10:46:56 +0100 Subject: error_page on 0.5.35 In-Reply-To: <8e5825090802210132v47ff4b4ahdad945428f367880@mail.gmail.com> References: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> <200802211019.31544.roxis@list.ru> <8e5825090802210132v47ff4b4ahdad945428f367880@mail.gmail.com> Message-ID: <200802211046.56829.roxis@list.ru> On Thursday 21 February 2008, Jamie Quint wrote: > I cant do this easily since I can't use a location directive because > this is at the http level rather than the server level. 
Create a file with this configuration: location = /404.html { root /var/www/apps/myapp/templates; } location = /500.html { root /var/www/apps/myapp/templates; } and include it in every server directory From is at rambler-co.ru Thu Feb 21 12:49:08 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 12:49:08 +0300 Subject: error_page on 0.5.35 In-Reply-To: <8e5825090802210132v47ff4b4ahdad945428f367880@mail.gmail.com> References: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> <200802211019.31544.roxis@list.ru> <8e5825090802210132v47ff4b4ahdad945428f367880@mail.gmail.com> Message-ID: <20080221094908.GG830@rambler-co.ru> On Thu, Feb 21, 2008 at 01:32:43AM -0800, Jamie Quint wrote: > Igor: I do not, I only have those two. > Roxis: I cant do this easily since I can't use a location directive because > this is at the http level rather than the server level. Roxis is right: you should set URI, but not file path. http { error_page 404 /404.html; error_page 502 503 504 /500.html; server { location = /404.html { root /var/www/apps/myapp/templates; } location = /500.html { root /var/www/apps/myapp/templates; } > On Thu, Feb 21, 2008 at 1:19 AM, Roxis wrote: > > > On Thursday 21 February 2008, Jamie Quint wrote: > > > I have this under http in my nginx.conf file: > > > error_page 404 /var/www/apps/myapp/templates/404.html; > > > error_page 502 503 504 /var/www/apps/myapp/templates/500.html; > > > > > > I can access these at myapp.com/500.html and myapp.com/400.html but when > > I > > > get a 502 gateway error instead of displaying the 500.html page I get > > the > > > default nginx page. Any suggestions? 
> > > > you should specify URI or URL, but not full path > > > > error_page 404 /404.html; > > error_page 502 503 504 /500.html; -- Igor Sysoev http://sysoev.ru/en/ From jsquintz at gmail.com Thu Feb 21 13:15:31 2008 From: jsquintz at gmail.com (Jamie Quint) Date: Thu, 21 Feb 2008 02:15:31 -0800 Subject: error_page on 0.5.35 In-Reply-To: <20080221094908.GG830@rambler-co.ru> References: <8e5825090802210051n24385643i2feb645d7cdecf4f@mail.gmail.com> <200802211019.31544.roxis@list.ru> <8e5825090802210132v47ff4b4ahdad945428f367880@mail.gmail.com> <20080221094908.GG830@rambler-co.ru> Message-ID: <8e5825090802210215x1360ae5fuc07cfec39c335790@mail.gmail.com> I'll do that, thanks. I almost did it that way initially but was trying to avoid the duplication :) Best, Jamie On Thu, Feb 21, 2008 at 1:49 AM, Igor Sysoev wrote: > On Thu, Feb 21, 2008 at 01:32:43AM -0800, Jamie Quint wrote: > > > Igor: I do not, I only have those two. > > Roxis: I cant do this easily since I can't use a location directive > because > > this is at the http level rather than the server level. > > Roxis is right: you should set URI, but not file path. > > http { > > error_page 404 /404.html; > error_page 502 503 504 /500.html; > > server { > > location = /404.html { root /var/www/apps/myapp/templates; } > location = /500.html { root /var/www/apps/myapp/templates; } > > > > On Thu, Feb 21, 2008 at 1:19 AM, Roxis wrote: > > > > > On Thursday 21 February 2008, Jamie Quint wrote: > > > > I have this under http in my nginx.conf file: > > > > error_page 404 > /var/www/apps/myapp/templates/404.html; > > > > error_page 502 503 504 > /var/www/apps/myapp/templates/500.html; > > > > > > > > I can access these at myapp.com/500.html and myapp.com/400.html but > when > > > I > > > > get a 502 gateway error instead of displaying the 500.html page I > get > > > the > > > > default nginx page. Any suggestions? 
> > > > > > you should specify URI or URL, but not full path > > > > > > error_page 404 /404.html; > > > error_page 502 503 504 /500.html; > > > -- > Igor Sysoev > http://sysoev.ru/en/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From is at rambler-co.ru Thu Feb 21 14:57:23 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 14:57:23 +0300 Subject: user with ssl-proxy, nginx problem In-Reply-To: <47BD2879.9060808@nekomancer.net> References: <47BD2879.9060808@nekomancer.net> Message-ID: <20080221115723.GA5544@rambler-co.ru> On Thu, Feb 21, 2008 at 08:30:01AM +0100, Gábor Farkas wrote: > i'm facing a strange problem here, maybe someone had > experience with this before... > > i have an nginx server, which server some files using HTTPS, > and http-basic-auth. > > because nginx had ssl-problems in the past, in the past we had this config: > > - we had an apache server, listening on port 443, that got the requests, > did the ssl-handling, and then proxied the request to nginx. so nginx > did not do any SSL-stuff. > - and everything worked fine. > > but now we switched to an nginx-only solution, and starting to have > problems with one user, who uses a https proxy. > > in the past, for him the file-download worked this way: > > - the java app requested the file, got a HTTP 401 (Unauthorized) response > - so the java app requested the file again, but now it sent the > necessary username/password, and got the file (and a HTTP 200) > - and all was ok > > after we switched to the nginx-only solution, this is what happens: > > - the java app requests the file, gets a http 400 (not 401) > - so the java app retries, and gets the http 400 again > - this happens 5 times, and then the java app gives up > > in the nginx access log, i see 5 http 401 (Unauthorized) accesses, and i > see that the client did not send the username/password. 
> > > also, an additional detail: > > using a web-browser, the user is able to download from the server fine, > even when using the https proxy. > > with this info, i would usually blame the java-app, but the strange > thing is, that in the past with the apache+nginx config, it worked fine. > > i tried to migrate every setting from the apache-server to the nginx-server, > even the ssl_cipher settings, but it did not help. I need a debug log. You may send it privately. Note, that username/password in log is in plain text encoded by base64, so choose some dummy values. -- Igor Sysoev http://sysoev.ru/en/ From gabor at nekomancer.net Thu Feb 21 16:50:48 2008 From: gabor at nekomancer.net (Gábor Farkas) Date: Thu, 21 Feb 2008 14:50:48 +0100 Subject: user with ssl-proxy, nginx problem In-Reply-To: <20080221115723.GA5544@rambler-co.ru> References: <47BD2879.9060808@nekomancer.net> <20080221115723.GA5544@rambler-co.ru> Message-ID: <47BD81B8.9090207@nekomancer.net> Igor Sysoev wrote: > On Thu, Feb 21, 2008 at 08:30:01AM +0100, Gábor Farkas wrote: > >> >> i have an nginx server, which server some files using HTTPS, >> and http-basic-auth. >> >> - the java app requests the file, gets a http 400 (not 401) >> - so the java app retries, and gets the http 400 again >> - this happens 5 times, and then the java app gives up >> >> in the nginx access log, i see 5 http 401 (Unauthorized) accesses, and i >> see that the client did not send the username/password. > > I need a debug log. You may send it privately. > Note, that username/password in log is in plain text encoded by base64, > so choose some dummy values. is there a way to tell nginx to only "debug log" connections from certain IPs? 
thanks, gabor From mdounin at mdounin.ru Thu Feb 21 17:11:56 2008 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 21 Feb 2008 17:11:56 +0300 Subject: user with ssl-proxy, nginx problem In-Reply-To: <47BD81B8.9090207@nekomancer.net> References: <47BD2879.9060808@nekomancer.net> <20080221115723.GA5544@rambler-co.ru> <47BD81B8.9090207@nekomancer.net> Message-ID: <20080221141156.GB74878@mdounin.ru> Hello! On Thu, Feb 21, 2008 at 02:50:48PM +0100, Gábor Farkas wrote: > Igor Sysoev wrote: >> On Thu, Feb 21, 2008 at 08:30:01AM +0100, Gábor Farkas wrote: >> >>> >>> i have an nginx server, which server some files using HTTPS, >>> and http-basic-auth. >>> >>> - the java app requests the file, gets a http 400 (not 401) >>> - so the java app retries, and gets the http 400 again >>> - this happens 5 times, and then the java app gives up >>> >>> in the nginx access log, i see 5 http 401 (Unauthorized) accesses, and i >>> see that the client did not send the username/password. >> >> I need a debug log. You may send it privately. >> Note, that username/password in log is in plain text encoded by base64, >> so choose some dummy values. > > is there a way to tell nginx to only "debug log" connections from certain > IPs? events { debug_connection 1.2.3.4; ... } Instead of IP you may use CIDR here, e.g. 192.168.0.0/16. Maxim Dounin From is at rambler-co.ru Thu Feb 21 17:14:04 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 17:14:04 +0300 Subject: user with ssl-proxy, nginx problem In-Reply-To: <47BD81B8.9090207@nekomancer.net> References: <47BD2879.9060808@nekomancer.net> <20080221115723.GA5544@rambler-co.ru> <47BD81B8.9090207@nekomancer.net> Message-ID: <20080221141404.GA6340@rambler-co.ru> On Thu, Feb 21, 2008 at 02:50:48PM +0100, Gábor Farkas wrote: > Igor Sysoev wrote: > >On Thu, Feb 21, 2008 at 08:30:01AM +0100, Gábor Farkas wrote: > > > >> > >>i have an nginx server, which server some files using HTTPS, > >>and http-basic-auth. 
> >> > >>- the java app requests the file, gets a http 400 (not 401) > >>- so the java app retries, and gets the http 400 again > >>- this happens 5 times, and then the java app gives up > >> > >>in the nginx access log, i see 5 http 401 (Unauthorized) accesses, and i > >>see that the client did not send the username/password. > > > >I need a debug log. You may send it privately. > >Note, that username/password in log is in plain text encoded by base64, > >so choose some dummy values. > > is there a way to tell nginx to only "debug log" connections from > certain IPs? ./configure --with-debug events { debug_connection 192.168.1.0/32; debug_connection 10.1.1.0/16; } -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Thu Feb 21 18:22:27 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 18:22:27 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> Message-ID: <20080221152227.GD6340@rambler-co.ru> On Thu, Feb 21, 2008 at 03:49:41AM +0100, Todd HG wrote: > Igor Sysoev wrote: > > On Wed, Feb 20, 2008 at 11:47:29PM +0100, > > Is there anywhere I could read more about how Nginx uses > connection_pool_size, stores connections, if it is in a cache in RAM or > hard drive, and what else might be stored in RAM by Nginx? All that you need is to disable gzipping images. It's enough. Other default settings do not allow workers to grow up. -- Igor Sysoev http://sysoev.ru/en/ From rkmr.em at gmail.com Thu Feb 21 20:25:08 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Thu, 21 Feb 2008 09:25:08 -0800 Subject: errors are not getting loged Message-ID: this is my virtual server configuration. 
the access log is working, but the error log is empty and errors are not getting logged how to fix this? thanks server { server_name XX.YY.com; listen 8070; access_log logs/access_app1.log; error_log logs/error_app1.log; location /nginx_status { stub_status on; access_log off; } location /static { root /home/mark/work/luvgifts; } location / { root /home/mark/work/luvgifts; fastcgi_pass backend_luvgifts; include /home/mark/work/infrastructure/nginx_fastcgi.conf; } From lists at ruby-forum.com Thu Feb 21 21:17:40 2008 From: lists at ruby-forum.com (Vlad Ro) Date: Thu, 21 Feb 2008 19:17:40 +0100 Subject: 400 errors caused by loadbalancer In-Reply-To: References: Message-ID: <34e8ff70034860aee265f1e524444d03@ruby-forum.com> Most likely due to your bigip's "monitor" for that service (port 80), which at its most generic is a simple TCP/IP-level connect with no HTTP request behind it. You must make sure your monitor is a http monitor that does a "GET /" instead of a simple connect if you don't want to see HTTP 400. See the bigip reference for examples. V. Jodok Batlogg wrote: > hi, > > my loadbalancer (big-ip) causes 400 errors in the logfile. if found > http://wiki.codemongers.com/HWLoadbalancerCheckErrors > but it doesn't work for me. any idea? > > thanks > > jodok > > here is my config: > > http { > ... > geo $lb { > default 0; > 10.228.22.225/32 1; # o2lb01 > 10.228.22.226/32 1; # o2lb01-1 > 10.228.22.227/32 1; # o2lb01-2 > } > ... > server { > ... > error_page 400 /400; > location = '/400' { > if ($lb) { access_log off; } > return 400; > } > ... 
> } > > my log-file: > > 10.228.22.226 - - [27/Jan/2008:22:01:23 +0100] "-" 400 0 "-" "-" > 10.228.22.227 - - [27/Jan/2008:22:01:25 +0100] "-" 400 0 "-" "-" > 10.228.22.226 - - [27/Jan/2008:22:01:28 +0100] "-" 400 0 "-" "-" > 10.228.22.227 - - [27/Jan/2008:22:01:30 +0100] "-" 400 0 "-" "-" > 10.228.22.226 - - [27/Jan/2008:22:01:33 +0100] "-" 400 0 "-" "-" > 10.228.22.227 - - [27/Jan/2008:22:01:35 +0100] "-" 400 0 "-" "-" -- Posted via http://www.ruby-forum.com/. From redduck666 at gmail.com Thu Feb 21 21:19:31 2008 From: redduck666 at gmail.com (Almir Karic) Date: Thu, 21 Feb 2008 19:19:31 +0100 Subject: weird redirect Message-ID: location /~redduck666 { alias /home/redduck666/static_html; } this is what i have, and it works as expected. than i add: location ~ .ogg$ { access_log /var/log/nginx/ogg.log; } now, when i try to access /~redduck666/.ogg nginx behaves weirdly: 2008/02/21 19:14:49 [error] 10444#0: *1985762 open() "/usr/html/~redduck666/file.ogg" failed (2: No such file or directory), client: 89.142.54.200, server: static.kiberpipa.org, request: "GET /~redduck666/file.ogg HTTP/1.1", host: "static.kiberpipa.org", referrer: "http://static.kiberpipa.org/~redduck666/" i have NO idea where it got the /usr/html part, it is not mentioned anywhere in my config. any pointers on what i am doing wrong? -- error: one bad user found in front of screen From roxis at list.ru Thu Feb 21 21:28:07 2008 From: roxis at list.ru (Roxis) Date: Thu, 21 Feb 2008 19:28:07 +0100 Subject: weird redirect In-Reply-To: References: Message-ID: <200802211928.08024.roxis@list.ru> On Thursday 21 February 2008, Almir Karic wrote: > location /~redduck666 { > alias /home/redduck666/static_html; > } > > > this is what i have, and it works as expected. 
> > than i add: > > location ~ .ogg$ { > access_log /var/log/nginx/ogg.log; > } > > now, when i try to access /~redduck666/.ogg nginx behaves > weirdly: > > 2008/02/21 19:14:49 [error] 10444#0: *1985762 open() > "/usr/html/~redduck666/file.ogg" failed (2: No such file or > directory), client: 89.142.54.200, server: static.kiberpipa.org, > request: "GET /~redduck666/file.ogg HTTP/1.1", host: > "static.kiberpipa.org", referrer: > "http://static.kiberpipa.org/~redduck666/" > > > i have NO idea where it got the /usr/html part, it is not mentioned > anywhere in my config. > > any pointers on what i am doing wrong? first read how nginx uses location From roxis at list.ru Thu Feb 21 21:30:55 2008 From: roxis at list.ru (Roxis) Date: Thu, 21 Feb 2008 19:30:55 +0100 Subject: weird redirect In-Reply-To: References: Message-ID: <200802211930.55953.roxis@list.ru> On Thursday 21 February 2008, Almir Karic wrote: > location /~redduck666 { > alias /home/redduck666/static_html; > } > > > this is what i have, and it works as expected. 
first read how nginx uses location http://wiki.codemongers.com/NginxHttpCoreModule#location you probably need location like location ~ /~redduck666/.+\.ogg$ {alias /home/redduck666/static_html; access_log /var/log/nginx/ogg.log; } From lists at ruby-forum.com Thu Feb 21 22:17:01 2008 From: lists at ruby-forum.com (Todd HG) Date: Thu, 21 Feb 2008 20:17:01 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <20080221152227.GD6340@rambler-co.ru> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> Message-ID: <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> >> Igor Sysoev wrote: >> > On Wed, Feb 20, 2008 at 11:47:29PM +0100, >> >> Is there anywhere I could read more about how Nginx uses >> connection_pool_size, stores connections, if it is in a cache in RAM or >> hard drive, and what else might be stored in RAM by Nginx? > > All that you need is to disable gzipping images. It's enough. > Other default settings do not allow workers to grow up. Of course disabling gzip defeats the purpose of having gzip decrease bandwidth and increase site speed for readers. Right now I only have gzip handling a few million js and css files a day, in addition to tens of millions of images which are not gzipped, but the RAM usage just grows until it is completely consumed. By setting the gzip compression level to 1 the RAM consumption grows more slowly, but eventually eats all the RAM. It appears what might be needed is a setting to allow the total number of connections for gzip to be set before Nginx automatically kills and restarts a worker. This would be similar to the Apache MaxRequestsPerChild limit setting. There should be a way to set Nginx to kill and restart the worker process to free the RAM, and start again at zero for situations like mine. 
Without a solution I need to restart my server about every 24 hours, and this is a very robust server. -- Posted via http://www.ruby-forum.com/. From redduck666 at gmail.com Thu Feb 21 22:21:39 2008 From: redduck666 at gmail.com (Almir Karic) Date: Thu, 21 Feb 2008 20:21:39 +0100 Subject: weird redirect In-Reply-To: <200802211928.08024.roxis@list.ru> References: <200802211928.08024.roxis@list.ru> Message-ID: reading the docs helped (surprise, surprise), thanks :-) perhaps it would be a good idea to allow nginx to apply more than one configuration at a time? On Thu, Feb 21, 2008 at 7:28 PM, Roxis wrote: > On Thursday 21 February 2008, Almir Karic wrote: > > > > location /~redduck666 { > > alias /home/redduck666/static_html; > > } > > > > > > this is what i have, and it works as expected. > > > > than i add: > > > > location ~ .ogg$ { > > access_log /var/log/nginx/ogg.log; > > } > > > > now, when i try to access /~redduck666/.ogg nginx behaves > > weirdly: > > > > 2008/02/21 19:14:49 [error] 10444#0: *1985762 open() > > "/usr/html/~redduck666/file.ogg" failed (2: No such file or > > directory), client: 89.142.54.200, server: static.kiberpipa.org, > > request: "GET /~redduck666/file.ogg HTTP/1.1", host: > > "static.kiberpipa.org", referrer: > > "http://static.kiberpipa.org/~redduck666/" > > > > > > i have NO idea where it got the /usr/html part, it is not mentioned > > anywhere in my config. > > > > any pointers on what i am doing wrong? 
> > firs read how nginx uses location > > -- error: one bad user found in front of screen From kupokomapa at gmail.com Thu Feb 21 22:36:21 2008 From: kupokomapa at gmail.com (Kiril Angov) Date: Thu, 21 Feb 2008 14:36:21 -0500 Subject: excessive RAM consumption - memory leak In-Reply-To: <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> Message-ID: <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> I do not know how you restart nginx but you can send the control process "kill -HUP" and it will do exactly what you want, which is gracefully restart each worker process. You can have a script check for the memory usage and do that when you see it is getting high, or simply do that every 24 hours. Kiril On Thu, Feb 21, 2008 at 2:17 PM, Todd HG wrote: > >> Igor Sysoev wrote: > >> > On Wed, Feb 20, 2008 at 11:47:29PM +0100, > >> > >> Is there anywhere I could read more about how Nginx uses > >> connection_pool_size, stores connections, if it is in a cache in RAM or > >> hard drive, and what else might be stored in RAM by Nginx? > > > > All that you need is to disable gzipping images. It's enough. > > Other default settings do not allow workers to grow up. > > Of course disabling gzip defeats the purpose of having gzip decrease > bandwidth and increase site speed for readers. Right now I only have > gzip handling a few million js and css files a day, in addition to tens > of millions of images which are not gzipped, but the RAM usage just > grows until it is completely consumed. By setting the gzip compression > level to 1 the RAM consumption grows more slowly, but eventually eats > all the RAM. 
> > It appears what might be needed is a setting to allow the total number > of connections for gzip to be set before Nginx automatically kills and > restarts a worker. This would be similar to the Apache > MaxRequestsPerChild limit setting. There should be a way to set Nginx to > kill and restart the worker process to free the RAM, and start again at > zero for situations like mine. > > Without a solution I need to restart my server about every 24 hours, and > this is a very robust server. > > -- > Posted via http://www.ruby-forum.com/. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at wommm.nl Thu Feb 21 22:40:03 2008 From: nginx at wommm.nl (Martin Schut) Date: Thu, 21 Feb 2008 20:40:03 +0100 Subject: weird redirect In-Reply-To: References: <200802211928.08024.roxis@list.ru> Message-ID: > reading the docs helped (surprise, surprise), thanks :-) > > perhaps it would be a good idea to allow nginx to apply more than one > configuration at a time? That would create unreadable config files, I think. Consider: location /~redduck666 { alias /home/redduck666/static_html; } location ~ .ogg$ { access_log /var/log/nginx/ogg.log; alias /home/redduck666/oggs; } From which root should /~redduck666/.ogg be served? You probably will suggest the last location specified but this will become cumbersome when other config-files are included. The current solution is very clear. 
Regards, Martin From is at rambler-co.ru Thu Feb 21 22:47:07 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 21 Feb 2008 22:47:07 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> Message-ID: <20080221194707.GA11938@rambler-co.ru> On Thu, Feb 21, 2008 at 08:17:01PM +0100, Todd HG wrote: > >> Igor Sysoev wrote: > >> > On Wed, Feb 20, 2008 at 11:47:29PM +0100, > >> > >> Is there anywhere I could read more about how Nginx uses > >> connection_pool_size, stores connections, if it is in a cache in RAM or > >> hard drive, and what else might be stored in RAM by Nginx? > > > > All that you need is to disable gzipping images. It's enough. > > Other default settings do not allow workers to grow up. > > Of course disabling gzip defeats the purpose of having gzip decrease > bandwidth and increase site speed for readers. Right now I only have > gzip handling a few million js and css files a day, in addition to tens > of millions of images which are not gzipped, but the RAM usage just > grows until it is completely consumed. By setting the gzip compression > level to 1 the RAM consumption grows more slowly, but eventually eats > all the RAM. > > It appears what might be needed is a setting to allow the total number > of connections for gzip to be set before Nginx automatically kills and > restarts a worker. This would be similar to the Apache > MaxRequestsPerChild limit setting. There should be a way to set Nginx to > kill and restart the worker process to free the RAM, and start again at > zero for situations like mine. 
Could you show what does ps ax -o pid,ppid,%cpu,vsz,rss,wchan,command|egrep '(nginx|PID)' show when nginx grows up ? What OS do you use ? > Without a solution I need to restart my server about every 24 hours, and > this is a very robust server. It's really strange. I run all my sites unattended. The workers are restarted only for reconfiguration or online upgrade. For example, this nginx runs more than 2 days (static, SSI, gzipping, proxying) without any leaks (60-120M is stable state): >ps ax -o pid,ppid,%cpu,vsz,lstart,wchan,command|egrep '(nginx|PID)' PID PPID %CPU VSZ STARTED WCHAN COMMAND 1645 1 0.0 16520 Mon Feb 18 02:16:37 2008 pause nginx: master proces 66458 1645 24.9 78984 Tue Feb 19 18:10:20 2008 kqread nginx: worker proces Now it handles 22000 simultaneous connections: >fstat | grep 'nginx.*tcp' | awk '{print $3}' | sort | uniq -c 8 1645 22013 66458 16 hours per day it handles 1000-2000 requests per seconds. This is 60-100 millions per day. -- Igor Sysoev http://sysoev.ru/en/ From lists at ruby-forum.com Fri Feb 22 00:29:09 2008 From: lists at ruby-forum.com (Todd HG) Date: Thu, 21 Feb 2008 22:29:09 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> Message-ID: <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> Kiril Angov wrote: > I do not know how you restart nginx but you can send the control process > "kill -HUP" and it will do exactly what you want, which is gracefully > restart each worker process. 
You can have a script check for the memory > usage and do that when you see it is getting high, or simply do that > every > 24 hours. > > Kiril I do have a script that restarts Nginx already, however, only rebooting the machine actually clears the RAM. This is why I was curious to know how and what Nginx stores in the RAM. -- Posted via http://www.ruby-forum.com/. From eliott at cactuswax.net Fri Feb 22 00:37:25 2008 From: eliott at cactuswax.net (eliott) Date: Thu, 21 Feb 2008 13:37:25 -0800 Subject: excessive RAM consumption - memory leak In-Reply-To: <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> Message-ID: <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> > I do have a script that restarts Nginx already, however, only rebooting > the machine actually clears the RAM. This is why I was curious to know > how and what Nginx stores in the RAM. Wait.. are you talking about overall system ram, or ram used by nginx processes? If you mean overall system ram, you are probably just noticing that the OS (esp linux) has cached many of the files in the filesystem (IOcache) to speed up read operations. 
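[Editor's note] The page-cache behaviour described above can be made concrete. On Linux, memory held by the page cache is counted as "used" by the first line of `free`, but the kernel hands it back on demand. A minimal sketch of the arithmetic, using made-up numbers standing in for a `free -m` snapshot (none of these values come from the poster's machine):

```shell
# Hypothetical `free -m` snapshot, values in MB (illustrative only):
total=2048
used=2010
free=38
buffers=120
cached=1650

# Memory the kernel can reclaim from cache and give to applications:
reclaimable=$((free + buffers + cached))
echo "apparent free: ${free} MB"
echo "free plus reclaimable cache: ${reclaimable} MB"
```

The second figure, not the first, is what matters when judging whether a box is genuinely out of memory; the "-/+ buffers/cache" line of `free` reports the same quantity.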
From lists at ruby-forum.com Fri Feb 22 00:57:46 2008 From: lists at ruby-forum.com (Todd HG) Date: Thu, 21 Feb 2008 22:57:46 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <20080221194707.GA11938@rambler-co.ru> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <20080221194707.GA11938@rambler-co.ru> Message-ID: <015e6a8067eb83f4a8280855b7a90896@ruby-forum.com> Igor Sysoev wrote: > On Thu, Feb 21, 2008 at 08:17:01PM +0100, Todd HG wrote: > >> Of course disabling gzip defeats the purpose of having gzip decrease >> MaxRequestsPerChild limit setting. There should be a way to set Nginx to >> kill and restart the worker process to free the RAM, and start again at >> zero for situations like mine. > > Could you show what does > > ps ax -o pid,ppid,%cpu,vsz,rss,wchan,command|egrep '(nginx|PID)' > > show when nginx grows up ? > > What OS do you use ? > >> Without a solution I need to restart my server about every 24 hours, and >> this is a very robust server. > > It's really strange. I run all my sites unattended. The workers are > restarted > only for reconfiguration or online upgrade. For example, this nginx > runs more than 2 days (static, SSI, gzipping, proxying) without any > leaks (60-120M is stable state): > >>ps ax -o pid,ppid,%cpu,vsz,lstart,wchan,command|egrep '(nginx|PID)' > PID PPID %CPU VSZ STARTED WCHAN COMMAND > 1645 1 0.0 16520 Mon Feb 18 02:16:37 2008 pause nginx: master > proces > 66458 1645 24.9 78984 Tue Feb 19 18:10:20 2008 kqread nginx: worker > proces > > Now it handles 22000 simultaneous connections: > >>fstat | grep 'nginx.*tcp' | awk '{print $3}' | sort | uniq -c > 8 1645 > 22013 66458 > > 16 hours per day it handles 1000-2000 requests per seconds. > This is 60-100 millions per day. 
I agree this is a strange problem. I am running Nginx on Redhat Enterprise Server 4. I will post the output of the Nginx master and worker processes once the memory reaches its max again. That should be in about 24 hours. Just for reference I am posting my exact current configuration below, but of course I've replaced some values to keep them private: user nobody; worker_processes 2; # The worker_processes and worker_connections from the event sections allows you to calculate maxclients value: # max_clients = worker_processes * worker_connections pid /usr/local/nginx/logs/nginx.pid; events { worker_connections 12000; use epoll; } http { include /usr/local/nginx/conf/mime.types; default_type application/octet-stream; expires 1M; add_header Cache-Control must-revalidate; add_header Cache-Control public; server_tokens off; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; gzip off; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5 5; server_names_hash_bucket_size 128; server { listen ip-address:9000; server_name images.mydomain.com; error_page 404 http://www.mydomain.com/e404.php; location / { root /var/www/mydomain; expires 30d; valid_referers blocked mydomain.com; if ($invalid_referer) { # return 404; rewrite ^(.*)$ http://www.mydomain.com/; } deny ip-address; allow all; } } } -- Posted via http://www.ruby-forum.com/. 
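[Editor's note] One detail in the configuration above is worth double-checking, though it is a side observation and not a fix for the memory growth: `valid_referers blocked mydomain.com;` does not include `none`, so any request arriving without a Referer header at all (direct visits, some proxies) is treated as invalid and rewritten away. If that is unintended, a sketch of the usual form (domain names here mirror the poster's placeholders):

```nginx
location / {
    root /var/www/mydomain;
    expires 30d;
    # "none" accepts requests that carry no Referer header at all;
    # "blocked" accepts referrers stripped or masked by proxies/firewalls.
    valid_referers none blocked mydomain.com www.mydomain.com;
    if ($invalid_referer) {
        rewrite ^(.*)$ http://www.mydomain.com/;
    }
}
```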
From lists at ruby-forum.com Fri Feb 22 01:16:59 2008 From: lists at ruby-forum.com (Todd HG) Date: Thu, 21 Feb 2008 23:16:59 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> Message-ID: <137e0d906278b073224427bc03b08d1d@ruby-forum.com> eliott wrote: >> I do have a script that restarts Nginx already, however, only rebooting >> the machine actually clears the RAM. This is why I was curious to know >> how and what Nginx stores in the RAM. > > Wait.. are you talking about overall system ram, or ram used by nginx > processes? > If you mean overall system ram, you are probably just noticing that > the OS (esp linux) has cached many of the files in the filesystem > (IOcache) to speed up read operations. How would I stop Linux from filling up RAM with IOcache? How would I check to see if that is the case? I'm willing to consider all input. -- Posted via http://www.ruby-forum.com/. 
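[Editor's note] One way to separate "nginx is leaking" from "the kernel is caching" is to sum the resident set sizes of the nginx processes themselves and compare that total against what the system reports as used: if worker RSS stays flat while "used" climbs, the growth is page cache, not a leak. On a live box the pipeline would be along the lines of `ps ax -o rss,command | grep '[n]ginx'`; the sketch below runs against a canned sample, and the RSS numbers in it are invented for illustration:

```shell
# Canned `ps ax -o rss,command` output (RSS in KB; hypothetical values):
sample='1844 nginx: master process
9120 nginx: worker process
8976 nginx: worker process'

# Sum the RSS column to get the memory actually held by nginx.
echo "$sample" | awk '{ kb += $1 } END { printf "nginx RSS total: %d KB\n", kb }'
```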
From eliott at cactuswax.net Fri Feb 22 01:38:14 2008 From: eliott at cactuswax.net (eliott) Date: Thu, 21 Feb 2008 14:38:14 -0800 Subject: excessive RAM consumption - memory leak In-Reply-To: <137e0d906278b073224427bc03b08d1d@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> <137e0d906278b073224427bc03b08d1d@ruby-forum.com> Message-ID: <428d921d0802211438y4a7e3ff0g8e5adad0641b6949@mail.gmail.com> On 2/21/08, Todd HG wrote: > eliott wrote: > >> I do have a script that restarts Nginx already, however, only rebooting > >> the machine actually clears the RAM. This is why I was curious to know > >> how and what Nginx stores in the RAM. > > > > Wait.. are you talking about overall system ram, or ram used by nginx > > processes? > > If you mean overall system ram, you are probably just noticing that > > the OS (esp linux) has cached many of the files in the filesystem > > (IOcache) to speed up read operations. > > > How would I stop Linux from filling up RAM with IOcache? How would I > check to see if that is the case? I'm willing to consider all input. iocache is a good thing! Linux uses available ram.. no point having unused ram laying around when you can speed up things with it. If an application needs ram, the iocache is reduced to make room for it. Not sure if that is what you are experiencing though. Post the output of the ps command Igor listed, once you feel your system has been up long enough to show us what you mean by 'using lots of ram'. 
also do a 'free -m' From dave at cheney.net Fri Feb 22 04:54:13 2008 From: dave at cheney.net (Dave Cheney) Date: Fri, 22 Feb 2008 12:54:13 +1100 Subject: excessive RAM consumption - memory leak In-Reply-To: <137e0d906278b073224427bc03b08d1d@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> <137e0d906278b073224427bc03b08d1d@ruby-forum.com> Message-ID: Errr ... you _WANT_ the OS to cache stuff in ram otherwise you server will slow to a crawl as every disk access is uncached. On 22/02/2008, at 9:16 AM, Todd HG wrote: > How would I stop Linux from filling up RAM with IOcache? How would I > check to see if that is the case? I'm willing to consider all input. From lists at ruby-forum.com Fri Feb 22 06:02:43 2008 From: lists at ruby-forum.com (Todd HG) Date: Fri, 22 Feb 2008 04:02:43 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> <137e0d906278b073224427bc03b08d1d@ruby-forum.com> Message-ID: <94d65cc2e3435ac57d4f6c3292125385@ruby-forum.com> Dave Cheney wrote: > Errr ... 
you _WANT_ the OS to cache stuff in ram otherwise you server > will slow to a crawl as every disk access is uncached. Yes, you are correct, Dave. I thought perhaps eliott knew something about Red Hat 4 that I wasn't aware of. All my packages are up to date, and I haven't seen any existing bugs that might cause a memory leak. I've recorded the RAM usage of all running processes after the machine was rebooted, and I'll record the RAM usage again after the RAM fills up to see which process is in fact the culprit. I'll post the results in a day or so, or as soon as the RAM is full. -- Posted via http://www.ruby-forum.com/. From andika at agrakom.com Fri Feb 22 07:19:38 2008 From: andika at agrakom.com (dika) Date: Fri, 22 Feb 2008 11:19:38 +0700 Subject: 404 error on WPMU Message-ID: <47BE4D5A.2010109@agrakom.com> Hi team, I've installed nginx to host my WordPress MU. Everything is running well, but one thing doesn't work properly. When I use this: http://202.158.66.216/wp-admin/ I get "Error 404 - Not Found". But if I use http://202.158.66.216/wp-admin/index.php everything runs well. What should I do to make this work without adding /index.php?
here are my nginx.conf : ------- server { listen 80; server_name 202.158.66.216 ; error_log /var/log/nginx/error.lo; location ~* ^.+\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ { root /data/blog/wp; expires 30d; break; } location / { root /data/blog/wp; index index.html index.htm index.php; rewrite ^.*/files/(.*) /wp-content/blogs.php?file=$1; if (!-e $request_filename) { rewrite ^.+?(/wp-.*) $1 last; rewrite ^.+?(/.*\.php)$ $1 last; } if ($query_string !~ ".*s=.*") { rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; } if ($http_cookie !~ "^.*comment_author_.*$" ) { rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; } if ($http_cookie !~ "^.*wordpressuser.*$" ) { rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; } if ($http_cookie !~ "^.*wp-postpass_.*$" ) { rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html break; } error_page 404 = @tricky; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location @tricky { rewrite ^ /index.php last; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; include /opt/nginx/conf/fastcgi_params; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; include /opt/nginx/conf/fastcgi_params; } } -- Thanks for advice. *Andika* Indonesian -------------- next part -------------- An HTML attachment was scrubbed... URL: From kupokomapa at gmail.com Fri Feb 22 08:21:06 2008 From: kupokomapa at gmail.com (Kiril Angov) Date: Fri, 22 Feb 2008 00:21:06 -0500 Subject: 404 error on WPMU In-Reply-To: <47BE4D5A.2010109@agrakom.com> References: <47BE4D5A.2010109@agrakom.com> Message-ID: <13c357830802212121k59172406ge1b391f0887056c0@mail.gmail.com> # Look for existence of PHP index file. # Don't break here...just rewrite it. 
if (-f $request_filename/index.php) { rewrite (.*) $1/index.php; } On Thu, Feb 21, 2008 at 11:19 PM, dika wrote: > Hai Teams, > > I've installed NginX to host my Wordpress MU. > Everything running well, but one thing didn't works properly. > > When I use this : http://202.158.66.216/wp-admin/ > I got *Error 404 - Not Found. > > *But if I use : http://202.158.66.216/wp-admin/index.php > everything's running well. > > What should I do to make this run without adding /index.php ? > > here are my nginx.conf : > > ------- > server { > listen 80; > server_name 202.158.66.216 ; > error_log /var/log/nginx/error.lo; > location ~* > ^.+\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ > { > root /data/blog/wp; > expires 30d; > break; > } > > location / { > root /data/blog/wp; > index index.html index.htm index.php; > rewrite ^.*/files/(.*) /wp-content/blogs.php?file=$1; > > if (!-e $request_filename) { > rewrite ^.+?(/wp-.*) $1 last; > rewrite ^.+?(/.*\.php)$ $1 last; > } > > if ($query_string !~ ".*s=.*") { > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > } > > if ($http_cookie !~ "^.*comment_author_.*$" ) { > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > } > > if ($http_cookie !~ "^.*wordpressuser.*$" ) { > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > } > > if ($http_cookie !~ "^.*wp-postpass_.*$" ) { > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html > break; > } > > error_page 404 = @tricky; > } > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root html; > } > > location @tricky { > rewrite ^ /index.php last; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; > include /opt/nginx/conf/fastcgi_params; > } > > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME 
/data/blog/wp$fastcgi_script_name; > include /opt/nginx/conf/fastcgi_params; > } > } > > -- > > Thanks for advice. > > *Andika* > Indonesian > From redduck666 at gmail.com Fri Feb 22 08:36:43 2008 From: redduck666 at gmail.com (Almir Karic) Date: Fri, 22 Feb 2008 06:36:43 +0100 Subject: weird redirect In-Reply-To: References: <200802211928.08024.roxis@list.ru> Message-ID: On Thu, Feb 21, 2008 at 8:40 PM, Martin Schut wrote: > > reading the docs helped (surprise, surprise), thanks :-) > > > > perhaps it would be a good idea to allow nginx to apply more than one > > configuration at a time? > That would create unreadable config files, I think. > > Consider: > > > location /~redduck666 { > alias /home/redduck666/static_html; > } > > > location ~ .ogg$ { > access_log /var/log/nginx/ogg.log; > alias /home/redduck666/oggs; > } > > From which root should /~redduck666/.ogg be served? > You probably will usggest the last location specified but this will become > cumbersome when other config-files are included. OK, I see where you are coming from, and I have to agree that you have a valid point. However, saying that the configuration will become cumbersome is IMHO wrong; it *might* become cumbersome (depending on how you write it). On the other hand, it would give you benefits such as being able to match both on location (/some/thing) and on extension (.ogg in my example). -- error: one bad user found in front of screen From andika at agrakom.com Fri Feb 22 09:29:46 2008 From: andika at agrakom.com (dika) Date: Fri, 22 Feb 2008 13:29:46 +0700 Subject: 404 error on WPMU In-Reply-To: <13c357830802212121k59172406ge1b391f0887056c0@mail.gmail.com> References: <47BE4D5A.2010109@agrakom.com> <13c357830802212121k59172406ge1b391f0887056c0@mail.gmail.com> Message-ID: <47BE6BDA.4070705@agrakom.com> Thanks for your suggestion, sir, but unfortunately it doesn't work.
I still get 404 error alert. any advice ? -- anDika Kiril Angov wrote: > # Look for existence of PHP index file. > # Don't break here...just rewrite it. > if (-f $request_filename/index.php) { > rewrite (.*) $1/index.php; > } > > On Thu, Feb 21, 2008 at 11:19 PM, dika > wrote: > > Hai Teams, > > I've installed NginX to host my Wordpress MU. > Everything running well, but one thing didn't works properly. > > When I use this : http://202.158.66.216/wp-admin/ > I got *Error 404 - Not Found. > > *But if I use : http://202.158.66.216/wp-admin/index.php > everything's running well. > > What should I do to make this run without adding /index.php ? > > here are my nginx.conf : > > ------- > server { > listen 80; > server_name 202.158.66.216 ; > error_log /var/log/nginx/error.lo; > location ~* > ^.+\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ > { > root /data/blog/wp; > expires 30d; > break; > } > > location / { > root /data/blog/wp; > index index.html index.htm index.php; > rewrite ^.*/files/(.*) /wp-content/blogs.php?file=$1; > > if (!-e $request_filename) { > rewrite ^.+?(/wp-.*) $1 last; > rewrite ^.+?(/.*\.php)$ $1 last; > } > > if ($query_string !~ ".*s=.*") { > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > } > > if ($http_cookie !~ "^.*comment_author_.*$" ) { > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > } > > if ($http_cookie !~ "^.*wordpressuser.*$" ) { > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > } > > if ($http_cookie !~ "^.*wp-postpass_.*$" ) { > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html > break; > } > > error_page 404 = @tricky; > } > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root html; > } > > location @tricky { > rewrite ^ /index.php last; > fastcgi_pass 127.0.0.1:9000 ; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; > include 
/opt/nginx/conf/fastcgi_params; > } > > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9000 ; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; > include /opt/nginx/conf/fastcgi_params; > } > } > > -- > > Thanks for advice. > > *Andika* > Indonesian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From is at rambler-co.ru Fri Feb 22 10:41:50 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 22 Feb 2008 10:41:50 +0300 Subject: weird redirect In-Reply-To: References: <200802211928.08024.roxis@list.ru> Message-ID: <20080222074150.GA21417@rambler-co.ru> On Fri, Feb 22, 2008 at 06:36:43AM +0100, Almir Karic wrote: > On Thu, Feb 21, 2008 at 8:40 PM, Martin Schut wrote: > > > reading the docs helped (surprise, surprise), thanks :-) > > > > > > perhaps it would be a good idea to allow nginx to apply more than one > > > configuration at a time? > > That would create unreadable config files, I think. > > > > Consider: > > > > > > location /~redduck666 { > > alias /home/redduck666/static_html; > > } > > > > > > location ~ .ogg$ { > > access_log /var/log/nginx/ogg.log; > > alias /home/redduck666/oggs; > > } > > > > From which root should /~redduck666/.ogg be served? > > You probably will usggest the last location specified but this will become > > cumbersome when other config-files are included. > > ok, i see where are you coming from, and i have to agree that you have > a valid point, however saying that the configuration will become > cumbersome is IMHO wrong, it *might* become cumbersome (depending on > how you write it), on the other hand it would give you the benefits > such as being able to do something based on 'location' ( /some/thing ) > and based on extension (.ogg in my example) If it might, it will certainly will be. 
http://article.gmane.org/gmane.comp.web.nginx.english/2487/ -- Igor Sysoev http://sysoev.ru/en/ From bhoult at gmail.com Fri Feb 22 02:18:11 2008 From: bhoult at gmail.com (Brandon Hoult) Date: Thu, 21 Feb 2008 17:18:11 -0600 Subject: directory based virtual host proxy. Message-ID: <3bcff0250802211518r2e34967fs914540591814937a@mail.gmail.com> I would like to have several rails applications behind the same domain name. For example: my.domain.com/application_1 my.domain.com/application_2 my.domain.com/application_3 These then need to be directed to the appropriate mongrel server. My current config below would work fine if I had application1.domain.com, application2.domain.com etc. But I can't seem to find an example of how to use the same domain with different apps. Any hints would be appreciated. My curent config: ------------------------------------------------------------------------------- upstream rails { server 127.0.0.1:8050; server 127.0.0.1:8051; } #Rails App here server { listen 80; server_name rails.softwyre.com; root /var/www/rails/user_management/current/; index index.html index.htm; client_max_body_size 50M; access_log /var/log/nginx/localhost.access.log; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded_for $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect false; proxy_max_temp_file_size 0; location / { if (-f $request_filename) { break; } if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f $request_filename.html) { rewrite (.*) $1.html break; } if (!-f $request_filename) { proxy_pass http://rails; break; } } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /500.html; location = /500.html { root /var/www/rails/user_management/current/public; } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From y.georgiev at gmail.com Fri Feb 22 12:35:47 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Fri, 22 Feb 2008 11:35:47 +0200 Subject: excessive RAM consumption - memory leak In-Reply-To: <94d65cc2e3435ac57d4f6c3292125385@ruby-forum.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> <137e0d906278b073224427bc03b08d1d@ruby-forum.com> <94d65cc2e3435ac57d4f6c3292125385@ruby-forum.com> Message-ID: <4378145a0802220135r5404a3c7n391ecec9da307d62@mail.gmail.com> Please excuse my bad english... I use nginx version: nginx/0.5.35 built by gcc 4.1.1 (Gentoo 4.1.1-r3). Load average: 0.08, 0.06, 0.01. This server provide 10-12M/s images content. And my config is: user nginx nginx; worker_processes 8; error_log off; events { worker_connections 24576; use epoll; } http { include /etc/nginx/mime.types; default_type application/octet-stream; client_header_timeout 2m; client_body_timeout 2m; send_timeout 2m; connection_pool_size 1024; client_header_buffer_size 1k; large_client_header_buffers 4 4k; request_pool_size 4k; gzip off; output_buffers 1 32k; postpone_output 1460; sendfile on; tcp_nopush off; tcp_nodelay off; keepalive_timeout 75 20; ignore_invalid_headers on; index index.html; server { listen My.IP; server_name My.IP; access_log off; root /storage; location / { error_page 404 = @backend; } location @backend { proxy_pass http://server2.my-domain.tld; } } } -- ? ????????, ?. ????????. WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From is at rambler-co.ru Fri Feb 22 12:47:06 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 22 Feb 2008 12:47:06 +0300 Subject: post_to_static Message-ID: <20080222094706.GE22278@rambler-co.ru> I'm going to commit capability to POST to static files, however, I'm not sure about directive name: post_to_static on|off static_post on|off Are other variants ? -- Igor Sysoev http://sysoev.ru/en/ From marc at corky.net Fri Feb 22 13:07:18 2008 From: marc at corky.net (marc at corky.net) Date: Fri, 22 Feb 2008 05:07:18 -0500 Subject: post_to_static In-Reply-To: <20080222094706.GE22278@rambler-co.ru> References: <20080222094706.GE22278@rambler-co.ru> Message-ID: <47BE9ED6.7000007@corky.net> I vote for post_to_static From y.georgiev at gmail.com Fri Feb 22 13:06:06 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Fri, 22 Feb 2008 12:06:06 +0200 Subject: post_to_static In-Reply-To: <20080222094706.GE22278@rambler-co.ru> References: <20080222094706.GE22278@rambler-co.ru> Message-ID: <4378145a0802220206w76b94228uaf21d0b6e88779e2@mail.gmail.com> post_to_static on|off -------------- next part -------------- An HTML attachment was scrubbed... URL: From denis at gostats.ru Fri Feb 22 13:16:13 2008 From: denis at gostats.ru (Denis F. Latypoff) Date: Fri, 22 Feb 2008 16:16:13 +0600 Subject: post_to_static In-Reply-To: <20080222094706.GE22278@rambler-co.ru> References: <20080222094706.GE22278@rambler-co.ru> Message-ID: <426669877.20080222161613@gostats.ru> Hello Igor, Friday, February 22, 2008, 3:47:06 PM, you wrote: > I'm going to commit capability to POST to static files, however, I'm not sure > about directive name: > post_to_static on|off > static_post on|off > Are other variants ? allow_post_to_static ;) -- Best regards, Denis mailto:denis at gostats.ru From lists at ruby-forum.com Fri Feb 22 14:51:15 2008 From: lists at ruby-forum.com (Mustafa Toraman) Date: Fri, 22 Feb 2008 12:51:15 +0100 Subject: How to transfer a normal HTTP request to HTTPS? 
Message-ID: Hello all, I am using nginx/0.5.35 + FastCGI. I need to switch a normal HTTP request to SSL. For example: I own a BitTorrent tracker and I need announces to work over SSL. So if someone sends an announce to http://domain/announce.php, how can I switch it to SSL? I don't want to change the announce URL, but announce.php must work over SSL. Will it switch the request to SSL inside the box, working in the background? Simply put: how do I use SSL on a normal HTTP address? I'm ready to pay for that, or say a thanks :) Thank you for now! -- Posted via http://www.ruby-forum.com/. From y.georgiev at gmail.com Fri Feb 22 15:12:36 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Fri, 22 Feb 2008 14:12:36 +0200 Subject: How to transfer a normal HTTP request to HTTPS? In-Reply-To: References: Message-ID: <4378145a0802220412i2284ac37gc1df157a8d65eba0@mail.gmail.com> Hello, First start nginx with SSL support. HINT: http://wiki.codemongers.com/NginxHttpSslModule Second, use the rewrite module for the redirect. HINT: http://wiki.codemongers.com/NginxHttpRewriteModule On Fri, Feb 22, 2008 at 1:51 PM, Mustafa Toraman wrote: > Hello all, > > I am using nginx/0.5.35 + FastCGI. I need to switch a normal HTTP > request to SSL. For example: > > I own a BitTorrent tracker and I need announces to work over SSL. So > if someone sends an announce to http://domain/announce.php, > how can I switch it to SSL? I don't want to change the announce > URL, but announce.php must work over SSL. Will it switch the > request to SSL inside the box, working in the background? > > Simply put: how do I use SSL on a normal HTTP address? > > I'm ready to pay for that, or say a thanks :) > > Thank you for now! > -- > Posted via http://www.ruby-forum.com/. > > -- ? ????????, ?. ????????.
WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.zawadzki at sensisoft.com Fri Feb 22 15:44:33 2008 From: rafal.zawadzki at sensisoft.com (=?UTF-8?B?UmFmYcWCIFphd2Fkemtp?=) Date: Fri, 22 Feb 2008 13:44:33 +0100 Subject: post_to_static In-Reply-To: <20080222094706.GE22278@rambler-co.ru> References: <20080222094706.GE22278@rambler-co.ru> Message-ID: <47BEC3B1.8000103@sensisoft.com> Igor Sysoev pisze: > I'm going to commit capability to POST to static files, however, I'm not sure > about directive name: Is there any wiki entry about this or any other plays where I can read more about this feature? > post_to_static on|off > > static_post on|off > > Are other variants ? post_to_static sounds great. -- Rafa? bluszcz Zawadzki System Architect +48 600 883 759 From lists at ruby-forum.com Fri Feb 22 15:56:52 2008 From: lists at ruby-forum.com (Mustafa Toraman) Date: Fri, 22 Feb 2008 13:56:52 +0100 Subject: How to transfer a normal HTTP request to HTTPS? In-Reply-To: <4378145a0802220412i2284ac37gc1df157a8d65eba0@mail.gmail.com> References: <4378145a0802220412i2284ac37gc1df157a8d65eba0@mail.gmail.com> Message-ID: <8c51e0deec8069a6157fd8d4e7e96b9e@ruby-forum.com> Thank you for your quick reply Mr. Georgiev, I start my nginx configured with ssl. It is accepting HTTPS requests well. Btw i have no idea about rewrite to redirect diffrent requests on ports with internal rewrite. HTTP is accepting port 80 and SSL 443. So then i dont want to get load with X2 requests with an external redirect. So , what is the correct rewrite code for internet transfer? Simply how to work announce.php request on SSL without any external rewrite. Thank you! Yordan Georgiev wrote: > Hello, > > First start nginx with ssl support. HINT: > http://wiki.codemongers.com/NginxHttpSslModule > Second, use rewrite module for redirect. 
HINT: > http://wiki.codemongers.com/NginxHttpRewriteModule > > On Fri, Feb 22, 2008 at 1:51 PM, Mustafa Toraman > wrote: > >> >> Simply how to work with ssl on a normal http address? >> >> I'm ready to pay for that or say a thanks :) >> >> Thank you for now! >> -- >> Posted via http://www.ruby-forum.com/. >> >> > > > -- > ? ????????, > ?. ????????. > > WEB: http://gigavolt-bg.net/ > Blog: http://live.gigavolt-bg.net/ -- Posted via http://www.ruby-forum.com/. From is at rambler-co.ru Fri Feb 22 16:16:20 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 22 Feb 2008 16:16:20 +0300 Subject: How to transfer a normal HTTP request to HTTPS? In-Reply-To: <8c51e0deec8069a6157fd8d4e7e96b9e@ruby-forum.com> References: <4378145a0802220412i2284ac37gc1df157a8d65eba0@mail.gmail.com> <8c51e0deec8069a6157fd8d4e7e96b9e@ruby-forum.com> Message-ID: <20080222131620.GB26224@rambler-co.ru> On Fri, Feb 22, 2008 at 01:56:52PM +0100, Mustafa Toraman wrote: > Thank you for your quick reply Mr. Georgiev, > > I start my nginx configured with ssl. It is accepting HTTPS requests > well. Btw i have no idea about rewrite to redirect diffrent requests on > ports with internal rewrite. > > HTTP is accepting port 80 and SSL 443. So then i dont want to get load > with X2 requests with an external redirect. > > So , what is the correct rewrite code for internet transfer? Simply how > to work announce.php request on SSL without any external rewrite. Browser does SSL connection only if it see https:// protocol. If it connect to server using http:// the connection will be plain text. > Thank you! > > Yordan Georgiev wrote: > > Hello, > > > > First start nginx with ssl support. HINT: > > http://wiki.codemongers.com/NginxHttpSslModule > > Second, use rewrite module for redirect. HINT: > > http://wiki.codemongers.com/NginxHttpRewriteModule > > > > On Fri, Feb 22, 2008 at 1:51 PM, Mustafa Toraman > > wrote: > > > >> > >> Simply how to work with ssl on a normal http address? 
> >> > >> I'm ready to pay for that or say a thanks :) > >> > >> Thank you for now! > >> -- > >> Posted via http://www.ruby-forum.com/. > >> > >> > > > > > > -- > > ? ????????, > > ?. ????????. > > > > WEB: http://gigavolt-bg.net/ > > Blog: http://live.gigavolt-bg.net/ > > -- > Posted via http://www.ruby-forum.com/. > -- Igor Sysoev http://sysoev.ru/en/ From y.georgiev at gmail.com Fri Feb 22 17:45:43 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Fri, 22 Feb 2008 16:45:43 +0200 Subject: How to transfer a normal HTTP request to HTTPS? In-Reply-To: <20080222131620.GB26224@rambler-co.ru> References: <4378145a0802220412i2284ac37gc1df157a8d65eba0@mail.gmail.com> <8c51e0deec8069a6157fd8d4e7e96b9e@ruby-forum.com> <20080222131620.GB26224@rambler-co.ru> Message-ID: <4378145a0802220645m43992b11h4f39e1d46adddcaf@mail.gmail.com> May be: location ~ ^/shop { rewrite ^/(.*) https://mydomain.com/$1 permanent; } On Fri, Feb 22, 2008 at 3:16 PM, Igor Sysoev wrote: > On Fri, Feb 22, 2008 at 01:56:52PM +0100, Mustafa Toraman wrote: > > > Thank you for your quick reply Mr. Georgiev, > > > > I start my nginx configured with ssl. It is accepting HTTPS requests > > well. Btw i have no idea about rewrite to redirect diffrent requests on > > ports with internal rewrite. > > > > HTTP is accepting port 80 and SSL 443. So then i dont want to get load > > with X2 requests with an external redirect. > > > > So , what is the correct rewrite code for internet transfer? Simply how > > to work announce.php request on SSL without any external rewrite. > > Browser does SSL connection only if it see https:// protocol. > If it connect to server using http:// the connection will be plain text. > > > Thank you! > > > > Yordan Georgiev wrote: > > > Hello, > > > > > > First start nginx with ssl support. HINT: > > > http://wiki.codemongers.com/NginxHttpSslModule > > > Second, use rewrite module for redirect. 
HINT: > > > http://wiki.codemongers.com/NginxHttpRewriteModule > > > > > > On Fri, Feb 22, 2008 at 1:51 PM, Mustafa Toraman > > > > wrote: > > > > > >> > > >> Simply how to work with ssl on a normal http address? > > >> > > >> I'm ready to pay for that or say a thanks :) > > >> > > >> Thank you for now! > > >> -- > > >> Posted via http://www.ruby-forum.com/. > > >> > > >> > > > > > > > > > -- > > > ? ????????, > > > ?. ????????. > > > > > > WEB: http://gigavolt-bg.net/ > > > Blog: http://live.gigavolt-bg.net/ > > > > -- > > Posted via http://www.ruby-forum.com/. > > > > -- > Igor Sysoev > http://sysoev.ru/en/ > > -- ? ????????, ?. ????????. WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhoult at gmail.com Fri Feb 22 20:44:40 2008 From: bhoult at gmail.com (Brandon Hoult) Date: Fri, 22 Feb 2008 11:44:40 -0600 Subject: directory based virtual host proxy. In-Reply-To: <3bcff0250802211518r2e34967fs914540591814937a@mail.gmail.com> References: <3bcff0250802211518r2e34967fs914540591814937a@mail.gmail.com> Message-ID: <3bcff0250802220944i2e42ce0cl2eaf9609b17cb573@mail.gmail.com> I posted the below earlier, but nobody has replied. I think I found the solution to part of my problem but my solution introduces a new issue. I added the following directive to pass http://my.domain.com/user_management/ to the correct mongrel cluster. location /user_management_dev/ { root /var/www/rails/user_management/current/; rewrite /user_management_dev(.*) $1; proxy_pass http://user_management_prod; break; } The problem is that now the application sees it's url as being http://my.domain.com so all the links inside the rails application don't go anywhere. Is there a way to tell the proxied application that it needs to add "/user_management" to the end of it's host name? 
----------------------------------- On Thu, Feb 21, 2008 at 5:18 PM, Brandon Hoult wrote: > I would like to have several rails applications behind the same domain > name. > > For example: > my.domain.com/application_1 > my.domain.com/application_2 > my.domain.com/application_3 > > These then need to be directed to the appropriate mongrel server. My > current config below would work fine if I had application1.domain.com, > application2.domain.com etc. But I can't seem to find an example of how > to use the same domain with different apps. > > Any hints would be appreciated. > > My curent config: > > ------------------------------------------------------------------------------- > upstream rails { > server 127.0.0.1:8050; > server 127.0.0.1:8051; > } > > #Rails App here > server { > listen 80; > server_name rails.softwyre.com; > root /var/www/rails/user_management/current/; > index index.html index.htm; > client_max_body_size 50M; > > access_log /var/log/nginx/localhost.access.log; > > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded_for $proxy_add_x_forwarded_for; > proxy_set_header Host $http_host; > proxy_redirect false; > proxy_max_temp_file_size 0; > > location / { > if (-f $request_filename) { > break; > } > if (-f $request_filename/index.html) { > rewrite (.*) $1/index.html break; > } > if (-f $request_filename.html) { > rewrite (.*) $1.html break; > } > if (!-f $request_filename) { > proxy_pass http://rails; > break; > } > } > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /500.html; > location = /500.html { > root /var/www/rails/user_management/current/public; > } > } > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at ruby-forum.com Fri Feb 22 21:42:16 2008 From: lists at ruby-forum.com (Todd HG) Date: Fri, 22 Feb 2008 19:42:16 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <4378145a0802220135r5404a3c7n391ecec9da307d62@mail.gmail.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> <137e0d906278b073224427bc03b08d1d@ruby-forum.com> <94d65cc2e3435ac57d4f6c3292125385@ruby-forum.com> <4378145a0802220135r5404a3c7n391ecec9da307d62@mail.gmail.com> Message-ID: Yordan Georgiev wrote: > Please excuse my bad english... > > I use nginx version: nginx/0.5.35 built by gcc 4.1.1 (Gentoo 4.1.1-r3). Thank you Yordan, I will try some of the settings from your config file. -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Fri Feb 22 23:45:23 2008 From: lists at ruby-forum.com (Brandon Hoult) Date: Fri, 22 Feb 2008 21:45:23 +0100 Subject: configuring nginx for different rails apps under same do In-Reply-To: <9ddc50bb591ff05ce1d0cc4a7c9c440b@ruby-forum.com> References: <20071121082431.GB23749@rambler-co.ru> <98b487d8ee0e478accf2bd91a45d9792@ruby-forum.com> <20071124090112.GB16975@rambler-co.ru> <9ddc50bb591ff05ce1d0cc4a7c9c440b@ruby-forum.com> Message-ID: I am having the same exact issue that you describe... did you ever figure out a solution? -- Posted via http://www.ruby-forum.com/. From bhoult at gmail.com Sat Feb 23 00:04:53 2008 From: bhoult at gmail.com (Brandon Hoult) Date: Fri, 22 Feb 2008 15:04:53 -0600 Subject: directory based virtual host proxy. 
In-Reply-To: <3bcff0250802220944i2e42ce0cl2eaf9609b17cb573@mail.gmail.com> References: <3bcff0250802211518r2e34967fs914540591814937a@mail.gmail.com> <3bcff0250802220944i2e42ce0cl2eaf9609b17cb573@mail.gmail.com> Message-ID: <3bcff0250802221304x12008d1bj599c2c062914aa62@mail.gmail.com> I figured this out... you have to use the prefix option for mongrel which can be called on the command line of included in the .yaml file in /etc/mongrel (if you have it configured that way) See: http://mongrel.rubyforge.org/wiki/FAQ On Fri, Feb 22, 2008 at 11:44 AM, Brandon Hoult wrote: > I posted the below earlier, but nobody has replied. I think I found the > solution to part of my problem but my solution introduces a new issue. > > I added the following directive to pass > http://my.domain.com/user_management/ to the correct mongrel cluster. > > location /user_management_dev/ { > root /var/www/rails/user_management/current/; > rewrite /user_management_dev(.*) $1; > proxy_pass http://user_management_prod; > break; > } > > The problem is that now the application sees it's url as being > http://my.domain.com so all the links inside the rails application don't > go anywhere. Is there a way to tell the proxied application that it needs > to add "/user_management" to the end of it's host name? > > ----------------------------------- > > > On Thu, Feb 21, 2008 at 5:18 PM, Brandon Hoult wrote: > > > I would like to have several rails applications behind the same domain > > name. > > > > For example: > > my.domain.com/application_1 > > my.domain.com/application_2 > > my.domain.com/application_3 > > > > These then need to be directed to the appropriate mongrel server. My > > current config below would work fine if I had application1.domain.com, > > application2.domain.com etc. But I can't seem to find an example of how > > to use the same domain with different apps. > > > > Any hints would be appreciated. 
> > > > My curent config: > > > > ------------------------------------------------------------------------------- > > upstream rails { > > server 127.0.0.1:8050; > > server 127.0.0.1:8051; > > } > > > > #Rails App here > > server { > > listen 80; > > server_name rails.softwyre.com; > > root /var/www/rails/user_management/current/; > > index index.html index.htm; > > client_max_body_size 50M; > > > > access_log /var/log/nginx/localhost.access.log; > > > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded_for $proxy_add_x_forwarded_for; > > proxy_set_header Host $http_host; > > proxy_redirect false; > > proxy_max_temp_file_size 0; > > > > location / { > > if (-f $request_filename) { > > break; > > } > > if (-f $request_filename/index.html) { > > rewrite (.*) $1/index.html break; > > } > > if (-f $request_filename.html) { > > rewrite (.*) $1.html break; > > } > > if (!-f $request_filename) { > > proxy_pass http://rails; > > break; > > } > > } > > > > # redirect server error pages to the static page /50x.html > > # > > error_page 500 502 503 504 /500.html; > > location = /500.html { > > root /var/www/rails/user_management/current/public; > > } > > } > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Sat Feb 23 00:08:05 2008 From: lists at ruby-forum.com (Brandon Hoult) Date: Fri, 22 Feb 2008 22:08:05 +0100 Subject: configuring nginx for different rails apps under same do In-Reply-To: References: <20071121082431.GB23749@rambler-co.ru> <98b487d8ee0e478accf2bd91a45d9792@ruby-forum.com> <20071124090112.GB16975@rambler-co.ru> <9ddc50bb591ff05ce1d0cc4a7c9c440b@ruby-forum.com> Message-ID: Brandon Hoult wrote: > I am having the same exact issue that you describe... did you ever > figure out a solution? Figured this out from another post... 
you have to use the prefix option for mongrel, which can be given on the command line or included in the .yaml file in /etc/mongrel (if you have it configured that way) See: http://mongrel.rubyforge.org/wiki/FAQ -- Posted via http://www.ruby-forum.com/. From y.georgiev at gmail.com Sat Feb 23 12:20:22 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Sat, 23 Feb 2008 11:20:22 +0200 Subject: excessive RAM consumption - memory leak In-Reply-To: References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> <137e0d906278b073224427bc03b08d1d@ruby-forum.com> <94d65cc2e3435ac57d4f6c3292125385@ruby-forum.com> <4378145a0802220135r5404a3c7n391ecec9da307d62@mail.gmail.com> Message-ID: <4378145a0802230120w1c81c2a4v5456769c7c646039@mail.gmail.com> BTW, I compile my nginx with "--without-http_charset_module --without-http_ssi_module --without-http_auth_basic_module --without-http_autoindex_module --without-http_geo_module --without-http_map_module --without-http_limit_zone_module --without-http_empty_gif_module --without-http_browser_module --without-http_upstream_ip_hash_module" On Fri, Feb 22, 2008 at 8:42 PM, Todd HG wrote: > Yordan Georgiev wrote: > > Please excuse my bad english... > > > > I use nginx version: nginx/0.5.35 built by gcc 4.1.1 (Gentoo 4.1.1-r3). > > Thank you Yordan, I will try some of the settings from your config file. > -- > Posted via http://www.ruby-forum.com/. > > -- WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed...
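Pulling the thread's two pieces together — the prefix-stripping proxy and mongrel's prefix option — a sketch of serving several Rails apps under one domain might look like the following. The upstream names, ports, and paths here are illustrative, not taken from the posters' real configs:

```nginx
# Hypothetical sketch; upstream names and ports are made up.
upstream app_one { server 127.0.0.1:8050; }
upstream app_two { server 127.0.0.1:8060; }

server {
    listen      80;
    server_name my.domain.com;

    # Option 1: strip the prefix before proxying. The app then sees
    # itself at "/", which is why the generated links lose the prefix.
    location /application_1/ {
        rewrite ^/application_1(.*)$ $1 break;
        proxy_pass http://app_one;
    }

    # Option 2: leave the URI alone and start each mongrel with a
    # prefix of /application_2, so the app itself knows where it
    # lives and generates correct links.
    location /application_2/ {
        proxy_pass http://app_two;
    }
}
```

With option 2, the prefix is set on the mongrel side (per the FAQ linked above), e.g. via `--prefix /application_2` on the command line or the corresponding entry in the cluster .yaml file; the exact key name depends on your mongrel_cluster version.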
URL: From y.georgiev at gmail.com Sat Feb 23 23:27:02 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Sat, 23 Feb 2008 22:27:02 +0200 Subject: FastCGI and load-balancing Message-ID: <4378145a0802231227w5a260814qa0be765b9dca9303@mail.gmail.com> Hello every one, I wish use remote "fastcgi_pass" But my result is: The page you are looking for is temporarily unavailable. Please try again later. My spawn-fcgi server work: s1 ~ # pstree -u init-+-6*[agetty] |-php-cgi(nginx)---5*[php-cgi] |-sshd---sshd---bash---pstree `-udevd s1 ~ # netstat -ntap Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:1026 0.0.0.0:* LISTEN 24973/php-cgi tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 4636/sshd s1 ~ # -- ? ????????, ?. ????????. WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From y.georgiev at gmail.com Sun Feb 24 00:33:58 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Sat, 23 Feb 2008 23:33:58 +0200 Subject: FastCGI and load-balancing In-Reply-To: <4378145a0802231227w5a260814qa0be765b9dca9303@mail.gmail.com> References: <4378145a0802231227w5a260814qa0be765b9dca9303@mail.gmail.com> Message-ID: <4378145a0802231333n7de171d2xae5ea61d4f9d8a3e@mail.gmail.com> My ISP filter port 1026! I change port to 8080 and my remote fastcgi_pass work. On Sat, Feb 23, 2008 at 10:27 PM, Yordan Georgiev wrote: > Hello every on > > I wish use remote "fastcgi_pass" But my result is: > > The page you are looking for is temporarily unavailable. > Please try again later. 
> > My spawn-fcgi server work: > s1 ~ # pstree -u > init-+-6*[agetty] > |-php-cgi(nginx)---5*[php-cgi] > |-sshd---sshd---bash---pstree > > `-udevd > s1 ~ # netstat -ntap > Active Internet connections (servers and established) > Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name > tcp 0 0 0.0.0.0:1026 0.0.0.0:* LISTEN 24973/php-cgi > > tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 4636/sshd > s1 ~ # > > -- > ? ????????, > ?. ????????. > > WEB: http://gigavolt-bg.net/ > Blog: http://live.gigavolt-bg.net/ -- ? ????????, ?. ????????. WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsierles at engineyard.com Sun Feb 24 01:51:27 2008 From: jsierles at engineyard.com (Joshua Sierles) Date: Sun, 24 Feb 2008 00:51:27 +0200 Subject: memcached module opens one connection per request Message-ID: <22553095-7AAF-4657-B77D-BEF5B1B247A5@engineyard.com> While testing out the nginx memcached module, I noticed that it opens a new connection to memcached for each request. Is there a way to make nginx reuse a single connection? This makes load testing complicated since the nginx machine quickly runs out of ephemeral ports. Joshua Sierles From nginx.mailinglist at xinio.info Sun Feb 24 04:41:49 2008 From: nginx.mailinglist at xinio.info (nginx.mailinglist) Date: Sun, 24 Feb 2008 01:41:49 +0000 Subject: high bandwidth configuration help Message-ID: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> Hi how to tweak nginx for large file serving and maximizing the use of available bandwidth? 
I used to use lighttpd and was able to use 150mbit average at all times with lighttpd (writev backend selected) serving very large files to a lot of people. But now I can't achieve the same results. Can anyone point me to any config options to make large file serving more efficient, there aren't many examples :( Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx.mailinglist at xinio.info Sun Feb 24 04:51:36 2008 From: nginx.mailinglist at xinio.info (nginx.mailinglist) Date: Sun, 24 Feb 2008 01:51:36 +0000 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> Message-ID: <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> Forgot to mention: below is my nginx config. I can push about 80mbit average now but loads are above 6 (used to be 1-2 with lighttpd); this is after setting the "sendfile off" switch, was only pushing ~30mbit then :( Here's my config (I replaced IPs and hostnames for privacy reasons): http://pastebin.com/m55a90c33 Thanks, and hopefully someone can help me to migrate completely from lighttpd to nginx :) So any tips on how to optimise for a lot of large concurrent file downloads are welcome! On Sun, Feb 24, 2008 at 1:41 AM, nginx. mailinglist < nginx.mailinglist at xinio.info> wrote: > Hi > > how to tweak nginx for large file serving and maximizing the use of > available bandwidth? > > I used to use lighttpd and was able to use 150mbit average at all times > with lighttpd (writev backend selected) serving very large files to a lot of > people > > But now I can't achieve the same results > > Can anyone point me to any config options to make large file serving more > efficient, there aren't many examples :( > > Thanks -------------- next part -------------- An HTML attachment was scrubbed...
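For reference while debugging a setup like the one above, a minimal large-file-serving skeleton might look as follows. The directive names are standard nginx, but the values and the paths are guesses to experiment with, not tested recommendations:

```nginx
# Illustrative starting point for pushing large static files.
worker_processes  1;              # mirrors lighttpd's single-process model

events {
    worker_connections  4096;     # room for many concurrent downloads
}

http {
    sendfile    on;               # zero-copy send; compare against "off"
    tcp_nopush  on;               # send full packets when sendfile is on

    server {
        listen 80;
        location /files/ {        # hypothetical download directory
            root /hdd1;
        }
    }
}
```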
URL: From eliott at cactuswax.net Sun Feb 24 07:07:23 2008 From: eliott at cactuswax.net (eliott) Date: Sat, 23 Feb 2008 20:07:23 -0800 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> Message-ID: <428d921d0802232007n4311088ai41d06cb1898961fb@mail.gmail.com> On 2/23/08, nginx. mailinglist wrote: > Forgot to mention bellow is my nginx config > > i can push about 80mbit average now but loads are above 6 (used to be 1-2 > with lighttpd) this is after settings "sendfile off" switch, was only > pushing ~30mbit then :( > > heres my config (i replaced ips and hostnames for privacy reasons) > > http://pastebin.com/m55a90c33 > > Thanks and hopefully someone can help me to migrate completely from lighttpd > to nginx :) > > So any tips on how to optimise for alot of large concurent file downloads > are welcome What is the OS you are running it on? From dave at cheney.net Sun Feb 24 08:55:44 2008 From: dave at cheney.net (Dave Cheney) Date: Sun, 24 Feb 2008 16:55:44 +1100 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> Message-ID: <1B13E617-C440-4888-AED8-278531D61B29@cheney.net> What is the configuration of /hdd1, is it capable of pushing that kind of random IO load ? How many workers are you running? 
Cheers Dave On 24/02/2008, at 12:51 PM, nginx.mailinglist wrote: > Forgot to mention bellow is my nginx config > > i can push about 80mbit average now but loads are above 6 (used to > be 1-2 with lighttpd) this is after settings "sendfile off" switch, > was only pushing ~30mbit then :( > > heres my config (i replaced ips and hostnames for privacy reasons) > > http://pastebin.com/m55a90c33 > > Thanks and hopefully someone can help me to migrate completely from > lighttpd to nginx :) > > So any tips on how to optimise for alot of large concurent file > downloads are welcome -------------- next part -------------- An HTML attachment was scrubbed... URL: From is at rambler-co.ru Sun Feb 24 10:43:56 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Sun, 24 Feb 2008 10:43:56 +0300 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> Message-ID: <20080224074356.GB64612@rambler-co.ru> On Sun, Feb 24, 2008 at 01:51:36AM +0000, nginx.mailinglist wrote: > Forgot to mention bellow is my nginx config > > i can push about 80mbit average now but loads are above 6 (used to be 1-2 > with lighttpd) this is after settings "sendfile off" switch, was only > pushing ~30mbit then :( > > heres my config (i replaced ips and hostnames for privacy reasons) > > http://pastebin.com/m55a90c33 > > Thanks and hopefully someone can help me to migrate completely from lighttpd > to nginx :) > > So any tips on how to optimise for alot of large concurent file downloads > are welcome > > ! Set single worker process instead of 6. This is like single lighttpd process. It seems that your disk can not handle several concurrent accesses. You may try to set worker_processes to 2 or 3 and look bandwidth. > On Sun, Feb 24, 2008 at 1:41 AM, nginx. 
mailinglist < > nginx.mailinglist at xinio.info> wrote: > > > Hi > > > > how to tweak nginx for large file serving and maximizing the use of > > available bandwidth? > > > > I used to use lighttpd and was able to use 150mbit average at all times > > with lighttpd (writev backend selected) serving very large files t oalot of > > people > > > > But now i cant achievesame results > > > > Can anyone point me to any config options to make large file serving more > > efficient, there arent many examples :( -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Sun Feb 24 10:44:33 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Sun, 24 Feb 2008 10:44:33 +0300 Subject: memcached module opens one connection per request In-Reply-To: <22553095-7AAF-4657-B77D-BEF5B1B247A5@engineyard.com> References: <22553095-7AAF-4657-B77D-BEF5B1B247A5@engineyard.com> Message-ID: <20080224074433.GC64612@rambler-co.ru> On Sun, Feb 24, 2008 at 12:51:27AM +0200, Joshua Sierles wrote: > While testing out the nginx memcached module, I noticed that it opens > a new connection to memcached for each request. Is there a way to make > nginx reuse a single connection? This makes load testing complicated > since the nginx machine quickly runs out of ephemeral ports. No, nginx does not support persistent connections to backends. 
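For anyone reproducing Joshua's test, the memcached module setup being load-tested looks roughly like this (the key choice and the fallback upstream are illustrative, not from his config); since each lookup opens a fresh connection, the client machine's ephemeral port range becomes the practical ceiling:

```nginx
# Sketch of the memcached module as of nginx 0.5/0.6; "backend" is a
# hypothetical upstream consulted when the key is missing from the cache.
location / {
    set            $memcached_key  $uri;
    memcached_pass 127.0.0.1:11211;
    default_type   text/html;
    error_page     404 = /fallback;
}

location /fallback {
    internal;                  # reached only via the 404 redirect above
    proxy_pass http://backend;
}
```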
-- Igor Sysoev http://sysoev.ru/en/ From al-nginx at none.at Sun Feb 24 11:15:10 2008 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 24 Feb 2008 09:15:10 +0100 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> Message-ID: <20080224081510.GA14704@none.at> Hi, On Son 24.02.2008 01:51, nginx.mailinglist wrote: >Forgot to mention: below is my nginx config > >I can push about 80mbit average now but loads are above 6 (used to be >1-2 with lighttpd); this is after setting the "sendfile off" switch, was >only pushing ~30mbit then :( > >Here's my config (I replaced IPs and hostnames for privacy reasons) > >http://pastebin.com/m55a90c33 > >Thanks and hopefully someone can help me to migrate completely from >lighttpd to nginx :) > >So any tips on how to optimise for a lot of large concurrent file >downloads are welcome What does your lighttpd conf file look like? Have you changed anything else than the webserver? Which nginx version do you use? How was nginx compiled? As eliott asked: Which OS do you use? If you are under Linux, maybe you can run: vmstat -d 1 and/or vmstat 1 BR Aleks From nginx.mailinglist at xinio.info Sun Feb 24 12:47:02 2008 From: nginx.mailinglist at xinio.info (nginx.mailinglist) Date: Sun, 24 Feb 2008 09:47:02 +0000 Subject: high bandwidth configuration help In-Reply-To: <20080224081510.GA14704@none.at> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> Message-ID: <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> Hi, thanks for all the replies! I should have provided more details, but I was in despair and it was late at night; I have a clear head now after a good sleep.
OK, here's the info requested. The servers (I have another 6 pushing ~120mbit average with lighttpd 1.4 compiled now, all servers same config): OS: Suse 10.3 (minimal, I have barely anything installed and almost everything compiled by me) http://pastebin.com/m41b5a046 Applications Running: *nginx_0.5.35 (compiled http://pastebin.com/m6462475a ) used for file downloads *php_5.2.5 (compiled http://pastebin.com/m3c02ea6e , ~20 threads started with lighttpd's spawn-fcgi), very short php scripts are run that use X-Accel-Redirect to pass control of file serving to nginx *lighttpd_1.5r2048 (compiled) used for file uploads, due to the progress meter; fairly stable in production though it's from the svn *custom php_cli socket daemons for inter-server RPC, these are very light *top http://pastebin.com/m2587b666* *dstat http://pastebin.com/m2055efd4* *df http://pastebin.com/m724e52a7* *vmstat http://pastebin.com/m5cdc2f0b* Here's the *lighttpd 1.4.18 config* from other servers on the network (they are all pretty much the same, just differing hosts): http://pastebin.com/m7b1af1e6 Now the only thing that changed on this server is that lighttpd 1.4.18 was replaced by nginx, and the requests now go through nginx's X-Accel-Redirect, not lighttpd's mod_secdownload. The php5 fcgi scripts are very small; they check that the file exists etc., then just add a download id into the headers for nginx to pick up and pass control onto nginx; the relevant bit is here > http://pastebin.com/m2bcbe7fd Basically the server was capable of pushing above 100mbit easily before; now the load is high, but this could be due to me setting a high number of worker threads like Igor said, and more php-cgi processes floating around. So to summarise, I need to figure out how to tweak this so I can move all the servers from lighttpd to nginx for file downloads. Regards On Sun, Feb 24, 2008 at 8:15 AM, Aleksandar Lazic wrote: > > What does your lighttpd conf file look like? > > Have you changed anything else than the webserver? > Which nginx version do you use?
> How was nginx compiled? > > As eliott asked: Which OS do you use? > > If you under linux maybe you can make a: > > vmstat -d 1 > and/or > vmstat 1 > > > BR > > Aleks > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx.mailinglist at xinio.info Sun Feb 24 13:46:55 2008 From: nginx.mailinglist at xinio.info (nginx.mailinglist) Date: Sun, 24 Feb 2008 10:46:55 +0000 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> Message-ID: <807a83ca0802240246m6c6bd216wa02a863ff6c9cbd7@mail.gmail.com> Hi see mrtg @ http://img402.imageshack.us/img402/8058/nginxwx0.png before the drop was running lighttpd (consistently above 100mbit) after nginx (80-90mbit) :( the bandwdith is paid for but i cant utilise it which sucks -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at purefiction.net Sun Feb 24 15:39:46 2008 From: alex at purefiction.net (Alexander Staubo) Date: Sun, 24 Feb 2008 13:39:46 +0100 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> Message-ID: <88daf38c0802240439o6ef79735u39cd6161525c150d@mail.gmail.com> On 2/24/08, nginx.mailinglist wrote: > thanks for all the replies! i should have provided more details but I was in > dispair and it was late at night, have clear head now after good sleep. [...] 
> top http://pastebin.com/m2587b666 > dstat http://pastebin.com/m2055efd4 > df http://pastebin.com/m724e52a7 > vmstat http://pastebin.com/m5cdc2f0b You need to run vmstat for a longer period of time (try "vmstat 1" and let it run for a minute), but looking at that one line: server11:/ # vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 5 84 49296 16044 1824108 0 0 17 11 14 11 3 18 43 36 It seems you are completely I/O-bound. The second column shows us that five processes are blocking for I/O -- ie., doing nothing except wait for I/O. I suggest you follow Igor's advice and set: worker_processes 1; and see what happens then. Alexander. From is at rambler-co.ru Sun Feb 24 19:27:12 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Sun, 24 Feb 2008 19:27:12 +0300 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> Message-ID: <20080224162712.GA77685@rambler-co.ru> On Sun, Feb 24, 2008 at 09:47:02AM +0000, nginx.mailinglist wrote: > Hi > > thanks for all the replies! i should have provided more details but I was in > dispair and it was late at night, have clear head now after good sleep. 
> > Ok heres info requested, the servers (i have another 6 pushing 120~mbit > average with lighttpd 1.4 compiled now, all servers same config) > > > > OS: Suse 10.3 (minimal, i have barely anything installed and allmost > everythign compiled by me) http://pastebin.com/m41b5a046 > > Applications Running: > *nginx_0.5.35 (compiled http://pastebin.com/m6462475a ) used for file > downloads > *php_5.2.5 (compiled http://pastebin.com/m3c02ea6e , ~20 threads started > with lighttps spawn fcgi), very short php scripts are run that use > X-Accell-Redirect to pass control to nginx of file serving > *lighttpd_1.5r2048 (compiled) used for file uploads, due to the progress > meter, fairly stable in production tho its from the svn > *custom php_cli socket deamons for inter server RPC, these are very light > > > *top http://pastebin.com/m2587b666* > *dstat http://pastebin.com/m2055efd4* > *df http://pastebin.com/m724e52a7* > *vmstat http://pastebin.com/m5cdc2f0b* > ** > heres *lighttpd 1.4.18 config* from other servers on network (they are all > pretty much same just differing hosts) http://pastebin.com/m7b1af1e6 > > > now the only thing that changed on this server is lighttpd 1.4.18 was > replaced by nginx, and now the request go thru nginx X-AcellRedirect not > lighttpd's mod_secdownload > > the php5 fcgi scripts are very small it check file exists etc, then just > adds a downlaod id into headers for nginx to pick up and passes control onto > nginx the relevant bit is here > http://pastebin.com/m2bcbe7fd > > basically the server is capable of pushing above 100mbit easily before, now > the load is high, but this could be due to me setting high number of worker > threads like Igor said and more php-cgi processes floating around > > so to summarise I need to figure out how to tweak this so i can move all the > servers from lighttpd to nginx for file downloads You should try to set worker_processes 1; LA in Linux is sum of processes that run or ready to run and processes that wait 
for disk I/O. LA 6 and "70.5%wa" means that all 6 nginx workers wait for disk I/O. Here is your bottleneck. -- Igor Sysoev http://sysoev.ru/en/ From y.georgiev at gmail.com Sun Feb 24 19:46:24 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Sun, 24 Feb 2008 18:46:24 +0200 Subject: high bandwidth configuration help In-Reply-To: <20080224162712.GA77685@rambler-co.ru> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> Message-ID: <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> Use bonnie for test you storage space. And exucate "dmesg" (for Linux OS) and view any network problem. On Sun, Feb 24, 2008 at 6:27 PM, Igor Sysoev wrote: > On Sun, Feb 24, 2008 at 09:47:02AM +0000, nginx.mailinglist wrote: > > > Hi > > > > thanks for all the replies! i should have provided more details but I > was in > > dispair and it was late at night, have clear head now after good sleep. 
> > > > Ok heres info requested, the servers (i have another 6 pushing 120~mbit > > average with lighttpd 1.4 compiled now, all servers same config) > > > > > > > > OS: Suse 10.3 (minimal, i have barely anything installed and allmost > > everythign compiled by me) http://pastebin.com/m41b5a046 > > > > Applications Running: > > *nginx_0.5.35 (compiled http://pastebin.com/m6462475a ) used for file > > downloads > > *php_5.2.5 (compiled http://pastebin.com/m3c02ea6e , ~20 threads > started > > with lighttps spawn fcgi), very short php scripts are run that use > > X-Accell-Redirect to pass control to nginx of file serving > > *lighttpd_1.5r2048 (compiled) used for file uploads, due to the progress > > meter, fairly stable in production tho its from the svn > > *custom php_cli socket deamons for inter server RPC, these are very > light > > > > > > *top http://pastebin.com/m2587b666* > > *dstat http://pastebin.com/m2055efd4* > > *df http://pastebin.com/m724e52a7* > > *vmstat http://pastebin.com/m5cdc2f0b* > > ** > > heres *lighttpd 1.4.18 config* from other servers on network (they are > all > > pretty much same just differing hosts) http://pastebin.com/m7b1af1e6 > > > > > > now the only thing that changed on this server is lighttpd 1.4.18 was > > replaced by nginx, and now the request go thru nginx X-AcellRedirect not > > lighttpd's mod_secdownload > > > > the php5 fcgi scripts are very small it check file exists etc, then just > > adds a downlaod id into headers for nginx to pick up and passes control > onto > > nginx the relevant bit is here > http://pastebin.com/m2bcbe7fd > > > > basically the server is capable of pushing above 100mbit easily before, > now > > the load is high, but this could be due to me setting high number of > worker > > threads like Igor said and more php-cgi processes floating around > > > > so to summarise I need to figure out how to tweak this so i can move all > the > > servers from lighttpd to nginx for file downloads > > You should try to 
set > worker_processes 1; > > LA in Linux is sum of processes that run or ready to run and processes > that > wait for disk I/O. LA 6 and "70.5%wa" means that all 6 nginx workers wait > for disk I/O. Here is your bottleneck. > > > -- > Igor Sysoev > http://sysoev.ru/en/ > > -- Please excuse my bad english... ? ????????, ?. ????????. WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robm at fastmail.fm Mon Feb 25 04:16:01 2008 From: robm at fastmail.fm (Rob Mueller) Date: Mon, 25 Feb 2008 12:16:01 +1100 Subject: post_action docs Message-ID: <2a9501c8774b$fdded1b0$0c01a8c0@robmhp> I'm trying to use post_action to track some download information, but I'm having a few issues. I've done some searching, but the documentation for this feature is almost non-existant from what I can see. Am I missing something? Anyway, my two main questions. 1. I have a server config that proxies all requests and all directories for a number domains directly to a backend server. Because it's all directories, I can't reserve one for the post_action handler like the examples I've normally seen. (e.g. the example here: http://article.gmane.org/gmane.comp.web.nginx.english/1070/) So I've tried this: server { listen a.b.c.d:80; location / { rewrite (.*) /http/$host$1 break; proxy_pass http://backend/; ... } post_action http://127.0.0.1/done; } server { listen 127.0.0.1:80; location = /done { fastcgi_pass ... etc ... } } But that didn't seem to work. Is there any way to get the post_action to submit to a completely separate server { } block? 2. I want to pass a header returned by the upstream server to the post_action handler for logging purposes, but I'm not sure how to do that. I must be missing something obvious here. 
fastcgi_param PARAM_EXTRA_1 ...some upstream header...; Thanks Rob From jiaosq at mail.51.com Mon Feb 25 04:51:56 2008 From: jiaosq at mail.51.com (=?GB2312?B?vbnKpMe/?=) Date: Mon, 25 Feb 2008 09:51:56 +0800 Subject: post_to_static In-Reply-To: <47BEC3B1.8000103@sensisoft.com> References: <20080222094706.GE22278@rambler-co.ru> <47BEC3B1.8000103@sensisoft.com> Message-ID: static_post 2008/2/22, Rafa? Zawadzki : > > Igor Sysoev pisze: > > > I'm going to commit capability to POST to static files, however, I'm not > sure > > about directive name: > > > Is there any wiki entry about this or any other plays where I can read > more about this feature? > > > > > post_to_static on|off > > > > static_post on|off > > > > Are other variants ? > > > post_to_static sounds great. > > > -- > Rafa? bluszcz Zawadzki > System Architect > +48 600 883 759 > > -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ??? Email: jiaosq at mail.51.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at cheney.net Mon Feb 25 05:01:02 2008 From: dave at cheney.net (Dave Cheney) Date: Mon, 25 Feb 2008 13:01:02 +1100 Subject: high bandwidth configuration help In-Reply-To: <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> Message-ID: <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> My recommendation for IO bound workloads is to use the Deadline Scheduler rather than the anticipatory or CFQ scheduler On 25/02/2008, at 3:46 AM, Yordan Georgiev wrote: > Use bonnie for test you storage space. And exucate "dmesg" (for > Linux OS) and view any network problem. 
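On the I/O-bound diagnosis that keeps coming up in the bandwidth thread (five blocked processes and 36% iowait in the posted vmstat sample), a few lines of script can scan a batch of `vmstat 1` samples automatically. This sketch assumes procps vmstat's column order, with `b` (processes in uninterruptible sleep) second and `wa` (iowait percentage) last; the thresholds are arbitrary starting points:

```python
def io_bound(vmstat_line, blocked_threshold=2, wa_threshold=30):
    """Flag a vmstat sample as I/O-bound when several processes sit in
    uninterruptible sleep ('b') or iowait ('wa') dominates CPU time."""
    fields = vmstat_line.split()
    blocked = int(fields[1])    # 'b' column: processes blocked on I/O
    wa = int(fields[-1])        # 'wa' is the last column in this layout
    return blocked >= blocked_threshold or wa >= wa_threshold

# The sample posted in the thread: b=5, wa=36 -> clearly I/O-bound.
sample = "0 5 84 49296 16044 1824108 0 0 17 11 14 11 3 18 43 36"
print(io_bound(sample))  # True
```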
From robm at fastmail.fm Mon Feb 25 07:26:18 2008 From: robm at fastmail.fm (Rob Mueller) Date: Mon, 25 Feb 2008 15:26:18 +1100 Subject: post_action docs References: <2a9501c8774b$fdded1b0$0c01a8c0@robmhp> Message-ID: <2b8901c8776f$24680090$0c01a8c0@robmhp> To reply to myself... > 2. I want to pass a header returned by the upstream server to the > post_action handler for logging purposes, but I'm not sure how to do that. > I must be missing something obvious here. > > fastcgi_param PARAM_EXTRA_1 ...some upstream header...; I found this previous post: http://article.gmane.org/gmane.comp.web.nginx.english/1305 Which seems to fix issue 2 nicely. Now if only I could fix the first issue. Rob From y.georgiev at gmail.com Mon Feb 25 11:02:15 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Mon, 25 Feb 2008 10:02:15 +0200 Subject: high bandwidth configuration help In-Reply-To: <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> Message-ID: <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> CFQ - refinement, but powerful for extreme disk I/O On Mon, Feb 25, 2008 at 4:01 AM, Dave Cheney wrote: > My recommendation for IO bound workloads is to use the Deadline > Scheduler rather than the anticipatory or CFQ scheduler > > On 25/02/2008, at 3:46 AM, Yordan Georgiev wrote: > > > Use bonnie for test you storage space. And exucate "dmesg" (for > > Linux OS) and view any network problem. > > > -- Please excuse my bad english... ? ????????, ?. ????????. WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... 
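Returning to Rob's still-unsolved first issue — routing post_action out of a catch-all server block — one untested workaround is to point post_action at an internal location inside the same server{} rather than a second server{}. The location name and fastcgi address below are made up for illustration, and whether post_action behaves well here is exactly what would need verifying:

```nginx
# Untested sketch: keep the post_action target in the same server{}
# block and mark it internal so clients cannot request it directly.
server {
    listen a.b.c.d:80;

    location / {
        rewrite (.*) /http/$host$1 break;
        proxy_pass  http://backend/;
        post_action /done-internal;
    }

    location /done-internal {
        internal;                       # only reachable via nginx itself
        fastcgi_pass 127.0.0.1:9000;    # hypothetical logging handler
        # fastcgi_param entries for the fields to log would go here
    }
}
```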
URL: From nginx.mailinglist at xinio.info Mon Feb 25 12:45:57 2008 From: nginx.mailinglist at xinio.info (nginx.mailinglist) Date: Mon, 25 Feb 2008 09:45:57 +0000 Subject: high bandwidth configuration help In-Reply-To: <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> Message-ID: <807a83ca0802250145o1c16ce3dhed8b0d3093b2868d@mail.gmail.com> I changed to one worker and the loads have gone down, thanks (spasibo!) Igor, though bandwidth usage is still 10% lower than lighttpd. I'm going to test on other servers now and will report back results. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From is at rambler-co.ru Mon Feb 25 13:25:54 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 25 Feb 2008 13:25:54 +0300 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802250145o1c16ce3dhed8b0d3093b2868d@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> <807a83ca0802250145o1c16ce3dhed8b0d3093b2868d@mail.gmail.com> Message-ID: <20080225102554.GB90377@rambler-co.ru> On Mon, Feb 25, 2008 at 09:45:57AM +0000, nginx.mailinglist wrote: > I changed to one worker and the loads have gone down, thanks (spasibo!) > Igor > > tho bandwidth usage is still 10% lower than lighttpd im gonna test on other > servers now > > i will back report results As I understand for writev-backend lighttpd mmap()s file in 512K chunks and writev()s them. You may try in nginx output_buffers 1 512k; # default is "1 32k" The output_buffers are used if sendfile is not used. Also you may try to set 2 or 3 workers if it will increase bandwidth. Note, that disabling sendfile in both nginx and ligthy may increase bandwidth, but also ceratinly increases memory consumption at user- and kernel-level that may leave to DOS. You should find compromisse. -- Igor Sysoev http://sysoev.ru/en/ From foxx at freemail.gr Mon Feb 25 14:45:29 2008 From: foxx at freemail.gr (Athan Dimoy) Date: Mon, 25 Feb 2008 13:45:29 +0200 Subject: post_to_static In-Reply-To: References: <20080222094706.GE22278@rambler-co.ru> <47BEC3B1.8000103@sensisoft.com> Message-ID: "???" wrote in message news:ee4e19230802241751u62c791d9qdafbb02ac6c78a5d at mail.gmail.com... 
static_post Count my vote for static_post Athan From nginx.mailinglist at xinio.info Mon Feb 25 19:41:35 2008 From: nginx.mailinglist at xinio.info (nginx.mailinglist) Date: Mon, 25 Feb 2008 16:41:35 +0000 Subject: high bandwidth configuration help In-Reply-To: <20080225102554.GB90377@rambler-co.ru> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> <807a83ca0802250145o1c16ce3dhed8b0d3093b2868d@mail.gmail.com> <20080225102554.GB90377@rambler-co.ru> Message-ID: <807a83ca0802250841y4479942ereb083edba033790f@mail.gmail.com> Increasing buffer size and number of workers from 1 only made things worse, at one stage the laod started spiraling above 20 its down to 4 now on the 2 servers Right now im very screwed, the performance loss from switching to nginx is killing the servers, i also setup 2nd server identically the graphs speak for themselves :( server converted yesterday http://img255.imageshack.us/img255/6753/31857516tg7.png new server to the left of the dip is lighttpd http://img341.imageshack.us/img341/3734/59174322lt9.png im now very screwed as i was gonna relly on nginx's accel-redirect feature and lighttpd 1.4.18 has it, im testing lighttpd1.5 svn now but i dont know how stable that be, sorry for being so down :'{ i feel like crying this was certainly the worst case scenraio when is started migrating, a 10-20% drop in bandwdith usage between all the servers will mean 1000-2000$ loss due to wasted costs On Mon, Feb 25, 2008 at 10:25 AM, Igor Sysoev wrote: > On Mon, Feb 25, 2008 at 09:45:57AM +0000, nginx.mailinglist wrote: > > > I changed to one worker and the loads have gone 
down, thanks (spasibo!) > > Igor > > > > tho bandwidth usage is still 10% lower than lighttpd im gonna test on > other > > servers now > > > > i will back report results > > As I understand for writev-backend lighttpd mmap()s file in 512K chunks > and writev()s them. You may try in nginx > > output_buffers 1 512k; # default is "1 32k" > > The output_buffers are used if sendfile is not used. > > Also you may try to set 2 or 3 workers if it will increase bandwidth. > > Note, that disabling sendfile in both nginx and ligthy may increase > bandwidth, but also ceratinly increases memory consumption at user- and > kernel-level that may leave to DOS. You should find compromisse. > > > -- > Igor Sysoev > http://sysoev.ru/en/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jodok at lovelysystems.com Mon Feb 25 20:08:09 2008 From: jodok at lovelysystems.com (Jodok Batlogg) Date: Mon, 25 Feb 2008 18:08:09 +0100 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802250841y4479942ereb083edba033790f@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> <807a83ca0802250145o1c16ce3dhed8b0d3093b2868d@mail.gmail.com> <20080225102554.GB90377@rambler-co.ru> <807a83ca0802250841y4479942ereb083edba033790f@mail.gmail.com> Message-ID: On 25.02.2008, at 17:41, nginx.mailinglist wrote: > Increasing buffer size and number of workers from 1 only made things > worse, at one stage the laod started spiraling above 20 its down to > 4 now on the 2 servers > > Right now im very screwed, the performance loss from switching to > nginx is killing the 
servers, i also setup 2nd server identically > > the graphs speak for themselves :( > > > server converted yesterday > http://img255.imageshack.us/img255/6753/31857516tg7.png > > new server to the left of the dip is lighttpd > http://img341.imageshack.us/img341/3734/59174322lt9.png > > > im now very screwed as i was gonna relly on nginx's accel-redirect > feature and lighttpd 1.4.18 has it, im testing lighttpd1.5 svn now > but i dont know how stable that be, > sorry for being so down :'{ i feel like crying this was certainly > the worst case scenraio when is started migrating, a 10-20% drop in > bandwdith usage between all the servers will mean 1000-2000$ loss > due to wasted costs hey man... why on earth are you changing your production server setup when you know that you have troubles? calm down, and start thinking... probably switch back to your old setup and try the new setup on some development server first? did you try the suggestions you received earlier? sorry, but i don't feel any sympathy with you. jodok ps.: we're able to serve between 400 and 500 mbit with a single nginx- server with no significant load. (but with enough memory for kernel disk cache, large raid controller memory and really fast discs) > > > On Mon, Feb 25, 2008 at 10:25 AM, Igor Sysoev > wrote: > On Mon, Feb 25, 2008 at 09:45:57AM +0000, nginx.mailinglist wrote: > > > I changed to one worker and the loads have gone down, thanks > (spasibo!) > > Igor > > > > tho bandwidth usage is still 10% lower than lighttpd im gonna test > on other > > servers now > > > > i will back report results > > As I understand for writev-backend lighttpd mmap()s file in 512K > chunks > and writev()s them. You may try in nginx > > output_buffers 1 512k; # default is "1 32k" > > The output_buffers are used if sendfile is not used. > > Also you may try to set 2 or 3 workers if it will increase bandwidth. 
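[Editor's note: Igor's tuning suggestions from this thread, collected into one config fragment. The buffer size, worker count, and connection limit are starting points to benchmark against your workload, not recommendations:]

```nginx
worker_processes  2;            # try 1-3; more only helps if bandwidth actually rises

events {
    worker_connections  10240;  # the 1024 default is easy to exhaust under real load
}

http {
    sendfile        off;        # lighttpd-style writev() path...
    output_buffers  1 512k;     # ...reads the file in 512k chunks (default is 1 32k)

    # Alternatively keep "sendfile on;" and compare: disabling sendfile can
    # raise bandwidth but also raises user- and kernel-level memory use.
}
```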
> > Note, that disabling sendfile in both nginx and ligthy may increase > bandwidth, but also ceratinly increases memory consumption at user- > and > kernel-level that may leave to DOS. You should find compromisse. > > > -- > Igor Sysoev > http://sysoev.ru/en/ > > -- "Beautiful is better than ugly." -- The Zen of Python, by Tim Peters Jodok Batlogg, Lovely Systems GmbH Schmelzh?tterstra?e 26a, 6850 Dornbirn, Austria mobile: +43 676 5683591, phone: +43 5572 908060 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2454 bytes Desc: not available URL: From is at rambler-co.ru Mon Feb 25 20:15:45 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 25 Feb 2008 20:15:45 +0300 Subject: high bandwidth configuration help In-Reply-To: <807a83ca0802250841y4479942ereb083edba033790f@mail.gmail.com> References: <807a83ca0802231751y7de20ea4jfe2cdcf7acb7b571@mail.gmail.com> <20080224081510.GA14704@none.at> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> <807a83ca0802250145o1c16ce3dhed8b0d3093b2868d@mail.gmail.com> <20080225102554.GB90377@rambler-co.ru> <807a83ca0802250841y4479942ereb083edba033790f@mail.gmail.com> Message-ID: <20080225171545.GG90377@rambler-co.ru> On Mon, Feb 25, 2008 at 04:41:35PM +0000, nginx.mailinglist wrote: > Increasing buffer size and number of workers from 1 only made things worse, > at one stage the laod started spiraling above 20 its down to 4 now on the 2 > servers > > Right now im very screwed, the performance loss from switching to nginx is > killing the servers, i also setup 2nd server identically > > the graphs speak for themselves :( > > > server converted yesterday > http://img255.imageshack.us/img255/6753/31857516tg7.png > > new server to 
the left of the dip is lighttpd > http://img341.imageshack.us/img341/3734/59174322lt9.png > > > im now very screwed as i was gonna relly on nginx's accel-redirect feature > and lighttpd 1.4.18 has it, im testing lighttpd1.5 svn now but i dont know > how stable that be, > sorry for being so down :'{ i feel like crying this was certainly the worst > case scenraio when is started migrating, a 10-20% drop in bandwdith > usage between all the servers will mean 1000-2000$ loss due to wasted costs Are any crit or alert errors in error_log ? "worker_connections 1024" may be low for load, it can be changed to 5000 or 10000. > On Mon, Feb 25, 2008 at 10:25 AM, Igor Sysoev wrote: > > > On Mon, Feb 25, 2008 at 09:45:57AM +0000, nginx.mailinglist wrote: > > > > > I changed to one worker and the loads have gone down, thanks (spasibo!) > > > Igor > > > > > > tho bandwidth usage is still 10% lower than lighttpd im gonna test on > > other > > > servers now > > > > > > i will back report results > > > > As I understand for writev-backend lighttpd mmap()s file in 512K chunks > > and writev()s them. You may try in nginx > > > > output_buffers 1 512k; # default is "1 32k" > > > > The output_buffers are used if sendfile is not used. > > > > Also you may try to set 2 or 3 workers if it will increase bandwidth. > > > > Note, that disabling sendfile in both nginx and ligthy may increase > > bandwidth, but also ceratinly increases memory consumption at user- and > > kernel-level that may leave to DOS. You should find compromisse. 
> > > > > > -- > > Igor Sysoev > > http://sysoev.ru/en/ > > > > -- Igor Sysoev http://sysoev.ru/en/ From y.georgiev at gmail.com Mon Feb 25 20:21:01 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Mon, 25 Feb 2008 19:21:01 +0200 Subject: high bandwidth configuration help In-Reply-To: References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <807a83ca0802240147h51165c54ib9b973aa232a0248@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> <807a83ca0802250145o1c16ce3dhed8b0d3093b2868d@mail.gmail.com> <20080225102554.GB90377@rambler-co.ru> <807a83ca0802250841y4479942ereb083edba033790f@mail.gmail.com> Message-ID: <4378145a0802250921s307302f2kd6852b9a12a1d615@mail.gmail.com> Hello, You are top result is ***http://pastebin.com/m2587b666 I view* 70% wa!!! This is problem! Server disk is very load up. Test disk ot RAID for seek ! -- Please excuse my bad english... ? ????????, ?. ????????. WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From is at rambler-co.ru Mon Feb 25 22:00:33 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Mon, 25 Feb 2008 22:00:33 +0300 Subject: post_action docs In-Reply-To: <2b8901c8776f$24680090$0c01a8c0@robmhp> References: <2a9501c8774b$fdded1b0$0c01a8c0@robmhp> <2b8901c8776f$24680090$0c01a8c0@robmhp> Message-ID: <20080225190033.GI90377@rambler-co.ru> On Mon, Feb 25, 2008 at 03:26:18PM +1100, Rob Mueller wrote: > To reply to myself... > > >2. I want to pass a header returned by the upstream server to the > >post_action handler for logging purposes, but I'm not sure how to do that. > >I must be missing something obvious here. 
> > > > fastcgi_param PARAM_EXTRA_1 ...some upstream header...; > > I found this previous post: > > http://article.gmane.org/gmane.comp.web.nginx.english/1305 > > Which seems to fix issue 2 nicely. Now if only I could fix the first issue. The attached patch enables named location in post_action: post_action @done; location @done { ... -- Igor Sysoev http://sysoev.ru/en/ -------------- next part -------------- Index: src/http/ngx_http_request.c =================================================================== --- src/http/ngx_http_request.c (revision 1213) +++ src/http/ngx_http_request.c (working copy) @@ -2448,8 +2448,13 @@ r->read_event_handler = ngx_http_block_reading; - ngx_http_internal_redirect(r, &clcf->post_action, NULL); + if (clcf->post_action.data[0] == '/') { + ngx_http_internal_redirect(r, &clcf->post_action, NULL); + } else { + ngx_http_named_location(r, &clcf->post_action); + } + return NGX_OK; } From robm at fastmail.fm Tue Feb 26 00:53:17 2008 From: robm at fastmail.fm (Rob Mueller) Date: Tue, 26 Feb 2008 08:53:17 +1100 Subject: post_action docs References: <2a9501c8774b$fdded1b0$0c01a8c0@robmhp> <2b8901c8776f$24680090$0c01a8c0@robmhp> <20080225190033.GI90377@rambler-co.ru> Message-ID: <2cd601c877f8$d63fa870$0c01a8c0@robmhp> > The attached patch enables named location in post_action: > > post_action @done; > > location @done { > ... Great, I'll try that out. Just to check, the "named location" still has to be in the same server { ... } block right? 
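[Editor's note: with the patch above applied, a post_action logging setup might look like the sketch below. The backend socket and parameter names are illustrative, not from the thread:]

```nginx
location /download/ {
    proxy_pass   http://backend;
    post_action  @done;              # runs after the response has been sent
}

location @done {
    # Named locations are internal-only and, per Igor's answer, must live
    # in the same server {} block as the post_action that references them.
    fastcgi_pass  unix:/tmp/logger.sock;
    fastcgi_param REQUEST_URI  $request_uri;
    fastcgi_param BYTES_SENT   $body_bytes_sent;
}
```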
Rob From nginx.mailinglist at xinio.info Tue Feb 26 01:42:58 2008 From: nginx.mailinglist at xinio.info (nginx.mailinglist) Date: Mon, 25 Feb 2008 22:42:58 +0000 Subject: high bandwidth configuration help In-Reply-To: <4378145a0802250921s307302f2kd6852b9a12a1d615@mail.gmail.com> References: <807a83ca0802231741t424f05e4u4a6b13ca95d55c07@mail.gmail.com> <20080224162712.GA77685@rambler-co.ru> <4378145a0802240846y11c6c66cke619f8f074524c9@mail.gmail.com> <0CA6AC04-E54F-4D5A-8D3D-0192F61F463E@cheney.net> <4378145a0802250002t7a7e30f5w6a062803f628181b@mail.gmail.com> <807a83ca0802250145o1c16ce3dhed8b0d3093b2868d@mail.gmail.com> <20080225102554.GB90377@rambler-co.ru> <807a83ca0802250841y4479942ereb083edba033790f@mail.gmail.com> <4378145a0802250921s307302f2kd6852b9a12a1d615@mail.gmail.com> Message-ID: <807a83ca0802251442k4f585142jac61464afb15c38d@mail.gmail.com> Ok i got it stabilized at about 75mbit and loads of 2-6 im gonna have got to plan B get several more servers and put them on internal lan run the php-cgi on them and connect to them using nginx since it connects over a socket (this should work easily right? im gonna setup a test setup and test this feature properly) this way i be able to make most of the available and paid for bandwdith the extra checks added by the php layer are creating more load im not blaming nginx here, it is perfroming exceptionally, maybe my expectations were too high, the old lighttpd setup didnt touch php at all, but it didnt have much in way of functionality such as custom logging, download speed control, authentication etc that are now possible thanks to nginx's resumable X-Accel-redirect feature thanks everyone for the help -------------- next part -------------- An HTML attachment was scrubbed... 
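[Editor's note: the "plan B" above — php-cgi processes on separate LAN hosts behind one nginx front end — is what an upstream block with fastcgi_pass gives you. A sketch with hypothetical internal addresses and paths:]

```nginx
upstream php_pool {
    server 10.0.0.11:9000;   # php-cgi spawned on internal hosts
    server 10.0.0.12:9000;   # (e.g. via spawn-fcgi), listening on the LAN
}

server {
    listen 80;

    location ~ \.php$ {
        fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
        fastcgi_param QUERY_STRING    $query_string;
        fastcgi_param REQUEST_METHOD  $request_method;
        fastcgi_pass  php_pool;      # nginx balances across the pool
    }
}
```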
URL: From lists at ruby-forum.com Tue Feb 26 09:15:29 2008 From: lists at ruby-forum.com (Todd HG) Date: Tue, 26 Feb 2008 07:15:29 +0100 Subject: excessive RAM consumption - memory leak In-Reply-To: <4378145a0802230120w1c81c2a4v5456769c7c646039@mail.gmail.com> References: <9f314ae9fa6e8071326460534a64a55d@ruby-forum.com> <428d921d0802201215x57196ceq9791ba34c83254ae@mail.gmail.com> <20080220225201.GS76459@rambler-co.ru> <592c7997ce89ddacec2b3627b605e0e7@ruby-forum.com> <20080221152227.GD6340@rambler-co.ru> <845b6891ec5cf6a1776b91211e89ebd8@ruby-forum.com> <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> <137e0d906278b073224427bc03b08d1d@ruby-forum.com> <94d65cc2e3435ac57d4f6c3292125385@ruby-forum.com> <4378145a0802220135r5404a3c7n391ecec9da307d62@mail.gmail.com> <4378145a0802230120w1c81c2a4v5456769c7c646039@mail.gmail.com> Message-ID: <03ace509d1322b9bdb2702570dfdc079@ruby-forum.com> After a lot of analysis I have found that Nginx was not the source of the memory leak. After setting up cache-control this did help a lot with bandwidth. Is there a way to setup cache control on the same server to deal with caching images one way, and .html, .css, and .js files another way in the nginx.conf? -- Posted via http://www.ruby-forum.com/. From is at rambler-co.ru Tue Feb 26 09:40:10 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 26 Feb 2008 09:40:10 +0300 Subject: post_action docs In-Reply-To: <2cd601c877f8$d63fa870$0c01a8c0@robmhp> References: <2a9501c8774b$fdded1b0$0c01a8c0@robmhp> <2b8901c8776f$24680090$0c01a8c0@robmhp> <20080225190033.GI90377@rambler-co.ru> <2cd601c877f8$d63fa870$0c01a8c0@robmhp> Message-ID: <20080226064010.GB35820@rambler-co.ru> On Tue, Feb 26, 2008 at 08:53:17AM +1100, Rob Mueller wrote: > >The attached patch enables named location in post_action: > > > > post_action @done; > > > > location @done { > > ... 
> > Great, I'll try that out. > > Just to check, the "named location" still has to be in the same server { > ... } block right? Yes. There is no way to redirect a request internally to another server. -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Tue Feb 26 09:47:39 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 26 Feb 2008 09:47:39 +0300 Subject: excessive RAM consumption - memory leak In-Reply-To: <03ace509d1322b9bdb2702570dfdc079@ruby-forum.com> References: <13c357830802211136v47c9f10ar7f6d992b738f0efa@mail.gmail.com> <61f0095a84391ba3befaefb35b4b11b3@ruby-forum.com> <428d921d0802211337p62b74af7p9de8024b7a55cdf2@mail.gmail.com> <137e0d906278b073224427bc03b08d1d@ruby-forum.com> <94d65cc2e3435ac57d4f6c3292125385@ruby-forum.com> <4378145a0802220135r5404a3c7n391ecec9da307d62@mail.gmail.com> <4378145a0802230120w1c81c2a4v5456769c7c646039@mail.gmail.com> <03ace509d1322b9bdb2702570dfdc079@ruby-forum.com> Message-ID: <20080226064739.GC35820@rambler-co.ru> On Tue, Feb 26, 2008 at 07:15:29AM +0100, Todd HG wrote: > After a lot of analysis I have found that Nginx was not the source of > the memory leak. I suspect you has no memory leak at all. It's very typical for Unix systems to keep free memory as small as possible (here is 11M only from 2G) : last pid: 79350; load averages: 0.39, 0.46, 0.41 up 33+17:16:00 09:43:53 21 processes: 1 running, 20 sleeping CPU states: 25.8% user, 0.0% nice, 11.4% system, 9.8% interrupt, 53.0% idle Mem: 103M Active, 1430M Inact, 378M Wired, 83M Cache, 63M Buf, 11M Free Swap: 2096M Total, 4872K Used, 2091M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 67776 nobody 1 4 -10 87168K 85572K kqread 799:58 32.91% nginx 843 root 1 96 0 5600K 728K select 2:04 0.00% sshd 814 root 1 96 0 4612K 736K select 0:59 0.00% ntpd > After setting up cache-control this did help a lot with bandwidth. 
Is > there a way to setup cache control on the same server to deal with > caching images one way, and .html, .css, and .js files another way in > the nginx.conf? location ~ \.(html|css|js)$ { expires ... } location ~ \.(jpg|jpeg|gif)$ { expires ... } -- Igor Sysoev http://sysoev.ru/en/ From cliff at develix.com Tue Feb 26 13:33:20 2008 From: cliff at develix.com (Cliff Wells) Date: Tue, 26 Feb 2008 02:33:20 -0800 Subject: 0.6.26 build for Fedora 8 Message-ID: <1204022000.2955.19.camel@portableevil.develix.com> For anyone interested, I've created an Fedora 8 build for 0.6.26 (based on the official 0.5.x Fedora spec file). http://wiki.codemongers.com/NginxPlatformFedora Regards, Cliff From gabor at nekomancer.net Tue Feb 26 14:32:24 2008 From: gabor at nekomancer.net (=?ISO-8859-1?Q?G=E1bor_Farkas?=) Date: Tue, 26 Feb 2008 12:32:24 +0100 Subject: proxy_buffering=off, potential problems? other solutions? Message-ID: <47C3F8C8.2050109@nekomancer.net> hi, i have a fairly usual configuration of an nginx webserver + an apache-based application-server behind it. when requests come in, then nginx proxies it to apache, etc. my problem is, that in certain cases, i need that when apache sends the response to nginx, nginx should immediately send it to the client. i can solve this by simply turning proxy_buffering off, with "proxy_buffering = off", but i'd like to know: 1. what effect can this have? can it degrade performance? 2. is there perhaps a different solution? for example sending back to nginx a special header perhaps, or something like that? in short, is it recommended to simply turn off proxy_buffering in such situations, or is there a better approach? thanks, gabor From mansoor at zimbra.com Tue Feb 26 15:22:03 2008 From: mansoor at zimbra.com (Mansoor Peerbhoy) Date: Tue, 26 Feb 2008 04:22:03 -0800 (PST) Subject: upstream server:port variable ? 
Message-ID: <949380691.57231204028523826.JavaMail.root@dogfood.zimbra.com> Hello, A quick question: Does nginx have a variable which will give me the actual upstream server:port for the selected upstream server ? For instance, if I have: upstream xxx { server s1:7070; server s2:7070; server s3:7070; server s4:7070; } and if I have server { ... location / { proxy_pass http://xxx; proxy_set_header Host $variable_name; # <-- which variable should I use here, to get "s1:7070" or "s2:7070", etc. ? } ... } $server_name gives me the FQDN of the proxy server, $proxy_host gives me the name of the upstream block (in this case, "xxx") Which variable should I use in order to get the precise name of the selected upstream server ? Thanks Mansoor Peerbhoy From eden at mojiti.com Tue Feb 26 22:06:28 2008 From: eden at mojiti.com (Eden Li) Date: Tue, 26 Feb 2008 11:06:28 -0800 Subject: upstream server:port variable ? In-Reply-To: <949380691.57231204028523826.JavaMail.root@dogfood.zimbra.com> References: <949380691.57231204028523826.JavaMail.root@dogfood.zimbra.com> Message-ID: Not sure if this is available at the stage in the request that you want it, but you could try: $upstream_addr -- address of the upstream server that handled the request via: http://wiki.codemongers.com/NginxHttpUpstreamModule On Feb 26, 2008, at 4:22 AM, Mansoor Peerbhoy wrote: > Which variable should I use in order to get the precise name of the > selected upstream server ? From is at rambler-co.ru Tue Feb 26 22:16:01 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 26 Feb 2008 22:16:01 +0300 Subject: upstream server:port variable ? 
In-Reply-To: <949380691.57231204028523826.JavaMail.root@dogfood.zimbra.com> References: <949380691.57231204028523826.JavaMail.root@dogfood.zimbra.com> Message-ID: <20080226191601.GA63834@rambler-co.ru> On Tue, Feb 26, 2008 at 04:22:03AM -0800, Mansoor Peerbhoy wrote: > A quick question: > Does nginx have a variable which will give me the actual upstream server:port for the selected upstream server ? > > For instance, if I have: > > upstream xxx > { > server s1:7070; > server s2:7070; > server s3:7070; > server s4:7070; > } > > and if I have > > server > { > ... > location / > { > proxy_pass http://xxx; > proxy_set_header Host $variable_name; # <-- which variable should I use here, to get "s1:7070" or "s2:7070", etc. ? > } > ... > } > > $server_name gives me the FQDN of the proxy server, > $proxy_host gives me the name of the upstream block (in this case, "xxx") > > Which variable should I use in order to get the precise name of the selected upstream server ? No, there is no such variable: it's expected that upstreams are equal. -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Tue Feb 26 22:17:24 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 26 Feb 2008 22:17:24 +0300 Subject: upstream server:port variable ? In-Reply-To: References: <949380691.57231204028523826.JavaMail.root@dogfood.zimbra.com> Message-ID: <20080226191724.GB63834@rambler-co.ru> On Tue, Feb 26, 2008 at 11:06:28AM -0800, Eden Li wrote: > Not sure if this is available at the stage in the request that you > want it, but you could try: > > $upstream_addr -- address of the upstream server that handled the > request No, first, $upstream_addr is not read at this stage, and second, it's in unsuitable format: "192.168.1.1:80, 192.168.1.2:80". > via: http://wiki.codemongers.com/NginxHttpUpstreamModule > > On Feb 26, 2008, at 4:22 AM, Mansoor Peerbhoy wrote: > > >Which variable should I use in order to get the precise name of the > >selected upstream server ? 
> > -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Tue Feb 26 22:35:02 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 26 Feb 2008 22:35:02 +0300 Subject: proxy_buffering=off, potential problems? other solutions? In-Reply-To: <47C3F8C8.2050109@nekomancer.net> References: <47C3F8C8.2050109@nekomancer.net> Message-ID: <20080226193502.GC63834@rambler-co.ru> On Tue, Feb 26, 2008 at 12:32:24PM +0100, G?bor Farkas wrote: > i have a fairly usual configuration of an nginx webserver + an > apache-based application-server behind it. > > when requests come in, then nginx proxies it to apache, etc. > > my problem is, that in certain cases, i need that when apache sends the > response to nginx, nginx should immediately send it to the client. > > i can solve this by simply turning proxy_buffering off, with > "proxy_buffering = off", but i'd like to know: > > 1. what effect can this have? can it degrade performance? > > 2. is there perhaps a different solution? for example sending back to > nginx a special header perhaps, or something like that? > > in short, is it recommended to simply turn off proxy_buffering in such > situations, or is there a better approach? If response will be bigger than proxy_buffer_size, then backend will be tied to nginx until the data will be sent to cliant. The maximum data size that nginx can read from backend at once in this mode is proxy_buffer_size. -- Igor Sysoev http://sysoev.ru/en/ From rkmr.em at gmail.com Tue Feb 26 22:58:16 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Tue, 26 Feb 2008 11:58:16 -0800 Subject: getting remot4e address Message-ID: hi i have haproxy running in front of nginx, and i have the x-forwarded-for enabled in the haproxy configuration. How do I configure nginx so that my fastcgi backends get this IP address as the remote ip address and not the ip address of the haproxy. 
these are my fastcgi parameters fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; From is at rambler-co.ru Tue Feb 26 23:08:22 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Tue, 26 Feb 2008 23:08:22 +0300 Subject: getting remot4e address In-Reply-To: References: Message-ID: <20080226200822.GD63834@rambler-co.ru> On Tue, Feb 26, 2008 at 11:58:16AM -0800, rkmr.em at gmail.com wrote: > i have haproxy running in front of nginx, and i have the > x-forwarded-for enabled in the haproxy configuration. > How do I configure nginx so that my fastcgi backends get this IP > address as the remote ip address and not the ip address of the > haproxy. > > these are my fastcgi parameters > > fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_script_name; > fastcgi_param QUERY_STRING $query_string; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_PROTOCOL $server_protocol; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; location / { set $addr $remote_addr; if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { set $addr $1; } ... 
fastcgi_param REMOTE_ADDR $addr; -- Igor Sysoev http://sysoev.ru/en/ From rkmr.em at gmail.com Tue Feb 26 23:52:56 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Tue, 26 Feb 2008 12:52:56 -0800 Subject: getting remot4e address In-Reply-To: <20080226200822.GD63834@rambler-co.ru> References: <20080226200822.GD63834@rambler-co.ru> Message-ID: On Tue, Feb 26, 2008 at 12:08 PM, Igor Sysoev wrote: > On Tue, Feb 26, 2008 at 11:58:16AM -0800, rkmr.em at gmail.com wrote: > > > x-forwarded-for enabled in the haproxy configuration. > > How do I configure nginx so that my fastcgi backends get this IP > > address as the remote ip address and not the ip address of the > location / { > set $addr $remote_addr; > > if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { > set $addr $1; > } > > ... > fastcgi_param REMOTE_ADDR $addr; hi igor, thanks for your reply. i tried what you gave, and i still get only the ip address of the haproxy in my backends, how to fix? thanks this my config.. 
location / { root /home/mark/work/pop; fastcgi_pass backend_pop; include /home/mark/work/infrastructure/nginx_fastcgi.conf; } file nginx_fastcgi.conf; set $addr $remote_addr; if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { set $addr $1; } fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param REQUEST_METHOD $request_method; #fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_ADDR $addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; From is at rambler-co.ru Wed Feb 27 00:00:34 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 00:00:34 +0300 Subject: getting remot4e address In-Reply-To: References: <20080226200822.GD63834@rambler-co.ru> Message-ID: <20080226210034.GE63834@rambler-co.ru> On Tue, Feb 26, 2008 at 12:52:56PM -0800, rkmr.em at gmail.com wrote: > On Tue, Feb 26, 2008 at 12:08 PM, Igor Sysoev wrote: > > On Tue, Feb 26, 2008 at 11:58:16AM -0800, rkmr.em at gmail.com wrote: > > > > > x-forwarded-for enabled in the haproxy configuration. > > > How do I configure nginx so that my fastcgi backends get this IP > > > address as the remote ip address and not the ip address of the > > location / { > > set $addr $remote_addr; > > > > if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { > > set $addr $1; > > } > > > > ... > > fastcgi_param REMOTE_ADDR $addr; > > hi igor, > thanks for your reply. > i tried what you gave, and i still get only the ip address of the > haproxy in my backends, > how to fix? > thanks > > this my config.. 
> location / { > root /home/mark/work/pop; > fastcgi_pass backend_pop; > include /home/mark/work/infrastructure/nginx_fastcgi.conf; > } > > file nginx_fastcgi.conf; > set $addr $remote_addr; > > if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { My mistake: - if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { + if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { > set $addr $1; > } > > fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_script_name; > fastcgi_param QUERY_STRING $query_string; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > fastcgi_param REQUEST_METHOD $request_method; > #fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_ADDR $addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_PROTOCOL $server_protocol; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > -- Igor Sysoev http://sysoev.ru/en/ From rkmr.em at gmail.com Wed Feb 27 00:11:40 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Tue, 26 Feb 2008 13:11:40 -0800 Subject: getting remot4e address In-Reply-To: <20080226210034.GE63834@rambler-co.ru> References: <20080226200822.GD63834@rambler-co.ru> <20080226210034.GE63834@rambler-co.ru> Message-ID: On Tue, Feb 26, 2008 at 1:00 PM, Igor Sysoev wrote: > On Tue, Feb 26, 2008 at 12:52:56PM -0800, rkmr.em at gmail.com wrote: > > On Tue, Feb 26, 2008 at 12:08 PM, Igor Sysoev wrote: > > > On Tue, Feb 26, 2008 at 11:58:16AM -0800, rkmr.em at gmail.com wrote: > > > > > > > x-forwarded-for enabled in the haproxy configuration. 
> > > > How do I configure nginx so that my fastcgi backends get this IP > > > > address as the remote ip address and not the ip address of the > > > location / { > > > set $addr $remote_addr; > > > > > > if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { > > > set $addr $1; > > > } > > > > > > ... > > > fastcgi_param REMOTE_ADDR $addr; > > > > hi igor, > > thanks for your reply. > > i tried what you gave, and i still get only the ip address of the > > haproxy in my backends, > My mistake: > > - if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { > + if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { igor: now for ip in the fastcgi backend i get an empty string 'ip': '' current config: set $addr $remote_addr; if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { set $addr $1; } From is at rambler-co.ru Wed Feb 27 00:24:57 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 00:24:57 +0300 Subject: getting remot4e address In-Reply-To: References: <20080226200822.GD63834@rambler-co.ru> <20080226210034.GE63834@rambler-co.ru> Message-ID: <20080226212457.GF63834@rambler-co.ru> On Tue, Feb 26, 2008 at 01:11:40PM -0800, rkmr.em at gmail.com wrote: > On Tue, Feb 26, 2008 at 1:00 PM, Igor Sysoev wrote: > > On Tue, Feb 26, 2008 at 12:52:56PM -0800, rkmr.em at gmail.com wrote: > > > On Tue, Feb 26, 2008 at 12:08 PM, Igor Sysoev wrote: > > > > On Tue, Feb 26, 2008 at 11:58:16AM -0800, rkmr.em at gmail.com wrote: > > > > > > > > > x-forwarded-for enabled in the haproxy configuration. > > > > > How do I configure nginx so that my fastcgi backends get this IP > > > > > address as the remote ip address and not the ip address of the > > > > location / { > > > > set $addr $remote_addr; > > > > > > > > if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { > > > > set $addr $1; > > > > } > > > > > > > > ... > > > > fastcgi_param REMOTE_ADDR $addr; > > > > > > hi igor, > > > thanks for your reply. 
> > > i tried what you gave, and i still get only the ip address of the > > > haproxy in my backends, > > My mistake: > > > > - if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { > > + if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { > > igor: > now for ip in the fastcgi backend i get an empty string > 'ip': '' > > current config: > set $addr $remote_addr; > if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { - if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { + if ($http_x_forwarded_for ~ "(?:^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { > set $addr $1; > } > -- Igor Sysoev http://sysoev.ru/en/ From rkmr.em at gmail.com Wed Feb 27 00:38:20 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Tue, 26 Feb 2008 13:38:20 -0800 Subject: getting remot4e address In-Reply-To: <20080226212457.GF63834@rambler-co.ru> References: <20080226200822.GD63834@rambler-co.ru> <20080226210034.GE63834@rambler-co.ru> <20080226212457.GF63834@rambler-co.ru> Message-ID: On Tue, Feb 26, 2008 at 1:24 PM, Igor Sysoev wrote: > > On Tue, Feb 26, 2008 at 01:11:40PM -0800, rkmr.em at gmail.com wrote: > > > On Tue, Feb 26, 2008 at 1:00 PM, Igor Sysoev wrote: > > > On Tue, Feb 26, 2008 at 12:52:56PM -0800, rkmr.em at gmail.com wrote: > > > > On Tue, Feb 26, 2008 at 12:08 PM, Igor Sysoev wrote: > > > > > On Tue, Feb 26, 2008 at 11:58:16AM -0800, rkmr.em at gmail.com wrote: > > > > > > > > > > > x-forwarded-for enabled in the haproxy configuration. > > > > > > How do I configure nginx so that my fastcgi backends get this IP > > > > > > address as the remote ip address and not the ip address of the > > > > > location / { > > > > > set $addr $remote_addr; > > > > > > > > > > if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { > > > > > set $addr $1; > > > > > } > > > > > > > > > > ... > > > > > fastcgi_param REMOTE_ADDR $addr; > > > > > > > > hi igor, > > > > thanks for your reply. 
> > > > i tried what you gave, and i still get only the ip address of the > > > > haproxy in my backends, > > > My mistake: > > > > > > - if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s$") { > > > + if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { > > > > igor: > > now for ip in the fastcgi backend i get an empty string > > 'ip': '' > > > > current config: > > set $addr $remote_addr; > > if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { > > - if ($http_x_forwarded_for ~ "(^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { > + if ($http_x_forwarded_for ~ "(?:^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { > > > set $addr $1; works great now!!! here is the final config: set $addr $remote_addr; if ($http_x_forwarded_for ~ "(?:^|,)\s*(\d+\.\d+\.\d+\.\d+)\s*$") { set $addr $1; } thanks a lot From rkmr.em at gmail.com Wed Feb 27 00:41:14 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Tue, 26 Feb 2008 13:41:14 -0800 Subject: x-forwarded-for ip in server access logs Message-ID: how do i get the x-forwarded-for ip address in server access logs? i tried this log format in the server section, but i still get the default logs how to fix this? 
thanks server { listen 8070; server_name .XX.com; log_format main '$remote_addr - $remote_user [$time_local] $request ' '"$status" $body_bytes_sent "$http_referer"' '"$http_user_agent" "$http_x_forwarded_for"' '"$gzip_ratio" "$upstream_status" "$upstream_response_time" "$upstream_addr"'; access_log logs/access_pop.log; error_log logs/error_pop.log; location / { root /home/mark/work/pop; fastcgi_pass backend_pop; include /home/mark/work/infrastructure/nginx_fastcgi.conf; } } From rkmr.em at gmail.com Wed Feb 27 00:46:12 2008 From: rkmr.em at gmail.com (rkmr.em at gmail.com) Date: Tue, 26 Feb 2008 13:46:12 -0800 Subject: increasing server_names_hash_bucket_size Message-ID: hi i have configured nginx for multiple virtual servers, when i start nginx, i get this error: 2008/02/26 13:44:31 [emerg] 18598#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32 if i increase it to 64 or 128 the error is gone and it nginx starts working. now how much to increase this ? 64 or 128 or even higher? is there any performance effect because of this? thanks From y.georgiev at gmail.com Wed Feb 27 09:08:54 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Wed, 27 Feb 2008 08:08:54 +0200 Subject: x-forwarded-for ip in server access logs In-Reply-To: References: Message-ID: <4378145a0802262208l6b7ba9c2t5cb88121f869dc7d@mail.gmail.com> set X-Forwarded-For $add_x_forwarded_for; use $add_x_forwarded_for for log_format and any configure options On Tue, Feb 26, 2008 at 11:41 PM, rkmr.em at gmail.com wrote: > how do i get the x-forwarded-for ip address in server access logs? > i tried this log format in the server section, but i still get the default > logs > > how to fix this? 
> thanks > > server { > listen 8070; > server_name .XX.com; > log_format main '$remote_addr - $remote_user [$time_local] > $request ' '"$status" $body_bytes_sent "$http_referer"' > '"$http_user_agent" "$http_x_forwarded_for"' '"$gzip_ratio" > "$upstream_status" "$upstream_response_time" "$upstream_addr"'; > access_log logs/access_pop.log; > error_log logs/error_pop.log; > location / { > root /home/mark/work/pop; > fastcgi_pass backend_pop; > include /home/mark/work/infrastructure/nginx_fastcgi.conf; > } > } > > Please excuse my bad english... WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ From gabor at nekomancer.net Wed Feb 27 10:36:27 2008 From: gabor at nekomancer.net (Gábor Farkas) Date: Wed, 27 Feb 2008 08:36:27 +0100 Subject: proxy_buffering=off, potential problems? other solutions? Message-ID: <47C512FB.2020208@nekomancer.net> >> i have a fairly usual configuration of an nginx webserver + an >> apache-based application-server behind it. >> >> when requests come in, then nginx proxies it to apache, etc. >> >> my problem is, that in certain cases, i need that when apache sends the >> response to nginx, nginx should immediately send it to the client. >> >> i can solve this by simply turning proxy_buffering off, with >> "proxy_buffering = off" > > If response will be bigger than proxy_buffer_size, then backend will > be tied to nginx until the data will be sent to client. > The maximum data size that nginx can read from backend at once in this mode > is proxy_buffer_size. > maybe i'm misunderstanding something here. as far as i see, there are 2 separate "features": 1. nginx reads the whole response from the proxied apache and "frees" apache, even when nginx is not immediately able to send it to the client. 2.
nginx does not start to send the response to the client until the whole response is read from apache (or until it has read "proxy_buffer_size" bytes from apache) my problem is #2, not #1. it seems that doing a "proxy_buffering = off" solves #2, but maybe it does also #1. is there a way to only do #2, but not #1? maybe it helps if i explain my situation in more detail: the apache web-app generates a webpage dynamically, the following way: A. generate the first part B. do some computation C. generate the second part it's very important that after step #A, the client immediately gets that part of the webpage. with proxy_buffering enabled, it does not happen, because nginx seems to wait for the whole response (or for enough data to fill its buffers). it seems that "proxy_buffering=off" achieves what i need. but as i understood from your response, it also means that the apache-worker will be blocked until the whole response is sent to the client. is there a way to have what i need, and still have buffering enabled? :) (well, there is the possibility to send a lot of empty-space in the html to fill nginx's buffers, but that's not a nice solution :-) thanks, gabor From is at rambler-co.ru Wed Feb 27 12:32:41 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 12:32:41 +0300 Subject: increasing server_names_hash_bucket_size In-Reply-To: References: Message-ID: <20080227093241.GA84129@rambler-co.ru> On Tue, Feb 26, 2008 at 01:46:12PM -0800, rkmr.em at gmail.com wrote: > i have configured nginx for multiple virtual servers, when i start > nginx, i get this error: > > 2008/02/26 13:44:31 [emerg] 18598#0: could not build the > server_names_hash, you should increase server_names_hash_bucket_size: > 32 > > if i increase it to 64 or 128 the error is gone and it nginx starts working. > now how much to increase this ? 64 or 128 or even higher? is there any > performance effect because of this? > thanks You have configured a host name that is longer than 27 characters.
You should increase server_names_hash_bucket_size to the smallest value, 64 in your case. The smaller the better; however, there is no noticeable performance effect. -- Igor Sysoev http://sysoev.ru/en/ From sean at ardishealth.com Wed Feb 27 19:26:56 2008 From: sean at ardishealth.com (Sean Allen) Date: Wed, 27 Feb 2008 11:26:56 -0500 Subject: if ,-f and variables Message-ID: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> this works: if ( -f /ah/sites/colon365.co.uk/public/.maintenance ) { set $maintenance 1; } this doesn't: if ( -f $document_root/.maintenance ) { set $maintenance 1; } two questions, 1. is there a way to make the latter work? some slight change or tweak? 2. why doesn't it work? are variables not interpolated when doing file system checks like -f? From is at rambler-co.ru Wed Feb 27 19:41:58 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 19:41:58 +0300 Subject: if ,-f and variables In-Reply-To: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> References: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> Message-ID: <20080227164158.GB84129@rambler-co.ru> On Wed, Feb 27, 2008 at 11:26:56AM -0500, Sean Allen wrote: > this works: > > if ( -f /ah/sites/colon365.co.uk/public/.maintenance ) > { > set $maintenance 1; > } > > this doesn't: > > if ( -f $document_root/.maintenance ) > { > set $maintenance 1; > } > > two questions, > > 1. is there a way to make the latter work? some slight change or tweak? > 2. why doesn't it work? are variables not interpolated when doing file > system checks like -f? Have you defined root?
root /ah/sites/colon365.co.uk/public; if (-f $document_root/.maintenance) { set $maintenance 1; } -- Igor Sysoev http://sysoev.ru/en/ From roxis at list.ru Wed Feb 27 19:42:50 2008 From: roxis at list.ru (Roxis) Date: Wed, 27 Feb 2008 17:42:50 +0100 Subject: if ,-f and variables In-Reply-To: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> References: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> Message-ID: <200802271742.50318.roxis@list.ru> On Wednesday 27 February 2008, Sean Allen wrote: > this works: > > if ( -f /ah/sites/colon365.co.uk/public/.maintenance ) > { > set $maintenance 1; > } > > this doesn't: > > if ( -f $document_root/.maintenance ) > { > set $maintenance 1; > } > > two questions, > > 1. is there a way to make the latter work? some slight change or tweak? > 2. why doesn't it work? are variables not interpolated when doing file > system checks like -f? it should work. probably you have wrong root or root in wrong place. plz provide full config From sean at ardishealth.com Wed Feb 27 20:34:08 2008 From: sean at ardishealth.com (Sean Allen) Date: Wed, 27 Feb 2008 12:34:08 -0500 Subject: if ,-f and variables In-Reply-To: <200802271742.50318.roxis@list.ru> References: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> <200802271742.50318.roxis@list.ru> Message-ID: <0A2FDDBC-3B06-402B-BB5F-EDC61E452293@ardishealth.com> On Feb 27, 2008, at 11:42 AM, Roxis wrote: > On Wednesday 27 February 2008, Sean Allen wrote: >> this works: >> >> if ( -f /ah/sites/colon365.co.uk/public/.maintenance ) >> { >> set $maintenance 1; >> } >> >> this doesn't: >> >> if ( -f $document_root/.maintenance ) >> { >> set $maintenance 1; >> } >> >> two questions, >> >> 1. is there a way to make the latter work? some slight change or >> tweak? >> 2. why doesn't it work? are variables not interpolated when doing >> file >> system checks like -f? > > it should work. > probably you have wrong root or root in wrong place. 
> plz provide full config > One thing i just noticed. root is defined until after that snippet above. is that an issue? Config is spread across multiple files. Here is the best bits: /ah/sites/colon365.co.uk/conf/nginx.conf: server { listen 208.113.69.210; server_name colon365.co.uk; server_name www.colon365.co.uk; include /ah/sites/colon365.co.uk/conf/nginx/base; include /ah/sites/colon365.co.uk/conf/nginx/maintenance; include /ah/sites/colon365.co.uk/conf/nginx/fake-homepage; access_log /var/log/ah/colon365.co.uk.log combined; include /ah/conf/nginx/www-shared; } -- /ah/sites/colon365.co.uk/conf/nginx/base: set $base /ah/sites/colon365.co.uk; -- /ah/conf/nginx/www-shared: include /ah/conf/nginx/root; include /ah/conf/nginx/favicon; include /ah/conf/nginx/standard-expire; include /ah/conf/nginx/unsub-aliases; include /ah/conf/nginx/hackersafe; include /ah/conf/nginx/historical; location / { if ( !-e $request_filename ) { expires -1; proxy_pass http://mod_perl; break; } } -- /ah/conf/nginx/root: set $root $base/public; root $root; --- From pavel at netclime.com Wed Feb 27 21:30:47 2008 From: pavel at netclime.com (Pavel Georgiev) Date: Wed, 27 Feb 2008 20:30:47 +0200 Subject: Nginx for proxy + rewrite Message-ID: <200802272030.47465.pavel@netclime.com> Hi, List! I`ve beed using nxingx as a local balancer for few backend servers: http { upstream mydomain.com { server 192.168.8.30; # backend server } server { listen 192.168.10.1:8080; server_name cmydomain.com; location / { proxy_pass http://mydomain.com; proxy_redirect off; proxy_set_header Host $host; } } } What I need to do is for a certain url to rewrite it to an external url but serve the requests as a proxy instead of returing a redirect, so this is transparent to the client: http://mydomain.com/redirect/(.*)$ should go to http://extranal.comain.com/$1 I saw this is possible with a simple rewrite but it returns a redirect to the client. Is is possible to make nginx to server the rewrite as a proxy? 
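The behaviour asked for here, serving another host's content under a local prefix without a visible redirect, is what proxy_pass with a URI part provides: when proxy_pass carries a path, nginx substitutes the matched location prefix with that path before forwarding, and the client only ever sees the local URL. A minimal sketch; the external hostname is a placeholder, not from the thread:

```nginx
# Sketch: proxy /redirect/foo to http://external.example.com/foo
# without returning a 302 to the client. Hostname is illustrative.
location /redirect/ {
    # The trailing slash in the proxy_pass URI makes nginx replace
    # the matched "/redirect/" prefix with "/" when proxying.
    proxy_pass http://external.example.com/;
    proxy_set_header Host external.example.com;
}
```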
From is at rambler-co.ru Wed Feb 27 21:59:00 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 21:59:00 +0300 Subject: Nginx for proxy + rewrite In-Reply-To: <200802272030.47465.pavel@netclime.com> References: <200802272030.47465.pavel@netclime.com> Message-ID: <20080227185900.GC84129@rambler-co.ru> On Wed, Feb 27, 2008 at 08:30:47PM +0200, Pavel Georgiev wrote: > I`ve beed using nxingx as a local balancer for few backend servers: > > > http { > upstream mydomain.com { > server 192.168.8.30; # backend server > } > server { > listen 192.168.10.1:8080; > server_name cmydomain.com; > > location / { > proxy_pass http://mydomain.com; > proxy_redirect off; > proxy_set_header Host $host; > } > } > } > > > What I need to do is for a certain url to rewrite it to an external url but > serve the requests as a proxy instead of returing a redirect, so this is > transparent to the client: > > http://mydomain.com/redirect/(.*)$ should go to http://extranal.comain.com/$1 > > I saw this is possible with a simple rewrite but it returns a redirect to the > client. Is is possible to make nginx to server the rewrite as a proxy? 
I'm not sure that understand your problem, but this may help: proxy_pass http://mydomain.com; proxy_redirect http://mydomain.com/ /; proxy_redirect http://mydomain.com/redirect/ /; -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Wed Feb 27 22:08:45 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 22:08:45 +0300 Subject: Nginx for proxy + rewrite In-Reply-To: <200802272030.47465.pavel@netclime.com> References: <200802272030.47465.pavel@netclime.com> Message-ID: <20080227190845.GD84129@rambler-co.ru> On Wed, Feb 27, 2008 at 08:30:47PM +0200, Pavel Georgiev wrote: > I`ve beed using nxingx as a local balancer for few backend servers: > > > http { > upstream mydomain.com { > server 192.168.8.30; # backend server > } > server { > listen 192.168.10.1:8080; > server_name cmydomain.com; > > location / { > proxy_pass http://mydomain.com; > proxy_redirect off; > proxy_set_header Host $host; > } > } > } > > > What I need to do is for a certain url to rewrite it to an external url but > serve the requests as a proxy instead of returing a redirect, so this is > transparent to the client: > > http://mydomain.com/redirect/(.*)$ should go to http://extranal.comain.com/$1 > > I saw this is possible with a simple rewrite but it returns a redirect to the > client. Is is possible to make nginx to server the rewrite as a proxy? Or probably, you need X-Accel-Redirect: http://wiki.codemongers.com/NginxXSendfile -- Igor Sysoev http://sysoev.ru/en/ From brian.kirkbride at deeperbydesign.com Wed Feb 27 22:18:24 2008 From: brian.kirkbride at deeperbydesign.com (Brian Kirkbride) Date: Wed, 27 Feb 2008 13:18:24 -0600 Subject: Mapping sites to other sites without using FastCGI/DB Message-ID: <47C5B780.3050806@deeperbydesign.com> Hello, I'm new to Nginx but have been very impressed with it -- thanks for contributing this great software. I have a mass virtual hosting setup, all sites have the same config and will be served with server_name * (wildcard). 
The config is very simple, just serving some static files. I want to keep the Nginx boxes as simple and light as possible -- no FastCGI, no database. Some domains will simply be an alias to another domain. I know that for one or two domains, you could simply do a rewrite from ALIASDOMAIN.com/(.*) to REALDOMAIN/$1, but our situation is more dynamic and we can't update the config file all the time. Is there as way to have: docroot/ realdomain.com/ [content] aliasdomain.com/ aliased_to where aliased_to is a text file containing the real domain? Or what about: docroot/ realdomain.com/ [content] aliasdomain.com/ aliasto_realdomain.com Something like that? In apache mod_rewrite I would do this using maps, but my Nginx skills are weak! Thanks in advance, Brian Kirkbride From roxis at list.ru Wed Feb 27 22:31:32 2008 From: roxis at list.ru (Roxis) Date: Wed, 27 Feb 2008 20:31:32 +0100 Subject: if ,-f and variables In-Reply-To: <0A2FDDBC-3B06-402B-BB5F-EDC61E452293@ardishealth.com> References: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> <200802271742.50318.roxis@list.ru> <0A2FDDBC-3B06-402B-BB5F-EDC61E452293@ardishealth.com> Message-ID: <200802272031.32537.roxis@list.ru> On Wednesday 27 February 2008, Sean Allen wrote: > One thing i just noticed. root is defined until after that snippet > above. > is that an issue? 
not the "root" by itself, but all variables affecting it From is at rambler-co.ru Wed Feb 27 22:34:10 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 22:34:10 +0300 Subject: if ,-f and variables In-Reply-To: <0A2FDDBC-3B06-402B-BB5F-EDC61E452293@ardishealth.com> References: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> <200802271742.50318.roxis@list.ru> <0A2FDDBC-3B06-402B-BB5F-EDC61E452293@ardishealth.com> Message-ID: <20080227193410.GG84129@rambler-co.ru> On Wed, Feb 27, 2008 at 12:34:08PM -0500, Sean Allen wrote: > On Feb 27, 2008, at 11:42 AM, Roxis wrote: > > >On Wednesday 27 February 2008, Sean Allen wrote: > >>this works: > >> > >>if ( -f /ah/sites/colon365.co.uk/public/.maintenance ) > >>{ > >> set $maintenance 1; > >>} > >> > >>this doesn't: > >> > >>if ( -f $document_root/.maintenance ) > >>{ > >> set $maintenance 1; > >>} > >> > >>two questions, > >> > >>1. is there a way to make the latter work? some slight change or > >>tweak? > >>2. why doesn't it work? are variables not interpolated when doing > >>file > >>system checks like -f? > > > >it should work. > >probably you have wrong root or root in wrong place. > >plz provide full config > > > > One thing i just noticed. root is defined until after that snippet > above. > is that an issue? > > > Config is spread across multiple files. 
> > Here is the best bits: > > /ah/sites/colon365.co.uk/conf/nginx.conf: > > server > { > listen 208.113.69.210; > server_name colon365.co.uk; > server_name www.colon365.co.uk; > > include /ah/sites/colon365.co.uk/conf/nginx/base; > include /ah/sites/colon365.co.uk/conf/nginx/maintenance; > include /ah/sites/colon365.co.uk/conf/nginx/fake-homepage; > > access_log /var/log/ah/colon365.co.uk.log combined; > include /ah/conf/nginx/www-shared; > } > > > -- > > /ah/sites/colon365.co.uk/conf/nginx/base: > > set $base /ah/sites/colon365.co.uk; > > -- > > /ah/conf/nginx/www-shared: > > include /ah/conf/nginx/root; > include /ah/conf/nginx/favicon; > include /ah/conf/nginx/standard-expire; > include /ah/conf/nginx/unsub-aliases; > include /ah/conf/nginx/hackersafe; > include /ah/conf/nginx/historical; > location / > { > if ( !-e $request_filename ) > { > expires -1; > proxy_pass http://mod_perl; > break; > } > } > > > -- > > /ah/conf/nginx/root: > > set $root $base/public; > root $root; The "root" directive may be set in any place of http, server, or location: it will be properly set or inherited: http { server { location / { # here root is /path } } root /path; } But this is not true for "set" directives: they are executed in order of their appearance. include /ah/sites/colon365.co.uk/conf/nginx/base; set $base /ah/sites/colon365.co.uk; include /ah/sites/colon365.co.uk/conf/nginx/maintenance; using $document_root, here it is "root ''", because $root is still undefined include /ah/conf/nginx/www-shared; include /ah/conf/nginx/root; set $root $base/public; root $root; You should set "set $root $base/public;" earlier.
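Put concretely, a sketch of the server block with the set directives hoisted above any include that reads $document_root; paths are copied from Sean's config, the ordering fix itself is the point:

```nginx
server {
    listen 208.113.69.210;
    server_name colon365.co.uk www.colon365.co.uk;

    # "set" executes in order of appearance, so $base and $root must be
    # assigned before any included file tests -f $document_root/...
    set $base /ah/sites/colon365.co.uk;
    set $root $base/public;
    root $root;

    include /ah/sites/colon365.co.uk/conf/nginx/maintenance;
    include /ah/sites/colon365.co.uk/conf/nginx/fake-homepage;

    access_log /var/log/ah/colon365.co.uk.log combined;
    include /ah/conf/nginx/www-shared;
}
```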
-- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Wed Feb 27 22:43:46 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 22:43:46 +0300 Subject: Mapping sites to other sites without using FastCGI/DB In-Reply-To: <47C5B780.3050806@deeperbydesign.com> References: <47C5B780.3050806@deeperbydesign.com> Message-ID: <20080227194346.GH84129@rambler-co.ru> On Wed, Feb 27, 2008 at 01:18:24PM -0600, Brian Kirkbride wrote: > I'm new to Nginx but have been very impressed with it -- thanks for > contributing this great software. > > I have a mass virtual hosting setup, all sites have the same config > and will be served with server_name * (wildcard). The config is very > simple, just serving some static files. I want to keep the Nginx > boxes as simple and light as possible -- no FastCGI, no database. > > Some domains will simply be an alias to another domain. I know that > for one or two domains, you could simply do a rewrite from > ALIASDOMAIN.com/(.*) to REALDOMAIN/$1, but our situation is more > dynamic and we can't update the config file all the time. > > Is there a way to have: > > docroot/ > realdomain.com/ > [content] > aliasdomain.com/ > aliased_to > > where aliased_to is a text file containing the real domain? > > Or what about: > > docroot/ > realdomain.com/ > [content] > aliasdomain.com/ > aliasto_realdomain.com > > Something like that? > > In apache mod_rewrite I would do this using maps, but my Nginx skills > are weak! Use map, http://wiki.codemongers.com/NginxHttpMapModule http { map $http_host $root { realdomain.com realdomain.com; aliasdomain.com realdomain.com; ... } server { location / { root /docroot/$root; ...
} } } -- Igor Sysoev http://sysoev.ru/en/ From pavel at netclime.com Wed Feb 27 23:27:18 2008 From: pavel at netclime.com (Pavel Georgiev) Date: Wed, 27 Feb 2008 22:27:18 +0200 Subject: Nginx for proxy + rewrite In-Reply-To: <20080227190845.GD84129@rambler-co.ru> References: <200802272030.47465.pavel@netclime.com> <20080227190845.GD84129@rambler-co.ru> Message-ID: <200802272227.19021.pavel@netclime.com> On Wednesday 27 February 2008 21:08:45 Igor Sysoev wrote: > On Wed, Feb 27, 2008 at 08:30:47PM +0200, Pavel Georgiev wrote: > > I`ve beed using nxingx as a local balancer for few backend servers: > > > > > > http { > > upstream mydomain.com { > > server 192.168.8.30; # backend server > > } > > server { > > listen 192.168.10.1:8080; > > server_name cmydomain.com; > > > > location / { > > proxy_pass http://mydomain.com; > > proxy_redirect off; > > proxy_set_header Host $host; > > } > > } > > } > > > > > > What I need to do is for a certain url to rewrite it to an external url > > but serve the requests as a proxy instead of returing a redirect, so this > > is transparent to the client: > > > > http://mydomain.com/redirect/(.*)$ should go to > > http://extranal.comain.com/$1 > > > > I saw this is possible with a simple rewrite but it returns a redirect to > > the client. Is is possible to make nginx to server the rewrite as a > > proxy? > > Or probably, you need X-Accel-Redirect: > > http://wiki.codemongers.com/NginxXSendfile What I`m trying to do is to server some location (/redirect/ in the example above) to an external server. It is doable with this: location /redirect { rewrite ^/redirect/(.*)$ http://some.domain.com/$1 } This however returns a 302 code, what I want is nginx to get the file requested from http://some.domain.com/ and server it to the client, so that the client doesn't have a clue that this was taken from an external server. In other words, I`d like to treat http://some.domain.com as a backend server, but just for a given location. 
Hope that makes sense. I don't think X-Accel-Redirect is what I need here. From brian.kirkbride at deeperbydesign.com Wed Feb 27 23:30:24 2008 From: brian.kirkbride at deeperbydesign.com (Brian Kirkbride) Date: Wed, 27 Feb 2008 14:30:24 -0600 Subject: Mapping sites to other sites without using FastCGI/DB In-Reply-To: <20080227194346.GH84129@rambler-co.ru> References: <47C5B780.3050806@deeperbydesign.com> <20080227194346.GH84129@rambler-co.ru> Message-ID: <47C5C860.40903@deeperbydesign.com> Igor Sysoev wrote: > On Wed, Feb 27, 2008 at 01:18:24PM -0600, Brian Kirkbride wrote: > >> I'm new to Nginx but have been very impressed with it -- thanks for >> contributing this great software. >> >> I have a mass virtual hosting setup, all sites have the same config >> and will be served with server_name * (wildcard). The config is very >> simple, just serving some static files. I want to keep the Nginx >> boxes as simple and light as possible -- no FastCGI, no database. >> >> Some domains will simply be an alias to another domain. I know that >> for one or two domains, you could simply do a rewrite from >> ALIASDOMAIN.com/(.*) to REALDOMAIN/$1, but our situation is more >> dynamic and we can't update the config file all the time. >> >> Is there as way to have: >> >> docroot/ >> realdomain.com/ >> [content] >> aliasdomain.com/ >> aliased_to >> >> where aliased_to is a text file containing the real domain? >> >> Or what about: >> >> docroot/ >> realdomain.com/ >> [content] >> aliasdomain.com/ >> aliasto_realdomain.com >> >> Something like that? >> >> In apache mod_rewrite I would do this using maps, but my Nginx skills >> are weak! > > Use map, http://wiki.codemongers.com/NginxHttpMapModule > > http { > map $http_host $root { > realdomain.com realdomain.com; > aliasdomain.com realdomain.com; > ... > } > > server { > locaiton / { > root /docroot/$root; > ... > } > } > } > Thanks Igor! I can't believe I missed the map module while Googling. That should work great for our problem. 
We will just regenerate a mapfile to be included in the config and then kill -HUP nginx on changes. Our host map may be very large, possibly several thousand entries? Will this slow things down very much? Best, Brian From is at rambler-co.ru Wed Feb 27 23:39:41 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 23:39:41 +0300 Subject: Mapping sites to other sites without using FastCGI/DB In-Reply-To: <47C5C860.40903@deeperbydesign.com> References: <47C5B780.3050806@deeperbydesign.com> <20080227194346.GH84129@rambler-co.ru> <47C5C860.40903@deeperbydesign.com> Message-ID: <20080227203941.GI84129@rambler-co.ru> On Wed, Feb 27, 2008 at 02:30:24PM -0600, Brian Kirkbride wrote: > I can't believe I missed the map module while Googling. That should > work great for our problem. We will just regenerate a mapfile to be > included in the config and then kill -HUP nginx on changes. > > Our host map may be very large, possibly several thousand entries? Will > this slow things down very much? No, it's intended for tens of thousands of entries.
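The regenerate-and-reload scheme described above could be sketched like this; the file path and pid-file location are illustrative assumptions, not from the thread:

```nginx
# /etc/nginx/host_map.conf -- regenerated by a script whenever an alias
# is added or removed, then reloaded with: kill -HUP `cat /var/run/nginx.pid`
map $http_host $root {
    realdomain.com       realdomain.com;
    www.realdomain.com   realdomain.com;
    aliasdomain.com      realdomain.com;
}

# In nginx.conf, at http level:
#     include /etc/nginx/host_map.conf;
```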
-- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Wed Feb 27 23:42:37 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Wed, 27 Feb 2008 23:42:37 +0300 Subject: Nginx for proxy + rewrite In-Reply-To: <200802272227.19021.pavel@netclime.com> References: <200802272030.47465.pavel@netclime.com> <20080227190845.GD84129@rambler-co.ru> <200802272227.19021.pavel@netclime.com> Message-ID: <20080227204237.GJ84129@rambler-co.ru> On Wed, Feb 27, 2008 at 10:27:18PM +0200, Pavel Georgiev wrote: > On Wednesday 27 February 2008 21:08:45 Igor Sysoev wrote: > > On Wed, Feb 27, 2008 at 08:30:47PM +0200, Pavel Georgiev wrote: > > > I`ve beed using nxingx as a local balancer for few backend servers: > > > > > > > > > http { > > > upstream mydomain.com { > > > server 192.168.8.30; # backend server > > > } > > > server { > > > listen 192.168.10.1:8080; > > > server_name cmydomain.com; > > > > > > location / { > > > proxy_pass http://mydomain.com; > > > proxy_redirect off; > > > proxy_set_header Host $host; > > > } > > > } > > > } > > > > > > > > > What I need to do is for a certain url to rewrite it to an external url > > > but serve the requests as a proxy instead of returing a redirect, so this > > > is transparent to the client: > > > > > > http://mydomain.com/redirect/(.*)$ should go to > > > http://extranal.comain.com/$1 > > > > > > I saw this is possible with a simple rewrite but it returns a redirect to > > > the client. Is is possible to make nginx to server the rewrite as a > > > proxy? > > > > Or probably, you need X-Accel-Redirect: > > > > http://wiki.codemongers.com/NginxXSendfile > > What I`m trying to do is to server some location (/redirect/ in the example > above) to an external server. 
It is doable with this:
>
> location /redirect {
>     rewrite ^/redirect/(.*)$ http://some.domain.com/$1;
> }
>
> This however returns a 302 code; what I want is for nginx to get the file
> requested from http://some.domain.com/ and serve it to the client, so that
> the client doesn't have a clue that this was taken from an external server.
> In other words, I`d like to treat http://some.domain.com as a backend server,
> but just for a given location.
>
> Hope that makes sense. I don't think X-Accel-Redirect is what I need here.

I still do not understand your problem. Probably, you need:

location / {
    proxy_pass http://mydomain.com;
    proxy_redirect off;
    proxy_set_header Host $host;
}

location /redirect/ {
    proxy_pass http://some.domain.com/;
}

--
Igor Sysoev
http://sysoev.ru/en/

From sean at ardishealth.com Thu Feb 28 01:12:21 2008
From: sean at ardishealth.com (Sean Allen)
Date: Wed, 27 Feb 2008 17:12:21 -0500
Subject: if ,-f and variables
In-Reply-To: <20080227193410.GG84129@rambler-co.ru>
References: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> <200802271742.50318.roxis@list.ru> <0A2FDDBC-3B06-402B-BB5F-EDC61E452293@ardishealth.com> <20080227193410.GG84129@rambler-co.ru>
Message-ID: <0AFEA5EC-C5A8-4B31-83AC-67ECE0838DD8@ardishealth.com>

> The "root" directive may be set in any place of http, server, or location:
> it will be properly set or inherited:
>
> http {
>
>     server {
>         location / {
>             # here root is /path
>         }
>     }
>
>     root /path;
> }
>
> But this is not true for "set" directives: they are executed in order
> of their appearance.
>
> include /ah/sites/colon365.co.uk/conf/nginx/base;
>     set $base /ah/sites/colon365.co.uk;
>
> include /ah/sites/colon365.co.uk/conf/nginx/maintenance;
>     using $document_root, here it is "root ''",
>     because $root is still undefined
>
> include /ah/conf/nginx/www-shared;
> include /ah/conf/nginx/root;
>     set $root $base/public;
>     root $root;
>
> You should set "set $root $base/public;" early.
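The ordering rule described above can be condensed into a small sketch (the paths are hypothetical): "set" runs in the order directives appear, so a variable must be assigned before any included file that expands it.

```nginx
# Inside a server { } block. "set" executes in configuration order,
# unlike "root", which is declarative and inherited -- so assign
# variables first:
set $base /ah/sites/example.co.uk;   # hypothetical site path
set $root $base/public;
root $root;

# Only now include files that expand $root / $document_root:
include /ah/conf/nginx/maintenance;
```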
> > can you use variables in access_log setup, ie path to file? access_log $some_thing/access.log combined; i had a problem with that before but it was probably this exact issue. From joe at joetify.com Thu Feb 28 02:20:08 2008 From: joe at joetify.com (Joe Williams) Date: Wed, 27 Feb 2008 17:20:08 -0600 Subject: httperf results with nginx and apache Message-ID: <47C5F028.2050908@joetify.com> I am attempting to do a bit of a comparison between nginx and apache using httperf but my results are coming out a little strange. I am running the following against both apache and nginx: > httperf --timeout=5 --client=0/1 --server=DOMAIN --port=80 > --uri=/robots.txt --rate=200 --send-buffer=4096 --recv-buffer=16384 > --num-conns=5000 --num-calls=10 The results I am seeing are the following: NGINX: > > Total: connections 5000 requests 50000 replies 50000 test-duration > 25.001 s > > Connection rate: 200.0 conn/s (5.0 ms/conn, <=5 concurrent connections) > > Request rate: 2000.0 req/s (0.5 ms/req) Apache: > > Total: connections 5000 requests 10000 replies 5000 test-duration 24.998 s > > Connection rate: 200.0 conn/s (5.0 ms/conn, <=1 concurrent connections) > > Request rate: 400.0 req/s (2.5 ms/req) Shouldn't the 'Total' numbers be the same against both web servers since I am using the same command to test them? Why is the 'requests' number a 1/5 of number that nginx responds with. Has anyone see this sort of result in the past? I have attempted this with a few different connection, rate and call counts and seem to get similar results with each. I have verified that my MaxClients and Servers setting in Apache is high enough as well. Thanks. -Joe -- Name: Joseph A. 
Williams Email: joe at joetify.com From pavel at netclime.com Thu Feb 28 14:09:27 2008 From: pavel at netclime.com (Pavel Georgiev) Date: Thu, 28 Feb 2008 13:09:27 +0200 Subject: Nginx for proxy + rewrite In-Reply-To: <20080227204237.GJ84129@rambler-co.ru> References: <200802272030.47465.pavel@netclime.com> <200802272227.19021.pavel@netclime.com> <20080227204237.GJ84129@rambler-co.ru> Message-ID: <200802281309.27399.pavel@netclime.com> On Wednesday 27 February 2008 22:42:37 Igor Sysoev wrote: > On Wed, Feb 27, 2008 at 10:27:18PM +0200, Pavel Georgiev wrote: > > On Wednesday 27 February 2008 21:08:45 Igor Sysoev wrote: > > > On Wed, Feb 27, 2008 at 08:30:47PM +0200, Pavel Georgiev wrote: > > > > I`ve beed using nxingx as a local balancer for few backend servers: > > > > > > > > > > > > http { > > > > upstream mydomain.com { > > > > server 192.168.8.30; # backend server > > > > } > > > > server { > > > > listen 192.168.10.1:8080; > > > > server_name cmydomain.com; > > > > > > > > location / { > > > > proxy_pass http://mydomain.com; > > > > proxy_redirect off; > > > > proxy_set_header Host $host; > > > > } > > > > } > > > > } > > > > > > > > > > > > What I need to do is for a certain url to rewrite it to an external > > > > url but serve the requests as a proxy instead of returing a redirect, > > > > so this is transparent to the client: > > > > > > > > http://mydomain.com/redirect/(.*)$ should go to > > > > http://extranal.comain.com/$1 > > > > > > > > I saw this is possible with a simple rewrite but it returns a > > > > redirect to the client. Is is possible to make nginx to server the > > > > rewrite as a proxy? > > > > > > Or probably, you need X-Accel-Redirect: > > > > > > http://wiki.codemongers.com/NginxXSendfile > > > > What I`m trying to do is to server some location (/redirect/ in the > > example above) to an external server. 
It is doable with this: > > > > location /redirect { > > rewrite ^/redirect/(.*)$ http://some.domain.com/$1 > > } > > > > This however returns a 302 code, what I want is nginx to get the file > > requested from http://some.domain.com/ and server it to the client, so > > that the client doesn't have a clue that this was taken from an external > > server. In other words, I`d like to treat http://some.domain.com as a > > backend server, but just for a given location. > > > > Hope that makes sense. I don't think X-Accel-Redirect is what I need > > here. > > I still do not understand your problem. Probably, you need: > > location / { > proxy_pass http://mydomain.com; > proxy_redirect off; > proxy_set_header Host $host; > } > > location /redirect/ { > proxy_pass http://some.domain.com/; > } Thats exactly what I needed, thanks a lot! From is at rambler-co.ru Thu Feb 28 14:22:23 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 28 Feb 2008 14:22:23 +0300 Subject: httperf results with nginx and apache In-Reply-To: <47C5F028.2050908@joetify.com> References: <47C5F028.2050908@joetify.com> Message-ID: <20080228112223.GA13376@rambler-co.ru> On Wed, Feb 27, 2008 at 05:20:08PM -0600, Joe Williams wrote: > I am attempting to do a bit of a comparison between nginx and apache > using httperf but my results are coming out a little strange. 
I am > running the following against both apache and nginx: > > > httperf --timeout=5 --client=0/1 --server=DOMAIN --port=80 > >--uri=/robots.txt --rate=200 --send-buffer=4096 --recv-buffer=16384 > >--num-conns=5000 --num-calls=10 > > The results I am seeing are the following: > > NGINX: > > > > >Total: connections 5000 requests 50000 replies 50000 test-duration > >25.001 s > > > >Connection rate: 200.0 conn/s (5.0 ms/conn, <=5 concurrent connections) > > > >Request rate: 2000.0 req/s (0.5 ms/req) > > Apache: > > > > >Total: connections 5000 requests 10000 replies 5000 test-duration 24.998 s > > > >Connection rate: 200.0 conn/s (5.0 ms/conn, <=1 concurrent connections) > > > >Request rate: 400.0 req/s (2.5 ms/req) > > > Shouldn't the 'Total' numbers be the same against both web servers since > I am using the same command to test them? Why is the 'requests' number a > 1/5 of number that nginx responds with. Has anyone see this sort of > result in the past? > > I have attempted this with a few different connection, rate and call > counts and seem to get similar results with each. > > I have verified that my MaxClients and Servers setting in Apache is high > enough as well. The difference is probably in keepalive settings. Look --num-conns=5000 and --num-calls=10 parameters. 
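One plausible reading of the two "Total:" lines, assuming the keepalive explanation above (this model is an interpretation, not something stated in the thread): with keep-alive on, httperf completes all --num-calls requests per connection; with keep-alive off, it gets one reply per connection, and a request it has already written when the server closes is still counted.

```python
# Model of the httperf totals quoted in this thread (an interpretation,
# not from the thread itself).
conns = 5000            # --num-conns
calls = 10              # --num-calls

# nginx run: keep-alive on, so every call on every connection succeeds.
nginx_requests = conns * calls          # 50000 requests
nginx_replies = conns * calls           # 50000 replies

# Apache run: connection closed after the first response, so one reply
# per connection; the next request is counted as sent before the close
# is detected, giving two counted requests per connection.
apache_replies = conns * 1              # 5000
apache_requests = conns * 2             # 10000

print(nginx_requests, apache_requests, apache_replies)
```

This reproduces the reported numbers exactly, which is why keepalive settings were the first thing to check.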
-- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Thu Feb 28 14:23:16 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 28 Feb 2008 14:23:16 +0300 Subject: if ,-f and variables In-Reply-To: <0AFEA5EC-C5A8-4B31-83AC-67ECE0838DD8@ardishealth.com> References: <72215C54-C4B1-4D1E-8E5D-B4676857FC83@ardishealth.com> <200802271742.50318.roxis@list.ru> <0A2FDDBC-3B06-402B-BB5F-EDC61E452293@ardishealth.com> <20080227193410.GG84129@rambler-co.ru> <0AFEA5EC-C5A8-4B31-83AC-67ECE0838DD8@ardishealth.com> Message-ID: <20080228112316.GB13376@rambler-co.ru> On Wed, Feb 27, 2008 at 05:12:21PM -0500, Sean Allen wrote: > >> > > > >The "root" directive may be set in eny place of http, server, or > >locacation: > >it will be properly set or inherited: > > > >http { > > > > server { > > location / { > > # here root is /path > > } > > } > > > > root /path; > >} > > > >But this is not true for "set" directives: they are executed in order > >of thier apperance. > > > >include /ah/sites/colon365.co.uk/conf/nginx/base; > > set $base /ah/sites/colon365.co.uk; > > > >include /ah/sites/colon365.co.uk/conf/nginx/maintenance; > > > > using $document_root, here it is "root ''", > > because $root is still undefined > > > >include /ah/conf/nginx/www-shared; > > include /ah/conf/nginx/root; > > set $root $base/public; > > root $root; > > > >You should to set "set $root $base/public;" early. > > > > > > can you use variables in access_log setup, ie path to file? > > access_log $some_thing/access.log combined; > > > i had a problem with that before but it was probably this exact issue. No, nginx does not support variables in log names. 
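Since the nginx versions discussed here do not expand variables in access_log paths, one common workaround (the site names below are hypothetical) is to generate explicit per-server log directives, typically from the same templating step that produces the vhost blocks:

```nginx
# One server block per site, each with a literal log path:
server {
    server_name example-a.com;
    access_log /var/log/nginx/example-a.com.access.log combined;
}

server {
    server_name example-b.com;
    access_log /var/log/nginx/example-b.com.access.log combined;
}
```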
-- Igor Sysoev http://sysoev.ru/en/ From joe at joetify.com Thu Feb 28 19:29:55 2008 From: joe at joetify.com (Joe Williams) Date: Thu, 28 Feb 2008 10:29:55 -0600 Subject: httperf results with nginx and apache In-Reply-To: <20080228112223.GA13376@rambler-co.ru> References: <47C5F028.2050908@joetify.com> <20080228112223.GA13376@rambler-co.ru> Message-ID: <47C6E183.3030502@joetify.com> Keepalives did the trick, thanks for the advice! Are keepalives on by default in Nginx? -Joe Igor Sysoev wrote: > On Wed, Feb 27, 2008 at 05:20:08PM -0600, Joe Williams wrote: > > >> I am attempting to do a bit of a comparison between nginx and apache >> using httperf but my results are coming out a little strange. I am >> running the following against both apache and nginx: >> >> >>> httperf --timeout=5 --client=0/1 --server=DOMAIN --port=80 >>> --uri=/robots.txt --rate=200 --send-buffer=4096 --recv-buffer=16384 >>> --num-conns=5000 --num-calls=10 >>> >> The results I am seeing are the following: >> >> NGINX: >> >> >>> >>> Total: connections 5000 requests 50000 replies 50000 test-duration >>> 25.001 s >>> >>> Connection rate: 200.0 conn/s (5.0 ms/conn, <=5 concurrent connections) >>> >>> Request rate: 2000.0 req/s (0.5 ms/req) >>> >> Apache: >> >> >>> >>> Total: connections 5000 requests 10000 replies 5000 test-duration 24.998 s >>> >>> Connection rate: 200.0 conn/s (5.0 ms/conn, <=1 concurrent connections) >>> >>> Request rate: 400.0 req/s (2.5 ms/req) >>> >> Shouldn't the 'Total' numbers be the same against both web servers since >> I am using the same command to test them? Why is the 'requests' number a >> 1/5 of number that nginx responds with. Has anyone see this sort of >> result in the past? >> >> I have attempted this with a few different connection, rate and call >> counts and seem to get similar results with each. >> >> I have verified that my MaxClients and Servers setting in Apache is high >> enough as well. >> > > The difference is probably in keepalive settings. 
> Look --num-conns=5000 and --num-calls=10 parameters. > > > -- Name: Joseph A. Williams Email: joe at joetify.com From is at rambler-co.ru Thu Feb 28 19:36:30 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 28 Feb 2008 19:36:30 +0300 Subject: httperf results with nginx and apache In-Reply-To: <47C6E183.3030502@joetify.com> References: <47C5F028.2050908@joetify.com> <20080228112223.GA13376@rambler-co.ru> <47C6E183.3030502@joetify.com> Message-ID: <20080228163630.GG13376@rambler-co.ru> On Thu, Feb 28, 2008 at 10:29:55AM -0600, Joe Williams wrote: > Keepalives did the trick, thanks for the advice! > > Are keepalives on by default in Nginx? Yes. Keepalives are cheap for nginx, however, they take sockets, file descriptors, etc. in kernel. -- Igor Sysoev http://sysoev.ru/en/ From joe at joetify.com Thu Feb 28 19:51:01 2008 From: joe at joetify.com (Joe Williams) Date: Thu, 28 Feb 2008 10:51:01 -0600 Subject: httperf results with nginx and apache In-Reply-To: <20080228163630.GG13376@rambler-co.ru> References: <47C5F028.2050908@joetify.com> <20080228112223.GA13376@rambler-co.ru> <47C6E183.3030502@joetify.com> <20080228163630.GG13376@rambler-co.ru> Message-ID: <47C6E675.1090708@joetify.com> Understood, thanks for the help. -Joe Igor Sysoev wrote: > On Thu, Feb 28, 2008 at 10:29:55AM -0600, Joe Williams wrote: > > >> Keepalives did the trick, thanks for the advice! >> >> Are keepalives on by default in Nginx? >> > > Yes. Keepalives are cheap for nginx, however, they take sockets, file > descriptors, etc. in kernel. > > > -- Name: Joseph A. Williams Email: joe at joetify.com From hvenkata at gmail.com Thu Feb 28 21:43:06 2008 From: hvenkata at gmail.com (Hari) Date: Thu, 28 Feb 2008 10:43:06 -0800 Subject: Error setting up http authentication - 500 Internal Server Error Message-ID: I am using the instruction given at http://wiki.codemongers.com/NginxHttpAuthBasicModule#auth_basic When i access the site i get prompted for username and password. 
After i enter the username and password i get the error "500 Internal Server Error" When i have the following two lines commented out i do not get any error. # auth_basic "osusu"; # auth_basic_user_file conf/passwd; What am i doing wrong? Here is the setup of my conf file ========================== upstream domain1 { server 127.0.0.1:8000; server 127.0.0.1:8001; } server { listen 80; server_name www.osusu.com; rewrite ^/(.*) http://domain.com permanent; } server { listen 80; server_name osusu.com; access_log /home/demo/public_html/domain.com/shared/log/access.log; error_log /home/demo/public_html/domain.com/shared/log/error.log; root /home/demo/public_html/domain.com/current/public/; index index.html; location / { auth_basic "osusu"; auth_basic_user_file conf/passwd; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect false; if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f $request_filename.html) { rewrite (.*) $1.html break; } if (!-f $request_filename) { proxy_pass http://domain1; break; } } } ================= I created the conf file using the utility htpasswd. Any help on this is greatly appretiated... -- Hariharan Venkata From is at rambler-co.ru Thu Feb 28 21:50:23 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 28 Feb 2008 21:50:23 +0300 Subject: Error setting up http authentication - 500 Internal Server Error In-Reply-To: References: Message-ID: <20080228185023.GB23692@rambler-co.ru> On Thu, Feb 28, 2008 at 10:43:06AM -0800, Hari wrote: > I am using the instruction given at > http://wiki.codemongers.com/NginxHttpAuthBasicModule#auth_basic > > When i access the site i get prompted for username and password. > After i enter the username and password i get the error "500 Internal > Server Error" What is in error_log ? > When i have the following two lines commented out i do not get any error. 
> # auth_basic "osusu"; > # auth_basic_user_file conf/passwd; > > > What am i doing wrong? > > Here is the setup of my conf file > ========================== > upstream domain1 { > server 127.0.0.1:8000; > server 127.0.0.1:8001; > } > > server { > listen 80; > server_name www.osusu.com; > rewrite ^/(.*) http://domain.com permanent; > } > > > server { > listen 80; > server_name osusu.com; > > access_log /home/demo/public_html/domain.com/shared/log/access.log; > error_log /home/demo/public_html/domain.com/shared/log/error.log; > > root /home/demo/public_html/domain.com/current/public/; > index index.html; > > location / { > auth_basic "osusu"; > auth_basic_user_file conf/passwd; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $http_host; > proxy_redirect false; > > if (-f $request_filename/index.html) { > rewrite (.*) $1/index.html break; > } > if (-f $request_filename.html) { > rewrite (.*) $1.html break; > } > if (!-f $request_filename) { > proxy_pass http://domain1; > break; > } > } > } > ================= > > I created the conf file using the utility htpasswd. > > Any help on this is greatly appretiated... 
> > -- > Hariharan Venkata > -- Igor Sysoev http://sysoev.ru/en/ From hvenkata at gmail.com Thu Feb 28 22:09:08 2008 From: hvenkata at gmail.com (Hari) Date: Thu, 28 Feb 2008 11:09:08 -0800 Subject: Error setting up http authentication - 500 Internal Server Error In-Reply-To: <20080228185023.GB23692@rambler-co.ru> References: <20080228185023.GB23692@rambler-co.ru> Message-ID: Hi Here is the setup in the top level conf file ============================ user www-data; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; include /etc/nginx/sites-enabled/*; } ============================ Here are the messages from the error.log file in /var/log/nginx/error.log demo at Himalaya:/etc/nginx$ cat /var/log/nginx/error.log 2008/02/25 15:45:59 [error] 5813#0: *1 open() "/var/www/nginx-default/favicon.ico" failed (2: No such file or directory), client: 208.54.15.154, server: localhost, URL: "/favicon.ico", host: "67.207.139.172" 2008/02/25 15:55:51 [error] 5917#0: *1 open() "/var/www/nginx-default/favicon.ico" failed (2: No such file or directory), client: 208.54.15.154, server: localhost, URL: "/favicon.ico", host: "67.207.139.172" Cheers Hari On Thu, Feb 28, 2008 at 10:50 AM, Igor Sysoev wrote: > On Thu, Feb 28, 2008 at 10:43:06AM -0800, Hari wrote: > > > I am using the instruction given at > > http://wiki.codemongers.com/NginxHttpAuthBasicModule#auth_basic > > > > When i access the site i get prompted for username and password. > > After i enter the username and password i get the error "500 Internal > > Server Error" > > What is in error_log ? > > > > > When i have the following two lines commented out i do not get any error. 
> > # auth_basic "osusu"; > > # auth_basic_user_file conf/passwd; > > > > > > What am i doing wrong? > > > > Here is the setup of my conf file > > ========================== > > upstream domain1 { > > server 127.0.0.1:8000; > > server 127.0.0.1:8001; > > } > > > > server { > > listen 80; > > server_name www.osusu.com; > > rewrite ^/(.*) http://domain.com permanent; > > } > > > > > > server { > > listen 80; > > server_name osusu.com; > > > > access_log /home/demo/public_html/domain.com/shared/log/access.log; > > error_log /home/demo/public_html/domain.com/shared/log/error.log; > > > > root /home/demo/public_html/domain.com/current/public/; > > index index.html; > > > > location / { > > auth_basic "osusu"; > > auth_basic_user_file conf/passwd; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header Host $http_host; > > proxy_redirect false; > > > > if (-f $request_filename/index.html) { > > rewrite (.*) $1/index.html break; > > } > > if (-f $request_filename.html) { > > rewrite (.*) $1.html break; > > } > > if (!-f $request_filename) { > > proxy_pass http://domain1; > > break; > > } > > } > > } > > ================= > > > > I created the conf file using the utility htpasswd. > > > > Any help on this is greatly appretiated... 
> > > > -- > > Hariharan Venkata > > > > -- > Igor Sysoev > http://sysoev.ru/en/ > > -- Hariharan Venkata Phone - 408-890-9738 (Cell) From is at rambler-co.ru Thu Feb 28 22:22:53 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 28 Feb 2008 22:22:53 +0300 Subject: Error setting up http authentication - 500 Internal Server Error In-Reply-To: References: <20080228185023.GB23692@rambler-co.ru> Message-ID: <20080228192253.GC23692@rambler-co.ru> On Thu, Feb 28, 2008 at 11:09:08AM -0800, Hari wrote: > Hi > > Here is the setup in the top level conf file > > ============================ > user www-data; > worker_processes 1; > > error_log /var/log/nginx/error.log; > pid /var/run/nginx.pid; > > events { > worker_connections 1024; > } > > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > > sendfile on; > #tcp_nopush on; > > #keepalive_timeout 0; > keepalive_timeout 65; > tcp_nodelay on; > > gzip on; > > include /etc/nginx/sites-enabled/*; > > } > ============================ > > Here are the messages from the error.log file in /var/log/nginx/error.log > > demo at Himalaya:/etc/nginx$ cat /var/log/nginx/error.log > 2008/02/25 15:45:59 [error] 5813#0: *1 open() > "/var/www/nginx-default/favicon.ico" failed (2: No such file or > directory), client: 208.54.15.154, server: localhost, URL: > "/favicon.ico", host: "67.207.139.172" > 2008/02/25 15:55:51 [error] 5917#0: *1 open() > "/var/www/nginx-default/favicon.ico" failed (2: No such file or > directory), client: 208.54.15.154, server: localhost, URL: > "/favicon.ico", host: "67.207.139.172" There should be an error line at the same time when you tried to access site. 
> > Cheers > Hari > > > On Thu, Feb 28, 2008 at 10:50 AM, Igor Sysoev wrote: > > On Thu, Feb 28, 2008 at 10:43:06AM -0800, Hari wrote: > > > > > I am using the instruction given at > > > http://wiki.codemongers.com/NginxHttpAuthBasicModule#auth_basic > > > > > > When i access the site i get prompted for username and password. > > > After i enter the username and password i get the error "500 Internal > > > Server Error" > > > > What is in error_log ? > > > > > > > > > When i have the following two lines commented out i do not get any error. > > > # auth_basic "osusu"; > > > # auth_basic_user_file conf/passwd; > > > > > > > > > What am i doing wrong? > > > > > > Here is the setup of my conf file > > > ========================== > > > upstream domain1 { > > > server 127.0.0.1:8000; > > > server 127.0.0.1:8001; > > > } > > > > > > server { > > > listen 80; > > > server_name www.osusu.com; > > > rewrite ^/(.*) http://domain.com permanent; > > > } > > > > > > > > > server { > > > listen 80; > > > server_name osusu.com; > > > > > > access_log /home/demo/public_html/domain.com/shared/log/access.log; > > > error_log /home/demo/public_html/domain.com/shared/log/error.log; > > > > > > root /home/demo/public_html/domain.com/current/public/; > > > index index.html; > > > > > > location / { > > > auth_basic "osusu"; > > > auth_basic_user_file conf/passwd; > > > proxy_set_header X-Real-IP $remote_addr; > > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > > proxy_set_header Host $http_host; > > > proxy_redirect false; > > > > > > if (-f $request_filename/index.html) { > > > rewrite (.*) $1/index.html break; > > > } > > > if (-f $request_filename.html) { > > > rewrite (.*) $1.html break; > > > } > > > if (!-f $request_filename) { > > > proxy_pass http://domain1; > > > break; > > > } > > > } > > > } > > > ================= > > > > > > I created the conf file using the utility htpasswd. > > > > > > Any help on this is greatly appretiated... 
> > > > > > -- > > > Hariharan Venkata > > > > > > > -- > > Igor Sysoev > > http://sysoev.ru/en/ > > > > > > > > -- > Hariharan Venkata > Phone - 408-890-9738 (Cell) > -- Igor Sysoev http://sysoev.ru/en/ From hvenkata at gmail.com Thu Feb 28 22:40:05 2008 From: hvenkata at gmail.com (Hari) Date: Thu, 28 Feb 2008 11:40:05 -0800 Subject: Error setting up http authentication - 500 Internal Server Error In-Reply-To: <20080228192253.GC23692@rambler-co.ru> References: <20080228185023.GB23692@rambler-co.ru> <20080228192253.GC23692@rambler-co.ru> Message-ID: error is not being written to the error.log file..... the permission for the error log file is set as below demo at Himalaya:/etc/nginx/sites-available$ ls -al /var/log/nginx/error.log -rw-r--r-- 1 root root 416 Feb 25 15:55 /var/log/nginx/error.log Here are the two process one is running as root and the second one as www-data. demo at Himalaya:/etc/nginx/sites-available$ ps aux | grep nginx root 10159 0.0 0.3 27436 808 ? Ss 19:04 0:00 nginx: master process /usr/sbin/nginx www-data 10160 0.0 0.5 27884 1564 ? S 19:04 0:00 nginx: worker process the root as write permision so i am not sure why erorrs are not being logged!! 
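One thing worth checking here (an observation about the config quoted earlier in this thread, not something raised on the list): the osusu.com server block declares its own error_log, and a server-level error_log overrides the global one for requests handled by that server, so the auth failures may be landing in the per-site log rather than in /var/log/nginx/error.log:

```nginx
# From the vhost config quoted earlier -- request errors for this
# server go to the per-site log, not the global error.log:
server {
    listen 80;
    server_name osusu.com;

    error_log /home/demo/public_html/domain.com/shared/log/error.log;
    ...
}
```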
Hari On Thu, Feb 28, 2008 at 11:22 AM, Igor Sysoev wrote: > > On Thu, Feb 28, 2008 at 11:09:08AM -0800, Hari wrote: > > > Hi > > > > Here is the setup in the top level conf file > > > > ============================ > > user www-data; > > worker_processes 1; > > > > error_log /var/log/nginx/error.log; > > pid /var/run/nginx.pid; > > > > events { > > worker_connections 1024; > > } > > > > http { > > include /etc/nginx/mime.types; > > default_type application/octet-stream; > > > > access_log /var/log/nginx/access.log; > > error_log /var/log/nginx/error.log; > > > > sendfile on; > > #tcp_nopush on; > > > > #keepalive_timeout 0; > > keepalive_timeout 65; > > tcp_nodelay on; > > > > gzip on; > > > > include /etc/nginx/sites-enabled/*; > > > > } > > ============================ > > > > Here are the messages from the error.log file in /var/log/nginx/error.log > > > > demo at Himalaya:/etc/nginx$ cat /var/log/nginx/error.log > > 2008/02/25 15:45:59 [error] 5813#0: *1 open() > > "/var/www/nginx-default/favicon.ico" failed (2: No such file or > > directory), client: 208.54.15.154, server: localhost, URL: > > "/favicon.ico", host: "67.207.139.172" > > 2008/02/25 15:55:51 [error] 5917#0: *1 open() > > "/var/www/nginx-default/favicon.ico" failed (2: No such file or > > directory), client: 208.54.15.154, server: localhost, URL: > > "/favicon.ico", host: "67.207.139.172" > > There should be an error line at the same time when you tried to access site. > > > > > > > Cheers > > Hari > > > > > > On Thu, Feb 28, 2008 at 10:50 AM, Igor Sysoev wrote: > > > On Thu, Feb 28, 2008 at 10:43:06AM -0800, Hari wrote: > > > > > > > I am using the instruction given at > > > > http://wiki.codemongers.com/NginxHttpAuthBasicModule#auth_basic > > > > > > > > When i access the site i get prompted for username and password. > > > > After i enter the username and password i get the error "500 Internal > > > > Server Error" > > > > > > What is in error_log ? 
> > > > > > > > > > > > > When i have the following two lines commented out i do not get any error. > > > > # auth_basic "osusu"; > > > > # auth_basic_user_file conf/passwd; > > > > > > > > > > > > What am i doing wrong? > > > > > > > > Here is the setup of my conf file > > > > ========================== > > > > upstream domain1 { > > > > server 127.0.0.1:8000; > > > > server 127.0.0.1:8001; > > > > } > > > > > > > > server { > > > > listen 80; > > > > server_name www.osusu.com; > > > > rewrite ^/(.*) http://domain.com permanent; > > > > } > > > > > > > > > > > > server { > > > > listen 80; > > > > server_name osusu.com; > > > > > > > > access_log /home/demo/public_html/domain.com/shared/log/access.log; > > > > error_log /home/demo/public_html/domain.com/shared/log/error.log; > > > > > > > > root /home/demo/public_html/domain.com/current/public/; > > > > index index.html; > > > > > > > > location / { > > > > auth_basic "osusu"; > > > > auth_basic_user_file conf/passwd; > > > > proxy_set_header X-Real-IP $remote_addr; > > > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > > > proxy_set_header Host $http_host; > > > > proxy_redirect false; > > > > > > > > if (-f $request_filename/index.html) { > > > > rewrite (.*) $1/index.html break; > > > > } > > > > if (-f $request_filename.html) { > > > > rewrite (.*) $1.html break; > > > > } > > > > if (!-f $request_filename) { > > > > proxy_pass http://domain1; > > > > break; > > > > } > > > > } > > > > } > > > > ================= > > > > > > > > I created the conf file using the utility htpasswd. > > > > > > > > Any help on this is greatly appretiated... 
> > > > > > > > -- > > > > Hariharan Venkata > > > > > > > > > > -- > > > Igor Sysoev > > > http://sysoev.ru/en/ > > > > > > > > > > > > > > -- > > Hariharan Venkata > > Phone - 408-890-9738 (Cell) > > > > -- > > > Igor Sysoev > http://sysoev.ru/en/ > > -- Hariharan Venkata Phone - 408-890-9738 (Cell) From igor at pokelondon.com Thu Feb 28 22:41:34 2008 From: igor at pokelondon.com (Igor Clark) Date: Thu, 28 Feb 2008 19:41:34 +0000 Subject: Location problems In-Reply-To: <741757296.20080219172346@gostats.ru> References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> <741757296.20080219172346@gostats.ru> Message-ID: On 19 Feb 2008, at 11:23, Denis F. Latypoff wrote: > > - location ~ /admin/.* { > + location /admin { # not tested Thanks Denis, and sorry for the delay in replying. Unfortunately that didn't work, and I'm still having the same sort of problems with the location directive. On another PHP site, I'm trying to restrict access to /admin via IP. 
I have the following config, which works fine, though perhaps not optimal: > server { > listen 80; > server_name server.name; > > access_log /path/to/logs/access.log main; > error_log /path/to//logs/error.log info; > > location / { > root /path/to//public; > index index.php index.html; > > # if requesting /, rewrite to frontend.php and stop > rewrite ^/$ /frontend.php last; > > # Set $control_path to $my_request_uri, in case there > are any > # custom rules above that might have changed it > # Then, rewrite using the last rules > if (!-e $request_filename) { > rewrite ^/admin/(.*)$ /admin.php?CONTROL_PATH= > $1 last; > rewrite ^/speakers/(.+)/?$ /speakers/video/$1; > rewrite ^/financethemes/(.+)/?$ /financethemes/ > video/$1; > rewrite ^/transcripts/(speaker|theme)/(.+)/?$ / > transcripts/view/$1/$2; > rewrite ^(.*)$ /frontend.php?CONTROL_PATH=$1 > last; > } > > location ~ \.flv$ { > flv; > } > > location /admin { > allow 82.108.140.18; > deny all; > } > > # pass the PHP scripts to FastCGI server listening on > 127.0.0.1:9999 > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9999; > fastcgi_index index.php; > fastcgi_intercept_errors on; > include conf/fastcgi_params; > } > } > error_page 404 /404.html; > error_page 500 /500.html; > } I just want to do the following, but still have all the other directives work, so that rewrites and PHP work under /admin: location /admin { allow 1.2.3.4; deny all; } How should I go about this? Where should I put the /admin location block? Nothing I do seems to work. I understand that the first matched regular expression stops the search, but as I can't seem to get nesting locations to work, what should I do? (By the way, this is the first time we've used the FLV module, and we're really pleased with the results, so thanks!) Best wishes Igor On 19 Feb 2008, at 11:23, Denis F. 
Latypoff wrote: > Hello Igor, > > Tuesday, February 19, 2008, 5:04:48 PM, you wrote: > >> Hi folks, > >> I often have problems trying to use different locations without >> having >> to duplicate config. >> I think I must be thinking about it the wrong way! > >> Basically I just want to make /admin/ password-protected, but inherit >> all the other config. > >> So I tried this: > >> location / { >> include /path/to/php.conf; # includes all >> fastcgi stuff and some >> rewrites >> location ~ /admin/.* { >> auth_basic "Restricted"; >> auth_basic_user_file /path/to/ >> admin.htusers; >> } >> } > >> But it doesn't work, so I tried this way which I've made work before: > >> location / { >> include /path/to/php.conf; >> } > > - location ~ /admin/.* { > + location /admin { # not tested >> auth_basic "Restricted"; >> auth_basic_user_file /path/to/admin.htusers; >> include /path/to/php.conf; >> } > >> But this doesn't work either, it includes the PHP file but doesn't do >> the auth, and there's no error in the log. I've tried various >> permutations on ~ /admin/.* too. > >> What am I doing wrong? > >> Many thanks, >> Igor > >> -- >> Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 >> 5355 // www.pokelondon.com > > > > -- > Best regards, > Denis mailto:denis at gostats.ru > > -- Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From is at rambler-co.ru Thu Feb 28 22:45:11 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 28 Feb 2008 22:45:11 +0300 Subject: Error setting up http authentication - 500 Internal Server Error In-Reply-To: References: <20080228185023.GB23692@rambler-co.ru> <20080228192253.GC23692@rambler-co.ru> Message-ID: <20080228194511.GD23692@rambler-co.ru> On Thu, Feb 28, 2008 at 11:40:05AM -0800, Hari wrote: > error is not being written to the error.log file..... 
> > the permission for the error log file is set as below > > demo at Himalaya:/etc/nginx/sites-available$ ls -al /var/log/nginx/error.log > -rw-r--r-- 1 root root 416 Feb 25 15:55 /var/log/nginx/error.log > > Here are the two processes: one is running as root and the second one as www-data. > demo at Himalaya:/etc/nginx/sites-available$ ps aux | grep nginx > root 10159 0.0 0.3 27436 808 ? Ss 19:04 0:00 > nginx: master process /usr/sbin/nginx > www-data 10160 0.0 0.5 27884 1564 ? S 19:04 0:00 > nginx: worker process > > > root has write permission, so i am not sure why errors are not being logged!! Well, could you build nginx with debug: ./configure --with-debug ... and enable the debug log in nginx.conf: error_log /var/log/nginx/error.log debug; Then do a failed request and show the log. Note that the user/password will be plain text encoded in base64, so use a dummy user name and password. > Hari > > On Thu, Feb 28, 2008 at 11:22 AM, Igor Sysoev wrote: > > > > On Thu, Feb 28, 2008 at 11:09:08AM -0800, Hari wrote: > > > > > Hi > > > > > > Here is the setup in the top level conf file > > > > > > ============================ > > > user www-data; > > > worker_processes 1; > > > > > > error_log /var/log/nginx/error.log; > > > pid /var/run/nginx.pid; > > > > > > events { > > > worker_connections 1024; > > > } > > > > > > http { > > > include /etc/nginx/mime.types; > > > default_type application/octet-stream; > > > > > > access_log /var/log/nginx/access.log; > > > error_log /var/log/nginx/error.log; > > > > > > sendfile on; > > > #tcp_nopush on; > > > > > > #keepalive_timeout 0; > > > keepalive_timeout 65; > > > tcp_nodelay on; > > > > > > gzip on; > > > > > > include /etc/nginx/sites-enabled/*; > > > > > > } > > > ============================ > > > > > > Here are the messages from the error.log file in /var/log/nginx/error.log > > > > > > demo at Himalaya:/etc/nginx$ cat /var/log/nginx/error.log > > > 2008/02/25 15:45:59 [error] 5813#0: *1 open() > > >
"/var/www/nginx-default/favicon.ico" failed (2: No such file or > > > directory), client: 208.54.15.154, server: localhost, URL: > > > "/favicon.ico", host: "67.207.139.172" > > > 2008/02/25 15:55:51 [error] 5917#0: *1 open() > > > "/var/www/nginx-default/favicon.ico" failed (2: No such file or > > > directory), client: 208.54.15.154, server: localhost, URL: > > > "/favicon.ico", host: "67.207.139.172" > > > > There should be an error line at the same time when you tried to access site. > > > > > > > > > > > > Cheers > > > Hari > > > > > > > > > On Thu, Feb 28, 2008 at 10:50 AM, Igor Sysoev wrote: > > > > On Thu, Feb 28, 2008 at 10:43:06AM -0800, Hari wrote: > > > > > > > > > I am using the instruction given at > > > > > http://wiki.codemongers.com/NginxHttpAuthBasicModule#auth_basic > > > > > > > > > > When i access the site i get prompted for username and password. > > > > > After i enter the username and password i get the error "500 Internal > > > > > Server Error" > > > > > > > > What is in error_log ? > > > > > > > > > > > > > > > > > When i have the following two lines commented out i do not get any error. > > > > > # auth_basic "osusu"; > > > > > # auth_basic_user_file conf/passwd; > > > > > > > > > > > > > > > What am i doing wrong? 
> > > > > > > > > > Here is the setup of my conf file > > > > > ========================== > > > > > upstream domain1 { > > > > > server 127.0.0.1:8000; > > > > > server 127.0.0.1:8001; > > > > > } > > > > > > > > > > server { > > > > > listen 80; > > > > > server_name www.osusu.com; > > > > > rewrite ^/(.*) http://domain.com permanent; > > > > > } > > > > > > > > > > > > > > > server { > > > > > listen 80; > > > > > server_name osusu.com; > > > > > > > > > > access_log /home/demo/public_html/domain.com/shared/log/access.log; > > > > > error_log /home/demo/public_html/domain.com/shared/log/error.log; > > > > > > > > > > root /home/demo/public_html/domain.com/current/public/; > > > > > index index.html; > > > > > > > > > > location / { > > > > > auth_basic "osusu"; > > > > > auth_basic_user_file conf/passwd; > > > > > proxy_set_header X-Real-IP $remote_addr; > > > > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > > > > proxy_set_header Host $http_host; > > > > > proxy_redirect false; > > > > > > > > > > if (-f $request_filename/index.html) { > > > > > rewrite (.*) $1/index.html break; > > > > > } > > > > > if (-f $request_filename.html) { > > > > > rewrite (.*) $1.html break; > > > > > } > > > > > if (!-f $request_filename) { > > > > > proxy_pass http://domain1; > > > > > break; > > > > > } > > > > > } > > > > > } > > > > > ================= > > > > > > > > > > I created the conf file using the utility htpasswd. > > > > > > > > > > Any help on this is greatly appretiated... 
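One frequent cause of exactly this 500, offered as a guess rather than something confirmed in the thread: a relative auth_basic_user_file path such as "conf/passwd" is typically resolved against the nginx prefix (for example /usr/local/nginx/conf/passwd), not against the vhost file, and the file must be readable by the worker user (www-data above); if nginx cannot open the file it returns 500. A minimal sketch with an absolute, hypothetical path:

```nginx
location / {
    auth_basic           "osusu";
    # An absolute path avoids prefix-relative resolution surprises; the
    # path below is hypothetical -- point it at the file htpasswd wrote,
    # and make sure the www-data worker process can read it.
    auth_basic_user_file /etc/nginx/passwd;

    proxy_pass http://domain1;
}
```

The debug log that Igor asks for will show the open() call on the password file, which settles whether the path or the permissions are at fault.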
> > > > > > > > > > -- > > > > > Hariharan Venkata > > > > > > > > > > > > > -- > > > > Igor Sysoev > > > > http://sysoev.ru/en/ > > > > > > > > > > > > > > > > > > > > -- > > > Hariharan Venkata > > > Phone - 408-890-9738 (Cell) > > > > > > > -- > > > > > > Igor Sysoev > > http://sysoev.ru/en/ > > > > > > > > -- > Hariharan Venkata > Phone - 408-890-9738 (Cell) > -- Igor Sysoev http://sysoev.ru/en/ From nginx at d-cohen.com Thu Feb 28 23:17:07 2008 From: nginx at d-cohen.com (nginx at d-cohen.com) Date: Thu, 28 Feb 2008 15:17:07 -0500 Subject: nginx front-end to sharepoint Message-ID: <47C716C3.7050604@d-cohen.com> Greetings, I am attempting to replace squid 3.0 with nginx as a reverse proxy to a sharepoint server with SSL and user-authentication. The problem I am having is nginx does not appear to pass the credentials to the real server w/o modifying them (after several failed attempts, once simply gets access denied). I am able to accomplish this in squid with this option: login=PASS I'm wondering if anybody has any insight/experience into this issue. I have included the relevant portions of my nginx.conf and my old squid.conf. Any help would be greatly appreciated. Thank you. 
nginx.conf: http { server { listen 443; server_name 192.168.0.10; ssl on; ssl_certificate /conf/nginx/cert.pem; ssl_certificate_key /conf/nginx/key.pem; location / { proxy_pass https://192.168.0.1/; } } } squid.conf: https_port 443 cert=/conf/squid/cert.pem key=/conf/squid/key.pem \ cafile=/conf/squid/ca.pem vhost cache_peer 192.168.0.1 parent 443 0 login=PASS no-query ssl proxy-only \ originserver sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN From is at rambler-co.ru Thu Feb 28 23:29:28 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 28 Feb 2008 23:29:28 +0300 Subject: nginx front-end to sharepoint In-Reply-To: <47C716C3.7050604@d-cohen.com> References: <47C716C3.7050604@d-cohen.com> Message-ID: <20080228202928.GE23692@rambler-co.ru> On Thu, Feb 28, 2008 at 03:17:07PM -0500, nginx at d-cohen.com wrote: > I am attempting to replace squid 3.0 with nginx as a reverse proxy to a > sharepoint server with SSL and user-authentication. The problem I am > having is nginx does not appear to pass the credentials to the real > server w/o modifying them (after several failed attempts, once simply > gets access denied). I am able to accomplish this in squid with this > option: login=PASS nginx should pass all headers as is. Do you use Basic authentication ? > I'm wondering if anybody has any insight/experience into this issue. I > have included the relevant portions of my nginx.conf and my old > squid.conf. Any help would be greatly appreciated. > > Thank you. 
> > nginx.conf: > > http { > server { > listen 443; > server_name 192.168.0.10; > > ssl on; > ssl_certificate /conf/nginx/cert.pem; > ssl_certificate_key /conf/nginx/key.pem; > > location / { > proxy_pass https://192.168.0.1/; > } > } > } > > > squid.conf: > > https_port 443 cert=/conf/squid/cert.pem key=/conf/squid/key.pem \ > cafile=/conf/squid/ca.pem vhost > cache_peer 192.168.0.1 parent 443 0 login=PASS no-query ssl proxy-only \ > originserver sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN > -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Thu Feb 28 23:50:16 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Thu, 28 Feb 2008 23:50:16 +0300 Subject: Location problems In-Reply-To: References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> <741757296.20080219172346@gostats.ru> Message-ID: <20080228205016.GF23692@rambler-co.ru> On Thu, Feb 28, 2008 at 07:41:34PM +0000, Igor Clark wrote: > On 19 Feb 2008, at 11:23, Denis F. Latypoff wrote: > > > > >- location ~ /admin/.* { > >+ location /admin { # not tested > > Thanks Denis, and sorry for the delay in replying. > > Unfortunately that didn't work, and I'm still having the same sort of > problems with the location directive. > > On another PHP site, I'm trying to restrict access to /admin via IP. 
> I have the following config, which works fine, though perhaps not > optimal: > > > server { > > listen 80; > > server_name server.name; > > > > access_log /path/to/logs/access.log main; > > error_log /path/to//logs/error.log info; > > > > location / { > > root /path/to//public; > > index index.php index.html; > > > > # if requesting /, rewrite to frontend.php and stop > > rewrite ^/$ /frontend.php last; > > > > # Set $control_path to $my_request_uri, in case there > >are any > > # custom rules above that might have changed it > > # Then, rewrite using the last rules > > if (!-e $request_filename) { > > rewrite ^/admin/(.*)$ > > /admin.php?CONTROL_PATH= $1 last; > > rewrite ^/speakers/(.+)/?$ > > /speakers/video/$1; > > rewrite ^/financethemes/(.+)/?$ > > /financethemes/ video/$1; > > rewrite ^/transcripts/(speaker|theme)/(.+)/?$ / >transcripts/view/$1/$2; > > rewrite ^(.*)$ > > /frontend.php?CONTROL_PATH=$1 last; > > } > > > > location ~ \.flv$ { > > flv; > > } > > > > location /admin { > > allow 82.108.140.18; > > deny all; > > } > > > > # pass the PHP scripts to FastCGI server listening on > >127.0.0.1:9999 > > location ~ \.php$ { > > fastcgi_pass 127.0.0.1:9999; > > fastcgi_index index.php; > > fastcgi_intercept_errors on; > > include conf/fastcgi_params; > > } > > } > > error_page 404 /404.html; > > error_page 500 /500.html; > > } > > I just want to do the following, but still have all the other > directives work, so that rewrites and PHP work under /admin:

fastcgi_index index.php;
fastcgi_intercept_errors on;
include conf/fastcgi_params;

location ^~ /admin/ {
    allow 1.2.3.4;
    deny all;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9999;
    }
}

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9999;
}

> > How should I go about this? Where should I put the /admin location > block? Nothing I do seems to work. I understand that the first matched > regular expression stops the search, but as I can't seem to get > nesting locations to work, what should I do?
> location /admin { > allow 1.2.3.4; > deny all; > } > (By the way, this is the first time we've used the FLV module, and > we're really pleased with the results, so thanks!) > > Best wishes > Igor > > On 19 Feb 2008, at 11:23, Denis F. Latypoff wrote: > > >Hello Igor, > > > >Tuesday, February 19, 2008, 5:04:48 PM, you wrote: > > > >>Hi folks, > > > >>I often have problems trying to use different locations without > >>having > >>to duplicate config. > >>I think I must be thinking about it the wrong way! > > > >>Basically I just want to make /admin/ password-protected, but inherit > >>all the other config. > > > >>So I tried this: > > > >> location / { > >> include /path/to/php.conf; # includes all > >>fastcgi stuff and some > >>rewrites > >> location ~ /admin/.* { > >> auth_basic "Restricted"; > >> auth_basic_user_file /path/to/ > >>admin.htusers; > >> } > >> } > > > >>But it doesn't work, so I tried this way which I've made work before: > > > >> location / { > >> include /path/to/php.conf; > >> } > > > >- location ~ /admin/.* { > >+ location /admin { # not tested > >> auth_basic "Restricted"; > >> auth_basic_user_file /path/to/admin.htusers; > >> include /path/to/php.conf; > >> } > > > >>But this doesn't work either, it includes the PHP file but doesn't do > >>the auth, and there's no error in the log. I've tried various > >>permutations on ~ /admin/.* too. > > > >>What am I doing wrong? > > > >>Many thanks, > >>Igor > > > >>-- > >>Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 > >>5355 // www.pokelondon.com > > > > > > > >-- > >Best regards, > >Denis mailto:denis at gostats.ru > > > > > > -- > Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 > 5355 // www.pokelondon.com > > > > -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Fri Feb 29 00:00:17 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 00:00:17 +0300 Subject: proxy_buffering=off, potential problems? other solutions? 
In-Reply-To: <47C512FB.2020208@nekomancer.net> References: <47C512FB.2020208@nekomancer.net> Message-ID: <20080228210017.GG23692@rambler-co.ru> On Wed, Feb 27, 2008 at 08:36:27AM +0100, G?bor Farkas wrote: > >>i have a fairly usual configuration of an nginx webserver + an > >>apache-based application-server behind it. > >> > >>when requests come in, then nginx proxies it to apache, etc. > >> > >>my problem is, that in certain cases, i need that when apache sends the > >>response to nginx, nginx should immediately send it to the client. > >> > >>i can solve this by simply turning proxy_buffering off, with > >>"proxy_buffering = off" > > > >If response will be bigger than proxy_buffer_size, then backend will > >be tied to nginx until the data will be sent to cliant. > >The maximum data size that nginx can read from backend at once in this mode > >is proxy_buffer_size. > > > > maybe i'm misunderstanding something here. > > as far as i see, there are 2 separate "features": > > 1. nginx reads the whole response from the proxied apache and "frees" > apache, even when nginx is not immediately able to send it to the client. Yes. > 2. nginx does not start to send the response to the client until the > whole response is read from apache (or until it has read > "proxy_buffer_size" bytes from apache) Yes. > my problem is #2, not #1. it seems that doing a "proxy_buffering = off" > solves #2, but maybe it does also #1. > > is there a way to only do #2, but not #1? No. > maybe it helps if i explain my situation in more detail: > > the apache web-app generates a webpage dynamically, the following way: > > A. generate the first part > B. do some computation > C. generate the second part > > it's very important that after step #A, the client immediately gets that > part of the webpage. with proxy_buffering enabled, it does not happen, > because nginx seems to wait for the whole response (or for enough data > to fill it's buffers). 
> > it seems that "proxy_buffering=off" achieves what i need. but as i > understood from your response, it also means that the apache-worker will > be blocked until the whole response is sent to the client. is there a > way to have what i need, and still have buffering enabled? :) > > (well, there is the possibility to send a lot of empty-space in the html > to fill nginx's buffers, but that's not a nice solution :-) Yes, you are right: the apache worker will be blocked until the whole response is sent to the client. Initially, proxying was buffered only, because an accelerator should get the response as quickly as possible and free the backend. Then the non-buffered hack was added, basically to support memcached. Then "proxy_buffering off" appeared. I want to rewrite all the upstream code, including balancers; it will be the next major target after caching is complete. -- Igor Sysoev http://sysoev.ru/en/ From nginx at d-cohen.com Fri Feb 29 00:18:16 2008 From: nginx at d-cohen.com (nginx at d-cohen.com) Date: Thu, 28 Feb 2008 16:18:16 -0500 Subject: nginx front-end to sharepoint In-Reply-To: <20080228202928.GE23692@rambler-co.ru> References: <47C716C3.7050604@d-cohen.com> <20080228202928.GE23692@rambler-co.ru> Message-ID: <47C72518.1020900@d-cohen.com> Yes, we are using Basic authentication. Igor Sysoev wrote: > On Thu, Feb 28, 2008 at 03:17:07PM -0500, nginx at d-cohen.com wrote: > >> I am attempting to replace squid 3.0 with nginx as a reverse proxy to a >> sharepoint server with SSL and user-authentication. The problem I am >> having is nginx does not appear to pass the credentials to the real >> server w/o modifying them (after several failed attempts, once simply >> gets access denied). I am able to accomplish this in squid with this >> option: login=PASS > > nginx should pass all headers as is. > Do you use Basic authentication ? > >> I'm wondering if anybody has any insight/experience into this issue.
I >> have included the relevant portions of my nginx.conf and my old >> squid.conf. Any help would be greatly appreciated. >> >> Thank you. >> >> nginx.conf: >> >> http { >> server { >> listen 443; >> server_name 192.168.0.10; >> >> ssl on; >> ssl_certificate /conf/nginx/cert.pem; >> ssl_certificate_key /conf/nginx/key.pem; >> >> location / { >> proxy_pass https://192.168.0.1/; >> } >> } >> } >> >> >> squid.conf: >> >> https_port 443 cert=/conf/squid/cert.pem key=/conf/squid/key.pem \ >> cafile=/conf/squid/ca.pem vhost >> cache_peer 192.168.0.1 parent 443 0 login=PASS no-query ssl proxy-only \ >> originserver sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN >> > From sean at ardishealth.com Fri Feb 29 01:09:18 2008 From: sean at ardishealth.com (Sean Allen) Date: Thu, 28 Feb 2008 17:09:18 -0500 Subject: ngx_http_memcached_module question Message-ID: can i do the following: check memcache for existence of content. if not continue our normal processing which is currently: check for static file if it exists, serve it if it doesnt exist, pass request off to upstream server. From joe at joetify.com Fri Feb 29 05:06:50 2008 From: joe at joetify.com (Joe Williams) Date: Thu, 28 Feb 2008 20:06:50 -0600 Subject: response times and network io Message-ID: <47C768BA.3050109@joetify.com> i am performing some httperf tests against apache and nginx. something i noticed that piqued my interest were the consistency of response times (0.4 ms each run regardless of number of request, much lower than apache in all cases) and network I/O (consistently higher than apache regardless of number of request). it also uses less cpu than apache and doesn't nearly drive up the load. are these normal results? is there a mechanism in nginx that keeps the response times low and consistent? also, is it normal that it uses more network I/O? if so, what is the cause? to me it would seem like that it uses more bandwidth to respond to the same number of requests which seems inefficient. 
please correct me if i am wrong. i am just trying to understand the core differences in how nginx works in comparison to apache and why i would see these performance differences. thanks for the help. -joe -- Name: Joseph A. Williams Email: joe at joetify.com From joe at joetify.com Fri Feb 29 05:24:24 2008 From: joe at joetify.com (Joe Williams) Date: Thu, 28 Feb 2008 20:24:24 -0600 Subject: response times and network io In-Reply-To: <47C768BA.3050109@joetify.com> References: <47C768BA.3050109@joetify.com> Message-ID: <47C76CD8.90605@joetify.com> please excuse my typo. regarding network I/O nginx uses consistently lower I/O than apache. regardless i am curious about how it processes requests differently to obtain lower response times and network I/O. thanks for any help you can provide. -Joe Joe Williams wrote: > i am performing some httperf tests against apache and nginx. something > i noticed that piqued my interest were the consistency of response > times (0.4 ms each run regardless of number of request, much lower > than apache in all cases) and network I/O (consistently higher than > apache regardless of number of request). it also uses less cpu than > apache and doesn't nearly drive up the load. > > are these normal results? is there a mechanism in nginx that keeps the > response times low and consistent? also, is it normal that it uses > more network I/O? if so, what is the cause? to me it would seem like > that it uses more bandwidth to respond to the same number of requests > which seems inefficient. > > please correct me if i am wrong. i am just trying to understand the > core differences in how nginx works in comparison to apache and why i > would see these performance differences. > > thanks for the help. > > -joe > -- Name: Joseph A. 
Williams Email: joe at joetify.com From just.starting at gmail.com Fri Feb 29 05:33:12 2008 From: just.starting at gmail.com (just starting) Date: Fri, 29 Feb 2008 08:03:12 +0530 Subject: having problem with ./configure Message-ID: <3898fa730802281833sc5b518ayfce18e91fcf8f440@mail.gmail.com> hi all, I tried to install nginx in a new machine and got the following error: $ ./configure checking for OS + Linux 2.6.9-22.ELsmp i686 checking for C compiler ... not found After that i run $yum install gcc* which installed abt 17 packages. Still getting same prob. How to resolve this. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From developerhondev at gmail.com Fri Feb 29 09:52:04 2008 From: developerhondev at gmail.com (HonDev Developer) Date: Fri, 29 Feb 2008 17:52:04 +1100 Subject: 404 error on WPMU In-Reply-To: <47BE6BDA.4070705@agrakom.com> References: <47BE4D5A.2010109@agrakom.com> <13c357830802212121k59172406ge1b391f0887056c0@mail.gmail.com> <47BE6BDA.4070705@agrakom.com> Message-ID: After this line: rewrite ^.*/files/(.*) /wp-content/blogs.php?file=$1; try adding this: rewrite ^/wp-admin/ /wp-admin/index.php last; On Fri, Feb 22, 2008 at 5:29 PM, dika wrote: > Thanks for your suggestion sir, but unfortunately it doesn't work.. > I still get 404 error alert. > > any advice ? > > > -- > anDika > > > Kiril Angov wrote: > > # Look for existence of PHP index file. > # Don't break here...just rewrite it. > if (-f $request_filename/index.php) { > rewrite (.*) $1/index.php; > } > > On Thu, Feb 21, 2008 at 11:19 PM, dika wrote: > > > Hai Teams, > > > > I've installed NginX to host my Wordpress MU. > > Everything running well, but one thing didn't works properly. > > > > When I use this : http://202.158.66.216/wp-admin/ > > I got *Error 404 - Not Found. > > > > *But if I use : http://202.158.66.216/wp-admin/index.php > > everything's running well. > > > > What should I do to make this run without adding /index.php ? 
> > > > here are my nginx.conf : > > > > ------- > > server { > > listen 80; > > server_name 202.158.66.216 ; > > error_log /var/log/nginx/error.lo; > > location ~* > > ^.+\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ > > { > > root /data/blog/wp; > > expires 30d; > > break; > > } > > > > location / { > > root /data/blog/wp; > > index index.html index.htm index.php; > > rewrite ^.*/files/(.*) /wp-content/blogs.php?file=$1; > > > > if (!-e $request_filename) { > > rewrite ^.+?(/wp-.*) $1 last; > > rewrite ^.+?(/.*\.php)$ $1 last; > > } > > > > if ($query_string !~ ".*s=.*") { > > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > > } > > > > if ($http_cookie !~ "^.*comment_author_.*$" ) { > > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > > } > > > > if ($http_cookie !~ "^.*wordpressuser.*$" ) { > > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html; > > } > > > > if ($http_cookie !~ "^.*wp-postpass_.*$" ) { > > rewrite ^(.*) /wp-content/cache/supercache/$http_host/$1index.html > > break; > > } > > > > error_page 404 = @tricky; > > } > > error_page 500 502 503 504 /50x.html; > > location = /50x.html { > > root html; > > } > > > > location @tricky { > > rewrite ^ /index.php last; > > fastcgi_pass 127.0.0.1:9000; > > fastcgi_index index.php; > > fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; > > include /opt/nginx/conf/fastcgi_params; > > } > > > > location ~ \.php$ { > > fastcgi_pass 127.0.0.1:9000; > > fastcgi_index index.php; > > fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; > > include /opt/nginx/conf/fastcgi_params; > > } > > } > > > > -- > > > > Thanks for advice. > > > > *Andika* > > Indonesian > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From al-nginx at none.at Fri Feb 29 10:28:50 2008 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 29 Feb 2008 08:28:50 +0100 Subject: having problem with ./configure In-Reply-To: <3898fa730802281833sc5b518ayfce18e91fcf8f440@mail.gmail.com> References: <3898fa730802281833sc5b518ayfce18e91fcf8f440@mail.gmail.com> Message-ID: <20080229072850.GA28250@none.at> Hi, On Fre 29.02.2008 08:03, just starting wrote: > >I tried to install nginx in a new machine and got the following error: > >$ ./configure >checking for OS > + Linux 2.6.9-22.ELsmp i686 >checking for C compiler ... not found > > >After that i run >$yum install gcc* >which installed abt 17 packages. > >Still getting same prob. what is the content of objs/autoconf.err? what shows gcc -version? what is the output of rpm -qa|egrep gcc? Cheers Aleks From andika at agrakom.com Fri Feb 29 10:41:00 2008 From: andika at agrakom.com (dika) Date: Fri, 29 Feb 2008 14:41:00 +0700 Subject: 404 error on WPMU In-Reply-To: References: <47BE4D5A.2010109@agrakom.com> <13c357830802212121k59172406ge1b391f0887056c0@mail.gmail.com> <47BE6BDA.4070705@agrakom.com> Message-ID: <47C7B70C.30309@agrakom.com> wow.. works like a charms..! thank you so much. -- anDika_ HonDev Developer wrote: > After this line: > rewrite ^.*/files/(.*) /wp-content/blogs.php?file=$1; > > try adding this: > > rewrite ^/wp-admin/ /wp-admin/index.php last; > > > > On Fri, Feb 22, 2008 at 5:29 PM, dika > wrote: > > Thanks for your suggestion sir, but unfortunately it doesn't work.. > I still get 404 error alert. > > any advice ? > > > -- > anDika > > > Kiril Angov wrote: >> # Look for existence of PHP index file. >> # Don't break here...just rewrite it. >> if (-f $request_filename/index.php) { >> rewrite (.*) $1/index.php; >> } >> >> On Thu, Feb 21, 2008 at 11:19 PM, dika > > wrote: >> >> Hai Teams, >> >> I've installed NginX to host my Wordpress MU. >> Everything running well, but one thing didn't works properly. 
>> >> When I use this : http://202.158.66.216/wp-admin/ >> I got *Error 404 - Not Found. >> >> *But if I use : http://202.158.66.216/wp-admin/index.php >> everything's running well. >> >> What should I do to make this run without adding /index.php ? >> >> here are my nginx.conf : >> >> ------- >> server { >> listen 80; >> server_name 202.158.66.216 ; >> error_log /var/log/nginx/error.lo; >> location ~* >> ^.+\.(html|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ >> { >> root /data/blog/wp; >> expires 30d; >> break; >> } >> >> location / { >> root /data/blog/wp; >> index index.html index.htm index.php; >> rewrite ^.*/files/(.*) /wp-content/blogs.php?file=$1; >> >> if (!-e $request_filename) { >> rewrite ^.+?(/wp-.*) $1 last; >> rewrite ^.+?(/.*\.php)$ $1 last; >> } >> >> if ($query_string !~ ".*s=.*") { >> rewrite ^(.*) >> /wp-content/cache/supercache/$http_host/$1index.html; >> } >> >> if ($http_cookie !~ "^.*comment_author_.*$" ) { >> rewrite ^(.*) >> /wp-content/cache/supercache/$http_host/$1index.html; >> } >> >> if ($http_cookie !~ "^.*wordpressuser.*$" ) { >> rewrite ^(.*) >> /wp-content/cache/supercache/$http_host/$1index.html; >> } >> >> if ($http_cookie !~ "^.*wp-postpass_.*$" ) { >> rewrite ^(.*) >> /wp-content/cache/supercache/$http_host/$1index.html >> break; >> } >> >> error_page 404 = @tricky; >> } >> error_page 500 502 503 504 /50x.html; >> location = /50x.html { >> root html; >> } >> >> location @tricky { >> rewrite ^ /index.php last; >> fastcgi_pass 127.0.0.1:9000 ; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; >> include /opt/nginx/conf/fastcgi_params; >> } >> >> location ~ \.php$ { >> fastcgi_pass 127.0.0.1:9000 ; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME /data/blog/wp$fastcgi_script_name; >> include /opt/nginx/conf/fastcgi_params; >> } >> } >> >> -- >> >> Thanks for advice. 
>> >> *Andika* >> Indonesian >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From just.starting at gmail.com Fri Feb 29 11:08:58 2008 From: just.starting at gmail.com (just starting) Date: Fri, 29 Feb 2008 13:38:58 +0530 Subject: having problem with ./configure In-Reply-To: <20080229072850.GA28250@none.at> References: <3898fa730802281833sc5b518ayfce18e91fcf8f440@mail.gmail.com> <20080229072850.GA28250@none.at> Message-ID: <3898fa730802290008x4f27b565obfc2e99a60769c71@mail.gmail.com> hi, >>>>what is the content of objs/autoconf.err? ---------------------------------------- checking for C compiler objs/autotest.c:2:23: /usr/include/sys/types.h: Permission denied ---------- #include int main() { ; return 0; } ---------- gcc -o objs/autotest objs/autotest.c ---------- >>> gcc-version gcc version 3.4.6 20060404 (Red Hat 3.4.6-9) >>>what is the output of rpm -qa|egrep gcc? compat-gcc-32-3.2.3-47.3 gcc-java-3.4.6-9 gcc4-c++-4.1.2-14.EL4 compat-gcc-32-c++-3.2.3-47.3 libgcc-3.4.6-9 gcc-3.4.6-9 gcc4-4.1.2-14.EL4 gcc4-gfortran-4.1.2-14.EL4 gcc-objc-3.4.6-9 gcc-gnat-3.4.6-9 gcc-c++-3.4.6-9 gcc-g77-3.4.6-9 gcc4-java-4.1.2-14.EL4 Ok, what I think is happening is I have to do sudo ./configure. Developers can help in pointing out the error file generated after build. Thanks, Paritosh. On Fri, Feb 29, 2008 at 12:58 PM, Aleksandar Lazic wrote: > Hi, > > On Fre 29.02.2008 08:03, just starting wrote: > > > >I tried to install nginx in a new machine and got the following error: > > > >$ ./configure > >checking for OS > > + Linux 2.6.9-22.ELsmp i686 > >checking for C compiler ... not found > > > > > >After that i run > >$yum install gcc* > >which installed abt 17 packages. > > > >Still getting same prob. > > what is the content of objs/autoconf.err? > what shows gcc -version? > what is the output of rpm -qa|egrep gcc? > > Cheers > > Aleks > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From is at rambler-co.ru Fri Feb 29 11:21:44 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 11:21:44 +0300 Subject: having problem with ./configure In-Reply-To: <3898fa730802290008x4f27b565obfc2e99a60769c71@mail.gmail.com> References: <3898fa730802281833sc5b518ayfce18e91fcf8f440@mail.gmail.com> <20080229072850.GA28250@none.at> <3898fa730802290008x4f27b565obfc2e99a60769c71@mail.gmail.com> Message-ID: <20080229082144.GA34623@rambler-co.ru> On Fri, Feb 29, 2008 at 01:38:58PM +0530, just starting wrote: > hi, > > >>>>what is the content of objs/autoconf.err? > ---------------------------------------- > checking for C compiler > objs/autotest.c:2:23: /usr/include/sys/types.h: Permission denied > ---------- > #include <sys/types.h> > > int main() { > ; > return 0; > } > ---------- > gcc -o objs/autotest objs/autotest.c > ---------- > >>> gcc-version > gcc version 3.4.6 20060404 (Red Hat 3.4.6-9) > > >>>what is the output of rpm -qa|egrep gcc? > > compat-gcc-32-3.2.3-47.3 > gcc-java-3.4.6-9 > gcc4-c++-4.1.2-14.EL4 > compat-gcc-32-c++-3.2.3-47.3 > libgcc-3.4.6-9 > gcc-3.4.6-9 > gcc4-4.1.2-14.EL4 > gcc4-gfortran-4.1.2-14.EL4 > gcc-objc-3.4.6-9 > gcc-gnat-3.4.6-9 > gcc-c++-3.4.6-9 > gcc-g77-3.4.6-9 > gcc4-java-4.1.2-14.EL4 > > OK, what I think is happening is that I have to run sudo ./configure. > > Perhaps the developers can point out which error file is generated after the build. You have a broken development environment: header files should be readable by everyone. You should not run ./configure and make as root. 
-- Igor Sysoev http://sysoev.ru/en/ From scyz2 at 163.com Fri Feb 29 11:47:15 2008 From: scyz2 at 163.com (=?GBK?B?0e7NotPC?=) Date: Fri, 29 Feb 2008 16:47:15 +0800 (CST) Subject: having problem with ./configure In-Reply-To: <20080229082144.GA34623@rambler-co.ru> References: <20080229082144.GA34623@rambler-co.ru> <3898fa730802281833sc5b518ayfce18e91fcf8f440@mail.gmail.com> <20080229072850.GA28250@none.at> <3898fa730802290008x4f27b565obfc2e99a60769c71@mail.gmail.com> Message-ID: <8921993.531271204274835122.JavaMail.coremail@bj163app126.163.com> Hi sir! Try these commands: yum groupinstall "Development Tools" ./configure On 2008-02-29, "Igor Sysoev" wrote: >On Fri, Feb 29, 2008 at 01:38:58PM +0530, just starting wrote: > >> hi, >> >> >>>>what is the content of objs/autoconf.err? >> ---------------------------------------- >> checking for C compiler >> objs/autotest.c:2:23: /usr/include/sys/types.h: Permission denied >> ---------- >> #include <sys/types.h> >> >> int main() { >> ; >> return 0; >> } >> ---------- >> gcc -o objs/autotest objs/autotest.c >> ---------- >> >>> gcc-version >> gcc version 3.4.6 20060404 (Red Hat 3.4.6-9) >> >> >>>what is the output of rpm -qa|egrep gcc? >> >> compat-gcc-32-3.2.3-47.3 >> gcc-java-3.4.6-9 >> gcc4-c++-4.1.2-14.EL4 >> compat-gcc-32-c++-3.2.3-47.3 >> libgcc-3.4.6-9 >> gcc-3.4.6-9 >> gcc4-4.1.2-14.EL4 >> gcc4-gfortran-4.1.2-14.EL4 >> gcc-objc-3.4.6-9 >> gcc-gnat-3.4.6-9 >> gcc-c++-3.4.6-9 >> gcc-g77-3.4.6-9 >> gcc4-java-4.1.2-14.EL4 >> >> OK, what I think is happening is that I have to run sudo ./configure. >> >> Perhaps the developers can point out which error file is generated after the build. > >You have a broken development environment: header files should be readable >by everyone. You should not run ./configure and make as root. > > >-- >Igor Sysoev >http://sysoev.ru/en/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alex at purefiction.net Fri Feb 29 12:39:24 2008 From: alex at purefiction.net (Alexander Staubo) Date: Fri, 29 Feb 2008 10:39:24 +0100 Subject: Nginx + fair load balancer patch looping Message-ID: <88daf38c0802290139h5604028es6a5c19f5dc8277df@mail.gmail.com> An Nginx instance suddenly started spewing the following to its error log at a rate of about 1GB/minute, and using a bit more CPU than usual: 2008/02/29 10:33:47 [error] 16875#0: *126309461 upstream prematurely closed connection while reading response header from upstream [...] Aside from the excessive logging, everything else seemed normal. Our next_upstream setting is: proxy_next_upstream error invalid_header; Restarting Nginx fixed the problem. Could this be the fair load balancer going haywire? Alexander. From igor at pokelondon.com Fri Feb 29 12:39:20 2008 From: igor at pokelondon.com (Igor Clark) Date: Fri, 29 Feb 2008 09:39:20 +0000 Subject: Location problems In-Reply-To: <20080228205016.GF23692@rambler-co.ru> References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> <741757296.20080219172346@gostats.ru> <20080228205016.GF23692@rambler-co.ru> Message-ID: Hi Igor, thank you very much. When I did location ~^ /admin/ it still gave a 404, but when I changed it to location ~^ /admin.php it worked perfectly. It seems I've been trying to apply "location" to pre-rewrite URLs, which just won't work - is that right? 
Igor On 28 Feb 2008, at 20:50, Igor Sysoev wrote: > On Thu, Feb 28, 2008 at 07:41:34PM +0000, Igor Clark wrote: > >> I just want to do the following, but still have all the other >> directives work, so that rewrites and PHP work under /admin: > > fastcgi_index index.php; > fastcgi_intercept_errors on; > include conf/fastcgi_params; > > location ^~ /admin/ { > > allow 1.2.3.4; > deny all; > > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9999; > } > } > > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9999; > } -- Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com From cova at ferrara.linux.it Fri Feb 29 13:06:10 2008 From: cova at ferrara.linux.it (Fabio Coatti) Date: Fri, 29 Feb 2008 11:06:10 +0100 Subject: Deprecated syscall Message-ID: <200802291106.11237.cova@ferrara.linux.it> With latest linux kernels (can't recall on the spot when I saw this for the first time, something around 2.6.23.XX I suppose) I get this warning on dmesg: warning: process `nginx' used the deprecated sysctl system call with 1.33. (Linux 2.6.24.2) I guess that at this moment this message is not harmful, but in future maybe it can lead to problems. maybe a small change in source code can fix this, but I'm not a developer so I'm only able to report and not to fix, I fear :) -- Fabio "Cova" Coatti http://members.ferrara.linux.it/cova Ferrara Linux Users Group http://ferrara.linux.it GnuPG fp:9765 A5B6 6843 17BC A646 BE8C FA56 373A 5374 C703 Old SysOps never die... they simply forget their password. 
From is at rambler-co.ru Fri Feb 29 13:30:33 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 13:30:33 +0300 Subject: Deprecated syscall In-Reply-To: <200802291106.11237.cova@ferrara.linux.it> References: <200802291106.11237.cova@ferrara.linux.it> Message-ID: <20080229103033.GA39144@rambler-co.ru> On Fri, Feb 29, 2008 at 11:06:10AM +0100, Fabio Coatti wrote: > With latest linux kernels (can't recall on the spot when I saw this for the > first time, something around 2.6.23.XX I suppose) I get this warning on > dmesg: > > warning: process `nginx' used the deprecated sysctl system call with 1.33. > > (Linux 2.6.24.2) > > I guess that at this moment this message is not harmful, but in future maybe > it can lead to problems. > > maybe a small change in source code can fix this, but I'm not a developer so > I'm only able to report and not to fix, I fear :) nginx uses sysctl to learn KERN_RTSIGMAX only, because procfs is not available in chroot. Failed sysctl is not harmful in modern and future kernels, because it's unlikely you will want to use fragile rtsig method instead of epoll. -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Fri Feb 29 13:32:33 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 13:32:33 +0300 Subject: Deprecated syscall In-Reply-To: <20080229103033.GA39144@rambler-co.ru> References: <200802291106.11237.cova@ferrara.linux.it> <20080229103033.GA39144@rambler-co.ru> Message-ID: <20080229103233.GB39144@rambler-co.ru> On Fri, Feb 29, 2008 at 01:30:33PM +0300, Igor Sysoev wrote: > On Fri, Feb 29, 2008 at 11:06:10AM +0100, Fabio Coatti wrote: > > > With latest linux kernels (can't recall on the spot when I saw this for the > > first time, something around 2.6.23.XX I suppose) I get this warning on > > dmesg: > > > > warning: process `nginx' used the deprecated sysctl system call with 1.33. 
> > > > (Linux 2.6.24.2) > > > > I guess that at this moment this message is not harmful, but in future maybe > > it can lead to problems. > > > > maybe a small change in source code can fix this, but I'm not a developer so > > I'm only able to report and not to fix, I fear :) > > nginx uses sysctl to learn KERN_RTSIGMAX only, because procfs is not available > in chroot. Failed sysctl is not harmful in modern and future kernels, > because it's unlikely you will want to use fragile rtsig method instead > of epoll. Probably, I will add --without-rtsig_module to disable rtsig autodetection. -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Fri Feb 29 13:42:18 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 13:42:18 +0300 Subject: Location problems In-Reply-To: References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> <741757296.20080219172346@gostats.ru> <20080228205016.GF23692@rambler-co.ru> Message-ID: <20080229104218.GC39144@rambler-co.ru> On Fri, Feb 29, 2008 at 09:39:20AM +0000, Igor Clark wrote: > Hi Igor, thank you very much. When I did > > location ~^ /admin/ > > it still gave a 404, but when I changed it to > > location ~^ /admin.php > > it worked perfectly. It seems I've been trying to apply "location" to > pre-rewrite URLs, which just won't work - is that right? Could you write which URIs you want to handle, and how, for example: / -> fastcgi /admin/ -> fastcgi, auth protected ... ? 
> On 28 Feb 2008, at 20:50, Igor Sysoev wrote: > > >On Thu, Feb 28, 2008 at 07:41:34PM +0000, Igor Clark wrote: > > > >>I just want to do the following, but still have all the other > >>directives work, so that rewrites and PHP work under /admin: > > > > fastcgi_index index.php; > > fastcgi_intercept_errors on; > > include conf/fastcgi_params; > > > > location ^~ /admin/ { > > > > allow 1.2.3.4; > > deny all; > > > > location ~ \.php$ { > > fastcgi_pass 127.0.0.1:9999; > > } > > } > > > > location ~ \.php$ { > > fastcgi_pass 127.0.0.1:9999; > > } -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Fri Feb 29 13:46:11 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 13:46:11 +0300 Subject: Nginx + fair load balancer patch looping In-Reply-To: <88daf38c0802290139h5604028es6a5c19f5dc8277df@mail.gmail.com> References: <88daf38c0802290139h5604028es6a5c19f5dc8277df@mail.gmail.com> Message-ID: <20080229104611.GD39144@rambler-co.ru> On Fri, Feb 29, 2008 at 10:39:24AM +0100, Alexander Staubo wrote: > An Nginx instance suddenly started spewing the following to its error > log at a rate of about 1GB/minute, and using a bit more CPU than > usual: > > 2008/02/29 10:33:47 [error] 16875#0: *126309461 upstream prematurely > closed connection while reading response header from upstream [...] > > Aside from the excessive logging, everything else seemed normal. Our > next_upstream setting is: > > proxy_next_upstream error invalid_header; > > Restarting Nginx fixed the problem. > > Could this be the fair load balancer going haywire? nginx logs this when the upstream has closed the connection before sending anything, or before it sent the full HTTP response header. This may be an nginx bug, an upstream failure, or a kernel bug. Could you truss the failed process next time? 
-- Igor Sysoev http://sysoev.ru/en/ From jsierles at engineyard.com Fri Feb 29 13:51:33 2008 From: jsierles at engineyard.com (Joshua Sierles) Date: Fri, 29 Feb 2008 12:51:33 +0200 Subject: Nginx + fair load balancer patch looping In-Reply-To: <88daf38c0802290139h5604028es6a5c19f5dc8277df@mail.gmail.com> References: <88daf38c0802290139h5604028es6a5c19f5dc8277df@mail.gmail.com> Message-ID: <74700FAA-9AA2-41C4-9458-A3DAC400962A@engineyard.com> On Feb 29, 2008, at 11:39 AM, Alexander Staubo wrote: > An Nginx instance suddenly started spewing the following to its error > log at a rate of about 1GB/minute, and using a bit more CPU than > usual: > > 2008/02/29 10:33:47 [error] 16875#0: *126309461 upstream prematurely > closed connection while reading response header from upstream [...] > > Aside from the excessive logging, everything else seemed normal. Our > next_upstream setting is: > > proxy_next_upstream error invalid_header; > > Restarting Nginx fixed the problem. > > Could this be the fair load balancer going haywire? > > Alexander. > At Engine Yard we are seeing this happen as well, usually when a backend goes down or a 'clock skew' message is logged. It gets logged hundreds of thousands of times for each request. Joshua Sierles Engine Yard From igor at pokelondon.com Fri Feb 29 14:13:48 2008 From: igor at pokelondon.com (Igor Clark) Date: Fri, 29 Feb 2008 11:13:48 +0000 Subject: Location problems In-Reply-To: <20080229104218.GC39144@rambler-co.ru> References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> <741757296.20080219172346@gostats.ru> <20080228205016.GF23692@rambler-co.ru> <20080229104218.GC39144@rambler-co.ru> Message-ID: Hi Igor, Everything that doesn't exist as a file gets routed to either /frontend.php, or /admin.php if the URI starts with /admin. If so, it's IP-restricted in this case, basic_auth protected in other cases. 
So we want to be able to do: / -> /frontend.php -> fastcgi /speakers/show/all -> /frontend.php?control_path=/speakers/show/all -> fastcgi /admin -> /admin.php -> fastcgi, protected /admin/speakers/edit/32 /admin.php?control_path=/admin/speakers/edit/ 32 -> fastcgi, protected I'm just wondering whether our approach of "rewrite first, then deal with locations" is just wrong, maybe we should deal with locations first and then rewrite if necessary. Thanks for your help, Igor On 29 Feb 2008, at 10:42, Igor Sysoev wrote: > On Fri, Feb 29, 2008 at 09:39:20AM +0000, Igor Clark wrote: > >> Hi Igor, thank you very much. When I did >> >> location ~^ /admin/ >> >> it still gave a 404, but when I changed it to >> >> location ~^ /admin.php >> >> it worked perfectly. Seems I've been trying to apply "location" to >> pre- >> rewrite URLs, which just won't work - is that right? > > Could you write what URIs and how you want to handle, for example: > > / -> fascgi > /admin/ -> fascgi, auth protected > ... > > ? 
> >> On 28 Feb 2008, at 20:50, Igor Sysoev wrote: >> >>> On Thu, Feb 28, 2008 at 07:41:34PM +0000, Igor Clark wrote: >>> >>>> I just want to do the following, but still have all the other >>>> directives work, so that rewrites and PHP work under /admin: >>> >>> fastcgi_index index.php; >>> fastcgi_intercept_errors on; >>> include conf/fastcgi_params; >>> >>> location ^~ /admin/ { >>> >>> allow 1.2.3.4; >>> deny all; >>> >>> location ~ \.php$ { >>> fastcgi_pass 127.0.0.1:9999; >>> } >>> } >>> >>> location ~ \.php$ { >>> fastcgi_pass 127.0.0.1:9999; >>> } > > > -- > Igor Sysoev > http://sysoev.ru/en/ > -- Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com From is at rambler-co.ru Fri Feb 29 14:26:16 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 14:26:16 +0300 Subject: Nginx + fair load balancer patch looping In-Reply-To: <74700FAA-9AA2-41C4-9458-A3DAC400962A@engineyard.com> References: <88daf38c0802290139h5604028es6a5c19f5dc8277df@mail.gmail.com> <74700FAA-9AA2-41C4-9458-A3DAC400962A@engineyard.com> Message-ID: <20080229112616.GE39144@rambler-co.ru> On Fri, Feb 29, 2008 at 12:51:33PM +0200, Joshua Sierles wrote: > > On Feb 29, 2008, at 11:39 AM, Alexander Staubo wrote: > > >An Nginx instance suddenly started spewing the following to its error > >log at a rate of about 1GB/minute, and using a bit more CPU than > >usual: > > > > 2008/02/29 10:33:47 [error] 16875#0: *126309461 upstream prematurely > >closed connection while reading response header from upstream [...] > > > >Aside from the excessive logging, everything else seemed normal. Our > >next_upstream setting is: > > > > proxy_next_upstream error invalid_header; > > > >Restarting Nginx fixed the problem. > > > >Could this be the fair load balancer going hairwire? > > > >Alexander. > > > > At Engine Yard we are seeing this happen as well, usually when a > backend goes down or a 'clock skew' message is logged. 
It gets logged > hundreds of thousands of times for each request. Oh, I have missed "fair load balancer patch". No, I'm not able to debug the bug. -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Fri Feb 29 14:31:19 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 14:31:19 +0300 Subject: Location problems In-Reply-To: References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> <741757296.20080219172346@gostats.ru> <20080228205016.GF23692@rambler-co.ru> <20080229104218.GC39144@rambler-co.ru> Message-ID: <20080229113119.GF39144@rambler-co.ru> On Fri, Feb 29, 2008 at 11:13:48AM +0000, Igor Clark wrote: > Hi Igor, > > Everything that doesn't exist as a file gets routed to either / > frontend.php, or /admin.php if the URI starts with /admin. If so, it's > IP-restricted in this case, basic_auth protected in other cases. > > So we want to be able to do: > > / -> /frontend.php -> fastcgi > > /speakers/show/all -> /frontend.php?control_path=/speakers/show/all -> > fastcgi > > /admin -> /admin.php -> fastcgi, protected > > /admin/speakers/edit/32 > /admin.php?control_path=/admin/speakers/edit/ 32 -> fastcgi, > protected > > I'm just wondering whether our approach of "rewrite first, then deal > with locations" is just wrong, maybe we should deal with locations > first and then rewrite if necessary. location / { error_page 404 = @fallback; } location @fallback { fastcgi_pass ... fastcgi_param SCRIPT_FILENAME /path/to/frontend.php; fastcgi_param QUERY_STRING control_path=$uri; ... } location /admin { allow 1.2.3.4; deny all; error_page 404 = @admin; } location @admin { fastcgi_pass ... fastcgi_param SCRIPT_FILENAME /path/to/admin.php; fastcgi_param QUERY_STRING control_path=$uri; ... 
} -- Igor Sysoev http://sysoev.ru/en/ From grzegorz.nosek at gmail.com Fri Feb 29 14:53:30 2008 From: grzegorz.nosek at gmail.com (Grzegorz Nosek) Date: Fri, 29 Feb 2008 12:53:30 +0100 Subject: Nginx + fair load balancer patch looping In-Reply-To: <74700FAA-9AA2-41C4-9458-A3DAC400962A@engineyard.com> References: <88daf38c0802290139h5604028es6a5c19f5dc8277df@mail.gmail.com> <74700FAA-9AA2-41C4-9458-A3DAC400962A@engineyard.com> Message-ID: <20080229115330.GA26129@vadmin.megiteam.pl> On Fri, Feb 29, 2008 at 12:51:33PM +0200, Joshua Sierles wrote: > On Feb 29, 2008, at 11:39 AM, Alexander Staubo wrote: > > >An Nginx instance suddenly started spewing the following to its error > >log at a rate of about 1GB/minute, and using a bit more CPU than > >usual: > >Could this be the fair load balancer going haywire? > At Engine Yard we are seeing this happen as well, usually when a > backend goes down or a 'clock skew' message is logged. It gets logged > hundreds of thousands of times for each request. Alexander, Joshua, Please send me anything you know about this problem, especially how to reproduce it. I'd appreciate it if you sent me your config files (privately and possibly anonymised, if you prefer). Best regards, Grzegorz Nosek From y.georgiev at gmail.com Fri Feb 29 17:07:25 2008 From: y.georgiev at gmail.com (Yordan Georgiev) Date: Fri, 29 Feb 2008 16:07:25 +0200 Subject: Deprecated syscall In-Reply-To: <20080229103233.GB39144@rambler-co.ru> References: <200802291106.11237.cova@ferrara.linux.it> <20080229103033.GA39144@rambler-co.ru> <20080229103233.GB39144@rambler-co.ru> Message-ID: <4378145a0802290607y4602b425s4f7f67511f29c51f@mail.gmail.com> Hello Fabio, Kernel 2.6.24.2 is very bad: http://kerneltrap.org/node/15550 Don't use this kernel version. I migrated from 2.6.24.2 to 2.6.22.18 and my performance went up. Please excuse my bad English! Regards, Y. Georgiev. 
WEB: http://gigavolt-bg.net/ Blog: http://live.gigavolt-bg.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at pokelondon.com Fri Feb 29 18:36:44 2008 From: igor at pokelondon.com (Igor Clark) Date: Fri, 29 Feb 2008 15:36:44 +0000 Subject: Location problems In-Reply-To: <20080229113119.GF39144@rambler-co.ru> References: <47BAA360.2050409@staff.dada.net> <1954392835.20080219162229@gostats.ru> <47BAB1BA.9060101@staff.dada.net> <16EE163C-093B-487A-A9EB-0CB14DF38AF6@pokelondon.com> <741757296.20080219172346@gostats.ru> <20080228205016.GF23692@rambler-co.ru> <20080229104218.GC39144@rambler-co.ru> <20080229113119.GF39144@rambler-co.ru> Message-ID: <96639D9A-2391-4DD3-98CB-CB89140CF622@pokelondon.com> Thanks very much Igor, that's really helpful, and shows a completely different approach from the one I've been taking. Excellent stuff. Igor On 29 Feb 2008, at 11:31, Igor Sysoev wrote: > On Fri, Feb 29, 2008 at 11:13:48AM +0000, Igor Clark wrote: > >> Hi Igor, >> >> Everything that doesn't exist as a file gets routed to either / >> frontend.php, or /admin.php if the URI starts with /admin. If so, >> it's >> IP-restricted in this case, basic_auth protected in other cases. >> >> So we want to be able to do: >> >> / -> /frontend.php -> fastcgi >> >> /speakers/show/all -> /frontend.php?control_path=/speakers/show/all >> -> >> fastcgi >> >> /admin -> /admin.php -> fastcgi, protected >> >> /admin/speakers/edit/32 >> /admin.php?control_path=/admin/speakers/edit/ 32 -> fastcgi, >> protected >> >> I'm just wondering whether our approach of "rewrite first, then deal >> with locations" is just wrong, maybe we should deal with locations >> first and then rewrite if necessary. > > location / { > error_page 404 = @fallback; > } > > location @fallback { > fastcgi_pass ... > fastcgi_param SCRIPT_FILENAME /path/to/frontend.php; > fastcgi_param QUERY_STRING control_path=$uri; > ... 
> } > > location /admin { > allow 1.2.3.4; > deny all; > error_page 404 = @admin; > } > > location @admin { > fastcgi_pass ... > fastcgi_param SCRIPT_FILENAME /path/to/admin.php; > fastcgi_param QUERY_STRING control_path=$uri; > ... > } > > > -- > Igor Sysoev > http://sysoev.ru/en/ > -- Igor Clark // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com From is at rambler-co.ru Fri Feb 29 19:28:13 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 19:28:13 +0300 Subject: ngx_http_memcached_module question In-Reply-To: References: Message-ID: <20080229162813.GM39144@rambler-co.ru> On Thu, Feb 28, 2008 at 05:09:18PM -0500, Sean Allen wrote: > can i do the following: > > check memcache for existence of content. > > if not continue our normal processing which is currently: > > check for static file > if it exists, serve it > if it doesnt exist, pass request off to upstream server. The checking local file is faster than memcached, so: location / { error_page 404 = @memcache; } location @memcache { set $memcached_key "$uri?$args"; memcached_pass ... recursive_error_pages on; error_page 404 = @upstream; } location @upstream { proxy_pass ... error_page 404 = @upstream; } -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Fri Feb 29 19:29:22 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 19:29:22 +0300 Subject: nginx front-end to sharepoint In-Reply-To: <47C72518.1020900@d-cohen.com> References: <47C716C3.7050604@d-cohen.com> <20080228202928.GE23692@rambler-co.ru> <47C72518.1020900@d-cohen.com> Message-ID: <20080229162922.GN39144@rambler-co.ru> On Thu, Feb 28, 2008 at 04:18:16PM -0500, nginx at d-cohen.com wrote: > Yes, we are using Basic authentication. Have no idea. Could create debug log with dummy username/password ? 
> Igor Sysoev wrote: > >On Thu, Feb 28, 2008 at 03:17:07PM -0500, nginx at d-cohen.com wrote: > > > >>I am attempting to replace squid 3.0 with nginx as a reverse proxy to a > >>sharepoint server with SSL and user-authentication. The problem I am > >>having is nginx does not appear to pass the credentials to the real > >>server w/o modifying them (after several failed attempts, once simply > >>gets access denied). I am able to accomplish this in squid with this > >>option: login=PASS > > > >nginx should pass all headers as is. > >Do you use Basic authentication ? > > > >>I'm wondering if anybody has any insight/experience into this issue. I > >>have included the relevant portions of my nginx.conf and my old > >>squid.conf. Any help would be greatly appreciated. > >> > >>Thank you. > >> > >>nginx.conf: > >> > >>http { > >> server { > >> listen 443; > >> server_name 192.168.0.10; > >> > >> ssl on; > >> ssl_certificate /conf/nginx/cert.pem; > >> ssl_certificate_key /conf/nginx/key.pem; > >> > >> location / { > >> proxy_pass https://192.168.0.1/; > >> } > >> } > >>} > >> > >> > >>squid.conf: > >> > >>https_port 443 cert=/conf/squid/cert.pem key=/conf/squid/key.pem \ > >> cafile=/conf/squid/ca.pem vhost > >>cache_peer 192.168.0.1 parent 443 0 login=PASS no-query ssl proxy-only \ > >> originserver sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN > >> > > > > -- Igor Sysoev http://sysoev.ru/en/ From is at rambler-co.ru Fri Feb 29 19:38:36 2008 From: is at rambler-co.ru (Igor Sysoev) Date: Fri, 29 Feb 2008 19:38:36 +0300 Subject: response times and network io In-Reply-To: <47C76CD8.90605@joetify.com> References: <47C768BA.3050109@joetify.com> <47C76CD8.90605@joetify.com> Message-ID: <20080229163836.GP39144@rambler-co.ru> On Thu, Feb 28, 2008 at 08:24:24PM -0600, Joe Williams wrote: > please excuse my typo. regarding network I/O nginx uses consistently > lower I/O than apache. 
> > regardless i am curious about how it processes requests differently to > obtain lower response times and network I/O. How do you measure network I/O ? In short, Apache and nginx use different models for processing requests. Apache handles a connection in one process or thread, while nginx handles thousands of connections in one process/thread using scalable methods such as kqueue/epoll/etc. > thanks for any help you can provide. > > -Joe > > > Joe Williams wrote: > >i am performing some httperf tests against apache and nginx. something > >i noticed that piqued my interest were the consistency of response > >times (0.4 ms each run regardless of number of request, much lower > >than apache in all cases) and network I/O (consistently higher than > >apache regardless of number of request). it also uses less cpu than > >apache and doesn't nearly drive up the load. > > > >are these normal results? is there a mechanism in nginx that keeps the > >response times low and consistent? also, is it normal that it uses > >more network I/O? if so, what is the cause? to me it would seem like > >that it uses more bandwidth to respond to the same number of requests > >which seems inefficient. > > > >please correct me if i am wrong. i am just trying to understand the > >core differences in how nginx works in comparison to apache and why i > >would see these performance differences. > > > >thanks for the help. > > > >-joe > > > > -- > Name: Joseph A. Williams > Email: joe at joetify.com > > -- Igor Sysoev http://sysoev.ru/en/ From joe at joetify.com Fri Feb 29 20:05:37 2008 From: joe at joetify.com (Joe Williams) Date: Fri, 29 Feb 2008 11:05:37 -0600 Subject: response times and network io In-Reply-To: <20080229163836.GP39144@rambler-co.ru> References: <47C768BA.3050109@joetify.com> <47C76CD8.90605@joetify.com> <20080229163836.GP39144@rambler-co.ru> Message-ID: <47C83B61.2010302@joetify.com> I used httperf to give me the network I/O (KB/s) and response times. 
I could probably produce the sar data from each if you would like it. I assume the response times are due to Nginx not needing to take time to start up another process/thread? Thanks. -Joe Igor Sysoev wrote: > On Thu, Feb 28, 2008 at 08:24:24PM -0600, Joe Williams wrote: > > >> please excuse my typo. regarding network I/O nginx uses consistently >> lower I/O than apache. >> >> regardless i am curious about how it processes requests differently to >> obtain lower response times and network I/O. >> > > How do you measure network I/O ? > > In short, Apache and nginx use different model for processing requests. > Apache processes connection in one process or thread while nginx processes > thousand connections in one process/thread using scaleable methods such > as kqueue/epoll/etc. > > >> thanks for any help you can provide. >> >> -Joe >> >> >> Joe Williams wrote: >> >>> i am performing some httperf tests against apache and nginx. something >>> i noticed that piqued my interest were the consistency of response >>> times (0.4 ms each run regardless of number of request, much lower >>> than apache in all cases) and network I/O (consistently higher than >>> apache regardless of number of request). it also uses less cpu than >>> apache and doesn't nearly drive up the load. >>> >>> are these normal results? is there a mechanism in nginx that keeps the >>> response times low and consistent? also, is it normal that it uses >>> more network I/O? if so, what is the cause? to me it would seem like >>> that it uses more bandwidth to respond to the same number of requests >>> which seems inefficient. >>> >>> please correct me if i am wrong. i am just trying to understand the >>> core differences in how nginx works in comparison to apache and why i >>> would see these performance differences. >>> >>> thanks for the help. >>> >>> -joe >>> >>> >> -- >> Name: Joseph A. Williams >> Email: joe at joetify.com >> >> >> > > -- Name: Joseph A. 
Williams Email: joe at joetify.com From sean at ardishealth.com Fri Feb 29 20:58:42 2008 From: sean at ardishealth.com (Sean Allen) Date: Fri, 29 Feb 2008 12:58:42 -0500 Subject: ngx_http_memcached_module question In-Reply-To: <20080229162813.GM39144@rambler-co.ru> References: <20080229162813.GM39144@rambler-co.ru> Message-ID: <0CFD34C6-F85D-493B-BEC7-2EE52A37E716@ardishealth.com> On Feb 29, 2008, at 11:28 AM, Igor Sysoev wrote: > On Thu, Feb 28, 2008 at 05:09:18PM -0500, Sean Allen wrote: > >> can i do the following: >> >> check memcache for existence of content. >> >> if not continue our normal processing which is currently: >> >> check for static file >> if it exists, serve it >> if it doesnt exist, pass request off to upstream server. > > The checking local file is faster than memcached, so: > > location / { > error_page 404 = @memcache; > } > > location @memcache { > set $memcached_key "$uri?$args"; > memcached_pass ... > > recursive_error_pages on; > > error_page 404 = @upstream; > } > > location @upstream { > proxy_pass ... > > error_page 404 = @upstream; > } Well, this will serve from local file system and memcache but it isnt passing to the upstream. just get an nginx 404 if it can find in local file system or memcache. if i get that worked out, is there a way to have it just move on to the upstream if the memcache is down instead of returning a 502? From sean at ardishealth.com Fri Feb 29 21:25:37 2008 From: sean at ardishealth.com (Sean Allen) Date: Fri, 29 Feb 2008 13:25:37 -0500 Subject: ngx_http_memcached_module question In-Reply-To: <20080229162813.GM39144@rambler-co.ru> References: <20080229162813.GM39144@rambler-co.ru> Message-ID: <4971FDED-F09B-4644-8366-7CD8EDF02AA4@ardishealth.com> On Feb 29, 2008, at 11:28 AM, Igor Sysoev wrote: > On Thu, Feb 28, 2008 at 05:09:18PM -0500, Sean Allen wrote: > >> can i do the following: >> >> check memcache for existence of content. 
>> >> if not continue our normal processing which is currently: >> >> check for static file >> if it exists, serve it >> if it doesnt exist, pass request off to upstream server. > > The checking local file is faster than memcached, so: > > location / { > error_page 404 = @memcache; > } > > location @memcache { > set $memcached_key "$uri?$args"; > memcached_pass ... > > recursive_error_pages on; > > error_page 404 = @upstream; > } > > location @upstream { > proxy_pass ... > > error_page 404 = @upstream; > } I think I might have this working by moving recursive_error_pages on; into my server { } defs. this should fill my error log with tons of error messages correct? From sean at ardishealth.com Fri Feb 29 22:02:48 2008 From: sean at ardishealth.com (Sean Allen) Date: Fri, 29 Feb 2008 14:02:48 -0500 Subject: ngx_http_memcached_module question In-Reply-To: <4971FDED-F09B-4644-8366-7CD8EDF02AA4@ardishealth.com> References: <20080229162813.GM39144@rambler-co.ru> <4971FDED-F09B-4644-8366-7CD8EDF02AA4@ardishealth.com> Message-ID: <8275F933-7F46-475E-A6E2-EC18DA65441B@ardishealth.com> On Feb 29, 2008, at 1:25 PM, Sean Allen wrote: > > On Feb 29, 2008, at 11:28 AM, Igor Sysoev wrote: > >> On Thu, Feb 28, 2008 at 05:09:18PM -0500, Sean Allen wrote: >> >>> can i do the following: >>> >>> check memcache for existence of content. >>> >>> if not continue our normal processing which is currently: >>> >>> check for static file >>> if it exists, serve it >>> if it doesnt exist, pass request off to upstream server. >> >> The checking local file is faster than memcached, so: >> >> location / { >> error_page 404 = @memcache; >> } >> >> location @memcache { >> set $memcached_key "$uri?$args"; >> memcached_pass ... >> >> recursive_error_pages on; >> >> error_page 404 = @upstream; >> } >> >> location @upstream { >> proxy_pass ... >> >> error_page 404 = @upstream; >> } > > I think I might have this working by moving recursive_error_pages > on; into my server { } defs. 
>
> This should fill my error log with tons of error messages, correct?

Well, OK so far; it works partially. I got the 502 error figured out, but any
code that does a redirect results in a 405 error. I've been digging for an
answer. Any ideas?

Also, is there a way to not have all these requests end up in the error log
as "file not found"?

From sean at ardishealth.com  Fri Feb 29 23:19:35 2008
From: sean at ardishealth.com (Sean Allen)
Date: Fri, 29 Feb 2008 15:19:35 -0500
Subject: ngx_http_memcached_module question
In-Reply-To: <8275F933-7F46-475E-A6E2-EC18DA65441B@ardishealth.com>
References: <20080229162813.GM39144@rambler-co.ru> <4971FDED-F09B-4644-8366-7CD8EDF02AA4@ardishealth.com> <8275F933-7F46-475E-A6E2-EC18DA65441B@ardishealth.com>
Message-ID:

On Feb 29, 2008, at 2:02 PM, Sean Allen wrote:

> On Feb 29, 2008, at 1:25 PM, Sean Allen wrote:
>
>> On Feb 29, 2008, at 11:28 AM, Igor Sysoev wrote:
>>
>>> On Thu, Feb 28, 2008 at 05:09:18PM -0500, Sean Allen wrote:
>>>
>>>> can i do the following:
>>>>
>>>> check memcache for existence of content.
>>>>
>>>> if not, continue our normal processing, which is currently:
>>>>
>>>> check for static file
>>>>   if it exists, serve it
>>>>   if it doesn't exist, pass request off to upstream server.
>>>
>>> Checking the local file is faster than memcached, so:
>>>
>>>     location / {
>>>         error_page 404 = @memcache;
>>>     }
>>>
>>>     location @memcache {
>>>         set $memcached_key "$uri?$args";
>>>         memcached_pass ...
>>>
>>>         recursive_error_pages on;
>>>
>>>         error_page 404 = @upstream;
>>>     }
>>>
>>>     location @upstream {
>>>         proxy_pass ...
>>>
>>>         error_page 404 = @upstream;
>>>     }
>>

Sorry for the noise. I managed to work through a number of things myself,
with some help from Google and dumb luck.

Routing all POST requests away from "error_page 404 = @memcache;" took care
of the 405 issue.

Questions I have left:

Is there a way to hash the "$uri?$args" with something like md5, to keep a
really long arg string from going beyond the max memcache key size?
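(Stock nginx at the time had no hashing directive for this, as Igor confirms later in the thread, but a fixed-length key can be produced with the embedded Perl module. A sketch, assuming nginx was built with --with-http_perl_module; the $hashed_key variable name is made up for illustration:)

```nginx
# Hypothetical sketch, not from the thread: compute an md5 of "$uri?$args"
# in embedded Perl so the key length is constant.
perl_set $hashed_key '
    sub {
        use Digest::MD5 qw(md5_hex);
        my $r = shift;
        my $args = defined $r->args ? $r->args : "";
        # md5_hex output is always 32 characters, well under the
        # 250-byte memcached key limit
        return md5_hex($r->uri . "?" . $args);
    }
';

location @memcache {
    set $memcached_key $hashed_key;
    memcached_pass ...
}
```

Note that whatever populates memcached would have to store entries under the same hashed key.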
Is there a way to selectively not log the "file not found" errors that using
error_page to redirect creates, while at the same time preserving "real"
errors?

Thanks for all the help you have already given on this, Igor. I wouldn't have
come close to getting this near to done without your assistance.

From is at rambler-co.ru  Fri Feb 29 23:35:27 2008
From: is at rambler-co.ru (Igor Sysoev)
Date: Fri, 29 Feb 2008 23:35:27 +0300
Subject: ngx_http_memcached_module question
In-Reply-To: <0CFD34C6-F85D-493B-BEC7-2EE52A37E716@ardishealth.com>
References: <20080229162813.GM39144@rambler-co.ru> <0CFD34C6-F85D-493B-BEC7-2EE52A37E716@ardishealth.com>
Message-ID: <20080229203527.GA49265@rambler-co.ru>

On Fri, Feb 29, 2008 at 12:58:42PM -0500, Sean Allen wrote:
> On Feb 29, 2008, at 11:28 AM, Igor Sysoev wrote:
>
> > On Thu, Feb 28, 2008 at 05:09:18PM -0500, Sean Allen wrote:
> >
> >> can i do the following:
> >>
> >> check memcache for existence of content.
> >>
> >> if not, continue our normal processing, which is currently:
> >>
> >> check for static file
> >>   if it exists, serve it
> >>   if it doesn't exist, pass request off to upstream server.
> >
> > Checking the local file is faster than memcached, so:
> >
> >     location / {
> >         error_page 404 = @memcache;
> >     }
> >
> >     location @memcache {
> >         set $memcached_key "$uri?$args";
> >         memcached_pass ...
> >
> >         recursive_error_pages on;
> >
> >         error_page 404 = @upstream;
> >     }
> >
> >     location @upstream {
> >         proxy_pass ...
> >
> >         error_page 404 = @upstream;

This error_page was unneeded.

> >     }
>
> Well, this will serve from the local file system and memcache, but it isn't
> passing to the upstream. I just get an nginx 404 if it can't find the
> content in the local file system or memcache.

"recursive_error_pages on" should pass the request to the error_page handler.

> If I get that worked out, is there a way to have it just move on to the
> upstream if memcache is down, instead of returning a 502?
- error_page 404 = @upstream;
+ error_page 404 502 504 = @upstream;

-- 
Igor Sysoev
http://sysoev.ru/en/

From is at rambler-co.ru  Fri Feb 29 23:39:02 2008
From: is at rambler-co.ru (Igor Sysoev)
Date: Fri, 29 Feb 2008 23:39:02 +0300
Subject: ngx_http_memcached_module question
In-Reply-To:
References: <20080229162813.GM39144@rambler-co.ru> <4971FDED-F09B-4644-8366-7CD8EDF02AA4@ardishealth.com> <8275F933-7F46-475E-A6E2-EC18DA65441B@ardishealth.com>
Message-ID: <20080229203902.GB49265@rambler-co.ru>

On Fri, Feb 29, 2008 at 03:19:35PM -0500, Sean Allen wrote:
> On Feb 29, 2008, at 2:02 PM, Sean Allen wrote:
>
> > On Feb 29, 2008, at 1:25 PM, Sean Allen wrote:
> >
> >> On Feb 29, 2008, at 11:28 AM, Igor Sysoev wrote:
> >>
> >>> On Thu, Feb 28, 2008 at 05:09:18PM -0500, Sean Allen wrote:
> >>>
> >>>> can i do the following:
> >>>>
> >>>> check memcache for existence of content.
> >>>>
> >>>> if not, continue our normal processing, which is currently:
> >>>>
> >>>> check for static file
> >>>>   if it exists, serve it
> >>>>   if it doesn't exist, pass request off to upstream server.
> >>>
> >>> Checking the local file is faster than memcached, so:
> >>>
> >>>     location / {
> >>>         error_page 404 = @memcache;
> >>>     }
> >>>
> >>>     location @memcache {
> >>>         set $memcached_key "$uri?$args";
> >>>         memcached_pass ...
> >>>
> >>>         recursive_error_pages on;
> >>>
> >>>         error_page 404 = @upstream;
> >>>     }
> >>>
> >>>     location @upstream {
> >>>         proxy_pass ...
> >>>
> >>>         error_page 404 = @upstream;
> >>>     }
> >>
>
> Sorry for the noise. I managed to work through a number of things myself,
> with some help from Google and dumb luck.
>
> Routing all POST requests away from "error_page 404 = @memcache;" took
> care of the 405 issue.

ngx_http_memcached_module does not support POST now. It seems there should be
a "memcached_post" like "static_post". Or even allow POST to memcached by
default.
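(The POST routing Sean describes might be sketched as below; this is one possible arrangement, not his actual config, and the upstream name "backend" and the memcached address are placeholders:)

```nginx
location @memcache {
    # POSTs can't be answered from memcached (the module returns 405),
    # so hand them straight to the upstream instead.
    if ($request_method = POST) {
        proxy_pass http://backend;
    }

    set $memcached_key "$uri?$args";
    memcached_pass 127.0.0.1:11211;

    recursive_error_pages on;
    error_page 404 502 504 = @upstream;
}
```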
> Questions I have left:
>
> Is there a way to hash the "$uri?$args" with something like md5, to keep a
> really long arg string from going beyond the max memcache key size?

No.

> Is there a way to selectively not log the "file not found" errors that
> using error_page to redirect creates, while at the same time preserving
> "real" errors?

What error types do you want to hide?

> Thanks for all the help you have already given on this, Igor. I wouldn't
> have come close to getting this near to done without your assistance.

-- 
Igor Sysoev
http://sysoev.ru/en/

From sean at ardishealth.com  Fri Feb 29 23:58:39 2008
From: sean at ardishealth.com (Sean Allen)
Date: Fri, 29 Feb 2008 15:58:39 -0500
Subject: ngx_http_memcached_module question
In-Reply-To: <20080229203902.GB49265@rambler-co.ru>
References: <20080229162813.GM39144@rambler-co.ru> <4971FDED-F09B-4644-8366-7CD8EDF02AA4@ardishealth.com> <8275F933-7F46-475E-A6E2-EC18DA65441B@ardishealth.com> <20080229203902.GB49265@rambler-co.ru>
Message-ID: <246182F8-0B11-4BCC-9501-2187BCEF5B4D@ardishealth.com>

>> Is there a way to selectively not log the "file not found" errors that
>> using error_page to redirect creates, while at the same time preserving
>> "real" errors?
>
> What error types do you want to hide?

These:

2008/02/29 20:57:30 [error] 25532#0: *6135062 open() "/ah/sites/thepinkpatch.co.uk/public/s-TPP/lp" failed (2: No such file or directory), client: 90.208.166.236, server: micro.thepinkpatch.co.uk, request: "GET /s-TPP/lp HTTP/1.1", host: "micro.thepinkpatch.co.uk"

which are the result of using error_page to pass over to our upstreams.
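(One possible answer to this last question, not given in the thread: the stock log_not_found directive from ngx_http_core_module disables exactly these open() "No such file or directory" messages while leaving other error logging intact. A minimal sketch:)

```nginx
location / {
    # Suppress only the "file not found" open() errors produced when
    # a missing static file falls through to the memcache fallback;
    # other errors are still logged.
    log_not_found off;
    error_page 404 = @memcache;
}
```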