From ianevans at digitalhit.com Wed Aug 1 04:24:12 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Wed, 1 Aug 2012 00:24:12 -0400 Subject: nginx simple caching solutions Message-ID: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> I was Googling around for some simple page caching solutions (say, cache pages for 5 minutes, exclude certain directories) and came across Google results for varnish, memcached, using nginx as a reverse proxy for apache, etc. My Googling didn't find a method for nginx caching itself, though my coffee-deficient brain thinks that might still involve using the proxy directives to point to itself. I dunno, hence the question. I'm just looking for a way to speed up some dynamic pages that don't have personalization and can basically be static for a few minutes. Dedicated server has 2 gig RAM and runs nginx, php-fpm, mysql 5 and a qmail server. Any thoughts or config example links? Thanks. From nbubingo at gmail.com Wed Aug 1 04:26:27 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 1 Aug 2012 12:26:27 +0800 Subject: The patch of Nginx SSL: PEM pass phrase problem In-Reply-To: <43b5fa3104e5a0dabb407be7b6f40eb9.NginxMailingListEnglish@forum.nginx.org> References: <8c1d65b918b52351b2382a2fa6c5d4ae.NginxMailingListEnglish@forum.nginx.org> <254c32a8a8e3f5ac6eadcad4fbd281ba.NginxMailingListEnglish@forum.nginx.org> <43b5fa3104e5a0dabb407be7b6f40eb9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Good point. Tengine has a similar feature: http://tengine.taobao.org/document/http_ssl.html Maybe Nginx should consider this patch. 2012/8/1 chirho : > Thank you for the rapid answer. > > Actually, I use nginx v1.2.0 and I'll try to patch it. > After patching and testing, I'll post the results. > > Thank you again for this patch. > > Best regards. 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,214641,229161#msg-229161 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nbubingo at gmail.com Wed Aug 1 04:32:33 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 1 Aug 2012 12:32:33 +0800 Subject: nginx simple caching solutions In-Reply-To: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> Message-ID: Have you tried the proxy_cache? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache 2012/8/1 Ian M. Evans : > I was Googling around for some simple page caching solutions (say, cache > pages for 5 minutes, exclude certain directories) and came across Google > results for varnish, memcached, using nginx as a reverse proxy for apache, > etc. > > My Googling didn't find a method for nginx caching itself, though my > coffee-deficient brain thinks that might still involve using the proxy > directives to point to itself. I dunno, hence the question. > > I'm just looking for a way to speed up some dynamic pages that don't have > personalization and can basically be static for a few minutes. Dedicated > erver has 2 gig RAM and runs nginx, php-fpm, mysql 5 and a qmail server. > > Any thoughts or config example links? Thanks. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ianevans at digitalhit.com Wed Aug 1 04:49:18 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Wed, 1 Aug 2012 00:49:18 -0400 Subject: nginx simple caching solutions In-Reply-To: References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> Message-ID: <7854e2dad507326b52f597b64d454574.squirrel@www.digitalhit.com> On Wed, August 1, 2012 12:32 am, ?????? wrote: > Have you tried the proxy_cache? 
> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache I began to look at it but my brain wasn't seeing how to set it up for handling nginx caching itself. It might be safe to assume that this is because I have a summer cold and headache but still felt the need to surf when I couldn't sleep. :-) From appa at perusio.net Wed Aug 1 06:54:05 2012 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Wed, 01 Aug 2012 08:54:05 +0200 Subject: nginx simple caching solutions Message-ID: If you want to cache PHP generated pages then just use the FCGI cache. http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache "Ian M. Evans" a ?crit?: >I was Googling around for some simple page caching solutions (say, cache >pages for 5 minutes, exclude certain directories) and came across Google >results for varnish, memcached, using nginx as a reverse proxy for apache, >etc. > >My Googling didn't find a method for nginx caching itself, though my >coffee-deficient brain thinks that might still involve using the proxy >directives to point to itself. I dunno, hence the question. > >I'm just looking for a way to speed up some dynamic pages that don't have >personalization and can basically be static for a few minutes. Dedicated >erver has 2 gig RAM and runs nginx, php-fpm, mysql 5 and a qmail server. > >Any thoughts or config example links? Thanks. > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Aug 1 07:02:49 2012 From: nginx-forum at nginx.us (yashgt) Date: Wed, 1 Aug 2012 03:02:49 -0400 (EDT) Subject: Nginx causes redirects when server root is changed In-Reply-To: References: Message-ID: <8648ebb1e238c1e336d6ef415081e5a1.NginxMailingListEnglish@forum.nginx.org> That was exactly the problem. The app was causing the redirect. Thanks a ton. 
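Returning to the caching question above: one way to read the "proxy directives pointing at itself" idea is a two-server setup, where a front server proxies to a second server block on a loopback port and caches the result. A minimal sketch for illustration only; the ports, zone name, cache path and the excluded directory are all assumptions, and the fastcgi_cache approach suggested elsewhere in this thread avoids the extra internal hop:

```nginx
# Front server caches; a second server block on 127.0.0.1:8080
# (assumed port) does the real PHP work.
proxy_cache_path /var/cache/nginx/pages levels=1:2
                 keys_zone=PAGES:10m inactive=10m max_size=500m;

server {
    listen 80;
    server_name example.com;           # assumption: your real vhost

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache PAGES;
        proxy_cache_valid 200 5m;      # "static for a few minutes"
        proxy_set_header Host $host;
    }

    # example excluded directory: passed through, never cached
    location /admin/ {
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 127.0.0.1:8080;
    # ... the original server block with the php-fpm fastcgi_pass
    # configuration lives here unchanged
}
```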
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229127,229168#msg-229168 From nginx-forum at nginx.us Wed Aug 1 07:55:37 2012 From: nginx-forum at nginx.us (xufengnju) Date: Wed, 1 Aug 2012 03:55:37 -0400 (EDT) Subject: nginx logged ip different than php $_SERVER['REMOTE_ADDR'] Message-ID: nginx/0.8.49, PHP in FastCGI mode, PHP version 5.3.5. Pieces of the nginx config file: default log format: log_format main '$remote_addr - - [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent"'; access_log /app/logs/default.log main; location ~ .*\.php$ { fastcgi_pass 127.0.0.1:9000; #fastcgi_pass unix:/tmp/php-cgi.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/local/www/default$fastcgi_script_name; include /usr/local/nginx/conf/fastcgi_params; } contents of fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; The problem is: I got this in my log: 124.95.xx.yy - - [01/Aug/2012:15:22:48 +0800] "GET /ch111/index.php?111234 HTTP/1.0" 200 1483 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)" But $_SERVER['REMOTE_ADDR'], as shown in the client's browser, contains a different IP. I am sure the different IPs come from the same client. 
In front of my web server I have no load balancers or other devices, but I have no view of the client's side. Even if the client goes through some kind of proxy, whether squid or a commercial one, $_SERVER['REMOTE_ADDR'] should always match what is shown in nginx's log. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229169,229169#msg-229169 From varia at e-healthexpert.org Wed Aug 1 08:44:49 2012 From: varia at e-healthexpert.org (Mark Alan) Date: Wed, 1 Aug 2012 09:44:49 +0100 Subject: nginx simple caching solutions In-Reply-To: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> Message-ID: <20120801094449.5e767b6c@e-healthexpert.org> On Wed, 1 Aug 2012 00:24:12 -0400, "Ian M. Evans" wrote: > I'm just looking for a way to speed up some dynamic pages that don't > have personalization and can basically be static for a few minutes. > Dedicated server has 2 gig RAM and runs nginx, php-fpm, mysql 5 and a > qmail server. > > Any thoughts or config example links? 1. create dir /var/lib/nginx/fastcgicache and make it readable by nginx 2. at the very beginning of your /etc/nginx/sites-available/yoursite file (i.e., before the first server { ... line) include these 3 lines: fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 loader_threshold=2000; map $http_cookie $no_cache { default 0; ~SESS 1; } fastcgi_cache_key "$scheme$request_method$host$request_uri"; 3. 
in the block where you pass control to php-fpm (i.e., the block where you put include fastcgi_params; fastcgi_pass ...; fastcgi_param ...; and so on) add these: fastcgi_cache MYCACHE; fastcgi_keep_conn on; fastcgi_cache_bypass $no_cache; fastcgi_no_cache $no_cache; fastcgi_cache_valid 200 301 10m; fastcgi_cache_valid 302 5m; fastcgi_cache_valid 404 1m; fastcgi_cache_use_stale error timeout invalid_header updating http_500; fastcgi_ignore_headers Cache-Control Expires; expires epoch; fastcgi_cache_lock on; Done. Regards, M. From pcgeopc at gmail.com Wed Aug 1 10:20:36 2012 From: pcgeopc at gmail.com (Geo P.C.) Date: Wed, 1 Aug 2012 15:50:36 +0530 Subject: Unable to login with proxypass Message-ID: Dear Sir, We need to proxy-pass a URL so that my.domain.com serves the content of drupal.apps.server.com. We configured a rewrite and it is working fine. Then we configured proxy_pass, and while accessing my.domain.com we get the contents, but we are unable to log in to the application (Drupal admin page). If we access drupal.apps.server.com directly, we are able to log in and access the admin page. Please see our configuration: server { listen 80; server_name my.domain.com; location / { proxy_pass http://drupal.apps.server.com/; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Can anyone please help us with this? Thanks Geo From ian at ianhobson.co.uk Wed Aug 1 10:30:08 2012 From: ian at ianhobson.co.uk (Ian Hobson) Date: Wed, 01 Aug 2012 11:30:08 +0100 Subject: How to change the root for testing In-Reply-To: <18c5a33758a4ceef0d64283e926031b2.NginxMailingListEnglish@forum.nginx.org> References: <5014F11C.8090803@ianhobson.co.uk> <18c5a33758a4ceef0d64283e926031b2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <50190530.7070602@ianhobson.co.uk> Hi leki75, Comments interspersed. 
On 30/07/2012 16:19, leki75 wrote: > Do you mean that requesting /testing/something.php tries to use > /home/ian/websites/coachmaster/htsecure/something.php instead of > /home/ian/websites/coachmaster/testing/something.php? Yes, exactly. > > In the regexp location the document_root is inherited from the server > context. I expected that to be replaced as the location is more specific. Oh well. > > I think this would help you: > > map $reseller $reseller_path { > default /home/ian/websites/coachmaster/htsecure; > testing /home/ian/websites/coachmaster/testing; > } > > server { > index index.php index.html index.htm; > root /home/ian/websites/coachmaster/htsecure; > > rewrite ^/(?<reseller>kaleidoscope|chat|Spanish|3MCoach|testing)(.*)$ > $2?rs=$reseller last; I don't understand what the ?<reseller> bit is doing. (and Google has been no help!). Regards Ian > > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9000; > fastcgi_param SCRIPT_FILENAME $reseller_path$fastcgi_script_name; > fastcgi_param HTTPS ON; > include /etc/nginx/fastcgi_params; > } > } > > -- Ian Hobson 31 Sheerwater, Northampton NN3 5HU, Tel: 01604 513875 Preparing eBooks for Kindle and ePub formats to give the best reader experience. 
From nginx-forum at nginx.us Wed Aug 1 11:02:44 2012 From: nginx-forum at nginx.us (sayres) Date: Wed, 1 Aug 2012 07:02:44 -0400 (EDT) Subject: change the root directory in nginx In-Reply-To: <06273df2dde92efcb5754657f886ca34.NginxMailingListEnglish@forum.nginx.org> References: <06273df2dde92efcb5754657f886ca34.NginxMailingListEnglish@forum.nginx.org> Message-ID: This is my default config: # # The default server # server { listen 80; server_name _; #charset koi8-r; #access_log logs/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm index.php; } error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} location /phpMyAdmin { alias /usr/share/phpMyAdmin; index index.php index.html index.htm; } location ~ /phpMyAdmin/.*\.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/$uri; fastcgi_intercept_errors on; include fastcgi_params; } location ~ \.php$ { try_files $uri =404; root /usr/share/nginx/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} location ~ /\.ht { deny all; } } In which part must I add /home/user/web? Sorry, I'm a newcomer to nginx and my English is very bad. 
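For illustration, a pared-down sketch of the configuration above with a single server-level root. It assumes /home/user/web is the intended document root, as the question implies; phpMyAdmin keeps its own root because it lives outside that tree:

```nginx
server {
    listen 80;
    server_name _;

    # one root at server level; every location inherits it
    # unless it declares its own
    root /home/user/web;
    index index.html index.htm index.php;

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # phpMyAdmin lives outside /home/user/web, so it keeps its own paths
    location /phpMyAdmin {
        alias /usr/share/phpMyAdmin;
        index index.php;
    }
    location ~ /phpMyAdmin/.*\.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME /usr/share$uri;
        include fastcgi_params;
    }

    location ~ /\.ht { deny all; }
}
```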
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229128,229185#msg-229185 From appa at perusio.net Wed Aug 1 11:18:06 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 01 Aug 2012 13:18:06 +0200 Subject: Unable to login with proxypass In-Reply-To: References: Message-ID: <877gtivp1t.wl%appa@perusio.net> On 1 Ago 2012 12h20 CEST, pcgeopc at gmail.com wrote: > Dear Sir > > We need to proxy pass an url in which my.domain.com need to get > drupal.apps.server.com. We configured rewrite and is working fine. > > Then we configured proxypass and while accessing my.domain.com we > are getting the contents but we are unable to login to application > (Drupal admin page). If you are accessing directly through > drupal.apps.server.com we are able to login and access the admin > page. > > Please see our configuration : > server { > listen 80; > server_name my.domain.com; > location / { > proxy_pass http://drupal.apps.server.com/; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > } > } Have you tried to use proxy_redirect? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect E.g.: proxy_redirect http://drupal.apps.server.com/ http://my.domain.com/; What do the logs say? --- appa From nginx-forum at nginx.us Wed Aug 1 11:37:10 2012 From: nginx-forum at nginx.us (yashgt) Date: Wed, 1 Aug 2012 07:37:10 -0400 (EDT) Subject: Single server with multiple hierarchies Message-ID: <6f3cd8cbed3f5644de9ba76e372d10b3.NginxMailingListEnglish@forum.nginx.org> Hi, My directory structure is: /usr/share/nginx/www /magento/current <- This is my application /phpmyadmin I changed the root of the server to /usr/share/nginx/www/magento/current. Now my app works perfectly when I access http://myserver/index.php. I would like to configure phpmyadmin such that it is accessible as http://myserver/phpmyadmin/index.php. 
Please note that the root of my application is different than the root of phpmyadmin and both are to be served from the same server. If I create a location with ^~ /phpmyadmin/ can I set a root in it? Will I have to do something to pass the index.php to FCGI. Will this have to be done in the location directive? Thanks, Yash Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229189,229189#msg-229189 From ianevans at digitalhit.com Wed Aug 1 13:02:38 2012 From: ianevans at digitalhit.com (Ian Evans) Date: Wed, 01 Aug 2012 09:02:38 -0400 Subject: nginx simple caching solutions In-Reply-To: <20120801094449.5e767b6c@e-healthexpert.org> References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> <20120801094449.5e767b6c@e-healthexpert.org> Message-ID: <501928EE.1070104@digitalhit.com> On 01/08/2012 4:44 AM, Mark Alan wrote: >> Any thoughts or config example links? > > 1. create dir /var/lib/nginx/fastcgicache and make it readable by nginx > > 2. at the very begining of your /etc/nginx/sites-available/yoursite > file (i.e., before the first server { ... line) include these 3 lines: > > fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 > keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 > loader_threshold=2000; > > map $http_cookie $no_cache { default 0; ~SESS 1; } > > fastcgi_cache_key "$scheme$request_method$host$request_uri"; > > 3. in the block where you pass the control to php-fpm (i.e., the block > were you put include fastcgi_params; fastcgi_pass ...; > fastcgi_param ...; and so on ) add these: > > fastcgi_cache MYCACHE; > fastcgi_keep_conn on; > fastcgi_cache_bypass $no_cache; > fastcgi_no_cache $no_cache; > fastcgi_cache_valid 200 301 10m; > fastcgi_cache_valid 302 5m; > fastcgi_cache_valid 404 1m; > fastcgi_cache_use_stale error timeout invalid_header updating http_500; > fastcgi_ignore_headers Cache-Control Expires; > expires epoch; > fastcgi_cache_lock on; Wow, Mark, thanks. 
I was doing some testing earlier tonight and a few of my pages were fast loading on their own but under load they just ground the system to a halt. So this should help. How do I turn off the caching for a specific directory? I'll need to do that on my phpmyadmin and data entry/update dirs. Thanks again. From appa at perusio.net Wed Aug 1 13:13:16 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 01 Aug 2012 15:13:16 +0200 Subject: nginx simple caching solutions In-Reply-To: <501928EE.1070104@digitalhit.com> References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> <20120801094449.5e767b6c@e-healthexpert.org> <501928EE.1070104@digitalhit.com> Message-ID: <876292vjpv.wl%appa@perusio.net> On 1 Ago 2012 15h02 CEST, ianevans at digitalhit.com wrote: > On 01/08/2012 4:44 AM, Mark Alan wrote: >>> Any thoughts or config example links? >> >> 1. create dir /var/lib/nginx/fastcgicache and make it readable by >> nginx >> >> 2. at the very begining of your /etc/nginx/sites-available/yoursite >> file (i.e., before the first server { ... line) include these 3 >> lines: >> >> fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 >> keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 >> loader_threshold=2000; >> >> map $http_cookie $no_cache { default 0; ~SESS 1; } >> >> fastcgi_cache_key "$scheme$request_method$host$request_uri"; >> >> 3. in the block where you pass the control to php-fpm (i.e., the >> block >> were you put include fastcgi_params; fastcgi_pass ...; >> fastcgi_param ...; and so on ) add these: >> >> fastcgi_cache MYCACHE; fastcgi_keep_conn on; >> fastcgi_cache_bypass $no_cache; fastcgi_no_cache >> $no_cache; fastcgi_cache_valid 200 301 10m; >> fastcgi_cache_valid 302 5m; fastcgi_cache_valid 404 >> 1m; fastcgi_cache_use_stale error timeout >> invalid_header updating http_500; >> fastcgi_ignore_headers Cache-Control Expires; >> expires epoch; fastcgi_cache_lock on; > > Wow, Mark, thanks. 
> I was doing some testing earlier tonight and a > few of my pages were fast loading on their own but under load they > just ground the system to a halt. So this should help. > > How do I turn off the caching for a specific directory? I'll need to > do that on my phpmyadmin and data entry/update dirs. Add at the http level a map directive: map $uri $no_cache_dirs { default 0; /phpmyadmin 1; /data/dir 1; /update/dir 1; } Use the proper URIs for your setup. Add $no_cache_dirs to your fastcgi_cache_bypass and fastcgi_no_cache directives. --- appa From christian.boenning at gmail.com Wed Aug 1 13:59:45 2012 From: christian.boenning at gmail.com (=?ISO-8859-1?Q?Christian_B=F6nning?=) Date: Wed, 1 Aug 2012 15:59:45 +0200 Subject: openssl depencies for nginx Message-ID: Hi, I've noticed that once '--with-openssl' is specified as a parameter to './configure' it takes ages to build nginx (which is really down to the OpenSSL build, not nginx). However, here's a small change request for how nginx could install the OpenSSL dependency: OpenSSL provides a `make install_sw` target which does not install all the man pages, which aren't needed anyway. For the record - from an already compiled version: lb01# time make install_sw [...] real 0m3.073s user 0m0.252s sys 0m1.924s lb01# time make install [...] real 1m30.423s user 0m26.846s sys 1m0.440s lb01# Regards, Chris From mdounin at mdounin.ru Wed Aug 1 14:32:37 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 Aug 2012 18:32:37 +0400 Subject: openssl depencies for nginx In-Reply-To: References: Message-ID: <20120801143237.GN40452@mdounin.ru> Hello! On Wed, Aug 01, 2012 at 03:59:45PM +0200, Christian B?nning wrote: > I've noticed that once '--with-openssl' is specified as parameter to > './configure' it takes ages to build nginx (which is depending on > openssl build really, not nginx). 
However there's a little > changerequest on how nginx could install the openssl depency: openssl > does provide a `make install_sw` command which does not install all > that man-stuff which isn't needed anyway. Yep, I've looked into install_sw a while ago, it looks much faster. Unfortunately, "install_sw" target isn't available in OpenSSL 0.9.7 (which is the minimum OpenSSL version as supported by nginx now). Maxim Dounin From ianevans at digitalhit.com Wed Aug 1 15:18:36 2012 From: ianevans at digitalhit.com (Ian Evans) Date: Wed, 01 Aug 2012 11:18:36 -0400 Subject: nginx simple caching solutions In-Reply-To: <876292vjpv.wl%appa@perusio.net> References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> <20120801094449.5e767b6c@e-healthexpert.org> <501928EE.1070104@digitalhit.com> <876292vjpv.wl%appa@perusio.net> Message-ID: <501948CC.4010708@digitalhit.com> I set up the cache and restarted nginx. How do I debug it? I ab tested the front page and it's still slow with concurrent requests. There seems to be a few snippets in the cache dir, but not every php file is getting cached. Thanks. From appa at perusio.net Wed Aug 1 15:27:24 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 01 Aug 2012 17:27:24 +0200 Subject: nginx simple caching solutions In-Reply-To: <501948CC.4010708@digitalhit.com> References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> <20120801094449.5e767b6c@e-healthexpert.org> <501928EE.1070104@digitalhit.com> <876292vjpv.wl%appa@perusio.net> <501948CC.4010708@digitalhit.com> Message-ID: <873946vdib.wl%appa@perusio.net> On 1 Ago 2012 17h18 CEST, ianevans at digitalhit.com wrote: > I set up the cache and restarted nginx. How do I debug it? I ab > tested the front page and it's still slow with concurrent > requests. There seems to be a few snippets in the cache dir, but not > every php file is getting cached. Use cURL. 
Add a header with the cache status: ## Add a cache miss/hit status header. add_header X-My-Cache $upstream_cache_status; to your cache config. Check for this header when doing requests with cURL. --- appa From ianevans at digitalhit.com Wed Aug 1 15:43:11 2012 From: ianevans at digitalhit.com (Ian Evans) Date: Wed, 01 Aug 2012 11:43:11 -0400 Subject: nginx simple caching solutions In-Reply-To: <873946vdib.wl%appa@perusio.net> References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> <20120801094449.5e767b6c@e-healthexpert.org> <501928EE.1070104@digitalhit.com> <876292vjpv.wl%appa@perusio.net> <501948CC.4010708@digitalhit.com> <873946vdib.wl%appa@perusio.net> Message-ID: <50194E8F.9050307@digitalhit.com> On 01/08/2012 11:27 AM, Ant?nio P. P. Almeida wrote: > Use cURL. Add a header with the cache status: > > ## Add a cache miss/hit status header. > add_header X-My-Cache $upstream_cache_status; > > to your cache config. Check for this header when doing requests with cURL. Thanks. I forgot I was sending out a cookie on the test page. Got rid of that and the page was cached. From francis at daoine.org Wed Aug 1 18:33:27 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 1 Aug 2012 19:33:27 +0100 Subject: change the root directory in nginx In-Reply-To: References: <06273df2dde92efcb5754657f886ca34.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120801183327.GD32371@craic.sysops.org> On Wed, Aug 01, 2012 at 07:02:44AM -0400, sayres wrote: Hi there, Every location{} can have a "root" directive. If a location has a "root" directive, then that is the root for requests handled in that location. If a location does not have a "root" directive, then the "root" directive from the server{} level applies. If the server{} does not have a "root" directive, then the "root" directive from the http{} level applies. If http{} does not have a "root" directive, then the compile-time default applies. > Which part I must add /home/user/web?? 
I suggest that you remove all of the current "root" directives, and add yours at the server{} block level. f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Aug 1 18:45:28 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 1 Aug 2012 19:45:28 +0100 Subject: How to change the root for testing In-Reply-To: <50190530.7070602@ianhobson.co.uk> References: <5014F11C.8090803@ianhobson.co.uk> <18c5a33758a4ceef0d64283e926031b2.NginxMailingListEnglish@forum.nginx.org> <50190530.7070602@ianhobson.co.uk> Message-ID: <20120801184528.GE32371@craic.sysops.org> On Wed, Aug 01, 2012 at 11:30:08AM +0100, Ian Hobson wrote: > On 30/07/2012 16:19, leki75 wrote: Hi there, > >Do you mean that requesting /testing/something.php tries to use > >/home/ian/websites/coachmaster/htsecure/something.php instead of > >/home/ian/websites/coachmaster/testing/something.php? > Yes, Exactly. > >In the regexp location the document_root is inherited from the server > >context. > I expected that to be replaced as the location is more specific. Oh well. One request is handled by one location. You have the location definitions: location /testing {} location ~ \.php$ {} Per the docs (http://nginx.org/r/location), the request for /testing/something.php is handled by the second location there. What you want is some way of setting "fastcgi_param SCRIPT_FILENAME" to point to the testing filename, when appropriate. Probably the least-impact way of doing this would be to add a new location ~ ^/testing/.*php$ {} with the same content as your current php location, plus the root directive that you want. There are other ways of doing this too. > >rewrite ^/(?<reseller>kaleidoscope|chat|Spanish|3MCoach|testing)(.*)$ > >$2?rs=$reseller last; > I don't understand what the ?<reseller> bit is doing. (and Google has > been no help!). It's a Perl-compatible regex named capture. Older PCRE libraries may not recognise the syntax, in which case you should see a clear warning to that effect. 
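Putting the pieces of this thread together, a sketch of the named-capture setup under discussion (paths taken from the thread; untested, and the map keys are only the two shown in the original suggestion):

```nginx
# $reseller is filled in by the named capture in the rewrite below;
# map turns it into the docroot to use for PHP (map is evaluated lazily,
# so the order of declaration does not matter).
map $reseller $reseller_path {
    default /home/ian/websites/coachmaster/htsecure;
    testing /home/ian/websites/coachmaster/testing;
}

server {
    root /home/ian/websites/coachmaster/htsecure;
    index index.php index.html index.htm;

    # (?<reseller>...) is a PCRE named capture: the matched prefix is
    # saved into $reseller for use after the rewrite.
    rewrite ^/(?<reseller>kaleidoscope|chat|Spanish|3MCoach|testing)(.*)$
            $2?rs=$reseller last;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $reseller_path$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }
}
```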
All the best, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Aug 1 19:08:11 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 1 Aug 2012 20:08:11 +0100 Subject: Single server with multiple hierarchies In-Reply-To: <6f3cd8cbed3f5644de9ba76e372d10b3.NginxMailingListEnglish@forum.nginx.org> References: <6f3cd8cbed3f5644de9ba76e372d10b3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120801190811.GF32371@craic.sysops.org> On Wed, Aug 01, 2012 at 07:37:10AM -0400, yashgt wrote: Hi there, > My directory structure is: > /usr/share/nginx/www > /magento/current <- This is my application > /phpmyadmin > > I changed the root of the server to > /usr/share/nginx/www/magento/current. You changed the server{}-level "root" directive, so that any location{} that does not include its own "root" directive will use that one. > I would like to configured phpmyadmin such that it is accessible as > http://myserver/phpmyadmin/index.php. > > Please note that the root of my application is different than the root > of phpmyadmin and both are to be served from the same server. A http request comes in, and is processed by exactly one location. An internal rewrite may cause it to then be processed by one other location. This can happen again. > If I create a location with ^~ /phpmyadmin/ can I set a root in it? Will > I have to do something to pass the index.php to FCGI. Will this have to > be done in the location directive? Yes to each. 
You'll probably want to nest locations, like location ^~ /phpmyadmin/ { #root and other directives location ~ php { #php-relevant directives } } Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Aug 1 21:09:03 2012 From: nginx-forum at nginx.us (sayres) Date: Wed, 1 Aug 2012 17:09:03 -0400 (EDT) Subject: change the root directory in nginx In-Reply-To: References: <06273df2dde92efcb5754657f886ca34.NginxMailingListEnglish@forum.nginx.org> Message-ID: <48c40709180e78c3a177e3a09ed4699f.NginxMailingListEnglish@forum.nginx.org> Sorry, dude. Can you explain with an example? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229128,229214#msg-229214 From nginx-forum at nginx.us Thu Aug 2 02:32:55 2012 From: nginx-forum at nginx.us (jakecattrall) Date: Wed, 1 Aug 2012 22:32:55 -0400 (EDT) Subject: Curl, Nginx and the connections conundrum. Message-ID: Hey folks, I'd like to turn your attention to this frustrating issue I've been facing. I'm running a dedicated server... Centos 6, PHP 5.3.3, Nginx 1.0.15. Great hardware, no problems. Nginx uses fastcgi to run php. The server communicates with another server using remote SQL. A file called download.php initiates a mysql connection, checks some details in the database and then begins streaming bytes to the user with a Content-Disposition header. 
No matter what I do, I cannot get simultaneous connections to download a file above 5. For instance if I download a file using a download manager, a maximum of 5 connections can be made, the rest timeout. I've setup nginx to accept up to 32 connections, mysql connection is closed before the file begins to stream so there shouldn't be connection limit issues there. According to the logs, Nginx seems to be giving a return code 499 on the 6th request. Also it's max 5 connections every time, and when I want to download a second file it won't start because there's a maximum of 5 connections for my ip. Does anybody have any idea how I can increase the amount of connections? Perhaps an idea of what else I can check? I've also posted my troubles here: http://stackoverflow.com/questions/11769520/linux-server-file-download-connection-limit Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229216,229216#msg-229216 From nbubingo at gmail.com Thu Aug 2 03:51:55 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Thu, 2 Aug 2012 11:51:55 +0800 Subject: Can multiple nginx instances share a single proxy cache store? In-Reply-To: <5017E798.1060302@heinlein-support.de> References: <5017DF1D.5080206@heinlein-support.de> <20120731135827.GS40452@mdounin.ru> <5017E798.1060302@heinlein-support.de> Message-ID: Maybe you can try the srcache module: https://github.com/agentzh/srcache-nginx-module. This module can store and fetch cached page from remote memcached. Thanks. 2012/7/31 Isaac Hailperin : > Thank you Maxim, I have made my decision now! > > Isaac > > > On 07/31/2012 03:58 PM, Maxim Dounin wrote: >> >> Hello! >> >> On Tue, Jul 31, 2012 at 03:35:25PM +0200, Isaac Hailperin wrote: >> >>> >>> Hi, >>> >>> I am planning to deploy multiple nginx servers (10) to proxy a bunch >>> of apaches(20). The apaches host about 4000 vhosts, with a total >>> volume of about 1TB. 
>>> One scenario I am thinking of with regard to hard disk storage of >>> the proxy cache would be to have a single storage object, eg. NAS, >>> connected to all 10 nginx servers via fibre channel. >>> >>> This would have the advantage of only pulling items into cache once, >>> and would also avoid cache inconsistencies, which could at least be >>> a temporal problem if all 10 nginx servers would have their own >>> cache. >>> >>> My question now is: would this work in theory? >>> Can multiple nginx instances share a single proxy cache store? >> >> >> No. >> >>> I am thinking of cache management, all 10 nginx instances would try >>> to manage the same cache directory. I don't know enough about the >>> cache management to understand if there are problems with this >>> scenario. >>> >>> Strictly speaking this is a second question, but still: the >>> alternative would be to give the nginx local storage for the proxy >>> cache (e.g. a raid 5, or even jbod (just a bunch of disks)). This >>> would obviously be much simpler to set up and manage, und thus be >>> more robust (the single storage would be a single point of failure). >>> Which would you recommend? >> >> >> Use local storage. >> >> The main disadvantage of a network storage pretending to be a >> local filesystem is blocking I/O. Even with fully working AIO (in >> contrast to one available under Linux, which requires directio) >> there are still blocking operations like open()/fstat(), and this >> will likely result in suboptimal nginx performance even if just >> serving static files from such storage. 
>> >> Maxim Dounin >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From chris+nginx at schug.net Thu Aug 2 05:39:31 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Thu, 02 Aug 2012 07:39:31 +0200 Subject: MIME type oddity when using try_files in combination with location/alias using regex captures Message-ID: <6a68df70c093f8cca848a3a38d6efbc0@schug.net> Hello! Given is following minimized test case server { listen 80; server_name t1.example.com; root /data/web/t1.example.com/htdoc; location ~ ^/quux(/.*)?$ { alias /data/web/t1.example.com/htdoc$1; try_files '' =404; } } on Nginx 1.3.4 (but not specific to that version) # nginx -V nginx version: nginx/1.3.4 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --user=nginx --group=nginx --with-openssl=openssl-1.0.1c --with-debug --with-http_stub_status_module --with-http_ssl_module --with-ipv6 and following file system layout # find /data/web/t1.example.com/htdoc/ /data/web/t1.example.com/htdoc/ /data/web/t1.example.com/htdoc/foo /data/web/t1.example.com/htdoc/foo/bar.gif Accessing the file directly returns the expected 'Content-Type' response header with the value 'image/gif' $ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/foo/bar.gif HTTP/1.1 200 OK Server: nginx/1.3.4 Date: Thu, 02 Aug 2012 05:13:40 GMT Content-Type: image/gif Content-Length: 68 Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT Connection: keep-alive ETag: "501a0a78-44" Accept-Ranges: bytes Accessing the file via location /quux returns 'Content-Type' response header with the value 
'application/octet-stream' (basically it falls back to the setting of 'default_type') $ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/quux/foo/bar.gif HTTP/1.1 200 OK Server: nginx/1.3.4 Date: Thu, 02 Aug 2012 05:13:42 GMT Content-Type: application/octet-stream Content-Length: 68 Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT Connection: keep-alive ETag: "501a0a78-44" Accept-Ranges: bytes It is unclear to me if this is expected behavior (and if so, I am having a hard time to find it in the documentation) and what would be the best way to mitigate the problem. Defining nested locations within the /quux one for each combination of file extension/MIME type works but looks very wrong to me. -cs From johannes_graumann at web.de Thu Aug 2 05:45:36 2012 From: johannes_graumann at web.de (Johannes Graumann) Date: Thu, 02 Aug 2012 07:45:36 +0200 Subject: Distribution of requests to multiple lxc containers Message-ID: Dear all, My setup is to run multiple web services in different lxc containers and nginx on the host to distribute requests to the appropriate container. I'm trying the two entries below as seperate files in sites-available/sites- enabled, but requests to "mail.MAYDOMAIN.org" (trying to reach the lxc container (supposedly) proxied by the second entry) end up returning an error produced by the the first one. Any insight into this newbies folly would be highly appreciated - where do I err? Thank you for your time. Sincerely, Joh PS: Can the SSL related entries be put into one joint place for all entries rather than repeating it? 
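Regarding the PS just above: the ssl_* directives can be declared once at the http{} level and are inherited by every server{} block, so the repeated SSL lines are not needed. A rough sketch (using the certificate paths from the config below; the server bodies and cipher line are elided):

```nginx
http {
    # Declared once, inherited by all server{} blocks
    ssl_certificate     /etc/ssl/private/cacert.MYDOMAIN.org.pem;
    ssl_certificate_key /etc/ssl/private/cacert.MYDOMAIN.org_privatkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols       SSLv3 TLSv1;
    ssl_prefer_server_ciphers on;

    server {
        listen 443;
        server_name MYDOMAIN.org HOSTING.net;
        ssl on;
        # location / { proxy_pass ...; } as in the original entry
    }

    server {
        listen 443;
        server_name mail.MYDOMAIN.org mail.HOSTING.net;
        ssl on;
        # location / { proxy_pass ...; } as in the original entry
    }
}
```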
JG > server { > listen 443; > server_name MYDOMAIN.org HOSTING.net; > client_max_body_size 40M; > # SSL is using CACert credentials > ssl on; > ssl_certificate /etc/ssl/private/cacert.MYDOMAIN.org.pem; > ssl_certificate_key /etc/ssl/private/cacert.MYDOMAIN.org_privatkey.pem; > ssl_session_timeout 5m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:!LOW:RC4+RSA:+HIGH:+MEDIUM:+SSLv3: +EXP; > ssl_prefer_server_ciphers on; > # Proxy the "plone.MYDOMAIN.org" lxc container > location / { > proxy_pass http://10.10.10.2:8080/VirtualHostBase/https/HOSTING.net:443/MYDOMAINPlone/VirtualHostRoot/; > } > } > server { > listen 443; > server_name mail.MYDOMAIN.org mail.HOSTING.net; > client_max_body_size 40M; > # SSL is using CACert credentials > ssl on; > ssl_certificate /etc/ssl/private/cacert.MYDOMAIN.org.pem; > ssl_certificate_key /etc/ssl/private/cacert.MYDOMAIN.org_privatkey.pem; > ssl_session_timeout 5m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:!LOW:RC4+RSA:+HIGH:+MEDIUM:+SSLv3: +EXP; > ssl_prefer_server_ciphers on; > # Proxy the "kolab.MYDOMAIN.org" lxc container > location / { > proxy_pass http://10.10.10.4; > } > } From chris+nginx at schug.net Thu Aug 2 06:29:52 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Thu, 02 Aug 2012 08:29:52 +0200 Subject: Best practice on FastCGI configuration to avoid ambiguous FCGI_PARAMS keys Message-ID: Most of the times when configuring a FastCGI application it is just good enough to include the fastcgi.conf which ships with the Nginx distribution. On the other hand sometimes it is required to just change one or few of the definitions in that file, for example when the application resided outside document root. Practically something like include fastcgi.conf; fastcgi_param DOCUMENT_ROOT /foo/bar; worked for me so far (in combination with php-fpm(8)). 
While debugging a different issue I also had a network sniffer running on the traffic between Nginx and the FastCGI application server, and I found out that Nginx takes that sort of configuration literally and transfers two DOCUMENT_ROOT parameters, first with the setting given in fastcgi.conf and then the one specified explicitly. From a theoretical point of view one can argue that this is a configuration flaw, but from a practical point of view I wonder if it wouldn't make sense for Nginx to allow override of FCGI_PARAMS keys and just transfer the last definition. The FastCGI specification [1] is pretty vague on ambiguous FCGI_PARAMS keys and the order of processing. This basically passes the ball to the FastCGI application server to sort out what would be "best", e.g. take the very first value definition of a specific FCGI_PARAMS key, the last one, roll a die, etc. This might lead to the situation where the same Nginx configuration could trigger different behavior depending on which FastCGI application server is used. Again, I don't see this directly as Nginx's fault; the question is just whether this is desirable for the ease of system administration, to keep configurations compact and clear while still having well-defined behavior. Reduction of network traffic between Nginx and the FastCGI application would be another aspect. Any other opinions on that? -cs [1] http://www.fastcgi.com/om_archive/kit/doc/fcgi-spec.html#S5.2 From chris+nginx at schug.net Thu Aug 2 06:54:45 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Thu, 02 Aug 2012 08:54:45 +0200 Subject: Curl, Nginx and the connections conundrum. In-Reply-To: References: Message-ID: On 2012-08-02 04:32, jakecattrall wrote: > Hey folks, > > I'd like to turn your attention to this frustrating issue I've been > facing. > > I'm running a dedicated server... Centos 6, PHP 5.3.3, Nginx 1.0.15. > Great hardware, no problems. > > Nginx uses fastcgi to run php.
> > The server communicates with another server using remote sql. > > A file called download.php initiates a mysql connection, checks some > details in the database and then begins streaming bytes to the user > with > content-displacement. > > No matter what I do, I cannot get simultaneous connections to > download a > file above 5. For instance if I download a file using a download > manager, a maximum of 5 connections can be made, the rest timeout. [...] Have you checked the 'pm.max_children' setting in php-fpm.conf? -cs From chris+nginx at schug.net Thu Aug 2 08:38:15 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Thu, 02 Aug 2012 10:38:15 +0200 Subject: Bad side effect of (even unmatched) nested regex locations in regex locations with anonymous captures with try_files/alias Message-ID: <6e95fcc77c44c5f0bc98e6e8c26c1697@schug.net> Hello! I have another interesting scenario ;-) Given is following minimized test case server { listen 80; server_name t2.example.com; root /data/web/t2.example.com/htdoc; location ~ ^/bar(/.*)? { alias /data/web/t2.example.com/htdoc/foo$1; try_files '' =404; } location ~ ^/bla(/.*)? 
{ alias /data/web/t2.example.com/htdoc/foo$1; try_files '' =404; location ~ child_of_bla(?P.*)$ { return 418; } } } on Nginx 1.3.4 (but not specific to that version) # nginx -V nginx version: nginx/1.3.4 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --user=nginx --group=nginx --with-openssl=openssl-1.0.1c --with-debug --with-http_stub_status_module --with-http_ssl_module --with-ipv6 and follow file system layout $ find /data/web/t2.example.com/htdoc/ /data/web/t2.example.com/htdoc/ /data/web/t2.example.com/htdoc/foo /data/web/t2.example.com/htdoc/foo/quux.txt Plain access to quux.txt works of course $ curl -s -D - -H 'Host: t2.example.com' http://127.0.0.1/foo/quux.txt HTTP/1.1 200 OK Server: nginx/1.3.4 Date: Thu, 02 Aug 2012 08:27:57 GMT Content-Type: text/plain Content-Length: 5 Last-Modified: Thu, 02 Aug 2012 07:15:43 GMT Connection: keep-alive ETag: "501a291f-5" Accept-Ranges: bytes QUUX Access to location /bar works as well as expected $ curl -s -D - -H 'Host: t2.example.com' http://127.0.0.1/bar/quux.txt HTTP/1.1 200 OK Server: nginx/1.3.4 Date: Thu, 02 Aug 2012 08:28:10 GMT Content-Type: application/octet-stream Content-Length: 5 Last-Modified: Thu, 02 Aug 2012 07:15:43 GMT Connection: keep-alive ETag: "501a291f-5" Accept-Ranges: bytes QUUX but it breaks when the location of the same style has a nested location (which might be even unmatched; here this child_of_bla thingy), which also does a regex capture (doesn't matter whether this is an anonymous or named capture) $ curl -s -D - -H 'Host: t2.example.com' http://127.0.0.1/bla/quux.txt HTTP/1.1 404 Not Found Server: nginx/1.3.4 Date: Thu, 02 Aug 2012 08:28:13 GMT Content-Type: text/html Content-Length: 168 Connection: keep-alive 404 Not Found
404 Not Found
nginx/1.3.4
Nginx doesn't even seem to process anyhting in the 'try files phase' according to the debug log. 2012/08/02 10:28:13 [debug] 15741#0: *17 http process request line 2012/08/02 10:28:13 [debug] 15741#0: *17 http request line: "GET /bla/quux.txt HTTP/1.1" 2012/08/02 10:28:13 [debug] 15741#0: *17 http uri: "/bla/quux.txt" 2012/08/02 10:28:13 [debug] 15741#0: *17 http args: "" 2012/08/02 10:28:13 [debug] 15741#0: *17 http exten: "txt" 2012/08/02 10:28:13 [debug] 15741#0: *17 http process request header line 2012/08/02 10:28:13 [debug] 15741#0: *17 http header: "User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2" 2012/08/02 10:28:13 [debug] 15741#0: *17 http header: "Accept: */*" 2012/08/02 10:28:13 [debug] 15741#0: *17 http header: "Host: t2.example.com" 2012/08/02 10:28:13 [debug] 15741#0: *17 http header done 2012/08/02 10:28:13 [debug] 15741#0: *17 rewrite phase: 0 2012/08/02 10:28:13 [debug] 15741#0: *17 test location: ~ "^/bar(/.*)?" 2012/08/02 10:28:13 [debug] 15741#0: *17 test location: ~ "^/baz(/.*)?" 2012/08/02 10:28:13 [debug] 15741#0: *17 using configuration "" 2012/08/02 10:28:13 [debug] 15741#0: *17 http cl:-1 max:1048576 2012/08/02 10:28:13 [debug] 15741#0: *17 rewrite phase: 2 2012/08/02 10:28:13 [debug] 15741#0: *17 post rewrite phase: 3 2012/08/02 10:28:13 [debug] 15741#0: *17 generic phase: 4 2012/08/02 10:28:13 [debug] 15741#0: *17 generic phase: 5 2012/08/02 10:28:13 [debug] 15741#0: *17 access phase: 6 2012/08/02 10:28:13 [debug] 15741#0: *17 access phase: 7 2012/08/02 10:28:13 [debug] 15741#0: *17 post access phase: 8 2012/08/02 10:28:13 [debug] 15741#0: *17 try files phase: 9 (nothing happens here?) 
2012/08/02 10:28:13 [debug] 15741#0: *17 content phase: 10 2012/08/02 10:28:13 [debug] 15741#0: *17 content phase: 11 2012/08/02 10:28:13 [debug] 15741#0: *17 content phase: 12 2012/08/02 10:28:13 [debug] 15741#0: *17 http filename: "/data/web/t2.example.com/htdoc/bla/quux.txt" 2012/08/02 10:28:13 [error] 15741#0: *17 open() "/data/web/t2.example.com/htdoc/bla/quux.txt" failed (2: No such file or directory), client: 127.0.0.1, server: t2.example.com, request: "GET /bla/quux.txt HTTP/1.1", host: "t2.example.com" 2012/08/02 10:28:13 [debug] 15741#0: *17 http finalize request: 404, "/bla/quux.txt?" a:1, c:1 2012/08/02 10:28:13 [debug] 15741#0: *17 http special response: 404, "/bla/quux.txt?" 2012/08/02 10:28:13 [debug] 15741#0: *17 http set discard body 2012/08/02 10:28:13 [debug] 15741#0: *17 HTTP/1.1 404 Not Found Interestingly enough it works when I change the anonymous capture for /bar to a named one and replace the $1 in the alias by that named variable. Is this expected behavior or should I rather assume "anonymous captures are evil"? 
-cs From nginx-forum at nginx.us Thu Aug 2 08:52:08 2012 From: nginx-forum at nginx.us (e123e123e123) Date: Thu, 2 Aug 2012 04:52:08 -0400 (EDT) Subject: php can't run in auth folder Message-ID: I'm trying to use auth_basic with a password file, but after keying in the username and password the page is just blank; the .php file won't run. Please help. Settings as below: location ^~/phpmyadmin { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/httpd/html/phpmyadmin$fastcgi_script_name; auth_basic "Restricted"; auth_basic_user_file /home/httpd/html/phpmyadmin/.htpasswd; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229232,229232#msg-229232 From mdounin at mdounin.ru Thu Aug 2 10:01:10 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 2 Aug 2012 14:01:10 +0400 Subject: Best practice on FastCGI configuration to avoid ambiguous FCGI_PARAMS keys In-Reply-To: References: Message-ID: <20120802100110.GS40452@mdounin.ru> Hello! On Thu, Aug 02, 2012 at 08:29:52AM +0200, Christoph Schug wrote: > Most of the times when configuring a FastCGI application it is just > good enough to include the fastcgi.conf which ships with the Nginx > distribution. On the other hand sometimes it is required to just > change one or few of the definitions in that file, for example when > the application resided outside document root. Practically something > like > > include fastcgi.conf; > fastcgi_param DOCUMENT_ROOT /foo/bar; > > worked for me so far (in combination with php-fpm(8)). While > debugging a different issue I also had a network sniffer running the > traffic between Nginx and the FastCGI application server, I found > out that Nginx takes that sort of configuration literally and > transfers two DOCUMENT_ROOT parameters, first with the setting given > in fastcgi.conf and then the one specified explicitly.
> > From a theoretical point of view one can argue that this is a > configuration flaw, but from a practical point of view I wonder if > it wouldn't make sense for Nginx to allow override of FCGI_PARAMS > keys and just transfer the last definition. The FastCGI > specification [1] is pretty vague on ambiguous FCGI_PARAMS keys and I tend to think that it's more or less clarified here: http://www.fastcgi.com/om_archive/kit/doc/fcgi-spec.html#S6.2 As params are expected to represent "CGI/1.1 environment variables", keys should be unique. > the order of processing. This basically passes the ball to the > FastCGI application server to sort out what would be "best", e.g. > take the very first value definition of a specific FCGI_PARAMS, the > last one, roll a die, etc. This might lead to the situation where > the same Nginx configuration could trigger different behavior > depending on which FastCGI application server is used. Moreover, it would trigger different behaviour in different versions of the same application. E.g. php changed its handling somewhere near 5.2 (not sure about the exact version) from using the first param passed to the last param passed (well, not sure again, somewhere). > Again, I don't see this directly as Nginx's fault, the question is > just whether this is desirable for the ease of system administration > to keep configurations compact and clear with still a well-defined > behavior. Reduction of network traffic between Nginx and the > FastCGI application would be another aspect. > > Any other opinions on that? For now nginx doesn't impose any protocol-specific restrictions on what may be used in fastcgi_params (as well as similar things for other protocols); it just passes on literally what the configuration says. Matching the protocol restrictions is left as an exercise for the system administrator. I don't think we want to require uniqueness here, but probably a warning on duplicate fastcgi_params would be a way to go.
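The ambiguity described above can be sketched outside of nginx entirely. This is a hypothetical illustration (not nginx or PHP source); the pair list mimics what the network sniffer showed, fastcgi.conf's value followed by the explicit one:

```python
# Hypothetical sketch (not nginx or PHP source) of why duplicate
# FCGI_PARAMS keys are ambiguous: the backend receives an ordered list
# of name/value pairs, and which duplicate "wins" is up to the backend.
pairs = [
    ("DOCUMENT_ROOT", "/usr/share/nginx/html"),  # value from fastcgi.conf
    ("DOCUMENT_ROOT", "/foo/bar"),               # explicit fastcgi_param line
]

def first_wins(pairs):
    env = {}
    for key, value in pairs:
        env.setdefault(key, value)  # keep the first value seen for a key
    return env

def last_wins(pairs):
    return dict(pairs)  # dict() keeps the last value for a repeated key

print(first_wins(pairs)["DOCUMENT_ROOT"])  # /usr/share/nginx/html
print(last_wins(pairs)["DOCUMENT_ROOT"])   # /foo/bar
```

Per the version note above, older PHP behaved like first_wins and later PHP like last_wins, which is exactly why relying on duplicates is fragile.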
Maxim Dounin From i.hailperin at heinlein-support.de Thu Aug 2 10:09:07 2012 From: i.hailperin at heinlein-support.de (Isaac Hailperin) Date: Thu, 02 Aug 2012 12:09:07 +0200 Subject: How to set keys_zone for proxy_cache_path on linux Message-ID: <501A51C3.6040808@heinlein-support.de> Hi, I was wondering how to set the keys_zone parameter of the proxy_cache_path. The wiki says "Zone size should be set proportional to number of pages to cache. The size of the metadata for one page (file) depends on the OS; currently it is 64 bytes for FreeBSD/i386, and 128 bytes for FreeBSD/amd64. " Are these numbers also true for linux? If not, what are they? Isaac From mdounin at mdounin.ru Thu Aug 2 10:42:49 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 2 Aug 2012 14:42:49 +0400 Subject: Bad side effect of (even unmatched) nested regex locations in regex locations with anonymous captures with try_files/alias In-Reply-To: <6e95fcc77c44c5f0bc98e6e8c26c1697@schug.net> References: <6e95fcc77c44c5f0bc98e6e8c26c1697@schug.net> Message-ID: <20120802104249.GU40452@mdounin.ru> Hello! On Thu, Aug 02, 2012 at 10:38:15AM +0200, Christoph Schug wrote: > Hello! > > I have another interesting scenario ;-) Given is following minimized > test case > > server { > listen 80; > server_name t2.example.com; > > root /data/web/t2.example.com/htdoc; > > location ~ ^/bar(/.*)? { > alias /data/web/t2.example.com/htdoc/foo$1; This is expected to break once between location matching and accessing a file (which will evaluate variables in the "alias" directive) any regexp matching will happen. Not only nested regex location matching (which is kind of explicit), but even lookup of a map variable (with regexps) will be enough to break things. And this is why it's not recommended to use enumerated captures except for very simple configurations (or "rewrite" directive, where use of enumerated captures immediatly follows regexp matching). Use named captures instead and you'll be fine. [...] 
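Applied to the test case earlier in this thread, the named-capture variant being recommended would look roughly like this (a sketch; the capture name "tail" is my own choice, not from the original config):

```nginx
location ~ ^/bar(?<tail>/.*)? {
    # $tail survives later regex matching (nested regex locations, map
    # lookups) because it is stored by name rather than by position
    alias /data/web/t2.example.com/htdoc/foo$tail;
    try_files '' =404;
}
```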
Maxim Dounin From chris+nginx at schug.net Thu Aug 2 15:57:21 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Thu, 02 Aug 2012 17:57:21 +0200 Subject: Best practice on FastCGI configuration to avoid ambiguous FCGI_PARAMS keys In-Reply-To: <20120802100110.GS40452@mdounin.ru> References: <20120802100110.GS40452@mdounin.ru> Message-ID: <68dff31d4457c4edb5b3846299943979@schug.net> On 2012-08-02 12:01, Maxim Dounin wrote: [...] > I tend to think that it's more or less clarified here: > > http://www.fastcgi.com/om_archive/kit/doc/fcgi-spec.html#S6.2 > > As params are expected to represent "CGI/1.1 environment > variables", keys should be unique. I somehow missed that part, thanks for the hint. [...] > I'm don't think we want to require uniqueness here, but probably a > warning on duplicate fastcgi_params would be a way to go. Yup, that would definitively be step forward. Thanks -cs From chris+nginx at schug.net Thu Aug 2 15:59:40 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Thu, 02 Aug 2012 17:59:40 +0200 Subject: Bad side effect of (even unmatched) nested regex locations in regex locations with anonymous captures with try_files/alias In-Reply-To: <20120802104249.GU40452@mdounin.ru> References: <6e95fcc77c44c5f0bc98e6e8c26c1697@schug.net> <20120802104249.GU40452@mdounin.ru> Message-ID: <56d1a8305aa987c410f1519dae2f8660@schug.net> On 2012-08-02 12:42, Maxim Dounin wrote: [...] > And this is why it's not recommended to use enumerated captures > except for very simple configurations (or "rewrite" directive, > where use of enumerated captures immediatly follows regexp > matching). Use named captures instead and you'll be fine. Thanks Maxim, using named captures it exactly what I did. The question to me was more or less if the other configuration was intended to break. If that's the case, that this is mainly a documentation issue which should be added to either [1] or [2] (best with cross reference to each other). 
[1] http://www.nginx.org/en/docs/http/ngx_http_core_module.html#alias [2] http://www.nginx.org/en/docs/http/ngx_http_core_module.html#location The topic "named captures" is as far as I can see is only mentioned in [3]. It might be good to demonstrate its use in a wider context. While doing so, also a comment on the syntax might be great, as PCRE not always supported the Perl-style notation of "(?)" [4]. [3] http://www.nginx.org/en/docs/http/ngx_http_core_module.html#server_name [4] http://vcs.pcre.org/viewvc/code/trunk/doc/pcre.txt?r1=91&r2=93#l3410 Cheers -cs From nginx-forum at nginx.us Thu Aug 2 16:05:46 2012 From: nginx-forum at nginx.us (jakecattrall) Date: Thu, 2 Aug 2012 12:05:46 -0400 (EDT) Subject: Curl, Nginx and the connections conundrum. In-Reply-To: References: Message-ID: Perfect, thanks for the advice. Edit /etc/init.d/php_cgi set server_childs=32 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229216,229253#msg-229253 From mdounin at mdounin.ru Thu Aug 2 17:20:12 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 2 Aug 2012 21:20:12 +0400 Subject: Bad side effect of (even unmatched) nested regex locations in regex locations with anonymous captures with try_files/alias In-Reply-To: <56d1a8305aa987c410f1519dae2f8660@schug.net> References: <6e95fcc77c44c5f0bc98e6e8c26c1697@schug.net> <20120802104249.GU40452@mdounin.ru> <56d1a8305aa987c410f1519dae2f8660@schug.net> Message-ID: <20120802172011.GB40452@mdounin.ru> Hello! On Thu, Aug 02, 2012 at 05:59:40PM +0200, Christoph Schug wrote: > On 2012-08-02 12:42, Maxim Dounin wrote: > [...] > >And this is why it's not recommended to use enumerated captures > >except for very simple configurations (or "rewrite" directive, > >where use of enumerated captures immediatly follows regexp > >matching). Use named captures instead and you'll be fine. > > Thanks Maxim, > > using named captures it exactly what I did. The question to me was > more or less if the other configuration was intended to break. 
If > that's the case, that this is mainly a documentation issue which > should be added to either [1] or [2] (best with cross reference to > each other). > > [1] http://www.nginx.org/en/docs/http/ngx_http_core_module.html#alias > [2] > http://www.nginx.org/en/docs/http/ngx_http_core_module.html#location > > The topic "named captures" is as far as I can see is only mentioned > in [3]. It might be good to demonstrate its use in a wider context. > While doing so, also a comment on the syntax might be great, as PCRE > not always supported the Perl-style notation of "(?)" [4]. > > [3] http://www.nginx.org/en/docs/http/ngx_http_core_module.html#server_name > [4] > http://vcs.pcre.org/viewvc/code/trunk/doc/pcre.txt?r1=91&r2=93#l3410 We have various details about the captures in general and the issue with enumerated captures discussed in an introduction article here: http://nginx.org/en/docs/http/server_names.html#regex_names Adding links to every directive which support variables and/or execute regular expressions might be a bit too verbose. Maxim Dounin From francis at daoine.org Thu Aug 2 18:02:18 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 2 Aug 2012 19:02:18 +0100 Subject: change the root directory in nginx In-Reply-To: <97de19968c22f2ee5e9174b5c475a49e.NginxMailingListEnglish@forum.nginx.org> References: <06273df2dde92efcb5754657f886ca34.NginxMailingListEnglish@forum.nginx.org> <48c40709180e78c3a177e3a09ed4699f.NginxMailingListEnglish@forum.nginx.org> <97de19968c22f2ee5e9174b5c475a49e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120802180218.GG32371@craic.sysops.org> On Wed, Aug 01, 2012 at 05:09:39PM -0400, sayres wrote: > Can you explain by example??? In your current config file, add root /usr/share/nginx/html; immediately after server { and delete the other four mentions of root /usr/share/nginx/html; that are inside different location{} blocks. 
Your server should be effectively the same as it was. To change to a new root directory, change the one line that you added. f -- Francis Daly francis at daoine.org From ianevans at digitalhit.com Thu Aug 2 19:23:09 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Thu, 2 Aug 2012 15:23:09 -0400 Subject: Proxying webmin Message-ID: I sometimes like to work in the quiet environs of the local library. They've recently blocked access to various ports, so I can't ssh, sftp or even go to webmin, which is usually at port 10000. I understand I can have nginx proxy, say, example.com/admin to example.com:10000, but I'm wondering if anyone out there is currently doing this and whether there are any pitfalls I should watch out for on either the nginx or webmin side. Currently _at_ the library so I can't attempt any of this right now. Figured I'd send the email in case I might have some answers when I get home. :-) From nginx-forum at nginx.us Fri Aug 3 06:28:09 2012 From: nginx-forum at nginx.us (diego_xa) Date: Fri, 3 Aug 2012 02:28:09 -0400 (EDT) Subject: Location regular expression explanation Message-ID: <53e689fb7ff66fb15fa369406302e233.NginxMailingListEnglish@forum.nginx.org> Hi, I was recently reading the PHP FastCGI example at http://wiki.nginx.org/PHPFcgiExample. When it talks about the configuration to avoid the exploit in the uploads folder, there is a regular expression saying: location ~* (^(?!(?:(?!(php|inc)).)*/blogs\.dir/).*?(php|inc)) { I don't understand that regular expression. Could someone explain to me what ?! and ?: mean, or send me some link to documentation about this? I cannot find anything after some googling. Thanks and regards.
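The two constructs asked about can be illustrated quickly with Python, whose re module accepts the same syntax as PCRE. A sketch (the simplified pattern here is my own, not the one quoted from the wiki):

```python
import re

# (?!...) is a negative lookahead: it succeeds only if its pattern does
# NOT match at the current position, and it consumes no characters.
# (?:...) is a non-capturing group: it groups without creating a
# numbered backreference.
# Simplified version of the idea in the quoted location regex: match
# .php paths only when /blogs.dir/ does not appear in the path.
pattern = re.compile(r'^(?!(?:.*)/blogs\.dir/).*\.php$')

print(bool(pattern.match('/uploads/evil.php')))      # True: no /blogs.dir/
print(bool(pattern.match('/blogs.dir/1/evil.php')))  # False: lookahead rejects
```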
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229276,229276#msg-229276 From ru at nginx.com Fri Aug 3 07:17:32 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 3 Aug 2012 11:17:32 +0400 Subject: Location regular expression explanation In-Reply-To: <53e689fb7ff66fb15fa369406302e233.NginxMailingListEnglish@forum.nginx.org> References: <53e689fb7ff66fb15fa369406302e233.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120803071732.GC76464@lo0.su> On Fri, Aug 03, 2012 at 02:28:09AM -0400, diego_xa wrote: > Hi, > > I was recently reading the PHPfastcgi example at > http://wiki.nginx.org/PHPFcgiExample. When it talks about the > configuration to avoid the exploit on uploads folder, there is a > regular expression saying: location ~* > (^(?!(?:(?!(php|inc)).)*/blogs\.dir/).*?(php|inc)) { > > I don't understand that regular expression. Could someone explain to me what > ?! and ?: mean or send me some link to documentation about this? I cannot > find anything after some googling. > > Thanks and regards. http://www.pcre.org/pcre.txt From johannes_graumann at web.de Fri Aug 3 08:03:04 2012 From: johannes_graumann at web.de (Johannes Graumann) Date: Fri, 03 Aug 2012 10:03:04 +0200 Subject: Sub-domains through different /etc/nginx/sites-enabled entries? Message-ID: Hello, I'm refining my question here, which remained unanswered under the topic "Distribution of requests to multiple lxc containers" ... Is it possible to distribute sub-domains of one domain through independent files/links in sites-available/sites-enabled entries? I'd love to have the flexibility to enable/disable services running at different sub-domains without having to edit a (joined) file/link, but with the two entries below requests to "mail.MYDOMAIN.org" (trying to reach an lxc container (supposedly) proxied by the second entry) end up returning an error produced by the first one.
Thanks for any pointers on what I might be doing wrong or whether I have to resort to one sites-available/enabled file. Joh > server { > listen 443; > server_name MYDOMAIN.org HOSTING.net; > client_max_body_size 40M; > # SSL is using CACert credentials > ssl on; > ssl_certificate /etc/ssl/private/cacert.MYDOMAIN.org.pem; > ssl_certificate_key /etc/ssl/private/cacert.MYDOMAIN.org_privatkey.pem; > ssl_session_timeout 5m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:!LOW:RC4+RSA:+HIGH:+MEDIUM:+SSLv3: +EXP; > ssl_prefer_server_ciphers on; > # Proxy the "plone.MYDOMAIN.org" lxc container > location / { > proxy_pass http://10.10.10.2:8080/VirtualHostBase/https/HOSTING.net:443/MYDOMAINPlone/VirtualHostRoot/; > } > } > server { > listen 443; > server_name mail.MYDOMAIN.org mail.HOSTING.net; > client_max_body_size 40M; > # SSL is using CACert credentials > ssl on; > ssl_certificate /etc/ssl/private/cacert.MYDOMAIN.org.pem; > ssl_certificate_key /etc/ssl/private/cacert.MYDOMAIN.org_privatkey.pem; > ssl_session_timeout 5m; > ssl_protocols SSLv3 TLSv1; > ssl_ciphers ALL:!ADH:!EXPORT56:!LOW:RC4+RSA:+HIGH:+MEDIUM:+SSLv3: +EXP; > ssl_prefer_server_ciphers on; > # Proxy the "kolab.MYDOMAIN.org" lxc container > location / { > proxy_pass http://10.10.10.4; > } > } From unai at leanservers.com Fri Aug 3 09:26:23 2012 From: unai at leanservers.com (Unai Rodriguez) Date: Fri, 03 Aug 2012 17:26:23 +0800 Subject: Sub-domains through different /etc/nginx/sites-enabled entries? In-Reply-To: References: Message-ID: <32c6a1397e9e81852b4d6df7e1c5d2b4@leanservers.com> Johannes, Is this what you are asking for? # cat /etc/nginx/sites-enabled/site1 server { listen 80; server_name subdomain1.domain.com # config... } # cat /etc/nginx/sites-enabled/site2 server { listen 80; server_name subdomain2.domain.com # config... } ... # cat /etc/nginx/sites-enabled/siteN server { listen 80; server_name subdomainN.domain.com # config... 
} where: /etc/nginx/sites-enabled/siteX is a symlink to: /etc/nginx/sites-available/siteX Yes, this is possible :) -- Unai Rodriguez Cofounder LeanWired LLP www.leanservers.com unai at leanservers.com From nginx-forum at nginx.us Fri Aug 3 11:17:19 2012 From: nginx-forum at nginx.us (sayres) Date: Fri, 3 Aug 2012 07:17:19 -0400 (EDT) Subject: change the root directory in nginx In-Reply-To: <06273df2dde92efcb5754657f886ca34.NginxMailingListEnglish@forum.nginx.org> References: <06273df2dde92efcb5754657f886ca34.NginxMailingListEnglish@forum.nginx.org> Message-ID: This is my new current config: ############## Document Root ####################### root /home/sayres/web; server { ############### General Settings #################### listen 80; server_name _; location / { # root /usr/share/nginx/html; index index.html index.htm index.php; } error_page 404 /404.html; location = /404.html { # root /usr/share/nginx/html; } error_page 500 502 503 504 /50x.html; location = /50x.html { # root /usr/share/nginx/html; } ############## PHPMyAdmin ####################### location /phpMyAdmin { alias /usr/share/phpMyAdmin; index index.php index.html index.htm; } location ~ /phpMyAdmin/.*\.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/$uri; fastcgi_intercept_errors on; include fastcgi_params; } ############## PHP ################################# location ~ \.php$ { try_files $uri =404; # root /usr/share/nginx/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ /\.ht { deny all; } } [root at hp sayres]# ls -l /home/sayres/ total 96 drwxrwxrwx. 1 root root 4096 Aug 3 01:14 D drwxr-xr-x. 2 sayres sayres 4096 Jul 2 11:42 Desktop drwxr-xr-x. 2 sayres sayres 4096 Jul 21 13:28 Documents drwxr-xr-x. 3 sayres sayres 4096 Jul 3 14:16 Downloads drwxrwxr-x. 3 sayres sayres 36864 Jul 5 20:26 localrepo drwxrwxr-x. 
2 sayres sayres 4096 Jul 21 14:03 logs drwxr-xr-x. 2 sayres sayres 4096 Jul 2 11:41 Music -rw-r--r--. 1 root root 9678 Jul 31 11:11 php-fpm.txt drwxr-xr-x. 3 sayres sayres 4096 Jul 21 13:24 Pictures drwxr-xr-x. 2 sayres sayres 4096 Jul 2 11:41 Public drwxrwxr-x. 4 sayres sayres 4096 Jul 21 13:25 simpleconky drwxr-xr-x. 2 sayres sayres 4096 Jul 31 20:00 Templates drwxr-xr-x. 2 sayres sayres 4096 Jul 5 20:57 Videos drwxrwxrwx. 3 sayres sayres 4096 Aug 3 14:44 web After that I restarted the nginx service: systemctl restart nginx.service It seems that everything is correct now. Thanks dude. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229128,229283#msg-229283 From nginx-forum at nginx.us Fri Aug 3 14:11:42 2012 From: nginx-forum at nginx.us (deoren) Date: Fri, 3 Aug 2012 10:11:42 -0400 (EDT) Subject: GeSHi language file for nginx config In-Reply-To: <832518eca6f773907a060668336f4983.NginxMailingListEnglish@forum.nginx.org> References: <1282670633.3516.309.camel@portable-evil> <832518eca6f773907a060668336f4983.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5eaf7207d5d8a5c298087a39d5c43bdb.NginxMailingListEnglish@forum.nginx.org> Hi Cliff, Do you have a recent copy of the nginx.php GeSHi language file? I opened a request at the SourceForge project tracker here ( https://sourceforge.net/tracker/?func=detail&aid=3554024&group_id=114997&atid=670234 ) to have the copy you've already provided added, but if there is a newer file I'd like to provide it to them. Do you happen to have it in a repo somewhere? I looked through the nginx SVN repo and didn't see it there. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,123194,229285#msg-229285 From guzman.braso at gmail.com Fri Aug 3 17:48:01 2012 From: guzman.braso at gmail.com (=?ISO-8859-1?B?R3V6beFuIEJyYXPz?=) Date: Fri, 3 Aug 2012 14:48:01 -0300 Subject: Weird issue between backend and upstream (0 bytes) Message-ID: Hi list!
I've a weird issue I've been banging my head on without being able to understand what's going on. Setup is as follows: - 1 nginx 0.7.67 doing load balancing to two backends. - backend 1 with Nginx 1.2.2 and php-fpm 5.3 - backend 2 with Nginx 0.7.67 and php-fpm 5.3 Some, and only some, requests log in the upstream a 200 status and 0 bytes returned. The same request in the backend log shows a 200 status and ALWAYS the same amount of bytes, which differs between backends. When this happens on a request that went to backend 1: upstream logs 200 0 bytes, backend logs 200 15776 bytes. When this happens on a request that went to backend 2: upstream logs 200 0 bytes, backend logs 200 15670 bytes. I've tried without luck to reproduce the problem, so I decided to start debugging all requests to this site to try to understand why nginx is returning empty responses. This is what I see in the upstream when the error happens: (...) 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: "Accept: */*" 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: "User-Agent: AdsBot-Google (+http://www.google.com/adsbot.html)" 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: "Accept-Encoding: gzip,deflate" 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: 2012/08/03 13:25:32 [debug] 1546#0: *823 http cleanup add: 0000000001A3DF10 2012/08/03 13:25:32 [debug] 1546#0: *823 get rr peer, try: 2 2012/08/03 13:25:32 [debug] 1546#0: *823 get rr peer, current: 0 8 2012/08/03 13:25:32 [debug] 1546#0: *823 socket 149 2012/08/03 13:25:32 [debug] 1546#0: *823 epoll add connection: fd:149 ev:80000005 2012/08/03 13:25:32 [debug] 1546#0: *823 connect to 176.31.64.205:8059, fd:149 #824 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream connect: -2 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer add: 149: 30000:1344011162971 2012/08/03 13:25:32 [debug] 1546#0: *823 http run request: "/s/miracle+noodle?"
2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream check client, write event:1, "/s/miracle+noodle" 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream recv(): -1 (11: Resource temporarily unavailable) 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream request: "/s/miracle+noodle?" 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream send request handler 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream send request 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer buf fl:1 s:358 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer in: 0000000001A3DF48 2012/08/03 13:25:32 [debug] 1546#0: *823 writev: 358 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer out: 0000000000000000 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer del: 149: 1344011162971 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer add: 149: 120000:1344011252972 2012/08/03 13:25:33 [debug] 1546#0: *823 http run request: "/s/miracle+noodle?" 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream check client, write event:0, "/s/miracle+noodle" 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream recv(): 0 (11: Resource temporarily unavailable) 2012/08/03 13:25:33 [debug] 1546#0: *823 http run request: "/s/miracle+noodle?" 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream check client, write event:1, "/s/miracle+noodle" 2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream request: "/s/miracle+noodle?" 
2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream process header 2012/08/03 13:25:45 [debug] 1546#0: *823 malloc: 0000000001B8DC30:16384 2012/08/03 13:25:45 [debug] 1546#0: *823 recv: fd:149 4344 of 16296 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy status 200 "200 OK" 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Server: nginx" 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Date: Fri, 03 Aug 2012 16:24:26 GMT" 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Content-Type: text/html" 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Connection: close" 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "X-Powered-By: PHP/5.3.15-1~dotdeb.0" 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header done 2012/08/03 13:25:45 [debug] 1546#0: *823 HTTP/1.1 200 OKM 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http upstream request: -1 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http proxy request 2012/08/03 13:25:45 [debug] 1546#0: *823 free rr peer 2 0 2012/08/03 13:25:45 [debug] 1546#0: *823 close http upstream connection: 149 2012/08/03 13:25:45 [debug] 1546#0: *823 event timer del: 149: 1344011252972 2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream cache fd: 0 2012/08/03 13:25:45 [debug] 1546#0: *823 http file cache free 2012/08/03 13:25:45 [debug] 1546#0: *823 http finalize request: -1, "/s/miracle+noodle?" 
1 2012/08/03 13:25:45 [debug] 1546#0: *823 http close request 2012/08/03 13:25:45 [debug] 1546#0: *823 http log handler 2012/08/03 13:25:45 [debug] 1546#0: *823 run cleanup: 0000000001A3DC40 2012/08/03 13:25:45 [debug] 1546#0: *823 run cleanup: 0000000001850E58 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B8DC30 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000018501C0, unused: 1 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001A3D890, unused: 757 2012/08/03 13:25:45 [debug] 1546#0: *823 close http connection: 148 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000019B7C20 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B2BEF0 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B3DC00, unused: 8 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000019F55B0, unused: 112 The above debug log on upsteams seems to say that the backend closed the connection after headers, am I right? However, backend debug of another affected request that always log as returned the same amount of bytes (which is pretty weird), shows a different story.. After many of... (...) 
2012/08/03 13:56:17 [debug] 1519#0: *221 pipe recv chain: -2 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 00000000010BF1F0, pos 00000000010BF1F0, size: 80 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe write downstream: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe read upstream: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 00000000010BF1F0, pos 00000000010BF1F0, size: 80 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 (...) 
Suddenly this comes: 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe recv chain: -2 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 00000000010BF1F0, pos 00000000010BF1F0, size: 49232 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe write downstream: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe read upstream: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 00000000010BF1F0, pos 00000000010BF1F0, size: 49232 file: 0, size: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer: 13, old: 1344013157360, new: 1344013157362 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer: 12, old: 1344013037361, new: 1344013037362 2012/08/03 13:56:17 [debug] 1519#0: *221 post event 000000000106B360 2012/08/03 13:56:17 [debug] 1519#0: *221 delete posted event 000000000106B360 2012/08/03 13:56:17 [debug] 1519#0: *221 http run request: "/s/new+balance+1500?"
2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream check client, write event:0, "/s/new+balance+1500" 2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream recv(): 0 (11: Resource temporarily unavailable) 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http upstream request: 499 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http fastcgi request 2012/08/03 13:56:17 [debug] 1519#0: *221 free rr peer 1 0 2012/08/03 13:56:17 [debug] 1519#0: *221 close http upstream connection: 13 2012/08/03 13:56:17 [debug] 1519#0: *221 free: 0000000000FF7CD0, unused: 48 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer del: 13: 1344013157360 2012/08/03 13:56:17 [debug] 1519#0: *221 reusable connection: 0 2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream temp fd: -1 2012/08/03 13:56:17 [debug] 1519#0: *221 http output filter "/s/new+balance+1500?" 2012/08/03 13:56:17 [debug] 1519#0: *221 http copy filter: "/s/new+balance+1500?" 2012/08/03 13:56:17 [debug] 1519#0: *221 image filter 2012/08/03 13:56:17 [debug] 1519#0: *221 xslt filter body 2012/08/03 13:56:17 [debug] 1519#0: *221 http postpone filter "/s/new+balance+1500?" 0000000000FEDA18 2012/08/03 13:56:17 [debug] 1519#0: *221 http copy filter: -1 "/s/new+balance+1500?" 2012/08/03 13:56:17 [debug] 1519#0: *221 http finalize request: -1, "/s/new+balance+1500?" a:1, c:1 So this means the backend felt that the upstream closed the connection before it was allowed to return all data: 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http upstream request: 499 If that's the case, why does it save this to the logfile: [03/Aug/2012:13:56:17 -0300] "GET /s/new+balance+1500 HTTP/1.0" 200 15776 "-" "AdsBot-Google (+http://www.google.com/adsbot.html)" - Cache: - 200 24.535 I think the problem may be in the upstream, as this weird behavior happens with both backends and both use different nginx versions, but I'm really out of answers right now with this issue. Any idea? Hint?
Today, of 8K requests, 2779 returned 0 bytes on the upstream and 15776 bytes on the backend.... Thank you!! Guzmán -- Guzmán Brasó Núñez Senior Perl Developer / Sysadmin Web: http://guzman.braso.info Mobile: +598 98 674020 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Aug 3 23:40:51 2012 From: nginx-forum at nginx.us (justin) Date: Fri, 3 Aug 2012 19:40:51 -0400 (EDT) Subject: Convert lighttpd rewrite rule to nginx In-Reply-To: References: Message-ID: Sorry, bump. Anybody have ideas? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,228909,229289#msg-229289 From nginx-forum at nginx.us Sat Aug 4 14:36:26 2012 From: nginx-forum at nginx.us (endikos) Date: Sat, 4 Aug 2012 10:36:26 -0400 (EDT) Subject: Odd 500 Internal Server Error with a Wordpress file (NGINX & PHP-FPM) Message-ID: <075ec0fc58b870b9c9d3c185740ecfab.NginxMailingListEnglish@forum.nginx.org> Hey folks. I'd appreciate some help with this bit of weirdness: Fresh install of wordpress downloaded yesterday, running on nginx with a fastcgi_pass to php-fpm. The install went flawlessly, and most everything seems to work correctly, except that when I click on the "Posts" button in the admin interface (/wp-admin/edit.php), I get a mostly blank screen that says "Invalid Post Type". Like I said, this is a completely fresh install, so I don't have ANY custom page types or anything, it's perfectly default right now. While I've performed plenty of wordpress installs without issue, I'm a relative neophyte to nginx and php-fpm, and am sure something's gone wrong somehow on the backend. I do notice that php-fpm is logging a 500 status code when I hit that page, but can't seem to get any meaningful error message in the browser telling me exactly what went wrong. Nginx is connecting to php-fpm via fastcgi_pass to a unix domain socket.
I've made sure that php-fpm and nginx are both running as the same user, and that the directory being served from has appropriate permissions. phpinfo() returns flawlessly on a test.php page. Any ideas from those more experienced than me? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229298,229298#msg-229298 From nginx-forum at nginx.us Mon Aug 6 04:02:05 2012 From: nginx-forum at nginx.us (bigmeow) Date: Mon, 6 Aug 2012 00:02:05 -0400 (EDT) Subject: how to tell nginx where is my libevent? Message-ID: Since I built libevent myself and installed it to my home/install dir, I checked auto/configure --help for options related to the libevent dir, but did not find any. I have also tried to tell nginx the path of libevent using ./auto/configure CFLAGS="-I/home/bigmeow/install/include" ./auto/configure: error: invalid option "CFLAGS=-I/home/bigmeow/install/include" also failed. So is there any method to tell nginx where my libevent is? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229318,229318#msg-229318 From kvhdude at gmail.com Mon Aug 6 05:10:39 2012 From: kvhdude at gmail.com (kvhdude) Date: Sun, 5 Aug 2012 22:10:39 -0700 (PDT) Subject: upstream server and ip hash In-Reply-To: <1343035891761-7580949.post@n2.nabble.com> References: <1343035891761-7580949.post@n2.nabble.com> Message-ID: <1344229839977-7581128.post@n2.nabble.com> Folks, Can anyone help me out? To clarify: I am just looking for confirmation that if I add an upstream server, the hash will change for existing users (these are using XMPP sessions so they will suffer disruption of service). Thanks, -kvh kvhdude wrote > > Hi, > > I need a clarification regarding ip_hash directive : > Would adding a new upstream server affect existing users that are bound to > some other servers? > I am trying to dynamically scale up the servers (using scalr) in EC2 > setting. > > I am guessing it won't, but i needed to be sure.
> thanks, > -kvh > -- View this message in context: http://nginx.2469901.n2.nabble.com/upstream-server-and-ip-hash-tp7580949p7581128.html Sent from the nginx mailing list archive at Nabble.com. From mdounin at mdounin.ru Mon Aug 6 07:24:34 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Aug 2012 11:24:34 +0400 Subject: how to tell nginx where is my libevent? In-Reply-To: References: Message-ID: <20120806072434.GK40452@mdounin.ru> Hello! On Mon, Aug 06, 2012 at 12:02:05AM -0400, bigmeow wrote: > since i build libevent myself, and install it to my home/install dir, > i have checked auto/configure --help to find options related to libevent > dir, but not find it, > > i have also tried to tell nginx the path of libevent using > ./auto/configure CFLAGS="-I/home/bigmeow/install/include" > ./auto/configure: error: invalid option > "CFLAGS=-I/home/bigmeow/install/include" > also failed. > > so is there any method to tell nginx where is my libevent? You don't need libevent for nginx. And just for the record, to pass additional options to the compiler and linker you may use the --with-cc-opt and --with-ld-opt configure arguments, see http://nginx.org/en/docs/install.html. Maxim Dounin From nginx-forum at nginx.us Mon Aug 6 08:26:54 2012 From: nginx-forum at nginx.us (pavleg) Date: Mon, 6 Aug 2012 04:26:54 -0400 (EDT) Subject: module variable get_handler() runs before filter? Message-ID: Good day, I am writing a filter module that has to set an environment variable to a certain value if a special condition is met. For example, if there is &flag=true in the arguments, I want PHP to see a $_SERVER[ 'FLAG_DETECTED' ] variable. (I know that you might suggest just checking for $_GET['flag'], but what I describe here is an oversimplification of an actual case) The only way I see of doing it is adding "fastcgi_param FLAG_DETECTED $flag_var;" and then changing the $flag_var in the module.
On detection of this special value (&flag=true) , my filter (inserted at ngx_http_top_header_filter) creates a context, adds it to the request ( with ngx_http_set_ctx(r,my_module)) , but when the variable's get_handle() runs ngx_http_get_module_ctx( r, my_module ), I receive NULL instead of the context. My prints to the log show that the get_handle() runs prior to filter, and the CTX is not set Is there any way to do what I want? Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229322,229322#msg-229322 From ru at nginx.com Mon Aug 6 08:38:35 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 6 Aug 2012 12:38:35 +0400 Subject: upstream server and ip hash In-Reply-To: <1344229839977-7581128.post@n2.nabble.com> References: <1343035891761-7580949.post@n2.nabble.com> <1344229839977-7581128.post@n2.nabble.com> Message-ID: <20120806083835.GC71130@lo0.su> On Sun, Aug 05, 2012 at 10:10:39PM -0700, kvhdude wrote: > Folks, > Can anyone help me out? > To clarify: I am just looking for confirmation as i add an upstream > server, will hash change for existing users (these are using XMPP sessions > so they will suffer disruption of service). Yes, it will. The hashing implemented by ip_hash is not consistent. There's limited support for preserving hash values when a server in an upstream goes down, in the form of the "down" parameter, please see http://nginx.org/r/ip_hash for details. Adding consistent hashing is planned in future versions of nginx. > kvhdude wrote > > > > Hi, > > > > I need a clarification regarding ip_hash directive : > > Would adding a new upstream server affect existing users that are bound to > > some other servers? > > I am trying to dynamically scale up the servers (using scalr) in EC2 > > setting. > > > > I am guessing it won't, but i needed to be sure. 
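[Editor's note] The remapping Ruslan describes — ip_hash-style bucketing is not consistent, so adding an upstream server moves most existing clients to a different backend — can be illustrated with a small sketch. This is a hypothetical model for illustration only, not nginx's actual ip_hash code; the server names "a"–"d", the client addresses, and the replica count are all invented:

```python
import hashlib
from bisect import bisect


def h(key: str) -> int:
    # Stable hash so results are reproducible across runs.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


def bucket_assign(key, servers):
    # Non-consistent scheme: the chosen server depends on len(servers),
    # so changing the server count remaps most keys.
    return servers[h(key) % len(servers)]


class HashRing:
    # Consistent hashing: each server owns many points on a ring; a key
    # maps to the first server point at or after its own hash (with wrap).
    def __init__(self, servers, replicas=100):
        self.ring = sorted((h(f"{s}#{i}"), s)
                           for s in servers for i in range(replicas))
        self.points = [p for p, _ in self.ring]

    def assign(self, key):
        return self.ring[bisect(self.points, h(key)) % len(self.ring)][1]


clients = [f"10.0.0.{i}" for i in range(1, 201)]
old, new = ["a", "b", "c"], ["a", "b", "c", "d"]

moved_bucket = sum(bucket_assign(c, old) != bucket_assign(c, new)
                   for c in clients)
ring_old, ring_new = HashRing(old), HashRing(new)
moved_ring = sum(ring_old.assign(c) != ring_new.assign(c) for c in clients)

# With modulo bucketing, roughly 3/4 of the clients land on a different
# server after "d" is added; with a consistent ring, only roughly the
# share of keys that "d" takes over (about 1/4) move.
print(f"bucket scheme moved {moved_bucket}/200, ring moved {moved_ring}/200")
```

For session-bound protocols like the XMPP case above, that difference is exactly what determines how many users get disconnected when a server is added.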
From nginx-forum at nginx.us Mon Aug 6 11:54:25 2012 From: nginx-forum at nginx.us (yashgt) Date: Mon, 6 Aug 2012 07:54:25 -0400 (EDT) Subject: Single server with multiple hierarchies In-Reply-To: <6f3cd8cbed3f5644de9ba76e372d10b3.NginxMailingListEnglish@forum.nginx.org> References: <6f3cd8cbed3f5644de9ba76e372d10b3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <202d3f03f810a2a5a0c0f2383dab2175.NginxMailingListEnglish@forum.nginx.org> Bang on target. Nested locations worked perfectly. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229189,229326#msg-229326 From kvhdude at gmail.com Mon Aug 6 12:43:49 2012 From: kvhdude at gmail.com (kvhdude) Date: Mon, 6 Aug 2012 05:43:49 -0700 (PDT) Subject: upstream server and ip hash In-Reply-To: <20120806083835.GC71130@lo0.su> References: <1343035891761-7580949.post@n2.nabble.com> <1344229839977-7581128.post@n2.nabble.com> <20120806083835.GC71130@lo0.su> Message-ID: Ruslan, Thanks for your response. -kvh On Mon, Aug 6, 2012 at 2:08 PM, Ruslan Ermilov [via nginx] < ml-node+s2469901n7581131h55 at n2.nabble.com> wrote: > On Sun, Aug 05, 2012 at 10:10:39PM -0700, kvhdude wrote: > > Folks, > > Can anyone help me out? > > To clarify: I am just looking for confirmation as i add an upstream > > server, will hash change for existing users (these are using XMPP > sessions > > so they will suffer disruption of service). > > Yes, it will. The hashing implemented by ip_hash is not consistent. > > There's limited support for preserving hash values when a server in > an upstream goes down, in the form of the "down" parameter, please > see http://nginx.org/r/ip_hash for details. > > Adding consistent hashing is planned in future versions of nginx. > > > kvhdude wrote > > > > > > Hi, > > > > > > I need a clarification regarding ip_hash directive : > > > Would adding a new upstream server affect existing users that are > bound to > > > some other servers? 
> > > I am trying to dynamically scale up the servers (using scalr) in EC2 > > > setting. > > > > > > I am guessing it won't, but i needed to be sure. > > _______________________________________________ > nginx mailing list > [hidden email] > http://mailman.nginx.org/mailman/listinfo/nginx > > > ------------------------------ > If you reply to this email, your message will be added to the discussion > below: > > http://nginx.2469901.n2.nabble.com/upstream-server-and-ip-hash-tp7580949p7581131.html > To unsubscribe from upstream server and ip hash, click here > . > NAML > -- View this message in context: http://nginx.2469901.n2.nabble.com/upstream-server-and-ip-hash-tp7580949p7581133.html Sent from the nginx mailing list archive at Nabble.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon Aug 6 18:49:55 2012 From: agentzh at gmail.com (agentzh) Date: Mon, 6 Aug 2012 11:49:55 -0700 Subject: [ANN] ngx_openresty devel version 1.2.1.11 released In-Reply-To: References: Message-ID: Hello, guys! After one week's active development, I'm pleased to announce the new development version of ngx_openresty, 1.2.1.11: http://openresty.org/#Download Below is the complete change log for this release, as compared to the last release, 1.2.1.9: * bundled LuaRestyDNSLibrary 0.04 and enabled it by default: https://github.com/agentzh/lua-resty-dns it is a nonblocking DNS (Domain Name System) resolver library based on LuaNginxModule's cosocket API. * upgraded LuaNginxModule to 0.5.12. * bugfix: the UDP cosocket object could no longer be used after an read or write error happened. * bugfix: ngx.exit(status) always resulted in "200 OK" response status when status > 200 and status < 300. thanks Nginx User for reporting this issue. * upgraded HeadersMoreNginxModule to 0.18. * bugfix: fixed a "set-but-not-read" warning from the "clang" static code analyzer. * fixed compatibility with nginx 0.7.65. thanks Banping for reporting this. 
* upgraded DrizzleNginxModule to 0.1.2. * minor code cleanup in the built-in connection pool. Special thanks go to all our contributors and users for helping make this happen :) As a sidenote, we just have a dedicated English mailing list for OpenResty, named "openresty-en": https://groups.google.com/group/openresty-en The old "openresty" mailing list will mainly for Chinese speakers. OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ Enjoy! -agentzh From nginx-forum at nginx.us Mon Aug 6 21:22:44 2012 From: nginx-forum at nginx.us (yaronycity) Date: Mon, 6 Aug 2012 17:22:44 -0400 (EDT) Subject: Redirect Mobile Users to m.domain.com Message-ID: <3f5b920fcd3457d349694682ad35a6a5.NginxMailingListEnglish@forum.nginx.org> Hi all, I have a wordpress website http://domain.com. I have created a mobile version of it and put in "m" folder of that domain. I want all my mobile users to be redirected to http://m.domain.com where http://m.domain.com will be using a mobile version of the website located in "m" folder. By using the below rewrite rules, it does redirect to http://m.domain.com but it does not display any website from the "m" folder. ( Please help: ----------------------- server { server_name domain.com www.domain.com; access_log /home/nginx/domains/domain.com/log/access.log combined buffer=32k; error_log /home/nginx/domains/domain.com/log/error.log; root /home/nginx/domains/domain.com/public; try_files $uri $uri/ /index.php; location / { index index.html index.htm index.php; try_files $uri $uri/ /index.php?$args; } location ^~ /m/index.html { rewrite ^/m/(.*) http://m.domain.com/$1 permanent; } # Directives to send expires headers and turn off 404 error logging. 
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 24h; log_not_found off; } include /usr/local/nginx/conf/staticfiles.conf; include /usr/local/nginx/conf/php.conf; include /usr/local/nginx/conf/drop.conf; #include /usr/local/nginx/conf/errorpage.conf; } -------------------- Please help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229364,229364#msg-229364 From nginx-forum at nginx.us Mon Aug 6 21:31:50 2012 From: nginx-forum at nginx.us (TECK) Date: Mon, 6 Aug 2012 17:31:50 -0400 (EDT) Subject: Force "internal" directive to display a custom 404? Message-ID: <9f7d10d28e31c4c540dd7af709206bcb.NginxMailingListEnglish@forum.nginx.org> Example: error_page 404 /404.html; location = /404.html { internal; } location /alpha { internal; } The proper behavior would be to display the custom 404.html in every case, which does not. I'm not sure this could be marked as new feature, or is feasible with some configuration I don't know about it? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229365,229365#msg-229365 From edho at myconan.net Mon Aug 6 21:33:58 2012 From: edho at myconan.net (Edho Arief) Date: Tue, 7 Aug 2012 04:33:58 +0700 Subject: Redirect Mobile Users to m.domain.com In-Reply-To: <3f5b920fcd3457d349694682ad35a6a5.NginxMailingListEnglish@forum.nginx.org> References: <3f5b920fcd3457d349694682ad35a6a5.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Aug 7, 2012 at 4:22 AM, yaronycity wrote: > Hi all, > > I have a wordpress website http://domain.com. I have created a mobile > version of it and put in "m" folder of that domain. I want all my mobile > users to be redirected to http://m.domain.com where http://m.domain.com > will be using a mobile version of the website located in "m" folder. By > using the below rewrite rules, it does redirect to http://m.domain.com > but it does not display any website from the "m" folder. 
( Please help: > > ----------------------- > server { > server_name domain.com www.domain.com; > access_log /home/nginx/domains/domain.com/log/access.log combined > buffer=32k; > error_log /home/nginx/domains/domain.com/log/error.log; > root /home/nginx/domains/domain.com/public; > > try_files $uri $uri/ /index.php; > > location / { > index index.html index.htm index.php; > try_files $uri $uri/ /index.php?$args; > } > > location ^~ /m/index.html { > rewrite ^/m/(.*) http://m.domain.com/$1 permanent; > } > Should be just ^~ /m/ { ... } Also maybe add the server block which handles the m. domain... server { server_name m.domain.com; root /some/where/m; ... } From nginx-forum at nginx.us Mon Aug 6 21:45:43 2012 From: nginx-forum at nginx.us (yaronycity) Date: Mon, 6 Aug 2012 17:45:43 -0400 (EDT) Subject: Redirect Mobile Users to m.domain.com In-Reply-To: References: Message-ID: <68c5e6c226aa4880ae14e92490572ef6.NginxMailingListEnglish@forum.nginx.org> I have changed it this, and it still does not work:( server { server_name domain.com www.domain.com; root /home/nginx/domains/domain.com/public; server_name m.domain.com www.m.domain.com; root /home/nginx/domains/domain.com/public/m; try_files $uri $uri/ /index.php; location / { index index.html index.htm index.php; try_files $uri $uri/ /index.php?$args; } location ^~ /m/ { rewrite ^/m/(.*) http://m.domain.com/$1 permanent; } # Directives to send expires headers and turn off 404 error logging. 
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 24h; log_not_found off; } include /usr/local/nginx/conf/staticfiles.conf; include /usr/local/nginx/conf/php.conf; include /usr/local/nginx/conf/drop.conf; #include /usr/local/nginx/conf/errorpage.conf; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229364,229367#msg-229367 From edho at myconan.net Mon Aug 6 21:48:01 2012 From: edho at myconan.net (Edho Arief) Date: Tue, 7 Aug 2012 04:48:01 +0700 Subject: Redirect Mobile Users to m.domain.com In-Reply-To: <68c5e6c226aa4880ae14e92490572ef6.NginxMailingListEnglish@forum.nginx.org> References: <68c5e6c226aa4880ae14e92490572ef6.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Aug 7, 2012 at 4:45 AM, yaronycity wrote: > I have changed it this, and it still does not work:( > > server { > server_name domain.com www.domain.com; > root /home/nginx/domains/domain.com/public; > > server_name m.domain.com www.m.domain.com; > root /home/nginx/domains/domain.com/public/m; > Add as separate server { ... }, not merged into one. From ramesh.nethi at gmail.com Tue Aug 7 12:04:37 2012 From: ramesh.nethi at gmail.com (Ramesh Nethi) Date: Tue, 7 Aug 2012 17:34:37 +0530 Subject: nginx and multi-threading support Message-ID: I am looking at using nginx for an embedded environment. I understand that nginx uses non-blocking event-driven IO and has multiple worker processes to scale to multiple cores. Looking at the code, I see there is an option to use multiple threads as well. One of the model I am looking at is to have a single process but multiple worker threads. Is this functionality mature/complete ? I see a similar question asked about 9 months back on this forum, but am wondering if something is changed since then ? http://forum.nginx.org/read.php?2,217370,217455#msg-217455 thanks Ramesh -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Tue Aug 7 12:53:36 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Aug 2012 16:53:36 +0400 Subject: nginx-1.2.3 Message-ID: <20120807125336.GY40452@mdounin.ru> Changes with nginx 1.2.3 07 Aug 2012 *) Feature: the Clang compiler support. *) Bugfix: extra listening sockets might be created. Thanks to Roman Odaisky. *) Bugfix: nginx/Windows might hog CPU if a worker process failed to start. Thanks to Ricardo Villalobos Guevara. *) Bugfix: the "proxy_pass_header", "fastcgi_pass_header", "scgi_pass_header", "uwsgi_pass_header", "proxy_hide_header", "fastcgi_hide_header", "scgi_hide_header", and "uwsgi_hide_header" directives might be inherited incorrectly. *) Bugfix: trailing dot in a source value was not ignored if the "map" directive was used with the "hostnames" parameter. *) Bugfix: incorrect location might be used to process a request if a URI was changed via a "rewrite" directive before an internal redirect to a named location. Maxim Dounin From mdounin at mdounin.ru Tue Aug 7 13:40:25 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Aug 2012 17:40:25 +0400 Subject: Force "internal" directive to display a custom 404? In-Reply-To: <9f7d10d28e31c4c540dd7af709206bcb.NginxMailingListEnglish@forum.nginx.org> References: <9f7d10d28e31c4c540dd7af709206bcb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120807134025.GC40452@mdounin.ru> Hello! On Mon, Aug 06, 2012 at 05:31:50PM -0400, TECK wrote: > Example: > > error_page 404 /404.html; > > location = /404.html { > internal; > } > > location /alpha { > internal; > } > > The proper behavior would be to display the custom 404.html in every > case, which does not. > I'm not sure this could be marked as new feature, or is feasible with > some configuration I don't know about it? Do you actually have 404.html file under document root? The above config properly returns custom 404 for "/404.html", "/alpha", and "/alpha/foo" requests here, just tested. 
Maxim Dounin From nginx-forum at nginx.us Tue Aug 7 13:51:13 2012 From: nginx-forum at nginx.us (TECK) Date: Tue, 7 Aug 2012 09:51:13 -0400 (EDT) Subject: Force "internal" directive to display a custom 404? In-Reply-To: <20120807134025.GC40452@mdounin.ru> References: <20120807134025.GC40452@mdounin.ru> Message-ID: Hi Maxim, The problem was located somewhere between my seat and the computer screen. :) I forgot to add the error_page 404 /404.html; condition into the ssl server... doh. Regards, Floren Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229365,229391#msg-229391 From nginx-forum at nginx.us Tue Aug 7 14:38:59 2012 From: nginx-forum at nginx.us (double) Date: Tue, 7 Aug 2012 10:38:59 -0400 (EDT) Subject: NGINX crash Message-ID: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> Hello, "nginx" crashes in "ngx_http_limit_req_handler". This does not happen if "limit_req" is not nested. We couldn't reproduce this issue in a testing environment. Thanks a lot _____Request_____ GET / HTTP/1.1 Host: hostname.com X-REAL-IP: 123.123.12.123 _____Config_____ (simplified) http { # ipaddress + ipaddress/16 + ipaddress/24 map $http_x_real_ip $ipaddress { default $remote_addr; "~^\d+\.\d+\.\d+\.\d+$" $http_x_real_ip; } map $ipaddress $ipaddress_net { default ""; "~^(?P<ip>\d+\.\d+)\.\d+\.\d+$" $ip; } map $ipaddress $ipaddress_host { default ""; "~^(?P<ip>\d+\.\d+\.\d+)\.\d+$" $ip; } # limit requests limit_req_log_level warn; limit_req_zone $ipaddress zone=staticzone_ip:32m rate=100r/s; limit_req zone=staticzone_ip burst=50; limit_req_zone $ipaddress zone=dynamiczone_ip:32m rate=2r/s; limit_req_zone $ipaddress zone=dynamiczone_ddos:32m rate=6r/m; limit_req_zone $ipaddress_host zone=dynamiczone_host:32m rate=10r/s; limit_req_zone $ipaddress_net zone=dynamiczone_net:32m rate=20r/s; server { location / { limit_req zone=dynamiczone_ip burst=10; limit_req zone=dynamiczone_ddos burst=1000 nodelay; limit_req zone=dynamiczone_host burst=20; limit_req
zone=dynamiczone_net burst=40; fastcgi_pass unix:/var/run/fastcgi.sock; ... } } } _____Backtrace_____ #0 0x000000000040b25d in ngx_vslprintf (buf=0x7fff4f9be1df "", last=0x7fff4f9be970 "0", fmt=0x48231b "v\"", args=0x7fff4f9be970) at src/core/ngx_string.c:245 p = 0x7fff4f9be192 "0: *1901681 the value of the \"ipaddress\" variable is more than 65535 bytes: \"" zero = 32 ' ' d = 0 f = 6.9058890903168798e-310 len = 1937 slen = 18446744073709551615 i64 = 0 ui64 = 0 frac = 0 ms = 140734529003968 width = 0 sign = 1 hex = 0 max_width = 0 frac_width = 0 scale = 18446744070750200624 n = 0 v = 0x1b958148 vv = 0x1b99a040 #1 0x000000000040535c in ngx_log_error_core (level=4, log=0x1b9f0e70, err=0, fmt=0x4822e0 "the value of the \"%V\" variable is more than 65535 bytes: \"%v\"") at src/core/ngx_log.c:120 args = {{gp_offset = 48, fp_offset = 48, overflow_arg_area = 0x7fff4f9bea70, reg_save_area = 0x7fff4f9be9a0}} p = 0x7fff4f9be19e "the value of the \"ipaddress\" variable is more than 65535 bytes: \"" last = 0x7fff4f9be970 "0" msg = 0x7fff4f9be19e "the value of the \"ipaddress\" variable is more than 65535 bytes: \"" errstr = "2012/08/07 16:17:42 [error] 30667#0: *1901681 the value of the \"ipaddress\" variable is more than 65535 bytes: \"\000\000\000\000\000\000\000\000\000>\001\000\006\000\000\000\000`\233\231\033\000\000\000\000\303)\000\000\000\000\000\000\341\346\233O\377\177\000\000he\233\033\000\000\000\000V\\\000\000\000\000\000\000\000\200\001\000\000\000\000\000pd\233\033\000\000\000\000p\312s\033\371*\000\000\000\000\000\000\000\000\000\000P\363\233O\377\177\000\000\020c\233\033\000\000\000\000"... 
#2 0x000000000046d5d2 in ngx_http_limit_req_handler (r=0x1b99faa0) at src/http/modules/ngx_http_limit_req_module.c:192 len = 194343136 hash = 4294967295 rc = -5 n = 0 excess = 0 delay = 463076000 vv = 0x1b99a040 ctx = 0x1b958128 lrcf = 0x1b9594a0 limit = 0x1b959f00 limits = 0x1b959f00 #3 0x0000000000437b08 in ngx_http_core_generic_phase (r=0x1b99faa0, ph=0x1b968e58) at src/http/ngx_http_core_module.c:899 rc = -5 #4 0x0000000000437ab6 in ngx_http_core_run_phases (r=0x1b99faa0) at src/http/ngx_http_core_module.c:877 rc = -2 ph = 0x1b968df8 cmcf = 0x1b932278 #5 0x0000000000437a32 in ngx_http_handler (r=0x1b99faa0) at src/http/ngx_http_core_module.c:860 cmcf = 0x2af91c2ba0a8 #6 0x000000000044459c in ngx_http_process_request (r=0x1b99faa0) at src/http/ngx_http_request.c:1688 c = 0x2af91b73d258 #7 0x0000000000443259 in ngx_http_process_request_headers (rev=0x2af91c2ba0a8) at src/http/ngx_http_request.c:1132 p = 0x10
len = 463051616 n = 43 rc = 0 rv = 140734529006688 h = 0x1b99a1c0 c = 0x2af91b73d258 hh = 0x0 r = 0x1b99faa0 cscf = 0x1b963668 cmcf = 0x1b932278 #8 0x0000000000442afe in ngx_http_process_request_line (rev=0x2af91c2ba0a8) at src/http/ngx_http_request.c:932 host = 0x1b999b60 "\211\245\231\033" n = 58 rc = 0 rv = 368 c = 0x2af91b73d258 r = 0x1b99faa0 cscf = 0x1b99a020 #9 0x00000000004423b5 in ngx_http_init_request (rev=0x2af91c2ba0a8) at src/http/ngx_http_request.c:519 tp = 0x6975a0 i = 463074232 c = 0x2af91b73d258 r = 0x1b99faa0 sin = 0x7fff4f9bed70 port = 0x1b969230 addr = 0x1b969240 ctx = 0x1b9f0eb0 addr_conf = 0x1b969248 hc = 0x1b9f0ec8 cscf = 0x1b963668 clcf = 0x1b963700 cmcf = 0x1b932278 #10 0x00000000004328b2 in ngx_epoll_process_events (cycle=0x1b9316c0, timer=500, flags=1) at src/event/modules/ngx_epoll_module.c:679 events = 1 revents = 1 instance = 1 i = 0 level = 1335619040 err = 0 rev = 0x2af91c2ba0a8 wev = 0x428d62 queue = 0x0 c = 0x2af91b73d258 #11 0x0000000000425dc0 in ngx_process_events_and_timers (cycle=0x1b9316c0) at src/event/ngx_event.c:247 flags = 1 timer = 500 delta = 1344349062687 #12 0x0000000000430d04 in ngx_worker_process_cycle (cycle=0x1b9316c0, data=0x0) at src/os/unix/ngx_process_cycle.c:808 i = 140734529008464 c = 0x0 #13 0x000000000042dda1 in ngx_spawn_process (cycle=0x1b9316c0, proc=0x430bb0 , data=0x0, name=0x47d6c8 "worker process", respawn=-4) at src/os/unix/ngx_process.c:198 on = 1 pid = 0 s = 6 #14 0x000000000042ffd7 in ngx_start_worker_processes (cycle=0x1b9316c0, n=4, type=-4) at src/os/unix/ngx_process_cycle.c:365 i = 2 ch = {command = 1, pid = 30666, slot = 5, fd = 15} #15 0x000000000042fbf7 in ngx_master_process_cycle (cycle=0x1b9316c0) at src/os/unix/ngx_process_cycle.c:250 title = 0x1b94e44b "" p = 0x1b94e48d "" size = 67 i = 3 n = 462608624 sigio = 0 set = {__val = {0 }} itv = {it_interval = {tv_sec = 0, tv_usec = 0}, it_value = {tv_sec = 8, tv_usec = 18}} live = 1 delay = 0 ls = 0x4 ccf = 0x1b931ce8 #16 0x0000000000403280 
in main (argc=3, argv=0x7fff4f9bf358) at src/core/nginx.c:410 i = 30 log = 0x697300 cycle = 0x1b92d6b0 init_cycle = {conf_ctx = 0x0, pool = 0x1b92c930, log = 0x697300, new_log = {log_level = 0, file = 0x0, connection = 0, handler = 0, data = 0x0, action = 0x0}, files = 0x0, free_connections = 0x0, free_connection_n = 0, reusable_connections_queue = {prev = 0x0, next = 0x0}, listening = {elts = 0x1b92ce80, nelts = 1, size = 200, nalloc = 10, pool = 0x1b92c930}, pathes = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, open_files = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file = {len = 20, data = 0x7fff4f9c0e91 ""}, conf_param = {len = 0, data = 0x0}, conf_prefix = { len = 10, data = 0x7fff4f9c0e91 ""}, prefix = {len = 17, data = 0x47a67e "/usr/local/nginx/"}, lock_file = {len = 0, data = 0x0}, hostname = {len = 0, data = 0x0}} ccf = 0x1b92ded8 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229394,229394#msg-229394 From nginx-forum at nginx.us Tue Aug 7 15:46:51 2012 From: nginx-forum at nginx.us (Ensiferous) Date: Tue, 7 Aug 2012 11:46:51 -0400 (EDT) Subject: nginx and multi-threading support In-Reply-To: References: Message-ID: Multi threading support was removed from nginx a long time ago, there might be a few traces left in the source code but it's not functional. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229381,229396#msg-229396 From kworthington at gmail.com Tue Aug 7 15:54:00 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 7 Aug 2012 11:54:00 -0400 Subject: nginx-1.2.3 In-Reply-To: <20120807125336.GY40452@mdounin.ru> References: <20120807125336.GY40452@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.2.3 For Windows http://goo.gl/BifNJ (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream (http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* /gmail\ [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Aug 7, 2012 at 8:53 AM, Maxim Dounin wrote: > > Changes with nginx 1.2.3 07 Aug 2012 > > *) Feature: the Clang compiler support. > > *) Bugfix: extra listening sockets might be created. > Thanks to Roman Odaisky. > > *) Bugfix: nginx/Windows might hog CPU if a worker process failed to > start. > Thanks to Ricardo Villalobos Guevara. > > *) Bugfix: the "proxy_pass_header", "fastcgi_pass_header", > "scgi_pass_header", "uwsgi_pass_header", "proxy_hide_header", > "fastcgi_hide_header", "scgi_hide_header", and "uwsgi_hide_header" > directives might be inherited incorrectly. > > *) Bugfix: trailing dot in a source value was not ignored if the "map" > directive was used with the "hostnames" parameter. > > *) Bugfix: incorrect location might be used to process a request if a > URI was changed via a "rewrite" directive before an internal redirect > to a named location. 
> > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Aug 7 16:55:26 2012 From: nginx-forum at nginx.us (helliax) Date: Tue, 7 Aug 2012 12:55:26 -0400 (EDT) Subject: 404 fallback only hitting 2/3 upstream sets Message-ID: I currently have a setup where I have 3 upstream sets (directives?), and it will fallback in a chain. So if it gets a 404 from setA, it'll go to setB. If setB also has a 404, then setC. The config is: upstream setA { server ipA:port } upstream setB { server ipB:port } upstream setC { server ipC:port } server { listen 80; server_name localhost; #SetAHandler location / { access_log logs/access.log main; error_page 404 502 503 504 = @SetBHandler; proxy_intercept_errors on; proxy_pass http://setA; proxy_set_header Host $host; proxy_connect_timeout 60s; proxy_next_upstream error timeout http_500 http_502 http_503 http_504 http_404; # proxy_cache nginx_cache; # proxy_cache_valid 200 304 12h; # expires 1d; # break; } location @SetBHandler { access_log logs/access.log main; proxy_intercept_errors on; error_page 404 502 503 504 = @SetCHandler; proxy_pass http://setB; proxy_set_header Host $host; proxy_connect_timeout 60s; proxy_next_upstream error timeout http_500 http_502 http_503 http_504 http_404; # proxy_cache nginx_cache; # proxy_cache_valid 200 304 12h; # expires 1d; # break; } location @SetCHandler { access_log logs/access.log main; proxy_pass http://setC; proxy_set_header Host $host; proxy_connect_timeout 60s; proxy_next_upstream error timeout http_500 http_502 http_503 http_504 http_404; #Last proxy, return whatever it gives proxy_intercept_errors off; # proxy_cache nginx_cache; # proxy_cache_valid 200 304 12h; # expires 1d; # break; } The problem is, it only seems to check setA and setB, and bypasses setC completely. I tested it by putting a file on the third IP in setC that wasn't in setA or setB. 
Whenever I hit Nginx, it always returns 404, when the expected behavior is for it to return the file at ipC. I have it logging the $upstream_addr, and every time it 404s, the logs always show: : Is there some directive from the HttpProxyModule that limits it to 2 by default? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229402,229402#msg-229402 From nginx-forum at nginx.us Tue Aug 7 17:00:07 2012 From: nginx-forum at nginx.us (Yahook) Date: Tue, 7 Aug 2012 13:00:07 -0400 (EDT) Subject: lua handler aborted: memory allocation error: not enough memory Message-ID: <6e1583dc241fabd9614a47f91027430a.NginxMailingListEnglish@forum.nginx.org> Hello, Please help me to solve a problem. I use Lua to get a value from memcache and then jump into the next nginx location, and I get these errors from time to time (often enough): lua handler aborted: memory allocation error: not enough memory (lua-atpanic) Lua VM crashed, reason: not enough memory My Lua code is simple enough: content_by_lua ' local memcached = require "resty.memcached"; local memc = memcached:new(); memc:set_timeout(500); local connection, err = memc:connect("..."); if not connection then return ngx.exec("..."); else local value, flags, err = memc:get(...); local connection, err = memc:close() if value then return ngx.exec("..."); else return ngx.exec("..."); end end '; My server memory stats: Mem: 70G Active, 7296M Inact, 11G Wired, 3736M Cache, 9828M Buf, 806M Free Swap: 32G Total, 545M Used, 31G Free, 1% Inuse Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229403,229403#msg-229403 From nginx-forum at nginx.us Tue Aug 7 17:15:45 2012 From: nginx-forum at nginx.us (helliax) Date: Tue, 7 Aug 2012 13:15:45 -0400 (EDT) Subject: 404 fallback only hitting 2/3 upstream sets In-Reply-To: References: Message-ID: I think I got it.
I had to add the following directive under the server { } heading: recursive_error_pages on; I'm checking the access logs, and I'm seeing the third one now (ipA : ipB : ipC), and it's also returning the file correctly. I hope this is the right way. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229402,229405#msg-229405 From nginx-forum at nginx.us Tue Aug 7 17:30:55 2012 From: nginx-forum at nginx.us (edtaa) Date: Tue, 7 Aug 2012 13:30:55 -0400 (EDT) Subject: deny directory abc denies directory abcd etc Message-ID: Hi I'm new to this forum so I hope this hasn't come up before I didn't find anything by searching. I'm using the following to block access to multiple directories: location ~ /(abc|def|ghi) { deny all; access_log off; log_not_found off; return 404;} The problem I find is that if there is a file or directory beginning with the same three characters, it too is blocked. So, for example, a directory or file named abcd would also be blocked. Is there a solution to this or is my code incorrect? Thanks Terry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229406,229406#msg-229406 From cliff at develix.com Tue Aug 7 19:08:39 2012 From: cliff at develix.com (Cliff Wells) Date: Tue, 07 Aug 2012 12:08:39 -0700 Subject: deny directory abc denies directory abcd etc In-Reply-To: References: Message-ID: <1344366519.2440.1.camel@portable-evil> On Tue, 2012-08-07 at 13:30 -0400, edtaa wrote: > Hi > I'm new to this forum so I hope this hasn't come up before I didn't find > anything by searching. > > I'm using the following to block access to multiple directories: > > location ~ /(abc|def|ghi) { deny all; access_log off; log_not_found > off; return 404;} > Regex locations take precedence over literal locations. 
Change your regex to this: location ~ /(abc|def|ghi)/?$ Cliff From nginx-forum at nginx.us Tue Aug 7 20:23:25 2012 From: nginx-forum at nginx.us (TiagoCruz) Date: Tue, 7 Aug 2012 16:23:25 -0400 (EDT) Subject: Dynamic Cache in Nginx Message-ID: <339cee6b9a4a9132e59da7c2cf68de57.NginxMailingListEnglish@forum.nginx.org> Hello guys, I'm trying to set some Expires headers on a lot of JavaScript I have here, but the files are not static on the filesystem; they are dynamically generated by a Java application. If I do this: =================== location /bla { root /tmp; expires 30m; } =================== and put some .js files inside /tmp/bla, everything works fine. But when I try to do this: =================== location /script { expires 30m; proxy_cache cache; proxy_pass http://localhost:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; } =================== it does not work. The reply from the server is always 200, never 304 as it should be. What can I do to fix this? I also tried to use nginx 1.3.3 with ETag support, but that also only works with static files :( Thanks!! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229409,229409#msg-229409 From arcanexane at gmail.com Tue Aug 7 20:33:57 2012 From: arcanexane at gmail.com (Joesep Kilsal) Date: Tue, 7 Aug 2012 16:33:57 -0400 Subject: Getting proxy_pass to work with sub_filter Message-ID: Hello, I'm trying to have nginx proxy for apache while utilizing sub_filter. My default.conf block with the modules listed above looks like this: server { location / { proxy_pass http://127.0.0.1:8080; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; sub_filter_once on; sub_filter '' '

filter test12345

'; } } With this, I can remove either the proxy_pass line or the sub_filter line and the other will work. It seems to only stop working when I try to have both active at the same time. I did have a look around regarding this issue and have disabled gzip, but to no avail. Does anyone have any insight towards this issue? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 7 20:51:08 2012 From: nginx-forum at nginx.us (double) Date: Tue, 7 Aug 2012 16:51:08 -0400 (EDT) Subject: NGINX crash In-Reply-To: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> References: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> Message-ID: <058dc01a9363787ddad154fc3dacfe39.NginxMailingListEnglish@forum.nginx.org> It does not crash if I remove the limit_req "dynamiczone_net". Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229394,229412#msg-229412 From nginx-forum at nginx.us Wed Aug 8 00:22:56 2012 From: nginx-forum at nginx.us (borgita) Date: Tue, 7 Aug 2012 20:22:56 -0400 (EDT) Subject: Python in Nginx // windows 2008 Message-ID: I need to run python in Nginx. What do I need to install and configure? I have windows 2008. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229419,229419#msg-229419 From nbubingo at gmail.com Wed Aug 8 04:41:36 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 8 Aug 2012 12:41:36 +0800 Subject: nginx-1.2.3 In-Reply-To: References: <20120807125336.GY40452@mdounin.ru> Message-ID: Hi, A compile warning in my Ubuntu server: clang -c -pipe -O -Wall -Wextra -Wpointer-arith -Wno-unused-parameter -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/os/unix/ngx_setaffinity.o \ src/os/unix/ngx_setaffinity.c src/os/unix/ngx_setaffinity.c:57:13: warning: expression result unused [-Wunused-value] CPU_SET(i, &mask); ^~~~~~~~~~~~~~~~~ In file included from src/os/unix/ngx_setaffinity.c:7: In file included from src/core/ngx_config.h:26: In file included from src/os/unix/ngx_linux_config.h:41: /usr/include/sched.h:72:33: note: instantiated from: # define CPU_SET(cpu, cpusetp) __CPU_SET_S (cpu, sizeof (cpu_set_t), cpusetp) ^ In file included from src/os/unix/ngx_setaffinity.c:7: In file included from src/core/ngx_config.h:26: In file included from src/os/unix/ngx_linux_config.h:41: In file included from /usr/include/sched.h:35: /usr/include/bits/sched.h:145:9: note: instantiated from: : 0; })) ^ 1 diagnostic generated. Clang version: yaoweibin at li398-116:~/work/nginx-1.2.3$ clang -v clang version 1.1 (branches/release_27) Target: i386-pc-linux-gnu Thread model: posix OS version: yaoweibin at li398-116:~/work/nginx-1.2.3$ uname -a Linux li398-116 3.0.18 #1 SMP Mon Jan 30 11:44:09 EST 2012 i686 GNU/Linux 2012/8/7 Kevin Worthington : > Hello Nginx Users, > > Now available: Nginx 1.2.3 For Windows http://goo.gl/BifNJ (32-bit and > 64-bit versions) > > These versions are to support legacy users who are already using > Cygwin based builds of Nginx. Officially supported native Windows > binaries are at nginx.org. 
> > Announcements are also available via my Twitter stream > (http://twitter.com/kworthington), if you prefer to receive updates > that way. > > Thank you, > Kevin > -- > Kevin Worthington > kworthington *@* /gmail\ [dot} {com) > http://kevinworthington.com/ > http://twitter.com/kworthington > > > On Tue, Aug 7, 2012 at 8:53 AM, Maxim Dounin wrote: >> >> Changes with nginx 1.2.3 07 Aug 2012 >> >> *) Feature: the Clang compiler support. >> >> *) Bugfix: extra listening sockets might be created. >> Thanks to Roman Odaisky. >> >> *) Bugfix: nginx/Windows might hog CPU if a worker process failed to >> start. >> Thanks to Ricardo Villalobos Guevara. >> >> *) Bugfix: the "proxy_pass_header", "fastcgi_pass_header", >> "scgi_pass_header", "uwsgi_pass_header", "proxy_hide_header", >> "fastcgi_hide_header", "scgi_hide_header", and "uwsgi_hide_header" >> directives might be inherited incorrectly. >> >> *) Bugfix: trailing dot in a source value was not ignored if the "map" >> directive was used with the "hostnames" parameter. >> >> *) Bugfix: incorrect location might be used to process a request if a >> URI was changed via a "rewrite" directive before an internal redirect >> to a named location. 
>> >> >> Maxim Dounin >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lists at ruby-forum.com Wed Aug 8 08:24:05 2012 From: lists at ruby-forum.com (Jimish Jobanputra) Date: Wed, 08 Aug 2012 10:24:05 +0200 Subject: proxy_pass strips http when there are 2 or more redirect link Message-ID: Hello Guys, I am using proxy_pass to do something like this: proxy_pass 'http://affiliates.somedomain.com/ez/ciaapkznke/&subid1=&lnkurl=http://www.flipkart.com/%3Fpid=9780099579939%26affid%3Dtyroo%26cmpid%3Daffiliate_promo_tyroo' But what happens is it passes the request directly to "http://www.flipkart.com/%3Fpid=9780099579939%26affid%3Dtyroo%26cmpid%3Daffiliate_promo_tyroo" (second http part of the proxy_pass link) It does not hit http://affiliates.somedomain.com/ez/ciaapkznke/ at all. Can anyone help me with the same? Thanks! -- Posted via http://www.ruby-forum.com/. From al-nginx at none.at Wed Aug 8 09:05:46 2012 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 08 Aug 2012 11:05:46 +0200 Subject: add header after fcgi handling Message-ID: Hi, please can anybody help me, I'm stucked and sure that I think in wrong way. 
the .htaccess /web/root/interfaces/.htaccess RewriteEngine on RewriteBase /interfaces RewriteRule ^json/(.*) index.php?x=$1&%{QUERY_STRING} [NC] Translated to (the easy part I think) location ~ /interfaces/json/(\S+) { try_files /$uri /interfaces/index.php?x=$1&$args; } now Header set Access-Control-Allow-Origin * Header set Access-Control-Allow-Headers "X-Requested-With, *" to location /interfaces { include /home/nginx/server/conf/add_header-Access-Control-Allow-Origin.conf; add_header Access-Control-Allow-Headers "X-Requested-With, *"; } but when the php request is handled the 'location /interfaces {..}' does not match any more, of course. How can I add the header to the reply after the php is executed?! Many thanks for help. Best regards Aleks From ne at vbart.ru Wed Aug 8 10:00:47 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 8 Aug 2012 14:00:47 +0400 Subject: NGINX crash In-Reply-To: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> References: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201208081400.47788.ne@vbart.ru> On Tuesday 07 August 2012 18:38:59 double wrote: > Hello, > "nginx" crashes in "ngx_http_limit_req_handler". > This does not happen, if "limit_req" is not nested. > We couldn't reproduce this issue in a testing environment. > Thanks a lot > [...] Could you provide debug log with crash or/and core dump? http://nginx.org/en/docs/debugging_log.html http://wiki.nginx.org/Debugging#Core_dump wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Aug 8 10:14:48 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 14:14:48 +0400 Subject: NGINX crash In-Reply-To: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> References: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120808101448.GG40452@mdounin.ru> Hello! 
On Tue, Aug 07, 2012 at 10:38:59AM -0400, double wrote: > Hello, > "nginx" crashes in "ngx_http_limit_req_handler". > This does not happen, if "limit_req" is not nested. > We couldn't reproduce this issue in a testing environment. > Thanks a lot What does nginx -V show on the affected host? Any 3rd party modules/patches? Which OS, which PCRE library version? Are hardware problems ruled out (i.e. are you able to reproduce the problem on another host)? It would be cool if you could obtain debug log. > _____Request_____ > > GET / HTTP/1.1 > Host: hostname.com > X-REAL-IP: 123.123.12.123 > Is it actual request which corresponds to a backtrace below? > _____Config_____ (simplified) > > http { > # ipaddress + ipaddress/16 + ipaddress/24 > map $http_x_real_ip $ipaddress { > default $remote_addr; > "~^\d+\.\d+\.\d+\.\d+$" $http_x_real_ip; > } It would be better to see full actual config used. [...] > #1 0x000000000040535c in ngx_log_error_core (level=4, log=0x1b9f0e70, > err=0, fmt=0x4822e0 "the value of the \"%V\" variable is more than 65535 > bytes: \"%v\"") at src/core/ngx_log.c:120 > args = {{gp_offset = 48, fp_offset = 48, overflow_arg_area = > 0x7fff4f9bea70, reg_save_area = 0x7fff4f9be9a0}} > p = 0x7fff4f9be19e "the value of the \"ipaddress\" variable is > more than 65535 bytes: \"" > last = 0x7fff4f9be970 "0" > msg = 0x7fff4f9be19e "the value of the \"ipaddress\" variable is > more than 65535 bytes: \"" > errstr = "2012/08/07 16:17:42 [error] 30667#0: *1901681 the > value of the \"ipaddress\" variable is more than 65535 bytes: > \"\000\000\000\000\000\000\000\000\000>\001\000\006\000\000\000\000`\233\231\033\000\000\000\000\303)\000\000\000\000\000\000\341\346\233O\377\177\000\000he\233\033\000\000\000\000V\\\000\000\000\000\000\000\000\200\001\000\000\000\000\000pd\233\033\000\000\000\000p\312s\033\371*\000\000\000\000\000\000\000\000\000\000P\363\233O\377\177\000\000\020c\233\033\000\000\000\000"... 
> > #2 0x000000000046d5d2 in ngx_http_limit_req_handler (r=0x1b99faa0) at > src/http/modules/ngx_http_limit_req_module.c:192 > len = 194343136 > hash = 4294967295 > rc = -5 > n = 0 > excess = 0 > delay = 463076000 > vv = 0x1b99a040 > ctx = 0x1b958128 > lrcf = 0x1b9594a0 > limit = 0x1b959f00 > limits = 0x1b959f00 It looks like the $ipaddress variable was corrupted somehow. Could you please show fr 2 p *limit p *ctx p *vv p *r output from gdb? [...] Maxim Dounin From mdounin at mdounin.ru Wed Aug 8 10:31:07 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 14:31:07 +0400 Subject: 404 fallback only hitting 2/3 upstream sets In-Reply-To: References: Message-ID: <20120808103106.GI40452@mdounin.ru> Hello! On Tue, Aug 07, 2012 at 01:15:45PM -0400, helliax wrote: > I think i got it. I had to add the following directive under the server > { } heading: > > recursive_error_pages on; > > I'm checking the access logs, and I'm seeing the third one now (ipA : > ipB : ipC), and it's also returning the file correctly. I hope this is > the right way. It is, more or less. You should be careful though, as recursive_error_pages set to on might easily result in infinite loop (nginx will break it eventually, but nevertheless). I would recommend limiting recursive_error_pages scope to minimum possible. In your case, it should be enough to set recursive_error_pages to on in "location /". Maxim Dounin From mdounin at mdounin.ru Wed Aug 8 10:37:23 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 14:37:23 +0400 Subject: Dynamic Cache in Nginx In-Reply-To: <339cee6b9a4a9132e59da7c2cf68de57.NginxMailingListEnglish@forum.nginx.org> References: <339cee6b9a4a9132e59da7c2cf68de57.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120808103723.GJ40452@mdounin.ru> Hello! 
On Tue, Aug 07, 2012 at 04:23:25PM -0400, TiagoCruz wrote: > Hello guys, > > I'm trying to set some Expires in a lot of JavaScript that I got here, > but they are not static on filesystem, it is dynamic generated by a Java > application. > > If I do this: > > =================== > location /bla { > root /tmp; > expires 30m; > } > =================== > > And put some .js files inside /tmp/bla everything works fine. > > But when I try to do this: > > =================== > location /script { > expires 30m; > proxy_cache cache; > proxy_pass http://localhost:8080; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $http_host; > } > =================== > > Does not work. The reply from the server is always 200, never 304 as it > should be. > > What can I do to fix this? > > I also tried to use nginx 1.3.3 with Etag support, but also only works > with static files :( For conditinal requests to work (and 304 to be returned) you need Last-Modified (and/or ETag) header in original response. Try adding one in your application. Maxim Dounin From mdounin at mdounin.ru Wed Aug 8 10:40:58 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 14:40:58 +0400 Subject: Getting proxy_pass to work with sub_filter In-Reply-To: References: Message-ID: <20120808104058.GK40452@mdounin.ru> Hello! On Tue, Aug 07, 2012 at 04:33:57PM -0400, Joesep Kilsal wrote: > Hello, I'm trying to have nginx proxy for apache while trying to utilize > sub_filter. > > My default.conf block with the modules listed above looks like this: > > server { > location / { > proxy_pass http://127.0.0.1:8080; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > sub_filter_once on; > sub_filter '' '

filter test12345

'; > } > } > > With this, I can remove either the proxy_pass line or the sub_filter line > and > the other will work. It seems to only stop working when I try to have both > active at the same time. I did have a look around regarding this issue and > have > disabled gzip, but to no avail. Does anyone have any insight towards this > issue? Most often this question is asked when gzip is enabled on a backend host, and hence nginx sess gzipped content which it can't change. Where you tried to disable gzip? Correct thing to do is to disable gzip on a backend. Alternatively, you may do proxy_set_header Accept-Encoding ""; on nginx side. This will inform backend that content encodings aren't acceptable and will effectively disable gzip on a backend as well. Maxim Dounin From mdounin at mdounin.ru Wed Aug 8 10:55:51 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 14:55:51 +0400 Subject: add header after fcgi handling In-Reply-To: References: Message-ID: <20120808105551.GL40452@mdounin.ru> Hello! On Wed, Aug 08, 2012 at 11:05:46AM +0200, Aleksandar Lazic wrote: > Hi, > > please can anybody help me, I'm stucked and sure that I think in > wrong way. > > the .htaccess > > /web/root/interfaces/.htaccess > > RewriteEngine on > RewriteBase /interfaces > RewriteRule ^json/(.*) index.php?x=$1&%{QUERY_STRING} [NC] > > Translated to (the easy part I think) > > location ~ /interfaces/json/(\S+) { > try_files /$uri /interfaces/index.php?x=$1&$args; > } > > now > > Header set Access-Control-Allow-Origin * > Header set Access-Control-Allow-Headers "X-Requested-With, *" > > to > location /interfaces { > include > /home/nginx/server/conf/add_header-Access-Control-Allow-Origin.conf; > add_header Access-Control-Allow-Headers "X-Requested-With, > *"; > } > > but when the php request is handled the 'location /interfaces {..}' > does not match any more, > of course. > > How can I add the header to the reply after the php is executed?! 
You have to add headers in the location where you pass requests to php. And you may need to add another location if you don't want to add headers to all responses from php. Something like this should be fine: location /interfaces/ { add_header ... location ~ \.php$ { fastcgi_pass ... ... } } Maxim Dounin From nginx-forum at nginx.us Wed Aug 8 11:08:11 2012 From: nginx-forum at nginx.us (jimishjoban) Date: Wed, 8 Aug 2012 07:08:11 -0400 (EDT) Subject: Rewrite automatically unencodes question mark. Message-ID: <56fa8664b61c28ca32bfb4ffeb9f7dcc.NginxMailingListEnglish@forum.nginx.org> Hello, I have this rewrite rule. rewrite ^(.+)$ $url? permanent; where $url is "http://example.com/ez/ciaapkznke/&subid1=1&lnkurl=http://www.someotherwebsite.com/books%3Faffid%3Dtyroo%26cmpid%3Daffiliate_promo_tyroo" However it's redirected to where $url is "http://example.com/ez/ciaapkznke/&subid1=1&lnkurl=http://www.someotherwebsite.com/books?affid%3Dtyroo%26cmpid%3Daffiliate_promo_tyroo" %3F got changed to ?, which I don't intend... Any help? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229436,229436#msg-229436 From nginx-forum at nginx.us Wed Aug 8 11:24:39 2012 From: nginx-forum at nginx.us (leeus) Date: Wed, 8 Aug 2012 07:24:39 -0400 (EDT) Subject: Rewrite PHP block older browsers Message-ID: <5e6abbe41af80654c945eacccb812f22.NginxMailingListEnglish@forum.nginx.org> I have the following config file that works for everything but PHP pages. I am trying to get everything as an ancient_browser to add /IE/ in the URL. Essentially changing the root for these clients, although you cannot set this within an if. This is working for anything not .php and loads the content silently from the /IE/ content. If I add the rewrite within the PHP location it doesn't work either. Any ideas?
location / { if ($ancient_browser) { rewrite ^(.*)$ /IE/$1 break; } try_files $uri $uri/ /index.php; } location ~ \.php$ { include /etc/nginx/fastcgi_params; fastcgi_index index.php; if (-f $request_filename) { fastcgi_pass 127.0.0.1:9000; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229437,229437#msg-229437 From al-nginx at none.at Wed Aug 8 11:31:34 2012 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 08 Aug 2012 13:31:34 +0200 Subject: add header after fcgi handling In-Reply-To: <20120808105551.GL40452@mdounin.ru> References: <20120808105551.GL40452@mdounin.ru> Message-ID: Hi Maxim, On 08-08-2012 12:55, Maxim Dounin wrote: > Hello! > > On Wed, Aug 08, 2012 at 11:05:46AM +0200, Aleksandar Lazic wrote: > >> Hi, >> >> please can anybody help me, I'm stucked and sure that I think in >> wrong way. [snipp] >> How can I add the header to the reply after the php is executed?! > > You have to add headers in the location where you pass requests to > php. And you may need to add another location if you don't want to > add headers to all responses from php. Something like this should > be fine: > > location /interfaces/ { > add_header ... > > location ~ \.php$ { > fastcgi_pass ... > ... > } > } Super thank you. I must rethink my setup with nested locations ;-). Best regards Aleks From nginx-forum at nginx.us Wed Aug 8 14:04:15 2012 From: nginx-forum at nginx.us (Yahook) Date: Wed, 8 Aug 2012 10:04:15 -0400 (EDT) Subject: lua handler aborted: memory allocation error: not enough memory In-Reply-To: <6e1583dc241fabd9614a47f91027430a.NginxMailingListEnglish@forum.nginx.org> References: <6e1583dc241fabd9614a47f91027430a.NginxMailingListEnglish@forum.nginx.org> Message-ID: I've found what caused this problem. I declared fastcgi_cache_path incorrectly. I allocated too much memory for keys. 
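[Editor's note: for context on the keys sizing mentioned above — per the nginx documentation, a one-megabyte keys_zone stores roughly 8,000 keys, so zones are usually declared in the tens of megabytes rather than gigabytes. A hedged sketch of a typical declaration; the path, zone name, and sizes below are purely illustrative, not taken from the thread:]

```nginx
# Illustrative values only: a 10m keys zone (~80k cache keys),
# on-disk cache capped at 1g, entries evicted after 60m of no use.
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2
                   keys_zone=FCGIZONE:10m max_size=1g inactive=60m;
```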
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229403,229449#msg-229449 From christian.boenning at gmail.com Wed Aug 8 14:26:19 2012 From: christian.boenning at gmail.com (=?ISO-8859-1?Q?Christian_B=F6nning?=) Date: Wed, 8 Aug 2012 16:26:19 +0200 Subject: multiple replacements with http_sub_module? Message-ID: Hi, is there any way to get around the limitation that only a single `sub_filter` directive is supported per location? In fact I want/need to replace 3 "strings" (/fileadmin/, /typo3conf/ and /typo3temp/) within my root-location and "move" the requests over to absolute server names (like a CDN) by the time the response is running through my frontend LB. Any ideas how to accomplish that? Regards, Christian From nginx-forum at nginx.us Wed Aug 8 15:04:25 2012 From: nginx-forum at nginx.us (i0n) Date: Wed, 8 Aug 2012 11:04:25 -0400 (EDT) Subject: proxying/redirecting after request has been rewritten Message-ID: Hi everyone. I'm trying to get Nginx to run some regexes on requests and send them to another server block, returning the response headers from the second server block. At the moment I get a 302 response status, how do I get the headers from the second server block? So as an example, I would like a request like: http://nginxrouting.local/some/stuff/that/needs/to/be/removed/itemid=1234/more/stuff/topicid=1234 to be sent to http://nginxrouting_destination.local/itemid=1234topicid=1234 returning the headers from the new location The server blocks look like this: server { server_name nginxrouting.local; root /var/nginxrouting/public; location / { if ($request_uri ~* ".*(itemid=[0-9]*){1}.*") { set $itemid $1; } if ($request_uri ~* ".*(topicid=[0-9]*){1}.*") { set $topicid $1; } if ($request_uri ~* ".*(&type=RESOURCES){1}.*") { set $resources $1; } rewrite ^ http://nginxrouting_destination.local/$itemid$topicid$resources?
redirect; add_header itemid $itemid; } } server { server_name nginxrouting_destination.local; root var/govuk/nginxrouting/public_destination; location / { add_header working yes; return 200; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229458,229458#msg-229458 From nginx-forum at nginx.us Wed Aug 8 15:35:51 2012 From: nginx-forum at nginx.us (ainou) Date: Wed, 8 Aug 2012 11:35:51 -0400 (EDT) Subject: How to prevent nginx locks Message-ID: <71df6630b17c1242b9cfc7934750ea80.NginxMailingListEnglish@forum.nginx.org> Hello all, I have a situation with nginx where occasionally all my worker processes hang (a ps -awx returns D state), which means that my webserver becomes unresponsive. I have to add that the lock occurs when trying to serve some files from an OCFS2 file system, and the only way to solve this is to reboot our fileserver. Obviously I can see that the main problem lies in OCFS2 and not nginx; however, what I would like to do is to prevent nginx from blocking. Is there anything I can do in order to have nginx unlock after some timeout or something like that? I've already looked at all the documentation but found nothing. I also increased the worker processes to 7 (I have 6 CPUs) but all processes simply lock, and increasing that number is not the solution. By the way, this same situation (nginx blocking) also occurred in the past with some SolR http requests we did in our code. Thanks in advance. Adelino Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229461,229461#msg-229461 From mdounin at mdounin.ru Wed Aug 8 15:38:18 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 19:38:18 +0400 Subject: proxying/redirecting after request has been rewritten In-Reply-To: References: Message-ID: <20120808153818.GZ40452@mdounin.ru> Hello! On Wed, Aug 08, 2012 at 11:04:25AM -0400, i0n wrote: > Hi everyone.
> > I'm trying to get Nginx to run some regexes on requests and send them to > another server block, returning the response headers from the second > server block. At the moment I get a 302 response status, how do I get > the headers from the second server block? > > So as an example, I would like a request like: > http://nginxrouting.local/some/stuff/that/needs/to/be/removed/itemid=1234/more/stuff/topicid=1234 > to be sent to > http://nginxrouting_destination.local/itemid=1234topicid=1234 returning > the headers from the new location Use proxy_pass. See http://nginx.org/r/proxy_pass for details. Maxim Dounin From mdounin at mdounin.ru Wed Aug 8 16:19:30 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 20:19:30 +0400 Subject: nginx-1.2.3 In-Reply-To: References: <20120807125336.GY40452@mdounin.ru> Message-ID: <20120808161930.GB40452@mdounin.ru> Hello! On Wed, Aug 08, 2012 at 12:41:36PM +0800, ??? wrote: > Hi, A compile warning in my Ubuntu server: > > clang -c -pipe -O -Wall -Wextra -Wpointer-arith -Wno-unused-parameter > -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I > objs \ > -o objs/src/os/unix/ngx_setaffinity.o \ > > src/os/unix/ngx_setaffinity.c > src/os/unix/ngx_setaffinity.c:57:13: warning: expression result unused > [-Wunused-value] > CPU_SET(i, &mask); > ^~~~~~~~~~~~~~~~~ > In file included > from src/os/unix/ngx_setaffinity.c:7: > In file included from src/core/ngx_config.h:26: > In file included > from src/os/unix/ngx_linux_config.h:41: > /usr/include/sched.h:72:33: note: instantiated from: > # define > CPU_SET(cpu, cpusetp) __CPU_SET_S (cpu, sizeof (cpu_set_t), cpusetp) > ^ > In file included > from src/os/unix/ngx_setaffinity.c:7: > In file included from src/core/ngx_config.h:26: > In file included > from src/os/unix/ngx_linux_config.h:41: > In file included from /usr/include/sched.h:35: > > /usr/include/bits/sched.h:145:9: note: instantiated from: > : 0; })) > ^ > 1 diagnostic generated.
Looks like a valid warning for glibc code; you may want to report it to the glibc folks. There's nothing we can do here (and it's actually one of the reasons we've not enabled -Werror for Clang yet). Maxim Dounin From nginx-forum at nginx.us Wed Aug 8 16:35:41 2012 From: nginx-forum at nginx.us (helliax) Date: Wed, 8 Aug 2012 12:35:41 -0400 (EDT) Subject: 404 fallback only hitting 2/3 upstream sets In-Reply-To: <20120808103106.GI40452@mdounin.ru> References: <20120808103106.GI40452@mdounin.ru> Message-ID: <57a5a2ce79b0db14ad81f611e21126e0.NginxMailingListEnglish@forum.nginx.org> Thanks for the heads up, Maxim. In what cases would it go into an infinite loop if I left it outside? I had thought that "location /" would catch all requests, and setting proxy_intercept_errors off at the last handler would break the chain by returning whatever upstream setC returned? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229402,229466#msg-229466 From mdounin at mdounin.ru Wed Aug 8 16:51:33 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 20:51:33 +0400 Subject: 404 fallback only hitting 2/3 upstream sets In-Reply-To: <57a5a2ce79b0db14ad81f611e21126e0.NginxMailingListEnglish@forum.nginx.org> References: <20120808103106.GI40452@mdounin.ru> <57a5a2ce79b0db14ad81f611e21126e0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120808165133.GD40452@mdounin.ru> Hello! On Wed, Aug 08, 2012 at 12:35:41PM -0400, helliax wrote: > Thanks for the heads up, Maxim. In what cases would it go into an infinite > loop if I left it outside? I had thought that "location /" would catch > all requests, and setting > > proxy_intercept_errors off > > at the last handler would break the chain by returning whatever upstream > setC returned? Even with proxy_intercept_errors set to off, errors might happen - e.g. a 502 will be generated if nginx isn't able to connect to an upstream for some reason.
In any case I don't think that your config creates a loop (unless you have some error_pages defined at http level). I mostly wrote about recursive_error_pages in general. Maxim Dounin From nginx-forum at nginx.us Wed Aug 8 16:55:00 2012 From: nginx-forum at nginx.us (helliax) Date: Wed, 8 Aug 2012 12:55:00 -0400 (EDT) Subject: 404 fallback only hitting 2/3 upstream sets In-Reply-To: <20120808165133.GD40452@mdounin.ru> References: <20120808165133.GD40452@mdounin.ru> Message-ID: <245063823a096aa0312525269ff82c68.NginxMailingListEnglish@forum.nginx.org> That makes sense. Thanks so much for all your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229402,229468#msg-229468 From nginx-forum at nginx.us Wed Aug 8 17:12:46 2012 From: nginx-forum at nginx.us (jimishjoban) Date: Wed, 8 Aug 2012 13:12:46 -0400 (EDT) Subject: Rewrite automatically unencodes question mark. In-Reply-To: <56fa8664b61c28ca32bfb4ffeb9f7dcc.NginxMailingListEnglish@forum.nginx.org> References: <56fa8664b61c28ca32bfb4ffeb9f7dcc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6b488d69d6c686dc9d91c5f98113c709.NginxMailingListEnglish@forum.nginx.org> It seems only the first %3F gets un-encoded. Remaining %3F remains as it is... Could it be a bug? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229436,229469#msg-229469 From mdounin at mdounin.ru Wed Aug 8 17:58:37 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Aug 2012 21:58:37 +0400 Subject: Rewrite automatically unencodes question mark. In-Reply-To: <6b488d69d6c686dc9d91c5f98113c709.NginxMailingListEnglish@forum.nginx.org> References: <56fa8664b61c28ca32bfb4ffeb9f7dcc.NginxMailingListEnglish@forum.nginx.org> <6b488d69d6c686dc9d91c5f98113c709.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120808175837.GF40452@mdounin.ru> Hello! On Wed, Aug 08, 2012 at 01:12:46PM -0400, jimishjoban wrote: > It seems only the first %3F gets un-encoded. Remaining %3F remains as it > is... Could it be a bug?
More or less yes, it's a weird historic behaviour which should eventually be fixed (and we even have several TODO test cases for this, see http://mdounin.ru/hg/nginx-tests/file/tip/rewrite_unescape.t). For now I would recommend using return 301 $uri; which doesn't try to do any unescaping. Maxim Dounin From lists at ruby-forum.com Wed Aug 8 18:30:57 2012 From: lists at ruby-forum.com (Edward Stembler) Date: Wed, 08 Aug 2012 20:30:57 +0200 Subject: Url rewrite for restful url Message-ID: <092fbf8bc6f45d40b70d728a675e47d4@ruby-forum.com> I have a Rails web app which has static files under dynamic restful urls. For example: /projects/1/attachments/some_file.xls I want to set up Nginx to redirect to the static file on the server: /public/attachments/1/some_file.xls Where 1 is the dynamic project id. How would the location block and rewrite statement look for the Nginx config file? StackOverflow link: http://stackoverflow.com/questions/11852854/nginx-url-rewrite-for-restful-url -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Wed Aug 8 19:37:36 2012 From: nginx-forum at nginx.us (vdavidoff) Date: Wed, 8 Aug 2012 15:37:36 -0400 (EDT) Subject: Mark upstream proxy inoperable based on response body or arbitrary http status? Message-ID: <970ae236831a239db5e72641e2febc08.NginxMailingListEnglish@forum.nginx.org> Hi, I just started playing with Nginx. My goal is to rate limit requests to a set of backend proxies, and it looks like I want to do this using the HttpUpstreamModule, HttpProxyModule, and HttpLimitReqModule modules. I see that HttpUpstreamModule marks an upstream server as inoperable based on the setting of proxy_next_upstream. However, I need to be able to mark a server as inoperable given conditions that I don't appear to be able to set. Specifically, if I could examine the results of a server response and mark it inoperable, or operable, based on the contents, that'd be best.
Another, possibly less complicated solution would be for me to mark a server inoperable if the http status in the response is 302, but I don't see a "http_302" for proxy_next_upstream. It would also be great if the amount of time before re-checking a server marked as inoperable was configurable separate from HttpUpstreamModule's fail_timeout (so I could say something like, if a server fails once in 1 second mark it inoperable, but don't try to check it again for 3 hours). Am I overlooking options to do what I want to do here? Thanks! Andy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229472,229472#msg-229472 From lists at ruby-forum.com Wed Aug 8 19:46:31 2012 From: lists at ruby-forum.com (Edward Stembler) Date: Wed, 08 Aug 2012 21:46:31 +0200 Subject: Nginx + Thin config on Windows for Rails app Message-ID: <598db8343ff8d27317c8f3f296e59cff@ruby-forum.com> I'm somewhat confused about using Nginx and Thin for serving my Rails 3.2 app. Previously I had Thin serving my Rails app on Windows Server 2008 R2 without any issues. I would start Thin on the production server specifying the server's IP address on port 80 like such: rails server thin -b 10.xx.x.xxx -p 80 -e production Now, I'm trying to add Nginx to the mix and I'm confused about how I should start Thin and how I should configure Nginx to forward to Thin. For example, now that Nginx is listening on port 80, should I start Thin locally on a different port? Like 0.0.0.0:3000 (or 127.0.0.1:3000)? Or do I start Thin like I did before on 10.xx.x.xxx:80? In my Nginx conf file do I specify the upstream servers as localhosts, or the machine's IP address? I'm not really sure what it's for. upstream mywebapp_thin { server 0.0.0.0:3000; } server { listen 80; server_name mywebserver www.mywebserver; # locations et. al. excluded for brevity... Most examples I see have the upstream servers running on ports 3000 or 5000. I'm wondering if those examples are really for a development setup, and not production? 
Or does Thin need to run on a different port other than 80 since Nginx is listening on it now? I noticed that my web app does not respond to the basic urls (mywebserver/projects) unless I add the port Thin is running on (mywebserver:3000/projects) StackOverflow link: http://stackoverflow.com/questions/11849827/nginx-thin-config-on-windows-for-rails-app -- Posted via http://www.ruby-forum.com/. From gelonida at gmail.com Wed Aug 8 21:04:43 2012 From: gelonida at gmail.com (Gelonida N) Date: Wed, 08 Aug 2012 23:04:43 +0200 Subject: Python in Nginx // windows 2008 In-Reply-To: References: Message-ID: On 08/08/2012 02:22 AM, borgita wrote: > I need to run python in Nginx. What do I need to install and configure? > I have windows 2008. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229419,229419#msg-229419 > Python can be run in multiple ways on an nginx server. Do you want Python code as a CGI script, then just look at how to run CGI on nginx, or do you want to run it via fast-cgi, then you could look at flup http://wiki.nginx.org/PythonFlup or do you want Python WSGI code, then you could for example look at uwsgi http://projects.unbit.it/uwsgi/ or at gunicorn http://gunicorn.org/ See also: http://stackoverflow.com/questions/6078225/recommended-nginx-wsgi-configurations From gelonida at gmail.com Wed Aug 8 21:08:26 2012 From: gelonida at gmail.com (Gelonida N) Date: Wed, 08 Aug 2012 23:08:26 +0200 Subject: Python in Nginx // windows 2008 In-Reply-To: References: Message-ID: Apologies, I overlooked the 'windows 2008 part'. On 08/08/2012 11:04 PM, Gelonida N wrote: > On 08/08/2012 02:22 AM, borgita wrote: >> I need to run python in Nginx. What do I need to install and configure? >> I have windows 2008. >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,229419,229419#msg-229419 >> > > Python can be run in multiple ways on an nginx server.
> > Do you want Python code as a CGI script, then just look at how to run > CGI on nginx > > or do you want to run it via fast-cgi, then you could look at flup > http://wiki.nginx.org/PythonFlup PythonFlup should be working on Windows. > > or do you want Python WSGI code, then you could for example look at uwsgi > > http://projects.unbit.it/uwsgi/ uwsgi definitely does NOT run on Windows, so you can skip it. > or at gunicorn http://gunicorn.org/ I think gunicorn is also Unix-only > See also: > http://stackoverflow.com/questions/6078225/recommended-nginx-wsgi-configurations > From bertrand.caplet at okira.net Wed Aug 8 21:51:20 2012 From: bertrand.caplet at okira.net (Bertrand Caplet) Date: Wed, 08 Aug 2012 23:51:20 +0200 Subject: IP backend reverse proxy problem Message-ID: <5022DF58.70702@okira.net> Hi, I've installed nginx as a reverse proxy with 2 backend web servers (one nginx, one apache). I want the backend servers to have the "real" IP of the client in their logs and on the webpage. I tried proxy_set_header like here : https://help.ubuntu.com/community/Nginx/ReverseProxy and apache with the rpaf module but it doesn't work... Hope you'll have some advice. I just want the backend servers to display the client IP, not like right now; have a look : http://wiki.okira.net/w/Accueil it displays the reverse proxy IP (127.0.0.1 or 87.106.165.190). Regards. -- Bertrand Caplet Tel. 02 33 35 20 94 Mob. 06 35 43 68 46 Web. blog.okira.net bertrand.caplet at okira.net -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: sig-ecolo.gif Type: image/gif Size: 1988 bytes Desc: not available URL: From nginx-forum at nginx.us Wed Aug 8 21:58:33 2012 From: nginx-forum at nginx.us (klml) Date: Wed, 8 Aug 2012 17:58:33 -0400 (EDT) Subject: trigger a git commit for static site generator Message-ID: I build a small static site generator: just PUT markdown from a textarea to a git-versioned file, and post-commit markdown.sh (and a little bit of templating) turns it into html for nginx. The only thing I miss is the commit triggered by nginx after the PUT. I tried it with shell etc.[^1] and even netcat ;) but nothing worked really nicely. Is there an easy way to commit via nginx? thx klml [^1]: http://forum.nginx.org/read.php?2,181239 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229477,229477#msg-229477 From jdorfman at netdna.com Wed Aug 8 22:44:25 2012 From: jdorfman at netdna.com (Justin Dorfman) Date: Wed, 8 Aug 2012 15:44:25 -0700 Subject: trigger a git commit for static site generator In-Reply-To: References: Message-ID: This *might* work (and can be dangerous IMO): x="`tail -f /usr/local/nginx/logs/access.log |grep PUT |sed 's/["]//g' | awk '{print $5}'`"; while [ "$x" = "PUT" ]; do git commit -m "Hello Igor"; done Your logs would have to be formatted like this: 8.8.8.8 foo.bar.netdna-cdn.com [08/Aug/2012:22:41:23 +0000] "PUT ... 4.2.2.2 bar.foo.netdna-cdn.com [08/Aug/2012:22:41:23 +0000] "PUT ... Regards, Justin Dorfman NetDNA ? The Science of Acceleration? On Wed, Aug 8, 2012 at 2:58 PM, klml wrote: > I build a small static site generator: just PUT markdown from a textarea > to a git-versioned file, and post-commit markdown.sh (and a little bit of > templating) turns it into html for nginx. > > The only thing I miss is the commit triggered by nginx after the PUT. > > I tried it with shell etc.[^1] and even netcat ;) but nothing worked > really nicely. > > Is there an easy way to commit via nginx?
> > thx > klml > > [^1]: http://forum.nginx.org/read.php?2,181239 > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,229477,229477#msg-229477 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Wed Aug 8 22:55:58 2012 From: lists at ruby-forum.com (Swapnil P.) Date: Thu, 09 Aug 2012 00:55:58 +0200 Subject: How to write ScriptAlias in nginx.conf like in httpd.conf Message-ID: <602457a7a87d408cc41f3df78068244e@ruby-forum.com> I have to configure nginx.conf just like the httpd.conf and stuck while configuring ScriptAlias. I have ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" i want to write the above in nginx.conf. How can I write it? -- Posted via http://www.ruby-forum.com/. From francis at daoine.org Wed Aug 8 23:08:15 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Aug 2012 00:08:15 +0100 Subject: How to write ScriptAlias in nginx.conf like in httpd.conf In-Reply-To: <602457a7a87d408cc41f3df78068244e@ruby-forum.com> References: <602457a7a87d408cc41f3df78068244e@ruby-forum.com> Message-ID: <20120808230815.GH32371@craic.sysops.org> On Thu, Aug 09, 2012 at 12:55:58AM +0200, Swapnil P. wrote: Hi there, > I have to configure nginx.conf just like the httpd.conf and stuck while > configuring ScriptAlias. I have > > ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" > > i want to write the above in nginx.conf. How can I write it? location /cgi-bin/ { proxy_pass http://apache_server:apache_port } (ScriptAlias in apache httpd is for configuring CGI handling. nginx doesn't handle CGI, so you need to decide how you want the urls to be handled. This version just proxies them to your running apache.) Depending on the rest of your nginx.conf, you may want " ^~" after "location". 
http://nginx.org/r/location http://nginx.org/r/proxy_pass f -- Francis Daly francis at daoine.org From lists at ruby-forum.com Wed Aug 8 23:17:45 2012 From: lists at ruby-forum.com (Swapnil P.) Date: Thu, 09 Aug 2012 01:17:45 +0200 Subject: How to load module in nginx.conf like we do in httpd.conf? Message-ID: <76c2c1017900d2d0399e3c4f2fd163d9@ruby-forum.com> Hi, I want to load all the modules which are mentioned below are in httpd.conf.Can you tell me how i can do this in nginx.conf? httpd.conf: LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule 
status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so ### LoadModule proxy_module modules/mod_proxy.so ### LoadModule proxy_balancer_module modules/mod_proxy_balancer.so ### LoadModule proxy_ftp_module modules/mod_proxy_ftp.so ### LoadModule proxy_http_module modules/mod_proxy_http.so ### LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule mem_cache_module modules/mod_mem_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module modules/mod_version.so -- Posted via http://www.ruby-forum.com/. From francis at daoine.org Wed Aug 8 23:18:42 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Aug 2012 00:18:42 +0100 Subject: IP backend reverse proxy problem In-Reply-To: <5022DF58.70702@okira.net> References: <5022DF58.70702@okira.net> Message-ID: <20120808231842.GI32371@craic.sysops.org> On Wed, Aug 08, 2012 at 11:51:20PM +0200, Bertrand Caplet wrote: Hi there, > Hi, I've installed nginx as a reverse proxy with 2 backends web servers > (one nginx, one apache). I want the backends servers to have the "real" > IP of the client on their logs and on the webpage. The backend servers will always see the tcp connection coming from the nginx reverse proxy. 
All nginx can do is send an extra http header which says what the ip address of the client connecting to it was. The backend server can then choose to do something special with that header, or not. > I tried > proxy_set_header like here : > https://help.ubuntu.com/community/Nginx/ReverseProxy and apache with the > rpaf module but it doesn't work... Look at the traffic from nginx to the backend server. If you see X-Real-IP: and the client address, your nginx configuration is correct. If not, it isn't. If your nginx configuration is not correct, provide details. If it is correct, you have a non-nginx problem to address. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Aug 8 23:25:48 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Aug 2012 00:25:48 +0100 Subject: How to load module in nginx.conf like we do in httpd.conf? In-Reply-To: <76c2c1017900d2d0399e3c4f2fd163d9@ruby-forum.com> References: <76c2c1017900d2d0399e3c4f2fd163d9@ruby-forum.com> Message-ID: <20120808232548.GJ32371@craic.sysops.org> On Thu, Aug 09, 2012 at 01:17:45AM +0200, Swapnil P. wrote: Hi there, > I want to load all the modules which are mentioned below are in > httpd.conf.Can you tell me how i can do this in nginx.conf? You can't. "nginx -V" to see how this instance was built. http://nginx.org/en/docs/install.html for how to build a fresh instance with the modules you want and without the modules you don't. "./configure --help" for, among other things, a list of the available distributed modules. 
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Aug 8 23:41:33 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Aug 2012 00:41:33 +0100 Subject: Nginx + Thin config on Windows for Rails app In-Reply-To: <598db8343ff8d27317c8f3f296e59cff@ruby-forum.com> References: <598db8343ff8d27317c8f3f296e59cff@ruby-forum.com> Message-ID: <20120808234133.GK32371@craic.sysops.org> On Wed, Aug 08, 2012 at 09:46:31PM +0200, Edward Stembler wrote: Hi there, > Now, I'm trying to add Nginx to the mix and I'm confused about how I > should start Thin and how I should configure Nginx to forward to Thin. If you have nginx listening on port 80, you should put Thin listening on another port. It is probably best to put it listening on an address like 127.0.0.1. > In my Nginx conf file do I specify the upstream servers as localhosts, > or the machine's IP address? I'm not really sure what it's for. The upstream is whatever Thin is listening on, and should include a specific address, not 0.0.0.0. (The "upstream" configuration does nothing until it is referenced in the proxy_pass directive which follows.) > upstream mywebapp_thin { > server 0.0.0.0:3000; > } > > server { > listen 80; > server_name mywebserver www.mywebserver; > # locations et. al. excluded for brevity... The important location{} block is the one that refers to all urls that should be handled by Thin (and refers to no urls that should not be handled by Thin). In that location{}, you will want something like proxy_pass http://mywebapp_thin; or maybe proxy_pass http://mywebapp_thin/; Depending on how Thin responds, you may also want some proxy_set_header lines. 
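[Editor's note: assembling the advice above into one sketch. The upstream name, port 3000, and server_name values come from the question itself; this is an illustration of the approach, not a tested configuration:]

```nginx
# Thin bound to loopback on a port other than 80; nginx proxies to it.
upstream mywebapp_thin {
    server 127.0.0.1:3000;   # a specific address, not 0.0.0.0
}

server {
    listen 80;
    server_name mywebserver www.mywebserver;

    location / {
        proxy_pass http://mywebapp_thin;
        # Optional, depending on what Thin expects from the request:
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```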
http://nginx.org/r/location http://nginx.org/r/proxy_pass http://nginx.org/r/proxy_set_header Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Aug 8 23:57:59 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Aug 2012 00:57:59 +0100 Subject: Url rewrite for restful url In-Reply-To: <092fbf8bc6f45d40b70d728a675e47d4@ruby-forum.com> References: <092fbf8bc6f45d40b70d728a675e47d4@ruby-forum.com> Message-ID: <20120808235759.GL32371@craic.sysops.org> On Wed, Aug 08, 2012 at 08:30:57PM +0200, Edward Stembler wrote: Hi there, > /projects/1/attachments/some_file.xls > > I want to setup Nginx to redirect to the static file on the server: > > /public/attachments/1/some_file.xls > > Where 1 is the dynamic project id. If you want requests like /projects/N/attachments/F to return the content of the file /public/attachments/N/F, then "alias" is probably the way to go. http://nginx.org/r/alias location ~ ^/projects/(\d+)/attachments/(.*) { alias /public/attachments/$1/$2; } Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Aug 9 01:20:21 2012 From: nginx-forum at nginx.us (vdavidoff) Date: Wed, 8 Aug 2012 21:20:21 -0400 (EDT) Subject: Mark upstream proxy inoperable based on response body or arbitrary http status? In-Reply-To: <970ae236831a239db5e72641e2febc08.NginxMailingListEnglish@forum.nginx.org> References: <970ae236831a239db5e72641e2febc08.NginxMailingListEnglish@forum.nginx.org> Message-ID: <245f7df00f6476793f158de8cc4a8af0.NginxMailingListEnglish@forum.nginx.org> I tried playing with proxy_intercept_errors and error_page to change 302s to 500s, then added http_500 to proxy_next_upstream, but this doesn't seem to mark a proxy as down. I assume as far as HttpUpstreamModule is concerned, the response is a 302, prior to it being converted to a 500 (?). 
Andy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229472,229487#msg-229487 From nginx-forum at nginx.us Thu Aug 9 01:37:45 2012 From: nginx-forum at nginx.us (sergiocampama) Date: Wed, 8 Aug 2012 21:37:45 -0400 (EDT) Subject: Nginx + SSL + Thin Message-ID: <2364c58730f6c2b607bab08be84faae6.NginxMailingListEnglish@forum.nginx.org> Hello, I am trying to configure my web app to receive SSL support. I have nginx with 2 server{} clauses, one for http and another (almost identical) for https. I have a thin upstream server at port 3000, which only gets redirected if a certain /subfolder is accessed. I have the certificates in working order, I can access static content perfectly. I also have a proxy_pass directive for http://rails_upstream. I tried setting the https server to proxy_pass to https://rails_upstream but it gave a 500 error. Is there anything else I need to configure so that the same thin server receives the https requests? Or do I need to configure another thin server specifically for https purposes? Best regards, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229488,229488#msg-229488 From nginx-forum at nginx.us Thu Aug 9 03:40:19 2012 From: nginx-forum at nginx.us (vdavidoff) Date: Wed, 8 Aug 2012 23:40:19 -0400 (EDT) Subject: Mark upstream proxy inoperable based on response body or arbitrary http status? In-Reply-To: <245f7df00f6476793f158de8cc4a8af0.NginxMailingListEnglish@forum.nginx.org> References: <970ae236831a239db5e72641e2febc08.NginxMailingListEnglish@forum.nginx.org> <245f7df00f6476793f158de8cc4a8af0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8f9323107fc0dcbab13724fc149edd50.NginxMailingListEnglish@forum.nginx.org> I did something I am sure is terrible and hacked up the source so that I could use http_302 along with proxy_next_upstream. This appears to do what I want, though now I am not sure that HttpLimitReqModule can actually do what I need. 
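In configuration terms, the source hack described above presumably enables something like the following (the upstream name is illustrative; note that `http_302` is not a parameter stock nginx accepts for proxy_next_upstream, so this only works with the poster's local source modification):

```nginx
location / {
    proxy_pass http://backends;   # illustrative upstream name

    # 'http_302' is NOT valid in an unpatched nginx -- stock builds only
    # accept error, timeout, invalid_header, http_500/502/503/504,
    # http_404 and off here
    proxy_next_upstream error timeout http_302;
}
```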
Andy

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229472,229489#msg-229489

From loki77 at gmail.com Thu Aug 9 04:35:37 2012
From: loki77 at gmail.com (Michael Barrett)
Date: Wed, 8 Aug 2012 21:35:37 -0700
Subject: issues w/ client certificates from a self-signed CA
Message-ID: 

Hi,

I'm trying to get client certificate authentication going using client certificates signed by a self-signed certificate authority created with openssl. After getting a bunch of '400 The SSL certificate error' errors I put nginx in debug mode and saw the following:

2012/08/08 23:22:14 [info] 27556#0: *1 client SSL certificate verify error: (18:self signed certificate) while reading client request headers, client: 50.18.140.88, server: _, request: "GET /blah/ HTTP/1.1", host: "example.com:8080"

I see that error 18 when I try to verify the client cert with the CA cert via openssl as well, but the verify still returns an 'OK' so it seems like it's more of a warning. Would that lead to the 400 error that my client is seeing? If so, is there any way to get nginx to accept certificates signed by a self-signed CA?

I'm running nginx 1.1.19 on Ubuntu 12.04. Let me know if there's any other info you might need - thanks!

-- 
Michael Barrett
loki77 at gmail.com

From martinloy.uy at gmail.com Thu Aug 9 04:54:16 2012
From: martinloy.uy at gmail.com (Martin Loy)
Date: Thu, 9 Aug 2012 01:54:16 -0300
Subject: Weird issue between backend and upstream (0 bytes)
In-Reply-To: 
References: 
Message-ID: 

Has anyone seen/had a similar issue? We are kinda stuck with this issue :S

Regards

M

On Fri, Aug 3, 2012 at 2:48 PM, Guzmán Brasó wrote:
> Hi list!
>
> I've a weid issue I've been banging my head without been able to
> understand what's going on.
>
> Setup is as follow:
> - 1 nginx 0.7.67 doing load balance to two backends.
> - backend 1 with Nginx 1.2.2 and php-fpm 5.3
> - backend 2 with Nginx 0.7.67 and php-fpm 5.3
>
> Some, and only some requests log in the upstream status 200 and 0 byte
> returned.
> Same request in the backend log shows a 200 status and ALWAYS the same > amount of bytes, which change between backends. > When this happen on a request that went to backend 1: upstream logs 200 0 > byte, backend logs 200 15776 bytes. > When this happen on a request that went to backend 2: upstream logs 200 0 > byte, backend logs 200 15670 bytes. > > I've tried without luck to reproduce the problem, so I decided to start > debugging all requests to this site to try to understand why nginx is > returning empty responses. > > This is what I see in upstream when error happens: > (...) > 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: "Accept: */*" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: "User-Agent: > AdsBot-Google (+http://www.google.com/adsbot.html)" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: > "Accept-Encoding: gzip,deflate" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: > 2012/08/03 13:25:32 [debug] 1546#0: *823 http cleanup add: 0000000001A3DF10 > 2012/08/03 13:25:32 [debug] 1546#0: *823 get rr peer, try: 2 > 2012/08/03 13:25:32 [debug] 1546#0: *823 get rr peer, current: 0 8 > 2012/08/03 13:25:32 [debug] 1546#0: *823 socket 149 > 2012/08/03 13:25:32 [debug] 1546#0: *823 epoll add connection: fd:149 > ev:80000005 > 2012/08/03 13:25:32 [debug] 1546#0: *823 connect to 176.31.64.205:8059, > fd:149 #824 > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream connect: -2 > 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer add: 149: > 30000:1344011162971 > 2012/08/03 13:25:32 [debug] 1546#0: *823 http run request: > "/s/miracle+noodle?" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream check client, write > event:1, "/s/miracle+noodle" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream recv(): -1 (11: > Resource temporarily unavailable) > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream request: > "/s/miracle+noodle?" 
> 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream send request > handler > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream send request > 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer buf fl:1 s:358 > 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer in: 0000000001A3DF48 > 2012/08/03 13:25:32 [debug] 1546#0: *823 writev: 358 > 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer out: 0000000000000000 > 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer del: 149: > 1344011162971 > 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer add: 149: > 120000:1344011252972 > 2012/08/03 13:25:33 [debug] 1546#0: *823 http run request: > "/s/miracle+noodle?" > 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream check client, write > event:0, "/s/miracle+noodle" > 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream recv(): 0 (11: > Resource temporarily unavailable) > 2012/08/03 13:25:33 [debug] 1546#0: *823 http run request: > "/s/miracle+noodle?" > 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream check client, write > event:1, "/s/miracle+noodle" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream request: > "/s/miracle+noodle?" 
> 2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream process header > 2012/08/03 13:25:45 [debug] 1546#0: *823 malloc: 0000000001B8DC30:16384 > 2012/08/03 13:25:45 [debug] 1546#0: *823 recv: fd:149 4344 of 16296 > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy status 200 "200 OK" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Server: nginx" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Date: Fri, 03 > Aug 2012 16:24:26 GMT" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Content-Type: > text/html" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Connection: > close" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "X-Powered-By: > PHP/5.3.15-1~dotdeb.0" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header done > 2012/08/03 13:25:45 [debug] 1546#0: *823 HTTP/1.1 200 OKM > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http upstream request: -1 > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http proxy request > 2012/08/03 13:25:45 [debug] 1546#0: *823 free rr peer 2 0 > 2012/08/03 13:25:45 [debug] 1546#0: *823 close http upstream connection: > 149 > 2012/08/03 13:25:45 [debug] 1546#0: *823 event timer del: 149: > 1344011252972 > 2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream cache fd: 0 > 2012/08/03 13:25:45 [debug] 1546#0: *823 http file cache free > 2012/08/03 13:25:45 [debug] 1546#0: *823 http finalize request: -1, > "/s/miracle+noodle?" 
1 > 2012/08/03 13:25:45 [debug] 1546#0: *823 http close request > 2012/08/03 13:25:45 [debug] 1546#0: *823 http log handler > 2012/08/03 13:25:45 [debug] 1546#0: *823 run cleanup: 0000000001A3DC40 > 2012/08/03 13:25:45 [debug] 1546#0: *823 run cleanup: 0000000001850E58 > 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B8DC30 > 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000018501C0, unused: 1 > 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001A3D890, unused: > 757 > 2012/08/03 13:25:45 [debug] 1546#0: *823 close http connection: 148 > 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000019B7C20 > 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B2BEF0 > 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B3DC00, unused: 8 > 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000019F55B0, unused: > 112 > > The above debug log on upsteams seems to say that the backend closed the > connection after headers, am I right? > > However, backend debug of another affected request that always log as > returned the same amount of bytes (which is pretty weird), shows a > different story.. > After many of... > (...) 
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe recv chain: -2 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 > 00000000010BF1F0, pos 00000000010BF1F0, size: 80 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe write downstream: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe read upstream: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 > 00000000010BF1F0, pos 00000000010BF1F0, size: 80 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 > (...) 
> > Sudenly this comes: > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe recv chain: -2 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 > 00000000010BF1F0, pos 00000000010BF1F0, size: 49232 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe write downstream: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe read upstream: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 > 00000000010BF1F0, pos 00000000010BF1F0, size: 49232 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 > 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer: 13, old: > 1344013157360, new: 1344013157362 > 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer: 12, old: > 1344013037361, new: 1344013037362 > 2012/08/03 13:56:17 [debug] 1519#0: *221 post event 000000000106B360 > 2012/08/03 13:56:17 [debug] 1519#0: *221 delete posted event > 000000000106B360 > 2012/08/03 13:56:17 [debug] 1519#0: *221 http run request: > "/s/new+balance+1500?" 
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream check client, write > event:0, "/s/new+balance+1500" > 2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream recv(): 0 (11: > Resource temporarily unavailable) > 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http upstream request: > 499 > 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http fastcgi request > 2012/08/03 13:56:17 [debug] 1519#0: *221 free rr peer 1 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 close http upstream connection: 13 > 2012/08/03 13:56:17 [debug] 1519#0: *221 free: 0000000000FF7CD0, unused: 48 > 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer del: 13: 1344013157360 > 2012/08/03 13:56:17 [debug] 1519#0: *221 reusable connection: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream temp fd: -1 > 2012/08/03 13:56:17 [debug] 1519#0: *221 http output filter > "/s/new+balance+1500?" > 2012/08/03 13:56:17 [debug] 1519#0: *221 http copy filter: > "/s/new+balance+1500?" > 2012/08/03 13:56:17 [debug] 1519#0: *221 image filter > 2012/08/03 13:56:17 [debug] 1519#0: *221 xslt filter body > 2012/08/03 13:56:17 [debug] 1519#0: *221 http postpone filter > "/s/new+balance+1500?" 0000000000FEDA18 > 2012/08/03 13:56:17 [debug] 1519#0: *221 http copy filter: -1 > "/s/new+balance+1500?" > 2012/08/03 13:56:17 [debug] 1519#0: *221 http finalize request: -1, > "/s/new+balance+1500?" a:1, c:1 > > So this means the backend felt the upstream closed the connection before > it was allowed to return all data: > 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http upstream request: > 499 > > If that's the case, why it saves this to the logfile: > [03/Aug/2012:13:56:17 -0300] "GET /s/new+balance+1500 HTTP/1.0" 200 15776 > "-" "AdsBot-Google (+http://www.google.com/adsbot.html)" - Cache: - 200 > 24.535 > > I think the problem may be in the upstream, as this weird beahavior > happens with both backends and both use different nginx versions, but I'm > really out of answers right now with this issue. 
>
> Any idea? Hint? Today of 8K requests, 2779 returned 0 bytes on the
> upstream and 15776 bytes on the backend....
>
> Thank you!!
>
> Guzmán
>
>
>
> --
> Guzmán Brasó Núñez
> Senior Perl Developer / Sysadmin
> Web: http://guzman.braso.info
> Mobile: +598 98 674020
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

-- 
*There was never a friend who did a dwarf a favor, nor an enemy who did
him a wrong, that was not repaid in full.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nbubingo at gmail.com Thu Aug 9 05:03:11 2012
From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=)
Date: Thu, 9 Aug 2012 13:03:11 +0800
Subject: multiple replacements with http_sub_module?
In-Reply-To: 
References: 
Message-ID: 

Try my substitutions module:
https://github.com/yaoweibin/ngx_http_substitutions_filter_module

2012/8/8 Christian Bünning :
> Hi,
>
> is there any way to ship around the limitation that only a single
> `sub_filter` directive is supported per location?
>
> In fact I want/need to replace 3 "strings" (/fileadmin/, /typo3conf/
> and /typo3temp/) within my root-location and "move" the requests over
> to absolute servernames (like a cdn) by the time the response is
> running through my frontend lb.
>
> Any ideas how to accomplish that?
>
> Regards,
> Christian
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Thu Aug 9 07:56:41 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 9 Aug 2012 11:56:41 +0400
Subject: Nginx + SSL + Thin
In-Reply-To: <2364c58730f6c2b607bab08be84faae6.NginxMailingListEnglish@forum.nginx.org>
References: <2364c58730f6c2b607bab08be84faae6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20120809075641.GK40452@mdounin.ru>

Hello!
On Wed, Aug 08, 2012 at 09:37:45PM -0400, sergiocampama wrote:

> Hello,
>
> I am trying to configure my web app to receive SSL support. I have nginx
> with 2 server{} clauses, one for http and another (almost identical) for
> https. I have a thin upstream server at port 3000, which only gets
> redirected if a certain /subfolder is accessed.
>
> I have the certificates in working order, I can access static content
> perfectly. I also have a proxy_pass directive for http://rails_upstream.
> I tried setting the https server to proxy_pass to https://rails_upstream
> but it gave a 500 error.
>
> Is there anything else I need to configure so that the same thin server
> receives the https requests? Or do I need to configure another thin
> server specifically for https purposes?

Unless you need https between nginx and thin for some reason, it's
usually enough to just write proxy_pass to http://rails_upstream.
Note "http", not "https".

If you want https between nginx and thin, you have to run another
thin instance to handle https.

Maxim Dounin

From nginx-forum at nginx.us Thu Aug 9 08:24:00 2012
From: nginx-forum at nginx.us (n1xman)
Date: Thu, 9 Aug 2012 04:24:00 -0400 (EDT)
Subject: nginx-sticky & nginx_http_upstream_check modules not working together
In-Reply-To: 
References: 
Message-ID: <20ad52ee5c052388ce6068d1f708b72f.NginxMailingListEnglish@forum.nginx.org>

Hi ???,

We have tested it in a test environment and it is working now, thanks to you! :)

However, in our test environment we also wanted to test nginx_tcp_proxy_module for WebSocket. We have noticed that upstream server health checks and status monitoring come bundled with the nginx_tcp_proxy_module.

Can you tell us whether, if we compile nginx_tcp_proxy_module, we can drop nginx_http_upstream_check_module, still patch nginx-sticky-module with nginx_http_upstream_check_module/nginx-sticky-module.patch, and use nginx_tcp_proxy_module for the upstream health checks as well?
BTW: We have tried to compile both nginx_tcp_proxy_module and nginx_http_upstream_check_module together, following error fires during the make process. [hirantha at abmx-test nginx-1.2.1]$ patch -p1 < ./contrib/yaoweibin-nginx_upstream_check_module-be97c70/check_1.2.1.patch patching file src/http/modules/ngx_http_upstream_ip_hash_module.c patching file src/http/ngx_http_upstream_round_robin.c patching file src/http/ngx_http_upstream_round_robin.h [hirantha at abmx-test nginx-1.2.1]$ patch -p1 < ./contrib/yaoweibin-nginx_tcp_proxy_module-a40c99a/tcp.patch patching file src/core/ngx_log.c Hunk #1 succeeded at 67 (offset 1 line). patching file src/core/ngx_log.h Hunk #1 succeeded at 30 (offset 1 line). patching file src/event/ngx_event_connect.h Hunk #1 succeeded at 33 (offset 1 line). [hirantha at abmx-test nginx-1.2.1]$ ./configure --add-module=./contrib/yaoweibin-nginx_tcp_proxy_module-a40c99a --add-module=./contrib/yaoweibin-nginx_upstream_check_module-be97c70 checking for OS + Linux 2.6.18-194.el5PAE i686 checking for C compiler ... found .. checking for openat(), fstatat() ... found configuring additional modules adding module in ./contrib/yaoweibin-nginx_tcp_proxy_module-a40c99a checking for nginx_tcp_module ... found + ngx_tcp_module was configured adding module in ./contrib/yaoweibin-nginx_upstream_check_module-be97c70 checking for ngx_http_upstream_check_module ... found + ngx_http_upstream_check_module was configured checking for PCRE library ... found checking for PCRE JIT support ... not found checking for OpenSSL library ... found checking for zlib library ... found creating objs/Makefile [hirantha at abmx-test nginx-1.2.1]$make ... ... 
objs/addon/modules/ngx_tcp_ssl_module.o \ objs/addon/yaoweibin-nginx_upstream_check_module-be97c70/ngx_http_upstream_check_module.o \ objs/addon/yaoweibin-nginx_upstream_check_module-be97c70/ngx_http_upstream_check_handler.o \ objs/ngx_modules.o \ -lpthread -lcrypt -lpcre -lssl -lcrypto -ldl -lz objs/addon/yaoweibin-nginx_upstream_check_module-be97c70/ngx_http_upstream_check_handler.o:(.rodata+0x40): multiple definition of `sslv3_client_hello_pkt' objs/addon/yaoweibin-nginx_tcp_proxy_module-a40c99a/ngx_tcp_upstream_check.o:(.rodata+0x0): first defined here collect2: ld returned 1 exit status make[1]: *** [objs/nginx] Error 1 make[1]: Leaving directory `/usr/local/hirantha/nginx-1.2.1' make: *** [build] Error 2 Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227937,229498#msg-229498 From nbubingo at gmail.com Thu Aug 9 12:30:00 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Thu, 9 Aug 2012 20:30:00 +0800 Subject: nginx-sticky & nginx_http_upstream_check modules not working together In-Reply-To: <20ad52ee5c052388ce6068d1f708b72f.NginxMailingListEnglish@forum.nginx.org> References: <20ad52ee5c052388ce6068d1f708b72f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, Try the latest code, you can use these modules both. Thanks. 2012/8/9 n1xman : > Hi ???, > > We have tested it on test environment and it is working now; and thanks > to you! :) > > However, in our test environment we also wanted to test > nginx_tcp_proxy_module for WebSocket. We have noticed upstream server > health check and status monitor bundle to the nginx_tcp_proxy_module. > > Can you tell if we compile nginx_tcp_proxy_module, we can drop the > nginx_http_upstream_check_module; and we still patch the > nginx-sticky-module with > nginx_http_upstream_check_module/nginx-sticky-module.patch and use > nginx_tcp_proxy_module for upstream health check also..? 
> > BTW: We have tried to compile both nginx_tcp_proxy_module and > nginx_http_upstream_check_module together, following error fires during > the make process. > > [hirantha at abmx-test nginx-1.2.1]$ patch -p1 < > ./contrib/yaoweibin-nginx_upstream_check_module-be97c70/check_1.2.1.patch > patching file src/http/modules/ngx_http_upstream_ip_hash_module.c > patching file src/http/ngx_http_upstream_round_robin.c > patching file src/http/ngx_http_upstream_round_robin.h > [hirantha at abmx-test nginx-1.2.1]$ patch -p1 < > ./contrib/yaoweibin-nginx_tcp_proxy_module-a40c99a/tcp.patch > patching file src/core/ngx_log.c > Hunk #1 succeeded at 67 (offset 1 line). > patching file src/core/ngx_log.h > Hunk #1 succeeded at 30 (offset 1 line). > patching file src/event/ngx_event_connect.h > Hunk #1 succeeded at 33 (offset 1 line). > [hirantha at abmx-test nginx-1.2.1]$ ./configure > --add-module=./contrib/yaoweibin-nginx_tcp_proxy_module-a40c99a > --add-module=./contrib/yaoweibin-nginx_upstream_check_module-be97c70 > checking for OS > + Linux 2.6.18-194.el5PAE i686 > checking for C compiler ... found > .. > checking for openat(), fstatat() ... found > configuring additional modules > adding module in ./contrib/yaoweibin-nginx_tcp_proxy_module-a40c99a > checking for nginx_tcp_module ... found > + ngx_tcp_module was configured > adding module in > ./contrib/yaoweibin-nginx_upstream_check_module-be97c70 > checking for ngx_http_upstream_check_module ... found > + ngx_http_upstream_check_module was configured > checking for PCRE library ... found > checking for PCRE JIT support ... not found > checking for OpenSSL library ... found > checking for zlib library ... found > creating objs/Makefile > > [hirantha at abmx-test nginx-1.2.1]$make > ... > ... 
> objs/addon/modules/ngx_tcp_ssl_module.o \
> objs/addon/yaoweibin-nginx_upstream_check_module-be97c70/ngx_http_upstream_check_module.o \
> objs/addon/yaoweibin-nginx_upstream_check_module-be97c70/ngx_http_upstream_check_handler.o \
> objs/ngx_modules.o \
> -lpthread -lcrypt -lpcre -lssl -lcrypto -ldl -lz
> objs/addon/yaoweibin-nginx_upstream_check_module-be97c70/ngx_http_upstream_check_handler.o:(.rodata+0x40):
> multiple definition of `sslv3_client_hello_pkt'
> objs/addon/yaoweibin-nginx_tcp_proxy_module-a40c99a/ngx_tcp_upstream_check.o:(.rodata+0x0):
> first defined here
> collect2: ld returned 1 exit status
> make[1]: *** [objs/nginx] Error 1
> make[1]: Leaving directory `/usr/local/hirantha/nginx-1.2.1'
> make: *** [build] Error 2
>
> Thanks in advance.
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227937,229498#msg-229498
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Thu Aug 9 12:54:19 2012
From: nginx-forum at nginx.us (Jugurtha)
Date: Thu, 9 Aug 2012 08:54:19 -0400 (EDT)
Subject: Connection refused
Message-ID: <521c6540cc699c0dd0a4ed57be48b4db.NginxMailingListEnglish@forum.nginx.org>

Hello everybody,

I have recently been having some problems with my nginx server and I need some help.

My platform is linux 2.6.32 - SLES 11 SP1 (x86_64) (64GB RAM & 16 CPUs with a Fusion-io card). The nginx server has two "server" instances, a frontend (cache) and a backend (which fetches static files from another external server).
On the frontend server there are error logs like this:

2012/08/09 14:13:19 [error] 23794#0: *62532 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET 444d58dfa1be-596d.jpg HTTP/1.0", upstream: "http://10.13.10.100:80/444d58dfa1be-596d.jpg", host: "xxxx.com", referrer: "http://xxxxxxxx/file.htm"

I have 1700 requests per minute (including 150 requests with "111: Connection refused").

Does anybody have an idea how to avoid or fix this?

Thx

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229504,229504#msg-229504

From guzman.braso at gmail.com Thu Aug 9 13:24:51 2012
From: guzman.braso at gmail.com (=?ISO-8859-1?B?R3V6beFuIEJyYXPz?=)
Date: Thu, 9 Aug 2012 10:24:51 -0300
Subject: Weird issue between backend and upstream (0 bytes)
In-Reply-To: 
References: 
Message-ID: 

Hi... Just made myself a dump from the backend perspective and analyzed it with wireshark to see what was in those bytes; I was pretty sure those bytes were some error from the app holding the key to fix it... but no: the count is always 15776 bytes, yet they are the first 15776 bytes of each response, so every request is different.

According to tcpdump everything flows OK, and after one common ack from the upstream while the backend sends data, the upstream immediately sends the FIN flag, and immediately after that a group of RSTs arrive.

So, clearly there's no backend issue here; whatever happens, it's on the upstream, and now it looks more like a problem with the OS than nginx's fault.

Anyone have any ideas why the OS may be closing & resetting nginx connections?

OS: Debian Lenny
Virtualized inside XenServer (Citrix free version)
Virtual cores: 2 (Load: 0.10)
RAM: 2GB (Currently: 44mb free / 1.3GB os cached, 310MB buffers)

It currently shows no load, and the problem occurs for 20%-30% of the traffic to both backends, but only for a specific URL.
This main nginx is doing load balancing for more domains, some of them with a lot of traffic, and there's not a single status 200 0 byte request in the logs of the other domains.

Some network stats:

netstat -anp|awk '$1~/^tcp/ { print $6 }'|sort -n|uniq -c
      2 CLOSING
    164 ESTABLISHED
     21 FIN_WAIT1
    150 FIN_WAIT2
      1 LAST_ACK
      9 LISTEN
      4 SYN_RECV
      1 SYN_SENT
    293 TIME_WAIT

No custom configuration inside /etc/sysctl.conf...

I'll gladly submit more data from the faulty nginx server if anyone needs it, just ask.

Thanks for any hint, I'm really out of ideas.

On Fri, Aug 3, 2012 at 2:48 PM, Guzmán Brasó wrote:
> Hi list!
>
> I've a weid issue I've been banging my head without been able to
> understand what's going on.
>
> Setup is as follow:
> - 1 nginx 0.7.67 doing load balance to two backends.
> - backend 1 with Nginx 1.2.2 and php-fpm 5.3
> - backend 2 with Nginx 0.7.67 and php-fpm 5.3
>
> Some, and only some requests log in the upstream status 200 and 0 byte
> returned.
> Same request in the backend log shows a 200 status and ALWAYS the same
> amount of bytes, which change between backends.
> When this happen on a request that went to backend 1: upstream logs 200 0
> byte, backend logs 200 15776 bytes.
> When this happen on a request that went to backend 2: upstream logs 200 0
> byte, backend logs 200 15670 bytes.
>
> I've tried without luck to reproduce the problem, so I decided to start
> debugging all requests to this site to try to understand why nginx is
> returning empty responses.
>
> This is what I see in upstream when error happens:
> (...)
> 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: "Accept: */*" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: "User-Agent: > AdsBot-Google (+http://www.google.com/adsbot.html)" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: > "Accept-Encoding: gzip,deflate" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http proxy header: > 2012/08/03 13:25:32 [debug] 1546#0: *823 http cleanup add: 0000000001A3DF10 > 2012/08/03 13:25:32 [debug] 1546#0: *823 get rr peer, try: 2 > 2012/08/03 13:25:32 [debug] 1546#0: *823 get rr peer, current: 0 8 > 2012/08/03 13:25:32 [debug] 1546#0: *823 socket 149 > 2012/08/03 13:25:32 [debug] 1546#0: *823 epoll add connection: fd:149 > ev:80000005 > 2012/08/03 13:25:32 [debug] 1546#0: *823 connect to 176.31.64.205:8059, > fd:149 #824 > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream connect: -2 > 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer add: 149: > 30000:1344011162971 > 2012/08/03 13:25:32 [debug] 1546#0: *823 http run request: > "/s/miracle+noodle?" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream check client, write > event:1, "/s/miracle+noodle" > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream recv(): -1 (11: > Resource temporarily unavailable) > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream request: > "/s/miracle+noodle?" 
> 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream send request > handler > 2012/08/03 13:25:32 [debug] 1546#0: *823 http upstream send request > 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer buf fl:1 s:358 > 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer in: 0000000001A3DF48 > 2012/08/03 13:25:32 [debug] 1546#0: *823 writev: 358 > 2012/08/03 13:25:32 [debug] 1546#0: *823 chain writer out: 0000000000000000 > 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer del: 149: > 1344011162971 > 2012/08/03 13:25:32 [debug] 1546#0: *823 event timer add: 149: > 120000:1344011252972 > 2012/08/03 13:25:33 [debug] 1546#0: *823 http run request: > "/s/miracle+noodle?" > 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream check client, write > event:0, "/s/miracle+noodle" > 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream recv(): 0 (11: > Resource temporarily unavailable) > 2012/08/03 13:25:33 [debug] 1546#0: *823 http run request: > "/s/miracle+noodle?" > 2012/08/03 13:25:33 [debug] 1546#0: *823 http upstream check client, write > event:1, "/s/miracle+noodle" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream request: > "/s/miracle+noodle?" 
> 2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream process header > 2012/08/03 13:25:45 [debug] 1546#0: *823 malloc: 0000000001B8DC30:16384 > 2012/08/03 13:25:45 [debug] 1546#0: *823 recv: fd:149 4344 of 16296 > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy status 200 "200 OK" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Server: nginx" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Date: Fri, 03 > Aug 2012 16:24:26 GMT" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Content-Type: > text/html" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "Connection: > close" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header: "X-Powered-By: > PHP/5.3.15-1~dotdeb.0" > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header done > 2012/08/03 13:25:45 [debug] 1546#0: *823 HTTP/1.1 200 OKM > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http upstream request: -1 > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http proxy request > 2012/08/03 13:25:45 [debug] 1546#0: *823 free rr peer 2 0 > 2012/08/03 13:25:45 [debug] 1546#0: *823 close http upstream connection: > 149 > 2012/08/03 13:25:45 [debug] 1546#0: *823 event timer del: 149: > 1344011252972 > 2012/08/03 13:25:45 [debug] 1546#0: *823 http upstream cache fd: 0 > 2012/08/03 13:25:45 [debug] 1546#0: *823 http file cache free > 2012/08/03 13:25:45 [debug] 1546#0: *823 http finalize request: -1, > "/s/miracle+noodle?" 
1
> 2012/08/03 13:25:45 [debug] 1546#0: *823 http close request
> 2012/08/03 13:25:45 [debug] 1546#0: *823 http log handler
> 2012/08/03 13:25:45 [debug] 1546#0: *823 run cleanup: 0000000001A3DC40
> 2012/08/03 13:25:45 [debug] 1546#0: *823 run cleanup: 0000000001850E58
> 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B8DC30
> 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000018501C0, unused: 1
> 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001A3D890, unused:
> 757
> 2012/08/03 13:25:45 [debug] 1546#0: *823 close http connection: 148
> 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000019B7C20
> 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B2BEF0
> 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 0000000001B3DC00, unused: 8
> 2012/08/03 13:25:45 [debug] 1546#0: *823 free: 00000000019F55B0, unused:
> 112
>
> The above debug log on the upstream seems to say that the backend closed the
> connection after the headers, am I right?
>
> However, the backend debug log of another affected request, which is always
> logged as returning the same amount of bytes (which is pretty weird), shows a
> different story..
> After many of...
> (...)
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe recv chain: -2 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 > 00000000010BF1F0, pos 00000000010BF1F0, size: 80 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe write downstream: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe read upstream: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0 > 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0 > 00000000010BF1F0, pos 00000000010BF1F0, size: 80 file: 0, size: 0 > 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1 > (...) 
>
> Suddenly this comes:
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe recv chain: -2
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0
> 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0
> 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0
> 00000000010BF1F0, pos 00000000010BF1F0, size: 49232 file: 0, size: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe write downstream: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe read upstream: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0
> 000000000109F1E0, pos 00000000010A2FD0, size: 49752 file: 0, size: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf busy s:1 t:1 f:0
> 000000000109F1E0, pos 00000000010AF230, size: 65456 file: 0, size: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe buf free s:0 t:1 f:0
> 00000000010BF1F0, pos 00000000010BF1F0, size: 49232 file: 0, size: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 pipe length: -1
> 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer: 13, old:
> 1344013157360, new: 1344013157362
> 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer: 12, old:
> 1344013037361, new: 1344013037362
> 2012/08/03 13:56:17 [debug] 1519#0: *221 post event 000000000106B360
> 2012/08/03 13:56:17 [debug] 1519#0: *221 delete posted event
> 000000000106B360
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http run request:
> "/s/new+balance+1500?"
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream check client, write
> event:0, "/s/new+balance+1500"
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream recv(): 0 (11:
> Resource temporarily unavailable)
> 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http upstream request:
> 499
> 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http fastcgi request
> 2012/08/03 13:56:17 [debug] 1519#0: *221 free rr peer 1 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 close http upstream connection: 13
> 2012/08/03 13:56:17 [debug] 1519#0: *221 free: 0000000000FF7CD0, unused: 48
> 2012/08/03 13:56:17 [debug] 1519#0: *221 event timer del: 13: 1344013157360
> 2012/08/03 13:56:17 [debug] 1519#0: *221 reusable connection: 0
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http upstream temp fd: -1
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http output filter
> "/s/new+balance+1500?"
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http copy filter:
> "/s/new+balance+1500?"
> 2012/08/03 13:56:17 [debug] 1519#0: *221 image filter
> 2012/08/03 13:56:17 [debug] 1519#0: *221 xslt filter body
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http postpone filter
> "/s/new+balance+1500?" 0000000000FEDA18
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http copy filter: -1
> "/s/new+balance+1500?"
> 2012/08/03 13:56:17 [debug] 1519#0: *221 http finalize request: -1,
> "/s/new+balance+1500?" a:1, c:1
>
> So this means the backend felt the upstream closed the connection before
> it was allowed to return all the data:
> 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http upstream request:
> 499
>
> If that's the case, why does it save this to the logfile:
> [03/Aug/2012:13:56:17 -0300] "GET /s/new+balance+1500 HTTP/1.0" 200 15776
> "-" "AdsBot-Google (+http://www.google.com/adsbot.html)" - Cache: - 200
> 24.535
>
> I think the problem may be in the upstream, as this weird behavior
> happens with both backends and both use different nginx versions, but I'm
> really out of answers right now with this issue.
> > Any idea? Hint? Today, of 8K requests, 2779 returned 0 bytes on the
> upstream and 15776 bytes on the backend....
> >
> > Thank you!!
> >
> > Guzmán
>
> --
> Guzmán Brasó Núñez
> Senior Perl Developer / Sysadmin
> Web: http://guzman.braso.info
> Mobile: +598 98 674020

--
Guzmán Brasó Núñez
Senior Perl Developer / Sysadmin
Web: http://guzman.braso.info
Mobile: +598 98 674020
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lists at ruby-forum.com  Thu Aug  9 14:07:13 2012
From: lists at ruby-forum.com (Edward Stembler)
Date: Thu, 09 Aug 2012 16:07:13 +0200
Subject: Nginx + Thin config on Windows for Rails app
In-Reply-To: <598db8343ff8d27317c8f3f296e59cff@ruby-forum.com>
References: <598db8343ff8d27317c8f3f296e59cff@ruby-forum.com>
Message-ID: <6978bf54012d46d54f4f6a0ffcb9cccd@ruby-forum.com>

Thanks, that helps! I finally got it working.

I started Thin on port 8080, and removed the upstream block while only
referencing Thin in a location block:

location / {
  try_files $uri/index.html $uri.html $uri @mywebapp_thin;
  error_page 404 /404.html;
  error_page 422 /422.html;
  error_page 500 502 503 504 /500.html;
  error_page 403 /403.html;
}

location @mywebapp_thin {
  proxy_pass http://10.x.x.x:8080;
}

This seems to work.

Thanks for explaining things!

-- 
Posted via http://www.ruby-forum.com/.

From nginx at nginxuser.net  Thu Aug  9 14:13:53 2012
From: nginx at nginxuser.net (Nginx User)
Date: Thu, 9 Aug 2012 17:13:53 +0300
Subject: Nginx 1.2.3 Rewrite Issue
Message-ID: 

Weird problem with Nginx 1.2.3 rewrites.

See http://pastie.org/4433865 and note Line 30. The arg is part of the
main url.
Results in a 404 not found as the final url becomes
"/test/index.php?x2_SID=41efad7adffaa9d25967dc913919cbc0?x2_SID=41efad7adffaa9d25967dc913919cbc0"

Any clues?

From mdounin at mdounin.ru  Thu Aug  9 14:15:30 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 9 Aug 2012 18:15:30 +0400
Subject: Weird issue between backend and upstream (0 bytes)
In-Reply-To: 
References: 
Message-ID: <20120809141530.GO40452@mdounin.ru>

Hello!

On Thu, Aug 09, 2012 at 10:24:51AM -0300, Guzmán Brasó wrote:

> Just made myself a dump from the backend perspective and analyzed with
> wireshark what was in those bytes; I was pretty sure those bytes
> were some error from the app with the key to fix it.... but no, those 15776
> bytes are always 15776 but the first 15776 of every request, so every
> request is different.
>
> According to tcpdump everything flows OK and after one common ack from
> upstream while backend sends data, upstream immediately sends the FIN flag
> and immediately after that a group of RST arrive.

This might be a result of several first packets (e.g. initial
buffer size) being sent before the app crashes somewhere at the end of
response generation.  Have you tried to look what happens from
the app's point of view?

[...]

Maxim Dounin

From mdounin at mdounin.ru  Thu Aug  9 14:20:07 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 9 Aug 2012 18:20:07 +0400
Subject: Nginx 1.2.3 Rewrite Issue
In-Reply-To: 
References: 
Message-ID: <20120809142007.GP40452@mdounin.ru>

Hello!

On Thu, Aug 09, 2012 at 05:13:53PM +0300, Nginx User wrote:

> Weird problem with Nginx 1.2.3 rewrites.
>
> See http://pastie.org/4433865 and note Line 30. The arg is part of the
> main url.
Results in a 404 not found as the final url becomes > "/test/index.php?x2_SID=41efad7adffaa9d25967dc913919cbc0?x2_SID=41efad7adffaa9d25967dc913919cbc0" > > Any cluees It's not clear what you are trying to do, but if you just want to preserve arguments as is - there is no need to write anything extra, just rewrite ^/test/index\.html$ /test/index.php last; Original request arguments (if any) will be preserved as is. Maxim Dounin From nginx at nginxuser.net Thu Aug 9 14:26:40 2012 From: nginx at nginxuser.net (Nginx User) Date: Thu, 9 Aug 2012 17:26:40 +0300 Subject: Nginx 1.2.3 Rewrite Issue In-Reply-To: <20120809142007.GP40452@mdounin.ru> References: <20120809142007.GP40452@mdounin.ru> Message-ID: Ha! Got it. I was trying to preserve arguments indeed. I take it this is related to needing to add a "?" to the end of such to prevent the args being appended again. Right? On 9 August 2012 17:20, Maxim Dounin wrote: > Hello! > > On Thu, Aug 09, 2012 at 05:13:53PM +0300, Nginx User wrote: > >> Wierd problem with Nginx 1.2.3 rewrites. >> >> See http://pastie.org/4433865 and note Line 30. The arg is part of the >> main url. Results in a 404 not found as the final url becomes >> "/test/index.php?x2_SID=41efad7adffaa9d25967dc913919cbc0?x2_SID=41efad7adffaa9d25967dc913919cbc0" >> >> Any cluees > > It's not clear what you are trying to do, but if you just want to > preserve arguments as is - there is no need to write anything > extra, just > > rewrite ^/test/index\.html$ /test/index.php last; > > Original request arguments (if any) will be preserved as is. 
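For the archive, the argument-handling rules at play in this thread can be sketched in a minimal, hypothetical config (the /test paths match the example above; `foo=bar` is an invented argument for illustration):

```nginx
# Plain rewrite: nginx carries the original query string over unchanged,
# so /test/index.html?x2_SID=abc becomes /test/index.php?x2_SID=abc.
rewrite ^/test/index\.html$ /test/index.php last;

# If the replacement itself contains arguments, the original arguments
# are appended after them - which is how duplicated x2_SID parameters
# can appear:
# rewrite ^/test/index\.html$ /test/index.php?foo=bar last;

# A trailing "?" tells nginx not to append the original arguments:
# rewrite ^/test/index\.html$ /test/index.php?foo=bar? last;
```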
> > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Aug 9 14:29:03 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Aug 2012 18:29:03 +0400 Subject: Nginx 1.2.3 Rewrite Issue In-Reply-To: References: <20120809142007.GP40452@mdounin.ru> Message-ID: <20120809142902.GR40452@mdounin.ru> Hello! On Thu, Aug 09, 2012 at 05:26:40PM +0300, Nginx User wrote: > Ha! Got it. I was trying to preserve arguments indeed. > > I take it this is related to needing to add a "?" to the end of such > to prevent the args being appended again. Right? For sure. Maxim Dounin > > > > On 9 August 2012 17:20, Maxim Dounin wrote: > > Hello! > > > > On Thu, Aug 09, 2012 at 05:13:53PM +0300, Nginx User wrote: > > > >> Wierd problem with Nginx 1.2.3 rewrites. > >> > >> See http://pastie.org/4433865 and note Line 30. The arg is part of the > >> main url. Results in a 404 not found as the final url becomes > >> "/test/index.php?x2_SID=41efad7adffaa9d25967dc913919cbc0?x2_SID=41efad7adffaa9d25967dc913919cbc0" > >> > >> Any cluees > > > > It's not clear what you are trying to do, but if you just want to > > preserve arguments as is - there is no need to write anything > > extra, just > > > > rewrite ^/test/index\.html$ /test/index.php last; > > > > Original request arguments (if any) will be preserved as is. 
> >
> > Maxim Dounin
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From guzman.braso at gmail.com  Thu Aug  9 15:10:57 2012
From: guzman.braso at gmail.com (=?ISO-8859-1?B?R3V6beFuIEJyYXPz?=)
Date: Thu, 9 Aug 2012 12:10:57 -0300
Subject: Weird issue between backend and upstream (0 bytes)
In-Reply-To: <20120809141530.GO40452@mdounin.ru>
References: <20120809141530.GO40452@mdounin.ru>
Message-ID: 

Hi Maxim! Thanks for taking the time to check it out...

So the 499 seen by the php-fpm nginx here: it's not that the main nginx closed
the connection, but that fastcgi closed the connection?

All this time I thought it was nothing to do with the backend... there's no php
warning or error on the php-fpm side when this happens, will try to enable
debug mode in php-fpm and swim around the logs....

Thanks!

> 2012/08/03 13:56:17 [debug] 1519#0: *221 finalize http upstream request: 499

On Thu, Aug 9, 2012 at 11:15 AM, Maxim Dounin wrote:

> Hello!
>
> On Thu, Aug 09, 2012 at 10:24:51AM -0300, Guzmán Brasó wrote:
>
> > Just made myself a dump from the backend perspective and analyzed with
> > wireshark what was in those bytes; I was pretty sure those bytes
> > were some error from the app with the key to fix it.... but no, those
> 15776
> > bytes are always 15776 but the first 15776 of every request, so every
> > request is different.
> >
> > According to tcpdump everything flows OK and after one common ack from
> > upstream while backend sends data, upstream immediately sends the FIN
> flag
> > and immediately after that a group of RST arrive.
>
> This might be a result of several first packets (e.g. initial
> buffer size) being sent before the app crashes somewhere at the end of
> response generation.
Have you tried to look what happens from
> the app's point of view?
>
> [...]
>
> Maxim Dounin
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
Guzmán Brasó Núñez
Senior Perl Developer / Sysadmin
Web: http://guzman.braso.info
Mobile: +598 98 674020
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Thu Aug  9 18:26:52 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 9 Aug 2012 22:26:52 +0400
Subject: Weird issue between backend and upstream (0 bytes)
In-Reply-To: 
References: <20120809141530.GO40452@mdounin.ru>
Message-ID: <20120809182652.GZ40452@mdounin.ru>

Hello!

On Thu, Aug 09, 2012 at 12:10:57PM -0300, Guzmán Brasó wrote:

> Hi Maxim! Thanks for taking the time to check it out...
>
> So the 499 seen by the php-fpm nginx here: it's not that the main nginx closed
> the connection, but that fastcgi closed the connection?
>
> All this time I thought it was nothing to do with the backend... there's no php
> warning or error on the php-fpm side when this happens, will try to enable
> debug mode in php-fpm and swim around the logs....

Ah, sorry, it looks like I've misunderstood what you were trying
to say.  Partially because of strange usage of the "upstream" word -
from the frontend point of view it's more or less a synonym for
"backend", you probably mean to say "frontend" instead.

Looking through the debug log you've provided earlier suggests that
everything is actually ok.  Here is a quote from the frontend logs:

> 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header done
> 2012/08/03 13:25:45 [debug] 1546#0: *823 HTTP/1.1 200 OKM
> 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http upstream request: -1
> 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http proxy request

The interesting part is somewhere between "HTTP..." and "finalize
..." lines, but it's at a low level and was missed by the grep.
Most likely (assuming the request finalization is there for a reason) the
client closed the request and this resulted in a writev() failure.  This in
turn caused finalization of the request, and close of the connection to
the upstream server (aka backend).  There are 0 bytes sent as
nothing was actually sent.  The fact that you see many such log
lines suggests that you've probably disabled client abort checks
(proxy_ignore_client_abort).

On the backend you see this as "200" with some small number of bytes,
which is the first few bytes it was able to send before the connection
was closed by the frontend.  The 499 finalization is internal, and as
the response status sent to the client is already 200 - it doesn't
affect the access log.

Maxim Dounin

From nginx-forum at nginx.us  Thu Aug  9 18:37:36 2012
From: nginx-forum at nginx.us (eiji-gravion)
Date: Thu, 9 Aug 2012 14:37:36 -0400 (EDT)
Subject: proper setup for forward secrecy
Message-ID: <503eaaaa74d81aba4c28ef3d5059cbbe.NginxMailingListEnglish@forum.nginx.org>

Hello,

I was reading an article written by Adam Langley and he says:

"You also need to be aware of Session Tickets in order to implement
forward secrecy correctly. There are two ways to resume a TLS
connection: either the server chooses a random number and both sides
store the session information, or the server can encrypt the session
information with a secret, local key and send that to the client. The
former is called Session IDs and the latter is called Session Tickets.

But Session Tickets are transmitted over the wire and so the server's
Session Ticket encryption key is capable of decrypting past
connections. Most servers will generate a random Session Ticket key at
startup unless otherwise configured, but you should check."

So my question is, how does nginx handle this?
Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229538,229538#msg-229538

From rodrigofariat at yahoo.com.br  Thu Aug  9 18:46:15 2012
From: rodrigofariat at yahoo.com.br (rodrigo tavares)
Date: Thu, 9 Aug 2012 11:46:15 -0700 (PDT)
Subject: Permisson Denied 403
Message-ID: <1344537975.46203.YahooMailNeo@web124502.mail.ne1.yahoo.com>

Hello People!

I have started configuring DSPAM; its web interface is based on nginx.
When I go to http://10.26.7.249:8080/dspam/cgi-bin/admin.cgi, I get
403 Forbidden. The DSPAM configuration is below, in case anyone can
read it.

server {
        listen   8080; ## listen for ipv4
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6

        server_name  10.26.7.249;

        access_log  /var/log/nginx/localhost.access.log;

        location / {
                root   /var/www;
                index  index.html index.htm;
        }

        location /doc {
                root   /usr/share;
                autoindex on;
                allow 127.0.0.1;
                deny all;
        }

        location /images {
                root   /usr/share;
                autoindex on;
        }

        location /dspam/cgi-bin {
                auth_basic DSPAM;
                auth_basic_user_file /var/www/dspam/passwords;
                include /etc/nginx/fastcgi_params;
                index dspam.cgi;
                fastcgi_param  SCRIPT_FILENAME
                $document_root$fastcgi_script_name;
                fastcgi_param REMOTE_USER  $remote_user;
                if ($uri ~ \.cgi$ ){
                        fastcgi_pass  unix:/var/run/fcgiwrap.socket;
                }
        }

How can I make this work? Thanks.
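For what it's worth, a 403 on this kind of setup tends to have a few usual suspects: the CGI scripts not being executable for the fcgiwrap user (`chmod +x /var/www/dspam/cgi-bin/*.cgi`), the nginx worker being unable to read/write /var/run/fcgiwrap.socket, or the request falling outside the `if` branch so nothing handles it. An untested sketch of the same block written with a nested location instead of `if` (same paths as above):

```nginx
location /dspam/cgi-bin {
    auth_basic           DSPAM;
    auth_basic_user_file /var/www/dspam/passwords;
    index                dspam.cgi;

    # Nested location: only *.cgi requests are handed to fcgiwrap.
    location ~ \.cgi$ {
        include       /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param REMOTE_USER     $remote_user;
        fastcgi_pass  unix:/var/run/fcgiwrap.socket;
    }
}
```

Checking the nginx error log alongside `ls -l /var/run/fcgiwrap.socket` usually narrows down which of these it is.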
Rodrigo

---------------------------------------------------------------------------------------------------
http://wiki.linuxwall.info/doku.php/en:ressources:dossiers:dspam

See the steps:

----------------------------------------------------
Set the dspam user

#/etc/init.d/fcgiwrap
FCGI_USER=dspam
FCGI_GROUP=dspam

/etc/init.d/fcgiwrap restart
# chmod o+w /var/run/fcgiwrap.socket

-------------------------------------------------------------
Add these lines in /etc/nginx/sites-available/default

vim /etc/nginx/sites-available/default
[...]
location /dspam/cgi-bin {
    auth_basic "DSPAM";
    auth_basic_user_file /var/www/dspam/passwords;
    include /etc/nginx/fastcgi_params;
    index dspam.cgi;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param REMOTE_USER $remote_user;
    if ($uri ~ "\.cgi$"){
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }
}

# /etc/init.d/nginx restart

---------------------------------------------------------------------------------------------
Set a password for the user jean-kevin at debian.lab

htpasswd -c /var/www/dspam/passwords jean-kevin at debian.lab
New password:
Re-type new password:
Adding password for user jean-kevin at debian.lab

# cat /var/www/dspam/passwords
jean-kevin at debian.lab:H2CigqsDz1U4E

# chown dspam:www-data /var/www/dspam/passwords
# chmod o-rwx /var/www/dspam/password

-------------------------------------------------------------------------------------------------
Copy the interface to /var/www/dspam

cp -r ~/dspam-3.9.1-RC1/webui/* /var/www/dspam/
# chown dspam:www-data /var/www/dspam -R

--------------------------------------------------------------------------------------------------
#define variables for configure.pl

$CONFIG{'DSPAM_HOME'} = '/var/spool/dspam';
$CONFIG{'DSPAM_BIN'} = '/usr/bin';
[...]
$CONFIG{'WEB_ROOT'} = '/dspam/htdocs/';
[...]
$CONFIG{'LOCAL_DOMAIN'} = 'debian.lab';

-----------------------------------------------------------------------------------------------------
The interface provides an administration section. To have access to it,
you need to declare an admin in the file /var/www/dspam/cgi-bin/admins.

echo 'jean-kevin at debian.lab' >> /var/www/dspam/cgi-bin/admin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guzman.braso at gmail.com  Thu Aug  9 19:07:28 2012
From: guzman.braso at gmail.com (=?ISO-8859-1?B?R3V6beFuIEJyYXPz?=)
Date: Thu, 9 Aug 2012 16:07:28 -0300
Subject: Weird issue between backend and upstream (0 bytes)
In-Reply-To: <20120809182652.GZ40452@mdounin.ru>
References: <20120809141530.GO40452@mdounin.ru>
 <20120809182652.GZ40452@mdounin.ru>
Message-ID: 

Hi Maxim! Once again, thank you...

Exactly what I thought... but something doesn't make sense: the owner of
the site put some paid traffic on it and now the numbers are bigger than
before; it cannot be that hundreds of people abort the connection
exactly and precisely at the same byte. It's always the same byte, and it's
always random, though I've never been able to reproduce it myself. I just
sniffed others' traffic to see it.

Just checked and there's no mention of proxy_ignore_client_abort either on
the frontend (nginx load balancer) or the backend (nginx proxy for php-fpm).
This is really weird....

I think I still have the original debug logs from where that grep came
from, will check it out as soon as I can ssh into it.

Thank you once again!!

Guzmán

On Thu, Aug 9, 2012 at 3:26 PM, Maxim Dounin wrote:

> Hello!
>
> On Thu, Aug 09, 2012 at 12:10:57PM -0300, Guzmán Brasó wrote:
>
> > Hi Maxim! Thanks for taking the time to check it out...
> >
> > So the 499 seen by the php-fpm nginx here: it's not that the main nginx closed
> > the connection, but that fastcgi closed the connection?
> >
> > All this time I thought it was nothing to do with the backend...
there's no php
> > warning or error on the php-fpm side when this happens, will try to
> enable
> > debug mode in php-fpm and swim around the logs....
>
> Ah, sorry, it looks like I've misunderstood what you were trying
> to say.  Partially because of strange usage of the "upstream" word -
> from the frontend point of view it's more or less a synonym for
> "backend", you probably mean to say "frontend" instead.
>
> Looking through the debug log you've provided earlier suggests that
> everything is actually ok.  Here is a quote from the frontend logs:
>
> > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header done
> > 2012/08/03 13:25:45 [debug] 1546#0: *823 HTTP/1.1 200 OKM
> > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http upstream request: -1
> > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http proxy request
>
> The interesting part is somewhere between "HTTP..." and "finalize
> ..." lines, but it's at a low level and was missed by the grep.  Most
> likely (assuming the request finalization is there for a reason) the
> client closed the request and this resulted in a writev() failure.  This in
> turn caused finalization of the request, and close of the connection to
> the upstream server (aka backend).  There are 0 bytes sent as
> nothing was actually sent.  The fact that you see many such log
> lines suggests that you've probably disabled client abort checks
> (proxy_ignore_client_abort).
>
> On the backend you see this as "200" with some small number of bytes,
> which is the first few bytes it was able to send before the connection
> was closed by the frontend.  The 499 finalization is internal, and as
> the response status sent to the client is already 200 - it doesn't
> affect the access log.
>
> Maxim Dounin
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

--
Guzmán Brasó
Núñez
Senior Perl Developer / Sysadmin
Web: http://guzman.braso.info
Mobile: +598 98 674020
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wrr at mixedbit.org  Thu Aug  9 19:10:46 2012
From: wrr at mixedbit.org (Jan Wrobel)
Date: Thu, 9 Aug 2012 21:10:46 +0200
Subject: wwwhisper project announcement
Message-ID: 

Hello,

wwwhisper is a generic access control layer for the nginx HTTP server.
It utilizes Maxim Dounin's auth_request module.

wwwhisper allows granting access to HTTP resources based on users'
email addresses. Mozilla Persona is used to prove in a secure way that
a visitor owns an allowed email; no site-specific password is created.

The access control layer is application independent. It can be used
for anything served by nginx - dynamic content, static files, content
generated by back-end servers. No support from applications or
back-ends is needed.

More details can be found here: https://github.com/wrr/wwwhisper

This is the first release. Any feedback, contributions or questions
are very welcome!

Thanks,
Jan

From guzman.braso at gmail.com  Thu Aug  9 19:12:20 2012
From: guzman.braso at gmail.com (=?ISO-8859-1?B?R3V6beFuIEJyYXPz?=)
Date: Thu, 9 Aug 2012 16:12:20 -0300
Subject: Weird issue between backend and upstream (0 bytes)
In-Reply-To: 
References: <20120809141530.GO40452@mdounin.ru>
 <20120809182652.GZ40452@mdounin.ru>
Message-ID: 

By the way, it's getting worse with time....

Total php requests with status 200 since today 6:00 AM: 3708. Of those,
2292 returned status 200 and received a RST from the nginx load balancer,
so more than half of them are getting caught by this issue...

Will try to get the original logs to see if there's some light there...

Thank you!

Guzmán

On Thu, Aug 9, 2012 at 4:07 PM, Guzmán Brasó wrote:

> Hi Maxim! Once again, thank you...
>
> Exactly what I thought...
but something doesn't make sense: the owner of
> the site put some paid traffic on it and now the numbers are bigger than
> before; it cannot be that hundreds of people abort the connection
> exactly and precisely at the same byte. It's always the same byte, and it's
> always random, though I've never been able to reproduce it myself. I just
> sniffed others' traffic to see it.
>
> Just checked and there's no mention of proxy_ignore_client_abort either on
> the frontend (nginx load balancer) or the backend (nginx proxy for
> php-fpm). This is really weird....
>
> I think I still have the original debug logs from where that grep came
> from, will check it out as soon as I can ssh into it.
>
> Thank you once again!!
>
> Guzmán
>
>
>
>
> On Thu, Aug 9, 2012 at 3:26 PM, Maxim Dounin wrote:
>
>> Hello!
>>
>> On Thu, Aug 09, 2012 at 12:10:57PM -0300, Guzmán Brasó wrote:
>>
>> > Hi Maxim! Thanks for taking the time to check it out...
>> >
>> > So the 499 seen by the php-fpm nginx here: it's not that the main nginx
>> closed
>> > the connection, but that fastcgi closed the connection?
>> >
>> > All this time I thought it was nothing to do with the backend... there's no
>> php
>> > warning or error on the php-fpm side when this happens, will try to
>> enable
>> > debug mode in php-fpm and swim around the logs....
>>
>> Ah, sorry, it looks like I've misunderstood what you were trying
>> to say.  Partially because of strange usage of the "upstream" word -
>> from the frontend point of view it's more or less a synonym for
>> "backend", you probably mean to say "frontend" instead.
>>
>> Looking through the debug log you've provided earlier suggests that
>> everything is actually ok.
Here is a quote from the frontend logs:
>>
>> > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header done
>> > 2012/08/03 13:25:45 [debug] 1546#0: *823 HTTP/1.1 200 OKM
>> > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http upstream
>> request: -1
>> > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http proxy request
>>
>> The interesting part is somewhere between "HTTP..." and "finalize
>> ..." lines, but it's at a low level and was missed by the grep.  Most
>> likely (assuming the request finalization is there for a reason) the
>> client closed the request and this resulted in a writev() failure.  This
>> in turn caused finalization of the request, and close of the connection
>> to the upstream server (aka backend).  There are 0 bytes sent as
>> nothing was actually sent.  The fact that you see many such log
>> lines suggests that you've probably disabled client abort checks
>> (proxy_ignore_client_abort).
>>
>> On the backend you see this as "200" with some small number of bytes,
>> which is the first few bytes it was able to send before the connection
>> was closed by the frontend.  The 499 finalization is internal, and as
>> the response status sent to the client is already 200 - it doesn't
>> affect the access log.
>>
>> Maxim Dounin
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>

> --
> Guzmán Brasó Núñez
> Senior Perl Developer / Sysadmin
> Web: http://guzman.braso.info
> Mobile: +598 98 674020

--
Guzmán Brasó Núñez
Senior Perl Developer / Sysadmin
Web: http://guzman.braso.info
Mobile: +598 98 674020
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lists at ruby-forum.com  Thu Aug  9 19:14:40 2012
From: lists at ruby-forum.com (Swapnil P.)
Date: Thu, 09 Aug 2012 21:14:40 +0200
Subject: How to give location of WebDAV lock database in nginx.conf?
Message-ID: 

How to write the below code in nginx.conf which is currently in
httpd.conf?
# Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb I have written below: if (mod_dav_fs.c) { # Location of the WebDAV lock database. location ~ /var/lib/dav/lockdb { DAVLockDB } } Not sure how to write DAVLockDB inside {}. Thanks, -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Aug 9 21:28:06 2012 From: nginx-forum at nginx.us (abstein2) Date: Thu, 9 Aug 2012 17:28:06 -0400 (EDT) Subject: Gzipping proxied content after using subs_filter fails Message-ID: As the title says, I'm having an issue with gzipping proxied web pages after using subs_filter. It doesn't always happen, but it looks like whenever a page exceeds the size of one of my proxy_buffers I get an error in the error log saying: [alert] 18544#0: *490084 deflate() failed: 2, -5 while sending to client, client: xxx.xxx.xxx.xxx, server: www.test.com, request: "GET / HTTP/1.1", upstream: "http://xxx.xxx.xxx.xxx:80/", host: "www.test.com", referrer: "http://xxx.xxx.xxx.xxx/" The page, without any compression, is roughly 80k. When I boost my proxy_buffers to 100k the page loads fine. When it is below the 80k of the page, I only receive a partial version of the page (roughly 5k, whether proxy_buffer_size is set to 4k or 8k). Turning off gzip serves the entire page as does disabling the subs_filter commands. Is there any workaround/fix for this outside of just boosting the proxy_buffers value so that it exceeds every possible text/html page I'm serving? 
In case they're helpful, my normal proxy and gzip settings are:

proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_504;
proxy_cache_valid 60m;
proxy_redirect off;
proxy_connect_timeout 120;
proxy_send_timeout 120;
proxy_read_timeout 120;
proxy_buffers 8 16k;
proxy_buffer_size 16k;
proxy_busy_buffers_size 64k;
proxy_cache_key $host$request_uri$is_args$args;

gzip on;
gzip_http_version 1.1;
gzip_vary on;
gzip_comp_level 8;
gzip_proxied any;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_buffers 16 8k;
gzip_disable "MSIE [1-6].(?!.*SV1)";

Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229545,229545#msg-229545 From nginx-forum at nginx.us Thu Aug 9 22:42:30 2012 From: nginx-forum at nginx.us (klml) Date: Thu, 9 Aug 2012 18:42:30 -0400 (EDT) Subject: trigger a git commit for static site generator In-Reply-To: References: Message-ID: <47398a8701e031477de8bf643ea156fb.NginxMailingListEnglish@forum.nginx.org> Hi Justin, thank you for this answer > ... (and can be dangerous IMO): > x="`tail -f /usr/local/nginx/logs/access.log |grep PUT |sed 's/["]//g' |awk > .... using tail on logs for this is very freaky;) but it's a conceivable approach. To get only PUT requests I used an extra PUT log in the nginx config: if ($request_method = PUT) { access_log /var/log/nginx/accessPUT.log; } so I don't need to grep. Unfortunately I don't "get over" the forced tail x="`tail -f /var/log/nginx/accessPUT.log`"; c="PUT" //to start an endless loop while c="PUT"; // do echo $x; #~ git commit -m "Hello Igor"; done gave me nothing, I expected the lines from the log.
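The loop above prints nothing because `x="`tail -f ...`"` is a command substitution: the shell waits for `tail -f` to exit (which it never does) before the assignment completes, and `while c="PUT"` is an assignment that always succeeds rather than a comparison. Piping tail straight into a read loop avoids both problems; a minimal sketch, with the repo path and the git step as illustrative assumptions (echo stands in for the commit so the sketch is self-contained):

```shell
# Process each new line of the PUT log as it arrives. The git step is
# shown as a comment; echo stands in for it in this sketch.
commit_each_put() {
    while IFS= read -r line; do
        # a real setup might do:
        #   (cd /srv/site && git add -A && git commit -m "PUT: $line")
        echo "would commit: $line"
    done
}

# intended usage (runs forever, one iteration per logged PUT):
# tail -F /var/log/nginx/accessPUT.log | commit_each_put
```

`tail -F` (capital F) reopens the file after log rotation, which matters once logrotate touches the access log.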
At the moment I use, not-out-of-the-box;(, inotifywait but this works ;) But I will think more about tail http://stackoverflow.com/questions/420143/making-git-auto-commit Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229477,229546#msg-229546 From dzhou at netdna.com Fri Aug 10 02:02:13 2012 From: dzhou at netdna.com (Don Zhou) Date: Thu, 9 Aug 2012 19:02:13 -0700 Subject: how nginx cache manager delete file cached on disk Message-ID: We used nginx proxy to cache lots of static files. Our monitoring tool showed that the write I/O on the server spikes once a day at 4am GMT and the server will slow down significantly because of the high IO. I couldn't find any cron that could cause this. I am guessing it might be the nginx cache manager clearing the disk cache once a day. We have a cache setting like: proxy_cache_path /cache levels=2:2:2 keys_zone=cache:2000m inactive=1d max_size=400000m; Any idea? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Aug 10 09:03:10 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Aug 2012 13:03:10 +0400 Subject: how nginx cache manager delete file cached on disk In-Reply-To: References: Message-ID: <20120810090310.GA40452@mdounin.ru> Hello! On Thu, Aug 09, 2012 at 07:02:13PM -0700, Don Zhou wrote: > We used nginx proxy to cache lots of static files. Our monitoring tool > showed that the write I/O on the server spikes once a day at 4am GMT > and the server will slow down significantly because of the high IO. I couldn't > find any cron that could cause this. I am guessing it might be the nginx > cache manager clearing the disk cache once a day. No, it's not the nginx cache manager. It removes files as soon as they become inactive and/or max_size is reached. > We have a cache setting like: proxy_cache_path /cache levels=2:2:2 > keys_zone=cache:2000m inactive=1d max_size=400000m; > > Any idea? Thanks! Something like "top -mio" / iotop might be helpful.
The timing suggests it's most likely some daily periodic task. Maxim Dounin From mdounin at mdounin.ru Fri Aug 10 09:07:06 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Aug 2012 13:07:06 +0400 Subject: proper setup for forward secrecy In-Reply-To: <503eaaaa74d81aba4c28ef3d5059cbbe.NginxMailingListEnglish@forum.nginx.org> References: <503eaaaa74d81aba4c28ef3d5059cbbe.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120810090706.GB40452@mdounin.ru> Hello! On Thu, Aug 09, 2012 at 02:37:36PM -0400, eiji-gravion wrote: > Hello, > > I was reading an article written by Adam Langley and he says: > > "You also need to be aware of Session Tickets in order to implement > forward secrecy correctly. There are two ways to resume a TLS > connection: either the server chooses a random number and both sides > store the session information, or the server can encrypt the session > information with a secret, local key and send that to the client. The > former is called Session IDs and the latter is called Session Tickets. > > But Session Tickets are transmitted over the wire and so the server's > Session Ticket encryption key is capable of decrypting past connections. > Most servers will generate a random Session Ticket key at startup unless > otherwise configured, but you should check." > > So my question is, how does nginx handle this? As per the OpenSSL default - as long as session tickets are supported by the OpenSSL version you use, a random key for session tickets will be generated automatically on nginx startup. Maxim Dounin From nginx-forum at nginx.us Fri Aug 10 09:28:32 2012 From: nginx-forum at nginx.us (ffeldhaus) Date: Fri, 10 Aug 2012 05:28:32 -0400 (EDT) Subject: Does Nginx allow to specify multiple root certificates for client certificate verification?
In-Reply-To: <20120731164820.GZ40452@mdounin.ru> References: <20120731164820.GZ40452@mdounin.ru> Message-ID: <7de28e2cfcde47c58d7ff8be83c907cf.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Jul 31, 2012 at 11:21:26AM -0400, > ffeldhaus wrote: > > > Hi, > > > > Maxim Dounin Wrote: > > > > > > Hello! > > > > > > On Tue, Jul 31, 2012 at 05:43:31AM -0400, > > > ffeldhaus wrote: > > > > > > > For a project as part of the European Grid > > > Infrastructure (EGI) we need > > > > SSL client certificate verification for a > > > service running on nginx. As > > > > there are several root CAs allowed within > EGI, > > > we need nginx to check > > > > them all during client certificate > validation. > > > In the documentation of > > > > nginx I could only find the parameter > > > ssl_client_certificate which > > > > allows to specify just one file containing a > > > root certificate. > > > > > > > > Is there a way to specify more than one root > CA > > > for client certificate > > > > verification in nginx or do I have to use > Apache > > > for this? > > > > > > Yes. Just put multiple root CA certificates > into > > > a file specified > > > in the ssl_client_certificate directive. > > > > > > Note the docs explicitly say "certificates" > > > (plural), see > > > http://nginx.org/r/ssl_client_certificate. > > > > I had hoped there would be another way. Putting > the currently 105 > > certificates in one file may work, but the > problem is, that the > > certificates may change and with 105 CA > certificates at the moment the > > chance that a certificate is updated/revoked is > not negligible anymore. > > If CA certificate is updated/revoked it probably > needs some double > checking by a human anyway. Updating the file and > asking nginx to > reload it's config isn't going to be a big deal > then. I don't agree. 
For most Linux distributions you get a list of CA certificates automatically installed, and they are often updated transparently to the administrator. For EGI this is even more true, as there is a secure, certified way in which certificates are created / updated / removed by a daily cron job. Again, this is transparent to the user. > > I could write a cron job to update the single > certificate file after > > each update, but it would be much easier if > nginx would support multiple > > CA certificate files out of the box. For Apache > there is a directive > > called SSLCACertificatePath to do just this. Do > you think this could be > > a feature worth implementing in Nginx? If so, > how could I help? > > "Certificate file" vs "certificate path" > difference isn't about > running something after updates of certificates or > not (in both > cases you have to update something, either cat to > a single file or > the c_rehash script to create symbolic links in > case of CApath). > The difference is about certificates in memory vs. > certificates on > disk, and the latter implies syscalls and disk > access on each > certificate check. > > As nginx is designed to work under high loads, > with many requests > (and handshakes) per second, it uses CAfile > variant. And as nginx > configuration reload is seamless, it's unlikely > the CApath variant > will add any extra value. I disagree. The fastest way to do a lookup is to use the hash-based filename lookup. If there are lots of certificates in one file, the lookup will take a lot longer than the creation of a hash for the CA to be looked up and then the lookup using the hash-based filenames of the CA certificates. It would be interesting to see why the Apache guys are using the hash-based CA lookup, and also a profiling of file vs. directory based CA lookup. If I find the time, I will measure the response time for Apache using both methods and compare them to Nginx.
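The single-CAfile approach discussed in this thread is easy to automate after the daily CA update; a minimal sketch (directory and file names are illustrative assumptions, not from the thread):

```shell
# Concatenate every CA certificate in a directory into the one file
# that nginx's ssl_client_certificate directive reads.
bundle_cas() {
    # $1 = directory containing *.pem CA certs, $2 = output bundle file
    cat "$1"/*.pem > "$2"
}

# after the daily CA update, rebuild the bundle and reload nginx:
# bundle_cas /etc/grid-security/certificates /etc/nginx/egi-ca-bundle.pem
# nginx -s reload
```

Since nginx reloads its configuration without dropping connections, running this from the same cron job that refreshes the CA directory keeps the bundle current with no downtime.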
Cheers, Florian Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229129,229555#msg-229555 From mdounin at mdounin.ru Fri Aug 10 09:28:52 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Aug 2012 13:28:52 +0400 Subject: Weird issue between backend and upstream (0 bytes) In-Reply-To: References: <20120809141530.GO40452@mdounin.ru> <20120809182652.GZ40452@mdounin.ru> Message-ID: <20120810092852.GC40452@mdounin.ru> Hello! On Thu, Aug 09, 2012 at 04:07:28PM -0300, Guzmán Brasó wrote: > Hi Maxim!, once again thank you... > > Exactly what I thought.. but something doesn't make sense, the owner of the > site put some paid traffic on it and now the numbers are bigger than before; > it could not be that hundreds of people abort the connection exactly and > precisely at the same byte. It's always the same byte, and it's always > random, though I've never been able to reproduce it myself. Just sniffed > others' traffic to see it. As the frontend's nginx detects a connection abort only when it tries to send the first bytes of a response, and from the logs you've provided it looks like response generation takes more than 10 seconds - it's highly unlikely that the abort really happens just before the first byte. Most likely it happens during the 10 seconds in question. > Just checked and there's no mention of proxy_ignore_client_abort either on > frontend (nginx load balancer) or backend (nginx proxy for php-fpm). This > is really weird.... Not really. There are a number of cases which might prevent nginx from detecting a connection abort by a client, even if it's not specifically configured to ignore connection aborts. Well-known cases include connection close with a pending pipelined request in a connection, or clean connection close of an ssl connection (with a shutdown alert sent by a client). In such cases nginx will be able to detect the connection close only if the event method might provide some hint about a pending connection close (kqueue only as of now).
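A minimal illustration of where the directive under discussion sits (the surrounding proxy context is an assumption; the directive name and its default are real):

```nginx
location / {
    proxy_pass http://backend;

    # off (the default): nginx checks for client aborts and finalizes
    # the request with the internal 499 code when the client goes away;
    # "on" makes nginx keep reading from the backend regardless, which
    # matches the many-0-byte-responses pattern described above.
    proxy_ignore_client_abort off;
}
```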
Maxim Dounin > > I think I still have the original debug logs from where that grep came > from, will check it out as soon as I can ssh into it. > > Thank you once again!! > > Guzmán > > > > > On Thu, Aug 9, 2012 at 3:26 PM, Maxim Dounin wrote: > > Hello! > > On Thu, Aug 09, 2012 at 12:10:57PM -0300, Guzmán Brasó wrote: > > > Hi Maxim! Thanks for taking time to check it out... > > > > > > So the 499 seen by the php-fpm nginx here It's not that main nginx closed > > > the connection but that fastcgi closed the connection? > > > > > > All the time thought was nothing to do with the backend... there's no php > > > warning or error on the php-fpm side when this happens, will try to > > enable > > > debug mode in php-fpm and swim around the logs.... > > > > Ah, sorry, it looks like I've misunderstood what you were trying > > to say. Partially because of strange usage of the "upstream" word - > > from frontend point of view it's more or less synonym for > > "backend", you probably mean to say "frontend" instead. > > > > Looking through the debug log you've provided earlier suggests that > > everything is actually ok. Here is a quote from frontend logs: > > > > > 2012/08/03 13:25:45 [debug] 1546#0: *823 http proxy header done > > > 2012/08/03 13:25:45 [debug] 1546#0: *823 HTTP/1.1 200 OKM > > > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http upstream request: > > -1 > > > 2012/08/03 13:25:45 [debug] 1546#0: *823 finalize http proxy request > > > > The interesting part is somewhere between "HTTP..." and "finalize > > ..." lines, but it's at low level and was missed from grep. Most > > likely (assuming request finalization is for a reason) client closed > > request and this resulted in writev() failure. This in turn > > caused finalization of a request, and close of the connection to > > the upstream server (aka backend). There are 0 bytes sent as > > nothing was actually sent.
The fact that you see many such log > > lines suggests that you've probably disabled client abort checks > > (proxy_ignore_client_abort). > > > > On a backend you see this as "200" with some small number of bytes > > which is some first bytes it was able to send before connection > > was closed by the frontend. The 499 finalization is internal, and > > as the response status sent to client is already 200 - it doesn't > > affect access log. > > > > Maxim Dounin > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > Guzmán Brasó Núñez > Senior Perl Developer / Sysadmin > Web: http://guzman.braso.info > Mobile: +598 98 674020 > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Aug 10 09:42:25 2012 From: nginx-forum at nginx.us (eiji-gravion) Date: Fri, 10 Aug 2012 05:42:25 -0400 (EDT) Subject: proper setup for forward secrecy In-Reply-To: <20120810090706.GB40452@mdounin.ru> References: <20120810090706.GB40452@mdounin.ru> Message-ID: Hello, Is there a way to frequently change random keys without having to restart nginx each time? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229538,229557#msg-229557 From mdounin at mdounin.ru Fri Aug 10 09:56:35 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Aug 2012 13:56:35 +0400 Subject: Gzipping proxied content after using subs_filter fails In-Reply-To: References: Message-ID: <20120810095635.GD40452@mdounin.ru> Hello! On Thu, Aug 09, 2012 at 05:28:06PM -0400, abstein2 wrote: > As the title says, I'm having an issue with gzipping proxied web pages > after using subs_filter.
It doesn't always happen, but it looks like > whenever a page exceeds the size of one of my proxy_buffers I get an > error in the error log saying: > > [alert] 18544#0: *490084 deflate() failed: 2, -5 while sending to > client, client: xxx.xxx.xxx.xxx, server: www.test.com, request: "GET / > HTTP/1.1", upstream: "http://xxx.xxx.xxx.xxx:80/", host: "www.test.com", > referrer: "http://xxx.xxx.xxx.xxx/" Here nginx did deflate with Z_SYNC_FLUSH (2) and deflate() returned error Z_BUF_ERROR (-5, no progress possible). The error message suggests you are using an old nginx, and subs filter somehow triggers the problem as fixed here: http://trac.nginx.org/nginx/changeset/4469/nginx Changes with nginx 1.1.15: *) Bugfix: calling $r->flush() multiple times might cause errors in the ngx_http_gzip_filter_module. It's not clear why subs filter triggers flush (and does so twice in a row), most likely it's a bug in subs filter (you may want to ask its author), but upgrading nginx to 1.1.15+ should resolve the problem. [...] Maxim Dounin From nginx-forum at nginx.us Fri Aug 10 10:03:14 2012 From: nginx-forum at nginx.us (darkweaver871) Date: Fri, 10 Aug 2012 06:03:14 -0400 (EDT) Subject: [Nginx] perl module get an empty body Message-ID: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> Hi all, I'm using Nginx as a reverse proxy and I use the Nginx perl module to inspect my requests and redirect to a different upstream. It works well, but some request bodies are empty and others just make nginx time out.
Here is my nginx.conf:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    perl_modules /opt/test/perl/;
    perl_require switcher_test.pm;
    perl_set $test switcher_test::handler;
    log_format cestate '$remote_addr,$http_x_forwarded_for - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time # $test';
    access_log /var/log/nginx/cestate.log cestate;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

My proxy.conf:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
client_max_body_size 21474836470;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 3600;
proxy_buffers 32 8k;
proxy_max_temp_file_size 0;

My script:

#!/usr/bin/perl

package switcher_test;
use nginx;

sub handler {
    my $r = shift;
    if ($r->has_request_body(sub{})) { return 3 };
    return 4;
}

1;
__END__

My vhost then tests the value of $test and proxy_passes to the correct upstream.
When Nginx times out, I have the following log:

2012/08/10 11:56:15 [alert] 30570#0: *1 zero size buf in output t:1 r:0 f:0 0000000001481728 0000000001481728-0000000001481728 0000000000000000 0-0 while sending request to upstream, client: xxx.xxx.xxx.xxx, server: www.test.com, request: "POST /test.php HTTP/1.1", upstream: "http://xxx.xxx.xxx.xxx:80/test.php", host: "www.test.com"
2012/08/10 11:56:41 [info] 30580#0: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:63

Does anyone have any idea what's going on? Thanks. Rémi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229560,229560#msg-229560 From nginx-forum at nginx.us Fri Aug 10 10:04:28 2012 From: nginx-forum at nginx.us (darkweaver871) Date: Fri, 10 Aug 2012 06:04:28 -0400 (EDT) Subject: [Nginx] perl module get an empty body In-Reply-To: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8137cd3a4ae0d523a7c71eaadd56aba9.NginxMailingListEnglish@forum.nginx.org> I forgot to mention.
I'm using the following nginx version: # nginx -vV nginx version: nginx/1.2.1 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-auth-pam --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/chunkin-nginx-module --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/headers-more-nginx-module --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-development-kit --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-echo --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-http-push --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-lua --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-upload-module --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-upload-progress --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-upstream-fair --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-dav-ext-module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229560,229561#msg-229561 From nginx-forum at nginx.us Fri Aug 10 11:03:27 
2012 From: nginx-forum at nginx.us (darkweaver871) Date: Fri, 10 Aug 2012 07:03:27 -0400 (EDT) Subject: [Nginx] perl module get an empty body In-Reply-To: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> Message-ID: It can be reproduced by commenting/uncommenting the append in the following code:

#!/usr/bin/python
# -*- coding: utf-8 -*-

import urllib
import urllib2

url = 'https://www.test.com/test.php'
params = { 'name': 'Luke', 'location': 'Tatooine' }

pList = []
for paramKey, paramValue in params.items():
    pList.append(urllib.quote_plus(unicode('%s=%s' % (paramKey, paramValue))))
    #pList.append('%s=%s' % (paramKey, urllib.quote_plus(unicode(paramValue))))

post_params = '&'.join(pList)
req = urllib2.Request(url, data=post_params)
f = urllib2.urlopen(req)
print f.read()

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229560,229566#msg-229566 From mdounin at mdounin.ru Fri Aug 10 11:12:26 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Aug 2012 15:12:26 +0400 Subject: [Nginx] perl module get an empty body In-Reply-To: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120810111226.GE40452@mdounin.ru> Hello! On Fri, Aug 10, 2012 at 06:03:14AM -0400, darkweaver871 wrote: > Hi all, > > I'm using Nginx as a reverse proxy and I use the Nginx perl module to > inspect my requests and redirect to a different upstream. > It works well, but some request bodies are empty and others just make > nginx time out. [...] > > perl_modules /opt/test/perl/; > perl_require switcher_test.pm; > perl_set $test switcher_test::handler; [...]
> sub handler { > my $r = shift; > if($r->has_request_body(sub{})) { return 3 }; This doesn't work, as $r->has_request_body() currently assumes it's called from a perl content handler, not from a perl variable handler, and does bad things if called from a variable handler. You may try to test $http_content_length instead. Maxim Dounin From ar at xlrs.de Fri Aug 10 11:36:46 2012 From: ar at xlrs.de (Axel) Date: Fri, 10 Aug 2012 13:36:46 +0200 Subject: HttpUpstreamModule: Need more detailed Information Message-ID: <5024F24E.6020409@xlrs.de> Hello all, I'm new to nginx and first of all I have to say it's a great piece of software. I need some more detailed information about nginx behaviour regarding the HttpUpstreamModule and I hope you can give me some hints and links where I can learn more about it. I set up nginx/1.2.2 as a reverse proxy in front of a bunch of apache servers(*) which are located in different housing locations. Now I have some questions and I can't find any docs or wiki pages with detailed answers. - how does nginx detect if one or more upstream servers have disappeared? - what kind of mechanism does nginx use? icmp or something else? - how often does nginx request the status of upstream servers?
- how can I monitor the status of upstream servers as seen by nginx (I monitor the status of running apache processes on the upstream servers separately) It's a really simple setup atm and I have not activated any other module:

upstream frontend {
    server frontend1:80 weight=100 max_fails=2 fail_timeout=10s;
    server frontend2:80 weight=100 max_fails=2 fail_timeout=10s;
    server frontend3:80 weight=100 max_fails=2 fail_timeout=10s;
    server frontend4:80 weight=100 max_fails=2 fail_timeout=10s;
    server frontend5:80 weight=100 max_fails=2 fail_timeout=10s;
}

location / {
    proxy_next_upstream http_502 http_503 error;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://frontend/;
    proxy_redirect default;
}

I googled around and found this thread http://forum.nginx.org/read.php?29,108477 but got no answer. Any help is appreciated Regards, Axel From nginx-forum at nginx.us Fri Aug 10 13:44:14 2012 From: nginx-forum at nginx.us (darkweaver871) Date: Fri, 10 Aug 2012 09:44:14 -0400 (EDT) Subject: [Nginx] perl module get an empty body In-Reply-To: References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <29f54fc6987000bfb2c360b82d330567.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thanks for your reply. I don't understand what you mean by "content handler" as I'm a noob in nginx perl usage (and I'm not sure it was clear enough for you). I tested http_content_length but it's empty also. I really need to have the calling request body (I deleted the major part of my code, but I parse some stuff in it).
Let's say I want to return 3 to perl_set if the request body matches "test" and 4 otherwise (and log it); I would do:

#!/usr/bin/perl

package switcher_test;
use nginx;
use Sys::Syslog;

my $facility = 'local2';

sub handler {
    my $r = shift;

    openlog('tester', 'ndelay', $facility);

    $r->has_request_body(sub{});
    if ($r->request_body =~ /.*test.*/) { return 3 };
    syslog(LOG_DEBUG, 'request_body: '.$r->request_body);
    return 4;
}

1;
__END__

It works, but when another program (developed by another company that doesn't want to change the code) POSTs to my Nginx using: pList.append('%s=%s' % (paramKey, urllib.quote_plus(unicode(paramValue)))) (cf. my test program above) it doesn't work and Nginx times out. According to your previous answer, how would you do it? Thanks. Rémi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229560,229570#msg-229570 From mdounin at mdounin.ru Fri Aug 10 13:45:36 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Aug 2012 17:45:36 +0400 Subject: HttpUpstreamModule: Need more detailed Information In-Reply-To: <5024F24E.6020409@xlrs.de> References: <5024F24E.6020409@xlrs.de> Message-ID: <20120810134536.GI40452@mdounin.ru> Hello! On Fri, Aug 10, 2012 at 01:36:46PM +0200, Axel wrote: > Hello all, > > I'm new to nginx and first of all I have to say it's a great piece > of software. > > I need some more detailed information about nginx behaviour > regarding the HttpUpstreamModule and I hope you can give me some hints > and links where I can learn more about it. > I set up nginx/1.2.2 as a reverse proxy in front of a bunch of apache > servers(*) which are located in different housing locations. > > Now I have some questions and I can't find any docs or wiki pages > with detailed answers. > > - how does nginx detect if one or more upstream servers have disappeared? > - what kind of mechanism does nginx use? icmp or something else? It detects based on the status of requests to the upstream servers.
If requests fail - the server is considered down and additional requests aren't routed to it for some time. http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server > - how often does nginx request the status of upstream servers? For alive servers - as often as normal requests are routed to the servers. For servers already considered down - once per fail_timeout (per worker, see below). > - how can I monitor the status of upstream servers as seen by nginx (I > monitor the status of running apache processes on the upstream servers > separately) Currently, there is no way. Moreover, each nginx worker process has its own idea about the status of the upstream servers. Maxim Dounin From nginx-forum at nginx.us Fri Aug 10 13:46:42 2012 From: nginx-forum at nginx.us (darkweaver871) Date: Fri, 10 Aug 2012 09:46:42 -0400 (EDT) Subject: [Nginx] perl module get an empty body In-Reply-To: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4705d40bdcaf89e7f276d90b52bad08a.NginxMailingListEnglish@forum.nginx.org> "and I'm not sure it was clear enough for you" - I meant my post wasn't clear enough Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229560,229572#msg-229572 From mdounin at mdounin.ru Fri Aug 10 13:54:11 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Aug 2012 17:54:11 +0400 Subject: [Nginx] perl module get an empty body In-Reply-To: <29f54fc6987000bfb2c360b82d330567.NginxMailingListEnglish@forum.nginx.org> References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> <29f54fc6987000bfb2c360b82d330567.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120810135410.GJ40452@mdounin.ru> Hello! On Fri, Aug 10, 2012 at 09:44:14AM -0400, darkweaver871 wrote: > Hi Maxim, > > Thanks for your reply.
> > I don't understand what you mean by "content handler" as I'm a noob in > nginx perl usage (and I'm not sure it was clear enough for you). I > tested http_content_length but it's empty also. > I really need to have the calling request body (I deleted the major part > of my code but I parse some stuff in). > > Let's say I want to return 3 to perl_set if the request body match > "test" and 4 otherwise (and log it), I would do: You have to use perl directive to handle request. Then you'll be able to use $r->has_request_body(), see here: http://nginx.org/en/docs/http/ngx_http_perl_module.html#methods For conditional processing by other modules you may use $r->internal_redirect() method. > #!/usr/bin/perl > > package switcher_test; > use nginx; > use Sys::Syslog; > > my $facility = 'local2'; > > sub handler { > my $r = shift; > > openlog('tester', 'ndelay', $facility); > > $r->has_request_body(sub{}); > if ($r->request_body =~ /.*test.*/) { return 3 }; This is just wrong: $r->request_body will be only available when body handler function is called ("sub{}" in the above code). Using it here won't work. [...] Maxim Dounin From nginx-forum at nginx.us Fri Aug 10 14:12:56 2012 From: nginx-forum at nginx.us (darkweaver871) Date: Fri, 10 Aug 2012 10:12:56 -0400 (EDT) Subject: [Nginx] perl module get an empty body In-Reply-To: <4705d40bdcaf89e7f276d90b52bad08a.NginxMailingListEnglish@forum.nginx.org> References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> <4705d40bdcaf89e7f276d90b52bad08a.NginxMailingListEnglish@forum.nginx.org> Message-ID: OK, so is this better ? 
#!/usr/bin/perl

package switcher_test;
use nginx;
use Sys::Syslog;

my $facility = 'local2';

sub handler {
    my $r = shift;

    return $r->has_request_body(\&test);
}

sub test {
    my $r = shift;
    if ($r->request_body =~ /.*test.*/) { return 3 };
    openlog('switcher', 'ndelay', $facility);
    syslog(LOG_DEBUG, 'request_body: '.$r->request_body);
    syslog(LOG_DEBUG, 'request_body_file: '.$r->request_body_file);
    return 4;
}

1;
__END__

It still doesn't work this way. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229560,229574#msg-229574 From nginx-forum at nginx.us Fri Aug 10 15:17:17 2012 From: nginx-forum at nginx.us (elf-pavlik) Date: Fri, 10 Aug 2012 11:17:17 -0400 (EDT) Subject: [PATCH] Add "pass_only" option to ssl_verify_client to enable app-only validation In-Reply-To: <6b9a9bca924a46e5fe4c7eafaf3ce9ae.NginxMailingListEnglish@forum.nginx.org> References: <20120723183143.GX31671@mdounin.ru> <6b9a9bca924a46e5fe4c7eafaf3ce9ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: ahoy! any progress on this one? IMHO WebID can become a very hot technology soon :) WebID ACL & SPARQL - http://lists.w3.org/Archives/Public/public-rww/2012Jun/0072.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,228761,229575#msg-229575 From mdounin at mdounin.ru Fri Aug 10 17:11:18 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Aug 2012 21:11:18 +0400 Subject: [Nginx] perl module get an empty body In-Reply-To: References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org> <4705d40bdcaf89e7f276d90b52bad08a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120810171118.GL40452@mdounin.ru> Hello! On Fri, Aug 10, 2012 at 10:12:56AM -0400, darkweaver871 wrote: > OK, so is this better ?
> > #!/usr/bin/perl > > package switcher_test; > use nginx; > > use Sys::Syslog; > > my $facility = 'local2'; > > sub handler { > my $r = shift; > > return $r->has_request_body(\& test); > > } > > sub test { > my $r = shift; > if ($r->request_body =~ /.*test.*/) { return 3 }; > openlog('switcher', 'ndelay', $facility); > syslog(LOG_DEBUG, 'request_body: '.$r->request_body); > syslog(LOG_DEBUG, 'request_body_file: '.$r->request_body_file); > return 4; > } > > 1; > __END__ > > > It still doesn't work this way. Again: you can't use this code in perl_set directive. You have to use perl directive: perl switcher_test::handler; And in perl package (mostly cut-n-paste from docs, with an additional test for body): package switcher_test; use nginx; sub handler { my $r = shift; if ($r->request_method ne "POST") { return DECLINED; } if ($r->has_request_body(\&post)) { return OK; } return HTTP_BAD_REQUEST; } sub post { my $r = shift; $r->send_http_header; $r->print("request_body: \"", $r->request_body, "\"
"); $r->print("request_body_file: \"", $r->request_body_file, "\"
\n"); if ($r->request_body =~ /test/) { return HTTP_BAD_REQUEST; } return OK; } 1; Maxim Dounin From nginx-forum at nginx.us Fri Aug 10 19:51:15 2012 From: nginx-forum at nginx.us (double) Date: Fri, 10 Aug 2012 15:51:15 -0400 (EDT) Subject: NGINX crash In-Reply-To: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> References: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> Message-ID: <54dc6debfd9f63e1097c4bbce3c4fdbc.NginxMailingListEnglish@forum.nginx.org> Hello, Thanks a lot for your answer. The machine runs CentOS5 (PCRE from repository). I must say, we never had a single crash - not even once. If we `telnet` the above request, NGINX crashes immediately. For full config or core-dumps please contact us per email: sandyherman [at] gmx [dot] net Thanks a lot Sandy # "http_fastcgi_module", "http_limit_conn_module", "http_limit_req_module" ./configure --prefix=/usr/local/nginx --user=www --group=www --with-pcre --without-http_charset_module --without-http_gzip_module --without-http_ssi_module --without-http_userid_module --without-http_autoindex_module --without-http_geo_module --without-http_split_clients_module --without-http_referer_module --without-http_proxy_module --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-http_empty_gif_module --without-http_browser_module --without-http_upstream_ip_hash_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module (gdb) fr 2 #2 0x000000000046d5d2 in ngx_http_limit_req_handler (r=0x1b99faa0) at src/http/modules/ngx_http_limit_req_module.c:192 192 in src/http/modules/ngx_http_limit_req_module.c (gdb) p *limit $1 = {shm_zone = 0x1b9319e8, burst = 10000, nodelay = 0} (gdb) p *ctx $2 = {sh = 0x2af913765000, shpool = 0x2af913735000, rate = 2000, index = 2, var = {len = 9, data = 0x1b9580f9 "ipaddress"}, node = 0x0} (gdb) p *vv $3 = {len = 194343136, valid = 1, no_cacheable = 0, not_found = 0, escape 
= 0, data = 0x0} (gdb) p *r $4 = {signature = 1347703880, connection = 0x2af91b73d258, ctx = 0x1b999f70, main_conf = 0x1b932068, srv_conf = 0x1b958310, loc_conf = 0x1b9590e8, read_event_handler = 0x445737 , write_event_handler = 0x437a34 , cache = 0x0, upstream = 0x0, upstream_states = 0x0, pool = 0x1b999b60, header_in = 0x1ba0dba0, headers_in = {headers = {last = 0x1b99fb10, part = {elts = 0x1b99a190, nelts = 2, next = 0x0}, size = 48, nalloc = 20, pool = 0x1b999b60}, host = 0x1b99a190, connection = 0x0, if_modified_since = 0x0, if_unmodified_since = 0x0, user_agent = 0x0, referer = 0x0, content_length = 0x0, content_type = 0x0, range = 0x0, if_range = 0x0, transfer_encoding = 0x0, expect = 0x0, authorization = 0x0, keep_alive = 0x0, user = {len = 0, data = 0x0}, passwd = {len = 0, data = 0x0}, cookies = {elts = 0x1b99a550, nelts = 0, size = 8, nalloc = 2, pool = 0x1b999b60}, server = {len = 9, data = 0x1b9c7625 "seite.net"}, content_length_n = -1, keep_alive_n = -1, connection_type = 0, msie = 0, msie6 = 0, opera = 0, gecko = 0, chrome = 0, safari = 0, konqueror = 0}, headers_out = {headers = {last = 0x1b99fc28, part = {elts = 0x1b999bb0, nelts = 0, next = 0x0}, size = 48, nalloc = 20, pool = 0x1b999b60}, status = 0, status_line = {len = 0, data = 0x0}, server = 0x0, date = 0x0, content_length = 0x0, content_encoding = 0x0, location = 0x0, refresh = 0x0, last_modified = 0x0, content_range = 0x0, accept_ranges = 0x0, www_authenticate = 0x0, expires = 0x0, etag = 0x0, override_charset = 0x0, content_type_len = 0, content_type = {len = 0, data = 0x0}, charset = {len = 0, data = 0x0}, content_type_lowcase = 0x0, content_type_hash = 0, cache_control = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, content_length_n = -1, date_time = 0, last_modified_time = -1}, request_body = 0x0, lingering_time = 0, start_sec = 1344349062, start_msec = 773, method = 2, http_version = 1001, request_line = {len = 14, data = 0x1b9c7610 "GET / HTTP/1.1\nHost"}, uri = {len = 1, 
data = 0x1b9c7614 "/ HTTP/1.1\nHost"}, args = {len = 0, data = 0x0}, exten = {len = 0, data = 0x0}, unparsed_uri = {len = 1, data = 0x1b9c7614 "/ HTTP/1.1\nHost"}, method_name = {len = 3, data = 0x1b9c7610 "GET / HTTP/1.1\nHost"}, http_protocol = {len = 8, data = 0x1b9c7616 "HTTP/1.1\nHost"}, out = 0x0, main = 0x1b99faa0, parent = 0x0, postponed = 0x0, post_subrequest = 0x0, posted_requests = 0x0, virtual_names = 0x1b969258, phase_handler = 4, content_handler = 0x472884 , access_code = 0, variables = 0x1b99a020, ncaptures = 0, captures = 0x0, captures_data = 0x1b99a56d "195.162.24.220195.162.24.220\356(F\256?z:\365\311&\337bog\267I\265?\324*:E\027\225Y\265)!MiL\262\177/\207\303\021\243]\260t\335?\222\302\316)dmL\312*i\333?M3\317ah\350\313\\\024\204g\a\211\030\306H\370\f\360\f\321\351L\216\f", limit_rate = 65536, header_size = 0, request_length = 58, err_status = 0, http_connection = 0x1b9f0ec8, log_handler = 0x446b6e , cleanup = 0x0, subrequests = 201, count = 1, blocked = 0, aio = 0, http_state = 2, complex_uri = 0, quoted_uri = 0, plus_in_uri = 0, space_in_uri = 0, invalid_header = 0, add_uri_to_alias = 0, valid_location = 1, valid_unparsed_uri = 1, uri_changed = 0, uri_changes = 11, request_body_in_single_buf = 0, request_body_in_file_only = 0, request_body_in_persistent_file = 0, request_body_in_clean_file = 0, request_body_file_group_access = 0, request_body_file_log_level = 5, subrequest_in_memory = 0, waited = 0, cached = 0, proxy = 0, bypass_cache = 0, no_cache = 0, limit_conn_set = 0, limit_req_set = 0, pipeline = 0, plain_http = 0, chunked = 0, header_only = 0, keepalive = 1, lingering_close = 0, discard_body = 0, internal = 0, error_page = 0, ignore_content_encoding = 0, filter_finalize = 0, post_action = 0, request_complete = 0, request_output = 0, header_sent = 0, expect_tested = 0, root_tested = 0, done = 0, logged = 0, buffered = 0, main_filter_need_in_memory = 0, filter_need_in_memory = 0, filter_need_temporary = 0, allow_ranges = 0, state = 0, 
header_hash = 103689151937377, lowcase_index = 9, lowcase_header = "x-real-ip", '\000' , header_name_start = 0x1b9c7649 "\net", header_name_end = 0x1b9c7638 "", header_start = 0x1b9c763a "195.162.24.220", header_end = 0x1b9c7649 "\net", uri_start = 0x1b9c7614 "/ HTTP/1.1\nHost", uri_end = 0x1b9c7615 " HTTP/1.1\nHost", uri_ext = 0x0, args_start = 0x0, request_start = 0x1b9c7610 "GET / HTTP/1.1\nHost", request_end = 0x1b9c761e "\nHost", method_end = 0x1b9c7612 "T / HTTP/1.1\nHost", schema_start = 0x0, schema_end = 0x0, host_start = 0x0, host_end = 0x0, port_start = 0x0, port_end = 0x0, http_minor = 1, http_major = 1} Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229394,229582#msg-229582 From dzhou at netdna.com Fri Aug 10 20:02:58 2012 From: dzhou at netdna.com (Don Zhou) Date: Fri, 10 Aug 2012 13:02:58 -0700 Subject: how nginx cache manager delete file cached on disk In-Reply-To: <20120810090310.GA40452@mdounin.ru> References: <20120810090310.GA40452@mdounin.ru> Message-ID: You right! I found it! It is the mlocate.cron come with centos install. It basic run updatedb on the file system and set the nice to be +19. I have so many files in my caching directory and updatedb will consume almost all the IO. I disabled my caching directory in updatedb.conf. Thanks for the help. On Fri, Aug 10, 2012 at 2:03 AM, Maxim Dounin wrote: > Hello! > > On Thu, Aug 09, 2012 at 07:02:13PM -0700, Don Zhou wrote: > > > We used nginx proxy to cache lots of static files. Our monitoring tool > > showed that the write I/O on the server spike once a day on 4am GMT time > > and server will slow down significantly because of the high IO. I > couldn't > > find any cron that can cause this. I am guessing it might be the nginx > > cache manager was clearing the disk cache once a day. > > No, it's not nginx cache manager. It removes files as soon as > they become inactive and/or max_size reached. 
>
> > We have cache setting like: proxy_cache_path /cache levels=2:2:2
> > keys_zone=cache:2000m inactive=1d max_size=400000m;
> >
> > Any idea? Thanks!
>
> Something like "top -mio" / iotop might be helpful.  The time
> suggests it's most likely some daily periodic task.
>
> Maxim Dounin
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bertrand.caplet at okira.net  Wed Aug 8 21:21:36 2012
From: bertrand.caplet at okira.net (Bertrand Caplet)
Date: Wed, 08 Aug 2012 23:21:36 +0200
Subject: IP problem in backend of reverse proxy
Message-ID: <5022D860.3060602@okira.net>

Hi,

I've set up a reverse proxy with nginx, with two backend webservers
(one nginx and one apache) hosting MediaWiki. I want the backend
servers to see the IP of the "real" client in their logs. I tried
"proxy_set_header" as described here:
https://help.ubuntu.com/community/Nginx/ReverseProxy but it doesn't
work. In the logs and on the server it shows the IP of the reverse
proxy (127.0.0.1 or 87.106.165.190).

Test if you want: http://wiki.okira.net/w/Accueil

Thanks for the help,
Regards.

-- 
Bertrand Caplet
Tel. 02 33 35 20 94
Mob. 06 35 43 68 46
Web. blog.okira.net
bertrand.caplet at okira.net
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sig-ecolo.gif Type: image/gif Size: 1988 bytes Desc: not available URL: From nginx-forum at nginx.us Fri Aug 10 23:54:12 2012 From: nginx-forum at nginx.us (mk.fg) Date: Fri, 10 Aug 2012 19:54:12 -0400 (EDT) Subject: [PATCH] Add "pass_only" option to ssl_verify_client to enable app-only validation In-Reply-To: <20120723183143.GX31671@mdounin.ru> References: <20120723183143.GX31671@mdounin.ru> Message-ID: <54482f944c6e12677c85c89379f190dd.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Jul 19, 2012 at 05:13:01PM -0400, mk.fg > wrote: > > > > > Proposed patch enables use-case scenario when > Nginx asks Client for TLS > > certificate but does not make any attempt to > validate it, passing it to > > the application instead. Application itself if > then free to decide > > whether provided certificate is valid and is > able to reject it with the > > same http status codes as Nginx does. > > > > I don't think that the patch is a right way to go. > If client > certificates without CA being known in advance is > indeed something > required - it probably needs something like > optional_no_ca instead > as done in Apache's mod_ssl, i.e. with usual > expiration checks and so on, > but without verification against CAs. > New version of the patch, with these requirements taken into account. Option is now called "optional_no_ca", as suggested, and allows to check all certificate parameters except for a trust chain. I've used ssl_verify_error_is_optional macro (listing trust-chain related errors) directly from apache 2.4.2 codebase. Note that since ngx_ssl_get_client_verify now has to access configuration, which is accessible from ngx_http_request_t, it wasn't enough to pass ngx_connection_t to it, plus it was only used from ngx_http_ssl_module.c, so I've moved the modified version of it into ngx_http_ssl_module.c, to avoid having to include http-only stuff into ngx_event_openssl.c. 
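As context for the patch under discussion: a sketch of how the proposed
"optional_no_ca" mode would typically be wired up, with certificate
acceptance left to the application. The directive value is the one the
patch introduces; $ssl_client_verify and $ssl_client_cert are standard
nginx SSL variables, while the certificate paths and backend socket are
purely illustrative.

```nginx
server {
    listen 443 ssl;

    ssl_certificate     /etc/nginx/server.crt;   # illustrative paths
    ssl_certificate_key /etc/nginx/server.key;

    # from the patch: request a client certificate and run the usual
    # checks (expiry etc.), but skip verification against a CA
    ssl_verify_client optional_no_ca;

    location / {
        # hand the verification result and the raw certificate to the
        # application, which decides whether to accept the client
        fastcgi_param SSL_CLIENT_VERIFY $ssl_client_verify;
        fastcgi_param SSL_CLIENT_CERT   $ssl_client_cert;
        include       fastcgi_params;
        fastcgi_pass  unix:/run/app.sock;         # illustrative backend
    }
}
```

With this setup the application would see "SUCCESS", "FAILED" or "NONE"
in SSL_CLIENT_VERIFY and can return its own 4xx responses for
certificates it rejects.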
If that was a bad idea, and there's a need to keep that function generic (non-http-only), please suggest whether generic copy should just be kept in ngx_event_openssl.c, it's signature should be extended to have http-specific options or maybe there should be conditional includes for http stuff. URL for the patch: https://raw.github.com/gist/3319062/ diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c index 4356a05..f2c3511 100644 --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -2309,31 +2309,6 @@ ngx_ssl_get_serial_number(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s) } -ngx_int_t -ngx_ssl_get_client_verify(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s) -{ - X509 *cert; - - if (SSL_get_verify_result(c->ssl->connection) != X509_V_OK) { - ngx_str_set(s, "FAILED"); - return NGX_OK; - } - - cert = SSL_get_peer_certificate(c->ssl->connection); - - if (cert) { - ngx_str_set(s, "SUCCESS"); - - } else { - ngx_str_set(s, "NONE"); - } - - X509_free(cert); - - return NGX_OK; -} - - static void * ngx_openssl_create_conf(ngx_cycle_t *cycle) { diff --git a/src/event/ngx_event_openssl.h b/src/event/ngx_event_openssl.h index cd6d885..97da051 100644 --- a/src/event/ngx_event_openssl.h +++ b/src/event/ngx_event_openssl.h @@ -141,6 +141,14 @@ ngx_int_t ngx_ssl_get_client_verify(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s); +#define ngx_ssl_verify_error_is_optional(errnum) \ + ((errnum == X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT) \ + || (errnum == X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN) \ + || (errnum == X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY) \ + || (errnum == X509_V_ERR_CERT_UNTRUSTED) \ + || (errnum == X509_V_ERR_UNABLE_TO_VERIFY_LEAF_SIGNATURE)) + + ngx_int_t ngx_ssl_handshake(ngx_connection_t *c); ssize_t ngx_ssl_recv(ngx_connection_t *c, u_char *buf, size_t size); ssize_t ngx_ssl_write(ngx_connection_t *c, u_char *data, size_t size); diff --git a/src/http/modules/ngx_http_ssl_module.c 
b/src/http/modules/ngx_http_ssl_module.c index d759489..8f29f3c 100644 --- a/src/http/modules/ngx_http_ssl_module.c +++ b/src/http/modules/ngx_http_ssl_module.c @@ -22,6 +22,8 @@ static ngx_int_t ngx_http_ssl_static_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_ssl_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_ssl_variable_get_client_verify(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_ssl_add_variables(ngx_conf_t *cf); static void *ngx_http_ssl_create_srv_conf(ngx_conf_t *cf); @@ -48,6 +50,7 @@ static ngx_conf_enum_t ngx_http_ssl_verify[] = { { ngx_string("off"), 0 }, { ngx_string("on"), 1 }, { ngx_string("optional"), 2 }, + { ngx_string("optional_no_ca"), 3 }, { ngx_null_string, 0 } }; @@ -214,8 +217,9 @@ static ngx_http_variable_t ngx_http_ssl_vars[] = { { ngx_string("ssl_client_serial"), NULL, ngx_http_ssl_variable, (uintptr_t) ngx_ssl_get_serial_number, NGX_HTTP_VAR_CHANGEABLE, 0 }, - { ngx_string("ssl_client_verify"), NULL, ngx_http_ssl_variable, - (uintptr_t) ngx_ssl_get_client_verify, NGX_HTTP_VAR_CHANGEABLE, 0 }, + { ngx_string("ssl_client_verify"), NULL, + ngx_http_ssl_variable_get_client_verify, + 0, NGX_HTTP_VAR_CHANGEABLE, 0 }, { ngx_null_string, NULL, NULL, 0, 0, 0 } }; @@ -306,6 +310,60 @@ ngx_http_ssl_add_variables(ngx_conf_t *cf) } +static ngx_int_t +ngx_http_ssl_variable_get_client_verify(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + ngx_str_t s; + ngx_connection_t *c; + long rc; + X509 *cert; + ngx_http_ssl_srv_conf_t *sscf; + + c = r->connection; + + if (c->ssl) { + + rc = SSL_get_verify_result(c->ssl->connection); + + if (rc != X509_V_OK) { + sscf = ngx_http_get_module_srv_conf(r, ngx_http_ssl_module); + + if (sscf->verify != 3 || ngx_ssl_verify_error_is_optional(rc)) { + ngx_str_set(&s, "FAILED"); + return NGX_OK; + } + } + + cert = 
SSL_get_peer_certificate(c->ssl->connection); + + if (cert) { + ngx_str_set(&s, "SUCCESS"); + + } else { + ngx_str_set(&s, "NONE"); + } + + X509_free(cert); + + v->len = s.len; + v->data = s.data; + + if (v->len) { + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + + return NGX_OK; + } + } + + v->not_found = 1; + + return NGX_OK; +} + + static void * ngx_http_ssl_create_srv_conf(ngx_conf_t *cf) { @@ -466,7 +524,7 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t *cf, void *parent, void *child) if (conf->verify) { - if (conf->client_certificate.len == 0) { + if (conf->verify != 3 && conf->client_certificate.len == 0) { ngx_log_error(NGX_LOG_EMERG, cf->log, 0, "no ssl_client_certificate for ssl_client_verify"); return NGX_CONF_ERROR; diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c index cb970c5..96cec55 100644 --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -1642,7 +1642,9 @@ ngx_http_process_request(ngx_http_request_t *r) if (sscf->verify) { rc = SSL_get_verify_result(c->ssl->connection); - if (rc != X509_V_OK) { + if ((sscf->verify != 3 && rc != X509_V_OK) + || !(sscf->verify == 3 && ngx_ssl_verify_error_is_optional(rc))) + { ngx_log_error(NGX_LOG_INFO, c->log, 0, "client SSL certificate verify error: (%l:%s)", rc, X509_verify_cert_error_string(rc)); Posted at Nginx Forum: http://forum.nginx.org/read.php?2,228761,229586#msg-229586 From nginx-forum at nginx.us Sat Aug 11 04:21:30 2012 From: nginx-forum at nginx.us (dewey) Date: Sat, 11 Aug 2012 00:21:30 -0400 (EDT) Subject: Errors on Make Message-ID: <7f26ee5543d2a3d688c64687e9a93c76.NginxMailingListEnglish@forum.nginx.org> I'm trying to compile 1.2.3 with: ./configure --with-http_ssl_module --with-http_gzip_static_module --with-http_mp4_module --with-http_secure_link_module --with-pcre=~/pcre-8.31 --with-openssl=~/openssl-1.0.1c --with-zlib=~/zlib-1.2.7 and am getting: gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event 
-I src/event/modules -I src/os/unix -I ~/pcre-8.31 -I ~/openssl-1.0.1c/.openssl/include -I ~/zlib-1.2.7 -I objs \
	-o objs/src/core/nginx.o \
	src/core/nginx.c
In file included from src/core/ngx_core.h:73,
                 from src/core/nginx.c:9:
src/event/ngx_event_openssl.h:15:25: error: openssl/ssl.h: No such file or directory
src/event/ngx_event_openssl.h:16:25: error: openssl/err.h: No such file or directory
src/event/ngx_event_openssl.h:17:26: error: openssl/conf.h: No such file or directory
src/event/ngx_event_openssl.h:18:28: error: openssl/engine.h: No such file or directory
src/event/ngx_event_openssl.h:19:25: error: openssl/evp.h: No such file or directory
In file included from src/core/ngx_core.h:73,
                 from src/core/nginx.c:9:
src/event/ngx_event_openssl.h:29: error: expected specifier-qualifier-list before 'SSL_CTX'
src/event/ngx_event_openssl.h:35: error: expected specifier-qualifier-list before 'SSL'
src/event/ngx_event_openssl.h:105: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
src/event/ngx_event_openssl.h:114: error: expected ')' before '*' token
src/event/ngx_event_openssl.h:115: error: expected declaration specifiers or '...' before 'SSL_SESSION'
make[1]: *** [objs/src/core/nginx.o] Error 1
make[1]: Leaving directory `/root/nginx-1.2.3'
make: *** [build] Error 2

I really have no idea where to even start. Any help would be appreciated.
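The `-I ~/pcre-8.31` flags visible in the failing gcc line are a hint:
the `~` from the ./configure arguments reached the compiler unexpanded,
so gcc looks for a literal directory named `~` and never finds the
OpenSSL headers. A small sh demonstration of why spelling the path with
$HOME is the safer choice (the path names are illustrative):

```shell
# Tilde expansion is done by the shell, and POSIX sh only applies it at
# the start of a word; an embedded '~' as in --with-pcre=~/pcre-8.31 can
# therefore survive into the build verbatim. "$HOME" expands everywhere.
opt_tilde='--with-pcre=~/pcre-8.31'      # quoted tilde: never expands
opt_home="--with-pcre=$HOME/pcre-8.31"   # expands in any POSIX shell

case $opt_home in
    *'~'*) echo "tilde survived: $opt_home" ;;
    *)     echo "expanded: $opt_home" ;;
esac
```

The same reasoning applies to the --with-openssl and --with-zlib
arguments in the configure line quoted above.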
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229589,229589#msg-229589 From nginx-forum at nginx.us Sat Aug 11 07:11:34 2012 From: nginx-forum at nginx.us (abcomp01) Date: Sat, 11 Aug 2012 03:11:34 -0400 (EDT) Subject: nginx rewite novice help Message-ID: <917db2bcc5b4c82bc89aa5e987bd0bac.NginxMailingListEnglish@forum.nginx.org> http://www.example.com/tag.php?q=test to http://www.example.com/t/test thanks note i am begin Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229592,229592#msg-229592 From ru at nginx.com Sun Aug 12 08:18:51 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Sun, 12 Aug 2012 12:18:51 +0400 Subject: Errors on Make In-Reply-To: <7f26ee5543d2a3d688c64687e9a93c76.NginxMailingListEnglish@forum.nginx.org> References: <7f26ee5543d2a3d688c64687e9a93c76.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120812081851.GB86989@lo0.su> On Sat, Aug 11, 2012 at 12:21:30AM -0400, dewey wrote: > I'm trying to compile 1.2.3 with: > > ./configure --with-http_ssl_module --with-http_gzip_static_module > --with-http_mp4_module --with-http_secure_link_module > --with-pcre=~/pcre-8.31 --with-openssl=~/openssl-1.0.1c > --with-zlib=~/zlib-1.2.7 > > and am getting: > > > gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror > -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I > ~/pcre-8.31 -I ~/openssl-1.0.1c/.openssl/include -I ~/zlib-1.2.7 -I objs > \ > -o objs/src/core/nginx.o \ > src/core/nginx.c > In file included from src/core/ngx_core.h:73, > from src/core/nginx.c:9: > src/event/ngx_event_openssl.h:15:25: error: openssl/ssl.h: No such file > or directory > src/event/ngx_event_openssl.h:16:25: error: openssl/err.h: No such file > or directory > src/event/ngx_event_openssl.h:17:26: error: openssl/conf.h: No such file > or directory > src/event/ngx_event_openssl.h:18:28: error: openssl/engine.h: No such > file or directory > src/event/ngx_event_openssl.h:19:25: error: openssl/evp.h: No such file > or 
directory
> In file included from src/core/ngx_core.h:73,
>                  from src/core/nginx.c:9:
> src/event/ngx_event_openssl.h:29: error: expected
> specifier-qualifier-list before 'SSL_CTX'
> src/event/ngx_event_openssl.h:35: error: expected
> specifier-qualifier-list before 'SSL'
> src/event/ngx_event_openssl.h:105: error: expected '=', ',',
> ';', 'asm' or '__attribute__' before '*' token
> src/event/ngx_event_openssl.h:114: error: expected ')' before
> '*' token
> src/event/ngx_event_openssl.h:115: error: expected declaration
> specifiers or '...' before 'SSL_SESSION'
> make[1]: *** [objs/src/core/nginx.o] Error 1
> make[1]: Leaving directory `/root/nginx-1.2.3'
> make: *** [build] Error 2
>
> I really have no idea where to even start. Any help would be
> appreciated.

Replace the ~'s in the configure command above with $HOME or a full
path to your home directory.

From nginx-forum at nginx.us  Sun Aug 12 10:32:15 2012
From: nginx-forum at nginx.us (darkweaver871)
Date: Sun, 12 Aug 2012 06:32:15 -0400 (EDT)
Subject: [Nginx] perl module get an empty body
In-Reply-To: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org>
References: <273eb1e9dc65fb58218e5eac3af2aeb3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello Maxim,

OK, got it now ;-) It's working, you saved my day :-D

So to put it in a nutshell:
- I use "perl switcher_test::handler" in my location /
- in the handler method I call $r->has_request_body(\& test), and in
  the test method, depending on what I find in the body, I do an
  internal_redirect (as far as I understood, it can only redirect to a
  location in the same vhost) to /proxy1 (or /proxy2) and return.
- in /proxyX I do a "rewrite /proxyX(.*) $1 break;" followed by
  "proxy_pass http://my_upstream"

It's working perfectly, but I just wonder if there is a more
elegant/efficient way to do it?
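One possible simplification of the /proxyX locations just described
(a sketch only; the upstream name is the one from the thread): when
proxy_pass is given with a URI part, even just "/", nginx itself
replaces the matched location prefix with that URI, so the explicit
rewrite can be dropped:

```nginx
# a request internally redirected to /proxy1/foo is passed upstream
# as /foo, because the URI part of proxy_pass ("/") replaces the
# matched "/proxy1/" prefix -- no rewrite needed
location /proxy1/ {
    internal;                      # reachable only via internal redirects
    proxy_pass http://my_upstream/;
}
```

The "internal" directive additionally keeps clients from requesting
/proxy1/... directly, which the rewrite-based variant does not.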
R?mi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229560,229601#msg-229601 From nginx-forum at nginx.us Sun Aug 12 13:13:10 2012 From: nginx-forum at nginx.us (dewey) Date: Sun, 12 Aug 2012 09:13:10 -0400 (EDT) Subject: Errors on Make In-Reply-To: <20120812081851.GB86989@lo0.su> References: <20120812081851.GB86989@lo0.su> Message-ID: <2587e36907f9e84fe2800730369f2dd6.NginxMailingListEnglish@forum.nginx.org> perfect. that worked. thx! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229589,229604#msg-229604 From agentzh at gmail.com Sun Aug 12 22:57:32 2012 From: agentzh at gmail.com (agentzh) Date: Sun, 12 Aug 2012 15:57:32 -0700 Subject: [ANN] ngx_openresty devel version 1.2.1.13 released In-Reply-To: References: Message-ID: Hi, folks! After one week's active development, I'm pleased to announce the new development version of ngx_openresty, 1.2.1.13: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen :) Below is the complete change log for this release, as compared to the last release, 1.2.1.11: * upgraded LuaNginxModule to 0.5.13. * feature: added new directive lua_socket_log_errors that can be used to disable automatic error logging for both the TCP and UDP cosockets. thanks Brian Akins for the patch. * bugfix: segmentation faults might happen when 1. the nginx worker was shutting down (i.e., the Lua VM is closing), 2. ngx.re.gmatch was used, and 3. regex cache is enabled via the "o" regex flag. this bug had appeared in LuaNginxModule 0.5.0rc30 (and OpenResty 1.0.15.9). * bugfix: segmentation faults might happen when the system is out of memory: there was one place where we did not check the pointer returned from "ngx_array_push". * bugfix: we should avoid complicated Lua stack operations that might require memory allocaitons in the Lua "atpanic" handler because it would produce another exception in the handler leading to infinite loops. * upgraded EchoNginxModule to 0.41. 
* bugfix: we incorrectly returned the 500 error code in our nginx output body filters. * bugfix: segmentation faults might happen when the system is out of memory: we forgot to check the returned pointer from "ngx_calloc_buf" in our nginx output body filter. * upgraded LuaRestyDNSLibrary to 0.05. * feature: now we use 4096 as the receive buffer size instead of the value 512 that is suggested by RFC 1035. this could avoid data truncation when the DNS server supports datagram sizes larger than 512 bytes. * feature: now we pick a random nameserver from the nameservers list at the first time. * docs: fixed a mistake in the sample code and tuned it to be more illustrative. thanks Sandesh Kotwal for reporting. The HTML version of the change contains some helpful hyper-links and can be browsed here: http://openresty.org/#ChangeLog1002001 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ Enjoy! -agentzh From nginx-forum at nginx.us Mon Aug 13 00:57:30 2012 From: nginx-forum at nginx.us (jakecattrall) Date: Sun, 12 Aug 2012 20:57:30 -0400 (EDT) Subject: New server, Same config, New error Message-ID: <260d5a21bb6bec77d6d4d63504fecfa7.NginxMailingListEnglish@forum.nginx.org> Centos 6 with nginx and php using php-fpm. Exactly like my other server only, that one works. No matter what I do, I get this: "File not found." In the headers it shows it's a 404. I've googled now for hours, followed all the tutorials but still nothing. 
#
# The default server
#
server {
    listen       80;
    server_name  _;
    client_max_body_size 500G;

    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location / {
        rewrite ^/dl/(.*)/(.*)/headers/$ /download.php?link=$1&filename=$2&headers=1 last;
        rewrite ^/dl/(.*)/(.*)$ /download.php?link=$1&filename=$2 last;
        root  /usr/share/nginx/html;
        index index.html index.htm index.php;
        include conf.d/blockips.conf;
    }

    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass backend;
        root /usr/share/nginx/html;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_NAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors off;
        fastcgi_ignore_client_abort off;
        expires max;
        break;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}

upstream backend {
    server 127.0.0.1:9000;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229610,229610#msg-229610

From mdounin at mdounin.ru  Mon Aug 13 02:26:53 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 13 Aug 2012 06:26:53 +0400
Subject: NGINX crash
In-Reply-To: <54dc6debfd9f63e1097c4bbce3c4fdbc.NginxMailingListEnglish@forum.nginx.org>
References: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> <54dc6debfd9f63e1097c4bbce3c4fdbc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20120813022653.GS40452@mdounin.ru>

Hello!

On Fri, Aug 10, 2012 at 03:51:15PM -0400, double wrote:

[...]

> (gdb) fr 2
> #2 0x000000000046d5d2 in ngx_http_limit_req_handler (r=0x1b99faa0) at
> src/http/modules/ngx_http_limit_req_module.c:192
> 192 in src/http/modules/ngx_http_limit_req_module.c
>
> (gdb) p *limit
> $1 = {shm_zone = 0x1b9319e8, burst = 10000, nodelay = 0}
>
> (gdb) p *ctx
> $2 = {sh = 0x2af913765000, shpool = 0x2af913735000, rate = 2000, index =
> 2, var = {len = 9, data = 0x1b9580f9 "ipaddress"}, node = 0x0}
>
> (gdb) p *vv
> $3 = {len = 194343136, valid = 1, no_cacheable = 0, not_found = 0,
> escape = 0, data = 0x0}

[...]

Ok, it looks like I see the problem. It might be triggered if the
same variable is used multiple times on the right side of a map.

Attached patch should fix this.

Maxim Dounin

-------------- next part --------------
# HG changeset patch
# User Maxim Dounin
# Date 1344824745 -14400
# Node ID 5f6bfa7fff58dcaf4610a6871cf696c1eb1ef7d8
# Parent 5f9a1c6f51c84964fd629d22f756aaa4cee80a94
Map: fixed optimization of variables as values.

Previous code incorrectly used ctx->var_values as an array of
variable pointers (not variables as it should).
Additionally, ctx->var_values inspection failed to properly set var on match. diff --git a/src/http/modules/ngx_http_map_module.c b/src/http/modules/ngx_http_map_module.c --- a/src/http/modules/ngx_http_map_module.c +++ b/src/http/modules/ngx_http_map_module.c @@ -416,11 +416,12 @@ ngx_http_map(ngx_conf_t *cf, ngx_command for (i = 0; i < ctx->var_values.nelts; i++) { if (index == (ngx_int_t) var[i].data) { + var = &var[i]; goto found; } } - var = ngx_palloc(ctx->keys.pool, sizeof(ngx_http_variable_value_t)); + var = ngx_array_push(&ctx->var_values); if (var == NULL) { return NGX_CONF_ERROR; } @@ -431,13 +432,6 @@ ngx_http_map(ngx_conf_t *cf, ngx_command var->len = 0; var->data = (u_char *) index; - vp = ngx_array_push(&ctx->var_values); - if (vp == NULL) { - return NGX_CONF_ERROR; - } - - *vp = var; - goto found; } From nbubingo at gmail.com Mon Aug 13 02:34:09 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Mon, 13 Aug 2012 10:34:09 +0800 Subject: Gzipping proxied content after using subs_filter fails In-Reply-To: <20120810095635.GD40452@mdounin.ru> References: <20120810095635.GD40452@mdounin.ru> Message-ID: Hi, Can you try the latest code from github: https://github.com/yaoweibin/ngx_http_substitutions_filter_module? I'm just refactoring this module. And I want to add more verification with the flush buffer. There may be bugs with the old version. Thanks for your report. 2012/8/10 Maxim Dounin : > Hello! > > On Thu, Aug 09, 2012 at 05:28:06PM -0400, abstein2 wrote: > >> As the title says, I'm having an issue with gzipping proxied web pages >> after using subs_filter. 
It doesn't always happen, but it looks like >> whenever a page exceeds the size of one of my proxy_buffers I get an >> error in the error log saying: >> >> [alert] 18544#0: *490084 deflate() failed: 2, -5 while sending to >> client, client: xxx.xxx.xxx.xxx, server: www.test.com, request: "GET / >> HTTP/1.1", upstream: "http://xxx.xxx.xxx.xxx:80/", host: "www.test.com", >> referrer: "http://xxx.xxx.xxx.xxx/" > > Here nginx did deflate with Z_SYNC_FLUSH (2) and deflate() > returned error Z_BUF_ERROR (-5, no progress possible). The error > message suggests you are using an old nginx, and subs filter > somehow triggers the problem as fixed here: > > http://trac.nginx.org/nginx/changeset/4469/nginx > > Changes with nginx 1.1.15: > > *) Bugfix: calling $r->flush() multiple times might cause errors in the > ngx_http_gzip_filter_module. > > It's not clear why subs filter triggers flush (and does so twice in a row), > most likely it's a bug in subs filter (you may want to ask its author), but > upgrading nginx to 1.1.15+ should resolve the problem. > > [...] > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nbubingo at gmail.com Mon Aug 13 02:43:49 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Mon, 13 Aug 2012 10:43:49 +0800 Subject: HttpUpstreamModule: Need more detailed Information In-Reply-To: <5024F24E.6020409@xlrs.de> References: <5024F24E.6020409@xlrs.de> Message-ID: 2012/8/10 Axel : > Hello all, > > I'm new to nginx and first of all I have to say it's a great piece of > software. > > I need some more detailed information about nginx behaviour regarding > HttpUpstreamModule and I hope you can give me some hints and links where I > can learn more about it. > I set up nginx/1.2.2 as a reverse proxy in front of a bunch of apache > servers(*) which are located in different housing locations.
> > Now I have some questions and I can't find any docs or wiki pages with > detailed answers. > > - how does nginx detect if one or more upstream servers have disappeared? > - what kind of mechanism does nginx use? icmp or something else? > - how often does nginx request the status of upstream servers? > > - how can I monitor the status of upstream servers seen by nginx (I monitor > the status of running apache processes on the upstream server separately) You may try Tengine (http://tengine.taobao.org/), a forked nginx version. We added proactive health checks for the upstream servers. This module (http://tengine.taobao.org/document/http_upstream_check.html) can check the upstream servers periodically. If the upstream is marked down, the request will not be sent to this server. If it's up, the request will be sent to it again. If you don't like this forked Nginx, you can also use the standalone ngx_http_upstream_check module (https://github.com/yaoweibin/nginx_upstream_check_module). It supplies the same feature as Tengine. Thanks. Good luck. > > > It's a really simple setup atm and I have not activated any other module: > > upstream frontend { > server frontend1:80 weight=100 max_fails=2 fail_timeout=10s; > server frontend2:80 weight=100 max_fails=2 fail_timeout=10s; > server frontend3:80 weight=100 max_fails=2 fail_timeout=10s; > server frontend4:80 weight=100 max_fails=2 fail_timeout=10s; > server frontend5:80 weight=100 max_fails=2 fail_timeout=10s; > } > > location / { > proxy_next_upstream http_502 http_503 error; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_pass http://frontend/; > proxy_redirect default; > } > > I googled around and found this thread > http://forum.nginx.org/read.php?29,108477 but got no answer.
> > Any help is appreciated > > Regards, Axel > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Aug 13 03:53:10 2012 From: nginx-forum at nginx.us (n1xman) Date: Sun, 12 Aug 2012 23:53:10 -0400 (EDT) Subject: nginx-sticky & nginx_http_upstream_check modules not working together In-Reply-To: References: Message-ID: <6fc24b5b56ca261e228f541e297a7186.NginxMailingListEnglish@forum.nginx.org> Hi ???, With the latest code, I managed to compile both modules together without any issue. Thanks for the modules and great support. :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227937,229615#msg-229615 From nginx-forum at nginx.us Mon Aug 13 05:50:10 2012 From: nginx-forum at nginx.us (abstein2) Date: Mon, 13 Aug 2012 01:50:10 -0400 (EDT) Subject: Gzipping proxied content after using subs_filter fails In-Reply-To: References: Message-ID: <86f8ba447688b32e2c74380e16cbbe1e.NginxMailingListEnglish@forum.nginx.org> It looks like there's an issue with the newest revision of the module and nginx 1.2.3. When installed, whether gzip is on or off, the portion of code that was previously missing/not transmitted now gets transmitted, but isn't the actual page content. Instead it's gibberish with some of the raw nginx configuration mixed in. An example of the code being output: Xv?t?Xv?t? ?? ??
w.google-analytics.com/urchin.js" type="text/javascript"> ct.asp">CONTACT US | PRIVACY POLICY | TERMS OF USE ccel-expiresx-accel-charsetx-accel-redirectstatusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirect???/usr/scgi_temp@??????????@??statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirect??/usr/scgi_temp?????@????????statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirecth??/usr/scgi_temp??@??????????statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirect???/usr/scgi_temp????????@?????statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirect(??/usr/scgi_temp?????@????????statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirect???/usr/scgi_temp@??????????@??statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirect???/usr/scgi_temp????????@?????statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirectH??/usr/scgi_temp??@??????????statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirect???/usr/scgi_temp@??????????@??statusx-accel-bufferingx-accel-limit-ratex-accel-expiresx-accel-charsetx-accel-redirect?? The actual HTML code does show up after a block of text like the one above, but it begins mid-string as though it must be a completely new chunk in the buffer. All subs_filter replacements have already occurred by the time it reaches the gibberish block. Using 1.2.1 along with an older version of the module seems to work properly as Maxim had suggested. I'm happy to help continue to test this in any way possible. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229545,229616#msg-229616 From nbubingo at gmail.com Mon Aug 13 06:00:52 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Mon, 13 Aug 2012 14:00:52 +0800 Subject: Gzipping proxied content after using subs_filter fails In-Reply-To: <86f8ba447688b32e2c74380e16cbbe1e.NginxMailingListEnglish@forum.nginx.org> References: <86f8ba447688b32e2c74380e16cbbe1e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks. I'll check this problem tonight. Can you send me your nginx.conf and the test page? You can sent it to my private email: yaoweibin at gmai.com 2012/8/13 abstein2 : > It looks like there's an issue with the newest revision of the module > and nginx 1.2.3. When installed, whether gzip is on or off, the portion > of code that was previously missing/not transmitted now gets > transmitted, but isn't the actual page content. Instead it's gibberish > with some of the raw nginx configuration mixed in. > > An example of the code being output: > > Xv?t? Xv?t? ?? ?? w.google-analytics.com/urchin.js" > type="text/javascript"> ct.asp">CONTACT US | PRIVACY POLICY | TERMS OF > USE > ccel-expires x-accel-charset x-accel-redirect status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect ??? /usr/scgi_temp@?? ??? ??? ?? @?? status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect ?? /usr/scgi_temp??? ?? @?? ??? ??? status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect h?? /usr/scgi_temp?? @?? ??? ??? ?? status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect ??? /usr/scgi_temp??? ??? ?? @?? ??? status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect (?? /usr/scgi_temp??? ?? @?? ??? ??? status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect ??? /usr/scgi_temp@?? ??? ??? ?? @?? 
status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect ??? /usr/scgi_temp??? ??? ?? @?? ??? status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect H?? /usr/scgi_temp?? @?? ??? ??? ?? status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect ??? /usr/scgi_temp@?? ??? ??? ?? @?? status x-accel-buffering x-accel-limit-rate x-accel-expires x-accel-charset x-accel-redirect ?? > > The actual HTML code does show up after a block of text like the one > above, but it begins mid-string as though it must be a completely new > chunk in the buffer. All subs_filter replacements have already occurred > by the time it reaches the gibberish block. > > Using 1.2.1 along with an older version of the module seems to work > properly as Maxim had suggested. I'm happy to help continue to test this > in any way possible. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229545,229616#msg-229616 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ar at xlrs.de Mon Aug 13 06:23:47 2012 From: ar at xlrs.de (Axel) Date: Mon, 13 Aug 2012 08:23:47 +0200 Subject: HttpUpstreamModule: Need more detailed Information In-Reply-To: <20120810134536.GI40452@mdounin.ru> References: <5024F24E.6020409@xlrs.de> <20120810134536.GI40452@mdounin.ru> Message-ID: <50289D73.2020905@xlrs.de> Hi, thanks for your answers. I will have a look on nginx development and take a look as suggested in the other answer. Regards, Axel Am 10.08.2012 15:45, schrieb Maxim Dounin: >> - how does nginx detect if one ore more upstream servers has diappeared? >> - what kind of mechanism does nginx use? icmp or something else? > > It detects based on status of requests to upstream servers. If > requests fail - the server is considered down and additional > requests aren't routed to it for some time. 
>> - how often does nginx request the status of upstream servers? > > For alive servers - as often as normal requests are routed to the > servers. For servers already considered down - once per > fail_timeout (per worker, see below). > >> - how can I monitor the status of upstream server seen by nginx (I >> monitor the status of running apache processes on the upstream server >> separately) > > Currently, there is no way. Moreover, each nginx worker process > has its own idea about status of upstream servers. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From valery+nginxen at grid.net.ru Mon Aug 13 07:27:19 2012 From: valery+nginxen at grid.net.ru (Valery Kholodkov) Date: Mon, 13 Aug 2012 09:27:19 +0200 Subject: [announcement] new logging modules Message-ID: <5028AC57.20608@grid.net.ru> Hi! I created 2 new cool logging modules. The first one, nginx socketlog module, is a connection-oriented alternative to the udplog module that uses the BSD syslog remote logging protocol. It can be used with ng-syslogd to centralize logging. You can get it here: http://www.binpress.com/app/nginx-socketlog-module/1030?ad=461 The second one, nginx redislog module, is a redis logging client for nginx. It maintains one connection per peer per worker to a redis database and logs everything that you say directly to that database. Destination key is of course dynamic: you can have one key per server or, with the help of a special variable, even per server per day! You can use standard nginx variables in key names. This opens truly amazing opportunities. No need to rotate logs anymore!
Come and look at what this module is capable of: http://www.binpress.com/app/nginx-redislog-module/998?ad=461 Here in my blog I expand a little bit on internals guts of these modules: http://www.nginxguts.com/2012/08/better-logging-for-nginx/ -- Best regards, Valery Kholodkov From quan.nexthop at gmail.com Mon Aug 13 09:20:40 2012 From: quan.nexthop at gmail.com (Geoge.Q) Date: Mon, 13 Aug 2012 17:20:40 +0800 Subject: About resolver and 502 error Message-ID: hi Guys: When I run NGINX as a proxy and set a resolver(such as 8.8.8.8), I found out if DNS fail to resolve, 502 is popped up. We can not set two or more DNS entry, so how to handle such situation? post my configuration as following: server { listen 80; resolver 8.8.8.8; location / { proxy_pass $scheme://$host$request_uri; proxy_set_header Host $http_host; proxy_buffer_size 128k; proxy_buffers 32 32k; proxy_busy_buffers_size 128k; proxy_buffering off; client_max_body_size 1000m; client_body_buffer_size 256k; proxy_connect_timeout 600; proxy_read_timeout 600; proxy_send_timeout 600; proxy_temp_file_write_size 128k; } } some error logs were printed out as following; 2012/08/11 10:43:20 [error] 13297#0: *183 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.52.8, server: , request: "GET /complete/search?client=chrome&hl=en-US&q=www. HTTP/1.1", upstream: " http://74.125.235.197:80/complete/search?client=chrome&hl=en-US&q=www.", host: "clients1.google.com" 2012/08/11 10:43:21 [error] 13297#0: *191 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.52.8, server: , request: "GET /complete/search?client=chrome&hl=en-US&q=www. 
HTTP/1.1", upstream: " http://74.125.235.200:80/complete/search?client=chrome&hl=en-US&q=www.", host: "clients1.google.com" 2012/08/11 10:43:21 [error] 13297#0: *196 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.52.8, server: , request: "GET /complete/search?client=chrome&hl=en-US&q=www.g HTTP/1.1", upstream: " http://74.125.235.206:80/complete/search?client=chrome&hl=en-US&q=www.g", host: "clients1.google.com" 2012/08/11 10:43:21 [error] 13297#0: *183 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.52.8, server: , request: "GET /complete/search?client=chrome&hl=en-US&q=www. HTTP/1.1", upstream: " http://74.125.235.198:80/complete/search?client=chrome&hl=en-US&q=www.", host: "clients1.google.com" 2012/08/11 10:43:21 [error] 13297#0: *191 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.52.8, server: , request: "GET /complete/search?client=chrome&hl=en-US&q=www. HTTP/1.1", upstream: " http://74.125.235.196:80/complete/search?client=chrome&hl=en-US&q=www.", host: "clients1.google.com" 2012/08/11 10:43:21 [error] 13297#0: *191 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.52.8, server: , request: "GET /complete/search?client=chrome&hl=en-US&q=www. HTTP/1.1", upstream: " http://74.125.235.197:80/complete/search?client=chrome&hl=en-US&q=www.", host: "clients1.google.com" ~ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ru at nginx.com Mon Aug 13 09:47:38 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 13 Aug 2012 13:47:38 +0400 Subject: About resolver and 502 error In-Reply-To: References: Message-ID: <20120813094738.GE98664@lo0.su> On Mon, Aug 13, 2012 at 05:20:40PM +0800, Geoge.Q wrote: > hi Guys: > When I run NGINX as ?a proxy and set a resolver(such as 8.8.8.8), I found > out if DNS fail to resolve, 502 is popped up.? > We can not set two or more DNS entry, so how to handle such?situation?? To answer this specific question: upgrade. Recent versions of nginx (1.1.7+, 1.2.x, 1.3.x) support multiple resolvers. http://nginx.org/r/resolver From nginx-forum at nginx.us Mon Aug 13 10:16:08 2012 From: nginx-forum at nginx.us (bigbenarmy) Date: Mon, 13 Aug 2012 06:16:08 -0400 (EDT) Subject: Gzipping proxied content after using subs_filter fails In-Reply-To: References: Message-ID: <2b9252304427e0426343df34446fce26.NginxMailingListEnglish@forum.nginx.org> It is a great chance for all the pc game fans for that the Guild Wars 2 is coming on the way! We could freely enjoy the game after it is released on August 28. Now, I have found a good place to[url=http://www.gw2goldsale.com][b]buy guild wars 2 gold[/b][/url], the gw2 gold in this store is cheaper than any other[url=http://www.gw2goldsale.com][b]gw2 gold[/b][/url] store. If youwant to buy cheap guild wars 2 gold. I think it is an idea choice! I ama old fan for guild wars and I do hopely the guild wars 2 can comesoon! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229545,229631#msg-229631
From janowski.m at gmail.com Mon Aug 13 13:48:08 2012 From: janowski.m at gmail.com (Marcin "WMP" Janowski) Date: Mon, 13 Aug 2012 15:48:08 +0200 Subject: Gallery 3.0.4 on nginx Message-ID: Hello, i cant run gallery3 on ongix. Now i use this configure: server { listen (...); server_name *.wmp.(...) wmp.(...); access_log /home/wmp/www/wmp.(...)/logs/access.log; error_log /home/wmp/www/wmp.(...)/logs/error.log; root /home/wmp/www/wmp.(...)/htdocs; index index.php index.html; autoindex off; location /gallery3 { if_modified_since off; add_header Last-Modified ""; fastcgi_index index.php; fastcgi_split_path_info ^(.+.php)(.*)$; index index.php; if (-f $request_filename) { expires max; break; } if (!-e $request_filename) { rewrite ^/gallery3/index.php/(.+)$ /gallery3/index.php?kohana_uri=$1 last; } } location ~ \.php$ { include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_pass unix:/var/run/nginx/wmp.php-fpm.socket; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } This is my files in htdocs/gallery3: $ ls htdocs/gallery3/ LICENSE README application index.php installer lib modules php.ini robots.txt system themes var $ On this configure i have 404 on css and JS files on gallery3/combined.
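As a general note, Kohana-based applications such as Gallery 3 are usually deployed on nginx with try_files rather than if blocks. A minimal, untested sketch — the kohana_uri parameter name is taken from the rewrite in the configuration above, everything else is an assumption:

```nginx
location /gallery3 {
    # Serve existing files directly; send everything else to the
    # front controller, passing the requested URI to Kohana.
    try_files $uri $uri/ /gallery3/index.php?kohana_uri=$uri&$args;
}
```

Whether this also resolves the 404s on /gallery3/combined depends on how Gallery 3 builds those URLs, so it would need testing against the rewrite rules the application ships for Apache.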
-- Marcin Janowski(WMP) From janowski.m at gmail.com Mon Aug 13 13:51:38 2012 From: janowski.m at gmail.com (Marcin "WMP" Janowski) Date: Mon, 13 Aug 2012 15:51:38 +0200 Subject: Gallery 3.0.4 on nginx In-Reply-To: References: Message-ID: And on other links on gallery3/ (eq: gallery3/user_profile/show/2 or gallery3/logout) i have 404. 2012/8/13 Marcin "WMP" Janowski : > Hello, i cant run gallery3 on ongix. Now i use this configure: > > server { > listen (...); > server_name *.wmp.(...) wmp.(...); > > access_log /home/wmp/www/wmp.(...)/logs/access.log; > error_log /home/wmp/www/wmp.(...)/logs/error.log; > > root /home/wmp/www/wmp.(...)/htdocs; > index index.php index.html; > autoindex off; > > location /gallery3 { > if_modified_since off; > add_header Last-Modified ""; > fastcgi_index index.php; > fastcgi_split_path_info ^(.+.php)(.*)$; > index index.php; > if (-f $request_filename) { > expires max; > break; > } > if (!-e $request_filename) { > rewrite ^/gallery3/index.php/(.+)$ > /gallery3/index.php?kohana_uri=$1 last; > } > } > > location ~ \.php$ { > include /etc/nginx/fastcgi_params; > fastcgi_index index.php; > fastcgi_pass unix:/var/run/nginx/wmp.php-fpm.socket; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > } > } > > This is my files in htdocs/gallery3: > $ ls htdocs/gallery3/ > LICENSE README application index.php installer lib modules > php.ini robots.txt system themes var > $ > > On this configure i have 404 on css and JS files on gallery3/combined. 
> -- > Marcin Janowski(WMP) -- Marcin Janowski(WMP) From quan.nexthop at gmail.com Mon Aug 13 14:29:23 2012 From: quan.nexthop at gmail.com (Geoge.Q) Date: Mon, 13 Aug 2012 22:29:23 +0800 Subject: About resolver and 502 error In-Reply-To: <20120813094738.GE98664@lo0.su> References: <20120813094738.GE98664@lo0.su> Message-ID: thanks Ruslan, it works now :) On Mon, Aug 13, 2012 at 5:47 PM, Ruslan Ermilov wrote: > On Mon, Aug 13, 2012 at 05:20:40PM +0800, Geoge.Q wrote: > > hi Guys: > > When I run NGINX as a proxy and set a resolver(such as 8.8.8.8), I found > > out if DNS fail to resolve, 502 is popped up. > > We can not set two or more DNS entry, so how to handle such situation? > > To answer this specific question: upgrade. Recent versions of nginx > (1.1.7+, > 1.2.x, 1.3.x) support multiple resolvers. > > http://nginx.org/r/resolver > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kratzyyy at gmail.com Mon Aug 13 14:58:33 2012 From: kratzyyy at gmail.com (Chris White) Date: Mon, 13 Aug 2012 15:58:33 +0100 Subject: Nginx won't start after adding scgi to config Message-ID: I'm trying to set up rutorrent, and their wiki tells me to add this to my server block: location /RPC2 { include scgi_params; scgi_pass unix:/tmp/rpc.sock; } After I do that and restart nginx, I get this error: $ sudo /etc/init.d/nginx start Starting nginx: nginx: [emerg] unknown directive "scgi_pass" in /etc/nginx/conf.d/localhost.conf:24 nginx: configuration file /etc/nginx/nginx.conf test failed As far as I know the SCGI module has been built in by default since 0.8.42. I'm running 1.1.19 which I've verified with nginx -v. Here's my entire server block: http://pastebin.com/NiaRvuVn Note that everything runs fine if I remove the /RPC2 location block. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Aug 13 15:32:12 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 13 Aug 2012 16:32:12 +0100 Subject: Nginx won't start after adding scgi to config In-Reply-To: References: Message-ID: On 13 August 2012 15:58, Chris White wrote: > I'm trying to set up rutorrent, and their wiki tells me to add this to my > server block: > > location /RPC2 { > include scgi_params; > scgi_pass unix:/tmp/rpc.sock; > } > > After I do that and restart nginx, I get this error: > > $ sudo /etc/init.d/nginx start > Starting nginx: nginx: [emerg] unknown directive "scgi_pass" in > /etc/nginx/conf.d/localhost.conf:24 > nginx: configuration file /etc/nginx/nginx.conf test failed > > As far as I know the SCGI module has been built in by default since 0.8.42. > I'm running 1.1.19 which I've verified with nginx -v. Having verified what you say about the module (whilst I don't use it myself) here's a non-nginx-config/compile-y idea: I suggest you make /absolutely/ sure that the binary in your (root?) path that gets hit with "nginx -v" is the same one you end up running via the init script. As in, explicitly trace it through the script and the filesystem to ensure you're not hitting an older binary for some reason. Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Mon Aug 13 17:51:15 2012 From: nginx-forum at nginx.us (abstein2) Date: Mon, 13 Aug 2012 13:51:15 -0400 (EDT) Subject: Gzipping proxied content after using subs_filter fails In-Reply-To: References: Message-ID: <29f64b0504fe2fc851bb4fa35a0a87ac.NginxMailingListEnglish@forum.nginx.org> I have e-mailed Yao Weibin, but wanted to give an update here regarding my findings. It appears the issue is linked to long strings of text unbroken by a new line. 
With my settings, it appears that if a line contains ~40k bytes without being broken by a new line, something occurs, whether gzip is on or off, that causes the gibberish to appear on the page. Thank you all for your input so far. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229545,229651#msg-229651 From nginx-forum at nginx.us Mon Aug 13 23:46:56 2012 From: nginx-forum at nginx.us (double) Date: Mon, 13 Aug 2012 19:46:56 -0400 (EDT) Subject: NGINX crash In-Reply-To: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> References: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Maxim, Patch works. You are definitely a genius. Thanks a lot Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229394,229665#msg-229665 From nginx-forum at nginx.us Tue Aug 14 08:21:38 2012 From: nginx-forum at nginx.us (arabek) Date: Tue, 14 Aug 2012 04:21:38 -0400 (EDT) Subject: Configuration for generic $user.domain.tld to $homedir mapping Message-ID: <8b5dcc9eff7ae0ede61e0d1522a449a3.NginxMailingListEnglish@forum.nginx.org> Hi, I've looked through the web to find a solution, yet couldn't find anything suitable. Currently I'm using Apache, with a UserHostMacro directive that maps $user.domain.tld to /home/$user/public_html. Is it possible to configure nginx to do the same? Any hints on how to proceed? With kind regards, -- Wojtek Arabczyk Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229679,229679#msg-229679 From nginxlist at serverphorums.com Mon Aug 13 20:23:31 2012 From: nginxlist at serverphorums.com (nginxlist at serverphorums.com) Date: Mon, 13 Aug 2012 22:23:31 +0200 (CEST) Subject: how nginx handle connection with backend web server? Message-ID: <91c8ed792e18e37142bc7180f7511c4b.Nginx@www.serverphorums.com> Xuepeng Li, I have the same question as you asked. Do you understand now? Please share with us. Thanks.
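The generic $user.domain.tld-to-home-directory mapping asked about above can be expressed in nginx with a named regex capture in server_name, which makes the captured label available as a variable. An untested sketch, with example.com standing in for the real domain and the allowed user-name characters an assumption:

```nginx
server {
    listen 80;
    # Capture the left-most hostname label into $user ...
    server_name ~^(?<user>[a-z0-9][a-z0-9-]*)\.example\.com$;
    # ... and reuse it when building the document root.
    root /home/$user/public_html;
}
```

One caveat: root is expanded per request, so nginx will match any hostname of that shape whether or not the home directory exists; requests for unknown users then simply return 404.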
--- posted at http://www.serverphorums.com http://www.serverphorums.com/read.php?5,24955,545198#msg-545198 From nginx-forum at nginx.us Tue Aug 14 11:10:05 2012 From: nginx-forum at nginx.us (kj) Date: Tue, 14 Aug 2012 07:10:05 -0400 (EDT) Subject: 400 bad request with the other method such as info,list and lock Message-ID: <079ba6714636c3807ca2591b089adf15.NginxMailingListEnglish@forum.nginx.org> Hello Nginx team, I am trying to install our solution with Nginx as a reverse proxy. And I am having some problem shown as below. 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/test.doc/info HTTP/1.1" 400 971 "-" 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/info HTTP/1.1" 400 971 "-" 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/info HTTP/1.1" 400 971 "-" 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/info HTTP/1.1" 400 971 "-" 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/test.doc/info HTTP/1.1" 400 971 "-" 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/list HTTP/1.1" 400 971 "-" 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/info HTTP/1.1" 400 971 "-" As you can see, 400 bad request error occurred. I have already experienced that I used to handle pound reverse proxy server with our solution. Pound server has xhttp option that allow WEBDAV method from outside through itself. I could solve similar issue under pound server by using xhttp option. below description quoted from pound man page. xHTTP value Defines which HTTP verbs are accepted. The possible values are: (default) accept only standard HTTP requests (GET, POST, HEAD). additionally allow extended HTTP requests (PUT, DELETE). additionally allow standard WebDAV verbs (LOCK, UNLOCK, PROPFIND, PROPPATCH, SEARCH, MKCOL, MOVE, COPY, OPTIONS, TRACE, MKACTIVITY, CHECKOUT, MERGE, REPORT). 
additionally allow MS extensions WebDAV verbs (SUBSCRIBE, UNSUBSCRIBE, NOTIFY, BPROPFIND, BPROPPATCH, POLL, BMOVE, BCOPY, BDELETE, CONNECT). additionally allow MS RPC extensions verbs (RPC_IN_DATA, RPC_OUT_DATA). I want to know same function in Nginx server to solve issue that blocking INFO,LIST,LOCK,UNLOCK. Please let me know how to configure or the way. Thank you in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229681,229681#msg-229681 From nginx-forum at nginx.us Tue Aug 14 14:29:17 2012 From: nginx-forum at nginx.us (jimishjoban) Date: Tue, 14 Aug 2012 10:29:17 -0400 (EDT) Subject: Rewrite automatically unencodes question mark. In-Reply-To: <56fa8664b61c28ca32bfb4ffeb9f7dcc.NginxMailingListEnglish@forum.nginx.org> References: <56fa8664b61c28ca32bfb4ffeb9f7dcc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <36535a17f8122948495f9e28fbeb706d.NginxMailingListEnglish@forum.nginx.org> Thanks! that works but would it be possible to execute something else in background after doing a return? So I need to execute something which does the redirect and then inserts some logs in the database... But only after redirect, so its a bit faster... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229436,229684#msg-229684 From nginx-forum at nginx.us Tue Aug 14 21:10:58 2012 From: nginx-forum at nginx.us (DenisTRUFFAUT) Date: Tue, 14 Aug 2012 17:10:58 -0400 (EDT) Subject: SPDY compilation fail - Warnings In-Reply-To: <25f6e6fa8c4340c7f53aa037266146c4.NginxMailingListEnglish@forum.nginx.org> References: <25f6e6fa8c4340c7f53aa037266146c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9be35f5eff43cc22727cf98c07ba1c10.NginxMailingListEnglish@forum.nginx.org> Oh-oh... 
:O Another error with NginX 1.3.4 and patch 52 : objs/src/http/ngx_http_request.o: In function `ngx_http_ssl_handshake_handler': /usr/local/src/nginx-1.3.4/src/http/ngx_http_request.c:643: undefined reference to `SSL_get0_next_proto_negotiated' objs/src/http/modules/ngx_http_ssl_module.o: In function `ngx_http_ssl_merge_srv_conf': /usr/local/src/nginx-1.3.4/src/http/modules/ngx_http_ssl_module.c:469: undefined reference to `SSL_CTX_set_next_protos_advertised_cb' collect2: ld returned 1 exit status make[1]: *** [objs/nginx] Error 1 make[1]: Leaving directory `/usr/local/src/nginx-1.3.4' make: *** [build] Error 2 This time, I have installed open ssl 1.0.1. Is there something else required to compile ? (It is another machine) 11:41:41|root at SkeetMeet-1> openssl version OpenSSL 1.0.1c 10 May 2012 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,228980,229695#msg-229695 From christian.boenning at gmail.com Tue Aug 14 21:15:17 2012 From: christian.boenning at gmail.com (Christian Boenning) Date: Tue, 14 Aug 2012 23:15:17 +0200 Subject: SPDY compilation fail - Warnings In-Reply-To: <9be35f5eff43cc22727cf98c07ba1c10.NginxMailingListEnglish@forum.nginx.org> References: <25f6e6fa8c4340c7f53aa037266146c4.NginxMailingListEnglish@forum.nginx.org> <9be35f5eff43cc22727cf98c07ba1c10.NginxMailingListEnglish@forum.nginx.org> Message-ID: <129BE902-4F88-46EF-9D3A-BA0EAE595C85@gmail.com> Hi, do you have installed the libraries too? On ubuntu (for example) you need to install the package libssl-dev too. The binaries are not enough in this case. Regards, Chris Am 14.08.2012 um 23:10 schrieb "DenisTRUFFAUT" : > Oh-oh... 
:O > > Another error with NginX 1.3.4 and patch 52 : > > objs/src/http/ngx_http_request.o: In function > `ngx_http_ssl_handshake_handler': > /usr/local/src/nginx-1.3.4/src/http/ngx_http_request.c:643: undefined > reference to `SSL_get0_next_proto_negotiated' > objs/src/http/modules/ngx_http_ssl_module.o: In function > `ngx_http_ssl_merge_srv_conf': > /usr/local/src/nginx-1.3.4/src/http/modules/ngx_http_ssl_module.c:469: > undefined reference to `SSL_CTX_set_next_protos_advertised_cb' > collect2: ld returned 1 exit status > make[1]: *** [objs/nginx] Error 1 > make[1]: Leaving directory `/usr/local/src/nginx-1.3.4' > make: *** [build] Error 2 > > This time, I have installed open ssl 1.0.1. > Is there something else required to compile ? > (It is another machine) > > 11:41:41|root at SkeetMeet-1> openssl version > OpenSSL 1.0.1c 10 May 2012 > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,228980,229695#msg-229695 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ne at vbart.ru Tue Aug 14 21:21:37 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 15 Aug 2012 01:21:37 +0400 Subject: SPDY compilation fail - Warnings In-Reply-To: <9be35f5eff43cc22727cf98c07ba1c10.NginxMailingListEnglish@forum.nginx.org> References: <25f6e6fa8c4340c7f53aa037266146c4.NginxMailingListEnglish@forum.nginx.org> <9be35f5eff43cc22727cf98c07ba1c10.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201208150121.37470.ne@vbart.ru> On Wednesday 15 August 2012 01:10:58 DenisTRUFFAUT wrote: > Oh-oh... 
:O > > Another error with NginX 1.3.4 and patch 52 : > > objs/src/http/ngx_http_request.o: In function > `ngx_http_ssl_handshake_handler': > /usr/local/src/nginx-1.3.4/src/http/ngx_http_request.c:643: undefined > reference to `SSL_get0_next_proto_negotiated' > objs/src/http/modules/ngx_http_ssl_module.o: In function > `ngx_http_ssl_merge_srv_conf': > /usr/local/src/nginx-1.3.4/src/http/modules/ngx_http_ssl_module.c:469: > undefined reference to `SSL_CTX_set_next_protos_advertised_cb' > collect2: ld returned 1 exit status > make[1]: *** [objs/nginx] Error 1 > make[1]: Leaving directory `/usr/local/src/nginx-1.3.4' > make: *** [build] Error 2 > > This time, I have installed open ssl 1.0.1. > Is there something else required to compile ? > (It is another machine) > > 11:41:41|root at SkeetMeet-1> openssl version > OpenSSL 1.0.1c 10 May 2012 > It looks like that you have two different versions of OpenSSL installed, or at least the headers from the old one. wbr, Valentin V. Bartenev From ne at vbart.ru Tue Aug 14 21:30:28 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 15 Aug 2012 01:30:28 +0400 Subject: SPDY compilation fail - Warnings In-Reply-To: <201208150121.37470.ne@vbart.ru> References: <25f6e6fa8c4340c7f53aa037266146c4.NginxMailingListEnglish@forum.nginx.org> <9be35f5eff43cc22727cf98c07ba1c10.NginxMailingListEnglish@forum.nginx.org> <201208150121.37470.ne@vbart.ru> Message-ID: <201208150130.28103.ne@vbart.ru> On Wednesday 15 August 2012 01:21:37 Valentin V. Bartenev wrote: > On Wednesday 15 August 2012 01:10:58 DenisTRUFFAUT wrote: > > Oh-oh... 
:O > > > > Another error with NginX 1.3.4 and patch 52 : > > > > objs/src/http/ngx_http_request.o: In function > > `ngx_http_ssl_handshake_handler': > > /usr/local/src/nginx-1.3.4/src/http/ngx_http_request.c:643: undefined > > reference to `SSL_get0_next_proto_negotiated' > > objs/src/http/modules/ngx_http_ssl_module.o: In function > > `ngx_http_ssl_merge_srv_conf': > > /usr/local/src/nginx-1.3.4/src/http/modules/ngx_http_ssl_module.c:469: > > undefined reference to `SSL_CTX_set_next_protos_advertised_cb' > > collect2: ld returned 1 exit status > > make[1]: *** [objs/nginx] Error 1 > > make[1]: Leaving directory `/usr/local/src/nginx-1.3.4' > > make: *** [build] Error 2 > > > > This time, I have installed open ssl 1.0.1. > > Is there something else required to compile ? > > (It is another machine) > > > > 11:41:41|root at SkeetMeet-1> openssl version > > OpenSSL 1.0.1c 10 May 2012 > > It looks like that you have two different versions of OpenSSL installed, or > at least the headers from the old one. > Oh, it's a linker error. That means libs. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Aug 14 21:47:47 2012 From: nginx-forum at nginx.us (DenisTRUFFAUT) Date: Tue, 14 Aug 2012 17:47:47 -0400 (EDT) Subject: SPDY compilation fail - Warnings In-Reply-To: <129BE902-4F88-46EF-9D3A-BA0EAE595C85@gmail.com> References: <129BE902-4F88-46EF-9D3A-BA0EAE595C85@gmail.com> Message-ID: Thanks for your reply, So, what to do ? 1] apt-get remove open-ssl, then compile the 1.0.1c ? 2] specify the the lib path in the configure ? 3] ? -- I succeed to compile it, but it was another machine, another NginX version and another SPDY patch, and I don't really know what are the differences that could produce these errors. I just know OpenSSL version was the same : 1.0.1c. 
And it was installed in the same way, on a 0.9xx-something :

OPENSSL_VERSION='1.0.1c'
cd /usr/local/src
sudo rm -fr openssl-$OPENSSL_VERSION
sudo wget -O openssl-$OPENSSL_VERSION.tar.gz "http://www.openssl.org/source/openssl-$OPENSSL_VERSION.tar.gz"
sudo tar -xvzf openssl-$OPENSSL_VERSION.tar.gz
sudo rm -fr openssl-$OPENSSL_VERSION.tar.gz
cd openssl-$OPENSSL_VERSION
sudo ./config threads \
zlib \
--prefix=/usr \
&& sudo make && sudo make install && cd /usr/local/src
sudo rm -fr openssl-$OPENSSL_VERSION

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,228980,229701#msg-229701

From nginx-forum at nginx.us  Tue Aug 14 22:08:56 2012
From: nginx-forum at nginx.us (DenisTRUFFAUT)
Date: Tue, 14 Aug 2012 18:08:56 -0400 (EDT)
Subject: SPDY compilation fail - Warnings
In-Reply-To:
References: <129BE902-4F88-46EF-9D3A-BA0EAE595C85@gmail.com>
Message-ID:

Hexa -> yep, libssl-dev was installed before the compilation

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,228980,229702#msg-229702

From agentzh at gmail.com  Wed Aug 15 05:21:55 2012
From: agentzh at gmail.com (agentzh)
Date: Tue, 14 Aug 2012 22:21:55 -0700
Subject: [ANN] ngx_openresty stable version 1.2.1.14 released
In-Reply-To:
References:
Message-ID:

Hello, guys!

I am happy to announce the new stable version of ngx_openresty, 1.2.1.14:

http://openresty.org/#Download

Special thanks go to all our contributors and users for helping make this happen!

Below is the complete change log for this release, as compared to the last (development) release, 1.2.1.13:

* bugfix: the dtrace static probes did not build on FreeBSD, Solaris, and Mac OS X (i.e., when the "--with-dtrace-probes" configure option is specified).

* bugfix: the systemtap tapset files and the "stap-nginx" script were even installed on non-Linux systems.

* upgraded LuaNginxModule to 0.5.14.

* bugfix: the dtrace provider file did not compile on FreeBSD, Solaris, and Mac OS X.
The following components are bundled in this stable version:

* LuaJIT-2.0.0-beta10
* array-var-nginx-module-0.03rc1
* auth-request-nginx-module-0.2
* drizzle-nginx-module-0.1.2
* echo-nginx-module-0.41
* encrypted-session-nginx-module-0.02
* form-input-nginx-module-0.07rc5
* headers-more-nginx-module-0.18
* iconv-nginx-module-0.10rc7
* lua-5.1.5
* lua-cjson-1.0.3
* lua-rds-parser-0.05
* lua-redis-parser-0.09
* lua-resty-dns-0.05
* lua-resty-memcached-0.07
* lua-resty-mysql-0.10
* lua-resty-redis-0.11
* lua-resty-string-0.06
* lua-resty-upload-0.03
* memc-nginx-module-0.13rc3
* nginx-1.2.1
* ngx_coolkit-0.2rc1
* ngx_devel_kit-0.2.17
* ngx_lua-0.5.14
* ngx_postgres-1.0rc1
* rds-csv-nginx-module-0.05rc2
* rds-json-nginx-module-0.12rc10
* redis-nginx-module-0.3.6
* redis2-nginx-module-0.08rc4
* set-misc-nginx-module-0.22rc8
* srcache-nginx-module-0.14
* xss-nginx-module-0.03rc9

The HTML version of the change log contains some helpful hyper-links and can be browsed here:

http://openresty.org/#ChangeLog1002001

OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies. See OpenResty's homepage for details:

http://openresty.org/

We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here:

http://qa.openresty.org

Have fun!
-agentzh

From smallfish.xy at gmail.com  Wed Aug 15 05:28:16 2012
From: smallfish.xy at gmail.com (smallfish)
Date: Wed, 15 Aug 2012 13:28:16 +0800
Subject: [openresty] [ANN] ngx_openresty stable version 1.2.1.14 released
In-Reply-To:
References:
Message-ID:

great!
--
blog: http://chenxiaoyu.org

On Wed, Aug 15, 2012 at 1:21 PM, agentzh wrote:
> Hello, guys!
> > I am happy to announce the new stable version of ngx_openresty, 1.2.1.14: > > http://openresty.org/#Download > > Special thanks go to all our contributors and users for helping make > this happen! > > Below is the complete change log for this release, as compared to the > last (development) release, 1.2.1.13: > > * bugfix: the dtrace static probes did not build on FreeBSD, > Solaris, and Mac OS X (i.e., when the "--with-dtrace-probes" > configure option is specified). > > * bugfix: the systemtap tapset files and the "stap-nginx" script > were even installed on non-Linux systems. > > * upgraded LuaNginxModule to 0.5.14. > > * bugfix: the dtrace provider file did not compile on FreeBSD, > Solaris, and Mac OS X. > > The following components are bundled in this stable version: > > * LuaJIT-2.0.0-beta10 > > * array-var-nginx-module-0.03rc1 > > * auth-request-nginx-module-0.2 > > * drizzle-nginx-module-0.1.2 > > * echo-nginx-module-0.41 > > * encrypted-session-nginx-module-0.02 > > * form-input-nginx-module-0.07rc5 > > * headers-more-nginx-module-0.18 > > * iconv-nginx-module-0.10rc7 > > * lua-5.1.5 > > * lua-cjson-1.0.3 > > * lua-rds-parser-0.05 > > * lua-redis-parser-0.09 > > * lua-resty-dns-0.05 > > * lua-resty-memcached-0.07 > > * lua-resty-mysql-0.10 > > * lua-resty-redis-0.11 > > * lua-resty-string-0.06 > > * lua-resty-upload-0.03 > > * memc-nginx-module-0.13rc3 > > * nginx-1.2.1 > > * ngx_coolkit-0.2rc1 > > * ngx_devel_kit-0.2.17 > > * ngx_lua-0.5.14 > > * ngx_postgres-1.0rc1 > > * rds-csv-nginx-module-0.05rc2 > > * rds-json-nginx-module-0.12rc10 > > * redis-nginx-module-0.3.6 > > * redis2-nginx-module-0.08rc4 > > * set-misc-nginx-module-0.22rc8 > > * srcache-nginx-module-0.14 > > * xss-nginx-module-0.03rc9 > > The HTML version of the change contains some helpful hyper-links and > can be browsed here: > > http://openresty.org/#ChangeLog1002001 > > OpenResty (aka. 
ngx_openresty) is a full-fledged web application > server by bundling the standard Nginx core, lots of 3rd-party Nginx > modules, as well as most of their external dependencies. See > OpenResty's homepage for details: > > http://openresty.org/ > > We have been running extensive testing on our Amazon EC2 test cluster > and ensure that all the components (including the Nginx core) play > well together. The latest test report can always be found here: > > http://qa.openresty.org > > Have fun! > -agentzh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nvezes at live.com Wed Aug 15 07:08:24 2012 From: nvezes at live.com (Flavio Oliveira) Date: Wed, 15 Aug 2012 07:08:24 +0000 Subject: HttpUseridModule: Cookie lifetime In-Reply-To: <20120719131456.GB31671@mdounin.ru> References: , <20120719131456.GB31671@mdounin.ru> Message-ID: Hi, Is it possible to extend the cookie lifetime (expiration time for the cookie)? I would like to set short periods (userid_expires 2h) and extend that (for more 2 h) if the session is still active. Regards, Flavio -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Aug 15 10:58:40 2012 From: nginx-forum at nginx.us (wouterdebie) Date: Wed, 15 Aug 2012 06:58:40 -0400 (EDT) Subject: x-accel-redirect and gzip_static In-Reply-To: References: Message-ID: I'm experiencing the same problem. Have you managed to find a workaround/fix?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,220280,229715#msg-229715 From mdounin at mdounin.ru Wed Aug 15 11:50:23 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Aug 2012 15:50:23 +0400 Subject: 400 bad request with the other method such as info,list and lock In-Reply-To: <079ba6714636c3807ca2591b089adf15.NginxMailingListEnglish@forum.nginx.org> References: <079ba6714636c3807ca2591b089adf15.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120815115023.GJ40452@mdounin.ru> Hello! On Tue, Aug 14, 2012 at 07:10:05AM -0400, kj wrote: > Hello Nginx team, > > I am trying to install our solution with Nginx as a reverse proxy. And I am > having some problem shown as below. > > 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/test.doc/info > HTTP/1.1" 400 971 "-" > 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/info HTTP/1.1" > 400 971 "-" > 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/info HTTP/1.1" > 400 971 "-" > 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/info HTTP/1.1" > 400 971 "-" > 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/test.doc/info > HTTP/1.1" 400 971 "-" > 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/list HTTP/1.1" > 400 971 "-" > 172.30.2.84 - - [14/Aug/2012:19:51:17 +0900] "GET /api/local/info HTTP/1.1" > 400 971 "-" > > As you can see, 400 bad request error occurred. I have already experienced > that I used to handle pound reverse proxy server with our solution. > Pound server has xhttp option that allow WEBDAV method from outside through > itself. I could solve similar issue under pound server by using xhttp > option. below description quoted from pound man page. All syntactically correct methods are allowed by nginx by default when proxying. If method isn't allowed for a particular resource, nginx will return 405 (Method Not Allowed), not 400 (Bad Request). 
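[As an aside on the method question: nginx has no direct equivalent of pound's xhttp option, because proxied methods are not restricted by default; the directive that exists is for the opposite case, restricting methods. A minimal sketch, assuming an illustrative location and upstream name:]

```nginx
# Sketch only -- /api/ and "backend" are illustrative names.
# nginx passes any syntactically valid method to the backend by default.
location /api/ {
    # Allow GET (HEAD is implied) plus the WebDAV verbs PROPFIND,
    # LOCK and UNLOCK; all other methods are denied by the access module.
    limit_except GET PROPFIND LOCK UNLOCK {
        deny all;
    }
    proxy_pass http://backend;
}
```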
The above errors aren't result of method problems for sure, as you can see from request line logged the method used was GET. To find out the reason for errors in question take a look at error log. And you may also want to make sure errors are actually returned by nginx, not by your backend. The size logged (971) suggests it's at least not nginx standard error page. Maxim Dounin From nginx-forum at nginx.us Wed Aug 15 14:30:00 2012 From: nginx-forum at nginx.us (n1xman) Date: Wed, 15 Aug 2012 10:30:00 -0400 (EDT) Subject: Unexpected response code:400 while using nginx_tcp_proxy_module for WebSocket Message-ID: <40399ff3e167b32d79e4014ec0061cfa.NginxMailingListEnglish@forum.nginx.org> Hello, We are trying to use nginx_tcp_proxy_module for WebSocket with the back-end CometD/Jetty. Direct CometD exposed to public is working fine and it goes with WebSocket but through nginx the transport switch to long-poll after throwing above error. Following is the request-response headers I see while trying to establish the WebSocket connection. 
GET /CometServer/cometd HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: comet.example.com
Origin: http://comet.example.com
Sec-WebSocket-Key: yL46RAYQU5/wH73WE6y0Xw==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: x-webkit-deflate-frame
Cookie: JSESSIONID=5v8ee7guk9y61x235jgu2t3ng; __utma=260652952.1760465236.1333589697.1333589697.1342161899.2; __utmz=260652952.1342161899.2.2.utmcsr=; BAYEUX_BROWSER=7e3f11lg7gmow6t4mh5welcj5oxr

HTTP/1.0 400 Bad Request
Date: Wed, 15 Aug 2012 13:12:50 GMT
Access-Control-Allow-Origin: http://comet.example.com
Access-Control-Allow-Credentials: true
Content-Type: text/html;charset=ISO-8859-1
Cache-Control: must-revalidate,no-cache,no-store
Content-Length: 1413
Server: Jetty(7.6.3.v20120416)
X-Cache: MISS from proxy
Connection: keep-alive

I suspect this is because CometD does not see the "Upgrade: websocket" header, so it treats the request as a failed websocket request and returns 400 "Unexpected response code".

I fully understand that nginx does not natively support WebSocket yet, but we would like to at least try the ngx_tcp_proxy module, as it has a WebSocket module. Following is my config.
tcp {
    upstream websockets {
        server 172.17.241.191:8484;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 172.17.241.191:9000;
        server_name comet.example.com;
        tcp_nodelay on;
        websocket_pass websockets;

        access_log /var/log/nginx/tcp_access.log;
    }
}

nginx -V
nginx version: nginx/1.2.1
built by gcc 4.1.2 20080704 (Red Hat 4.1.2-48)
TLS SNI support disabled
configure arguments: --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail_ssl_module --with-file-aio --with-debug --with-cc-opt='-O2 -g -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables' --without-http_uwsgi_module --without-http_scgi_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/yaoweibin-nginx_tcp_proxy_module-a40c99a --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/simpl-ngx_devel_kit-24202b4 --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/agentzh-echo-nginx-module-080c0a1 --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/agentzh-set-misc-nginx-module-87d0ab2 --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/mikewest-nginx-static-etags-25bfaf9
--add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/nginx-sticky-module-1.0 --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/agentzh-memc-nginx-module-8befc56 --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/agentzh-srcache-nginx-module-8df221e uname -rop 2.6.18-194.26.1.el5PAE i686 GNU/Linux I would appreciate some help from you guys. Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229724,229724#msg-229724 From nginx-forum at nginx.us Wed Aug 15 15:35:36 2012 From: nginx-forum at nginx.us (martinlove) Date: Wed, 15 Aug 2012 11:35:36 -0400 (EDT) Subject: Single server with multiple hierarchies In-Reply-To: <20120801190811.GF32371@craic.sysops.org> References: <20120801190811.GF32371@craic.sysops.org> Message-ID: Hi yashgt, I'm having a similar issue trying to get Magento and WordPress running on a single IP. On Apache this is easy using vhost. I've never used nginx before, so don't know it all that well. I've read and done lots of googling and your post seems to be the closest to what I'm trying to achieve. Would it be possible for you to post a copy of your config? Cheers, Martin Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229189,229726#msg-229726 From edho at myconan.net Wed Aug 15 15:50:25 2012 From: edho at myconan.net (Edho Arief) Date: Wed, 15 Aug 2012 22:50:25 +0700 Subject: Single server with multiple hierarchies In-Reply-To: References: <20120801190811.GF32371@craic.sysops.org> Message-ID: On Wed, Aug 15, 2012 at 10:35 PM, martinlove wrote: > Hi yashgt, > > I'm having a similar issue trying to get Magento and WordPress running on a > single IP. On Apache this is easy using vhost. I've never used nginx before, > so don't know it all that well. I've read and done lots of googling and your > post seems to be the closest to what I'm trying to achieve. Would it be
Check this docs: http://www.nginx.org/en/docs/http/server_names.html From nbubingo at gmail.com Wed Aug 15 15:53:41 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 15 Aug 2012 23:53:41 +0800 Subject: Unexpected response code:400 while using nginx_tcp_proxy_module for WebSocket In-Reply-To: <40399ff3e167b32d79e4014ec0061cfa.NginxMailingListEnglish@forum.nginx.org> References: <40399ff3e167b32d79e4014ec0061cfa.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, Can you open an websocket for me to connect from internet? And tell me the test method? You can email me to my private email: yaoweibin at gmail.com Thanks. 2012/8/15 n1xman : > Hello, > > We are trying to use nginx_tcp_proxy_module for WebSocket with the back-end > CometD/Jetty. Direct CometD exposed to public is working fine and it goes > with WebSocket but through nginx the transport switch to long-poll after > throwing above error. Following is the request-response headers I see while > trying to establish the WebSocket connection. 
> > GET /CometServer/cometd HTTP/1.1 > Upgrade: websocket > Connection: Upgrade > Host: comet.example.com > Origin: http://comet.example.com > Sec-WebSocket-Key: yL46RAYQU5/wH73WE6y0Xw== > Sec-WebSocket-Version: 13 > Sec-WebSocket-Extensions: x-webkit-deflate-frame > Cookie: JSESSIONID=5v8ee7guk9y61x235jgu2t3ng; > __utma=260652952.1760465236.1333589697.1333589697.1342161899.2; > __utmz=260652952.1342161899.2.2.utmcsr=; > BAYEUX_BROWSER=7e3f11lg7gmow6t4mh5welcj5oxr > > > HTTP/1.0 400 Bad Request > Date: Wed, 15 Aug 2012 13:12:50 GMT > Access-Control-Allow-Origin: http://comet.example.com > Access-Control-Allow-Credentials: true > Content-Type: text/html;charset=ISO-8859-1 > Cache-Control: must-revalidate,no-cache,no-store > Content-Length: 1413 > Server: Jetty(7.6.3.v20120416) > X-Cache: MISS from proxy > Connection: keep-alive > > I suspect this is due to CometD not seeing "Upgrade: websocket" header thus > it considered as failed websocket request and return 400 "Unexpected > response code" > > I fully understand that nginx is not natively support WebSocket as yet but > we would like to try at least ngx_tcp_proxy module as it has a WebSocoket > module. Following is my config. 
> > > tcp { > upstream websockets { > server 172.17.241.191:8484; > check interval=3000 rise=2 fall=5 timeout=1000; > > } > > server { > listen 172.17.241.191:9000; > server_name comet.example.com; > tcp_nodelay on; > websocket_pass websockets; > > access_log /var/log/nginx/tcp_access.log; > > } > } > > > nginx -V > nginx version: nginx/1.2.1 > built by gcc 4.1.2 20080704 (Red Hat 4.1.2-48) > TLS SNI support disabled > configure arguments: --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx > --with-http_ssl_module --with-http_realip_module --with-http_addition_module > --with-http_sub_module --with-http_dav_module --with-http_flv_module > --with-http_mp4_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-mail_ssl_module --with-file-aio > --with-debug --with-cc-opt='-O2 -g -m32 -march=i386 -mtune=generic > -fasynchronous-unwind-tables' --without-http_uwsgi_module > --without-http_scgi_module --without-mail_pop3_module > --without-mail_imap_module --without-mail_smtp_module > --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/yaoweibin-nginx_tcp_proxy_module-a40c99a > --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/simpl-ngx_devel_kit-24202b4 > --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/agentzh-echo-nginx-module-080c0a1 > --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/agentzh-set-misc-nginx-module-87d0ab2 > 
--add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/mikewest-nginx-static-etags-25bfaf9 > --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/nginx-sticky-module-1.0 > --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/agentzh-memc-nginx-module-8befc56 > --add-module=/usr/local/hirantha/rpmbuild/BUILD/nginx-1.2.1/contrib/agentzh-srcache-nginx-module-8df221e > > uname -rop > 2.6.18-194.26.1.el5PAE i686 GNU/Linux > > I would appreciate some help from you guys. > > Thanks in advance. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229724,229724#msg-229724 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Aug 15 18:52:09 2012 From: nginx-forum at nginx.us (TECK) Date: Wed, 15 Aug 2012 14:52:09 -0400 (EDT) Subject: Configure specific upstream node to spit 30x error Message-ID: <0a0af147c396d8b23a01dc2618361c21.NginxMailingListEnglish@forum.nginx.org> Hi all, I was wondering if there is feasible way to have a node spit a specific 30x error (i.e. 307) if it becomes unreachable, instead of passing to next upstream node? upstream mycluster { server 192.168.1.2; server 192.168.1.3; server 192.168.1.4; } If node 192.168.1.3 is down, it should return a 307, instead of passing to 192.168.1.4. Thank you for your help. 
Regards, Floren Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229731,229731#msg-229731 From nginx-forum at nginx.us Wed Aug 15 19:21:14 2012 From: nginx-forum at nginx.us (TECK) Date: Wed, 15 Aug 2012 15:21:14 -0400 (EDT) Subject: Configure specific upstream node to spit 30x error In-Reply-To: <0a0af147c396d8b23a01dc2618361c21.NginxMailingListEnglish@forum.nginx.org> References: <0a0af147c396d8b23a01dc2618361c21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0de901cb6cad689d70fdca0f95f72506.NginxMailingListEnglish@forum.nginx.org> Just to clarify, my goal is to make sure a php-fpm POST is passed to next node, if current fails. From my understanding the 307 error is designed to do this. Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229731,229734#msg-229734 From mdounin at mdounin.ru Wed Aug 15 20:12:56 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Aug 2012 00:12:56 +0400 Subject: Configure specific upstream node to spit 30x error In-Reply-To: <0a0af147c396d8b23a01dc2618361c21.NginxMailingListEnglish@forum.nginx.org> References: <0a0af147c396d8b23a01dc2618361c21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120815201255.GO40452@mdounin.ru> Hello! On Wed, Aug 15, 2012 at 02:52:09PM -0400, TECK wrote: > Hi all, > > I was wondering if there is feasible way to have a node spit a specific 30x > error (i.e. 307) if it becomes unreachable, instead of passing to next > upstream node? > > upstream mycluster { > server 192.168.1.2; > server 192.168.1.3; > server 192.168.1.4; > } > > If node 192.168.1.3 is down, it should return a 307, instead of passing to > 192.168.1.4. 
192.168.1.4.

Something like this should work:

error_page 502 504 =307 http://example.com/;
proxy_next_upstream off;

See here for details:

http://nginx.org/r/error_page
http://nginx.org/r/proxy_next_upstream

Maxim Dounin

From nginx-forum at nginx.us  Wed Aug 15 20:37:21 2012
From: nginx-forum at nginx.us (martinlove)
Date: Wed, 15 Aug 2012 16:37:21 -0400 (EDT)
Subject: Single server with multiple hierarchies
In-Reply-To:
References:
Message-ID: <3f7554f5fa7ea275ab1480614ab32306.NginxMailingListEnglish@forum.nginx.org>

Hi Yash,

Thanks for your prompt response. I'll have a look at the docs.

Cheers,
Martin

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229189,229736#msg-229736

From nginx-forum at nginx.us  Wed Aug 15 21:12:43 2012
From: nginx-forum at nginx.us (TECK)
Date: Wed, 15 Aug 2012 17:12:43 -0400 (EDT)
Subject: Configure specific upstream node to spit 30x error
In-Reply-To: <20120815201255.GO40452@mdounin.ru>
References: <20120815201255.GO40452@mdounin.ru>
Message-ID:

Thank you, Maxim. Related to the second part of my question: will the POST be passed properly to the next node if the first one fails? If it is not, is there a configuration setting that will enable that feature?

Regards,
Floren

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229731,229737#msg-229737

From mdounin at mdounin.ru  Wed Aug 15 21:30:45 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 16 Aug 2012 01:30:45 +0400
Subject: Configure specific upstream node to spit 30x error
In-Reply-To:
References: <20120815201255.GO40452@mdounin.ru>
Message-ID: <20120815213045.GS40452@mdounin.ru>

Hello!

On Wed, Aug 15, 2012 at 05:12:43PM -0400, TECK wrote:

> Thank you, Maxim. Related to second part of my question:
> Will the POST be passed properly to next node, if first one fails? If it is
> not, is there a configuration setting that will enable that feature?

Once you return 307 redirect to a client - it's up to a client to decide what to do.
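[A fuller sketch of the 307 setup Maxim describes, shown in context; the upstream addresses come from the thread, while the listen port and redirect target are illustrative:]

```nginx
upstream mycluster {
    server 192.168.1.2;
    server 192.168.1.3;
    server 192.168.1.4;
}

server {
    listen 80;

    location / {
        proxy_pass http://mycluster;
        # Do not retry the request on the next upstream server...
        proxy_next_upstream off;
        # ...instead map gateway errors to a 307 redirect, so the
        # client re-issues the request (including a POST) itself.
        error_page 502 504 =307 http://example.com/;
    }
}
```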
If you want nginx to pass requests to a next upstream server, just leave proxy_next_upstream as set by default to "error timeout". It works for all request methods. See here for details: http://nginx.org/r/proxy_next_upstream Maxim Dounin From nginx-forum at nginx.us Thu Aug 16 07:38:06 2012 From: nginx-forum at nginx.us (plcsplitter) Date: Thu, 16 Aug 2012 03:38:06 -0400 (EDT) Subject: Configure specific upstream node to spit 30x error In-Reply-To: <0a0af147c396d8b23a01dc2618361c21.NginxMailingListEnglish@forum.nginx.org> References: <0a0af147c396d8b23a01dc2618361c21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1a2b5c3452462b5062506c962eb17793.NginxMailingListEnglish@forum.nginx.org> There will be a solution of, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229731,229766#msg-229766 From nginx-forum at nginx.us Thu Aug 16 07:39:26 2012 From: nginx-forum at nginx.us (plcsplitter) Date: Thu, 16 Aug 2012 03:39:26 -0400 (EDT) Subject: 400 bad request with the other method such as info,list and lock In-Reply-To: <079ba6714636c3807ca2591b089adf15.NginxMailingListEnglish@forum.nginx.org> References: <079ba6714636c3807ca2591b089adf15.NginxMailingListEnglish@forum.nginx.org> Message-ID: <144c437fab4ae3371eed95c5b7e2c826.NginxMailingListEnglish@forum.nginx.org> Server error, that is, files are missing 1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229681,229767#msg-229767 From nginx-forum at nginx.us Thu Aug 16 07:40:57 2012 From: nginx-forum at nginx.us (plcsplitter) Date: Thu, 16 Aug 2012 03:40:57 -0400 (EDT) Subject: x-accel-redirect and gzip_static In-Reply-To: References: Message-ID: Redirect is very necessary, you could pass the weight . 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,220280,229769#msg-229769 From mdounin at mdounin.ru Thu Aug 16 12:28:27 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Aug 2012 16:28:27 +0400 Subject: NGINX crash In-Reply-To: References: <28e50aac81607c312ebc3337885a7532.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120816122827.GV40452@mdounin.ru> Hello! On Mon, Aug 13, 2012 at 07:46:56PM -0400, double wrote: > Hi Maxim, > > Patch works. > You are definitely a genius. > > Thanks a lot Patch committed, thanks for testing. Maxim Dounin From nbubingo at gmail.com Thu Aug 16 15:17:23 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Thu, 16 Aug 2012 23:17:23 +0800 Subject: Gzipping proxied content after using subs_filter fails In-Reply-To: <29f64b0504fe2fc851bb4fa35a0a87ac.NginxMailingListEnglish@forum.nginx.org> References: <29f64b0504fe2fc851bb4fa35a0a87ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks. This bug is fixed in the latest revision (https://github.com/yaoweibin/ngx_http_substitutions_filter_module) 2012/8/14 abstein2 : > I have e-mailed Yao Weibin, but wanted to give an update here regarding > my findings. It appears the issue is linked to long strings of text > unbroken by a new line. With my settings, it appears that if a line > contains ~40k bytes without being broken by a new line, something occurs > whether gzip is on or off, that causes the gibberish to appear on the > page. > > Thank you all for your input so far.
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229545,229651#msg-229651
> >
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Thu Aug 16 23:18:51 2012
From: nginx-forum at nginx.us (deoren)
Date: Thu, 16 Aug 2012 19:18:51 -0400 (EDT)
Subject: GeSHi language file for nginx config
In-Reply-To:
References:
Message-ID: <86f2aa020df01c395170d9c16668db76.NginxMailingListEnglish@forum.nginx.org>

Hi,

One of the devs of 'GeSHi - Generic Syntax Highlighter' replied back and offered to include the nginx language file in the official distribution if someone is willing to clean up the issues preventing the file from passing the language tests:

http://sourceforge.net/tracker/?func=detail&atid=670234&aid=3554024&group_id=114997

I don't feel I have the necessary skills to do so, but I plan on giving it a shot when I get a chance.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,123194,229812#msg-229812

From nginx-forum at nginx.us Fri Aug 17 10:23:04 2012
From: nginx-forum at nginx.us (amonmitch)
Date: Fri, 17 Aug 2012 06:23:04 -0400 (EDT)
Subject: how to check for headers
Message-ID: <75bcb08616a2c2f3a268aa37ec6e4bff.NginxMailingListEnglish@forum.nginx.org>

Hello,

How would you check for something like HTTP:Accept-Language, and in general any header, to see whether it is equal to some value, empty, etc.? Apache allows this to be checked inside a mod_rewrite config with a notation similar to the following:

%{HTTP:Accept-Language}

Is there a similar variable to be used in nginx config?
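For reference: nginx exposes each request header as an $http_* variable, lowercased with dashes turned into underscores, so Accept-Language becomes $http_accept_language. A minimal sketch of testing it with "map" follows; the language codes and file names here are illustrative assumptions, not from this thread:

```nginx
# Derive a preferred language from the Accept-Language request header.
# $http_accept_language holds the raw header value, empty if the header is absent.
map $http_accept_language $preferred_lang {
    default  en;    # fallback: header missing or unmatched
    ~^de     de;    # header value starts with "de"
    ~^fr     fr;
}

server {
    listen 80;

    location / {
        # Serve a language-specific page, falling back to English.
        try_files /index.$preferred_lang.html /index.en.html =404;
    }
}
```

The same $http_* pattern covers any header, e.g. $http_user_agent or $http_x_forwarded_for.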
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229839,229839#msg-229839

From francis at daoine.org Fri Aug 17 13:46:45 2012
From: francis at daoine.org (Francis Daly)
Date: Fri, 17 Aug 2012 14:46:45 +0100
Subject: how to check for headers
In-Reply-To: <75bcb08616a2c2f3a268aa37ec6e4bff.NginxMailingListEnglish@forum.nginx.org>
References: <75bcb08616a2c2f3a268aa37ec6e4bff.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20120817134645.GQ32371@craic.sysops.org>

On Fri, Aug 17, 2012 at 06:23:04AM -0400, amonmitch wrote:

Hi there,

> How would you check for sth like HTTP:Accept-Language? and in general all
> headers? if it is equal to some value, empty or etc. Apache would allow this
> to be checked in a notation similar to the following inside a mod_rewrite
> config:
>
> %{HTTP:Accept-Language}
>
> Is there a similar variable to be used in nginx config?

Variables are at http://nginx.org/en/docs/http/ngx_http_core_module.html#variables

"map" is at http://nginx.org/r/map

"if" is at http://nginx.org/r/if

Be aware of when you shouldn't use "if" inside "location", as described at http://wiki.nginx.org/IfIsEvil

Depending on what exactly you want to do, "map" can frequently be used instead of "if".

f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us Fri Aug 17 16:18:55 2012
From: nginx-forum at nginx.us (theguy)
Date: Fri, 17 Aug 2012 12:18:55 -0400 (EDT)
Subject: nginx running as nginx.conf
Message-ID:

Hi guys,

I have nginx on two servers. On one of them I'm getting the following correct output:

# netstat -tulpn | grep -e nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 4735/nginx
# ps aux | grep nginx
root 4735 0.0 0.0 41028 944 ? Ss Aug10 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx 4736 1.8 0.0 45644 6160 ? R Aug10 177:10 nginx: worker process

However, on the other I'm getting the following wrong output:

# netstat -tulpn | grep -e nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 7650/nginx.conf
# ps aux | grep nginx
root 7650 0.0 0.0 44632 1092 ? Ss 00:27 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx 7651 0.1 0.2 48412 5552 ? S 00:27 1:04 nginx: worker process

Both servers are working fine and nginx is serving the pages without problems; however, I don't know why on one of my servers it is running as 'nginx.conf' when it should be running as 'nginx'.

Any ideas?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229866,229866#msg-229866

From nginx-forum at nginx.us Fri Aug 17 16:33:14 2012
From: nginx-forum at nginx.us (magicdrums)
Date: Fri, 17 Aug 2012 12:33:14 -0400 (EDT)
Subject: nginx running as nginx.conf
In-Reply-To:
References:
Message-ID: <852380b5b4fb3eb4e068d1e42988cbc0.NginxMailingListEnglish@forum.nginx.org>

Hi, please paste your /etc/nginx/nginx.conf. Thanks.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229866,229867#msg-229867

From nginx-forum at nginx.us Fri Aug 17 16:48:54 2012
From: nginx-forum at nginx.us (theguy)
Date: Fri, 17 Aug 2012 12:48:54 -0400 (EDT)
Subject: nginx running as nginx.conf
In-Reply-To:
References:
Message-ID:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_rlimit_nofile 8192;

events {
    worker_connections 7168;
    use epoll;
}

http {
    server_names_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    keepalive_timeout 10;

    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/plain application/x-javascript text/xml text/css;

    ignore_invalid_headers on;
    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;
    connection_pool_size 256;
    client_header_buffer_size 4k;
    large_client_header_buffers 4 32k;
    request_pool_size 4k;
    output_buffers 4 32k;
    postpone_output 1460;

    include /etc/nginx/conf.d/*.conf;
}

Both are running the same nginx.conf, so I don't know what's going on. I'm running CentOS 6.3 64-bit.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229866,229868#msg-229868

From nginx-forum at nginx.us Fri Aug 17 17:33:34 2012
From: nginx-forum at nginx.us (Joseba)
Date: Fri, 17 Aug 2012 13:33:34 -0400 (EDT)
Subject: Can I create a call (method post) from auth_basic?
Message-ID:

Hi, sorry for my English; I'm new to nginx.

I have nginx configured as a reverse proxy in front of OpenERP. OpenERP needs a call (a POST request) to log the user in. The idea is to use auth_basic to log in to OpenERP automatically.

Any idea about this? Thanks in advance!
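A note on the auth_basic idea above: stock nginx cannot issue the login POST to OpenERP on the client's behalf, but once auth_basic succeeds the authenticated user name is available in the $remote_user variable and can be forwarded to the backend, which would then have to create its own session from it. A rough sketch only; the certificate paths, the header name, and the assumption that OpenERP listens on its default port 8069 are illustrative:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/proxy.crt;   # assumed paths
    ssl_certificate_key /etc/nginx/ssl/proxy.key;

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd;   # htpasswd-format file

        proxy_pass http://127.0.0.1:8069;           # OpenERP default port
        proxy_set_header Host $host;
        # Forward the authenticated name; the backend must explicitly
        # trust this header and perform the actual login itself.
        proxy_set_header X-Authenticated-User $remote_user;
    }
}
```

The hard part remains on the OpenERP side: something there has to accept the header and create the session, since auth_basic alone only gates access.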
Joseba

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229869,229869#msg-229869

From johannes_graumann at web.de Fri Aug 17 21:39:56 2012
From: johannes_graumann at web.de (Johannes Graumann)
Date: Fri, 17 Aug 2012 23:39:56 +0200
Subject: How to convert this apache rewrite?
Message-ID:

Hi,

How would I convert this

RewriteEngine on
RewriteRule /Microsoft-Server-ActiveSync(.*) /path/to/your/tine20_installation/index.php$1 [E=REDIRECT_ACTIVESYNC:true,E=REMOTE_USER:%{HTTP:Authorization}]

to an nginx server definition?

Thanks for any hints.

Sincerely, Joh

From zellster at gmail.com Fri Aug 17 23:18:32 2012
From: zellster at gmail.com (Adam Zell)
Date: Fri, 17 Aug 2012 16:18:32 -0700
Subject: Caucho Resin: faster than nginx?
Message-ID:

FYI: http://www.caucho.com/resin-application-server/press/resin-java-web-server-outperforms-nginx/

"Using industry standard tool and methodology, Resin Pro web server was put to the test versus Nginx, a popular web server with a reputation for efficiency and performance. Nginx is known to be faster and more reliable under load than the popular Apache HTTPD. Benchmark tests between Resin and Nginx yielded competitive figures, with Resin leading with fewer errors and faster response times. In numerous and varying tests, Resin handled 20% to 25% more load while still outperforming Nginx. In particular, Resin was able to sustain fast response times under extremely heavy load while Nginx performance degraded."

--
Adam
zellster at gmail.com

From ne at vbart.ru Sat Aug 18 03:17:27 2012
From: ne at vbart.ru (Valentin V. Bartenev)
Date: Sat, 18 Aug 2012 07:17:27 +0400
Subject: Caucho Resin: faster than nginx?
In-Reply-To:
References:
Message-ID: <201208180717.27218.ne@vbart.ru>

On Saturday 18 August 2012 03:18:32 Adam Zell wrote:
> FYI:
> http://www.caucho.com/resin-application-server/press/resin-java-web-server-outperforms-nginx/
>
> " Using industry standard tool and methodology, Resin Pro web server was
> put to the test versus Nginx, a popular web server with a reputation for
> efficiency and performance. Nginx is known to be faster and more reliable
> under load than the popular Apache HTTPD. Benchmark tests between Resin and
> Nginx yielded competitive figures, with Resin leading with fewer errors and
> faster response times. In numerous and varying tests, Resin handled 20% to
> 25% more load while still outperforming Nginx. In particular, Resin was
> able to sustain fast response times under extremely heavy load while Nginx
> performance degraded. "

What nginx configuration was used during the testing? Did they tune it? Did Resin use an equivalent level of logging? What build options were used to build nginx?

Why did they test on a 1k page? I don't think the average size of a typical web page and its elements is about 1 KB. Does that mean Resin cannot effectively handle larger files? What about memory usage?

And after all, why did they use the latest version of Resin against a relatively old version of nginx?

wbr, Valentin V. Bartenev

P.S.

vbart at vbart-laptop ~/Development/Nginx/tests/wrk $ curl -i http://localhost:8000/1k.html
HTTP/1.1 200 OK
Server: nginx/1.3.5
Date: Sat, 18 Aug 2012 03:10:13 GMT
Content-Type: text/html
Content-Length: 1063
Last-Modified: Sat, 18 Aug 2012 02:40:43 GMT
Connection: keep-alive
ETag: "502f00ab-427"
Accept-Ranges: bytes
0 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
1 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
2 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
3 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
4 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
5 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
6 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
7 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
8 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
9 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 0123456789 
vbart at vbart-laptop ~/Development/Nginx/tests/wrk $ cat ../build/test.conf
#error_log logs/error.log debug;
worker_processes 2;
worker_priority -5;
worker_cpu_affinity 1000 0010;

events {
    accept_mutex off;
}

http {
    sendfile on;
    access_log off;
    tcp_nopush on;
    open_file_cache max=16;
    open_file_cache_valid 1h;

    server {
        location / { }
    }
}

vbart at vbart-laptop ~/Development/Nginx/tests/wrk $ grep "model name" /proc/cpuinfo | uniq
model name : Intel(R) Core(TM) i3 CPU M 350 @ 2.27GHz
vbart at vbart-laptop ~/Development/Nginx/tests/wrk $ ./wrk -r 3m -c 10 -t 1 --pipeline 100 http://localhost:8000/1k.html
Making 3000000 requests to http://localhost:8000/1k.html
1 threads and 10 connections
Thread Stats  Avg      Stdev    Max     +/- Stdev
Latency       5.79ms   50.47us  6.03ms  75.42%
Req/Sec       170.72k  450.75   171.00k 72.03%
3000005 requests in 17.54s, 3.63GB read
Requests/sec: 171078.30
Transfer/sec: 212.25MB

From ne at vbart.ru Sat Aug 18 04:01:28 2012
From: ne at vbart.ru (Valentin V. Bartenev)
Date: Sat, 18 Aug 2012 08:01:28 +0400
Subject: Caucho Resin: faster than nginx?
In-Reply-To: <201208180717.27218.ne@vbart.ru>
References: <201208180717.27218.ne@vbart.ru>
Message-ID: <201208180801.28547.ne@vbart.ru>

On Saturday 18 August 2012 07:17:27 Valentin V. Bartenev wrote:
[...]
> vbart at vbart-laptop ~/Development/Nginx/tests/wrk $ grep "model name" /proc/cpuinfo | uniq
> model name : Intel(R) Core(TM) i3 CPU M 350 @ 2.27GHz
> vbart at vbart-laptop ~/Development/Nginx/tests/wrk $ ./wrk -r 3m -c 10 -t 1 --pipeline 100 http://localhost:8000/1k.html
> Making 3000000 requests to http://localhost:8000/1k.html
> 1 threads and 10 connections
> Thread Stats  Avg      Stdev    Max     +/- Stdev
> Latency       5.79ms   50.47us  6.03ms  75.42%
> Req/Sec       170.72k  450.75   171.00k 72.03%
> 3000005 requests in 17.54s, 3.63GB read
> Requests/sec: 171078.30
> Transfer/sec: 212.25MB

All the same + clang 3.1 -> gcc-4.7.1 + removed all unused modules:

vbart at vbart-laptop ~/Development/Nginx/tests/wrk $ ./wrk -r 3m -c 10 -t 1 --pipeline 100 http://localhost:8000/1k.html
Making 3000000 requests to http://localhost:8000/1k.html
1 threads and 10 connections
Thread Stats  Avg      Stdev    Max     +/- Stdev
Latency       4.70ms   232.75us 5.39ms  76.29%
Req/Sec       206.94k  281.90   207.00k 94.85%
3000008 requests in 14.46s, 3.63GB read
Requests/sec: 207533.34
Transfer/sec: 257.29MB

vbart at vbart-laptop ~/Development/Nginx/tests/build $ sbin/nginx -V
nginx version: nginx/1.3.5
built by gcc 4.7.1 (Gentoo 4.7.1 p1.0, pie-0.5.3)
configure arguments: --prefix=/home/vbart/Development/Nginx/tests/build --with-cc-opt='-O3 -march=native' --without-http-cache --without-http_charset_module --without-http_gzip_module --without-http_ssi_module --without-http_userid_module --without-http_access_module --without-http_auth_basic_module --without-http_autoindex_module --without-http_status_module --without-http_geo_module --without-http_map_module --without-http_split_clients_module --without-http_referer_module --without-http_rewrite_module --without-http_proxy_module --without-http_fastcgi_module --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-http_limit_conn_module --without-http_limit_req_module --without-http_empty_gif_module --without-http_browser_module --without-http_upstream_ip_hash_module --without-http_upstream_least_conn_module --without-http_upstream_keepalive_module

From piotr.sikora at frickle.com Sat Aug 18 05:01:24 2012
From: piotr.sikora at frickle.com (Piotr Sikora)
Date: Sat, 18 Aug 2012 07:01:24 +0200
Subject: Caucho Resin: faster than nginx?
In-Reply-To: <201208180717.27218.ne@vbart.ru>
References: <201208180717.27218.ne@vbart.ru>
Message-ID: <3C61E2CAE530422DA3A0BF36C207BFC6@Desktop>

Hey,

> Why did they test on 1k page?

Because in Resin "Small static files are cached in memory, improving performance by avoiding the filesystem entirely. Small files like 1-pixel images can be served with little delay." (source: http://wiki4.caucho.com/Web_Server:_Static_Files). Biased benchmark (Resin serving from memory vs nginx opening, reading, serving and closing files).

Best regards,
Piotr Sikora < piotr.sikora at frickle.com >

From ianevans at digitalhit.com Sat Aug 18 05:10:01 2012
From: ianevans at digitalhit.com (Ian Evans)
Date: Sat, 18 Aug 2012 01:10:01 -0400
Subject: nginx simple caching solutions
In-Reply-To: <50194E8F.9050307@digitalhit.com>
References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> <20120801094449.5e767b6c@e-healthexpert.org> <501928EE.1070104@digitalhit.com> <876292vjpv.wl%appa@perusio.net> <501948CC.4010708@digitalhit.com> <873946vdib.wl%appa@perusio.net> <50194E8F.9050307@digitalhit.com>
Message-ID: <502F23A9.6070705@digitalhit.com>

Just an update on my situation...

I installed the fastcgi caching as per the suggestions in this thread. Using the add_header X-My-Cache $upstream_cache_status; line I'm able to confirm by the headers if I'm getting cache hits or misses.
I've also got a small handful of directories that I don't want cached and as per the thread, they're marked:

map $uri $no_cache_dirs {
    default 0;
    /dir1 1;
    /dir2 1;
    /dir3 1;
    /dir4 1;
    /dir5 1;
}

In my fastcgi.conf there are the following lines added for the caching:

fastcgi_cache MYCACHE;
fastcgi_keep_conn on;
fastcgi_cache_bypass $no_cache $no_cache_dirs;
fastcgi_no_cache $no_cache $no_cache_dirs;
fastcgi_cache_valid 200 301 5m;
fastcgi_cache_valid 302 5m;
fastcgi_cache_valid 404 1m;
fastcgi_cache_use_stale error timeout invalid_header updating http_500;
fastcgi_ignore_headers Cache-Control Expires;
expires epoch;
fastcgi_cache_lock on;

Did a test on some of the no cache dirs (including a wordpress blog) and the pages in them were still getting cached. Any ideas why?

Thanks. Hope you all have great weekends.

From jamesmikedupont at googlemail.com Sat Aug 18 05:14:10 2012
From: jamesmikedupont at googlemail.com (Mike Dupont)
Date: Sat, 18 Aug 2012 05:14:10 +0000
Subject: Caucho Resin: faster than nginx?
In-Reply-To:
References:
Message-ID:

which version of resin did they use, the open source or pro version?

mike

On Fri, Aug 17, 2012 at 11:18 PM, Adam Zell wrote:
> FYI:
> http://www.caucho.com/resin-application-server/press/resin-java-web-server-outperforms-nginx/
>
> " Using industry standard tool and methodology, Resin Pro web server was put
> to the test versus Nginx, a popular web server with a reputation for
> efficiency and performance. Nginx is known to be faster and more reliable
> under load than the popular Apache HTTPD. Benchmark tests between Resin and
> Nginx yielded competitive figures, with Resin leading with fewer errors and
> faster response times. In numerous and varying tests, Resin handled 20% to
> 25% more load while still outperforming Nginx. In particular, Resin was able
> to sustain fast response times under extremely heavy load while Nginx
> performance degraded.
" > > -- > Adam > zellster at gmail.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- James Michael DuPont Member of Free Libre Open Source Software Kosova http://flossk.org Saving wikipedia(tm) articles from deletion http://SpeedyDeletion.wikia.com Contributor FOSM, the CC-BY-SA map of the world http://fosm.org Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3 From zellster at gmail.com Sat Aug 18 06:39:06 2012 From: zellster at gmail.com (Adam Zell) Date: Fri, 17 Aug 2012 23:39:06 -0700 Subject: Caucho Resin: faster than nginx? In-Reply-To: References: Message-ID: More details: http://blog.caucho.com/2012/07/05/nginx-120-versus-resin-4029-performance-tests/ . On Fri, Aug 17, 2012 at 10:14 PM, Mike Dupont < jamesmikedupont at googlemail.com> wrote: > which version of resin did they use, the open source or pro version? > mike > > On Fri, Aug 17, 2012 at 11:18 PM, Adam Zell wrote: > > FYI: > > > http://www.caucho.com/resin-application-server/press/resin-java-web-server-outperforms-nginx/ > > > > " Using industry standard tool and methodology, Resin Pro web server was > put > > to the test versus Nginx, a popular web server with a reputation for > > efficiency and performance. Nginx is known to be faster and more reliable > > under load than the popular Apache HTTPD. Benchmark tests between Resin > and > > Nginx yielded competitive figures, with Resin leading with fewer errors > and > > faster response times. In numerous and varying tests, Resin handled 20% > to > > 25% more load while still outperforming Nginx. In particular, Resin was > able > > to sustain fast response times under extremely heavy load while Nginx > > performance degraded. 
" > > > > -- > > Adam > > zellster at gmail.com > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > James Michael DuPont > Member of Free Libre Open Source Software Kosova http://flossk.org > Saving wikipedia(tm) articles from deletion > http://SpeedyDeletion.wikia.com > Contributor FOSM, the CC-BY-SA map of the world http://fosm.org > Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Adam zellster at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesmikedupont at googlemail.com Sat Aug 18 07:26:49 2012 From: jamesmikedupont at googlemail.com (Mike Dupont) Date: Sat, 18 Aug 2012 07:26:49 +0000 Subject: Caucho Resin: faster than nginx? In-Reply-To: References: Message-ID: Resin Pro 4.0.29, so whats the point? We are talking about open source software here, no? mike On Sat, Aug 18, 2012 at 6:39 AM, Adam Zell wrote: > More details: > http://blog.caucho.com/2012/07/05/nginx-120-versus-resin-4029-performance-tests/ > . > > On Fri, Aug 17, 2012 at 10:14 PM, Mike Dupont > wrote: >> >> which version of resin did they use, the open source or pro version? >> mike >> >> On Fri, Aug 17, 2012 at 11:18 PM, Adam Zell wrote: >> > FYI: >> > >> > http://www.caucho.com/resin-application-server/press/resin-java-web-server-outperforms-nginx/ >> > >> > " Using industry standard tool and methodology, Resin Pro web server was >> > put >> > to the test versus Nginx, a popular web server with a reputation for >> > efficiency and performance. Nginx is known to be faster and more >> > reliable >> > under load than the popular Apache HTTPD. 
Benchmark tests between Resin >> > and >> > Nginx yielded competitive figures, with Resin leading with fewer errors >> > and >> > faster response times. In numerous and varying tests, Resin handled 20% >> > to >> > 25% more load while still outperforming Nginx. In particular, Resin was >> > able >> > to sustain fast response times under extremely heavy load while Nginx >> > performance degraded. " >> > >> > -- >> > Adam >> > zellster at gmail.com >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> -- >> James Michael DuPont >> Member of Free Libre Open Source Software Kosova http://flossk.org >> Saving wikipedia(tm) articles from deletion >> http://SpeedyDeletion.wikia.com >> Contributor FOSM, the CC-BY-SA map of the world http://fosm.org >> Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > Adam > zellster at gmail.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- James Michael DuPont Member of Free Libre Open Source Software Kosova http://flossk.org Saving wikipedia(tm) articles from deletion http://SpeedyDeletion.wikia.com Contributor FOSM, the CC-BY-SA map of the world http://fosm.org Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3 From nginx-forum at nginx.us Sat Aug 18 10:54:01 2012 From: nginx-forum at nginx.us (mokriy) Date: Sat, 18 Aug 2012 06:54:01 -0400 (EDT) Subject: Upload Module - add response header with MD5 Message-ID: Hi! 
Here is the snippet of configuration I use for uploading:

location /upload {
    auth_request /auth;
    upload_pass /200;
    upload_store /download 1;
    upload_resumable on;

    # Return download path to calling party in custom header value
    upload_add_header XXX-DownloadURI "$upload_tmp_path";
    # Return md5 file checksum to calling party as custom header value
    upload_add_header X-XXX-MD5SizeChecksum "$upload_file_md5";
    upload_add_header X-XXXXr-FileSize "$upload_file_size";

    upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5";
    upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";
}

It works perfectly, great job, thanks! Especially the upload_add_header. However, the MD5 hashes returned in the response header are not hashes. They are:

34383765656261393461336230613362
OR
34396436323764343839623564646664

I can't rely on them on the caller side. Could it be a problem with add_header? I assume the aggregate field is generated correctly, but when it is set into a header via upload_add_header there is a conversion mistake.

Many thanks in advance! You are doing a great job.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229882,229882#msg-229882

From nginx-forum at nginx.us Sat Aug 18 13:23:47 2012
From: nginx-forum at nginx.us (lenky0401)
Date: Sat, 18 Aug 2012 09:23:47 -0400 (EDT)
Subject: Is this a bug?
Message-ID: <34781cfd70a193a5f371a6676eeee4ad.NginxMailingListEnglish@forum.nginx.org>

45: /*
46:  * Preallocation of first nodes : 0, 1, 00, 01, 10, 11, 000, 001, etc.
47:  * increases TLB hits even if for first lookup iterations.
48:  * On 32-bit platforms the 7 preallocated bits takes continuous 4K,
49:  * 8 - 8K, 9 - 16K, etc. On 64-bit platforms the 6 preallocated bits
50:  * takes continuous 4K, 7 - 8K, 8 - 16K, etc. There is no sense to
51:  * to preallocate more than one page, because further preallocation
52:  * distributes the only bit per page. Instead, a random insertion
53:  * may distribute several bits per page.
54:  *
55:  * Thus, by default we preallocate maximum
56:  *    6 bits on amd64 (64-bit platform and 4K pages)
57:  *    7 bits on i386 (32-bit platform and 4K pages)
58:  *    7 bits on sparc64 in 64-bit mode (8K pages)
59:  *    8 bits on sparc64 in 32-bit mode (8K pages)
60:  */
61:
62:     if (preallocate == -1) {
63:         switch (ngx_pagesize / sizeof(ngx_radix_tree_t)) {   // ngx_radix_tree_t -> ngx_radix_node_t ?

Thanks in advance.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229884,229884#msg-229884

From mike503 at gmail.com Sat Aug 18 22:44:05 2012
From: mike503 at gmail.com (Michael Shadle)
Date: Sat, 18 Aug 2012 15:44:05 -0700
Subject: add_header isn't taking effect
Message-ID:

This block basically works like a charm, passes the upstream headers from proxy_pass_header as expected, but the add_header doesn't seem to work no matter what. add_header is what I use upstream to add the headers like X-Hostname-Proxy. But I want to add a header local to this block. Ideas? I don't see anything in the wiki saying this is invalid. It says it is valid in location, server, etc blocks, which is what this is... :p

location ^~ /foo/bar {
    proxy_pass http://sites:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass_header Server;
    proxy_pass_header Expires;
    proxy_pass_header X-Hostname-Proxy;
    proxy_pass_header Access-Control-Allow-Origin;
    add_header FOO "bar";
    add_header FOO2 bar2;
}

From francis at daoine.org Sat Aug 18 23:04:16 2012
From: francis at daoine.org (Francis Daly)
Date: Sun, 19 Aug 2012 00:04:16 +0100
Subject: add_header isn't taking effect
In-Reply-To:
References:
Message-ID: <20120818230416.GR32371@craic.sysops.org>

On Sat, Aug 18, 2012 at 03:44:05PM -0700, Michael Shadle wrote:

Hi there,

> This block basically works like a charm, passes the upstream headers
> from proxy_pass_header as expected, but the add_header doesn't seem to
> work no matter what. add_header is what I use upstream to add the
> headers like X-Hostname-Proxy.
It seems to work for me. Perhaps my test case is not the same as your test case. What request do you make; what response do you get; and what response do you expect to get? f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Sat Aug 18 23:18:36 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Aug 2012 03:18:36 +0400 Subject: Is this a bug? In-Reply-To: <34781cfd70a193a5f371a6676eeee4ad.NginxMailingListEnglish@forum.nginx.org> References: <34781cfd70a193a5f371a6676eeee4ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120818231836.GT40452@mdounin.ru> Hello! On Sat, Aug 18, 2012 at 09:23:47AM -0400, lenky0401 wrote: > 45: /* > 46: * Preallocation of first nodes : 0, 1, 00, 01, 10, 11, 000, 001, > etc. > 47: * increases TLB hits even if for first lookup iterations. > 48: * On 32-bit platforms the 7 preallocated bits takes continuous 4K, > 49: * 8 - 8K, 9 - 16K, etc. On 64-bit platforms the 6 preallocated > bits > 50: * takes continuous 4K, 7 - 8K, 8 - 16K, etc. There is no sense to > 51: * to preallocate more than one page, because further preallocation > 52: * distributes the only bit per page. Instead, a random insertion > 53: * may distribute several bits per page. > 54: * > 55: * Thus, by default we preallocate maximum > 56: * 6 bits on amd64 (64-bit platform and 4K pages) > 57: * 7 bits on i386 (32-bit platform and 4K pages) > 58: * 7 bits on sparc64 in 64-bit mode (8K pages) > 59: * 8 bits on sparc64 in 32-bit mode (8K pages) > 60: */ > 61: > 62: if (preallocate == -1) { > 63: switch (ngx_pagesize / sizeof(ngx_radix_tree_t)) { > //ngx_radix_tree_t -> ngx_radix_node_t ? > > Thanks in advance. Sure. 
Thanks for spotting this, fix committed: http://trac.nginx.org/nginx/changeset/4824/nginx Maxim Dounin From mike503 at gmail.com Sat Aug 18 23:31:54 2012 From: mike503 at gmail.com (Michael Shadle) Date: Sat, 18 Aug 2012 16:31:54 -0700 Subject: add_header isn't taking effect In-Reply-To: <20120818230416.GR32371@craic.sysops.org> References: <20120818230416.GR32371@craic.sysops.org> Message-ID: It actually seems like if I add_header in one scope somewhere it will ignore in another. For example I have one gratuitous add_header at the server {} level - with a different name. Then I add one on the location level. Then I have one come from the upstream. I could reproduce it reliably by adding an add_header somewhere and it would drop the other one out. I also have two environments with nearly exact configurations and one passes all 3 headers I want vs. 2 headers consistently. Very odd. On Sat, Aug 18, 2012 at 4:04 PM, Francis Daly wrote: > On Sat, Aug 18, 2012 at 03:44:05PM -0700, Michael Shadle wrote: > > Hi there, > >> This block basically works like a charm, passes the upstream headers >> from proxy_pass_header as expected, but the add_header doesn't seem to >> work no matter what. add_header is what I use upstream to add the >> headers like X-Hostname-Proxy. > > It seems to work for me. Perhaps my test case is not the same as your > test case. > > What request do you make; what response do you get; and what response > do you expect to get? 
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Sat Aug 18 23:36:56 2012 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Aug 2012 00:36:56 +0100 Subject: Upload Module - add response header with MD5 In-Reply-To: References: Message-ID: <20120818233656.GS32371@craic.sysops.org> On Sat, Aug 18, 2012 at 06:54:01AM -0400, mokriy wrote: Hi there, > here is the snuppet of configuration I use for uploading: > location /upload { > #Return download path to calling party in custom header > value > upload_add_header XXX-DownloadURI "$upload_tmp_path"; What does "nginx -V" show? This is a way of asking "what upload module are you using that has 'upload_add_header'?" Thanks, f -- Francis Daly francis at daoine.org From appa at perusio.net Sat Aug 18 23:42:54 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sun, 19 Aug 2012 01:42:54 +0200 Subject: nginx simple caching solutions In-Reply-To: <502F23A9.6070705@digitalhit.com> References: <032ddfa8ed12dba1173cc7d2ba7993aa.squirrel@www.digitalhit.com> <20120801094449.5e767b6c@e-healthexpert.org> <501928EE.1070104@digitalhit.com> <876292vjpv.wl%appa@perusio.net> <501948CC.4010708@digitalhit.com> <873946vdib.wl%appa@perusio.net> <50194E8F.9050307@digitalhit.com> <502F23A9.6070705@digitalhit.com> Message-ID: <871uj3n4w1.wl%appa@perusio.net> On 18 Ago 2012 07h10 CEST, ianevans at digitalhit.com wrote: > Just an update on my situation... > > I installed the fastcgi caching as per the suggestions in this > thread. Using the add_header X-My-Cache $upstream_cache_status; > line I'm able to confirm by the headers if I'm getting cache hits or > misses. 
>
> I've also got a small handful of directories that I don't want
> cached and as per the thread, they're marked:
>
> map $uri $no_cache_dirs {
>     default 0;
>     /dir1 1;
>     /dir2 1;
>     /dir3 1;
>     /dir4 1;
>     /dir5 1;
> }
>
> In my fastcgi.conf there are the following lines added for the
> caching:
>
> fastcgi_cache MYCACHE;
> fastcgi_keep_conn on;
> fastcgi_cache_bypass $no_cache $no_cache_dirs;
> fastcgi_no_cache $no_cache $no_cache_dirs;
> fastcgi_cache_valid 200 301 5m;
> fastcgi_cache_valid 302 5m;
> fastcgi_cache_valid 404 1m;
> fastcgi_cache_use_stale error timeout invalid_header updating http_500;
> fastcgi_ignore_headers Cache-Control Expires;
> expires epoch;
> fastcgi_cache_lock on;
>
> Did a test on some of the no cache dirs (including a wordpress blog)
> and the pages in them were still getting cached. Any ideas why?

Yes, you probably need a regex-based map. Use:

map $uri $no_cache_dirs {
    default 0;
    ~^/(?:dir1|dir2|dir3|dir4|dir5) 1;
}

instead.

--- appa

From francis at daoine.org Sat Aug 18 23:46:27 2012
From: francis at daoine.org (Francis Daly)
Date: Sun, 19 Aug 2012 00:46:27 +0100
Subject: add_header isn't taking effect
In-Reply-To:
References: <20120818230416.GR32371@craic.sysops.org>
Message-ID: <20120818234627.GT32371@craic.sysops.org>

On Sat, Aug 18, 2012 at 04:31:54PM -0700, Michael Shadle wrote:

Hi there,

> It actually seems like if I add_header in one scope somewhere it will
> ignore in another.

Yes, that's the expected behaviour for nginx configuration directives. A request is handled in exactly one location; the configuration in, or inherited into, that location is what matters; inheritance or merging is by replacement.

> For example I have one gratuitous add_header at the server {} level -
> with a different name.
>
> Then I add one on the location level.

In this location, only this one will apply. In another location, only the server-level one will apply.

> Then I have one come from the upstream.
nginx doesn't care about headers being added; it cares about the add_header directive. So this line should not influence the outcome. > I could reproduce it reliably by adding an add_header somewhere and it > would drop the other one out. I also have two environments with nearly > exact configurations and one passes all 3 headers I want vs. 2 headers > consistently. Very odd. I suspect that if you compare the nearly exact configurations, you'll spot an important difference. If you want a specific extra add_header in a location, you must also repeat the other add_header directives within that location. f -- Francis Daly francis at daoine.org From mike503 at gmail.com Sun Aug 19 00:33:49 2012 From: mike503 at gmail.com (Michael Shadle) Date: Sat, 18 Aug 2012 17:33:49 -0700 Subject: add_header isn't taking effect In-Reply-To: <20120818234627.GT32371@craic.sysops.org> References: <20120818230416.GR32371@craic.sysops.org> <20120818234627.GT32371@craic.sysops.org> Message-ID: On Sat, Aug 18, 2012 at 4:46 PM, Francis Daly wrote: > In this location, only this one will apply. In another location, only > the server-level one will apply. I expected add_header to be an array item, which it might be, but I guess I forgot how nginx scope worked. I wish the scope would be a little looser (something that makes sense from someone who writes scripts in PHP for a living) I just got done reading that blog post (source I am too lazy to look up) explaining scope on variables, and got bit by fastcgi_param scope in the past the same way. Oh well. :) It is odd that I get inconsistent results and I *do* seem to get multiple headers coming through sometimes it seems, even though it doesn't make sense from the scope level. From farseas at gmail.com Sun Aug 19 21:32:02 2012 From: farseas at gmail.com (Bob Stanton) Date: Sun, 19 Aug 2012 17:32:02 -0400 Subject: user authentication with nginx Message-ID: I want to find a secure but simple method for authenticating users in an Nginx environment. 
I have succeeded in figuring out the auth_basic mod but that does not meet my needs. I specifically want to supply my own form, get the username and PW, check it against my DB with a CGI program, and then pass values back to Nginx. I need explained examples for nginx.conf. I can write my own backend in C for authentication, that is not part of my question. 1) What should nginx.conf look like? 2) How are values passed back to nginx? signals? function return codes? -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun Aug 19 22:24:19 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 19 Aug 2012 23:24:19 +0100 Subject: user authentication with nginx In-Reply-To: References: Message-ID: On 19 August 2012 22:32, Bob Stanton wrote: > I want to find a secure but simple method for authenticating users in an > Nginx environment. > > I have succeeded in figuring out the auth_basic mod but that does not meet > my needs. > > I specifically want to supply my own form, get the username and PW, check it > against my DB with a CGI program, and then pass values back to Nginx. Use proxy_pass (http://nginx.org/r/proxy_pass) or fastcgi_pass (http://nginx.org/r/fastcgi_pass) to communicate the Auth headers to your daemon, which should then respond with whatever page you want your users to see in the event of auth success or failure. There are many configuration examples for these on the interwebs. Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From farseas at gmail.com Sun Aug 19 22:37:39 2012 From: farseas at gmail.com (Bob Stanton) Date: Sun, 19 Aug 2012 18:37:39 -0400 Subject: user authentication with nginx In-Reply-To: References: Message-ID: I am not clear on how this would work in the nginx.conf file. Also, aren't there security risks using the headers? Can't someone spoof the headers and gain access that way? Like I said, this is all rather unclear to me. 
On Sun, Aug 19, 2012 at 6:24 PM, Jonathan Matthews wrote: > On 19 August 2012 22:32, Bob Stanton wrote: > > I want to find a secure but simple method for authenticating users in an > > Nginx environment. > > > > I have succeeded in figuring out the auth_basic mod but that does not > meet > > my needs. > > > > I specifically want to supply my own form, get the username and PW, > check it > > against my DB with a CGI program, and then pass values back to Nginx. > > Use proxy_pass (http://nginx.org/r/proxy_pass) or fastcgi_pass > (http://nginx.org/r/fastcgi_pass) to communicate the Auth headers to > your daemon, which should then respond with whatever page you want > your users to see in the event of auth success or failure. > > There are many configuration examples for these on the interwebs. > > Jonathan > -- > Jonathan Matthews > Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Aug 19 23:55:07 2012 From: nginx-forum at nginx.us (d2radio) Date: Sun, 19 Aug 2012 19:55:07 -0400 (EDT) Subject: Upstream SSL IIS Performance Message-ID: Greetings, I have an nginx server deployed in front of an Internet Information Services 6.0 server. The issue I have is that when requesting content from the IIS server via HTTPS I am seeing a considerable increase in response time from the upstream server.
For example, when requesting the following javascript file I am seeing the following log entries (using a custom log format to catch upstream response times):

GET /client/javascript/libraries/query-string/2.1.7/query-min.js HTTP/1.1" 200 upstream 0.040 request 0.046 [for reversproxy via 172.25.50.203:80]

GET /client/javascript/libraries/query-string/2.1.7/query-min.js HTTP/1.1" 200 upstream 0.234 request 0.243 [for reverseproxy via 172.25.50.203:443]

As you can see it's taking up to an additional 200+ ms to serve up the js file to nginx when talking to the upstream server via HTTPS. I understand that there is an overhead with HTTPS but I wouldn't expect it to be this great. I also understand it would be better to serve the static content directly from nginx but at the moment this isn't an option. I am currently running version 1.3.1 of nginx with the following configure arguments: --with-http_ssl_module --add-module=/root/simpl-ngx_devel_kit-bc97eea --add-module=/root/agentzh-headers-more-nginx-module-3580526 --add-module=/root/yaoweibin-ngx_http_substitutions_filter_module-f4080ae

I'm assuming all nginx ssl directives are for communication between the client and nginx. Do I have any options for improving the HTTPS response performance with the upstream IIS server? Apart from talking to it via HTTP? Thanks In Advance Dan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229909,229909#msg-229909 From nginx-forum at nginx.us Sun Aug 19 23:59:57 2012 From: nginx-forum at nginx.us (d2radio) Date: Sun, 19 Aug 2012 19:59:57 -0400 (EDT) Subject: Upstream SSL IIS Performance In-Reply-To: References: Message-ID: I should have also said that requesting this javascript file directly from the IIS server only takes ~20ms over HTTPS (timed using firebug). All static and dynamic content requested from this IIS server exhibits this behaviour.
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229909,229910#msg-229910 From francis at daoine.org Mon Aug 20 00:48:11 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 20 Aug 2012 01:48:11 +0100 Subject: user authentication with nginx In-Reply-To: References: Message-ID: <20120820004811.GU32371@craic.sysops.org> On Sun, Aug 19, 2012 at 06:37:39PM -0400, Bob Stanton wrote: > On Sun, Aug 19, 2012 at 6:24 PM, Jonathan Matthews wrote: > > On 19 August 2012 22:32, Bob Stanton wrote: Hi there, [rearranging for ease of reading.] > > > I want to find a secure but simple method for authenticating users in an > > > Nginx environment. http basic authentication within ssl. As in every http-server environment. > > > I have succeeded in figuring out the auth_basic mod but that does not meet > > > my needs. Why not? Which specific aspect of the nginx implementation of http basic authentication is unsuitable for your use case? Would http digest authentication avoid the problem you see? Or would an alternative credential-checking method avoid the problem? Does your own cookie-or-other authentication method avoid that problem? (There are 3rd party modules that can help implement the first two suggestions above, if you don't want to write your own module from scratch.) > > > I specifically want to supply my own form, get the username and PW, check it > > > against my DB with a CGI program, and then pass values back to Nginx. What part of the form submission is better than the simple http authentication that you rejected above? (There *can* be some parts; but without knowing what exactly your needs are, it is hard to suggest something that meets them.) > > Use proxy_pass (http://nginx.org/r/proxy_pass) or fastcgi_pass > > (http://nginx.org/r/fastcgi_pass) to communicate the Auth headers to > > your daemon, which should then respond with whatever page you want > > your users to see in the event of auth success or failure. 
That information is correct for the mechanics of how nginx will know to invoke your application. But I think you'll want a very clear idea of what your application will do, before needing that information. > I am not clear on how this would work in the nginx.conf file. I suggest you first gain a clear picture of how your application will work in the http world. After you determine that it can work, you can worry about the nginx implementation. (For what it's worth: I think your plan involves sending a Set-Cookie response header to the browser, expecting that the browser will send a Cookie request header in future requests. But maybe I think wrong.) > Also, aren't there security risks using the headers? Can't someone spoof > the headers and gain access that way? Yes. Anyone can send a request with http authentication headers or with cookie headers. Or with username and password details in the request, or in the request body. But it's not yet obvious to me how http basic authentication differs from your alternative, in this respect. > Like I said, this is all rather unclear to me. Me too. If you can explain why basic authentication doesn't meet your needs, perhaps a suitable alternative can be suggested. (Quite possibly form-submission to set a cookie *is* the best solution for you. But maybe nginx-auth-request-module can let http basic authentication work for you and will be easier. Or maybe something else is best.) 
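For concreteness, the auth-request pattern usually ends up looking something like the following. This is only an untested sketch, not a config from this thread: the /auth location and the backend address are made up, and it assumes the third-party auth_request module is compiled into nginx.

```nginx
# Sketch of the auth-request pattern: every request under /private/
# first triggers an internal subrequest to /auth; a 2xx answer lets
# the original request proceed, a 401/403 answer denies it.
location /private/ {
    auth_request /auth;
}

location = /auth {
    internal;
    # Hypothetical credential-checking backend; it sees the original
    # request headers (e.g. a session cookie) and replies 2xx or 401.
    proxy_pass http://127.0.0.1:9000;
    # The auth subrequest should not carry the original request body.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}
```

The credential check itself (form handling, database lookup, setting the cookie) still lives entirely in the backend application; nginx only sees its status code.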
f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Aug 20 01:12:18 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 20 Aug 2012 02:12:18 +0100 Subject: Upstream SSL IIS Performance In-Reply-To: References: Message-ID: <20120820011218.GV32371@craic.sysops.org> On Sun, Aug 19, 2012 at 07:55:07PM -0400, d2radio wrote: Hi there, > GET /client/javascript/libraries/query-string/2.1.7/query-min.js HTTP/1.1" > 200 upstream 0.040 request 0.046 [for reversproxy via 172.25.50.203:80] > > GET /client/javascript/libraries/query-string/2.1.7/query-min.js HTTP/1.1" > 200 upstream 0.234 request 0.243 [for reverseproxy via 172.25.50.203:443] time curl https://172.25.50.203/client/javascript/libraries/query-string/2.1.7/query-min.js Does it show about 200 ms like above, or about 20 ms like firebug showed? I suspect that the 200 ms is due to the ssl session being established afresh for each request, while firebug is piggy-backing on a previous request. (You could test this by starting firebug and making sure that the very first https query is for this url, and seeing how long it takes.) > Do I have any options for improving the https response > performance with the upsteam IIS server? Apart from talking to it via HTTP? Untested by me, but: do you see any difference when you include the configuration at http://nginx.org/r/keepalive ? That might allow re-use of the ssl session across http requests. f -- Francis Daly francis at daoine.org From mike503 at gmail.com Mon Aug 20 02:32:43 2012 From: mike503 at gmail.com (Michael Shadle) Date: Sun, 19 Aug 2012 19:32:43 -0700 Subject: user authentication with nginx In-Reply-To: <20120820004811.GU32371@craic.sysops.org> References: <20120820004811.GU32371@craic.sysops.org> Message-ID: On Aug 19, 2012, at 5:48 PM, Francis Daly wrote: > Why not? > > Which specific aspect of the nginx implementation of http basic > authentication is unsuitable for your use case? 
> Would http digest authentication avoid the problem you see? I would like to see digest auth supported personally. For proper spnego situations, if the Kerberos/gssapi stuff fails it is supposed to fall back to digest. I still have my "not sure if it works at all" nginx+spnego module and someone else posted another one (it may or may not have been based on the one I funded; I still want to sync up with them) that I'd really love to get more action on. I would really like to use nginx inside of my company's intranet and be able to provide my users with the "transparent" recognition that it would provide. Anyone reading this please feel free to contact me off-list if interested in development, testing, funding, using or discussing this module (or if you are the guy who made his own port too!) :) From nginx-forum at nginx.us Mon Aug 20 03:06:22 2012 From: nginx-forum at nginx.us (d2radio) Date: Sun, 19 Aug 2012 23:06:22 -0400 (EDT) Subject: Upstream SSL IIS Performance In-Reply-To: <20120820011218.GV32371@craic.sysops.org> References: <20120820011218.GV32371@craic.sysops.org> Message-ID: <981474bf3824cc7c5d9f8ca8cb3a7b3b.NginxMailingListEnglish@forum.nginx.org> Thanks Francis, Yes, I suspected that it was somehow renegotiating the ssl handshake for each request whereas firefox/firebug was caching the handshake, thus showing quicker response times. Timing curl over https gave me an average of 80ms response time; timing curl over http gave me an average of 10ms, similar to what nginx was achieving talking to the backend via http. I'm happy to announce though that you were bang on the money with the keepalive directive. As soon as I added that into my upstream declaration the response times dropped considerably and I'm now getting performance similar to requesting the content directly from the upstream server.
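For anyone finding this thread later, the change amounts to something like the following sketch. The upstream name is a placeholder; the address is taken from the log lines earlier in the thread, and the surrounding server block is abbreviated.

```nginx
# Keepalive connection pool to the HTTPS backend: up to 16 idle
# connections per worker process stay open, so the TLS handshake is
# not repeated for every proxied request.
upstream iis_backend {
    server 172.25.50.203:443;
    keepalive 16;
}

server {
    location / {
        proxy_pass https://iis_backend;
        # Both of these are required for keepalive to upstreams:
        # HTTP/1.1 and no "Connection: close" on the proxied request.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```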
Thanks Francis, you're a legend :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229909,229915#msg-229915 From javi at lavandeira.net Mon Aug 20 04:45:27 2012 From: javi at lavandeira.net (Javi Lavandeira) Date: Mon, 20 Aug 2012 13:45:27 +0900 Subject: user authentication with nginx In-Reply-To: References: Message-ID: Hello, On 2012/08/20, at 6:32, Bob Stanton wrote: > I specifically want to supply my own form, get the username and PW, check it against my DB with a CGI program, and then pass values back to Nginx. Do you mean that you want to know how to create an HTML form, pass the parameters to a CGI, and then return an HTML output to the user? Regards, -- Javi Lavandeira Twitter: @javilm Blog: http://www.lavandeira.net/blog -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 20 07:28:59 2012 From: nginx-forum at nginx.us (mokriy) Date: Mon, 20 Aug 2012 03:28:59 -0400 (EDT) Subject: Upload Module - add response header with MD5 In-Reply-To: <20120818233656.GS32371@craic.sysops.org> References: <20120818233656.GS32371@craic.sysops.org> Message-ID: Hi Francis! Many thanks for your reply. Basically, I am using the one from Valeriy Kholodkov: http://www.grid.net.ru/nginx/upload.ru.html The upload_add_header feature appeared in v 2.2, I guess: https://github.com/vkholodkov/nginx-upload-module/blob/2.2/ngx_http_upload_module.c I can find it in C code - 'upload_add_header'.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229882,229917#msg-229917 From nginx-forum at nginx.us Mon Aug 20 10:25:09 2012 From: nginx-forum at nginx.us (ConnorMcLaud) Date: Mon, 20 Aug 2012 06:25:09 -0400 (EDT) Subject: Accept connections and close connection hooks in custom module In-Reply-To: <094e9a88aa683f6230d5f72d481d4ee9.NginxMailingListEnglish@forum.nginx.org> References: <094e9a88aa683f6230d5f72d481d4ee9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3e631f9f3ea0f1d16f22bd4647d42a3a.NginxMailingListEnglish@forum.nginx.org> I think I've finally understood. There is a global structure, ngx_event_actions, with add_conn/del_conn handlers which are supposed to do what I want. There is only one issue (probably a bug) with the add_conn handler. In version 1.2.2 with epoll, the add_conn handler is never called. You should use the add handler instead and check the incoming parameters (event = NGX_READ_EVENT, flags = NGX_CLEAR_REQUEST). Hope it helps someone. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229100,229921#msg-229921 From farseas at gmail.com Mon Aug 20 10:42:26 2012 From: farseas at gmail.com (Bob Stanton) Date: Mon, 20 Aug 2012 06:42:26 -0400 Subject: user authentication with nginx In-Reply-To: References: Message-ID: Sorry for my lack of precision. I know how to do all the below, I just don't know how to tell nginx whether or not user authentication was successful. On Mon, Aug 20, 2012 at 12:45 AM, Javi Lavandeira wrote: > Hello, > > On 2012/08/20, at 6:32, Bob Stanton wrote: > > I specifically want to supply my own form, get the username and PW, check > it against my DB with a CGI program, and then pass values back to Nginx. > > > Do you mean that you want to know how to create an HTML form, pass the > parameters to a CGI, and then return an HTML output to the user?
> > Regards, > > -- > Javi Lavandeira > > *Twitter*: @javilm > *Blog*: http://www.lavandeira.net/blog > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javi at lavandeira.net Mon Aug 20 11:06:56 2012 From: javi at lavandeira.net (Javi Lavandeira) Date: Mon, 20 Aug 2012 20:06:56 +0900 Subject: user authentication with nginx In-Reply-To: References: Message-ID: Make your CGI/PHP/Python/Perl script return a "Status: xxx" header. I'm curious, why do you need to do it this way? -- Javi Lavandeira Twitter: @javilm Blog: http://www.lavandeira.net/blog On 2012/08/20, at 19:42, Bob Stanton wrote: > Sorry for my lack of precision. I know how to do all the below, I just don't know how to tell nginx whether or not user authentication was successful. > > On Mon, Aug 20, 2012 at 12:45 AM, Javi Lavandeira wrote: > Hello, > > On 2012/08/20, at 6:32, Bob Stanton wrote: > >> I specifically want to supply my own form, get the username and PW, check it against my DB with a CGI program, and then pass values back to Nginx. > > Do you mean that you want to know how to create an HTML form, pass the parameters to a CGI, and then return an HTML output to the user? > > Regards, > > -- > Javi Lavandeira > > Twitter: @javilm > Blog: http://www.lavandeira.net/blog > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Aug 20 11:42:44 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Aug 2012 15:42:44 +0400 Subject: Upstream SSL IIS Performance In-Reply-To: <981474bf3824cc7c5d9f8ca8cb3a7b3b.NginxMailingListEnglish@forum.nginx.org> References: <20120820011218.GV32371@craic.sysops.org> <981474bf3824cc7c5d9f8ca8cb3a7b3b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120820114244.GE40452@mdounin.ru> Hello! On Sun, Aug 19, 2012 at 11:06:22PM -0400, d2radio wrote: > Thanks Francis, > > Yes I suspected that it was somehow renegotiating the ssl handshake for each > request where as firefox/firebug was caching the handshake thus showing > quicker response times. > > Timing curl over https gave me an average of 80ms response time, timing curl > over http gave me an average of 10ms similar to what nginx was achieving > talking to the backend via http. > > I'm happy to annouce though that your were bang on the money with the > keepalive directive. As soon as I added that into my upstream declaration > the reponse times dropped considerably and I'm now getting performance > similar to as if I was requesting the content directly from the upstream > server. > > Thanks Francis your a legend :) Strange thing is that SSL session reuse doesn't work for you. It is on by default and should do more or less the same thing unless you've switched it off with proxy_ssl_session_reuse[1] directive or forgot to configure session cache on your backend server. (Another question to consider is whether you really need to spend resources on SSL between nginx and your backend.) [1] http://nginx.org/r/proxy_ssl_session_reuse Maxim Dounin From farseas at gmail.com Mon Aug 20 12:33:28 2012 From: farseas at gmail.com (Bob Stanton) Date: Mon, 20 Aug 2012 08:33:28 -0400 Subject: user authentication with nginx In-Reply-To: References: Message-ID: How else would you do it? I don't want to use basic_auth because I want to be able to style my own form. 
On Mon, Aug 20, 2012 at 7:06 AM, Javi Lavandeira wrote: > Make your CGI/PHP/Python/Perl script return a "Status: xxx" header. > > I'm curious, why do you need to do it this way? > > > -- > Javi Lavandeira > > *Twitter*: @javilm > *Blog*: http://www.lavandeira.net/blog > > On 2012/08/20, at 19:42, Bob Stanton wrote: > > Sorry for my lack of precision. I know how to do all the below, I just > don't know how to tell nginx whether or not user authentication was > successful. > > On Mon, Aug 20, 2012 at 12:45 AM, Javi Lavandeira wrote: > >> Hello, >> >> On 2012/08/20, at 6:32, Bob Stanton wrote: >> >> I specifically want to supply my own form, get the username and PW, check >> it against my DB with a CGI program, and then pass values back to Nginx. >> >> >> Do you mean that you want to know how to create an HTML form, pass the >> parameters to a CGI, and then return an HTML output to the user? >> >> Regards, >> >> -- >> Javi Lavandeira >> >> *Twitter*: @javilm >> *Blog*: http://www.lavandeira.net/blog >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javi at lavandeira.net Mon Aug 20 12:42:51 2012 From: javi at lavandeira.net (Javi Lavandeira) Date: Mon, 20 Aug 2012 21:42:51 +0900 Subject: user authentication with nginx In-Reply-To: References: Message-ID: <2E44FA60-6A69-4EED-828D-83A307614D87@lavandeira.net> From what you've told us so far, it looks like you just want an HTML form and a CGI to process it and then send some HTML back to the user. You don't need to complicate things too much. 
Just set up FastCGI with your scripting language of choice, and don't worry about sending back an HTTP status code to NGINX. The web server needs to know the code only if you're going to implement error pages for common HTTP errors. Most of the time you just send a human-readable error message with your HTML. -- Javi Lavandeira Twitter: @javilm Blog: http://www.lavandeira.net/blog On 2012/08/20, at 21:33, Bob Stanton wrote: > How else would you do it? > > I don't want to use basic_auth because I want to be able to style my own form. > > > > On Mon, Aug 20, 2012 at 7:06 AM, Javi Lavandeira wrote: > Make your CGI/PHP/Python/Perl script return a "Status: xxx" header. > > I'm curious, why do you need to do it this way? > > > -- > Javi Lavandeira > > Twitter: @javilm > Blog: http://www.lavandeira.net/blog > > On 2012/08/20, at 19:42, Bob Stanton wrote: > >> Sorry for my lack of precision. I know how to do all the below, I just don't know how to tell nginx whether or not user authentication was successful. >> >> On Mon, Aug 20, 2012 at 12:45 AM, Javi Lavandeira wrote: >> Hello, >> >> On 2012/08/20, at 6:32, Bob Stanton wrote: >> >>> I specifically want to supply my own form, get the username and PW, check it against my DB with a CGI program, and then pass values back to Nginx. >> >> Do you mean that you want to know how to create an HTML form, pass the parameters to a CGI, and then return an HTML output to the user? 
>> >> Regards, >> >> -- >> Javi Lavandeira >> >> Twitter: @javilm >> Blog: http://www.lavandeira.net/blog >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From farseas at gmail.com Mon Aug 20 13:22:44 2012 From: farseas at gmail.com (Bob Stanton) Date: Mon, 20 Aug 2012 09:22:44 -0400 Subject: user authentication with nginx In-Reply-To: <2E44FA60-6A69-4EED-828D-83A307614D87@lavandeira.net> References: <2E44FA60-6A69-4EED-828D-83A307614D87@lavandeira.net> Message-ID: I want to send status back to nginx because of the map directive combined with the alias directive: events { } http { map $remote_user $profile_directory { default $remote_user; } server { root /var/www/sites/dyvn/http; location / { auth_request /auth.html; alias /var/www/sites/mysite.com/http/$profile_directory/; } } } On Mon, Aug 20, 2012 at 8:42 AM, Javi Lavandeira wrote: > From what you've told us so far, it looks like you just want an HTML form > and a CGI to process it and then send some HTML back to the user. > > You don't need to complicate things too much. Just set up FastCGI with > your scripting language of choice, and don't worry about sending back an > HTTP status code to NGINX. The web server needs to know the code only if > you're going to implement error pages for common HTTP errors. Most of the > time you just send a human-readable error message with your HTML. 
> > > -- > Javi Lavandeira > > *Twitter*: @javilm > *Blog*: http://www.lavandeira.net/blog > > On 2012/08/20, at 21:33, Bob Stanton wrote: > > How else would you do it? > > I don't want to use basic_auth because I want to be able to style my own > form. > > > > On Mon, Aug 20, 2012 at 7:06 AM, Javi Lavandeira wrote: > >> Make your CGI/PHP/Python/Perl script return a "Status: xxx" header. >> >> I'm curious, why do you need to do it this way? >> >> >> -- >> Javi Lavandeira >> >> *Twitter*: @javilm >> *Blog*: http://www.lavandeira.net/blog >> >> On 2012/08/20, at 19:42, Bob Stanton wrote: >> >> Sorry for my lack of precision. I know how to do all the below, I just >> don't know how to tell nginx whether or not user authentication was >> successful. >> >> On Mon, Aug 20, 2012 at 12:45 AM, Javi Lavandeira wrote: >> >>> Hello, >>> >>> On 2012/08/20, at 6:32, Bob Stanton wrote: >>> >>> I specifically want to supply my own form, get the username and PW, >>> check it against my DB with a CGI program, and then pass values back to >>> Nginx. >>> >>> >>> Do you mean that you want to know how to create an HTML form, pass the >>> parameters to a CGI, and then return an HTML output to the user? 
>>> >>> Regards, >>> >>> -- >>> Javi Lavandeira >>> >>> *Twitter*: @javilm >>> *Blog*: http://www.lavandeira.net/blog >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javi at lavandeira.net Mon Aug 20 13:33:04 2012 From: javi at lavandeira.net (Javi Lavandeira) Date: Mon, 20 Aug 2012 22:33:04 +0900 Subject: user authentication with nginx In-Reply-To: References: <2E44FA60-6A69-4EED-828D-83A307614D87@lavandeira.net> Message-ID: <92D54EE2-3C12-4F93-A257-50E8C8C3FC4D@lavandeira.net> You don't need to talk back to NGINX for that. Just make your script return a "Location:" header redirecting the user's web browser to his profile directory. I think this is already outside of the scope of this list. Feel free to contact me in private. 
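A minimal sketch of that kind of redirect response, for illustration only: the username and the target path are made up, and any CGI language works the same way.

```python
import sys

def redirect_to_profile(username, out=sys.stdout):
    # CGI-style response: a "Status:" header plus a "Location:" header
    # sends the browser to the user's profile directory after a
    # successful credential check; the blank line ends the headers.
    out.write("Status: 302 Found\r\n")
    out.write("Location: /%s/\r\n" % username)
    out.write("\r\n")

redirect_to_profile("alice")
```

On a failed check the script would instead emit "Status: 401" (or re-render the login form); nginx itself never needs to be told the result.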
-- Javier Lavandeira http://www.lavandeira.net On Aug 20, 2012, at 22:22, Bob Stanton wrote: > I want to send status back to nginx because of the map directive combined with the alias directive: > > events { > } > http { > map $remote_user $profile_directory { > default $remote_user; > } > server { > root /var/www/sites/dyvn/http; > location / { > auth_request /auth.html; > alias /var/www/sites/mysite.com/http/$profile_directory/; > } > } > } > > > > > > On Mon, Aug 20, 2012 at 8:42 AM, Javi Lavandeira wrote: > From what you've told us so far, it looks like you just want an HTML form and a CGI to process it and then send some HTML back to the user. > > You don't need to complicate things too much. Just set up FastCGI with your scripting language of choice, and don't worry about sending back an HTTP status code to NGINX. The web server needs to know the code only if you're going to implement error pages for common HTTP errors. Most of the time you just send a human-readable error message with your HTML. > > > -- > Javi Lavandeira > > Twitter: @javilm > Blog: http://www.lavandeira.net/blog > > On 2012/08/20, at 21:33, Bob Stanton wrote: > >> How else would you do it? >> >> I don't want to use basic_auth because I want to be able to style my own form. >> >> >> >> On Mon, Aug 20, 2012 at 7:06 AM, Javi Lavandeira wrote: >> Make your CGI/PHP/Python/Perl script return a "Status: xxx" header. >> >> I'm curious, why do you need to do it this way? >> >> >> -- >> Javi Lavandeira >> >> Twitter: @javilm >> Blog: http://www.lavandeira.net/blog >> >> On 2012/08/20, at 19:42, Bob Stanton wrote: >> >>> Sorry for my lack of precision. I know how to do all the below, I just don't know how to tell nginx whether or not user authentication was successful. 
>>> >>> On Mon, Aug 20, 2012 at 12:45 AM, Javi Lavandeira wrote: >>> Hello, >>> >>> On 2012/08/20, at 6:32, Bob Stanton wrote: >>> >>>> I specifically want to supply my own form, get the username and PW, check it against my DB with a CGI program, and then pass values back to Nginx. >>> >>> Do you mean that you want to know how to create an HTML form, pass the parameters to a CGI, and then return an HTML output to the user? >>> >>> Regards, >>> >>> -- >>> Javi Lavandeira >>> >>> Twitter: @javilm >>> Blog: http://www.lavandeira.net/blog >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Mon Aug 20 19:18:59 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 20 Aug 2012 20:18:59 +0100 Subject: Upload Module - add response header with MD5 In-Reply-To: References: <20120818233656.GS32371@craic.sysops.org> Message-ID: <20120820191859.GW32371@craic.sysops.org> On Mon, Aug 20, 2012 at 03:28:59AM -0400, mokriy wrote: Hi there, > Basically, I am using the one from Valeriy Kholodkov: > http://www.grid.net.ru/nginx/upload.ru.html > > upload_add_header feature appeared in v 2.2, I guess: Ah, you're using a development work-in-progress version. upload_add_header is not in the released 2.2.0 tarball. > https://github.com/vkholodkov/nginx-upload-module/blob/2.2/ngx_http_upload_module.c > I can find it in C code - 'upload_add_header'. I guess you've found a bug in the development code. > However the MD5 hashes returned in response header are.. not hashes. > They are: > 34383765656261393461336230613362 OR > 34396436323764343839623564646664 It looks like it isn't a new bug in the post-2.2.0 code; but upload_add_header makes it easier to expose. It appears that (at least) for the aggregate variables, accessing one more than once will cause the previous value to be read as bytes, and written in hexadecimal ascii representation. So in your first example above, the md5sum began 487eeba94a3b0a3b. Test with something like upload_aggregate_form_field "${upload_field_name}_md5" $upload_file_md5; upload_aggregate_form_field "${upload_field_name}_md6" $upload_file_md5; upload_aggregate_form_field "${upload_field_name}_md7" $upload_file_md5; upload_aggregate_form_field "${upload_field_name}_md8" $upload_file_md5; and you'll see that the later values include increasing strings of 3s. (Of course, it was never sane to use $upload_file_md5 more than once, so this didn't matter. But now it is, and it does.) 
-- Francis Daly francis at daoine.org From smallfish.xy at gmail.com Tue Aug 21 02:42:43 2012 From: smallfish.xy at gmail.com (smallfish) Date: Tue, 21 Aug 2012 10:42:43 +0800 Subject: user authentication with nginx In-Reply-To: References: Message-ID: Try the ngx_lua module; in Lua code you can check the password against the DB, etc. Example of HTTP 401 auth (simple Chinese version): http://chenxiaoyu.org/2012/02/08/nginx-lua-401-auth.html -- blog: http://chenxiaoyu.org On Mon, Aug 20, 2012 at 5:32 AM, Bob Stanton wrote: > I want to find a secure but simple method for authenticating users in an > Nginx environment. > > I have succeeded in figuring out the auth_basic mod but that does not meet > my needs. > > I specifically want to supply my own form, get the username and PW, check > it against my DB with a CGI program, and then pass values back to Nginx. > > I need explained examples for nginx.conf. I can write my own backend in C > for authentication, that is not part of my question. > > 1) What should nginx.conf look like? > 2) How are values passed back to nginx? signals? function return codes? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 21 03:14:20 2012 From: nginx-forum at nginx.us (d2radio) Date: Mon, 20 Aug 2012 23:14:20 -0400 (EDT) Subject: Upstream SSL IIS Performance In-Reply-To: <20120820114244.GE40452@mdounin.ru> References: <20120820114244.GE40452@mdounin.ru> Message-ID: <65a343508ba41afb3f1790066d8c7eeb.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Maxim Dounin Wrote: ------------------------------------------------------- > Hello!
> > On Sun, Aug 19, 2012 at 11:06:22PM -0400, d2radio wrote: > > > Thanks Francis, > > > > Yes I suspected that it was somehow renegotiating the ssl handshake > for each > > request where as firefox/firebug was caching the handshake thus > showing > > quicker response times. > > > > Timing curl over https gave me an average of 80ms response time, > timing curl > > over http gave me an average of 10ms similar to what nginx was > achieving > > talking to the backend via http. > > > > I'm happy to annouce though that your were bang on the money with > the > > keepalive directive. As soon as I added that into my upstream > declaration > > the reponse times dropped considerably and I'm now getting > performance > > similar to as if I was requesting the content directly from the > upstream > > server. > > > > Thanks Francis your a legend :) > > Strange thing is that SSL session reuse doesn't work for you. It > is on by default and should do more or less the same thing unless > you've switched it off with proxy_ssl_session_reuse[1] directive or > forgot to configure session cache on your backend server. > > (Another question to consider is whether you really need to spend > resources on SSL between nginx and your backend.) > > [1] http://nginx.org/r/proxy_ssl_session_reuse > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks, Yes I thought it was strange that ssl session reuse didn't work either as I thought that had been enabled by default in a recent release. I can confirm that we don't have the directive proxy_ssl_session_reuse set in any of the config files and we have left the upstream server caching settings at their defaults which I think for IIS 6.0 is 5 minutes if I remember correctly. 
Yes your correct, I would agree that it's probably not the best approach to be talking to a upstream server via HTTPS but unfortunatly at the moment that's not an option due to how the upstream applications work which weren't written by me. Thanks for your time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229909,229936#msg-229936 From nginx-forum at nginx.us Tue Aug 21 06:03:54 2012 From: nginx-forum at nginx.us (medialy) Date: Tue, 21 Aug 2012 02:03:54 -0400 (EDT) Subject: backup resolver possible? Message-ID: version: nginx 1.2.0 on centos 5.5 64bit Nginx works as a http proxy. There are two resolver: ip1 and ip2. When ip1 fails on some url, nginx gives 502 error, but ip2 works on the url. Can multi-resolver be used as master-slave? nginx may try slave resolver when master resolver fails. worker_processes 1; error_log /dev/null crit; worker_rlimit_nofile 65535; events { use epoll; worker_connections 65535; } http { charset off; override_charset off; proxy_buffering off; gzip off; access_log off; resolver ip1,ip2; proxy_buffer_size 64k; proxy_buffers 4 64k; server { listen 1002; location / { proxy_set_header Host $host; proxy_set_header Accept-Encoding ""; proxy_pass http://$http_host$request_uri; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229938,229938#msg-229938 From nginx-forum at nginx.us Tue Aug 21 07:23:09 2012 From: nginx-forum at nginx.us (liangrubo) Date: Tue, 21 Aug 2012 03:23:09 -0400 (EDT) Subject: Tuning workers and connections In-Reply-To: <33c66c80907011823r7c6224e8w2945765c150f1f55@forum.nginx.org> References: <33c66c80907011823r7c6224e8w2945765c150f1f55@forum.nginx.org> Message-ID: <1bb22d04200e00c8cb502dd6daff02ab.NginxMailingListEnglish@forum.nginx.org> Hello, we meet similar problem. if we set "multi_accept off;" and "accept_mutex on;", it sometimes took 3 or 9 or even 21 seconds to connect to nginx on port 80 from the same server. 
if we set "multi_accept on;" and "accept_mutex off;", we can connect to nginx on port 80 instantly but it may take long to get the response (more than 3 seconds for small css files on the same server). Our deploy structure is as follows: frontend nginx listens on port 80, serving static files. For dynamic requests, nginx on port 80 reverse proxies to nginx on port 81 via tcp socket, and nginx on port 81 in turn reverse proxies to uwsgi via unix socket. the site is handling about 1200 requests per second. the server has 24 cpu cores and 96G memory. the system has low load, lots of free memory and no IO bottleneck. some of the nginx configurations are as follows: worker_processes 8; //it was originally 4, I doubled it but it didn't help worker_rlimit_nofile 15240; worker_connections 15240; use epoll; we have keepalive setup for upstream (80 to 81 reverse proxy) as follows: keepalive 32; information shown by stub_status: Active connections: 4320 server accepts handled requests 9012300 9012300 37349248 Reading: 118 Writing: 199 Waiting: 4003 we examined that the dynamic request handling is fast, and even if it was slow, it should not affect static file serving on port 80 anyway, right? it seems nginx can't process the requests fast enough but we can't find the bottleneck. any help is greatly appreciated.
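One detail worth double-checking in the 80-to-81 hop described above: an upstream `keepalive` pool is only actually used for proxying if nginx speaks HTTP/1.1 to the upstream and suppresses the default `Connection: close` header. A sketch of what that looks like — the upstream name is made up, the ports follow the poster's description, and this is only a guess at the bottleneck:

```nginx
upstream backend81 {
    server 127.0.0.1:81;
    keepalive 32;                        # idle connections cached per worker
}

server {
    listen 80;

    location /dynamic/ {
        proxy_pass http://backend81;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1 ...
        proxy_set_header Connection "";  # ... and no "Connection: close"
    }
}
```

Without the last two directives, every proxied request opens a fresh TCP connection to port 81, which at 1200 r/s can pile up TIME_WAIT sockets and slow the front end down.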
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3638,229939#msg-229939 From nginx-forum at nginx.us Tue Aug 21 07:48:51 2012 From: nginx-forum at nginx.us (liangrubo) Date: Tue, 21 Aug 2012 03:48:51 -0400 (EDT) Subject: Tuning workers and connections In-Reply-To: <1bb22d04200e00c8cb502dd6daff02ab.NginxMailingListEnglish@forum.nginx.org> References: <33c66c80907011823r7c6224e8w2945765c150f1f55@forum.nginx.org> <1bb22d04200e00c8cb502dd6daff02ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5be2d14a9ed9eebd66e084ab46b5ebed.NginxMailingListEnglish@forum.nginx.org> Sorry, some of my previous description is not correct: we were servering the static files on port 81 with nginx, static file serving becomes faster when we change it to be served by nginx on port 80, it seems that communication between nginx on port 80 and port 81 is the bottleneck, how can we resolve this issue? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3638,229940#msg-229940 From nginx-forum at nginx.us Tue Aug 21 07:52:45 2012 From: nginx-forum at nginx.us (mokriy) Date: Tue, 21 Aug 2012 03:52:45 -0400 (EDT) Subject: Upload Module - add response header with MD5 In-Reply-To: <20120820191859.GW32371@craic.sysops.org> References: <20120820191859.GW32371@craic.sysops.org> Message-ID: Oh, many thanks Francis.. That is tricky. Ok, will try. To get my current flow working, I will use variable assignment in nginx and set variable a value with aggregate field. So, I can reuse it without calling aggregation several times. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229882,229941#msg-229941 From summerxyt at gmail.com Tue Aug 21 08:06:36 2012 From: summerxyt at gmail.com (=?GB2312?B?z8TStczt?=) Date: Tue, 21 Aug 2012 16:06:36 +0800 Subject: how to only enable TLS1.1 or TLS1.2 only Message-ID: Hi all, I want to only use TLS1.1 or above with the nginx. I searched on the Internet but there is only information about how to enable ssl/tls with the nginx. 
I can write "ssl_protocols SSLv3 SSLv2 TLSv1;" in the nginx.conf to enable sslv3,sslv2 and tlsv1. So what can I do if I want to only enable tls1.1 or above? Thanks! From valery+nginxen at grid.net.ru Tue Aug 21 08:37:53 2012 From: valery+nginxen at grid.net.ru (Valery Kholodkov) Date: Tue, 21 Aug 2012 09:37:53 +0100 (BST) Subject: Upload Module - add response header with MD5 In-Reply-To: Message-ID: <14708042.17136.1345538273013.JavaMail.root@zone.mtgsy.net> Aggregate variables cannot be used in upload_add_header directive, because they are not valid outside of a request body part. ----- mokriy wrote: > Hi! > here is the snuppet of configuration I use for uploading: > location /upload { > auth_request /auth; > upload_pass /200; > upload_store /download 1; > > upload_resumable on; > > #Return download path to calling party in custom header > value > upload_add_header XXX-DownloadURI "$upload_tmp_path"; > #Return md5 file checksum to calling party as custome header > value > upload_add_header X-XXX-MD5SizeChecksum "$upload_file_md5"; > upload_add_header X-XXXXr-FileSize "$upload_file_size"; > > upload_aggregate_form_field "$upload_field_name.md5" > "$upload_file_md5"; > upload_aggregate_form_field "$upload_field_name.size" > "$upload_file_size"; > > It works perfect, great job, thanks! Specially about upload_add_header. > However the MD5 hashes returned in response header are.. not hashes. > They are: > 34383765656261393461336230613362 OR > 34396436323764343839623564646664 > I can't rely on them on caller side.. > > Could it be a problem with add_header? I assume that aggregate field is > generated correctly, however when setting it to header via > upload_add_header, there is an convertion mistake. > > Many thanks in advance! 
> You are doing great Job -- Regards, Valery Kholodkov From stef at scaleengine.com Tue Aug 21 09:14:39 2012 From: stef at scaleengine.com (Stefan Caunter) Date: Tue, 21 Aug 2012 05:14:39 -0400 Subject: Tuning workers and connections In-Reply-To: <1bb22d04200e00c8cb502dd6daff02ab.NginxMailingListEnglish@forum.nginx.org> References: <33c66c80907011823r7c6224e8w2945765c150f1f55@forum.nginx.org> <1bb22d04200e00c8cb502dd6daff02ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Aug 21, 2012 at 3:23 AM, liangrubo wrote: > Hello, we meet similar problem. > > if we set "multi_accept off;" and "accept_mutex on;", it sometimes took 3 or > 9 or even 21 seconds to connect to nginx on port 80 from the same server. if > we set "multi_accept on;" and "accept_mutex off;", we can connect to nginx > on port 80 instantly but it may take long to get the response (more than 3 > seconds for small css files on the same server). > > Our deploy structure is as follows: > frontend nginx listen on port 80, servering static files. For dynamic > requests, nginx on port 80 reverse proxy to nginx on port 81 via tcp socket, > nginx on port 81 again reverse proxy to uwsgi via unix sokcet. > > the site is handling about 1200 requests per second. > > the server has 24 cpu cores and 96G memory. the system has low load, lots > of free memory and no IO bottleneck. What is the system load? You have a bottleneck, you just have not found it. If static file access is problematic, look to the disk i/o subsystem. Are you caching? How many files is nginx accessing? What is the filesystem? Show the output of a filesystem utility under 1200r/s. If nginx is accessing many small static files from one partition, is it on its own partition, or is logging writing to the same partition? Concurrent access times to disk reduce geometrically as requests grow arithmetically. At 1200 per second, nginx processes are likely waiting for the disk to find request. 
> some of the nginx configurations are as follows: > worker_processes 8; //it was originally 4, I doubled it but it didn't help > worker_rlimit_nofile 15240; > worker_connections 15240; > > use epoll; > > we have keepalive setup for upstream(80 to 81 reverse proxy) as follows: > keepalive 32; > > > information shown by status_sub: > Active connections: 4320 > server accepts handled requests > 9012300 9012300 37349248 > Reading: 118 Writing: 199 Waiting: 4003 > > we examined the dynamic request handling is fast and even if it was slow, it > should not affect static file serving on port 80 anyway, right? Right. Your dynamic processing is not i/o bound from what you have described. > it seems nginx can't process the request fast enough but we can't find what > is the bottleneck. > > any help is greatly appreciated. > Stefan Caunter From luky-37 at hotmail.com Tue Aug 21 09:41:53 2012 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 21 Aug 2012 11:41:53 +0200 Subject: how to only enable TLS1.1 or TLS1.2 only In-Reply-To: References: Message-ID: Please consult the docs: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols Upgrade to at least 1.1.13 and 1.0.12 and specify ssl_protocols TLSv1.1 TLSv1.2; ---------------------------------------- > Date: Tue, 21 Aug 2012 16:06:36 +0800 > Subject: how to only enable TLS1.1 or TLS1.2 only > From: summerxyt at gmail.com > To: nginx at nginx.org > > Hi all, > > I want to only use TLS1.1 or above with the nginx. I searched on the > Internet but there is only information about how to enable ssl/tls > with the nginx. I can write > > "ssl_protocols SSLv3 SSLv2 TLSv1;" > > in the nginx.conf to enable sslv3,sslv2 and tlsv1. > > So what can I do if I want to only enable tls1.1 or above? > > Thanks!
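In context, the directive goes into the SSL server block. A minimal sketch — the certificate paths are placeholders; note also that nginx must be built against an OpenSSL that supports TLS 1.1/1.2, i.e. OpenSSL 1.0.1 or later:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # Offer only TLS 1.1 and 1.2; SSLv2/SSLv3/TLSv1-only clients
    # will fail the handshake.
    ssl_protocols TLSv1.1 TLSv1.2;
}
```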
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Aug 21 12:30:16 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Aug 2012 16:30:16 +0400 Subject: Accept connections and close connection hooks in custom module In-Reply-To: <3e631f9f3ea0f1d16f22bd4647d42a3a.NginxMailingListEnglish@forum.nginx.org> References: <094e9a88aa683f6230d5f72d481d4ee9.NginxMailingListEnglish@forum.nginx.org> <3e631f9f3ea0f1d16f22bd4647d42a3a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120821123016.GL40452@mdounin.ru> Hello! On Mon, Aug 20, 2012 at 06:25:09AM -0400, ConnorMcLaud wrote: > I think I've finally understood. There is global structure ngx_event_actions > with add_conn/del_conn handlers which supposed to do what I want. There is > only one issue (probably a bug) with add_conn handler. In version 1.2.2 with > epoll usage handler add_conn never called. You should use add handler > instead and check incoming parameters (event = NGX_READ_EVENT, flags = > NGX_CLEAR_REQUEST). Hope it helps someone. The add_conn/del_conn handlers isn't what you want to touch from your module, it's handlers for event modules to register connections in an event handling machinery. These hooks are called not only for client connections, but e.g. for nginx own internal connections as well, and for listening sockets also. Moreover, nginx core is aware of aspects of various event methods supported and might not call add_conn/del_conn even if the are set. The issue with epoll you've hit is just one of the cases. Right now there is no good way to hook new accepted connections unless you are writing your own core module which creates listening sockets by itself. Most recent point for correct hooks available as of now for http modules is NGX_HTTP_POST_READ_PHASE. 
Maxim Dounin From j.boggiano at seld.be Tue Aug 21 12:58:37 2012 From: j.boggiano at seld.be (Jordi Boggiano) Date: Tue, 21 Aug 2012 14:58:37 +0200 Subject: Issue with SNI/SSL and default_server Message-ID: <503385FD.8070903@seld.be> Heya, I have a server with two domains using SSL on one IP via SNI. So far so good, but the problem is that one of the site is marked as default_server to catch all (then I do a redirect to the proper domain, I left out some parts of the config below for conciseness). The problem is, if you have a ssl server marked as default_server, it seems to take over everything else, and domainb.com is not reachable via SSL anymore. server { listen 80 default_server; server_name domaina.com ; } server { listen 443 ssl default_server; server_name domaina.com ; } server { listen 80; server_name domainb.com; } server { listen 443 ssl; server_name domainb.com ; } The workaround I found is the following: I put the IP in the server_name, and therefore can remove the default_server flag from the ssl server (it's not completely equivalent, but close enough for my purposes). The problem is that it needs the server public IP in, which isn't ideal to have generic vhost templates in puppet: server { listen 80 default_server; server_name domaina.com ; } server { listen 443 ssl; server_name domaina.com ; } server { listen 80; server_name domainb.com; } server { listen 443 ssl; server_name domainb.com ; } I am not sure whether this is a bug or an expected feature, which is why I am writing here. Cheers -- Jordi Boggiano @seldaek - http://nelm.io/jordi From mdounin at mdounin.ru Tue Aug 21 13:28:05 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Aug 2012 17:28:05 +0400 Subject: nginx-1.3.5 Message-ID: <20120821132805.GM40452@mdounin.ru> Changes with nginx 1.3.5 21 Aug 2012 *) Change: the ngx_http_mp4_module module no longer skips tracks in formats other than H.264 and AAC. 
*) Bugfix: a segmentation fault might occur in a worker process if the "map" directive was used with variables as values. *) Bugfix: a segmentation fault might occur in a worker process if the "geo" directive was used with the "ranges" parameter but without the "default" parameter; the bug had appeared in 0.8.43. Thanks to Zhen Chen and Weibin Yao. *) Bugfix: in the -p command-line parameter handling. *) Bugfix: in the mail proxy server. *) Bugfix: of minor potential bugs. Thanks to Coverity. *) Bugfix: nginx/Windows could not be built with Visual Studio 2005 Express. Thanks to HAYASHI Kentaro. Maxim Dounin From matteo.picciolini at gmail.com Tue Aug 21 14:36:04 2012 From: matteo.picciolini at gmail.com (Matteo Picciolini) Date: Tue, 21 Aug 2012 16:36:04 +0200 Subject: help on ngnix configuration Message-ID: Hi guys, I have a problem with my web server configuration. I have one virtual machine with nginx and 5 virtual machines with Apache. I want to reverse proxy from nginx to Apache, but I have only one domain: mydomain.com. So when I type mydomain.com/abc, nginx proxies to virtual machine 1, where the abc application is. proxy_pass works, but I see a blank page; the CSS and images do not appear. I think the problem is that nginx looks for the static content on localhost, not on the virtual machine. Can anyone help me? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 21 14:46:32 2012 From: nginx-forum at nginx.us (ConnorMcLaud) Date: Tue, 21 Aug 2012 10:46:32 -0400 (EDT) Subject: Accept connections and close connection hooks in custom module In-Reply-To: <20120821123016.GL40452@mdounin.ru> References: <20120821123016.GL40452@mdounin.ru> Message-ID: In fact, I mostly need to catch client disconnection. There is no good way for that either, just as for accepted connections, is there?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229100,229958#msg-229958 From nginx-forum at nginx.us Tue Aug 21 14:54:46 2012 From: nginx-forum at nginx.us (rahul2k6.nitjsr) Date: Tue, 21 Aug 2012 10:54:46 -0400 (EDT) Subject: Stressing nginx In-Reply-To: References: <73ea0d3c3eac97e69a33bd523154a20e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Did you get the answer to this question? I am facing similar issue on nginx. 2012/08/21 06:29:02 [debug] 27567#0: *13 http fastcgi record byte: 00 2012/08/21 06:29:02 [debug] 27567#0: *13 http fastcgi record byte: 01 2012/08/21 06:29:02 [debug] 27567#0: *13 http fastcgi record byte: 00 2012/08/21 06:29:02 [debug] 27567#0: *13 http fastcgi record byte: 00 2012/08/21 06:29:02 [debug] 27567#0: *13 http fastcgi record byte: 00 2012/08/21 06:29:02 [debug] 27567#0: *13 http fastcgi record byte: 00 2012/08/21 06:29:02 [debug] 27567#0: *13 http fastcgi record length: 0 2012/08/21 06:29:02 [error] 27567#0: *13 upstream closed prematurely FastC GI stdout while reading response header from upstream, client: 10.11.18.32 , server: , request: "GET /cgi-mod/nph-export_log.cgi?et=1357555736&primar y_tab=LOG&password=852c5469f9ba3c8bdf5bc8964ea3aded&auth_type=Local&user=a dmin&locale=en_US&secondary_tab=vpn_log&log=vpn_log HTTP/1.1", upstream: " fastcgi://unix:/var/run/fcgi_socket:", host: "10.11.22.51", referrer: "htt ps://10.11.22.51/cgi-mod/index.cgi?auth_type=Local&et=1357555726&locale=en _US&password=066f426d95d849b1d9d6f15849009e80&user=admin&primary_tab=LOG&s econdary_tab=vpn_log" 2012/08/21 06:29:02 [debug] 27567#0: *13 http next upstream, 8 2012/08/21 06:29:02 [debug] 27567#0: *13 free rr peer 1 4 2012/08/21 06:29:02 [debug] 27567#0: *13 finalize http upstream request: 5 02 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,62672,229959#msg-229959 From kevin at ouelong.com Tue Aug 21 15:12:21 2012 From: kevin at ouelong.com (Kevin Pratt) Date: Tue, 21 Aug 2012 18:12:21 +0300 Subject: Error with basic auth on 
windows Message-ID: Hello, I'm trying to setup basic auth on windows and i'm getting an error: CreateFile() "C:\Jira\nginx-1.2.3/conf/htpasswd" failed (2: The system cannot find the file specified), I've searched all over but I've not been able to find an answer to why this would be happening. has anyone else seen this issue? Thanks Kevin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue Aug 21 17:49:40 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 21 Aug 2012 13:49:40 -0400 Subject: nginx-1.3.5 In-Reply-To: <20120821132805.GM40452@mdounin.ru> References: <20120821132805.GM40452@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.5 For Windows http://goo.gl/fGm3c (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream (http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Aug 21, 2012 at 9:28 AM, Maxim Dounin wrote: > Changes with nginx 1.3.5 21 Aug 2012 > > *) Change: the ngx_http_mp4_module module no longer skips tracks in > formats other than H.264 and AAC. > > *) Bugfix: a segmentation fault might occur in a worker process if the > "map" directive was used with variables as values. > > *) Bugfix: a segmentation fault might occur in a worker process if the > "geo" directive was used with the "ranges" parameter but without the > "default" parameter; the bug had appeared in 0.8.43. > Thanks to Zhen Chen and Weibin Yao. > > *) Bugfix: in the -p command-line parameter handling. > > *) Bugfix: in the mail proxy server. > > *) Bugfix: of minor potential bugs. > Thanks to Coverity. 
> > *) Bugfix: nginx/Windows could not be built with Visual Studio 2005 > Express. > Thanks to HAYASHI Kentaro. > > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From summerxyt at gmail.com Wed Aug 22 01:01:02 2012 From: summerxyt at gmail.com (=?GB2312?B?z8TStczt?=) Date: Wed, 22 Aug 2012 09:01:02 +0800 Subject: how to only enable TLS1.1 or TLS1.2 only In-Reply-To: References: Message-ID: thank you very much! 2012/8/21 Lukas Tribus : > > Please consult the docs: > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols > > > Upgrade to at least 1.1.13 and 1.0.12 and specify > > ssl_protocols TLSv1.1 TLSv1.2; > > > > > > ---------------------------------------- >> Date: Tue, 21 Aug 2012 16:06:36 +0800 >> Subject: how to only enable TLS1.1 or TLS1.2 only >> From: summerxyt at gmail.com >> To: nginx at nginx.org >> >> Hi all, >> >> I want to only use TLS1.1 or above with the nginx. I searched on the >> Internet but there is only information about how to enable ssl/tls >> with the nginx. I can write >> >> "ssl_protocols SSLv3 SSLv2 TLSv1;" >> >> in the nginx.conf to enable sslv3,sslv2 and tlsv1. >> >> So what can I do if I want to only enable tls1.1 or above? >> >> Thanks! >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From cickumqt at gmail.com Wed Aug 22 04:38:23 2012 From: cickumqt at gmail.com (Christopher Meng) Date: Wed, 22 Aug 2012 12:38:23 +0800 Subject: Too many 400 bad request Message-ID: Hi everyone, I've deployed a server for rsync public mirror,and I use nginx as web server to display some html for visitors. Now the problem is, too many 400 errors. 
I paste "only a small amount of them" below: ================================= 216.216.33.242 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 174.128.254.98 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 216.216.33.242 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 207.99.82.229 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 173.255.246.181 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 206.53.187.66 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 173.255.246.181 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 216.40.205.154 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 67.222.104.104 - - [21/Aug/2012:03:02:05 -0700] "GET /centos/5.8/extras/x86_64/repodata/repomd.xml HTTP/1.1" 200 2146 "-" "urlgrabber/3.1.0 yum/3.2.22" "-" 75.125.106.50 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 75.125.106.50 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 64.28.85.10 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 38.107.176.34 - - [21/Aug/2012:03:02:05 -0700] "GET /centos/5.8/os/i386/repodata/repomd.xml HTTP/1.1" 200 1140 "-" "urlgrabber/3.1.0 yum/3.2.22" "-" 174.36.195.24 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 132.239.242.24 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 64.156.195.66 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 199.180.132.136 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 173.228.120.18 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 209.119.87.126 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 174.137.174.201 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" 208.70.208.200 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" ==================================== Well,this really make me confused.And I've asked other mirror admins who use nginx,they all have this problem. Is it caused by nginx? Do anyone know the solution? Thanks. 
*Yours sincerely,* *Christopher Meng* Ambassador/Contributor of Fedora Project and many others. http://cicku.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Aug 22 06:23:16 2012 From: nginx-forum at nginx.us (slevytam) Date: Wed, 22 Aug 2012 02:23:16 -0400 (EDT) Subject: Rewrite a Shortened URL to a Pretty URL? Message-ID: <90c9fbfa01b32937ef7afeb2bc3caed7.NginxMailingListEnglish@forum.nginx.org> Hi, I was hoping someone could help me with a rewrite question. Currently, I use a basic rewrite for my url shortener. if (!-e $request_filename) { rewrite ^/(.*)$ /entry/index.php?id=$1 permanent; } This forwards a url like http://www.domain.com/1234 to http://www.domain.com/entry/index.php?id=1234. It works great. The problem is that I've realized that pretty urls are strongly preferred by google. So I would like following scenario: http://www.domain.com/1234 to rewrite to http://www.domain.com/entry/index.php?id=1234 while showing the user http://www.domain.com/1234/this-is-the-pretty-part in the address bar Can anyone tell me what is the best way to do this? Thanks, slevytam Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229974,229974#msg-229974 From edho at myconan.net Wed Aug 22 06:31:03 2012 From: edho at myconan.net (Edho Arief) Date: Wed, 22 Aug 2012 13:31:03 +0700 Subject: Rewrite a Shortened URL to a Pretty URL? In-Reply-To: <90c9fbfa01b32937ef7afeb2bc3caed7.NginxMailingListEnglish@forum.nginx.org> References: <90c9fbfa01b32937ef7afeb2bc3caed7.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Aug 22, 2012 at 1:23 PM, slevytam wrote: > Hi, > > I was hoping someone could help me with a rewrite question. > > Currently, I use a basic rewrite for my url shortener. > > if (!-e $request_filename) { > rewrite ^/(.*)$ /entry/index.php?id=$1 permanent; > } > > This forwards a url like http://www.domain.com/1234 to > http://www.domain.com/entry/index.php?id=1234. It works great. 
> > The problem is that I've realized that pretty urls are strongly preferred by > google. > > So I would like following scenario: > http://www.domain.com/1234 to rewrite to > http://www.domain.com/entry/index.php?id=1234 while showing the user > http://www.domain.com/1234/this-is-the-pretty-part in the address bar > > Can anyone tell me what is the best way to do this? > remove the permanent keyword. (note that there may be better way to do this) From javi at lavandeira.net Wed Aug 22 08:52:02 2012 From: javi at lavandeira.net (Javi Lavandeira) Date: Wed, 22 Aug 2012 17:52:02 +0900 Subject: Rewrite a Shortened URL to a Pretty URL? In-Reply-To: <90c9fbfa01b32937ef7afeb2bc3caed7.NginxMailingListEnglish@forum.nginx.org> References: <90c9fbfa01b32937ef7afeb2bc3caed7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <72163348-52A5-4078-851C-43B41E51E077@lavandeira.net> On 2012/08/22, at 15:23, "slevytam" wrote: > Currently, I use a basic rewrite for my url shortener. > > if (!-e $request_filename) { > rewrite ^/(.*)$ /entry/index.php?id=$1 permanent; > } [...] > So I would like following scenario: > http://www.domain.com/1234 to rewrite to > http://www.domain.com/entry/index.php?id=1234 while showing the user > http://www.domain.com/1234/this-is-the-pretty-part in the address bar Just change your regexp to match everything up to the first slash in the URL and ignore the rest. From memory, this would be something like this: rewrite ^/(.*?)/.*$ /entry/index.php?id=$1 permanent; If this regexp syntax is correct (please check it, I'm replying from the subway and can't check the manpages), then this should select everything in the URL up to the first slash, assign it to the $1 positional parameter, and ignore the rest. The idea is that if your pretty URL is http://www.example.com/1234/whatever-goes-here then the regexp would match the "1234" regardless of what's after it. GoogleBot will be happy and you'll get more visitors. 
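A variation on the same idea that avoids the `if (!-e $request_filename)` check entirely is `try_files` with a named location — an untested sketch, keeping Javi's rule that the id is everything before the first slash:

```nginx
location / {
    # Serve real files and directories first; otherwise treat the
    # URI as a shortened id.
    try_files $uri $uri/ @shortlink;
}

location @shortlink {
    # /1234 and /1234/this-is-the-pretty-part both map to id=1234.
    # "last" rewrites internally, so the pretty URL stays in the
    # visitor's address bar.
    rewrite ^/([^/]+) /entry/index.php?id=$1 last;
}
```

This also sidesteps the pitfalls of `if` inside `location` blocks, which the nginx wiki warns against.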
Regards, -- Javi Lavandeira Twitter: @javilm Blog: http://www.lavandeira.net/blog -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Aug 22 09:26:32 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Aug 2012 13:26:32 +0400 Subject: Too many 400 bad request In-Reply-To: References: Message-ID: <20120822092631.GW40452@mdounin.ru> Hello! On Wed, Aug 22, 2012 at 12:38:23PM +0800, Christopher Meng wrote: > Hi everyone, > > I've deployed a server for rsync public mirror,and I use nginx as web > server to display some html for visitors. > > Now the problem is, too many 400 errors. > > I paste "only a small amount of them" below: > > ================================= > 216.216.33.242 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" > 174.128.254.98 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" > 216.216.33.242 - - [21/Aug/2012:03:02:05 -0700] "-" 400 0 "-" "-" "-" Try looking e.g. here for a possible cause: http://mailman.nginx.org/pipermail/nginx/2011-September/029045.html At least Chrome is known to cause such log entries if there aren't many resources on a page requested due to additional connections being open and then closed without sending any single request in them. [...] > Well,this really make me confused.And I've asked other mirror admins who > use nginx,they all have this problem. > > Is it caused by nginx? > Do anyone know the solution? > Thanks. It's just a way how nginx logs connections which are just opened and then closed. Confusing part is that some other web servers doesn't seem to log anything in this case. Maxim Dounin From cickumqt at gmail.com Wed Aug 22 10:49:19 2012 From: cickumqt at gmail.com (Christopher Meng) Date: Wed, 22 Aug 2012 18:49:19 +0800 Subject: Too many 400 bad request In-Reply-To: <20120822092631.GW40452@mdounin.ru> References: <20120822092631.GW40452@mdounin.ru> Message-ID: I think I've met empty request....I will do a test tonight,thanks. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Aug 22 11:07:42 2012 From: nginx-forum at nginx.us (yashgt) Date: Wed, 22 Aug 2012 07:07:42 -0400 (EDT) Subject: index directive not taking effect in case of alias Message-ID: <7f8170294f2783239f207fcf50bd8446.NginxMailingListEnglish@forum.nginx.org> I would like the pnp4nagios app to be accessed as: http://server/pnp4nagios/. With this URL it gives a 403 Forbidden error. It works when I give: http://server/pnp4nagios/index.php. What might be wrong? Here is the pnp4nagios location config: location ~ /pnp4nagios { alias $pnp4nagiosbase/share; auth_basic "Nagios Restricted"; ## Message shown in login window auth_basic_user_file $nagiosbase/etc/htpasswd.users ; ## See /etc/nginx/htpassword index index.php; try_files $uri $uri/ @handler ; ## If missing pass the URI to Magento's front handler location ~ /([^\/]*\.php)$ { alias $pnp4nagiosbase/share/$1; #include /etc/nginx/fastcgi.conf; fastcgi_pass $fcgipass; fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; include fastcgi_params; ## See /etc/nginx/fastcgi_params } location ~ \.php$ { #include /etc/nginx/fastcgi.conf; include fastcgi_params; ## See /etc/nginx/fastcgi_params fastcgi_pass $fcgipass; } } Thanks, Yash Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229985,229985#msg-229985 From mdounin at mdounin.ru Wed Aug 22 11:34:29 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Aug 2012 15:34:29 +0400 Subject: index directive not taking effect in case of alias In-Reply-To: <7f8170294f2783239f207fcf50bd8446.NginxMailingListEnglish@forum.nginx.org> References: <7f8170294f2783239f207fcf50bd8446.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120822113429.GB40452@mdounin.ru> Hello! On Wed, Aug 22, 2012 at 07:07:42AM -0400, yashgt wrote: > I would like the pnp4nagios app to be accessed as: > http://server/pnp4nagios/. With this URL it gives a 403 Forbidden error. 
It > works when I give: http://server/pnp4nagios/index.php. > What might be wrong? > > Here is the pnp4nagios location config: > > location ~ /pnp4nagios { > alias $pnp4nagiosbase/share; - location ~ /pnp4nagios { + location /pnp4nagios { [...] Maxim Dounin From chris+nginx at schug.net Wed Aug 22 11:52:59 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Wed, 22 Aug 2012 13:52:59 +0200 Subject: MIME type oddity when using try_files in combination with location/alias using regex captures In-Reply-To: <6a68df70c093f8cca848a3a38d6efbc0@schug.net> References: <6a68df70c093f8cca848a3a38d6efbc0@schug.net> Message-ID: <49718a960cd8434cfaf7318ea5341f85@schug.net> On 2012-08-02 07:39, Christoph Schug wrote: > Given is following minimized test case > > server { > listen 80; > server_name t1.example.com; > > root /data/web/t1.example.com/htdoc; > > location ~ ^/quux(/.*)?$ { > alias /data/web/t1.example.com/htdoc$1; > try_files '' =404; > } > } > > on Nginx 1.3.4 (but not specific to that version) > > # nginx -V > nginx version: nginx/1.3.4 > TLS SNI support enabled > configure arguments: --prefix=/usr/share/nginx > --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx > --http-log-path=/var/log/nginx/access.log > --error-log-path=/var/log/nginx/error.log > --pid-path=/var/run/nginx.pid --user=nginx --group=nginx > --with-openssl=openssl-1.0.1c --with-debug > --with-http_stub_status_module --with-http_ssl_module --with-ipv6 > > and following file system layout > > # find /data/web/t1.example.com/htdoc/ > /data/web/t1.example.com/htdoc/ > /data/web/t1.example.com/htdoc/foo > /data/web/t1.example.com/htdoc/foo/bar.gif > > Accessing the file directly returns the expected 'Content-Type' > response header with the value 'image/gif' > > $ curl -s -o /dev/null -D - -H 'Host: t1.example.com' > http://127.0.0.1/foo/bar.gif > HTTP/1.1 200 OK > Server: nginx/1.3.4 > Date: Thu, 02 Aug 2012 05:13:40 GMT > Content-Type: image/gif > Content-Length: 68 > Last-Modified: 
Thu, 02 Aug 2012 05:04:56 GMT > Connection: keep-alive > ETag: "501a0a78-44" > Accept-Ranges: bytes > > Accessing the file via location /quux returns 'Content-Type' response > header with the value 'application/octet-stream' (basically it falls > back to the setting of 'default_type') > > $ curl -s -o /dev/null -D - -H 'Host: t1.example.com' > http://127.0.0.1/quux/foo/bar.gif > HTTP/1.1 200 OK > Server: nginx/1.3.4 > Date: Thu, 02 Aug 2012 05:13:42 GMT > Content-Type: application/octet-stream > Content-Length: 68 > Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT > Connection: keep-alive > ETag: "501a0a78-44" > Accept-Ranges: bytes > > It is unclear to me if this is expected behavior (and if so, I am > having a hard time to find it in the documentation) and what would be > the best way to mitigate the problem. Defining nested locations > within > the /quux one for each combination of file extension/MIME type works > but looks very wrong to me. No feedback so far, just wanted to add the the same behavior is reproducible using Nginx 1.3.5. -cs From nginx-forum at nginx.us Wed Aug 22 13:24:03 2012 From: nginx-forum at nginx.us (leki75) Date: Wed, 22 Aug 2012 09:24:03 -0400 (EDT) Subject: Using rate limitation for files smaller than the defined limit. In-Reply-To: <20120731162106.GY40452@mdounin.ru> References: <20120731162106.GY40452@mdounin.ru> Message-ID: Dear Maxim, thank you for your suggestions. Analyzing the code the following turned out about http_write_filter: 1. it is able to buffer output (eg. postpone_output) 2. can delay response before sending bytes (limit <= 0) 3. delays response after sending bytes (nsent - sent) As you mentioned we delay sending the last byte of the response and only do millisecond calculation when sending it. 
$ cat limit_rate.patch --- src/http/ngx_http_write_filter_module.c 2012-01-18 16:07:43.000000000 +0100 +++ src/http/ngx_http_write_filter_module.c 2012-08-22 12:44:03.862873715 +0200 @@ -47,9 +47,10 @@ ngx_int_t ngx_http_write_filter(ngx_http_request_t *r, ngx_chain_t *in) { - off_t size, sent, nsent, limit; + off_t size, sent, nsent, limit, nlimit; ngx_uint_t last, flush; ngx_msec_t delay; + ngx_time_t *t; ngx_chain_t *cl, *ln, **ll, *chain; ngx_connection_t *c; ngx_http_core_loc_conf_t *clcf; @@ -214,6 +215,23 @@ limit = r->limit_rate * (ngx_time() - r->start_sec + 1) - (c->sent - clcf->limit_rate_after); + if (last && size == 1) { + t = ngx_timeofday(); + + if (t->msec < r->start_msec) { + t->sec--; + t->msec += 1000; + } + + nlimit = r->limit_rate * (t->sec - r->start_sec) + + r->limit_rate * (t->msec - r->start_msec) / 1000 + - (c->sent + size - clcf->limit_rate_after); + + if (nlimit <= 0) { + limit = nlimit; + } + } + if (limit <= 0) { c->write->delayed = 1; ngx_add_timer(c->write, @@ -224,6 +242,12 @@ return NGX_AGAIN; } + if (last && limit > size - 1) { + if (size > 1) { + limit = size - 1; + } + } + if (clcf->sendfile_max_chunk && (off_t) clcf->sendfile_max_chunk < limit) { It also turned out that setting sendfile_max_chunk to a small enough value is also a solution for our problem, but this patch also works with default sendfile_max_chunk = 0 setting. Anyway, in nginx 1.2.3 source we found this: if (size == 0 && !(c->buffered & NGX_LOWLEVEL_BUFFERED)) { if (last) { r->out = NULL; c->buffered &= ~NGX_HTTP_WRITE_BUFFERED; return NGX_OK; } if (flush) { do { r->out = r->out->next; } while (r->out); c->buffered &= ~NGX_HTTP_WRITE_BUFFERED; return NGX_OK; } Instead we could use this if I am right: if (size == 0 && !(c->buffered & NGX_LOWLEVEL_BUFFERED)) { if (last || flush) { r->out = NULL; c->buffered &= ~NGX_HTTP_WRITE_BUFFERED; return NGX_OK; } Thanks for your help. 
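To make the effect of the patch concrete, here is a toy model of the two budget calculations — the stock whole-second arithmetic (with its one-second head start) versus the millisecond-precision variant the patch applies when sending the last byte. This is illustrative Python, not the C code above:

```python
def budget_whole_seconds(rate, start_sec, now_sec, sent):
    # stock nginx: limit_rate * (elapsed_seconds + 1) - bytes_sent;
    # the "+ 1" grants a full second of slack, so a small response
    # can go out immediately with no delay at all
    return rate * (now_sec - start_sec + 1) - sent

def budget_milliseconds(rate, start_ms, now_ms, sent):
    # patched path for the final byte: millisecond elapsed time,
    # no extra slack, so short transfers are paced accurately
    return rate * (now_ms - start_ms) // 1000 - sent

# 1000 B/s limit, 100 ms into the request, 500 bytes already sent:
print(budget_whole_seconds(1000, 0, 0, 500))   # 500  -> send now
print(budget_milliseconds(1000, 0, 100, 500))  # -400 -> delay
```

When the budget is negative the write is delayed, which corresponds to the `limit <= 0` branch in ngx_http_write_filter.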
Regards, Gabor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229094,229995#msg-229995 From agentzh at gmail.com Wed Aug 22 23:02:18 2012 From: agentzh at gmail.com (agentzh) Date: Wed, 22 Aug 2012 16:02:18 -0700 Subject: [ANN] ngx_openresty devel version 1.2.3.1 released In-Reply-To: References: Message-ID: Hi, guys! I am happy to announce the new development version of ngx_openresty, 1.2.3.1: http://openresty.org/#Download Special thanks go to all our contributors and users for making this happen! Below is the complete change log for this release, as compared to the last (stable) release, 1.2.1.14: * upgraded the Nginx core to 1.2.3. * see for changes. * upgraded LuaNginxModule to 0.6.2. * feature: (re)implemented the standard Lua coroutine API, which means that the user is now free to create and run their own coroutines within the boilerplate coroutine created automatically by LuaNginxModule. thanks chaoslawful and jinglong for the design and implementation. * feature: added new dtrace static probes for the user coroutine mechanism: "http-lua-coroutine-create" and "http-lua-coroutine-resume". * feature: added new dtrace static probes for the cosocket mechanism: "http-lua-socket-tcp-send-start", "http-lua-socket-tcp-receive-done", and "http-lua-socket-tcp-keepalive-buf-unread". * bugfix: the send timeout timer for downstream output was not deleted in time in our write event handler, which might result in request abortion for long running requests. thanks Demiao Lin (ldmiao) for reporting this issue. * bugfix: tcpsock:send() might send garbage if it was not the first call: we did not properly initialize the chain writer ctx for every "send()" call. thanks Zhu Dejiang for reporting this issue. * bugfix: the "ngx_http_lua_probe.h" header file was not listed in the "NGX_ADDON_DEPS" list in the "config" file. * optimize: removed unnecessary code that was for the old coroutine abortion mechanism based on Lua exceptions. 
we no longer need that at all because we have switched to using coroutine yield to abort the current coroutine for "ngx.exec", "ngx.exit", "ngx.redirect", and "ngx.req.set_uri(uri, true)". * upgraded LuaRestyDNSLibrary to 0.06. * feature: added support for MX type resource records. * feature: unrecognized types of resource records will return their raw resource data (RDATA) as the "rdata" Lua table field. * upgraded LuaRestyRedisLibrary to 0.13. * feature: added new method read_reply, mostly for using the Redis Pub/Sub API. * feature: added new class method add_commands to allow adding support for new Redis commands on-the-fly. thanks Praveen Saxena for requesting this feature. * docs: added a code sample for using the Redis transactions. * upgraded DrizzleNginxModule to 0.1.4. * bugfix: the "open socket #N left in connection" alerts would appear in the nginx error log file when the MySQL/Drizzle connection pool was used and the worker process was shutting down. * upgraded PostgresNginxModule to 1.0rc2. * bugfix: the "open socket #N left in connection" alerts would appear in the nginx error log file when the PostgreSQL connection pool was used and the worker process was shutting down. * bugfix: removed the useless http-cache related code from "ngx_postgres_upstream_finalize_request" to suppress clang warnings. * added more dtrace static probes to the Nginx core: "timer-add", "timer-del", and "timer-expire". * added more systemtap tapset functions: "ngx_chain_next", "ngx_chain_writer_ctx_out", "ngx_chain_dump", and "ngx_iovec_dump". The HTML version of the change contains some helpful hyper-links and can be browsed here: http://openresty.org/#ChangeLog1002003 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies. 
See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From nginx-forum at nginx.us Thu Aug 23 11:13:45 2012 From: nginx-forum at nginx.us (heronote) Date: Thu, 23 Aug 2012 07:13:45 -0400 (EDT) Subject: Free Nginx eBook Update In-Reply-To: <20120821132805.GM40452@mdounin.ru> References: <20120821132805.GM40452@mdounin.ru> Message-ID: http://www.heronote.com/files/nginx.htm Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229953,230028#msg-230028 From ivtoman at inet.hr Thu Aug 23 13:48:45 2012 From: ivtoman at inet.hr (Ivan Toman) Date: Thu, 23 Aug 2012 15:48:45 +0200 Subject: Firefox cache images more than cache-control: max-age Message-ID: <503634BD.6040204@inet.hr> Hello. Some visitors of my site complain about their Firefox is pulling images from cache, whereas it should not do that, because images on the site are changing few times a day. In other words, .png images that are static, always have same file name, but few times a day they get overwriten with new ones. When visitor loads web page, fresh versions of images must be displayed, but Firefox ofted pulls them from it's cache. I don't have any report of other browsers than FF is doing that problem. However, I don't found issue myself; browsing all the time with Firefox on Linux at home, and also few Firefox on Windows on work and everything is OK. This is why I cannot track issue very well. In nginx configuration I allowed little bit of caching (6 minutes) so when visitor loads page again in a short period of time they can be cached versions, but not for a long time, say few hours! 
So I set: location / { root html; index index.html index.htm; autoindex on; expires 6m; } Checking with Live HTTP Headers I'm getting these responses for images: http://46.102.240.202/images/gfs/06z/pcs/pcs_18.png GET /images/gfs/06z/pcs/pcs_18.png HTTP/1.1 Host: 46.102.240.202 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.4) Gecko/20100101 Firefox/10.0.4 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.7,hr;q=0.3 Accept-Encoding: gzip, deflate Connection: keep-alive Referer: http://maps.meteoadriatic.net/gfs-karte-aktualne/eksperimentalni-pcs-index-brn.html HTTP/1.1 200 OK Server: meteoadriatic.net Date: Thu, 23 Aug 2012 13:05:55 GMT Content-Type: image/png Content-Length: 31313 Last-Modified: Thu, 23 Aug 2012 09:34:29 GMT Connection: keep-alive Expires: Thu, 23 Aug 2012 13:11:55 GMT Cache-Control: max-age=360 Accept-Ranges: bytes So I think everything should be set to cache images for 360 seconds (6 minutes) and when image is loaded after that time pass, they have to be pulled from server again, not from cache. What can I do to help those users who has to refresh page manually every time they want to see fresh images? Thanks in advance, Ivan From nginx-forum at nginx.us Thu Aug 23 19:58:01 2012 From: nginx-forum at nginx.us (double) Date: Thu, 23 Aug 2012 15:58:01 -0400 (EDT) Subject: "zero size buf in output" Message-ID: <53cae3512b5fa58ee83b9cc30bcb9e34.NginxMailingListEnglish@forum.nginx.org> Hello, We have strange messages in our error.log - but everything works. We tried to re-produce this in a testing-environment - but we couldn't. 
Thanks a lot Sandy 2012/08/23 19:14:54 [alert] 17017#0: *1378690 zero size buf in output t:0 r:0 f:0 00000000007DD3A0 00000000007DD3A0-00000000007DE6DB 0000000000000000 0-0 while sending to client, client: 1.2.3.4, server: seite.com, request: "GET /pathto/image.jpg HTTP/1.1", upstream: "http://192.168.55.2:80/pathto/image.jpg", host: "www.seite.com", referrer: "http://www.seite.com/" 2012/08/23 19:40:02 [alert] 17017#0: *1383588 zero size buf in output t:0 r:0 f:0 00000000007BB260 00000000007BB260-00000000007C5B75 0000000000000000 0-0 while sending to client, client: 1.2.3.4, server: seite.com, request: "GET /pathto/image.jpg HTTP/1.1", upstream: "http://192.168.55.2:80/pathto/image.jpg", host: "www.seite.com", referrer: "http://www.seite.com/" 2012/08/23 19:49:47 [alert] 17016#0: *1385690 zero size buf in output t:0 r:0 f:0 0000000000725010 0000000000725010-00000000007272B6 0000000000000000 0-0 while sending to client, client: 1.2.3.4, server: seite.com, request: "GET /pathto/image.jpg HTTP/1.1", upstream: "http://192.168.55.2:80/pathto/image.jpg", host: "www.seite.com", referrer: "http://www.seite.com/" 2012/08/23 19:49:48 [alert] 17016#0: *1385689 zero size buf in output t:0 r:0 f:0 0000000000725010 0000000000725010-000000000072A58C 0000000000000000 0-0 while sending to client, client: 1.2.3.4, server: seite.com, request: "GET /pathto/image.jpg HTTP/1.1", upstream: "http://192.168.55.2:80/pathto/image.jpg", host: "www.seite.com", referrer: "http://www.seite.com/" 2012/08/23 19:49:48 [alert] 17016#0: *1385685 zero size buf in output t:0 r:0 f:0 0000000000847910 0000000000847910-0000000000851E0C 0000000000000000 0-0 while sending to client, client: 1.2.3.4, server: seite.com, request: "GET /pathto/image.jpg HTTP/1.1", upstream: "http://192.168.55.2:80/pathto/image.jpg", host: "www.seite.com", referrer: "http://www.seite.com/" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230072,230072#msg-230072 From ne at vbart.ru Thu Aug 23 20:08:52 2012 From: ne at 
vbart.ru (Valentin V. Bartenev) Date: Fri, 24 Aug 2012 00:08:52 +0400 Subject: "zero size buf in output" In-Reply-To: <53cae3512b5fa58ee83b9cc30bcb9e34.NginxMailingListEnglish@forum.nginx.org> References: <53cae3512b5fa58ee83b9cc30bcb9e34.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201208240008.53003.ne@vbart.ru> On Thursday 23 August 2012 23:58:01 double wrote: > Hello, > > We have strange messages in our error.log - but everything works. > We tried to re-produce this in a testing-environment - but we couldn't. [...] Do you use any 3rd-party modules? wbr, Valentin V. Bartenev From cabbar at gmail.com Thu Aug 23 20:17:48 2012 From: cabbar at gmail.com (Cabbar Duzayak) Date: Thu, 23 Aug 2012 23:17:48 +0300 Subject: Weird proxy pass timeout Message-ID: Hi, I have an nginx server proxying traffic to a java app server (they are on separate machines). I will be giving the configuration below, but just to summarize the problem I am getting a "2012/08/23 21:25:09 [error] 1798#0: *61949 upstream timed out (110: Connection timed out) while connecting to upstream, client: ..." after 60 seconds from NGINX couple of times a day. The weird thing is that I check the corresponding log entry on the other side (java app server), and it looks like the request is completed successfully on the java server side and the response was prepared super fast (a few ms) since this was a heavily cached content. BTW, traffic is really really low on both NGINX and java server (like 1 req every 30 seconds or so), so it is not about worker threads, number of processes, or any other congestion related issue, etc... Can you guys please provide any idea about what / how to debug? Since Java side is super fast and I am getting this every once in a while, I am really puzzled by this. Thanks in advance... 
BTW, my nginx configuration: upstream appservers { server X.X.X.X:5000; } location / { proxy_pass http://appservers; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header x-forwarded-proto $scheme; } From nginx-forum at nginx.us Fri Aug 24 01:20:22 2012 From: nginx-forum at nginx.us (guangpigu) Date: Thu, 23 Aug 2012 21:20:22 -0400 (EDT) Subject: help: implicit declaration of function 'pread' 'pwrite' Message-ID: <9223af5e7cc34ec5fcd9842afdbbc5f4.NginxMailingListEnglish@forum.nginx.org> When I try to install nginx 1.2.3 on Debian Squeeze, there is a problem: -o objs/src/os/unix/ngx_files.o \ src/os/unix/ngx_files.c cc1: warnings being treated as errors src/os/unix/ngx_files.c: In function 'ngx_read_file': src/os/unix/ngx_files.c:29: error: implicit declaration of function 'pread' src/os/unix/ngx_files.c: In function 'ngx_write_file': src/os/unix/ngx_files.c:80: error: implicit declaration of function 'pwrite' make[1]: *** [objs/src/os/unix/ngx_files.o] Error 1 make[1]: Leaving directory `/root/nginx-1.2.3' make: *** [build] Error 2 Please help me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230079,230079#msg-230079 From nginx-forum at nginx.us Fri Aug 24 08:31:17 2012 From: nginx-forum at nginx.us (kustodian) Date: Fri, 24 Aug 2012 04:31:17 -0400 (EDT) Subject: Redirect 404 errors to a separate log file Message-ID: I have been trying for a whole day to figure out how to redirect 404 errors to a separate log file. I managed to do this in the server directive: error_page 404 = @404; location @404 { error_log /var/log/nginx/404.log; } This does write all 404 errors to 404.log, but the problem is that it also writes these errors to the default error.log, and I don't want that. I want 404 errors to be written only to the 404.log. Do you have an idea how to do this? 
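If a separate *access* log for 404s would also do the job, later nginx releases (1.7.0 and up, so not the 1.2.x series discussed in this thread) support conditional logging via the `if=` parameter — a sketch, assuming the default combined format:

```nginx
map $status $log_404 {
    404     1;
    default 0;
}

server {
    # the normal access_log stays as-is; 404s additionally
    # land in their own file
    access_log /var/log/nginx/404.log combined if=$log_404;
}
```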
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230087,230087#msg-230087 From nginx-forum at nginx.us Fri Aug 24 12:26:54 2012 From: nginx-forum at nginx.us (double) Date: Fri, 24 Aug 2012 08:26:54 -0400 (EDT) Subject: "zero size buf in output" In-Reply-To: <201208240008.53003.ne@vbart.ru> References: <201208240008.53003.ne@vbart.ru> Message-ID: <1c024be0fb2d53b9ddf8f7782cf3477f.NginxMailingListEnglish@forum.nginx.org> nginx: 1.2.2 Modules: access, map, rewrite, proxy configure arguments: --prefix=/usr/local/nginx --with-pcre --without-http_charset_module --without-http_gzip_module --without-http_ssi_module --without-http_userid_module --without-http_autoindex_module --without-http_geo_module --without-http_split_clients_module --without-http_referer_module --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-http_empty_gif_module --without-http_browser_module --without-http_upstream_ip_hash_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_fastcgi_module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230072,230098#msg-230098 From nginx-forum at nginx.us Fri Aug 24 13:04:13 2012 From: nginx-forum at nginx.us (iqbalmp) Date: Fri, 24 Aug 2012 09:04:13 -0400 (EDT) Subject: CSS 404 Error Message-ID: <36cb46b6f59688950730d92fbdb0a260.NginxMailingListEnglish@forum.nginx.org> Hi, Nginx is giving a '404 Not Found' error when I am trying to access the below url: http://mysite.com/skin/frontend/tester/default/css/styles-ie.css but at the same time I can access a php file from the same location: http://mysite.com/skin/frontend/tester/default/css/test.php Please help. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230099,230099#msg-230099 From nginx-forum at nginx.us Fri Aug 24 13:12:40 2012 From: nginx-forum at nginx.us (kustodian) Date: Fri, 24 Aug 2012 09:12:40 -0400 (EDT) Subject: CSS 404 Error In-Reply-To: <36cb46b6f59688950730d92fbdb0a260.NginxMailingListEnglish@forum.nginx.org> References: <36cb46b6f59688950730d92fbdb0a260.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3f5e01d54610656d7c08aa21fea06f90.NginxMailingListEnglish@forum.nginx.org> Well, you didn't give us anything to work on. First off, does that file exist in the same directory where test.php is? Can you provide your nginx.conf? Maybe you have some redirects for either php files or css files. Are permissions ok? Maybe Nginx cannot read that file? It could be many things, but you need to give us more information if you would like to get some help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230099,230101#msg-230101 From nginx-forum at nginx.us Fri Aug 24 13:19:16 2012 From: nginx-forum at nginx.us (kustodian) Date: Fri, 24 Aug 2012 09:19:16 -0400 (EDT) Subject: index directive not taking effect in case of alias In-Reply-To: <7f8170294f2783239f207fcf50bd8446.NginxMailingListEnglish@forum.nginx.org> References: <7f8170294f2783239f207fcf50bd8446.NginxMailingListEnglish@forum.nginx.org> Message-ID: You could also define index inside the server block, outside the location block: server { index index.php; location ... location ... 
} Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229985,230102#msg-230102 From nginx-forum at nginx.us Fri Aug 24 13:44:20 2012 From: nginx-forum at nginx.us (iqbalmp) Date: Fri, 24 Aug 2012 09:44:20 -0400 (EDT) Subject: CSS 404 Error In-Reply-To: <3f5e01d54610656d7c08aa21fea06f90.NginxMailingListEnglish@forum.nginx.org> References: <36cb46b6f59688950730d92fbdb0a260.NginxMailingListEnglish@forum.nginx.org> <3f5e01d54610656d7c08aa21fea06f90.NginxMailingListEnglish@forum.nginx.org> Message-ID: <76a97a94d2cfe3c5f47fde4581bfdca4.NginxMailingListEnglish@forum.nginx.org> kustodian, Thank you for your fast reply. I have resolved this by editing one of my 'location' blocks in my nginx.conf file: .............. location / { root /var/www/html; index index.php index.html index.htm; fastcgi_param REDIRECT_STATUS 200; } ..................... Previously this was '/usr/share/nginx/html', but it was '/var/www/html' in all the other 'location' blocks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230099,230103#msg-230103 From nginx-forum at nginx.us Fri Aug 24 14:56:22 2012 From: nginx-forum at nginx.us (iqbalmp) Date: Fri, 24 Aug 2012 10:56:22 -0400 (EDT) Subject: htaccess Message-ID: <280073048ea9e4d21494c762df64384f.NginxMailingListEnglish@forum.nginx.org> Hi, I am pasting part of my .htaccess file used in Apache; please help me to convert this to nginx configuration settings: ....................................... 
############################################ ## enable rewrites Options +FollowSymLinks RewriteEngine on ############################################ ## you can put here your magento root folder ## path relative to web root #RewriteBase /magento/ ############################################ ## uncomment next line to enable light API calls processing # RewriteRule ^api/([a-z][0-9a-z_]+)/?$ api.php?type=$1 [QSA,L] ############################################ ## rewrite API2 calls to api.php (by now it is REST only) RewriteRule ^api/rest api.php?type=rest [QSA,L] ############################################ ## workaround for HTTP authorization ## in CGI environment RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}] ############################################ ## TRACE and TRACK HTTP methods disabled to prevent XSS attacks RewriteCond %{REQUEST_METHOD} ^TRAC[EK] RewriteRule .* - [L,R=405] ############################################ ## redirect for mobile user agents #RewriteCond %{REQUEST_URI} !^/mobiledirectoryhere/.*$ #RewriteCond %{HTTP_USER_AGENT} "android|blackberry|ipad|iphone|ipod|iemobile|opera mobile|palmos|webos|googlebot-mobile" [NC] #RewriteRule ^(.*)$ /mobiledirectoryhere/ [L,R=302] ############################################ ## always send 404 on missing files in these folders RewriteCond %{REQUEST_URI} !^/(media|skin|js)/ ############################################ ## never rewrite for existing files, directories and links RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-l ############################################ ## rewrite everything else to index.php RewriteRule .* index.php [L] .................................................... Thank You. 
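The heart of that .htaccess — serve existing files, 404 for missing files under /media, /skin and /js, and route everything else to index.php — maps fairly directly onto try_files in nginx. A rough, untested sketch (root path and other details assumed, not a complete conversion):

```nginx
server {
    root /var/www/html;
    index index.php;

    # TRACE/TRACK disabled (the XSS-hardening block)
    if ($request_method ~ ^(TRACE|TRACK)$) {
        return 405;
    }

    # REST API calls go to api.php
    rewrite ^/api/rest /api.php?type=rest last;

    # always 404 on missing files in these folders
    location ~ ^/(media|skin|js)/ {
        try_files $uri $uri/ =404;
    }

    # everything else: existing file, directory, or index.php
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
}
```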
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230106,230106#msg-230106 From cickumqt at gmail.com Fri Aug 24 15:12:35 2012 From: cickumqt at gmail.com (Christopher Meng) Date: Fri, 24 Aug 2012 23:12:35 +0800 Subject: htaccess In-Reply-To: <280073048ea9e4d21494c762df64384f.NginxMailingListEnglish@forum.nginx.org> References: <280073048ea9e4d21494c762df64384f.NginxMailingListEnglish@forum.nginx.org> Message-ID: How about taking a look at http://winginx.com/htaccess. ? -- *Yours sincerely,* *Christopher Meng* Ambassador/Contributor of Fedora Project and many others. http://cicku.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Aug 24 15:21:25 2012 From: nginx-forum at nginx.us (iqbalmp) Date: Fri, 24 Aug 2012 11:21:25 -0400 (EDT) Subject: htaccess In-Reply-To: References: Message-ID: <744ddc4d75e45a0e8a086affb47cafa4.NginxMailingListEnglish@forum.nginx.org> http://winginx.com/htaccess resulted : ===================== # nginx configuration index index.php; charset off; location ~ /(media|skin|js)/ { } location /api { rewrite ^/api/rest /api.php?type=rest break; } location / { if (!-e $request_filename){ rewrite ^(.*)$ /index.php break; } } location /RELEASE_NOTES.txt { deny all; } ================ But I already have a 'location' settings in my nginx.conf: ======================== location / { root /var/www/html; index index.php index.html index.htm; fastcgi_param REDIRECT_STATUS 200; } =================== how can i merge both of this? 
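One way to merge the generated snippet with an existing setup is to hoist root and index up to the server level, keep a single location /, and hand PHP off to one fastcgi location — a sketch only, with the fastcgi_pass backend assumed:

```nginx
server {
    root /var/www/html;
    index index.php index.html index.htm;

    location ~ /(media|skin|js)/ { }

    location /api {
        rewrite ^/api/rest /api.php?type=rest break;
    }

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param REDIRECT_STATUS 200;
        fastcgi_pass 127.0.0.1:9000;  # assumed PHP-FPM address
    }
}
```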
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230106,230108#msg-230108 From jdrukman at gmail.com Fri Aug 24 20:40:27 2012 From: jdrukman at gmail.com (Jon Drukman) Date: Fri, 24 Aug 2012 13:40:27 -0700 Subject: Proxy pass confusion Message-ID: Here is my server setup: Amazon load balancer (loadbalancer.aws.com) -> nginx servers in reverse-proxy/caching mode -> back end PHP servers If I visit the nginx server directly in my browser, everything works perfectly. If I visit loadbalancer.aws.com, the nginx server redirects me to http://decupstream, which is what I named the upstream block in my configuration. 
The nginx.conf looks like: upstream decupstream { server 10.167.1.50:8080; server 10.160.242.232:8080; server 10.222.218.126:8080; } proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=gop2012:10m; server { listen 80; server_name gop-proxy.mycompany.com; location / { proxy_pass http://decupstream; proxy_cache gop; proxy_cache_valid 10m; proxy_cache_use_stale error timeout invalid_header http_500; proxy_next_upstream error timeout invalid_header http_500; proxy_cache_lock on; proxy_ignore_headers Cache-Control Expires; proxy_set_header Host "gop.mycompany.com"; } } How do I get nginx to forward requests to the Apache/PHP servers even if they don't come in directly to the nginx box? -jsd- -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Aug 24 22:00:04 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 24 Aug 2012 23:00:04 +0100 Subject: Proxy pass confusion In-Reply-To: References: Message-ID: <20120824220004.GY32371@craic.sysops.org> On Fri, Aug 24, 2012 at 01:40:27PM -0700, Jon Drukman wrote: Hi there, > Amazon load balancer (loadbalancer.aws.com) -> nginx servers in > reverse-proxy/caching mode -> back end PHP servers > > If I visit the nginx server directly in my browser, everything works > perfectly. > > If I visit loadbalancer.aws.com, the nginx server redirects me to > http://decupstream, which is what I named the upstream block in my > configuration. Are you sure that it is nginx redirecting you, and not the back-end servers? Look at the headers from a "curl -i" of a request that fails, and see is there any indication that it came from the back-end. Maybe compare that with the headers from the same request that succeeds when you visit the nginx server directly. The only way I can think this would happen would be if you didn't have the "proxy_set_header Host" line in the actually-running configuration. The nginx.conf fragment you've included fails to load. 
("proxy_cache" zone "gop" is unknown.) Can you confirm that "proxy_set_header Host" is included in the working one, and provide a minimal fragment that demonstrates the problem? For example, if you omit all of the proxy_cache-related directives, do you still see the problem? If so, then you've found a simpler test case. (And if not, then maybe the cache configuration should be examined more closely.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Aug 24 22:29:45 2012 From: nginx-forum at nginx.us (Krishna Guda) Date: Fri, 24 Aug 2012 18:29:45 -0400 (EDT) Subject: can proxy use resolver to forward traffic to https sites? Message-ID: It works when it happens to be http traffic. Look at the proxy_pass directive lines: proxy_pass http://$http_host$uri$is_args$args; // Doesn't work proxy_pass http://$http_host$uri$is_args$args; // Works server { listen 8080; location / { resolver 8.8.8.8; proxy_pass http://$http_host$uri$is_args$args; // Doesn't work proxy_pass http://$http_host$uri$is_args$args; // Works } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230113,230113#msg-230113 From nginx-forum at nginx.us Sat Aug 25 03:09:35 2012 From: nginx-forum at nginx.us (Krishna Guda) Date: Fri, 24 Aug 2012 23:09:35 -0400 (EDT) Subject: proxy cannot forward https traffic with resolver? Message-ID: <288c8f08dfc82be04ee79391eebca2fc.NginxMailingListEnglish@forum.nginx.org> Sorry, there was a typo in the previous question. I couldn't delete the previous one. Hence posting again. =================================================================================== When it happens to be http traffic the proxy server works, but it doesn't work for https. I pasted both of the server configurations below. Please look at the proxy_pass directive.
server { listen 8080; location / { resolver 8.8.8.8; proxy_pass https://$http_host$uri$is_args$args; // Works } server { listen 8090; location / { resolver 8.8.8.8; proxy_pass https://$http_host$uri$is_args$args; // Doesn't work } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230114,230114#msg-230114 From nginx-forum at nginx.us Sat Aug 25 06:32:02 2012 From: nginx-forum at nginx.us (soniclee) Date: Sat, 25 Aug 2012 02:32:02 -0400 (EDT) Subject: Gzip not compressing response body with status code other than 200, 403, 404 In-Reply-To: References: Message-ID: <877016e0578575554305828cedbaa7a7.NginxMailingListEnglish@forum.nginx.org> I met the same problem while trying to gzip responses with other status codes. Any update on this issue? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,203569,230115#msg-230115 From nginx-forum at nginx.us Sat Aug 25 20:17:09 2012 From: nginx-forum at nginx.us (eiji-gravion) Date: Sat, 25 Aug 2012 16:17:09 -0400 (EDT) Subject: SPDY slower than normal HTTPS Message-ID: Hello, Is anyone else noticing that not only is nginx 1.3.5 slower than 1.2.3, but that SPDY is making it even slower than non-SPDY HTTPS? On a page refresh I'm seeing 428ms for a large forum page via non-SPDY HTTPS and close to 900ms when using SPDY. Also, even using plain HTTP, nginx 1.3.5 loads it at 428ms, while 1.2.3 loads it at 328ms. I'm also able to reproduce this on ANY nginx site that has SPDY enabled. Very odd. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230118,230118#msg-230118 From nginx-forum at nginx.us Sat Aug 25 20:35:06 2012 From: nginx-forum at nginx.us (eiji-gravion) Date: Sat, 25 Aug 2012 16:35:06 -0400 (EDT) Subject: SPDY slower than normal HTTPS In-Reply-To: References: Message-ID: Just wanted to make it clear that this appears to happen on refreshes, not initial loads.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230118,230119#msg-230119 From ianevans at digitalhit.com Sun Aug 26 18:40:40 2012 From: ianevans at digitalhit.com (Ian M. Evans) Date: Sun, 26 Aug 2012 14:40:40 -0400 Subject: Nginx location rule for Wordpress Multisite in subdirectories Message-ID: Hi everyone! I currently have a wordpress blog located at blogname and, as per threads here, I had the following in my nginx.conf: location ^~ /blogname { try_files $uri /blogname/index.php?q=$uri; location ~ \.php$ { fastcgi_pass 127.0.0.1:10004; } } I'm actually adding a few more blogs and will be moving to the multisite options in wordpress and running the various blogs in subdirectories under /blogs, e.g.: /blogs/blog1 /blogs/blog2 etc. I need to be able to catch the above in either /blogs or its subdirs, since WP installs a main blog in the /blogs dir, i.e.: location ^~ /blogs { try_files $uri /blogs/index.php?q=$uri; and location ^~ /blogs/blog1 { try_files $uri /blogs/blog1/index.php?q=$uri; I realize I could create locations specific to each dir, but is there a wildcard way to do this? Thanks. p.s.
as I've said before, my nickname should be "Sucks at regex" From hagaizzz at yahoo.com Sun Aug 26 19:47:43 2012 From: hagaizzz at yahoo.com (hagai avrahami) Date: Sun, 26 Aug 2012 12:47:43 -0700 (PDT) Subject: Tuning workers and connections In-Reply-To: <1bb22d04200e00c8cb502dd6daff02ab.NginxMailingListEnglish@forum.nginx.org> References: <33c66c80907011823r7c6224e8w2945765c150f1f55@forum.nginx.org> <1bb22d04200e00c8cb502dd6daff02ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1346010463.81160.YahooMailNeo@web164501.mail.gq1.yahoo.com> Hi, 3, 9 and 21 seconds sound like TCP retransmission timeouts: http://my.opera.com/cstrep/blog/2011/07/23/how-to-detect-tcp-retransmit-timeouts-in-your-network It sounds like the TCP listen backlog queue is too small and SYN packets incoming from the client are being dropped. You can set the maximum for all TCP sockets by configuring sysctl: net.ipv4.tcp_max_syn_backlog = 10000 And add to the Nginx configuration (the Nginx default value is 511): listen 80 default backlog=10000; I hope it helps. Hagai Avrahami >________________________________ > From: liangrubo >To: nginx at nginx.org >Sent: Tuesday, August 21, 2012 10:23 AM >Subject: Re: Tuning workers and connections > >Hello, we meet similar problem. > >if we set "multi_accept off;" and "accept_mutex on;", it sometimes took 3 or >9 or even 21 seconds to connect to nginx on port 80 from the same server. if >we set "multi_accept on;" and "accept_mutex off;", we can connect to nginx >on port 80 instantly but it may take long to get the response (more than 3 >seconds for small css files on the same server). > >Our deploy structure is as follows: >frontend nginx listen on port 80, servering static files. For dynamic >requests, nginx on port 80 reverse proxy to nginx on port 81 via tcp socket, >nginx on port 81 again reverse proxy to uwsgi via unix sokcet. > >the site is handling about 1200 requests per second. > >the server has 24 cpu cores and 96G memory.
the system has low load, lots >of free memory and no IO bottleneck. > >some of the nginx configurations are as follows: >worker_processes 8; //it was originally 4, I doubled it but it didn't help >worker_rlimit_nofile 15240; >worker_connections 15240; > >use epoll; > >we have keepalive setup for upstream(80 to 81 reverse proxy) as follows: >keepalive 32; > > >information shown by status_sub: >Active connections: 4320 >server accepts handled requests >9012300 9012300 37349248 >Reading: 118 Writing: 199 Waiting: 4003 > >we verified that the dynamic request handling is fast, and even if it were slow, it >should not affect static file serving on port 80 anyway, right? > >it seems nginx can't process the requests fast enough but we can't find what >is the bottleneck. > >any help is greatly appreciated. > >Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3638,229939#msg-229939 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 27 10:05:39 2012 From: nginx-forum at nginx.us (iqbalmp) Date: Mon, 27 Aug 2012 06:05:39 -0400 (EDT) Subject: APC with Nginx Message-ID: Hi, I have installed APC on my server (CentOS) using 'pecl install apc'; nginx is also there. How can I configure APC for nginx? Please help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230137,230137#msg-230137 From anoopalias01 at gmail.com Mon Aug 27 10:23:17 2012 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 27 Aug 2012 15:53:17 +0530 Subject: APC with Nginx In-Reply-To: References: Message-ID: It should be configured with PHP, not nginx. On Mon, Aug 27, 2012 at 3:35 PM, iqbalmp wrote: > Hi, > > I have installed APC in my server (CentOS) using 'pecl install apc'; also > nginx is there. > How can configure APC for nginx? > > Please help.
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,230137,230137#msg-230137 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Anoop P Alias (PGP Key ID : 0x014F9953) GNU system administrator http://UniversalAdm.in -------------- next part -------------- An HTML attachment was scrubbed... URL: From smallfish.xy at gmail.com Mon Aug 27 10:26:05 2012 From: smallfish.xy at gmail.com (smallfish) Date: Mon, 27 Aug 2012 18:26:05 +0800 Subject: APC with Nginx In-Reply-To: References: Message-ID: APC is just for PHP, not for nginx. Configure it in your php.ini file. -- blog: http://chenxiaoyu.org On Mon, Aug 27, 2012 at 6:05 PM, iqbalmp wrote: > Hi, > > I have installed APC in my server (CentOS) using 'pecl install apc'; also > nginx is there. > How can configure APC for nginx? > > Please help. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,230137,230137#msg-230137 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Aug 27 12:46:43 2012 From: nginx-forum at nginx.us (dantes) Date: Mon, 27 Aug 2012 08:46:43 -0400 (EDT) Subject: Nginx won't send FIN ACK to PHP-FPM Message-ID: <09a94de13d22ac34f6c4ab5eaf2a0568.NginxMailingListEnglish@forum.nginx.org> Hey, I have a REST API server powered by Nginx and PHP-FPM. Each API call produces several cURL requests. The script that executes the API calls also utilizes cURL. I use curl multi exec, with 1,000 threads. The whole setup creates a little bit less than 10K sockets. Anyway, here is the problem... The script executes the API calls, let's say 3,000 on 1K threads. It takes around 170 secs to finish processing all those API calls.
However, when the main script tries to return a response to the browser and initiate a FIN_WAIT, it takes 8 minutes until everything is returned to the browser. Here is how it looks: http://d.pr/i/1WkI My theory is that it happens because Nginx somehow puts the worker on hold or something like that. This was the first worker in the chain of workers. There is no data being transferred since the worker initiated the request to PHP-FPM. Any ideas what I should do? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230145,230145#msg-230145 From nginx-forum at nginx.us Mon Aug 27 12:49:09 2012 From: nginx-forum at nginx.us (iqbalmp) Date: Mon, 27 Aug 2012 08:49:09 -0400 (EDT) Subject: APC with Nginx In-Reply-To: References: Message-ID: Thank you Anoop Alias and smallfish, I have done it through my PHP configuration. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230137,230146#msg-230146 From nginx-forum at nginx.us Mon Aug 27 13:58:19 2012 From: nginx-forum at nginx.us (Varix) Date: Mon, 27 Aug 2012 09:58:19 -0400 (EDT) Subject: Problem with own error pages in nginx 1.3.5 Message-ID: <771cdbfda3f1e8311a13b9a5f6beaa7a.NginxMailingListEnglish@forum.nginx.org> I have a problem with my own error pages with nginx 1.3.5. [CODE]... error_page 401 402 403 404 /40x.html; location = /40x.html { root /www/default/html; } ...[/CODE] With Firefox all is OK. With IE 8 and IE 9 the browser is unable to show me the error-site from the webserver. The error-site from the IE browser is shown. With [code]... error_page 401 402 403 404 /40x.html; location = /40x.html { root /html; } ...[/code] the browser shows me the error-site from nginx. I change the configuration in nginx to [code]... error_page 401 402 403 404 /40x.html; location = /40x.html { root /www/default/html; } ...[/code] and restart nginx. Then I press F5 and the browser shows my error-site. When I call the website again with the browser, the browser is unable to show the error-site from the webserver.
The browser error-page is shown again. Varix Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230147,230147#msg-230147 From francis at daoine.org Mon Aug 27 20:47:38 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 27 Aug 2012 21:47:38 +0100 Subject: Nginx location rule for Wordpress Multisite in subdirectories In-Reply-To: References: Message-ID: <20120827204738.GD32371@craic.sysops.org> On Sun, Aug 26, 2012 at 02:40:40PM -0400, Ian M. Evans wrote: Hi there, > I need to be able to catch the above in either /blogs or its subdirs, since > WP installs a main blog in the /blogs dir i.e.: > > location ^~ /blogs { > try_files $uri /blogs/index.php?q=$uri; > > and > > location ^~ /blogs/blog1 { > try_files $uri /blogs/blog1/index.php?q=$uri; > > > I realize I could create locations specific to each dir, but is there a > wildcard way to do this? nginx locations can be "regex" or "prefix". If you have a url prefix which matches all of the subdir blogs, and none of the non-subdir blogs, then something like the following might work (I assume that all subdir blogs start with /blogs/blog; and therefore that everything that does not start with that pattern belongs to the /blogs location, which remains as-is.): == if ($uri ~ (/blogs/[^/]*)) { set $blogname $1; } include fastcgi.conf; location ^~ /blogs/blog { try_files $uri $blogname/index.php?q=$uri; location ~ php { fastcgi_pass 127.0.0.1:10004; } } == If you haven't got a clear prefix separation for the blogs, then you might want to try a regex location; or perhaps something involving "map", which will (probably) require enumerating all of the blogs.
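For reference, the regex-location variant mentioned here could look like the sketch below. The named capture holds the matched blog prefix (e.g. /blogs/blog1); the fastcgi_pass address is carried over from the original post, and the assumption that every subdirectory blog starts with /blogs/blog still applies:

```nginx
# One regex location covering all subdirectory blogs:
# /blogs/blog1, /blogs/blog2, ...
location ~ ^(?<blogroot>/blogs/blog[^/]*) {
    try_files $uri $blogroot/index.php?q=$uri;

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_pass 127.0.0.1:10004;
    }
}
```

Note that regex locations are checked in order of appearance, so this block would need to come before any other regex location (such as a site-wide \.php$ handler) that could otherwise match these URIs first.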
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Aug 27 21:42:36 2012 From: nginx-forum at nginx.us (ypeskin) Date: Mon, 27 Aug 2012 17:42:36 -0400 (EDT) Subject: jQuery-File-upload and nginx Message-ID: <5a2d0214c3e6507e205d37b6b73d32c3.NginxMailingListEnglish@forum.nginx.org> I have a problem setting up jQuery-File-Upload plugin and nginx. I've downloaded and installed nginx 1.2.3, and upload module 2.2.0 from source, here's my config file (names have been changed to protect the innocent): upstream web_backend { server 4dc1.domain.com:8080 weight=10 max_fails=3; server 4dc1.domain.com:8081 weight=10 max_fails=3; server 4dc2.domain.com:8080 weight=10 max_fails=3; server 4dc2.domain.com:8081 weight=10 max_fails=3; } server { listen 80; listen 8080; server_name domain.com www.domain.com; error_log /var/www/vhosts/domain.com/statistics/logs/error_log.nginx warn; access_log /var/www/vhosts/domain.com/statistics/logs/access_log.nginx main buffer=32k; root /var/www/vhosts/domain.com/httpdocs/website; # auth_basic "Restricted"; # auth_basic_user_file .htpasswd; index index.html; location / { expires 7d; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www/vhosts/domain.com/httpdocs/website; } location = /404.html { root /var/www/vhosts/domain.com/httpdocs/website; } location /img/ { # auth_basic off; expires 10m; } location ~* ^.+\.(jpg|jpeg|gif|png|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|wav|bmp|rtf|js|ico|swf|mov)$ { expires 7d; } location /jsonrpc/ { proxy_pass http://web_backend$request_uri; include /etc/nginx/proxy.conf; } location /upload/ { upload_pass @test; upload_store /path/to/file/store/ 1; upload_store_access user:rw group:rw; upload_set_form_field $upload_field_name.name "$upload_file_name"; upload_set_form_field $upload_field_name.content_type "$upload_content_type"; upload_set_form_field $upload_field_name.path "$upload_tmp_path"; upload_aggregate_form_field 
"$upload_field_name.md5" "$upload_file_md5"; upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size"; upload_pass_form_field "^submit$ | ^description$"; upload_cleanup 400 404 499 500-505; } location @test { proxy_pass http://localhost:8080; } } The html file contains a form with method set to "POST", hence the upload module. Here are the relevant bits of the html file:
[The form's HTML markup was scrubbed by the list archiver; only an "Add files..." button label and a file input field remained.]
I keep getting a 405 Not Allowed error. How can I get jQuery-File-Upload and nginx to work together? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230157,230157#msg-230157 From nginx-forum at nginx.us Tue Aug 28 01:29:47 2012 From: nginx-forum at nginx.us (wideawake) Date: Mon, 27 Aug 2012 21:29:47 -0400 (EDT) Subject: X-accel-redirect serving html download page instead of file. Message-ID: I have a script that uses x-accel-redirect on an nginx server to send large files (150mb+). The script serves downloads fine as long as there is a space in the name of the directory, but not when the directory is a single word. Anyone know why it would be doing so when the file exists and the path is correct? Layout: /Dir/file.exe /Dir2/file.exe /Another Dir/file.exe If it's /Another Dir/file.exe it serves the file perfectly; however, if it's /Dir/file.exe it serves the html of the download page. Anyone seen this before? nginx config: server { access_log logs/access_log; access_log on; limit_conn gulag 3; error_log /var/log/nginx/vhost-error_log crit; listen 80 default; server_name www.my.com my.com; root /home/mysite/public_html; #root /usr/share/nginx/html; autoindex off; index index.php index.html index.htm; #rewrite ^.*/([-\w\.]+)/([-\w\.]+)\.zip$ /download.php?model=$1&file=$2.zip last; #rewrite ^.*/([-\w\.]+)/([-\w\.]+)\.exe$ /download.php?model=$1&file=$2.exe last; #rewrite ^.*/([-\w\.]+)/([-\w\.]+)\.nbh$ /download.php?model=$1&file=$2.nbh last; #rewrite ^.*/([-\w\.]+)/([-\w\.]+)\.rar$ /download.php?model=$1&file=$2.rar last; error_page 503 /503.html; location = /503.html { root /home/mysite/public_html/errors; } error_page 504 /504.html; location = /504.html { root /home/mysite/public_html/errors; } # pass the PHP scripts to FastCGI server location ~ \.php$ { fastcgi_pass 127.0.0.1:8888; #fastcgi_pass unix:/var/run/nginx-fcgi.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/mysite/public_html$fastcgi_script_name; include
/etc/nginx/fastcgi_params; fastcgi_intercept_errors off; } } php code: "; } if (! isset($_GET["model"])) { $error=TRUE; $msg .= "No Model information present. Unable to continue.
"; } else { $fileserver_path .= $model."/"; $fileurlpath .= $model."/"; /* check if file exists */ if (!file_exists($fileserver_path.$req_file)) { $error=TRUE; $msg .= "$req_file doesn't exist."; } } /* no web spamming */ if (!preg_match("/^[a-zA-Z0-9._-]+$/", $req_file, $matches)) { $error=TRUE; $msg .= "Spamming! I can't do that. Sorry."; } if(! ($error)) { //Hotlink Code if(eregi($_SERVER["HTTP_HOST"], str_replace("www.", "", strtolower($_SERVER["HTTP_REFERER"])))) { if (! isset($_GET['send_file'])) { Header("Refresh: 5; url=http://my.com/$thisfile?model=$model&file=$req_file&send_file=yes"); } else { header("Cache-Control: public"); header('Content-Description: File Transfer'); header("Content-type: application/octet-stream"); //header('Content-Type: application/force-download'); //header('Content-Length: ' . filesize($fileserver_path.$req_file)); header('Content-Disposition: attachment; filename=' . $req_file); header("Content-Transfer-Encoding: binary"); header("X-Accel-Redirect: /shipped/" . $path); exit; } //More Hotlink Code } else { Header("Refresh: ; url=http://my.com/$thisfile?model=$model&file=$req_file"); } //End Hotlink } ?> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230161,230161#msg-230161 From jdorfman at netdna.com Tue Aug 28 01:56:32 2012 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 27 Aug 2012 18:56:32 -0700 Subject: X-accel-redirect serving html download page instead of file. In-Reply-To: References: Message-ID: Hey, Try adding this to your nginx conf where applicable: add_header Content-Disposition attachment; Regards, Justin Dorfman NetDNA ? The Science of Acceleration? On Mon, Aug 27, 2012 at 6:29 PM, wideawake wrote: > I have a script that uses x-accel-redirect on a nginx server to send large > files (150mb+). The script serves downloads fine as long as their is a > space > in the name of the directory but not when the directory is a single word. 
> Anyone know why it would be doing so when the file exists and the path is > correct? > > Layout: /Dir/file.exe /Dir2/file.exe /Another Dir/file.exe > > If it's /Another Dir/file.exe it serves the file perfectly, however if its > /Dir/file.exe it serves the html of the download page. Anyone seen this > before? > > nginx config: > server { > access_log logs/access_log; > access_log on; > limit_conn gulag 3; > error_log /var/log/nginx/vhost-error_log crit; > listen 80 default; > server_name www.my.com my.com; > root /home/mysite/public_html; > #root /usr/share/nginx/html; > autoindex off; > index index.php index.html index.htm; > > > > #rewrite ^.*/([-\w\.]+)/([-\w\.]+)\.zip$ > /download.php?model=$1&file=$2.zip last; > #rewrite ^.*/([-\w\.]+)/([-\w\.]+)\.exe$ > /download.php?model=$1&file=$2.exe last; > #rewrite ^.*/([-\w\.]+)/([-\w\.]+)\.nbh$ > /download.php?model=$1&file=$2.nbh last; > #rewrite ^.*/([-\w\.]+)/([-\w\.]+)\.rar$ > /download.php?model=$1&file=$2.rar last; > > error_page 503 /503.html; > location = /503.html { > root /home/mysite/public_html/errors; > } > > > error_page 504 /504.html; > location = /504.html { > root /home/mysite/public_html/errors; > } > > > # pass the PHP scripts to FastCGI server > location ~ \.php$ { > fastcgi_pass 127.0.0.1:8888; > #fastcgi_pass unix:/var/run/nginx-fcgi.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /home/mysite/public_html$fastcgi_script_name; > include /etc/nginx/fastcgi_params; > > fastcgi_intercept_errors off; > } > > > } > > php code: > /*File Download Script*/ > $fileserver_path = "./shipped/"; // change this to the directory > your files > reside > $fileurlpath = "http://my.com/shipped/"; > $req_file = basename($_GET['file']); > $model = $_GET['model']; > $thisfile = basename(__FILE__); > $msg=""; > $error=FALSE; > > $path = $model."/".$req_file; > > > if (empty($req_file)) { > $error=TRUE; > $msg .= "Usage: $thisfile?model=<model > name>&file=<file_to_download>
"; > } > > if (! isset($_GET["model"])) { > $error=TRUE; > $msg .= "No Model information present. Unable to continue.
"; > } else { > $fileserver_path .= $model."/"; > $fileurlpath .= $model."/"; > /* check if file exists */ > if (!file_exists($fileserver_path.$req_file)) { > $error=TRUE; > $msg .= "$req_file doesn't exist."; > } > } > > /* no web spamming */ > if (!preg_match("/^[a-zA-Z0-9._-]+$/", $req_file, $matches)) { > $error=TRUE; > $msg .= "Spamming! I can't do that. Sorry."; > } > > if(! ($error)) { > //Hotlink Code > if(eregi($_SERVER["HTTP_HOST"], str_replace("www.", "", > strtolower($_SERVER["HTTP_REFERER"])))) > { > if (! isset($_GET['send_file'])) { > > Header("Refresh: 5; > url=http://my.com/$thisfile?model=$model&file=$req_file&send_file=yes"); > > } > else { > > header("Cache-Control: public"); > header('Content-Description: File Transfer'); > header("Content-type: application/octet-stream"); > //header('Content-Type: application/force-download'); > //header('Content-Length: ' . > filesize($fileserver_path.$req_file)); > header('Content-Disposition: attachment; filename=' . > $req_file); > header("Content-Transfer-Encoding: binary"); > header("X-Accel-Redirect: /shipped/" . $path); > > exit; > > } > //More Hotlink Code > } else { > Header("Refresh: ; > url=http://my.com/$thisfile?model=$model&file=$req_file"); > } > //End Hotlink > } > ?> > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,230161,230161#msg-230161 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Aug 28 02:05:28 2012 From: nginx-forum at nginx.us (wideawake) Date: Mon, 27 Aug 2012 22:05:28 -0400 (EDT) Subject: X-accel-redirect serving html download page instead of file. 
In-Reply-To: References: Message-ID: <9c287e7f2664424e2e4ca84d246d642c.NginxMailingListEnglish@forum.nginx.org> I tried adding what you suggested like so location = /shipped { add_header Content-Disposition attachment; root /home/mysite/public_html/shipped; internal; } and by just adding the line to the site config. Neither of which worked unfortunately. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230161,230163#msg-230163 From jdorfman at netdna.com Tue Aug 28 02:31:10 2012 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 27 Aug 2012 19:31:10 -0700 Subject: X-accel-redirect serving html download page instead of file. In-Reply-To: <9c287e7f2664424e2e4ca84d246d642c.NginxMailingListEnglish@forum.nginx.org> References: <9c287e7f2664424e2e4ca84d246d642c.NginxMailingListEnglish@forum.nginx.org> Message-ID: I checked my configs and add_header is after the root directive like so: location = /shipped { root /home/mysite/public_html/shipped; internal; add_header Content-Disposition attachment; } Not sure if order matters or if it should be above or below the internal directive. I know Maxim and Piotr S. will know ;) Regards, Justin Dorfman NetDNA - The Science of Acceleration On Mon, Aug 27, 2012 at 7:05 PM, wideawake wrote: > I tried adding what you suggested like so > > location = /shipped { > add_header Content-Disposition attachment; > root /home/mysite/public_html/shipped; > internal; > } > > and by just adding the line to the site config. Neither of which worked > unfortunately. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,230161,230163#msg-230163 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Tue Aug 28 03:02:33 2012 From: nginx-forum at nginx.us (wideawake) Date: Mon, 27 Aug 2012 23:02:33 -0400 (EDT) Subject: X-accel-redirect serving html download page instead of file. In-Reply-To: References: Message-ID: <1bc27a56c9d4517afcd9ea1a120dcd3c.NginxMailingListEnglish@forum.nginx.org> I noticed that if I use this in my config then shipped/Dir 1/file.exe returns "file not found" in the browser, and shipped/dir/file.exe still returns the proper file extension (exe, etc) but contains the html content of the download page. location = /files { root /home/mysite/public_html/shipped; internal; add_header Content-Disposition attachment; } header("X-Accel-Redirect: /files/" . $path); While removing that and using "header("X-Accel-Redirect: /shipped/" . $path);" the actual file location works for shipped/Dir 1/file.exe but not for shipped/dir/file.exe. I'm at a loss as to why it's not working properly. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230161,230165#msg-230165 From nginx-forum at nginx.us Tue Aug 28 04:33:06 2012 From: nginx-forum at nginx.us (bompus) Date: Tue, 28 Aug 2012 00:33:06 -0400 (EDT) Subject: Issue with SNI/SSL and default_server In-Reply-To: <503385FD.8070903@seld.be> References: <503385FD.8070903@seld.be> Message-ID: I've had the same issues and did some testing. The following causes the issue where the SSL certificate that is defined in the default_server block is being sent for requests that end up in another server block that has a different ssl_certificate defined. This only happens when adding the IP address as server_name.
Example of issue: server { listen 443 default_server ssl; server_name _; ssl_certificate /usr/local/nginx/conf/ssl/default.crt; ssl_certificate_key /usr/local/nginx/conf/ssl/default.key; location / { return 403; } } server { listen 443 ssl; server_name 1.2.3.4; ssl_certificate /usr/local/nginx/conf/ssl/1.2.3.4.crt; ssl_certificate_key /usr/local/nginx/conf/ssl/1.2.3.4.key; location /test { return 401;} } When I access https://1.2.3.4/test , I receive a 401 error as expected, but the SSL certificate being sent is the one defined in default.crt Working: server { listen 443 ssl; server_name test.hostname.com; ssl_certificate /usr/local/nginx/conf/ssl/1.2.3.4.crt; ssl_certificate_key /usr/local/nginx/conf/ssl/1.2.3.4.key; location /test { return 401;} } Now when accessing test.hostname.com which is an A record to 1.2.3.4 , I get served the correct certificate as defined in 1.2.3.4 -- I've tested this multiple times on Ubuntu 12.04 w/ nginx as configured: nginx version: nginx/1.2.3 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled configure arguments: --with-http_ssl_module --user=nobody --group=nobody Can anybody test and confirm this besides us? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229949,230168#msg-230168 From goltus at gmail.com Tue Aug 28 04:39:55 2012 From: goltus at gmail.com (Nikhil Bhardwaj) Date: Mon, 27 Aug 2012 21:39:55 -0700 Subject: Issue with multiple location blocks to handle two webapps Message-ID: Hi All, My webapp is a combination of two apps running on one server. the community app (smaller one) is located under community directory and the all urls starting with community should be handled by it e.g. 
example.com/community/questions, example.com/community/2/an-actual-question, etc. The main app takes care of the rest of the URLs, such as example.com (which should redirect to example.com/city), example.com/city/sampledir, etc. The location blocks I have used are ---------------------- #handle community section separately location ^~ /*community*/ { try_files $uri $uri/ /index.php?qa-rewrite=$uri&$args; location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; access_log off; } } location =/ { #convert domain to domain/city rewrite ^(.*)$ $scheme://example.com/city permanent; } #these locations always use /city as prefix location ~^/(biz|biz/|profiles|profiles/) { rewrite ^(.*)$ $scheme://example.com/city$1 permanent; } #main app should handle everything except community location* /* { #remove trailing slash rewrite ^/(.*)/$ /$1 permanent; #NB this will remove the index.php from the url try_files $uri $uri /index.php?$args; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; access_log off; } ------------------------------- With this code the main app works fine, but the community app ONLY works for example.com/community/ and example.com/community/?a=foo/bar etc., but *NOT* for example.com/community/foo/bar. I think example.com/community/foo/bar doesn't get handled by the community location block and goes to the main location, which gives a 404 error. I tried various combinations for the location but nothing is fixing it perfectly. I will appreciate any pointers / help. Thanks, Nikhil -------------- next part -------------- An HTML attachment was scrubbed...
URL: From igor at sysoev.ru Tue Aug 28 04:45:10 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 28 Aug 2012 08:45:10 +0400 Subject: Issue with SNI/SSL and default_server In-Reply-To: References: <503385FD.8070903@seld.be> Message-ID: On Aug 28, 2012, at 8:33 , bompus wrote: > I've had the same issues and did some testing. > > The following causes the issue where the SSL certificate that is defined in > the default_server block is being sent for requests that end up in another > server block that has a different ssl_certificate defined. This only happens > when adding the IP address as server_name. > > Example of issue: > server { > listen 443 default_server ssl; > server_name _; > ssl_certificate /usr/local/nginx/conf/ssl/default.crt; > ssl_certificate_key /usr/local/nginx/conf/ssl/default.key; > location / { return 403; } > } > > server { > listen 443 ssl; > server_name 1.2.3.4; > ssl_certificate /usr/local/nginx/conf/ssl/1.2.3.4.crt; > ssl_certificate_key /usr/local/nginx/conf/ssl/1.2.3.4.key; > location /test { return 401;} > } > > When I access https://1.2.3.4/test , I receive a 401 error as expected, but > the SSL certificate being sent is the one defined in default.crt > > Working: > > > server { > listen 443 ssl; > server_name test.hostname.com; > ssl_certificate /usr/local/nginx/conf/ssl/1.2.3.4.crt; > ssl_certificate_key /usr/local/nginx/conf/ssl/1.2.3.4.key; > location /test { return 401;} > } > > Now when accessing test.hostname.com which is an A record to 1.2.3.4 , I get > served the correct certificate as defined in 1.2.3.4 -- I've tested this > multiple times on Ubuntu 12.04 w/ nginx as configured: > nginx version: nginx/1.2.3 > built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) > TLS SNI support enabled > configure arguments: --with-http_ssl_module --user=nobody --group=nobody What client do you use to test ? It may not send a hostname in SSL hello request if the hostname is IP address. 
-- Igor Sysoev From nginx-forum at nginx.us Tue Aug 28 04:48:52 2012 From: nginx-forum at nginx.us (bompus) Date: Tue, 28 Aug 2012 00:48:52 -0400 (EDT) Subject: Issue with SNI/SSL and default_server In-Reply-To: References: Message-ID: <813d7429af1981f1dec737999ff2b620.NginxMailingListEnglish@forum.nginx.org> I tested with latest stable versions of Firefox, Chrome, IE9 on Windows 7 x64. What you said makes sense to me but I have no way of verifying from my end. If you find out this is the case, please make note of it in the documentation somewhere to save somebody else the frustration of figuring this out. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229949,230171#msg-230171 From igor at sysoev.ru Tue Aug 28 05:13:01 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 28 Aug 2012 09:13:01 +0400 Subject: Issue with SNI/SSL and default_server In-Reply-To: <813d7429af1981f1dec737999ff2b620.NginxMailingListEnglish@forum.nginx.org> References: <813d7429af1981f1dec737999ff2b620.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5D74470A-8AAB-4372-B8EF-D66620A1010C@sysoev.ru> On Aug 28, 2012, at 8:48 , bompus wrote: > I tested with latest stable versions of Firefox, Chrome, IE9 on Windows 7 > x64. What you said makes sense to me but I have no way of verifying from my > end. If you find out this is the case, please make note of it in the > documentation somewhere to save somebody else the frustration of figuring > this out. I've just tested using Firefox/MacOSX. It does not send a hostname in SSL handshake packet if the hostname is IP address. -- Igor Sysoev From igor at sysoev.ru Tue Aug 28 05:15:42 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 28 Aug 2012 09:15:42 +0400 Subject: Issue with SNI/SSL and default_server In-Reply-To: <503385FD.8070903@seld.be> References: <503385FD.8070903@seld.be> Message-ID: On Aug 21, 2012, at 16:58 , Jordi Boggiano wrote: > Heya, > > I have a server with two domains using SSL on one IP via SNI. 
So far so > good, but the problem is that one of the site is marked as > default_server to catch all (then I do a redirect to the proper domain, > I left out some parts of the config below for conciseness). > > The problem is, if you have a ssl server marked as default_server, it > seems to take over everything else, and domainb.com is not reachable via > SSL anymore. > > server { > listen 80 default_server; > server_name domaina.com ; > } > > server { > listen 443 ssl default_server; > server_name domaina.com ; > } > > server { > listen 80; > server_name domainb.com; > } > > server { > listen 443 ssl; > server_name domainb.com ; > } > > The workaround I found is the following: I put the IP in the > server_name, and therefore can remove the default_server flag from the > ssl server (it's not completely equivalent, but close enough for my > purposes). The problem is that it needs the server public IP in, which > isn't ideal to have generic vhost templates in puppet: > > server { > listen 80 default_server; > server_name domaina.com ; > } > > server { > listen 443 ssl; > server_name domaina.com ; > } > > server { > listen 80; > server_name domainb.com; > } > > server { > listen 443 ssl; > server_name domainb.com ; > } > > I am not sure whether this is a bug or an expected feature, which is why > I am writing here. These configuration should be equal from nginx point of view, since the first server becomes default_server anyway. Probably the real configuration does not correspond to them. -- Igor Sysoev From nginx-forum at nginx.us Tue Aug 28 05:25:05 2012 From: nginx-forum at nginx.us (bompus) Date: Tue, 28 Aug 2012 01:25:05 -0400 (EDT) Subject: Issue with SNI/SSL and default_server In-Reply-To: <5D74470A-8AAB-4372-B8EF-D66620A1010C@sysoev.ru> References: <5D74470A-8AAB-4372-B8EF-D66620A1010C@sysoev.ru> Message-ID: Good to know. Thank you for checking on this. If you could add this information to the documentation for SNI and/or SSL, that would be helpful for others. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229949,230175#msg-230175 From nginx-forum at nginx.us Tue Aug 28 07:12:52 2012 From: nginx-forum at nginx.us (ovear) Date: Tue, 28 Aug 2012 03:12:52 -0400 (EDT) Subject: nginx worker process hang,cpu load very high Message-ID: <1d11893c8d47bc8112c0e0c25787ac98.NginxMailingListEnglish@forum.nginx.org> Hi, I have run into trouble with nginx running as an HTTP reverse proxy server: if you download a big file (like an mp3 or rar file), the nginx worker process takes lots of CPU and hangs; after a few minutes (about 3 - 8 minutes) it returns to normal, but if you wait long enough it happens again. [root at ovear ~]# lsb_release -a LSB Version: :core-4.0-amd64:core-4.0-ia32:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-ia32:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-ia32:printing-4.0-noarch Distributor ID: CentOS Description: CentOS release 5.8 (Final) Release: 5.8 Codename: Final [root at ovear ~]# uname -a Linux ovear 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux nginx version [root at ovear ~]# /usr/local/nginx/sbin/nginx -V nginx version: nginx/0.8.55 built by gcc 4.1.2 20080704 (Red Hat 4.1.2-52) TLS SNI support disabled configure arguments: --user=nobody --group=nobody --add-module=../ngx_cache_purge-1.5 --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module [root at ovear ~]# (also tested under 1.x, same problem) nginx config worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; events { use epoll; worker_connections 51200; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 128; client_header_buffer_size 32k; large_client_header_buffers 4 32k; client_max_body_size 50m; sendfile on; tcp_nopush on; keepalive_timeout 60; tcp_nodelay on; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' 
'"$http_user_agent" "$http_x_forwarded_for" "$host"'; server { listen 9998; server_name _; location / { proxy_pass http://116.255.1.1; include proxy.conf; } location ~ ^/status { stub_status on; access_log off; } } } realip.conf set $my_x_real_ip ""; if ($http_x_real_ip ~ "^$") { set $my_x_real_ip $remote_addr; } if ($my_x_real_ip ~ "^$") { set $my_x_real_ip $http_x_real_ip; } proxy.conf include realip.conf; proxy_connect_timeout 30s; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 32k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_redirect off; proxy_hide_header Vary; proxy_set_header Accept-Encoding ''; proxy_set_header Host $host; proxy_set_header Referer $http_referer; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $my_x_real_ip; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; trouble [root at ovear conf]# ps aux|grep -e CPU -e nginx USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 17826 0.0 0.0 41168 1060 ? Ss 23:16 0:00 nginx: master process /usr/local/nginx/sbin/nginx nobody 17827 2.1 0.5 60380 20436 ? S 23:16 0:00 nginx: worker process nobody 17828 1.5 0.5 60380 20436 ? S 23:16 0:00 nginx: worker process nobody 17829 23.6 0.5 63036 23444 ? S 23:16 0:04 nginx: worker process nobody 17830 3.0 0.5 60380 20724 ? 
S 23:16 0:00 nginx: worker process root 17851 0.0 0.0 61168 812 pts/1 S+ 23:16 0:00 grep -e CPU -e nginx [root at ovear conf]# nginx pid 17829 is hanging; there is nothing in error.log. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230177,230177#msg-230177 From francis at daoine.org Tue Aug 28 08:25:44 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 28 Aug 2012 09:25:44 +0100 Subject: Issue with multiple location blocks to handle two webapps In-Reply-To: References: Message-ID: <20120828082544.GE32371@craic.sysops.org> On Mon, Aug 27, 2012 at 09:39:55PM -0700, Nikhil Bhardwaj wrote: Hi there, > location ^~ /*community*/ { > try_files $uri $uri/ /index.php?qa-rewrite=$uri&$args; If you ask for /community/foo/bar, this will end up using /index.php?qa-rewrite=/community/foo/bar&, which will be handled in the top level "location ~ \.php" block. Perhaps you want try_files $uri $uri/ /community/index.php?qa-rewrite=$uri&$args; here? (Not tested by me.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Aug 28 09:11:42 2012 From: nginx-forum at nginx.us (iqbalmp) Date: Tue, 28 Aug 2012 05:11:42 -0400 (EDT) Subject: Ngix - Vanish Not Working Message-ID: <7f6ef134123433961e131acde6f0f2cd.NginxMailingListEnglish@forum.nginx.org> Hi, I have installed the 'Varnish' cache along with Nginx; the configuration has also been changed accordingly. But now I cannot access pages (could not connect to the host). Are there any settings that I have missed? Please help. 
OS:CentOS /etc/default/varnish ========================================================= DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G ========================================================= /etc/varnish/default.vcl ========================================================= backend apache { .host = "127.0.0.1"; .port = "8080"; } acl purge { "localhost"; "127.0.0.1"; } sub vcl_recv { ................................. .............................. ========================================================= /etc/nginx/conf.d/default.conf ========================================================= server { listen 8080 default; ## SSL directives might go here ........................................... ......................................... ========================================================= Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230179,230179#msg-230179 From r at roze.lv Tue Aug 28 09:40:48 2012 From: r at roze.lv (Reinis Rozitis) Date: Tue, 28 Aug 2012 12:40:48 +0300 Subject: nginx worker process hang,cpu load very high In-Reply-To: <1d11893c8d47bc8112c0e0c25787ac98.NginxMailingListEnglish@forum.nginx.org> References: <1d11893c8d47bc8112c0e0c25787ac98.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000EFFC302334D3FB69ADBD12B4B837F@MasterPC> > nginx version: nginx/0.8.55 This version is over a year old so pls upgrade - there have been issues in older versions regarding cpu hogging. rr From nginx-forum at nginx.us Tue Aug 28 09:57:55 2012 From: nginx-forum at nginx.us (w00t) Date: Tue, 28 Aug 2012 05:57:55 -0400 (EDT) Subject: Nginx and uploads Message-ID: Hello, I am trying to upload files to Nginx and process them via itself. I have compiled it with nginx_upload_module-2.2.0. 
My config is the following: server { listen 80; server_name example.com; client_max_body_size 100m; location / { root /www; autoindex on; } location /upload { root /www; if ($request_method = POST) { upload_pass @test; break; } upload_store /www/upload; upload_store_access user:rw group:rw all:rw; upload_set_form_field $upload_field_name.name "$upload_file_name"; upload_set_form_field $upload_field_name.content_type "$upload_content_type"; upload_set_form_field $upload_field_name.path "$upload_tmp_path"; upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5"; upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size"; upload_pass_form_field "^submit$|^description$"; upload_cleanup 400 404 499 500-505; } location @test { proxy_pass http://example.com:8080; } } server { listen 8080; server_name example.com; location / { root /www; } } This works, but it saves my uploads with random number names like 0000000001 0000000002 0000000003 0000000004 0000000005 0000000006 0000000007 0045422062 0059512315. I want to save them with the original name. Remember, my @test backend is also Nginx and not something else. Please help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230184,230184#msg-230184 From sorrydreams at gmail.com Tue Aug 28 10:10:37 2012 From: sorrydreams at gmail.com (Chris) Date: Tue, 28 Aug 2012 18:10:37 +0800 Subject: mediawiki redirect issue when nginx is behind varnish and listen to not 80 Message-ID: Hi, I have migrated a MediaWiki site to an nginx server. Nginx listens on port 81 and sits behind Varnish listening on 80; Varnish and nginx are on the same server. It seems the configuration used from the Nginx wiki has a redirect issue: visiting the example.com index page redirects to example.com:81. Any ideas? -------------- next part -------------- An HTML attachment was scrubbed... 
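[Regarding Chris's redirect to port 81: when nginx generates a redirect itself (for example when adding a trailing slash to a directory URL), it appends its own listen port unless told otherwise. One possible fix to try behind Varnish — a sketch, not a tested MediaWiki configuration:

```nginx
server {
    listen 81;
    server_name example.com;

    # Stop nginx from appending its non-standard listen port (:81)
    # to the redirects it generates itself; clients should keep
    # talking to Varnish on port 80.
    port_in_redirect off;

    # ... rest of the MediaWiki config ...
}
```

If the redirect comes from MediaWiki itself rather than nginx, MediaWiki's $wgServer setting may also need to point at the public URL (http://example.com).]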
URL: From nginx-forum at nginx.us Tue Aug 28 09:31:53 2012 From: nginx-forum at nginx.us (iqbalmp) Date: Tue, 28 Aug 2012 05:31:53 -0400 (EDT) Subject: Ngix - Vanish Not Working In-Reply-To: <7f6ef134123433961e131acde6f0f2cd.NginxMailingListEnglish@forum.nginx.org> References: <7f6ef134123433961e131acde6f0f2cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: I had changed my 'default.vcl' to: /etc/varnish/default.vcl ========================================================= backend default { .host = "127.0.0.1"; .port = "8080"; } acl purge { "localhost"; "127.0.0.1"; } sub vcl_recv { ................................. .............................. ========================================================= Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230179,230182#msg-230182 From nginx-forum at nginx.us Tue Aug 28 09:30:50 2012 From: nginx-forum at nginx.us (ShreyasPrakash) Date: Tue, 28 Aug 2012 05:30:50 -0400 (EDT) Subject: Lowercase URLs Message-ID: <2ec297b914702b683d00c64aa031664b.NginxMailingListEnglish@forum.nginx.org> Hi, I would like to know how to convert the URLs to lowercase. When somebody hits an URL ( irrespective of the case ), it should be converted to lowercase in the address bar. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230181,230181#msg-230181 From nginx-forum at nginx.us Tue Aug 28 12:54:11 2012 From: nginx-forum at nginx.us (ovear) Date: Tue, 28 Aug 2012 08:54:11 -0400 (EDT) Subject: nginx worker process hang,cpu load very high In-Reply-To: <000EFFC302334D3FB69ADBD12B4B837F@MasterPC> References: <000EFFC302334D3FB69ADBD12B4B837F@MasterPC> Message-ID: (also tested under 1.x,have the same problem) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230177,230200#msg-230200 From nginx-forum at nginx.us Tue Aug 28 12:59:04 2012 From: nginx-forum at nginx.us (jonarmstrong) Date: Tue, 28 Aug 2012 08:59:04 -0400 (EDT) Subject: Odd 500 Internal Server Error with a Wordpress file (NGINX & PHP-FPM) In-Reply-To: <075ec0fc58b870b9c9d3c185740ecfab.NginxMailingListEnglish@forum.nginx.org> References: <075ec0fc58b870b9c9d3c185740ecfab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <62a09a2f77ac30b33acf3921a6a3d729.NginxMailingListEnglish@forum.nginx.org> I ran into the same problem (my Pages and Media lists were also empty, though the interface still showed the counts of each). I was running Wordpress in a subdirectory for testing and had copied a config from a tutorial for that particular setup. This line turned out to be the culprit: fastcgi_split_path_info ^(/wordpress)(/.*)$; I removed it with no ill effects that I can see so far and the admin pages now work as expected. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229298,230201#msg-230201 From r at roze.lv Tue Aug 28 13:00:45 2012 From: r at roze.lv (Reinis Rozitis) Date: Tue, 28 Aug 2012 16:00:45 +0300 Subject: nginx worker process hang,cpu load very high In-Reply-To: References: <000EFFC302334D3FB69ADBD12B4B837F@MasterPC> Message-ID: <21647DEC92E34098A373780260D69194@MasterPC> > (also tested under 1.x,have the same problem) Which 1.x ? 
rr From nginx-forum at nginx.us Tue Aug 28 13:43:46 2012 From: nginx-forum at nginx.us (Ensiferous) Date: Tue, 28 Aug 2012 09:43:46 -0400 (EDT) Subject: Nginx and uploads In-Reply-To: References: Message-ID: I'm not sure you can really do what you want to. The upload module was not designed for this purpose. If you want to do this you either need to modify the source of the upload module or perhaps you can handle such logic by using the lua module to script what to do with the file uploads. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230184,230206#msg-230206 From chris+nginx at schug.net Tue Aug 28 13:55:39 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Tue, 28 Aug 2012 15:55:39 +0200 Subject: Lowercase URLs In-Reply-To: <2ec297b914702b683d00c64aa031664b.NginxMailingListEnglish@forum.nginx.org> References: <2ec297b914702b683d00c64aa031664b.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 2012-08-28 11:30, ShreyasPrakash wrote: > Hi, > > I would like to know how to convert the URLs to lowercase. When > somebody > hits an URL ( irrespective of the case ), it should be converted to > lowercase in the address bar. With the help of lua-nginx-module (see https://github.com/chaoslawful/lua-nginx-module) it could look like this ... location ~ "\p{Lu}" { rewrite_by_lua 'return ngx.redirect(string.lower(ngx.var.uri), ngx.HTTP_MOVED_PERMANENTLY)'; } -cs From quintinpar at gmail.com Tue Aug 28 14:43:28 2012 From: quintinpar at gmail.com (Quintin Par) Date: Tue, 28 Aug 2012 20:13:28 +0530 Subject: Soft lock for Basic Auth In-Reply-To: References: Message-ID: Hi all, Is it possible to apply a soft lock timeout on invalid password attempts for auth_basic "Login"; auth_basic_user_file /etc/.htpasswd; - Quintin -------------- next part -------------- An HTML attachment was scrubbed... 
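[On Quintin's soft-lock question: stock nginx keeps no counter of failed auth_basic attempts, so there is no built-in lockout; watching the error log with an external tool such as fail2ban is the usual answer. A rough in-nginx approximation is per-IP request rate limiting on the protected location — a sketch, assuming the standard limit_req module is compiled in and the zone name is invented:

```nginx
# http-level: allow each client IP only a slow trickle of requests
limit_req_zone $binary_remote_addr zone=authlimit:10m rate=10r/m;

server {
    location /private/ {
        auth_basic           "Login";
        auth_basic_user_file /etc/.htpasswd;

        # Throttles all requests here (not just failed logins), which
        # in practice bounds brute-force attempts against the password;
        # excess requests get a 503 instead of a password prompt.
        limit_req zone=authlimit burst=5 nodelay;
    }
}
```
]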
URL: From nginx-forum at nginx.us Tue Aug 28 15:02:31 2012 From: nginx-forum at nginx.us (w00t) Date: Tue, 28 Aug 2012 11:02:31 -0400 (EDT) Subject: Nginx and uploads In-Reply-To: References: Message-ID: <8d313218b58600cf524cf33d1da03b2d.NginxMailingListEnglish@forum.nginx.org> This seems odd. If it wasn't meant for Nginx to process uploaded files, then it couldn't have processed them by itself. For example, I changed location @test { proxy_pass http://example.com:8080; to location @test { default_type text/html; root /www; and removed the :8080 server part and it still works. So I am inclined to think that there must be a way to set the filename without going to such lengths as to change the code. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230184,230212#msg-230212 From igor at sysoev.ru Tue Aug 28 16:54:01 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 28 Aug 2012 20:54:01 +0400 Subject: Issue with SNI/SSL and default_server In-Reply-To: References: <5D74470A-8AAB-4372-B8EF-D66620A1010C@sysoev.ru> Message-ID: On Aug 28, 2012, at 9:25 , bompus wrote: > Good to know. Thank you for checking on this. If you could add this > information to the documentation for SNI and/or SSL, that would be helpful > for others. We will add. BTW, using SNI for IP addresses is forbidden by RFC: http://tools.ietf.org/html/rfc4366#section-3.1 Currently, the only server names supported are DNS hostnames; however, this does not imply any dependency of TLS on DNS, and other name types may be added in the future (by an RFC that updates this document). Safari is the only browser in our tests which uses SNI for IP addresses. -- Igor Sysoev From nginx-forum at nginx.us Tue Aug 28 17:18:07 2012 From: nginx-forum at nginx.us (knocte) Date: Tue, 28 Aug 2012 13:18:07 -0400 (EDT) Subject: http_chunked_filter_module not present in modules page? 
Message-ID: <1158a6a78bac7467a606234d426f6c13.NginxMailingListEnglish@forum.nginx.org> Hey people, quick question, Why the http_chunked_filter_module[1] is not present in the modules page[2]? Thanks [1] http://lxr.evanmiller.org/http/source/http/modules/ngx_http_chunked_filter_module.c [2] http://wiki.nginx.org/Modules Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230232,230232#msg-230232 From piotr.sikora at frickle.com Tue Aug 28 17:37:02 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Tue, 28 Aug 2012 19:37:02 +0200 Subject: http_chunked_filter_module not present in modules page? In-Reply-To: <1158a6a78bac7467a606234d426f6c13.NginxMailingListEnglish@forum.nginx.org> References: <1158a6a78bac7467a606234d426f6c13.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hey, > Why the http_chunked_filter_module[1] is not present in the modules > page[2]? Because it's always enabled and you cannot configure anything. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From nginx-forum at nginx.us Tue Aug 28 20:04:40 2012 From: nginx-forum at nginx.us (knocte) Date: Tue, 28 Aug 2012 16:04:40 -0400 (EDT) Subject: http_chunked_filter_module not present in modules page? In-Reply-To: References: Message-ID: <8a29ceded81e275015229081cdd0e484.NginxMailingListEnglish@forum.nginx.org> On 28/08/12 18:37, Piotr Sikora wrote: > Hey, > >> Why the http_chunked_filter_module[1] is not present in the modules >> page[2]? > > Because it's always enabled and you cannot configure anything. OK, thanks Piotr. Where can I get docs about what that module does though? Is there a way to disable it? Thanks PS: Actually, maybe I'm looking at the wrong place. I'm looking at trying to disable the Transfer-Encoding usage of nginx. Any ideas? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230232,230235#msg-230235 From ianevans at digitalhit.com Tue Aug 28 20:12:52 2012 From: ianevans at digitalhit.com (Ian Evans) Date: Tue, 28 Aug 2012 16:12:52 -0400 Subject: Nginx location rule for Wordpress Multisite in subdirectories In-Reply-To: <20120827204738.GD32371@craic.sysops.org> References: <20120827204738.GD32371@craic.sysops.org> Message-ID: <503D2644.9020806@digitalhit.com> On 27/08/2012 4:47 PM, Francis Daly wrote: > nginx locations can be "regex" or "prefix". If you have a url prefix which > matches all of the subdir blogs, and none of the non-subdir blogs, then > something like the following might work (I assume that all subdir blogs > start with /blogs/blog; and therefore that everything that does not start > with that pattern belongs to the /blogs location, which remains as-is.): > > == > if ($uri ~ (/blogs/[^/]*)) { > set $blogname $1; > } > > include fastcgi.conf; > > location ^~ /blogs/blog { > try_files $uri $blogname/index.php?q=$uri; > location ~ php { > fastcgi_pass 127.0.0.1:10004; > } > } > == > > If you haven't got a clear prefix separation for the blogs, then you > might want to try a regex location; or perhaps something involving "map" > which will (probably) require enumeration all of the blogs. Thanks so much for the example, I'll give it a try. Is there a specific order the locations have to go in? /blogs before /blogs/blog or vice-versa? 
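[As a footnote to Francis's suggestion above, the "map"-based variant he mentions might look like the following; the blog names here are hypothetical and, as he notes, each subdirectory blog probably has to be enumerated:

```nginx
# Sketch only: one regex entry per subdirectory blog (names invented).
map $uri $blogname {
    default             "";
    ~^(/blogs/blogone)  $1;
    ~^(/blogs/blogtwo)  $1;
}
```

$blogname can then be used in try_files exactly as in the "if"-based version, without needing the if block at all.]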
From goltus at gmail.com Tue Aug 28 21:43:45 2012 From: goltus at gmail.com (Nikhil Bhardwaj) Date: Tue, 28 Aug 2012 14:43:45 -0700 Subject: Issue with multiple location blocks to handle two webapps In-Reply-To: <20120828082544.GE32371@craic.sysops.org> References: <20120828082544.GE32371@craic.sysops.org> Message-ID: Thanks Francis, > > location ^~ /*community*/ { > > try_files $uri $uri/ /index.php?qa-rewrite=$uri&$args; > > If you ask for /community/foo/bar, this will end up using > /index.php?qa-rewrite=/community/foo/bar&, which will be handled in the > top level "location ~ \.php" block. > > Perhaps you want > > try_files $uri $uri/ /community/index.php?qa-rewrite=$uri&$args; > > here? (Not tested by me.) > f This did the trick. $uri had /community as the first part but you are right using try_files $uri $uri/ /community/index.php?qa-rewrite=$uri&$args; forced it to use the correct php location and then in the application removing /community from the $uri (which translates to $_GET['qa-rewrite']) worked perfectly for me. Thanks again. Nikhil From nginx-forum at nginx.us Tue Aug 28 23:33:07 2012 From: nginx-forum at nginx.us (nano) Date: Tue, 28 Aug 2012 19:33:07 -0400 (EDT) Subject: Renaming server response in 1.3.x not same as 1.2.x? Message-ID: In nginx 1.2 & 1.3 you can modify the source to change the name: src/http/ngx_http_header_filter_module.c (lines 48 and 49): static char ngx_http_server_string[] = "Server: Not Nginx" CRLF; static char ngx_http_server_full_string[] = "Server: Not Nginx/1.0" CRLF; Renaming those and recompiling the source changes the server response name in nginx 1.2. However doing this in 1.3 does not seem to change the name. It still says nginx/1.3.5. With server_tokens off; it still just says nginx. How can I change the server name in 1.3 without installing the 3rd party addon "headers more"? I want to compile from source. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230243,230243#msg-230243 From nginx-forum at nginx.us Tue Aug 28 23:43:49 2012 From: nginx-forum at nginx.us (wideawake) Date: Tue, 28 Aug 2012 19:43:49 -0400 (EDT) Subject: Special headers and X-Accel-Redirect In-Reply-To: <87hb83gh0r.wl%appa@perusio.net> References: <87hb83gh0r.wl%appa@perusio.net> Message-ID: It was an error in the nginx config; all is well now! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,204296,230244#msg-230244 From nginx-forum at nginx.us Wed Aug 29 04:51:31 2012 From: nginx-forum at nginx.us (Ensiferous) Date: Wed, 29 Aug 2012 00:51:31 -0400 (EDT) Subject: Nginx and uploads In-Reply-To: <8d313218b58600cf524cf33d1da03b2d.NginxMailingListEnglish@forum.nginx.org> References: <8d313218b58600cf524cf33d1da03b2d.NginxMailingListEnglish@forum.nginx.org> Message-ID: w00t Wrote: ------------------------------------------------------- > This seems odd. If it wasn't meant for Nginx to process uploaded > files, then it couldn't have processed them by itself. > For example, I changed > location @test { > proxy_pass http://example.com:8080; > > to > location @test { > default_type text/html; > root /www; > and removed the :8080 server part and it still works. > So I am inclined to think that there must be a way to set the filename > without going to such lengths as to change the code. Yes there is, but the default nginx install does not have the features available to do what you want and the upload module does not introduce any such feature either. If you want nginx to be able to rename files based on user input then you need a module which introduces that capability into nginx. Of course, at this point I'm talking theory only as I haven't ever done anything like this, but it should work provided either the lua module or the perl module adds the required features - in theory. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230184,230247#msg-230247 From nginx-forum at nginx.us Wed Aug 29 05:17:20 2012 From: nginx-forum at nginx.us (ovear) Date: Wed, 29 Aug 2012 01:17:20 -0400 (EDT) Subject: nginx worker process hang,cpu load very high In-Reply-To: <21647DEC92E34098A373780260D69194@MasterPC> References: <21647DEC92E34098A373780260D69194@MasterPC> Message-ID: 1.0.15,and 1.2.3 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230177,230249#msg-230249 From cickumqt at gmail.com Wed Aug 29 06:31:36 2012 From: cickumqt at gmail.com (Christopher Meng) Date: Wed, 29 Aug 2012 14:31:36 +0800 Subject: nginx worker process hang,cpu load very high In-Reply-To: <1d11893c8d47bc8112c0e0c25787ac98.NginxMailingListEnglish@forum.nginx.org> References: <1d11893c8d47bc8112c0e0c25787ac98.NginxMailingListEnglish@forum.nginx.org> Message-ID: what about upgrading your system to rhel5/6? -- *Yours sincerely,* *Christopher Meng* Ambassador/Contributor of Fedora Project and many others. http://cicku.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Aug 29 10:25:30 2012 From: nginx-forum at nginx.us (adamchal) Date: Wed, 29 Aug 2012 06:25:30 -0400 (EDT) Subject: Adding $request_path Variable In-Reply-To: References: <8762apf2pi.wl%appa@perusio.net> Message-ID: <738c7c73ec62e0400be0e433cd9cd0f9.NginxMailingListEnglish@forum.nginx.org> I've tried a bunch of approaches in the nginx.conf to get the original request path isolated to a variable for logging, but all have failed. I'm pretty desperate for this variable, but I don't have the time to apply a patch to the source code. I'm willing to pay for someone to do this. If anyone is interested, just let me know. It really shouldn't be that complicated. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227612,230262#msg-230262 From zhang.libing at gmail.com Wed Aug 29 10:39:51 2012 From: zhang.libing at gmail.com (Roast) Date: Wed, 29 Aug 2012 18:39:51 +0800 Subject: how to keep "hot" files in memory? Message-ID: Dear all. We use nginx to serve lots of static content with high traffic loads, and the server is getting slower because of slow SATA drives. So I wonder how to keep "hot" files in memory without other software, just like varnish or squid? Has someone worked out modules for nginx? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Richard.Kearsley at m247.com Wed Aug 29 10:47:47 2012 From: Richard.Kearsley at m247.com (Richard Kearsley) Date: Wed, 29 Aug 2012 10:47:47 +0000 Subject: how to keep "hot" files in memory? In-Reply-To: References: Message-ID: Linux will automatically keep recently accessed files in spare RAM. Also, you can use a RAM disk/tmpfs in a proxy cache setup if really needed. Richard Kearsley Systems Developer | M247 Limited Internal Dial 2210 | Mobile +44 7970 621236 From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Roast Sent: 29 August 2012 10:40 To: nginx Subject: how to keep "hot" files in memory? Dear all. We use nginx to serve lots of static content with high traffic loads, and the server is getting slower because of slow SATA drives. So I wonder how to keep "hot" files in memory without other software, just like varnish or squid? Has someone worked out modules for nginx? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Wed Aug 29 11:04:47 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Wed, 29 Aug 2012 15:04:47 +0400 Subject: Nginx and uploads In-Reply-To: <8d313218b58600cf524cf33d1da03b2d.NginxMailingListEnglish@forum.nginx.org> References: <8d313218b58600cf524cf33d1da03b2d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201208291504.47368.ne@vbart.ru> On Tuesday 28 August 2012 19:02:31 w00t wrote: > This seems odd. If it wasn't meant for Nginx to process uploaded files, > then it couldn't have processed them by itself. You should do this task in your application and it's not odd (see below why). [...] > So I am inclined to think that there must be a way to set the filename > without going to such lengths as to change the code. Just renaming the uploaded file is not enough. You also need to validate its content. Otherwise, you will open a potential security hole in your server. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Aug 29 11:11:16 2012 From: nginx-forum at nginx.us (ShreyasPrakash) Date: Wed, 29 Aug 2012 07:11:16 -0400 (EDT) Subject: Lowercase URLs In-Reply-To: References: Message-ID: Can't we get this functionality without using lua-nginx-module? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230181,230270#msg-230270 From nginx-forum at nginx.us Wed Aug 29 11:56:17 2012 From: nginx-forum at nginx.us (kolbyjack) Date: Wed, 29 Aug 2012 07:56:17 -0400 (EDT) Subject: Adding $request_path Variable In-Reply-To: <738c7c73ec62e0400be0e433cd9cd0f9.NginxMailingListEnglish@forum.nginx.org> References: <8762apf2pi.wl%appa@perusio.net> <738c7c73ec62e0400be0e433cd9cd0f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: This worked for me in a quick test: map $request_uri $request_path { ~(?<captured_path>[^?]*) $captured_path; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227612,230275#msg-230275 From chris+nginx at schug.net Wed Aug 29 12:45:08 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Wed, 29 Aug 2012 14:45:08 +0200 Subject: Lowercase URLs In-Reply-To: References: Message-ID: <92aad6c59d05f5e70c7a41c54f5ae29d@schug.net> On 2012-08-29 13:11, ShreyasPrakash wrote: > Can't we get this functionality without using lua-nginx-module? AFAIK Nginx does not support this out of the box, i.e. there's nothing like the internal functions tolower/toupper of RewriteMap in Apache HTTP (see http://httpd.apache.org/docs/2.4/mod/mod_rewrite.html#rewritemap). If you prefer Perl over Lua, go with the Perl module. From nginx-forum at nginx.us Wed Aug 29 13:39:18 2012 From: nginx-forum at nginx.us (adamchal) Date: Wed, 29 Aug 2012 09:39:18 -0400 (EDT) Subject: Adding $request_path Variable In-Reply-To: References: <8762apf2pi.wl%appa@perusio.net> <738c7c73ec62e0400be0e433cd9cd0f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: kolbyjack, this is really smart. I didn't think about this approach. The only downside is doing an additional regex for each request. But, in the meantime, this will work. Thanks!
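The map approach discussed above can be shown in context. A sketch reconstructing the named capture (which forum software tends to eat), with a hypothetical X-Request-Path response header added purely for demonstration:

```nginx
# $request_uri contains the full original URI including the query
# string; the map strips everything from the first "?" onwards.
map $request_uri $request_path {
    ~(?<captured_path>[^?]*) $captured_path;
}

server {
    listen 80;
    location / {
        # e.g. a request for /foo/bar?x=1 yields $request_path = /foo/bar
        add_header X-Request-Path $request_path;
    }
}
```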
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227612,230282#msg-230282 From nginx-forum at nginx.us Wed Aug 29 16:42:28 2012 From: nginx-forum at nginx.us (mokriy) Date: Wed, 29 Aug 2012 12:42:28 -0400 (EDT) Subject: Header is not passed to proxy Message-ID: <21fd5d4bd8cfebd05e5891a5288d39a7.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, thanks a lot for your module. It was extremely useful for me. I have the following config: location /upload { auth_request /auth; ... location /auth { proxy_pass ; } The initial request to /upload has a header. I supposed this header would be propagated down to . But this does not happen. Question: does it mean that I have to use $request_uri instead of header propagation? Thanks a lot! Oleksiy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230292,230292#msg-230292 From nginx-forum at nginx.us Wed Aug 29 17:04:13 2012 From: nginx-forum at nginx.us (nfn) Date: Wed, 29 Aug 2012 13:04:13 -0400 (EDT) Subject: Hide/Delete a cookie when stored in cache Message-ID: <3d38400ed3fc0bcf6ff8d24d3ecfd764.NginxMailingListEnglish@forum.nginx.org> Hello, I'm caching pages with nginx using the following rules: fastcgi_no_cache $cookie_member_id $is_args; fastcgi_cache_bypass $cookie_member_id $is_args; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; Now, there is a cookie (session_id) that needs to be passed to the backend, but it shouldn't be stored in the cache, since it's the session_id of a guest user and other guests should not see it. Is there a way to store the page in the cache but, before that, remove this cookie?
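One hedged way to keep a Set-Cookie header out of the cached copy is to drop it from the upstream response entirely with fastcgi_hide_header. A sketch, assuming a cache zone named my_cache has been declared elsewhere with fastcgi_cache_path; note that the cookie is then hidden from clients too, so this only fits pages that never need to set it:

```nginx
location / {
    fastcgi_pass 127.0.0.1:9000;            # hypothetical backend address
    fastcgi_cache my_cache;                 # zone assumed declared elsewhere
    fastcgi_cache_valid 200 5m;
    fastcgi_no_cache $cookie_member_id;
    fastcgi_cache_bypass $cookie_member_id;
    # cache even when the backend sends cache-busting headers
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
    # never store (or forward) the backend's Set-Cookie header
    fastcgi_hide_header Set-Cookie;
}
```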
Thanks Nuno Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230294,230294#msg-230294 From gnu.yair at gmail.com Wed Aug 29 17:48:59 2012 From: gnu.yair at gmail.com (=?ISO-8859-1?Q?Yair_Avenda=F1o?=) Date: Wed, 29 Aug 2012 12:48:59 -0500 Subject: help Message-ID: Hello my name is such Yair am currently working on a project in my school I am student my project is to migrate to Apache in nginx but I have a problem with the rules used in their safety nginx why I come to ask for help if I can send a file with the rules and regulations that serve as aLA solution I have months looking nginx rules Extio and I did it. I await your prompt response. greetings. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Wed Aug 29 18:37:57 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 29 Aug 2012 22:37:57 +0400 Subject: http_chunked_filter_module not present in modules page? In-Reply-To: <8a29ceded81e275015229081cdd0e484.NginxMailingListEnglish@forum.nginx.org> References: <8a29ceded81e275015229081cdd0e484.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201208292237.58136.ne@vbart.ru> On Wednesday 29 August 2012 00:04:40 knocte wrote: [...] > PS: Actually, maybe I'm looking at the wrong place. I'm looking at trying > to disabling the Transfer-Encoding usage of NGinx. Any ideas? Looking at the official documentation is always a good idea: http://nginx.org/r/chunked_transfer_encoding wbr, Valentin V. Bartenev
From javi at lavandeira.net Thu Aug 30 07:32:23 2012 From: javi at lavandeira.net (Javi Lavandeira) Date: Thu, 30 Aug 2012 16:32:23 +0900 Subject: help In-Reply-To: References: Message-ID: Dear Yair, Please use periods to separate your sentences. It is really very, very difficult to understand what you mean. What's your question?
Regards, -- Javi Lavandeira Twitter: @javilm Blog: http://www.lavandeira.net/blog On 2012/08/30, at 2:48, Yair Avendaño wrote: > Hello my name is such Yair am currently working on a project in my school I am student my project is to migrate to Apache in nginx but I have a problem with the rules used in their safety nginx why I come to ask for help if I can send a file with the rules and regulations that serve as aLA solution I have months looking nginx rules Extio and I did it. I await your prompt response. > greetings. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Aug 30 07:46:47 2012 From: nginx-forum at nginx.us (mokriy) Date: Thu, 30 Aug 2012 03:46:47 -0400 (EDT) Subject: Auth request module header passing Message-ID: <72bb9527bf3493f6d02530d635ed7afa.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Many thanks for your work! I am using the auth request module and have run into the following problem (issue). I can't get headers from the initial request passed to the auth backend. location /initial { auth_request /auth; } location /auth { proxy_pass ; proxy_set_header X-Header $http_x_header_from_request; } Unfortunately, the auth service does not receive the X-Header. Do I understand correctly that the auth_request module does not receive headers from the initial request?
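For reference, a sketch of forwarding an original-request header to an auth_request backend (ngx_http_auth_request_module). The upstream names are hypothetical; the key point is that $http_* variables always refer to the original client request, so they remain available inside the auth subrequest:

```nginx
location /initial {
    auth_request /auth;
    proxy_pass http://app_backend;            # hypothetical upstream
}

location = /auth {
    internal;
    proxy_pass http://auth_backend;           # hypothetical upstream
    # the auth subrequest should not carry the client's request body
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    # copy a header from the ORIGINAL request into the subrequest
    proxy_set_header X-Header $http_x_header_from_request;
}
```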
Thanks in advance, Oleksiy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230313,230313#msg-230313 From nginx-forum at nginx.us Thu Aug 30 09:16:56 2012 From: nginx-forum at nginx.us (nfn) Date: Thu, 30 Aug 2012 05:16:56 -0400 (EDT) Subject: how to keep "hot" files in memory? In-Reply-To: References: Message-ID: <09917d720a834d90988446d5b026bfd2.NginxMailingListEnglish@forum.nginx.org> Hello, You could try open_file_cache http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache Best regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230263,230317#msg-230317 From nginx-forum at nginx.us Thu Aug 30 09:57:10 2012 From: nginx-forum at nginx.us (leki75) Date: Thu, 30 Aug 2012 05:57:10 -0400 (EDT) Subject: command line prefix (-p) parameter parsing Message-ID: <38966fac2076d4bcd1e3c1cf66c6e62e.NginxMailingListEnglish@forum.nginx.org> Hi, I found that when the prefix is given as a command line parameter, a '/' is appended only when the prefix does not start with '/' (version: 1.2.3): $ /usr/sbin/nginx -p prefix -c nginx.conf nginx: [emerg] open() "prefix/nginx.conf" failed (2: No such file or directory) $ /usr/sbin/nginx -p prefix/ -c nginx.conf nginx: [emerg] open() "prefix//nginx.conf" failed (2: No such file or directory) $ /usr/sbin/nginx -p /prefix -c nginx.conf nginx: [emerg] open() "/prefixnginx.conf" failed (2: No such file or directory) $ /usr/sbin/nginx -p /prefix/ -c nginx.conf nginx: [emerg] open() "/prefix/nginx.conf" failed (2: No such file or
directory) I think you wanted to check the last character of prefix and not the first. So you should change nginx.c (line 839) from if (!ngx_path_separator(*p)) { to if (len == 0 || !ngx_path_separator(p[len - 1])) { Regards, Gabor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230320,230320#msg-230320 From nginx-forum at nginx.us Thu Aug 30 10:00:30 2012 From: nginx-forum at nginx.us (leki75) Date: Thu, 30 Aug 2012 06:00:30 -0400 (EDT) Subject: command line prefix (-p) parameter parsing In-Reply-To: <38966fac2076d4bcd1e3c1cf66c6e62e.NginxMailingListEnglish@forum.nginx.org> References: <38966fac2076d4bcd1e3c1cf66c6e62e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Oh, I just found that you corrected this in 1.3.x version. Sorry for this redundant post. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230320,230321#msg-230321 From nginx-forum at nginx.us Thu Aug 30 10:31:05 2012 From: nginx-forum at nginx.us (mokriy) Date: Thu, 30 Aug 2012 06:31:05 -0400 (EDT) Subject: Auth request module header passing In-Reply-To: <72bb9527bf3493f6d02530d635ed7afa.NginxMailingListEnglish@forum.nginx.org> References: <72bb9527bf3493f6d02530d635ed7afa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <886a9bf56f95d63e6f82c0b5cac30d56.NginxMailingListEnglish@forum.nginx.org> It works, you have to put header in quotes: "$http_x_header_from_request" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230313,230324#msg-230324 From nginx-forum at nginx.us Thu Aug 30 10:31:50 2012 From: nginx-forum at nginx.us (mokriy) Date: Thu, 30 Aug 2012 06:31:50 -0400 (EDT) Subject: Header is not passed to proxy In-Reply-To: <21fd5d4bd8cfebd05e5891a5288d39a7.NginxMailingListEnglish@forum.nginx.org> References: <21fd5d4bd8cfebd05e5891a5288d39a7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3dba876c4e740dc4362e62d3ef989f5d.NginxMailingListEnglish@forum.nginx.org> The header value should be in quotes: "$x_your_header" Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,230292,230325#msg-230325 From nginx-forum at nginx.us Thu Aug 30 11:50:35 2012 From: nginx-forum at nginx.us (yashgt) Date: Thu, 30 Aug 2012 07:50:35 -0400 (EDT) Subject: SSL port other than 443 Message-ID: <4cc7b862da6f8c389ac2ca41d23e1f96.NginxMailingListEnglish@forum.nginx.org> Hi, I have 2 server sections in my config. One runs on port 80 and SSL on 443. The other on port 83 and its SSL on 444: listen 83 default; ## SSL directives might go here listen 444 ssl; Once I restart nginx and run netstat -a, I see port 443 being used but not port 444. What might be the issue? Regards, Yash Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230326,230326#msg-230326 From nginx-forum at nginx.us Thu Aug 30 13:02:11 2012 From: nginx-forum at nginx.us (ovear) Date: Thu, 30 Aug 2012 09:02:11 -0400 (EDT) Subject: nginx worker process hang,cpu load very high In-Reply-To: References: Message-ID: <0050861afd7f45ae8d9255c27879264c.NginxMailingListEnglish@forum.nginx.org> CentOS 6.0, 5.8 and 5.4 have the same problem. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230177,230307#msg-230307 From nginx-forum at nginx.us Thu Aug 30 13:02:15 2012 From: nginx-forum at nginx.us (w00t) Date: Thu, 30 Aug 2012 09:02:15 -0400 (EDT) Subject: Nginx and uploads In-Reply-To: <201208291504.47368.ne@vbart.ru> References: <201208291504.47368.ne@vbart.ru> Message-ID: Indeed, after some more searching I have found what you are saying.
http://forum.nginx.org/read.php?2,185057,186002#msg-186002 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230184,230323#msg-230323 From javi at lavandeira.net Thu Aug 30 13:04:42 2012 From: javi at lavandeira.net (Javi Lavandeira) Date: Thu, 30 Aug 2012 22:04:42 +0900 Subject: SSL port other than 443 In-Reply-To: <4cc7b862da6f8c389ac2ca41d23e1f96.NginxMailingListEnglish@forum.nginx.org> References: <4cc7b862da6f8c389ac2ca41d23e1f96.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9B744668-2E5A-40CC-9767-ADBAA9E3D707@lavandeira.net> Hi, On 2012/08/30, at 20:50, "yashgt" wrote: > I have 2 server sections in my config. One runs on port 80 and SSL on 443. > The other on port 83 and its SSL on 444: > listen 83 default ; > ## SSL directives might go here > listen 444 ssl; > Once I restart nginx and run netstat -a I see port 443 being used but not > port 444. What might be the issue? Are you trying to use the same IP address for both server sections? Regards, -- Javi Lavandeira Twitter: @javilm Blog: http://www.lavandeira.net/blog -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Aug 30 13:07:47 2012 From: nginx-forum at nginx.us (yashgt) Date: Thu, 30 Aug 2012 09:07:47 -0400 (EDT) Subject: SSL port other than 443 In-Reply-To: <9B744668-2E5A-40CC-9767-ADBAA9E3D707@lavandeira.net> References: <9B744668-2E5A-40CC-9767-ADBAA9E3D707@lavandeira.net> Message-ID: <236311517eefe83778759c2476717916.NginxMailingListEnglish@forum.nginx.org> Yes. Same IP address. With no SSL, this works fine . I can access one app as http://myserver/ and the other as http://myserver:83/. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230326,230329#msg-230329 From javi at lavandeira.net Thu Aug 30 13:13:52 2012 From: javi at lavandeira.net (Javi Lavandeira) Date: Thu, 30 Aug 2012 22:13:52 +0900 Subject: SSL port other than 443 In-Reply-To: <236311517eefe83778759c2476717916.NginxMailingListEnglish@forum.nginx.org> References: <9B744668-2E5A-40CC-9767-ADBAA9E3D707@lavandeira.net> <236311517eefe83778759c2476717916.NginxMailingListEnglish@forum.nginx.org> Message-ID: <577E2F7E-3450-4CD6-8570-8B46758E9277@lavandeira.net> Hi, On 2012/08/30, at 22:07, "yashgt" wrote: > Yes. Same IP address. With no SSL, this works fine . I can access one app as > http://myserver/ and the other as http://myserver:83/. When working with SSL you need to use a different IP address for each SSL host. Regards, From igor at sysoev.ru Thu Aug 30 13:24:33 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 30 Aug 2012 17:24:33 +0400 Subject: SSL port other than 443 In-Reply-To: <4cc7b862da6f8c389ac2ca41d23e1f96.NginxMailingListEnglish@forum.nginx.org> References: <4cc7b862da6f8c389ac2ca41d23e1f96.NginxMailingListEnglish@forum.nginx.org> Message-ID: <192AC371-D686-4AF1-833C-E6C8A93EB1BE@sysoev.ru> On Aug 30, 2012, at 15:50 , yashgt wrote: > Hi, > > I have 2 server sections in my config. One runs on port 80 and SSL on 443. > The other on port 83 and its SSL on 444: > listen 83 default ; > ## SSL directives might go here > listen 444 ssl; > Once I restart nginx and run netstat -a I see port 443 being used but not > port 444. What might be the issue? What does "nginx -t" show ? 
-- Igor Sysoev From igor at sysoev.ru Thu Aug 30 13:25:15 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 30 Aug 2012 17:25:15 +0400 Subject: SSL port other than 443 In-Reply-To: <577E2F7E-3450-4CD6-8570-8B46758E9277@lavandeira.net> References: <9B744668-2E5A-40CC-9767-ADBAA9E3D707@lavandeira.net> <236311517eefe83778759c2476717916.NginxMailingListEnglish@forum.nginx.org> <577E2F7E-3450-4CD6-8570-8B46758E9277@lavandeira.net> Message-ID: On Aug 30, 2012, at 17:13 , Javi Lavandeira wrote: > Hi, > > On 2012/08/30, at 22:07, "yashgt" wrote: > >> Yes. Same IP address. With no SSL, this works fine . I can access one app as >> http://myserver/ and the other as http://myserver:83/. > > When working with SSL you need to use a different IP address for each SSL host. If server ports are different, you can use one IP address. -- Igor Sysoev From nginx-forum at nginx.us Thu Aug 30 13:27:07 2012 From: nginx-forum at nginx.us (yashgt) Date: Thu, 30 Aug 2012 09:27:07 -0400 (EDT) Subject: SSL port other than 443 In-Reply-To: <192AC371-D686-4AF1-833C-E6C8A93EB1BE@sysoev.ru> References: <192AC371-D686-4AF1-833C-E6C8A93EB1BE@sysoev.ru> Message-ID: root at v-enterprise15:/usr/local/pnp4nagios# nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230326,230333#msg-230333 From nginx-forum at nginx.us Thu Aug 30 13:53:47 2012 From: nginx-forum at nginx.us (yashgt) Date: Thu, 30 Aug 2012 09:53:47 -0400 (EDT) Subject: SSL port other than 443 In-Reply-To: References: <192AC371-D686-4AF1-833C-E6C8A93EB1BE@sysoev.ru> Message-ID: <565cc0ece8fd1ab353b8d020ef471bb3.NginxMailingListEnglish@forum.nginx.org> Here is my nginx detail: # nginx -V nginx: nginx version: nginx/1.0.5 nginx: TLS SNI support enabled nginx: configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log 
--http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.0.5/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.0.5/debian/modules/nginx-upstream-fair This doc says that it should be possible to share the same IP address. I use latest browsers. I intend to have multiple server sections in the config, each for a different app. I am using a self-signed certificate. Anything special needs to be done to the cert? 
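As Igor notes in this thread, one IP address can serve multiple SSL servers when the ports differ. A minimal sketch with two server blocks; the certificate paths are hypothetical, and each server simply presents its own certificate on its own port, so SNI is not required:

```nginx
server {
    listen 80;
    listen 443 ssl;
    server_name myserver;
    ssl_certificate     /etc/nginx/ssl/app1.crt;   # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/app1.key;   # hypothetical path
}

server {
    listen 83 default;
    listen 444 ssl;
    server_name myserver;
    ssl_certificate     /etc/nginx/ssl/app2.crt;   # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/app2.key;   # hypothetical path
}
```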
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230326,230336#msg-230336 From david at styleflare.com Thu Aug 30 17:12:25 2012 From: david at styleflare.com (David | StyleFlare) Date: Thu, 30 Aug 2012 13:12:25 -0400 Subject: Input Headers. - headers_more_module. Message-ID: <503F9EF9.1020002@styleflare.com> I am trying to figure out why I don't see request headers added to the request. I am trying to add 'X-Server-ID: $id' Here is a snippet from my config. location /{ more_set_input_headers 'X-Server-ID: $id'; more_set_headers 'X-Server-ID: $id'; uwsgi_pass://upstream; } I see X-Server-ID sent in the response headers, but my upstream application does not seem to get the header variable. I am not sure what I am doing wrong here. I read the docs here for an explanation, but I can't seem to figure it out. http://wiki.nginx.org/HttpHeadersMoreModule and http://wiki.nginx.org/HttpUwsgiModule#Parameters_transferred_to_uWSGI-server. Thanks in advance for any help. From roberto at unbit.it Thu Aug 30 17:21:47 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Thu, 30 Aug 2012 19:21:47 +0200 Subject: Input Headers. - headers_more_module.
In-Reply-To: <503F9EF9.1020002@styleflare.com> References: <503F9EF9.1020002@styleflare.com> Message-ID: <190A17FB-9EC7-4D62-B957-E0829C295B68@unbit.it> On 30 Aug 2012, at 19:12, David | StyleFlare wrote: > I am trying to figure out what I dont see request headers added to the request. > > I am trying to add 'X-Server-ID: $id' > > Here is a snippet form my config. > > location /{ > more_set_input_headers 'X-Server-ID: $id'; > more_set_headers 'X-Server-ID: $id'; > > uwsgi_pass://upstream; > } > > > I see X-Server-ID sent in the response headers, but my upstream application does not seem to get the Header Variable. > > I am not sure what I am doing wrong here. I read the docs here for explanation, but I cant seem to figure it out. > http://wiki.nginx.org/HttpHeadersMoreModule > and > http://wiki.nginx.org/HttpUwsgiModule#Parameters_transferred_to_uWSGI-server. > If you need to pass custom data to a uWSGI backend just use uwsgi_param Server_ID $id; (you can use whatever key name you want, it will be appended to the WSGI environ) -- Roberto De Ioris http://unbit.it JID: roberto at jabber.unbit.it From david at styleflare.com Thu Aug 30 17:59:02 2012 From: david at styleflare.com (David | StyleFlare) Date: Thu, 30 Aug 2012 13:59:02 -0400 Subject: Input Headers. - headers_more_module. In-Reply-To: <190A17FB-9EC7-4D62-B957-E0829C295B68@unbit.it> References: <503F9EF9.1020002@styleflare.com> <190A17FB-9EC7-4D62-B957-E0829C295B68@unbit.it> Message-ID: <503FA9E6.5060403@styleflare.com> Thanks Roberto, That's one way to do it. I would still like to know why the first way of adding a request Header in nginx didn't work. On 8/30/12 1:21 PM, Roberto De Ioris wrote: > uwsgi_param Server_ID $id; From om.brahmana at gmail.com Thu Aug 30 18:28:02 2012 From: om.brahmana at gmail.com (Srirang Doddihal) Date: Thu, 30 Aug 2012 23:58:02 +0530 Subject: Achieving strong client -> upstrem_server affinity Message-ID: Hi, I am using Nginx 1.1.19 on Ubuntu 12.04 (LTS) server.
Nginx is used to load balance traffic between two instances of the Punjab XMPP-BOSH server. Below is the relevant part from my nginx configuration : upstream chat_bosh { ip_hash; server 10.98.29.135:5280; server 10.98.29.135:5281; } server { ...................... ....................... location /http-bind { proxy_next_upstream off; proxy_pass http://chat_bosh; expires off; } } I am using ip_hash to make sure that a client will always be served by the same upstream server. This is essential. I am using "proxy_next_upstream off;" to prevent a request being tried on multiple upstream servers, because such requests will invariably fail. I realize that this will cost me redundancy and fallback in case a particular upstream server goes down, but that isn't useful in this particular case. I plan to handle it separately via monitoring and alerts. Anomalies : 1) Despite ip_hash being specified, requests from a particular client IP sometimes (close to 7% of requests) get routed to a second upstream server 2) Despite proxy_next_upstream off; some requests (about 5%) are tried over multiple upstream servers. What could be causing these and how do I go about fixing these? Here are two sets of log line captures which depict the above mentioned problems. http://pastebin.com/vnEHQBxK - upstream_next_server on; (i.e. default value) http://pastebin.com/vvPBsPgT - upstream_next_server off; (as specified above) These log lines were created with this command : tail -F /var/log/nginx/access.log | grep "POST /http-bind" | awk '{print $1 "|" $3 "|" $8 "|" $14}' $1 = $remote_addr $2 = $upstream_addr $3 = $msec $4 = $status To see the ip_hash anomaly search for "|404" and look at the adjacent lines. The same $remote_addr will be forwarded to two different upstream servers. To see the upstream_next_server off; anomaly search for "HTTP" - Because of my brittle awk statement the status is replaced by the string "HTTP/1.1" when upstream_addr has multiple addresses.
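One hedged explanation for the affinity anomalies described above (not confirmed from the logs): with the default max_fails=1 and fail_timeout=10s, a single failed attempt marks an upstream server unavailable for ten seconds, and ip_hash then remaps the affected clients onto the remaining server. If that behaviour is unwanted, attempt accounting can be disabled:

```nginx
upstream chat_bosh {
    ip_hash;
    # max_fails=0 disables failure accounting, so a briefly failing
    # server is never marked unavailable and the ip_hash mapping
    # from client IP to server stays fixed.
    server 10.98.29.135:5280 max_fails=0;
    server 10.98.29.135:5281 max_fails=0;
}
```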
-- Regards, Srirang G Doddihal Brahmana. The LIGHT shows the way. The WISE see it. The BRAVE walk it. The PERSISTENT endure and complete it. I want to do it all ALONE. From agentzh at gmail.com Thu Aug 30 19:09:42 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 30 Aug 2012 12:09:42 -0700 Subject: Input Headers. - headers_more_module. In-Reply-To: <503F9EF9.1020002@styleflare.com> References: <503F9EF9.1020002@styleflare.com> Message-ID: Hello! On Thu, Aug 30, 2012 at 10:12 AM, David | StyleFlare wrote: > I am trying to figure out what I dont see request headers added to the > request. > > I am trying to add 'X-Server-ID: $id' > > Here is a snippet form my config. > > location /{ > more_set_input_headers 'X-Server-ID: $id'; > more_set_headers 'X-Server-ID: $id'; > > uwsgi_pass://upstream; > } > > > I see X-Server-ID sent in the response headers, but my upstream application > does not seem to get the Header Variable. > The more_set_input_headers directive runs at the end of the nginx "rewrite" phase and I think your $id variable just didn't have a value (yet) at that phase. You can try inserting a constant header like this: set $id "My-ID"; more_set_input_headers "X-Server-ID: $id"; I've tested the following example with nginx 1.2.3 and ngx_headers_more 0.18: location = /t { set $id "123456"; more_set_input_headers 'X-Server-ID: $id'; uwsgi_pass 127.0.0.1:1234; } And then in another terminal, start "nc" to listen on the local port, 1234: $ nc -l 1234 And then accessing the /t location defined above: $ curl localhost:8080/t assuming your nginx is listening on the local port 8080. And now on the terminal running "nc", you should get something like this (omitted those non-printable bytes): H HTTP_HOST localhostHTTP_CONNECTIONCloseHTTP_X_SERVER_ID123456 We can see that the X-Server-ID request header name and its value, "123456", are indeed sent by ngx_uwsgi. 
Best regards, -agentzh From agentzh at gmail.com Thu Aug 30 19:12:58 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 30 Aug 2012 12:12:58 -0700 Subject: Nginx and uploads In-Reply-To: References: Message-ID: Hello! On Tue, Aug 28, 2012 at 6:43 AM, Ensiferous wrote: > I'm not sure you can really do what you want to. The upload module was not > designed for this purpose. If you want to do this you either need to modify > the source of the upload module or perhaps you can handle such logic by > using the lua module to script what to do with the file uploads. > Yes! There's a lua-resty-upload library for ngx_lua that can do non-buffered uploading: https://github.com/agentzh/lua-resty-upload You don't have to touch the disk at all if you prefer sending the data chunks to the TCP/UDP backends (via cosockets) in a strict non-buffered mode. Best regards, -agentzh From david at styleflare.com Thu Aug 30 20:12:57 2012 From: david at styleflare.com (David | StyleFlare) Date: Thu, 30 Aug 2012 16:12:57 -0400 Subject: Input Headers. - headers_more_module. In-Reply-To: References: <503F9EF9.1020002@styleflare.com> Message-ID: <503FC949.5040801@styleflare.com> Thank you. So this worked. The question then is: if I am setting a value in /auth, when does it actually get set? postgres_set $pg_server 0 0 required; Then in my location block I do more_set_input_headers 'X-Server-ID: $pg_server'; When is $pg_server actually set? Thanks. On 8/30/12 3:09 PM, agentzh wrote: > Hello! > > On Thu, Aug 30, 2012 at 10:12 AM, David | StyleFlare > wrote: >> I am trying to figure out what I dont see request headers added to the >> request. >> >> I am trying to add 'X-Server-ID: $id' >> >> Here is a snippet form my config. >> >> location /{ >> more_set_input_headers 'X-Server-ID: $id'; >> more_set_headers 'X-Server-ID: $id'; >> >> uwsgi_pass://upstream; >> } >> >> >> I see X-Server-ID sent in the response headers, but my upstream application >> does not seem to get the Header Variable.
>> > The more_set_input_headers directive runs at the end of the nginx > "rewrite" phase and I think your $id variable just didn't have a value > (yet) at that phase. > > You can try inserting a constant header like this: > > set $id "My-ID"; > more_set_input_headers "X-Server-ID: $id"; > > I've tested the following example with nginx 1.2.3 and ngx_headers_more 0.18: > > location = /t { > set $id "123456"; > more_set_input_headers 'X-Server-ID: $id'; > uwsgi_pass 127.0.0.1:1234; > } > > And then in another terminal, start "nc" to listen on the local port, 1234: > > $ nc -l 1234 > > And then accessing the /t location defined above: > > $ curl localhost:8080/t > > assuming your nginx is listening on the local port 8080. > > And now on the terminal running "nc", you should get something like > this (omitted those non-printable bytes): > > H HTTP_HOST localhostHTTP_CONNECTIONCloseHTTP_X_SERVER_ID123456 > > We can see that the X-Server-ID request header name and its value, > "123456", are indeed sent by ngx_uwsgi. > > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From zzz at zzz.org.ua Thu Aug 30 21:22:14 2012 From: zzz at zzz.org.ua (Alexandr Gomoliako) Date: Fri, 31 Aug 2012 00:22:14 +0300 Subject: Achieving strong client -> upstrem_server affinity In-Reply-To: References: Message-ID: > I am using ip_hash to make sure that a client will always be served by > the same upstream server. This is essential. > 1) Despite ip_hash being specified the request from a particular > client IP sometimes (close to 7% of requests) get routed to a second > upstream server > 2) Despute proxy_next_upstream off; some requests (about 5%) are tried > over multiple upstream servers. > how do I go about fixing these? 
With a little help from perl :) Check out this example: https://gist.github.com/2124034 It decides which upstream to use by hashing $r->uri, but you can replace it with $r->remote_addr From francis at daoine.org Thu Aug 30 21:27:37 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Aug 2012 22:27:37 +0100 Subject: Nginx location rule for Wordpress Multisite in subdirectories In-Reply-To: <503D2644.9020806@digitalhit.com> References: <20120827204738.GD32371@craic.sysops.org> <503D2644.9020806@digitalhit.com> Message-ID: <20120830212737.GA18253@craic.sysops.org> On Tue, Aug 28, 2012 at 04:12:52PM -0400, Ian Evans wrote: > On 27/08/2012 4:47 PM, Francis Daly wrote: Hi there, > >nginx locations can be "regex" or "prefix". > > Is there a specific order the locations have to go in? /blogs before > /blogs/blog or vice-versa? No. "prefix" locations (location, location ^~, and location =; although the last isn't really a prefix) have the longest matching one chosen, independent of the order in the config file. That's one of the reasons why it is good to avoid "regex" locations if possible. f -- Francis Daly francis at daoine.org From agentzh at gmail.com Thu Aug 30 21:30:35 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 30 Aug 2012 14:30:35 -0700 Subject: Input Headers. - headers_more_module. In-Reply-To: <503FC949.5040801@styleflare.com> References: <503F9EF9.1020002@styleflare.com> <503FC949.5040801@styleflare.com> Message-ID: Hello! On Thu, Aug 30, 2012 at 1:12 PM, David | StyleFlare wrote: > Thank You > > So this worked. > Cool :) > The question is then if I am setting a value in /auth > > When does it get actually set? > > postgres_set $pg_server 0 0 required; > > Then in my location block; > I do > > more_set_input_headers 'X-Server-ID: $pg_server'; > > When is $pg_server actually set? > Are you using the ngx_http_auth_request module? The auth_request directive runs at the access phase. 
The order of running phases in nginx for any nginx locations always looks like this: rewrite phase (set, rewrite, more_set_input_headers, rewrite_by_lua, etc) access phase (auth_request, allow, deny, access_by_lua, etc) content phase (proxy_pass, postgres_pass, uwsgi_pass, content_by_lua, etc) log phase (log_by_lua, etc) A solution is to use the ngx_lua module's access_by_lua module to replace both auth_request and more_set_input_headers, as in location / { access_by_lua ' local res = ngx.location.capture("/auth") if res.status == 200 then ngx.req.set_header("X-Server-ID", res.var.pg_server) end '; uwsgi_pass ...; } location = /auth { internal; postgres_query "..."; postgres_pass ...; postgres_set $pg_server ...; } You can check out the ngx_lua's documentation for more details: http://wiki.nginx.org/HttpLuaModule Best regards, -agentzh From agentzh at gmail.com Thu Aug 30 21:33:03 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 30 Aug 2012 14:33:03 -0700 Subject: Input Headers. - headers_more_module. In-Reply-To: References: <503F9EF9.1020002@styleflare.com> <503FC949.5040801@styleflare.com> Message-ID: Hello! On Thu, Aug 30, 2012 at 2:30 PM, agentzh wrote: > > location / { > access_by_lua ' > local res = ngx.location.capture("/auth") > if res.status == 200 then > ngx.req.set_header("X-Server-ID", res.var.pg_server) Sorry, typo here, this line should be ngx.req.set_header("X-Server-ID", ngx.var.pg_server) Best regards, -agentzh From david at styleflare.com Thu Aug 30 21:41:35 2012 From: david at styleflare.com (David J) Date: Thu, 30 Aug 2012 17:41:35 -0400 Subject: Input Headers. - headers_more_module. In-Reply-To: References: <503F9EF9.1020002@styleflare.com> <503FC949.5040801@styleflare.com> Message-ID: LUA. Yay I needed an excuse to play with lua. Seriously sounds a bit scary but I will take the plunge and see what happens. Thanks On Aug 30, 2012 5:33 PM, "agentzh" wrote: > Hello! 
> > On Thu, Aug 30, 2012 at 2:30 PM, agentzh wrote: > > > > location / { > > access_by_lua ' > > local res = ngx.location.capture("/auth") > > if res.status == 200 then > > ngx.req.set_header("X-Server-ID", res.var.pg_server) > > Sorry, typo here, this line should be > > ngx.req.set_header("X-Server-ID", ngx.var.pg_server) > > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.libing at gmail.com Fri Aug 31 00:49:41 2012 From: zhang.libing at gmail.com (Roast) Date: Fri, 31 Aug 2012 08:49:41 +0800 Subject: how to keep "hot" files in memory? In-Reply-To: <09917d720a834d90988446d5b026bfd2.NginxMailingListEnglish@forum.nginx.org> References: <09917d720a834d90988446d5b026bfd2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks. But it seems open_file_cache not cache the content of the file, am I right? On Thu, Aug 30, 2012 at 5:16 PM, nfn wrote: > Hello, > > You could try open_file_cache > > http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache > > Best regards > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,230263,230317#msg-230317 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- The time you enjoy wasting is not wasted time! -------------- next part -------------- An HTML attachment was scrubbed... URL: From dzhou at netdna.com Fri Aug 31 01:15:47 2012 From: dzhou at netdna.com (Don Zhou) Date: Thu, 30 Aug 2012 18:15:47 -0700 Subject: how to keep "hot" files in memory? In-Reply-To: References: <09917d720a834d90988446d5b026bfd2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Check out memcached module for nginx. http://wiki.nginx.org/HttpMemcachedModule. It required you to install memcached however. 
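[A read-path sketch of the memcached approach suggested above. Note the memcached module only reads from memcached; the application (or a separate populator) must store the files under matching keys. The server address, key scheme, and paths here are illustrative assumptions, not from the thread:]

```nginx
location /hot/ {
    # Keys must match whatever the populator stores, e.g. the bare URI.
    set $memcached_key $uri;
    memcached_pass 127.0.0.1:11211;
    default_type text/html;   # memcached stores no Content-Type
    # On a cache miss (404) or backend error, fall back to disk.
    error_page 404 502 504 = @disk;
}

location @disk {
    root /var/www;
}
```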
Regards On Thu, Aug 30, 2012 at 5:49 PM, Roast wrote: > Thanks. > > But it seems open_file_cache not cache the content of the file, am I right? > > > On Thu, Aug 30, 2012 at 5:16 PM, nfn wrote: > >> Hello, >> >> You could try open_file_cache >> >> http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache >> >> Best regards >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,230263,230317#msg-230317 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > The time you enjoy wasting is not wasted time! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.libing at gmail.com Fri Aug 31 01:41:58 2012 From: zhang.libing at gmail.com (Roast) Date: Fri, 31 Aug 2012 09:41:58 +0800 Subject: how to keep "hot" files in memory? In-Reply-To: References: <09917d720a834d90988446d5b026bfd2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks. Memcached module is a better solution, it will reduce much disk IO. But it seems not high efficiency, we just want keep hot file in shared memory at millions of static file. But the cache algorithms of memcached is LRU, so without big memory the cache will be set and then will be delete frequently. Am I right? And I think, maybe we need a moudle to solve this problem? On Fri, Aug 31, 2012 at 9:15 AM, Don Zhou wrote: > Check out memcached module for nginx. > http://wiki.nginx.org/HttpMemcachedModule. It required you to install > memcached however. > > Regards > > > On Thu, Aug 30, 2012 at 5:49 PM, Roast wrote: > >> Thanks. >> >> But it seems open_file_cache not cache the content of the file, am I >> right? 
>> >> >> On Thu, Aug 30, 2012 at 5:16 PM, nfn wrote: >> >>> Hello, >>> >>> You could try open_file_cache >>> >>> http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache >>> >>> Best regards >>> >>> Posted at Nginx Forum: >>> http://forum.nginx.org/read.php?2,230263,230317#msg-230317 >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> The time you enjoy wasting is not wasted time! >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- The time you enjoy wasting is not wasted time! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Aug 31 04:36:00 2012 From: nginx-forum at nginx.us (coombesy) Date: Fri, 31 Aug 2012 00:36:00 -0400 (EDT) Subject: $host includes port for reverse proxy In-Reply-To: <944017e8808a64f4824087634eff16c4.NginxMailingListEnglish@forum.nginx.org> References: <944017e8808a64f4824087634eff16c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4f7626d58a349eec1d634b20ab21b7d4.NginxMailingListEnglish@forum.nginx.org> My bad. I didn't have correct "proxy_set_header" settings. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227919,230361#msg-230361 From nginx-forum at nginx.us Fri Aug 31 07:27:13 2012 From: nginx-forum at nginx.us (lenky0401) Date: Fri, 31 Aug 2012 03:27:13 -0400 (EDT) Subject: where assign values to rrp->peers->last_cached? 
Message-ID: Assignment to the variable rrp->peers->last_cached cannot be found in anywhere in total nginx-1.3.5 source code, if the variable's default value is zero, this conditional will never be true: ngx_int_t ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void *data) { ngx_http_upstream_rr_peer_data_t *rrp = data; ... if (rrp->peers->last_cached) { ==============>//never be true? /* cached connection */ c = rrp->peers->cached[rrp->peers->last_cached]; rrp->peers->last_cached--; /* ngx_unlock_mutex(ppr->peers->mutex); */ #if (NGX_THREADS) c->read->lock = c->read->own_lock; c->write->lock = c->write->own_lock; #endif pc->connection = c; pc->cached = 1; return NGX_OK; } ... i don't know if i am missing something or not, can anyone help me understand this? Thank you in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230366,230366#msg-230366 From om.brahmana at gmail.com Fri Aug 31 07:57:46 2012 From: om.brahmana at gmail.com (Srirang Doddihal) Date: Fri, 31 Aug 2012 13:27:46 +0530 Subject: Achieving strong client -> upstrem_server affinity In-Reply-To: References: Message-ID: Hi, On Fri, Aug 31, 2012 at 2:52 AM, Alexandr Gomoliako wrote: >> I am using ip_hash to make sure that a client will always be served by >> the same upstream server. This is essential. > >> 1) Despite ip_hash being specified the request from a particular >> client IP sometimes (close to 7% of requests) get routed to a second >> upstream server > >> 2) Despute proxy_next_upstream off; some requests (about 5%) are tried >> over multiple upstream servers. > >> how do I go about fixing these? > > With a little help from perl :) > Check out this example: https://gist.github.com/2124034 > It decides which upstream to use by hashing $r->uri, but you can > replace it with $r->remote_addr Perl.. ouch.!!. :) I was hoping it wouldn't come to that. 
So does that mean ip_hash works on a best effort basis and doesn't always ensure that requests from a particular remote_addr go the same upstream server? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Regards, Srirang G Doddihal Brahmana. The LIGHT shows the way. The WISE see it. The BRAVE walk it. The PERSISTENT endure and complete it. I want to do it all ALONE. From mdounin at mdounin.ru Fri Aug 31 08:55:00 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 31 Aug 2012 12:55:00 +0400 Subject: Achieving strong client -> upstrem_server affinity In-Reply-To: References: Message-ID: <20120831085459.GP40452@mdounin.ru> Hello! On Thu, Aug 30, 2012 at 11:58:02PM +0530, Srirang Doddihal wrote: > Hi, > > I am using Nginx 1.1.19 on Ubuntu 12.04 (LTS) server. > > Nginx is used to load balance traffic between two instances of the > Punjab XMPP-BOSH server. > > Below is the relevant part from my nginx configuration : > > upstream chat_bosh { > ip_hash; > server 10.98.29.135:5280; > server 10.98.29.135:5281; > } > > server { > ...................... > ....................... > location /http-bind { > proxy_next_upstream off; > proxy_pass http://chat_bosh; > expires off; > } > } > > I am using ip_hash to make sure that a client will always be served by > the same upstream server. This is essential. > I am using "proxy_next_upstream off;" to prevent a request being tried > on multiple upstream servers, because such requests will invariably > fail. I realize that this will cost me redundancy and fallback in case > a particular upstream server goes down, but that isn't useful in this > particular case. I plan to handle it separately via monitoring and > alerts. In addition to "proxy_next_upstream off" you should at least specify max_fails=0 for upstream servers. Else a server might be considered down and request from a client will be re-hashed to another one. 
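[The suggestion above (ip_hash plus max_fails=0) can be sketched as a small change to the upstream block from the original question; the addresses are copied from that question and the snippet is untested:]

```nginx
upstream chat_bosh {
    ip_hash;
    # max_fails=0 disables per-server failure accounting, so a transient
    # error cannot mark a server as down and re-hash its clients onto
    # the other server.
    server 10.98.29.135:5280 max_fails=0;
    server 10.98.29.135:5281 max_fails=0;
}
```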
Note that docs might be a bit misleading here as they say one should refer to proxy_next_upstream setting to see what is considered to be server failure. This isn't exactly true: if upstream server fails to return correct http answer (i.e. on error, timeout, invalid_header in proxy_next_upstream terms) the failure is always counted. What can be considered to be failure or not is valid http responses, i.e. http_500 and so on. > Anomalies : > > 1) Despite ip_hash being specified the request from a particular > client IP sometimes (close to 7% of requests) get routed to a second > upstream server This is correct as long as upstream servers fail and you don't use max_fails=0. See above. > 2) Despute proxy_next_upstream off; some requests (about 5%) are tried > over multiple upstream servers. This is strange, and you may want to provide more info, see http://wiki.nginx.org/Debugging. I would suggest this is likely some configuration error though (requests are handled in another location, without proxy_next_upstream set to off?). Maxim Dounin From mdounin at mdounin.ru Fri Aug 31 08:57:32 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 31 Aug 2012 12:57:32 +0400 Subject: where assign values to rrp->peers->last_cached? In-Reply-To: References: Message-ID: <20120831085732.GQ40452@mdounin.ru> Hello! On Fri, Aug 31, 2012 at 03:27:13AM -0400, lenky0401 wrote: > Assignment to the variable rrp->peers->last_cached cannot be found in > anywhere in total nginx-1.3.5 source code, if the variable's default value > is zero, this conditional will never be true: > > ngx_int_t > ngx_http_upstream_get_round_robin_peer(ngx_peer_connection_t *pc, void > *data) > { > ngx_http_upstream_rr_peer_data_t *rrp = data; > ... > if (rrp->peers->last_cached) { ==============>//never be true? 
> > /* cached connection */ > > c = rrp->peers->cached[rrp->peers->last_cached]; > rrp->peers->last_cached--; > > /* ngx_unlock_mutex(ppr->peers->mutex); */ > > #if (NGX_THREADS) > c->read->lock = c->read->own_lock; > c->write->lock = c->write->own_lock; > #endif > > pc->connection = c; > pc->cached = 1; > > return NGX_OK; > } > ... > > i don't know if i am missing something or not, can anyone help me understand this? Thank you in advance. This code is leftover from earlier incomplete attempts to implement cached connections. It's not currently used. Maxim Dounin From hari.h at csmcom.com Fri Aug 31 10:06:21 2012 From: hari.h at csmcom.com (Hari Hendaryanto) Date: Fri, 31 Aug 2012 17:06:21 +0700 Subject: client-ip mail proxy Message-ID: <50408C9D.2050506@csmcom.com> hi, i've setup a pop3/imap4 proxy using nginx recently, everything works great, except i cannot obtain the client ip address, i'm using an external auth_http server run on apache/php. How do i forward the client ip from nginx to the apache auth server? something like 'X-Forwarded-For' in http reverse proxy. this is what i have done so far by adding auth_http_header mail { ..... ..... server { protocol pop3; listen 110; starttls on; auth_http_header X-Auth-Client-IP $remote_addr; } } i'm still seeing the nginx ip address in the apache log i need some enlightenment here TIA PT.CITRA SARI MAKMUR SATELLITE & TERRESTRIAL NETWORK Connecting the distance - anytime, anywhere, any content http://www.csmcom.com From nginx-forum at nginx.us Fri Aug 31 10:28:39 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 06:28:39 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files Message-ID: <843d5db3cd171aa836ddefa3eea90d43.NginxMailingListEnglish@forum.nginx.org> Nginx rewrite rules for favicon.ico for multiple subdomains. 
In nginx /etc/sites-avaliable/vhost_config_file, I have the following rewrite rules for serving multiple subdomains with their favicon.ico files: if ($host = example1.example.com){ rewrite ^/favicon.ico$ /favicon_example.ico break; } if ($host = example2.example.com){ rewrite ^/favicon.ico$ /favicon_example.ico break; } if ($host = example3.example.com){ rewrite ^/favicon.ico$ /favicon_example.ico break; } Please, how can I make this simple operation by adding **ONE GENERAL REWRITE RULE**: That rule must do: 1.For subdomains example1,example2... to check in location /images/$host.ico is there a favicon_example.ico file and serve it. 2.If there is no such file, than to serve just a plain favicon.ico file. Thank's in advance... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230375#msg-230375 From igor at sysoev.ru Fri Aug 31 10:42:59 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 31 Aug 2012 14:42:59 +0400 Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <843d5db3cd171aa836ddefa3eea90d43.NginxMailingListEnglish@forum.nginx.org> References: <843d5db3cd171aa836ddefa3eea90d43.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2422B510-A47C-41A5-9A69-7F275C8950C2@sysoev.ru> On Aug 31, 2012, at 14:28 , nexon wrote: > Nginx rewrite rules for favicon.ico for multiple subdomains. > > In nginx /etc/sites-avaliable/vhost_config_file, I have the following > rewrite rules for serving multiple subdomains with their favicon.ico files: > > > > if ($host = example1.example.com){ > rewrite ^/favicon.ico$ /favicon_example.ico break; > } > > if ($host = example2.example.com){ > rewrite ^/favicon.ico$ /favicon_example.ico break; > } > > if ($host = example3.example.com){ > rewrite ^/favicon.ico$ /favicon_example.ico break; > } > > > Please, how can I make this simple operation by adding **ONE GENERAL REWRITE > RULE**: > > That rule must do: > > 1.For subdomains example1,example2... 
to check in location /images/$host.ico > is there a favicon_example.ico file and serve it. > > 2.If there is no such file, than to serve just a plain favicon.ico file. > > Thank's in advance... Why not to create a separate server block for each site instead of this if/rewrite mess ? -- Igor Sysoev From igor at sysoev.ru Fri Aug 31 10:47:02 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 31 Aug 2012 14:47:02 +0400 Subject: client-ip mail proxy In-Reply-To: <50408C9D.2050506@csmcom.com> References: <50408C9D.2050506@csmcom.com> Message-ID: <75A3E041-E72F-4F44-91B7-2E596E1F541B@sysoev.ru> On Aug 31, 2012, at 14:06 , Hari Hendaryanto wrote: > hi, > > i've setup pop3/imap4 proxy using nginx recently, everything work great, unless i cannot obtain client ip address, i'm using external auth_http server run on apache/php. > > How do i forward client ip from nginx to apache auth server? something like 'X-Forwarded-For' in http reverse proxy. > this is what i have done so far by adding auth_http_header > > mail { > ..... > ..... > > server { > protocol pop3; > listen 110; > starttls on; > auth_http_header X-Auth-Client-IP $remote_addr; > } > > } > > i still seeing nginx ip address in apache log > i need some enlightenment here A client IP is sent in "Client-IP" header by default. -- Igor Sysoev From r at roze.lv Fri Aug 31 10:54:22 2012 From: r at roze.lv (Reinis Rozitis) Date: Fri, 31 Aug 2012 13:54:22 +0300 Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <843d5db3cd171aa836ddefa3eea90d43.NginxMailingListEnglish@forum.nginx.org> References: <843d5db3cd171aa836ddefa3eea90d43.NginxMailingListEnglish@forum.nginx.org> Message-ID: > That rule must do: > 1.For subdomains example1,example2... to check in location > /images/$host.ico is there a favicon_example.ico file and serve it. > 2.If there is no such file, than to serve just a plain favicon.ico file. 
From your description I don't fully understand what exactly is your current file structure but you can use try_files ( http://wiki.nginx.org/HttpCoreModule#try_files ) and add the $host variable in whatever place is needed. For example: location /favicon.ico { try_files /images/$host/favicon_example.ico /favicon.ico; } rr From nginx-forum at nginx.us Fri Aug 31 11:05:44 2012 From: nginx-forum at nginx.us (extern) Date: Fri, 31 Aug 2012 07:05:44 -0400 (EDT) Subject: problem with load php Message-ID: <6048d0b44c8b39ea57fd82a045be1de2.NginxMailingListEnglish@forum.nginx.org> hello. I use nginx on Windows Server 2003. When I load a PHP script I get a 403 Forbidden error, but when I load an HTML page it loads fine. How can I use PHP with nginx? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230381,230381#msg-230381 From cickumqt at gmail.com Fri Aug 31 11:15:50 2012 From: cickumqt at gmail.com (Christopher Meng) Date: Fri, 31 Aug 2012 19:15:50 +0800 Subject: problem with load php In-Reply-To: <6048d0b44c8b39ea57fd82a045be1de2.NginxMailingListEnglish@forum.nginx.org> References: <6048d0b44c8b39ea57fd82a045be1de2.NginxMailingListEnglish@forum.nginx.org> Message-ID: The most popular setup is: nginx + php-fpm From nginx-forum at nginx.us Fri Aug 31 11:27:38 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 07:27:38 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <2422B510-A47C-41A5-9A69-7F275C8950C2@sysoev.ru> References: <2422B510-A47C-41A5-9A69-7F275C8950C2@sysoev.ru> Message-ID: <4ae6dfe7fef76b884ea6a71dc873117f.NginxMailingListEnglish@forum.nginx.org> Igor Sysoev Wrote: ------------------------------------------------------- > Why not to create a separate server block for each site instead of > this > if/rewrite mess ? 
> > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Can't do that, because I have to serve different favicon.ico to different subdomain. In example, there is several domains in server_name and some of them need to have their own favicon.ico, if there is no favicon.ico for some of them, than they need to be served with the main favicon.ico file. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230384#msg-230384 From igor at sysoev.ru Fri Aug 31 11:31:55 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 31 Aug 2012 15:31:55 +0400 Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <4ae6dfe7fef76b884ea6a71dc873117f.NginxMailingListEnglish@forum.nginx.org> References: <2422B510-A47C-41A5-9A69-7F275C8950C2@sysoev.ru> <4ae6dfe7fef76b884ea6a71dc873117f.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Aug 31, 2012, at 15:27 , nexon wrote: > Igor Sysoev Wrote: > ------------------------------------------------------- >> Why not to create a separate server block for each site instead of >> this >> if/rewrite mess ? >> >> >> -- >> Igor Sysoev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > Can't do that, because I have to serve different favicon.ico to different > subdomain. > In example, there is several domains in server_name and some of them need > to have their own favicon.ico, > if there is no favicon.ico for some of them, than they need to be served > with the main favicon.ico file. Anything except favicons in this servers is the same ? -- Igor Sysoev From nginx-forum at nginx.us Fri Aug 31 11:32:49 2012 From: nginx-forum at nginx.us (nano) Date: Fri, 31 Aug 2012 07:32:49 -0400 (EDT) Subject: Reverse proxy is caching php pages? Message-ID: I setup a reverse proxy on my forum and everything is working okay. (I think?). 
http { ... proxy_cache_path /usr/local/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g; ... } server { listen 80; #listen [::]:80 ipv6only=on; return 301 https://$host$request_uri; } server { listen ssl.xxx.yyy:443 ssl spdy; #listen [::]:443 ipv6only=on ssl; server_name ssl.xxx.yyy; ssl on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_certificate /usr/local/nginx/ssl/server.pem; ssl_certificate_key /usr/local/nginx/ssl/ssl.key; ssl_ecdh_curve secp521r1; keepalive_timeout 300; add_header Strict-Transport-Security "max-age=7776000; includeSubdomains"; error_page 502 504 /offline.html; location / { proxy_pass http://xxx.xxx.xxx.xxx/; proxy_set_header Host xxx.yyy; proxy_set_header CF-Connecting-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_cache STATIC; proxy_cache_valid 200 7d; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; } location = /offline.html { root html; } } [small note: Do I really even need ssl_ecdh_curve? fine without? 
I can't find documentation on it so idk why I'm using it] Config on my main site: server { listen [::]:80 ipv6only=on; # listen for IPv6 only traffic on IPv6 sockets listen 80; # listen also for IPv4 traffic on "regular" IPv4 sockets server_name xxx.yyy; client_max_body_size 30M; #for large file uploads access_log /home/web/logs/access.log main; error_log /home/web/logs/error.log; root /home/web/site; error_page 404 403 /404.html; location ~ /\.ht { deny all; return 404; } location ~ /(addons|data|attachments) { deny all; return 404; } location ~ \.php$ { try_files $uri = 404; fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ \.(?:ico|css|js|gif|jpe?g|png)$ { expires 90d; add_header Cache-Control public; } location ~ \.(?:html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 180s; add_header Cache-Control "public, must-revalidate"; } } When I have scripts that should not be cached they get cached. For example if I have a script that displays my IP address, visiting that page on the reverse proxy it gets cached and is publicly viewable to everyone. How can I prevent the reverse proxy from caching PHP pages or pages ending in .php? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230385,230385#msg-230385 From hari.h at csmcom.com Fri Aug 31 11:32:03 2012 From: hari.h at csmcom.com (Hari Hendaryanto) Date: Fri, 31 Aug 2012 18:32:03 +0700 Subject: client-ip mail proxy In-Reply-To: <75A3E041-E72F-4F44-91B7-2E596E1F541B@sysoev.ru> References: <50408C9D.2050506@csmcom.com> <75A3E041-E72F-4F44-91B7-2E596E1F541B@sysoev.ru> Message-ID: <5040A0B3.1010608@csmcom.com> On 8/31/2012 5:47 PM, Igor Sysoev wrote: > On Aug 31, 2012, at 14:06 , Hari Hendaryanto wrote: > >> hi, >> >> i've setup pop3/imap4 proxy using nginx recently, everything work great, unless i cannot obtain client ip address, i'm using external auth_http server run on apache/php. 
>> >> How do i forward client ip from nginx to apache auth server? something like 'X-Forwarded-For' in http reverse proxy. >> this is what i have done so far by adding auth_http_header >> >> mail { >> ..... >> ..... >> >> server { >> protocol pop3; >> listen 110; >> starttls on; >> auth_http_header X-Auth-Client-IP $remote_addr; >> } >> >> } >> >> i still seeing nginx ip address in apache log >> i need some enlightenment here > A client IP is sent in "Client-IP" header by default. > > > -- > Igor Sysoev > ah., yes i missed that. thanks PT.CITRA SARI MAKMUR SATELLITE & TERRESTRIAL NETWORK Connecting the distance - anytime, anywhere, any content http://www.csmcom.com From nginx-forum at nginx.us Fri Aug 31 11:44:16 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 07:44:16 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: References: Message-ID: > Anything except favicons in this servers is the same ? > > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Please explain your question. If I understand you correctly, you are asking is every subdomain have their own page, than yes. The trick is that if subdomain have it's own favicon than it needs to be served with it, if not, than to be served with main favicon.ico... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230388#msg-230388 From nginx-forum at nginx.us Fri Aug 31 11:57:37 2012 From: nginx-forum at nginx.us (Arkokat) Date: Fri, 31 Aug 2012 07:57:37 -0400 (EDT) Subject: memc_pass and upstream Message-ID: After debugging 6 hours... I'm pretty tired of trying the same things over and over. I'm trying to recover some data from multiple memcached servers, I've configure an upstream and using the memc module. Some how I'm getting all the time gateway timeouts for no explainable reason. 
upstream memc_servers { server 174.123.250.28:11211; server 184.173.29.131:11211; } memc_read_timeout 5s; memc_connect_timeout 5s; memc_send_timeout 5s; memc_next_upstream not_found; location /main_page { set $memc_key $host/?$request_uri; set $memc_cmd get; default_type text/html; memc_pass memc_servers; } If some one could help it would be appriciated +) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230390,230390#msg-230390 From igor at sysoev.ru Fri Aug 31 12:29:22 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 31 Aug 2012 16:29:22 +0400 Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: References: Message-ID: On Aug 31, 2012, at 15:44 , nexon wrote: >> Anything except favicons in this servers is the same ? > > Please explain your question. > If I understand you correctly, you are asking is every subdomain have their > own page, than yes. > The trick is that if subdomain have it's own favicon than it needs to be > served with it, if not, than to be served with main favicon.ico... The question is why do you try to push everything inside one server instead of separating processing in several servers: server { server_name sub1.domain.com; ... } server { server_name sub2.domain.com; ... } server { server_name sub3.domain.com; ... } -- Igor Sysoev From nginx-forum at nginx.us Fri Aug 31 13:13:53 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 09:13:53 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: References: Message-ID: > From you description I don't fully understand what exactly is your > current > file structure but you can use try_files ( > http://wiki.nginx.org/HttpCoreModule#try_files ) and add the $host > variable > in whatever place is needed. 
> > For example: > > location /favicon.ico { > try_files /images/$host/favicon_example.ico /favicon.ico; > } > > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Could you please be more specific about the syntax? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230395#msg-230395 From varia at e-healthexpert.org Fri Aug 31 13:14:21 2012 From: varia at e-healthexpert.org (Mark Alan) Date: Fri, 31 Aug 2012 14:14:21 +0100 Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <843d5db3cd171aa836ddefa3eea90d43.NginxMailingListEnglish@forum.nginx.org> References: <843d5db3cd171aa836ddefa3eea90d43.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120831141421.2b670288@e-healthexpert.org> On Fri, 31 Aug 2012 06:28:39 -0400 (EDT), "nexon" wrote: > Nginx rewrite rules for favicon.ico for multiple subdomains. > > In nginx /etc/sites-avaliable/vhost_config_file, I have the following > rewrite rules for serving multiple subdomains with their favicon.ico > files: > > > > if ($host = example1.example.com){ > rewrite ^/favicon.ico$ /favicon_example.ico break; > } > > if ($host = example2.example.com){ > rewrite ^/favicon.ico$ /favicon_example.ico break; > } > > if ($host = example3.example.com){ > rewrite ^/favicon.ico$ /favicon_example.ico break; > } > > > Please, how can I make this simple operation by adding **ONE GENERAL > REWRITE RULE**: > > That rule must do: > > 1.For subdomains example1,example2... to check in > location /images/$host.ico is there a favicon_example.ico file and > serve it. > > 2.If there is no such file, than to serve just a plain favicon.ico > file. Would this help? [note: from memory, not tested] location = /favicon.ico { try_files /images/$server_name.ico /favicon.ico =204; } M. From ne at vbart.ru Fri Aug 31 13:17:58 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Fri, 31 Aug 2012 17:17:58 +0400 Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: References: Message-ID: <201208311717.58863.ne@vbart.ru> On Friday 31 August 2012 17:13:53 nexon wrote: [...] > > Could you please be more specific in sintax? > http://nginx.org/r/try_files wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri Aug 31 13:52:48 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 09:52:48 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: References: Message-ID: <53fb35efdccff138a1f5b1c50fd6461d.NginxMailingListEnglish@forum.nginx.org> Igor Sysoev Wrote: ------------------------------------------------------- > On Aug 31, 2012, at 15:44 , nexon wrote: > > >> Anything except favicons in this servers is the same ? > > > > Please explain your question. > > If I understand you correctly, you are asking is every subdomain > have their > > own page, than yes. > > The trick is that if subdomain have it's own favicon than it needs > to be > > served with it, if not, than to be served with main favicon.ico... > > The question is why do you try to push everything inside one server > instead of separating processing in several servers: > > server { > server_name sub1.domain.com; > ... > } > > server { > server_name sub2.domain.com; > ... > } > > server { > server_name sub3.domain.com; > ... > } > > > -- > Igor Sysoev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I am using several vhosts and this is part of the setup file for one of them: server_name sub1.example.com *.domain.com domain.com sub2.example.com; if ($host = something1.domain.com){ rewrite ^/favicon.ico$ /favicon_domain.com.ico break; } if ($host = something2.domain.com){ rewrite ^/favicon.ico$ /favicon_domain.com.ico break; } etc... 
I plan to put all *.ico files in location /images/ something1.domain.com.ico something2.domain.com.ico something3.domain.com.ico etc... So what I need is: when a request for something1.domain.com's favicon happens, check in location /images/ whether there is a file something1.domain.com.ico and serve it if it is there; if there is no such file, serve the classic favicon.ico file. Please help... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230394#msg-230394 From nginx-forum at nginx.us Fri Aug 31 13:53:42 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 09:53:42 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <20120831141421.2b670288@e-healthexpert.org> References: <20120831141421.2b670288@e-healthexpert.org> Message-ID: <081795f4cb4e87714226be64531ade5b.NginxMailingListEnglish@forum.nginx.org> > Would this help? [note: from memory, not tested] > > location = /favicon.ico { > try_files /images/$server_name.ico /favicon.ico =204; > } > > M. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I have several vhosts, this is part of the config file for the vhost in question: server_name sub1.domain.com *.mydomain.com mydomain.com sub2.domain.com; if ($host = www.mydomain.com){ rewrite ^/favicon.ico$ /favicon_mydomain.ico break; } if ($host = boat.mydomain.com){ rewrite ^/favicon.ico$ /favicon_boat.ico break; } if ($host = car.mydomain.com){ rewrite ^/favicon.ico$ /favicon_car.ico break; } etc... I plan to put all .ico files in the /images/ dir. favicon_mydomain.ico favicon_boat.ico favicon_car.ico and many more... So what I need is this: when a request happens for car.mydomain.com, nginx searches for /images/favicon_car.ico and serves it if it exists; if not, it serves the regular favicon.ico. Please help...
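The fallback behaviour being asked for maps naturally onto try_files, along the lines already suggested in the thread; a minimal sketch (untested, and assuming the icons are renamed after the host, e.g. /images/car.mydomain.com.ico instead of favicon_car.ico):

```nginx
# Untested sketch: serve a per-host icon when one exists under /images/,
# otherwise fall back to the site-wide /favicon.ico.
location = /favicon.ico {
    try_files /images/$host.ico /favicon.ico =404;
}
```

With host-derived file names no per-subdomain if blocks are needed; a new subdomain picks up its own icon as soon as the file is dropped into /images/.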
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230400#msg-230400 From igor at sysoev.ru Fri Aug 31 14:01:25 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 31 Aug 2012 18:01:25 +0400 Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <53fb35efdccff138a1f5b1c50fd6461d.NginxMailingListEnglish@forum.nginx.org> References: <53fb35efdccff138a1f5b1c50fd6461d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120831140124.GA59517@nginx.com> On Fri, Aug 31, 2012 at 09:52:48AM -0400, nexon wrote: > Igor Sysoev Wrote: > ------------------------------------------------------- > > On Aug 31, 2012, at 15:44 , nexon wrote: > > > > >> Anything except favicons in this servers is the same ? > > > > > > Please explain your question. > > > If I understand you correctly, you are asking is every subdomain > > have their > > > own page, than yes. > > > The trick is that if subdomain have it's own favicon than it needs > > to be > > > served with it, if not, than to be served with main favicon.ico... > > > > The question is why do you try to push everything inside one server > > instead of separating processing in several servers: > > > > server { > > server_name sub1.domain.com; > > ... > > } > > > > server { > > server_name sub2.domain.com; > > ... > > } > > > > server { > > server_name sub3.domain.com; > > ... > > } > > I am using several vhosts and this is part of the setup file for one of > them: > > server_name sub1.example.com *.domain.com domain.com sub2.example.com; > > if ($host = something1.domain.com){ > rewrite ^/favicon.ico$ /favicon_domain.com.ico break; > } > > if ($host = something2.domain.com){ > rewrite ^/favicon.ico$ /favicon_domain.com.ico break; > } > > etc... > > I plan to put all *.ico files in location /images/ > > something1.domain.com.ico > something2.domain.com.ico > something3.domain.com.ico > > etc... 
> > So what I need is: > when request is for something1.domain.com.ico happens > than to check in location /images/ is there a file something1.domain.com.ico > and serve it if it is there, > but if there is no such a file, than to serve clasic favicon.ico file. server { server_name something1.domain.com; location = /favicon.ico { alias /image/something1.domain.com.ico; } ... } server { server_name something2.domain.com; location = /favicon.ico { alias /image/something2.domain.com.ico; } ... } server { server_name something3.domain.com; location = /favicon.ico { alias /image/something3.domain.com.ico; } ... } -- Igor Sysoev From om.brahmana at gmail.com Fri Aug 31 14:28:52 2012 From: om.brahmana at gmail.com (Srirang Doddihal) Date: Fri, 31 Aug 2012 19:58:52 +0530 Subject: Achieving strong client -> upstrem_server affinity In-Reply-To: <20120831085459.GP40452@mdounin.ru> References: <20120831085459.GP40452@mdounin.ru> Message-ID: Hello Maxim, Thank you very much for the detailed explanation. Things are much clearer now. On Fri, Aug 31, 2012 at 2:25 PM, Maxim Dounin wrote: > Hello! > > On Thu, Aug 30, 2012 at 11:58:02PM +0530, Srirang Doddihal wrote: > >> Hi, >> >> I am using Nginx 1.1.19 on Ubuntu 12.04 (LTS) server. >> >> Nginx is used to load balance traffic between two instances of the >> Punjab XMPP-BOSH server. >> >> Below is the relevant part from my nginx configuration : >> >> upstream chat_bosh { >> ip_hash; >> server 10.98.29.135:5280; >> server 10.98.29.135:5281; >> } >> >> server { >> ...................... >> ....................... >> location /http-bind { >> proxy_next_upstream off; >> proxy_pass http://chat_bosh; >> expires off; >> } >> } >> >> I am using ip_hash to make sure that a client will always be served by >> the same upstream server. This is essential. >> I am using "proxy_next_upstream off;" to prevent a request being tried >> on multiple upstream servers, because such requests will invariably >> fail. 
I realize that this will cost me redundancy and fallback in case >> a particular upstream server goes down, but that isn't useful in this >> particular case. I plan to handle it separately via monitoring and >> alerts. > In addition to "proxy_next_upstream off" you should at least > specify max_fails=0 for upstream servers. Else a server might > be considered down and request from a client will be re-hashed to > another one. Got it. How about the following scenario : Request - 1] "client-1" is forwarded to "server-1". Request - 2] "server-1" does not respond properly and hence is considered down. "client-1" gets an error message. Request - 3] "client-1" is now hashed to "server-2" and is forwarded to "server-2". Request - 4] Now will "client-1" continue to be forwarded to "server-2", or will it come back to "server-1"? I.e., is the re-hash permanent or temporary? > > Note that docs might be a bit misleading here as they say one > should refer to proxy_next_upstream setting to see what is > considered to be server failure. This isn't exactly true: if > upstream server fails to return correct http answer (i.e. on > error, timeout, invalid_header in proxy_next_upstream terms) the > failure is always counted. Understood till here. > What can be considered to be failure > or not is valid http responses, i.e. http_500 and so on. > This was confusing. Are you saying that only HTTP 1xx, 2xx or 3xx responses from the upstream server will not count towards the failure count, and any 4xx or 5xx responses will be considered failures? >> Anomalies : >> >> 1) Despite ip_hash being specified the request from a particular >> client IP sometimes (close to 7% of requests) get routed to a second >> upstream server > > This is correct as long as upstream servers fail and you don't use > max_fails=0. See above. > >> 2) Despute proxy_next_upstream off; some requests (about 5%) are tried >> over multiple upstream servers.
> This is strange, and you may want to provide more info, see > http://wiki.nginx.org/Debugging. > I will try to get a debug log. Currently I am using the Ubuntu package. I probably will have to do a custom build for this. > I would suggest this is likely some configuration error though > (requests are handled in another location, without > proxy_next_upstream set to off?). All requests to the concerned upstream servers are sent from only one location and that location has proxy_next_upstream set to off. I am setting up a test environment to isolate this issue. I will get back with more details a little later. Is there anything specific that you want to capture? > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Regards, Srirang G Doddihal Brahmana. The LIGHT shows the way. The WISE see it. The BRAVE walk it. The PERSISTENT endure and complete it. I want to do it all ALONE. From nginx-forum at nginx.us Fri Aug 31 14:50:49 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 10:50:49 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <20120831140124.GA59517@nginx.com> References: <20120831140124.GA59517@nginx.com> Message-ID: <782fc8f4305d7a7cd5165333a78bc56c.NginxMailingListEnglish@forum.nginx.org> Yes, this would be much better, but I am stuck with my conf as it is... Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230405#msg-230405 From nginx-forum at nginx.us Fri Aug 31 14:52:10 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 10:52:10 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <201208311717.58863.ne@vbart.ru> References: <201208311717.58863.ne@vbart.ru> Message-ID: Thank you Valentin, this solved my problem...
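For reference, the max_fails=0 change discussed in the affinity thread above amounts to a per-server flag on the upstream block from the original post; a sketch (addresses and ports taken from that config):

```nginx
upstream chat_bosh {
    ip_hash;
    # max_fails=0 disables failure accounting, so a misbehaving backend is
    # never marked down and its clients are never re-hashed to the other one.
    server 10.98.29.135:5280 max_fails=0;
    server 10.98.29.135:5281 max_fails=0;
}
```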
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230406#msg-230406 From nginx-forum at nginx.us Fri Aug 31 14:54:45 2012 From: nginx-forum at nginx.us (nexon) Date: Fri, 31 Aug 2012 10:54:45 -0400 (EDT) Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <20120831141421.2b670288@e-healthexpert.org> References: <20120831141421.2b670288@e-healthexpert.org> Message-ID: <1c55c1fefc99c74bdce90652e2370098.NginxMailingListEnglish@forum.nginx.org> Thanks Mark, problem solved... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,230375,230407#msg-230407 From igor at sysoev.ru Fri Aug 31 14:57:01 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 31 Aug 2012 18:57:01 +0400 Subject: Nginx rewrite rule for favicon.ico files In-Reply-To: <782fc8f4305d7a7cd5165333a78bc56c.NginxMailingListEnglish@forum.nginx.org> References: <20120831140124.GA59517@nginx.com> <782fc8f4305d7a7cd5165333a78bc56c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0D35DC29-5A1B-40F2-BBC8-5E364AE9160A@sysoev.ru> On Aug 31, 2012, at 18:50 , nexon wrote: > Yes, this would be much better, but I am stuck with my conf as it is... The more complex a configuration you eventually create, the more often you will get stuck with it. It's better to create a scalable configuration from the very start, or at least as early as possible. -- Igor Sysoev From david at styleflare.com Fri Aug 31 16:12:40 2012 From: david at styleflare.com (David | StyleFlare) Date: Fri, 31 Aug 2012 12:12:40 -0400 Subject: Input Headers. - headers_more_module. In-Reply-To: References: <503F9EF9.1020002@styleflare.com> <503FC949.5040801@styleflare.com> Message-ID: <5040E278.40000@styleflare.com> Strangely I still don't see the header variable set. I installed nginx-lua and I added the snippet you sent me to the nginx config. nginx starts fine, but I don't see the value set. Any ideas? I also upgraded to the 1.3.4 version of Nginx while I did this. Thanks so much. David.
On 8/30/12 5:33 PM, agentzh wrote: > Hello! > > On Thu, Aug 30, 2012 at 2:30 PM, agentzh wrote: >> location / { >> access_by_lua ' >> local res = ngx.location.capture("/auth") >> if res.status == 200 then >> ngx.req.set_header("X-Server-ID", res.var.pg_server) > Sorry, typo here, this line should be > > ngx.req.set_header("X-Server-ID", ngx.var.pg_server) > > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nfnlists at gmail.com Fri Aug 31 16:46:02 2012 From: nfnlists at gmail.com (Nuno Neves) Date: Fri, 31 Aug 2012 17:46:02 +0100 Subject: Different cache time for different locations Message-ID: Hello, All my requests pass in the php location to fastcgi ( location ~ \.php$ ) and I would like to have different cache times for different requests. As an example : / -> cache time for 10 minutes /about -> cache time for 1 day /products -> cache time for 1h Right now I have the same cache for all requests: fastcgi_cache_valid 200 302 10m; How am I able to accomplish this setup? Thanks Nuno From mdounin at mdounin.ru Fri Aug 31 17:08:24 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 31 Aug 2012 21:08:24 +0400 Subject: Different cache time for different locations In-Reply-To: References: Message-ID: <20120831170824.GV40452@mdounin.ru> Hello! On Fri, Aug 31, 2012 at 05:46:02PM +0100, Nuno Neves wrote: > Hello, > > All my requests pass in the php location to fastcgi ( location ~ > \.php$ ) and I would like to have different cache times for different > requests. > > As an example : > > / -> cache time for 10 minutes > /about -> cache time for 1 day > /products -> cache time for 1h > > Right now I have the same cache for all requests: fastcgi_cache_valid > 200 302 10m; > > How am I able to accomplish this setup? Configure different locations to handle different requests, for example location = / { fastcgi_pass ...
fastcgi_cache_valid 10m; ... } location = /about { fastcgi_pass ... fastcgi_cache_valid 1d; ... } location = /products { fastcgi_pass ... fastcgi_cache_valid 1h; ... } Maxim Dounin From mdounin at mdounin.ru Fri Aug 31 17:36:09 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 31 Aug 2012 21:36:09 +0400 Subject: Soft lock for Basic Auth In-Reply-To: References: Message-ID: <20120831173608.GW40452@mdounin.ru> Hello! On Tue, Aug 28, 2012 at 08:13:28PM +0530, Quintin Par wrote: > Hi all, > > Is it possible to apply a soft lock timeout on invalid password attempts > for > > > > auth_basic "Login"; > > auth_basic_user_file /etc/.htpasswd; As far as I understand the question, something like this should work: error_page 401 /401.html; location = /401.html { delay 1s; } Where the "delay" directive is added by a trivial module from [1]. (It may be equally done e.g. with embedded perl, or with limit_req, but it's usually a good idea to keep things simple.) [1] http://mdounin.ru/hg/ngx_http_delay_module Maxim Dounin From ianevans at digitalhit.com Fri Aug 31 17:40:16 2012 From: ianevans at digitalhit.com (Ian Evans) Date: Fri, 31 Aug 2012 13:40:16 -0400 Subject: Photo uploads and scalability Message-ID: <5040F700.2070502@digitalhit.com> Getting ready to work on a new section of the site which will require a reader to upload a photo and write a caption. It's my first time writing anything to do with uploading and I'm wondering what things in the nginx realm I need to consider so the system doesn't get bogged down. Nginx is a dream for us serving files, but since I've never handled uploads, what pitfalls/traps/surprises do I need to be prepared for? I'd like to think of scalability from day one so I'm not trying to fix something under load. Right now I have one server. Is there any way to prepare my nginx.conf for easily handling, say, new servers getting added/removed in the cloud? [Hey, we might get 1 upload a week or a million a day, so it's good to be prepared.]
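One nginx setting worth checking before any of the scalability questions is the request body limit: the default client_max_body_size of 1m rejects larger uploads with a 413 before PHP ever sees them. A sketch with illustrative values:

```nginx
# Illustrative values only; tune to the largest photo you expect.
client_max_body_size    20m;   # default is 1m, too small for most photos
client_body_buffer_size 128k;  # bodies larger than this spool to a temp file
```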
From a programming end I already know I'll queue any post-processing and inform the reader when it's done so that process can be handled by my local server or cloud instances etc. Thanks for any tips! From ne at vbart.ru Fri Aug 31 18:01:09 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Fri, 31 Aug 2012 22:01:09 +0400 Subject: Photo uploads and scalability In-Reply-To: <5040F700.2070502@digitalhit.com> References: <5040F700.2070502@digitalhit.com> Message-ID: <201208312201.09257.ne@vbart.ru> On Friday 31 August 2012 21:40:16 Ian Evans wrote: [...] > From a programming end I already know I'll queue any post-processing > and inform the reader when it's done so that process can be handled by > my local server or cloud instances etc. > TIP: You can avoid passing files over an upstream connection. Example: location = /upload { proxy_pass http://backend; client_body_in_file_only clean; proxy_set_body $request_body_file; } Your application should "mv" uploaded file somewhere, then register a task for further background processing, and return a response. Reference: http://nginx.org/r/client_body_in_file_only wbr, Valentin V. Bartenev From agentzh at gmail.com Fri Aug 31 18:15:12 2012 From: agentzh at gmail.com (agentzh) Date: Fri, 31 Aug 2012 11:15:12 -0700 Subject: Input Headers. - headers_more_module. In-Reply-To: <5040E278.40000@styleflare.com> References: <503F9EF9.1020002@styleflare.com> <503FC949.5040801@styleflare.com> <5040E278.40000@styleflare.com> Message-ID: Hello! On Fri, Aug 31, 2012 at 9:12 AM, David | StyleFlare wrote: > Strangely > > I still dont see the header variable set. > > I installed nginx-lua and I added the snippet you sent me to the nginx > config. > Sorry, ngx.location.capture disables nginx variable sharing between subrequests and their parent by default (for safety reasons). 
You need to explicitly allow that for your variable: ngx.location.capture("/auth", { share_all_vars = true }) See the official documentation of ngx_lua for more details: http://wiki.nginx.org/HttpLuaModule#ngx.location.capture But it's highly recommended to use the response body and/or headers (instead of nginx variables) to return data from the subrequest back to its parent, for example: location / { access_by_lua ' local res = ngx.location.capture("/auth") if res.status == 200 and res.body then ngx.req.set_header("X-Server-ID", res.body) end '; uwsgi_pass ...; } location = /auth { internal; postgres_query "select server_id from ..."; postgres_pass backend; postgres_output value 0 0; } That is, we use the postgres_output directive instead of postgres_set in location /auth here so that the server ID data will be returned as the subrequest's response body. Best regards, -agentzh From mdounin at mdounin.ru Fri Aug 31 18:19:28 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 31 Aug 2012 22:19:28 +0400 Subject: how to keep "hot" files in memory? In-Reply-To: References: Message-ID: <20120831181928.GY40452@mdounin.ru> Hello! On Wed, Aug 29, 2012 at 06:39:51PM +0800, Roast wrote: > Dear all. > > We use nginx to server lots of static content with high traffic loads. And > the server is getting slower, because slower SATA drive. > > So I think how to keep "hot" files in memory without other software,just > like varnish or squid? Does someone has worked out modules for nginx? Normally the OS will do the right thing in keeping hot files in memory without any other software. You may want to make sure the OS isn't low on memory though and has some memory to cache files.
Maxim Dounin From farseas at gmail.com Fri Aug 31 18:22:15 2012 From: farseas at gmail.com (Bob Stanton) Date: Fri, 31 Aug 2012 14:22:15 -0400 Subject: Auth request module header passing In-Reply-To: <886a9bf56f95d63e6f82c0b5cac30d56.NginxMailingListEnglish@forum.nginx.org> References: <72bb9527bf3493f6d02530d635ed7afa.NginxMailingListEnglish@forum.nginx.org> <886a9bf56f95d63e6f82c0b5cac30d56.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello mokriy, Are you using the perl script that Maxim included with auth_request or are you doing it some other way? I am working to setup auth_request using my own form and with a CGI program in C but I would be very interested in what you are doing. What does "" represent in your nginx.conf? Thank you both very much. On Thu, Aug 30, 2012 at 6:31 AM, mokriy wrote: > It works, > you have to put header in quotes: > "$http_x_header_from_request" > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,230313,230324#msg-230324 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paigeat at paigeat.info Fri Aug 31 18:32:13 2012 From: paigeat at paigeat.info (Thompson, Paige) Date: Fri, 31 Aug 2012 11:32:13 -0700 Subject: Stud -> Haproxy -> and Nginx; nginx real_ip_header isn't working as expected, can't scale Message-ID: I got through all of that, finally i'm to nginx... I only have one load balancer at the moment, but given the addition of a second or third in which I cannot rely on all of the ip addresses to be expressible any other way than 0.0.0.0/24. set_real_ip_from 10.0.0.0/24; real_ip_header X-Forwarded-For; This simply does not work, however if I put a single load balancers IP address there, it does. 
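One detail worth noting: set_real_ip_from may be specified more than once, so each balancer can be trusted individually (or a whole private subnet at once) without opening the directive to the world. A sketch, where the second address is hypothetical:

```nginx
# One entry per trusted balancer; the directive may be repeated.
set_real_ip_from 10.178.101.53;   # current stud+haproxy box
set_real_ip_from 10.178.101.54;   # hypothetical future balancer
real_ip_header   X-Forwarded-For;
```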
It seems like you guys went out of your way to make sure that people set /something/ rather than nothing with the real_ip_header variable which is good, the bad thing is you're not leaving me many options as far as overriding the behavior of preventing me from allowing anybody in the world to send X-Forwarded-For... .....which doesn't make any sense because thanks to iptables the only machine that could ever send that would be my load balancer or balancers: ACCEPT tcp -- 10.178.101.53 anywhere tcp dpt:http ACCEPT tcp -- 10.178.101.53 anywhere tcp dpt:https I'm begging you guys please. Please don't save me from myself, completely. Please. I have absolutely no need for this behavior, given that stud, my ssl terminator, gets the tcp remote connection ip which it uses for X-Forwarded-For, which in turn is sent to haproxy... and the nginx servers only allow connections from the haproxy server... oh another important thing to mention is that stud runs on the load balancer server(s). Again there could end up being multiple stud+haproxy servers that could talk to the nginx nodes... CIDR can't express random ip addresses..... please fix set_real_ip_from to allow 0.0.0.0/24. Thank you, Paige Adele Thompson http://paigeat.info paigeat at paigeat.info From david at styleflare.com Fri Aug 31 19:28:58 2012 From: david at styleflare.com (David J) Date: Fri, 31 Aug 2012 15:28:58 -0400 Subject: Input Headers. - headers_more_module. In-Reply-To: References: <503F9EF9.1020002@styleflare.com> <503FC949.5040801@styleflare.com> <5040E278.40000@styleflare.com> Message-ID: OK I can use that method. I prefer it. Thank you very much for this solution. I am very excited to learn a little bit of LUA to. On Aug 31, 2012 2:15 PM, "agentzh" wrote: > Hello! > > On Fri, Aug 31, 2012 at 9:12 AM, David | StyleFlare > wrote: > > Strangely > > > > I still dont see the header variable set. > > > > I installed nginx-lua and I added the snippet you sent me to the nginx > > config. 
> > > > Sorry, ngx.location.capture disables nginx variable sharing between > subrequests and their parent by default (for safety reasons). You need > to explicitly allow that for your variable: > > ngx.location.capture("/auth", { share_all_vars = true }) > > See the official documentation of ngx_lua for more details: > > http://wiki.nginx.org/HttpLuaModule#ngx.location.capture > > But it's highly recommended to use the response body and/or headers > (instead of nginx variables) to return data from the subrequest back > to its parent, for example: > > location / { > access_by_lua ' > local res = ngx.location.capture("/auth") > if res.status == 200 and res.body then > ngx.req.set_header("X-Server-ID", res.body) > end > '; > > uwsgi_pass ...; > } > > location = /auth { > internal; > postgres_query "select server_id from ..."; > postgres_pass backend; > postgres_output value 0 0; > } > > That is, we use the postgres_output directive instead of postgres_set > in location /auth here so that the server ID data will be returned as > the subrequest's response body. > > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Aug 31 21:52:40 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 1 Sep 2012 01:52:40 +0400 Subject: Header is not passed to proxy In-Reply-To: <21fd5d4bd8cfebd05e5891a5288d39a7.NginxMailingListEnglish@forum.nginx.org> References: <21fd5d4bd8cfebd05e5891a5288d39a7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120831215240.GA40452@mdounin.ru> Hello! On Wed, Aug 29, 2012 at 12:42:28PM -0400, mokriy wrote: > Hi Maxim, thanks a lot for your moduel. it was extremely useful for me. > > I have the following config: > location /upload { > auth_request /auth; > ... 
> location /auth { > proxy_pass ; > } > > The initial request to /upload has header. I have supposed this header to be > propogated down to . But this does not happen. > > Question: does it mean that i have to use $request_uri instead of header > propogation? I don't really understand the question, but original request headers are passed to auth service by default. If it doesn't work for you - probably you did something wrong. Maxim Dounin From mdounin at mdounin.ru Fri Aug 31 21:54:48 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 1 Sep 2012 01:54:48 +0400 Subject: Hide/Delete a cookie when stored in cache In-Reply-To: <3d38400ed3fc0bcf6ff8d24d3ecfd764.NginxMailingListEnglish@forum.nginx.org> References: <3d38400ed3fc0bcf6ff8d24d3ecfd764.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120831215448.GB40452@mdounin.ru> Hello! On Wed, Aug 29, 2012 at 01:04:13PM -0400, nfn wrote: > Hello, > > I'm caching pages with nginx using the following rules: > > fastcgi_no_cache $cookie_member_id $is_args > fastcgi_cache_bypass $cookie_member_id $is_args > fastcgi_ignore_headers Cache-Control Expires Set-Cookie; > > Now, there is a cookie (session_id) that need to be passed to backed, but it > shouldn't be stored in the cache, since it's the session_id of a guest user > and other guests should not see this. > > Is there a way to store the page in the cache but before that, remove this > cookie? Use fastcgi_hide_header Set-Cookie; See nginx.org/r/fastcgi_hide_header. 
Maxim Dounin From francis at daoine.org Fri Aug 31 22:34:45 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 31 Aug 2012 23:34:45 +0100 Subject: Stud -> Haproxy -> and Nginx; nginx real_ip_header isn't working as expected, can't scale In-Reply-To: References: Message-ID: <20120831223445.GB18253@craic.sysops.org> On Fri, Aug 31, 2012 at 11:32:13AM -0700, Thompson, Paige wrote: Hi there, > I only have one load balancer at the moment, but given the addition of > a second or third in which I cannot rely on all of the ip addresses to > be expressible any other way than 0.0.0.0/24. > > set_real_ip_from 10.0.0.0/24; You've used two different cidr addresses there. It's not clear which one you actually mean. Are you aware that 10.0.0.0/24 means "10.0.0.anything"? If you want "10.anything", that is 10.0.0.0/8. > real_ip_header X-Forwarded-For; > > This simply does not work, however if I put a single load balancers IP > address there, it does. It looks to me like it should work. And a quick test of curl -i -H 'X-Forwarded-For: 10.0.1.99' http://localhost:8080/ shows "10.0.1.99" as the first field in access.log. What does your set_real_ip_from directive say? And what is the IP address of the load balancer that is talking to nginx? I suspect that when you configure it right, nginx will work fine. 
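The /24-versus-/8 distinction can also be checked mechanically; a quick sketch using Python's ipaddress module, with the balancer address taken from the iptables rules earlier in the thread:

```python
import ipaddress

# 10.0.0.0/24 covers only 10.0.0.x, while 10.0.0.0/8 covers all of 10.x.x.x.
lb = ipaddress.ip_address("10.178.101.53")
print(lb in ipaddress.ip_network("10.0.0.0/24"))  # False
print(lb in ipaddress.ip_network("10.0.0.0/8"))   # True
```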
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Aug 31 22:42:41 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 31 Aug 2012 23:42:41 +0100 Subject: Stud -> Haproxy -> and Nginx; nginx real_ip_header isn't working as expected, can't scale In-Reply-To: <20120831223445.GB18253@craic.sysops.org> References: <20120831223445.GB18253@craic.sysops.org> Message-ID: <20120831224241.GC18253@craic.sysops.org> On Fri, Aug 31, 2012 at 11:34:45PM +0100, Francis Daly wrote: > On Fri, Aug 31, 2012 at 11:32:13AM -0700, Thompson, Paige wrote: Hi there, > And a quick test of > > curl -i -H 'X-Forwarded-For: 10.0.1.99' http://localhost:8080/ > > shows "10.0.1.99" as the first field in access.log. ...when my source address was 127.0.0.1 and my important other directive was set_real_ip_from 127.0.0.0/16; in order to include my source address in the set_real_ip_from network f -- Francis Daly francis at daoine.org From nfnlists at gmail.com Fri Aug 31 23:03:01 2012 From: nfnlists at gmail.com (Nuno Neves) Date: Sat, 1 Sep 2012 00:03:01 +0100 Subject: Hide/Delete a cookie when stored in cache In-Reply-To: <20120831215448.GB40452@mdounin.ru> References: <3d38400ed3fc0bcf6ff8d24d3ecfd764.NginxMailingListEnglish@forum.nginx.org> <20120831215448.GB40452@mdounin.ru> Message-ID: Hello, No dia Sexta-feira, 31 de Agosto de 2012, Maxim Douninmdounin at mdounin.ruescreveu: > Hello! > > On Wed, Aug 29, 2012 at 01:04:13PM -0400, nfn wrote: > > > Hello, > > > > I'm caching pages with nginx using the following rules: > > > > fastcgi_no_cache $cookie_member_id $is_args > > fastcgi_cache_bypass $cookie_member_id $is_args > > fastcgi_ignore_headers Cache-Control Expires Set-Cookie; > > > > Now, there is a cookie (session_id) that need to be passed to backed, > but it > > shouldn't be stored in the cache, since it's the session_id of a guest > user > > and other guests should not see this. 
> > > > Is there a way to store the page in the cache but before that, remove > this > > cookie? > > Use > > fastcgi_hide_header Set-Cookie; > > See nginx.org/r/fastcgi_hide_header. > > Maxim Dounin > > I can't use hide_header because it will hide all cookies. I just want to hide/delete the session_id cookie. Any ideas? Thanks, Nuno -------------- next part -------------- An HTML attachment was scrubbed... URL:
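One stock-nginx workaround is to not store any response in which the backend sets a cookie, rather than trying to strip a single cookie out of an otherwise cached response; a sketch (untested against this setup, and with the trade-off that cookie-setting responses are simply never cached):

```nginx
# $upstream_http_set_cookie is empty (false) when the backend set no cookie,
# so only cookie-free responses get stored in the cache.
fastcgi_no_cache $upstream_http_set_cookie;
```

Whether this interacts cleanly with the fastcgi_ignore_headers Set-Cookie line from the original config would need testing.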